fix: remove hardcoded Google backend_url that caused 404 errors
The CLI hardcoded https://generativelanguage.googleapis.com/v1 as the
backend_url for the Google provider. When forwarded as base_url to
ChatGoogleGenerativeAI, the google-genai SDK constructs incorrect request
paths, resulting in 404 Not Found for all Gemini models.

Fix by setting the Google provider's backend_url to None so the SDK uses
its default endpoint. GoogleClient.get_llm() still forwards base_url when
explicitly provided, preserving proxy/custom endpoint support.

Reproducer:

    ChatGoogleGenerativeAI(
        model="gemini-2.5-flash",
        base_url="https://generativelanguage.googleapis.com/v1",
    ).invoke("Hello")
    # → ChatGoogleGenerativeAIError: 404 Not Found
parent 00815a5ade
commit fe47dd0983
@@ -192,7 +192,7 @@ def select_llm_provider() -> tuple[str, str]:
     # Define OpenAI api options with their corresponding endpoints
     BASE_URLS = [
         ("OpenAI", "https://api.openai.com/v1"),
-        ("Google", "https://generativelanguage.googleapis.com/v1"),
+        ("Google", None),  # google-genai SDK manages its own endpoint internally
         ("Anthropic", "https://api.anthropic.com/"),
         ("xAI", "https://api.x.ai/v1"),
         ("Openrouter", "https://openrouter.ai/api/v1"),
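With the Google entry's URL now None, any code consuming BASE_URLS must omit the base_url argument rather than pass None through. A minimal sketch of that consumption pattern (build_client_kwargs is a hypothetical helper, not part of this diff):

```python
# Mirrors the BASE_URLS list from the hunk above (abbreviated).
BASE_URLS = [
    ("OpenAI", "https://api.openai.com/v1"),
    ("Google", None),  # google-genai SDK manages its own endpoint internally
    ("Anthropic", "https://api.anthropic.com/"),
]


def build_client_kwargs(provider: str, backend_url, model: str) -> dict:
    """Include base_url only when a real override URL was configured."""
    kwargs = {"model": model}
    if backend_url:
        kwargs["base_url"] = backend_url
    return kwargs


print(build_client_kwargs("Google", None, "gemini-2.5-flash"))
# {'model': 'gemini-2.5-flash'}
```

Dropping the key entirely, instead of passing base_url=None, lets the google-genai SDK fall back to its own default endpoint.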
@@ -28,10 +28,8 @@ class GoogleClient(BaseLLMClient):
         self.warn_if_unknown_model()
         llm_kwargs = {"model": self.model}

-        # base_url is intentionally NOT passed to ChatGoogleGenerativeAI.
-        # The google-genai SDK manages its own endpoint and API versioning internally.
-        # Passing a base_url (e.g. https://generativelanguage.googleapis.com/v1)
-        # causes 404s because the SDK appends its own paths onto the override URL.
+        if self.base_url:
+            llm_kwargs["base_url"] = self.base_url

         for key in ("timeout", "max_retries", "callbacks", "http_client", "http_async_client"):
             if key in self.kwargs:
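The kwargs assembly in the second hunk can be exercised in isolation. A self-contained sketch, with GoogleClient reduced to a plain class (attribute names taken from the diff; the class name and build_llm_kwargs method are stand-ins, not the real API):

```python
class GoogleClientSketch:
    """Reduced stand-in for GoogleClient.get_llm()'s kwargs assembly."""

    FORWARDED = ("timeout", "max_retries", "callbacks", "http_client", "http_async_client")

    def __init__(self, model, base_url=None, **kwargs):
        self.model = model
        self.base_url = base_url
        self.kwargs = kwargs

    def build_llm_kwargs(self):
        llm_kwargs = {"model": self.model}
        # Forward base_url only when explicitly provided (proxy/custom endpoint).
        if self.base_url:
            llm_kwargs["base_url"] = self.base_url
        # Copy only the recognized optional settings; anything else is dropped.
        for key in self.FORWARDED:
            if key in self.kwargs:
                llm_kwargs[key] = self.kwargs[key]
        return llm_kwargs


client = GoogleClientSketch("gemini-2.5-flash", timeout=30, unknown="ignored")
print(client.build_llm_kwargs())
# {'model': 'gemini-2.5-flash', 'timeout': 30}
```

Because base_url defaults to None, the default Google path never reaches ChatGoogleGenerativeAI, while an explicitly configured proxy URL still does.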