The CLI hardcoded https://generativelanguage.googleapis.com/v1 as the
backend_url for the Google provider. When forwarded as base_url to
ChatGoogleGenerativeAI, this breaks requests: the google-genai SDK manages
its own endpoint and API versioning internally, so an external base_url
makes it construct incorrect request paths, resulting in 404 Not Found for
all Gemini models, including stable ones like gemini-2.5-flash.

Fix by setting the Google provider's backend_url to None so the SDK uses
its default endpoint logic. GoogleClient.get_llm() still forwards base_url
when explicitly provided, preserving proxy/custom endpoint support.
Reproducer:

    from langchain_google_genai import ChatGoogleGenerativeAI

    ChatGoogleGenerativeAI(
        model="gemini-2.5-flash",
        base_url="https://generativelanguage.googleapis.com/v1",
    ).invoke("Hello")
    # → ChatGoogleGenerativeAIError: 404 Not Found
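The conditional forwarding in get_llm() can be sketched as follows (a minimal sketch with a hypothetical helper name; only the base_url parameter and the SDK's default-endpoint behavior come from the message above):

```python
def build_llm_kwargs(model, base_url=None, **kwargs):
    """Sketch of the fix: forward base_url only when explicitly given,
    so the google-genai SDK's default endpoint logic applies otherwise."""
    if base_url is not None:
        kwargs["base_url"] = base_url  # preserve proxy/custom endpoint support
    kwargs["model"] = model
    return kwargs
```

With backend_url set to None, no base_url key reaches ChatGoogleGenerativeAI, and the SDK builds its own request paths.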
Apply review suggestions: use concise `or` pattern for API key
resolution, consolidate tests into parameterized subTest, move
import to module level per PEP 8.
GoogleClient now accepts the unified `api_key` parameter used by
OpenAI and Anthropic clients, mapping it to the provider-specific
`google_api_key` that ChatGoogleGenerativeAI expects. Legacy
`google_api_key` still works for backward compatibility.
Resolves TODO.md item #2 (inconsistent parameter handling).
- Point requirements.txt to pyproject.toml as single source of truth
- Resolve welcome.txt path relative to module for CLI portability
- Include cli/static files in package build
- Extract shared normalize_content for OpenAI Responses API and
Gemini 3 list-format responses into base_client.py
- Update README install and CLI usage instructions
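The shared helper extracted above might be sketched like this (a guess at its shape; the actual normalize_content in base_client.py may differ, and the {"text": ...} part structure is an assumption):

```python
def normalize_content(content):
    """Flatten an LLM response's content to a plain string.

    Sketch only: handles the plain-string form and the list-of-parts
    form used by the OpenAI Responses API and Gemini 3 list-format
    replies, where each part may be a raw string or a text dict.
    """
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        parts = []
        for part in content:
            if isinstance(part, str):
                parts.append(part)
            elif isinstance(part, dict) and "text" in part:
                parts.append(part["text"])
        return "".join(parts)
    return str(content)
```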
- Add http_client and http_async_client parameters to all LLM clients
- OpenAIClient, GoogleClient, AnthropicClient now support custom httpx clients
- Fixes SSL certificate verification errors in Windows Conda environments
- Users can now pass a custom httpx.Client with verify=False or custom certs

Fixes #369
- Add StatsCallbackHandler for tracking LLM calls, tool calls, and tokens
- Integrate callbacks into TradingAgentsGraph and all LLM clients
- Dynamic agent/report counts based on selected analysts
- Fix report completion counting (tied to agent completion)
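The handler's counting could be sketched roughly as follows (a minimal stand-in, not the actual StatsCallbackHandler: the hook names mirror LangChain's BaseCallbackHandler convention, and the token-usage lookup path is an assumption):

```python
class StatsCallbackHandler:
    """Sketch: count LLM calls, tool calls, and tokens via
    LangChain-style callback hooks."""

    def __init__(self):
        self.llm_calls = 0
        self.tool_calls = 0
        self.total_tokens = 0

    def on_llm_end(self, response, **kwargs):
        self.llm_calls += 1
        # Where token usage lives varies by provider; this path is a guess.
        llm_output = getattr(response, "llm_output", None) or {}
        usage = llm_output.get("token_usage", {})
        self.total_tokens += usage.get("total_tokens", 0)

    def on_tool_end(self, output, **kwargs):
        self.tool_calls += 1
```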