The CLI hardcoded https://generativelanguage.googleapis.com/v1 as the
backend_url for the Google provider. When forwarded as base_url to
ChatGoogleGenerativeAI, the google-genai SDK constructs incorrect request
paths, resulting in 404 Not Found for all Gemini models.
Fix by setting the Google provider's backend_url to None so the SDK uses
its default endpoint. GoogleClient.get_llm() still forwards base_url when
explicitly provided, preserving proxy/custom endpoint support.
Reproducer:

    ChatGoogleGenerativeAI(
        model="gemini-2.5-flash",
        base_url="https://generativelanguage.googleapis.com/v1",
    ).invoke("Hello")
    # → ChatGoogleGenerativeAIError: 404 Not Found
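The fix described above can be sketched as follows. This is an illustrative mock of the wrapper, not the project's actual `GoogleClient` code: the class shape and the `get_llm` signature are assumptions based on the description, and the returned kwargs stand in for what would be passed to `ChatGoogleGenerativeAI`.

```python
# Illustrative sketch of the fix: backend_url defaults to None so the
# google-genai SDK falls back to its own endpoint, while an explicitly
# configured base_url (proxy / custom endpoint) is still forwarded.
class GoogleClient:
    def __init__(self, backend_url=None):
        # None (the default) lets the SDK choose its default endpoint.
        self.backend_url = backend_url

    def get_llm(self, model):
        kwargs = {"model": model}
        # Forward base_url only when explicitly provided, preserving
        # proxy/custom endpoint support without breaking the default path.
        if self.backend_url is not None:
            kwargs["base_url"] = self.backend_url
        return kwargs  # would be splatted into ChatGoogleGenerativeAI(**kwargs)
```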
Add an effort parameter (high/medium/low) for Claude 4.5+ and 4.6 models,
consistent with OpenAI's reasoning_effort and Google's thinking_level.
Also add content normalization for Anthropic responses.
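The content normalization mentioned above could look like the sketch below. Anthropic responses may carry content either as a plain string or as a list of typed blocks; the helper name and exact block handling are assumptions, not the project's actual implementation.

```python
# Illustrative sketch: flatten Anthropic-style response content to one string.
# Anthropic content may be a str or a list of blocks such as
# {"type": "text", "text": "..."}; non-text blocks are skipped.
def normalize_content(content):
    if isinstance(content, str):
        return content
    parts = []
    for block in content:
        if isinstance(block, dict) and block.get("type") == "text":
            parts.append(block.get("text", ""))
        elif isinstance(block, str):
            parts.append(block)
    return "".join(parts)
```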
- OpenAI: add GPT-5.4, GPT-5.4 Pro; remove o-series and legacy GPT-4o
- Anthropic: add Claude Opus 4.6, Sonnet 4.6; remove legacy 4.1/4.0/3.x
- Google: add Gemini 3.1 Pro, 3.1 Flash Lite; remove deprecated
gemini-3-pro-preview and Gemini 2.0 series
- xAI: clean up model list to match current API
- Simplify UnifiedChatOpenAI GPT-5 temperature handling
- Add missing tradingagents/__init__.py (fixes broken package builds on pip install)
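The simplified GPT-5 temperature handling from the list above can be sketched as follows. This is a hypothetical helper, not the real `UnifiedChatOpenAI` code; it assumes GPT-5-family models only accept the default temperature, so a custom value is dropped for them.

```python
# Illustrative sketch: only forward temperature for non-GPT-5 models,
# since GPT-5-family models reject non-default temperature values.
def build_model_kwargs(model, temperature=None):
    kwargs = {"model": model}
    # Assumption: any model id starting with "gpt-5" is temperature-locked.
    if temperature is not None and not model.startswith("gpt-5"):
        kwargs["temperature"] = temperature
    return kwargs
```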
- Add support for running the CLI and the Ollama server via Docker
- Introduce tests for the local embeddings model and the standalone Docker setup
- Enable conditional Ollama server launch via LLM_PROVIDER
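The conditional Ollama launch could be wired into a Docker entrypoint along these lines. The LLM_PROVIDER variable comes from the changelog above; the entrypoint structure and the commented-out serve command are illustrative, not the project's actual script.

```shell
#!/bin/sh
# Illustrative entrypoint sketch: start the Ollama server only when the
# configured provider is local (LLM_PROVIDER=ollama); skip it otherwise.
should_start_ollama() {
  [ "${LLM_PROVIDER:-}" = "ollama" ]
}

if should_start_ollama; then
  echo "LLM_PROVIDER=ollama: starting Ollama server"
  # ollama serve &   # real entrypoint would also wait for readiness
else
  echo "LLM_PROVIDER=${LLM_PROVIDER:-unset}: skipping Ollama server"
fi
```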