TradingAgents/tradingagents/llm_clients
jyek 93a6894e3d feat: add MiniMax provider support (M2.7, M2.5, highspeed variants)
Adds MiniMax as a natively supported LLM provider via its OpenAI-compatible
API. Includes retry handling for null-choices responses caused by MiniMax's
content moderation filter.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-16 11:36:25 +08:00
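The commit message above mentions retry handling for responses where MiniMax's content moderation filter returns a null `choices` field. A minimal sketch of such a wrapper for an OpenAI-compatible client follows; the function name, retry count, and backoff policy are illustrative assumptions, not the repository's actual implementation:

```python
import time


def chat_with_retry(client, model, messages, max_retries=3, backoff=1.0):
    """Call an OpenAI-compatible chat endpoint, retrying when the response's
    `choices` field comes back null or empty (as a provider-side content
    moderation filter can cause). Raises after max_retries failed attempts."""
    for attempt in range(max_retries):
        response = client.chat.completions.create(model=model, messages=messages)
        if response.choices:  # normal path: at least one completion returned
            return response.choices[0].message.content
        # null/empty choices: back off exponentially, then retry
        time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"{model}: empty 'choices' after {max_retries} attempts")
```

The wrapper treats an empty `choices` list as transient rather than fatal, which matches the retry approach the commit describes; a production version would likely also distinguish moderation rejections from genuine API errors.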
TODO.md fix: pass base_url to Google and Anthropic clients for proxy support (#427) 2026-03-29 17:59:52 +00:00
__init__.py feat: add multi-provider LLM support with factory pattern 2026-02-03 22:27:20 +00:00
anthropic_client.py Merge pull request #464 from CadeYu/sync-validator-models 2026-03-29 11:07:51 -07:00
azure_client.py feat: add DeepSeek, Qwen, GLM, and Azure OpenAI provider support 2026-04-13 07:12:07 +00:00
base_client.py Merge pull request #464 from CadeYu/sync-validator-models 2026-03-29 11:07:51 -07:00
factory.py feat: add MiniMax provider support (M2.7, M2.5, highspeed variants) 2026-04-16 11:36:25 +08:00
google_client.py Merge pull request #464 from CadeYu/sync-validator-models 2026-03-29 11:07:51 -07:00
model_catalog.py feat: add MiniMax provider support (M2.7, M2.5, highspeed variants) 2026-04-16 11:36:25 +08:00
openai_client.py feat: add MiniMax provider support (M2.7, M2.5, highspeed variants) 2026-04-16 11:36:25 +08:00
validators.py sync model validation with cli catalog 2026-03-25 21:23:02 +08:00