Add MiniMax (MiniMax-M2.5 and MiniMax-M2.5-highspeed) as a supported
LLM provider. MiniMax offers an OpenAI-compatible API with a 204K-token
context window.
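Routing MiniMax through the existing OpenAI-compatible client can be sketched roughly as follows. The env-var name and model names mirror this change set, but the base URL is an assumption to verify against MiniMax's API docs:

```python
import os

# Assumed MiniMax OpenAI-compatible endpoint; verify against MiniMax docs.
MINIMAX_BASE_URL = "https://api.minimax.io/v1"
# Model names as added in this change set.
MINIMAX_MODELS = {"MiniMax-M2.5", "MiniMax-M2.5-highspeed"}

def minimax_client_config(model: str) -> dict:
    """Return kwargs for an OpenAI-compatible client pointed at MiniMax."""
    if model not in MINIMAX_MODELS:
        raise ValueError(f"Unknown MiniMax model: {model!r}")
    api_key = os.environ.get("MINIMAX_API_KEY")
    if not api_key:
        raise RuntimeError("MINIMAX_API_KEY is not set (see .env.example)")
    return {"base_url": MINIMAX_BASE_URL, "api_key": api_key, "model": model}
```

The factory can then pass these kwargs to the OpenAI client constructor unchanged, since MiniMax speaks the same wire protocol.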
Changes:
- Add MiniMax provider routing in factory (via OpenAI-compatible client)
- Add MiniMax API endpoint and key handling in OpenAIClient
- Add MiniMax model validation in validators
- Add MiniMax models to CLI quick/deep thinking selection
- Add MiniMax to provider selection in CLI
- Update .env.example with MINIMAX_API_KEY
- Update README with MiniMax documentation
- Add save prompt after analysis with organized subfolder structure
- Fix report truncation by using sequential panels instead of Columns
- Add optional full report display prompt
- Add update_analyst_statuses() for unified status logic (pending/in_progress/completed)
- Normalize analyst selection to predefined ANALYST_ORDER for consistent execution
- Add message deduplication to prevent duplicates from stream_mode=values
- Restructure streaming loop so state handlers run on every chunk
- Add StatsCallbackHandler for tracking LLM calls, tool calls, and tokens
- Integrate callbacks into TradingAgentsGraph and all LLM clients
- Make agent/report counts dynamic based on the selected analysts
- Fix report completion counting (tied to agent completion)
- Add support for running the CLI and Ollama server via Docker
- Add tests for the local embeddings model and standalone Docker setup
- Enable conditional Ollama server launch via LLM_PROVIDER
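The message deduplication above can be sketched as follows. With stream_mode="values", LangGraph emits the full accumulated message list in every chunk, so a seen-set keyed by message id keeps each message from being rendered twice (the dict-based message shape and the role/content fallback key are assumptions for illustration):

```python
def dedupe_messages(chunks):
    """Yield each message exactly once across stream_mode="values" chunks.

    Each chunk is assumed to be a dict whose "messages" entry holds the
    full message history so far, as LangGraph emits with stream_mode="values".
    """
    seen = set()
    for chunk in chunks:
        for msg in chunk.get("messages", []):
            # Prefer the stable message id; fall back to (role, content).
            key = msg.get("id") or (msg.get("role"), msg.get("content"))
            if key in seen:
                continue
            seen.add(key)
            yield msg
```

Running the state handlers on every chunk (per the restructured streaming loop) is then safe, because repeated messages are filtered before display.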
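A minimal shape for the StatsCallbackHandler might look like this. This is a stdlib-only sketch: the real handler would subclass LangChain's BaseCallbackHandler, and the token-usage payload shape here is an assumption:

```python
class StatsCallbackHandler:
    """Count LLM calls, tool calls, and tokens via callback hooks."""

    def __init__(self):
        self.llm_calls = 0
        self.tool_calls = 0
        self.total_tokens = 0

    def on_llm_start(self, serialized, prompts, **kwargs):
        self.llm_calls += 1

    def on_tool_start(self, serialized, input_str, **kwargs):
        self.tool_calls += 1

    def on_llm_end(self, response, **kwargs):
        # Assumes usage metadata arrives as {"token_usage": {"total_tokens": N}};
        # the real key path depends on the LLM client integration.
        usage = (response or {}).get("token_usage", {})
        self.total_tokens += usage.get("total_tokens", 0)
```

One handler instance can be shared across TradingAgentsGraph and all LLM clients so the CLI can report aggregate counts after a run.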