- Add MiniMax as a new LLM provider via OpenAI-compatible API
- Support MiniMax-M2.7 (default), MiniMax-M2.7-highspeed, and legacy M2.5 models
- Wire MiniMax into factory, validator, CLI model selection, and provider list
- Update README with MiniMax API key docs and provider references
- Add http_client and http_async_client parameters to all LLM clients
- OpenAIClient, GoogleClient, AnthropicClient now support custom httpx clients
- Fixes SSL certificate verification errors in Windows Conda environments
- Users can now pass custom httpx.Client with verify=False or custom certs
Fixes #369
- Add StatsCallbackHandler for tracking LLM calls, tool calls, and tokens
- Integrate callbacks into TradingAgentsGraph and all LLM clients
- Dynamic agent/report counts based on selected analysts
- Fix report completion counting (tied to agent completion)
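The shape of such a stats handler can be sketched as follows; all names here (`StatsCallback`, `on_llm_end`, `on_tool_end`) are illustrative stand-ins for the actual `StatsCallbackHandler`, not its real interface:

```python
from dataclasses import dataclass

# Minimal counter in the spirit of StatsCallbackHandler: each LLM client
# reports token usage after a call, and each tool invocation ticks a counter.
@dataclass
class StatsCallback:
    llm_calls: int = 0
    tool_calls: int = 0
    tokens: int = 0

    def on_llm_end(self, prompt_tokens: int, completion_tokens: int) -> None:
        self.llm_calls += 1
        self.tokens += prompt_tokens + completion_tokens

    def on_tool_end(self) -> None:
        self.tool_calls += 1

stats = StatsCallback()
stats.on_llm_end(120, 45)
stats.on_tool_end()
print(stats)  # StatsCallback(llm_calls=1, tool_calls=1, tokens=165)
```

The graph would hold one such object and pass it to every client, which makes the per-run totals trivial to report once the selected analysts finish.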