- Add MiniMax as a new LLM provider via OpenAI-compatible API
- Support MiniMax-M2.7 (default), MiniMax-M2.7-highspeed, and legacy M2.5 models
- Wire MiniMax into factory, validator, CLI model selection, and provider list
- Update README with MiniMax API key docs and provider references
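Because MiniMax exposes an OpenAI-compatible endpoint, the provider can be a thin wrapper around the OpenAI client. The following is only a minimal sketch; the class name, env var, base URL, and method names are illustrative assumptions, not the project's actual code:

```python
import os
from openai import OpenAI

# Assumed endpoint; check MiniMax's docs for the current OpenAI-compatible base URL.
MINIMAX_BASE_URL = "https://api.minimax.io/v1"

class MiniMaxProvider:
    # Model identifiers mirror the ones named in this change.
    SUPPORTED_MODELS = ("MiniMax-M2.7", "MiniMax-M2.7-highspeed", "MiniMax-M2.5")
    DEFAULT_MODEL = "MiniMax-M2.7"

    def __init__(self, model: str | None = None):
        self.model = model or self.DEFAULT_MODEL
        if self.model not in self.SUPPORTED_MODELS:
            raise ValueError(f"Unsupported MiniMax model: {self.model}")
        self.client = OpenAI(
            api_key=os.environ["MINIMAX_API_KEY"],
            base_url=MINIMAX_BASE_URL,
        )

    def complete(self, prompt: str) -> str:
        # Standard OpenAI-style chat completion against MiniMax's endpoint.
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
```

A provider like this would then be registered in the factory and surfaced through CLI model selection alongside the existing providers.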
- Add .env.example file with API key placeholders
- Update README.md with .env file setup instructions
- Add dotenv loading in main.py for environment variables
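The dotenv loading in main.py can be done with python-dotenv; a minimal sketch, with the surrounding variable names chosen only for illustration:

```python
# main.py — load keys from a local .env file before anything reads os.environ.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory, if present

# Keys listed as placeholders in .env.example (e.g. MINIMAX_API_KEY) become available here.
api_key = os.environ.get("MINIMAX_API_KEY")
```

The .env.example file only lists placeholder key names, so contributors can copy it to .env and fill in real values without committing secrets.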
- Add support for running the CLI and Ollama server via Docker
- Introduce tests for the local embeddings model and the standalone Docker setup
- Enable conditional Ollama server launch via LLM_PROVIDER
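One way to gate the Ollama server on LLM_PROVIDER is to check the variable at startup before spawning the process. This is a minimal Python sketch under that assumption; the actual change may implement the check in a Docker entrypoint or compose file instead, and the function name is hypothetical:

```python
import os
import subprocess

def maybe_start_ollama() -> subprocess.Popen | None:
    """Start a local Ollama server only when LLM_PROVIDER selects it."""
    if os.environ.get("LLM_PROVIDER", "").lower() != "ollama":
        return None  # a remote provider is in use; no local server needed
    # Launch the server in the background; the Docker image must include the ollama binary.
    return subprocess.Popen(["ollama", "serve"])
```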