Models added:
- OpenAI: GPT-5.2, GPT-5.1, GPT-5, GPT-5 Mini, GPT-5 Nano, GPT-4.1
- Anthropic: Claude Opus 4.5/4.1, Claude Sonnet 4.5/4, Claude Haiku 4.5
- Google: Gemini 3 Pro/Flash, Gemini 2.5 Flash/Flash Lite
- xAI: Grok 4, Grok 4.1 Fast (Reasoning/Non-Reasoning)

Configs updated:
- Add unified thinking_level for Gemini (maps to thinking_level for Gemini 3, thinking_budget for Gemini 2.5; handles Pro's lack of "minimal" support)
- Add OpenAI reasoning_effort configuration
- Add NormalizedChatGoogleGenerativeAI for consistent response handling

Fixes:
- Fix Bull/Bear researcher display truncation
- Replace ChromaDB with BM25 for memory retrieval
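The unified thinking_level mapping described above could look roughly like the following sketch. The helper name `resolve_thinking_config` and the `THINKING_BUDGETS` token values are illustrative assumptions, not the repo's actual API; only the level-vs-budget split and the Gemini 3 Pro "minimal" fallback come from the commit.

```python
# Illustrative token budgets for Gemini 2.5 models (assumed values).
THINKING_BUDGETS = {"minimal": 512, "low": 1024, "medium": 8192, "high": 24576}

def resolve_thinking_config(model: str, thinking_level: str) -> dict:
    """Map a unified thinking_level to the per-family Gemini parameter."""
    if model.startswith("gemini-3"):
        # Gemini 3 Pro does not support "minimal"; fall back to "low".
        if "pro" in model and thinking_level == "minimal":
            thinking_level = "low"
        return {"thinking_level": thinking_level}
    # Gemini 2.5 models take a thinking_budget (token count) instead of a level.
    return {"thinking_budget": THINKING_BUDGETS[thinking_level]}
```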
- Fix 'Start'/'End' typo
- Add llama3.1 selection
- Use the 'quick_think_llm' model instead of hard-coding GPT
This reverts commit 78ea029a0b.
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER
This aims to offer alternative OpenAI-compatible APIs, letting people experiment with running the application locally.
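The conditional launch described above can be sketched as a small check on the LLM_PROVIDER environment variable. The variable name comes from the commit; the helper name `should_start_ollama` is a hypothetical illustration, not the repo's actual code.

```python
import os

def should_start_ollama() -> bool:
    # Start the Ollama server only when LLM_PROVIDER selects it;
    # any other provider (e.g. OpenAI) skips the local server.
    return os.getenv("LLM_PROVIDER", "").strip().lower() == "ollama"
```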