- Added LLMFactory for provider-agnostic LLM creation
- Supports OpenAI, Ollama (local/free), Anthropic, Google, Groq, Azure, Together, HuggingFace, OpenRouter
- Updated memory system to be provider-agnostic
- Fixed Ollama integration with tool calling support (llama3.2, llama3.1, mistral-nemo, qwen2.5)
- Added comprehensive documentation and examples
- Updated CLI with new Ollama model selections
- 100% backward compatible - OpenAI remains default
- Verified working with tests
- Fix 'Start'/'End' typo
- Add llama3.1 to model selection
- Use the 'quick_think_llm' model instead of hard-coding GPT
This reverts commit 78ea029a0b.
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER
This aims to offer alternative OpenAI-compatible APIs, letting people experiment with running the application locally.