Add 'llamacpp' as a new provider for running TradingAgents fully
offline with a local llama-server (llama.cpp).
Changes:
- factory.py: register 'llamacpp' provider alongside openai/ollama
- validators.py: accept any model name for llamacpp (like ollama)
- openai_client.py: the llamacpp branch sets base_url from env/config and
uses a placeholder api_key so no auth error is raised (see the sketch
after this list)
- default_config.py: load .env via python-dotenv (optional dep);
LLM_PROVIDER, BACKEND_URL, DEEP_THINK_LLM, QUICK_THINK_LLM are
all overridable via environment variables
- .env.example: document llamacpp setup alongside cloud providers
- .gitignore: ensure .env is ignored and .env.example stays tracked
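
A minimal sketch of the two mechanisms above, in one place. The config key
names, default model names, and factory wiring are assumptions based on the
bullets; the real code is split across factory.py, openai_client.py, and
default_config.py:

    # Sketch only: key names, defaults, and wiring are illustrative.
    import os

    try:
        from dotenv import load_dotenv  # optional dependency (python-dotenv)
        load_dotenv()                   # pick up a local .env if one exists
    except ImportError:
        pass                            # no .env support; env vars still work

    DEFAULT_CONFIG = {
        "llm_provider": os.getenv("LLM_PROVIDER", "openai"),  # OpenAI stays default
        "backend_url": os.getenv("BACKEND_URL", "https://api.openai.com/v1"),
        "deep_think_llm": os.getenv("DEEP_THINK_LLM", "o4-mini"),
        "quick_think_llm": os.getenv("QUICK_THINK_LLM", "gpt-4o-mini"),
    }

    def make_chat_model(config, model_name):
        from langchain_openai import ChatOpenAI
        if config["llm_provider"] == "llamacpp":
            # llama-server speaks the OpenAI API, so the OpenAI client is
            # reused; the placeholder api_key prevents an auth error.
            return ChatOpenAI(
                model=model_name,
                base_url=config["backend_url"],  # e.g. http://localhost:8080/v1
                api_key="not-needed",            # llama-server ignores the key
            )
        return ChatOpenAI(model=model_name)      # default OpenAI path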
Fully backward compatible: OpenAI remains the default when no .env is
present. The same setup also works for LM Studio, vLLM, or any other
OpenAI-compatible local server by pointing BACKEND_URL at it and keeping
LLM_PROVIDER=openai, for example:
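
Illustrative .env values (8080 is llama-server's default port, 1234 is
LM Studio's; the model names are placeholders, which llamacpp accepts
because validators.py no longer restricts them):

    # .env — llama.cpp via llama-server
    LLM_PROVIDER=llamacpp
    BACKEND_URL=http://localhost:8080/v1
    DEEP_THINK_LLM=local-model
    QUICK_THINK_LLM=local-model

    # or: any other OpenAI-compatible server (LM Studio, vLLM, ...)
    # LLM_PROVIDER=openai
    # BACKEND_URL=http://localhost:1234/v1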
Tested with: llama.cpp llama-server + Qwen3.5-35B-A3B-Q3_K_M
- Add http_client and http_async_client parameters to all LLM clients
- OpenAIClient, GoogleClient, AnthropicClient now support custom httpx clients
- Fixes SSL certificate verification errors in Windows Conda environments
- Users can now pass a custom httpx.Client with verify=False or a custom
CA bundle (see the sketch below)
Fixes #369
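
A sketch of the intended usage. The http_client/http_async_client parameter
names come from this change; the import path and the other constructor
arguments are hypothetical:

    import httpx

    # Hypothetical import path; OpenAIClient is named in this change, but
    # its module location is project-specific.
    from tradingagents.llm_clients import OpenAIClient

    # verify=False disables TLS verification (a workaround for broken local
    # cert stores); verify can instead point at a corporate CA bundle.
    sync_client = httpx.Client(verify=False)
    async_client = httpx.AsyncClient(verify=False)

    llm = OpenAIClient(
        model="gpt-4o-mini",              # illustrative model/arguments
        http_client=sync_client,          # parameter added by this change
        http_async_client=async_client,   # parameter added by this change
    )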
- Add StatsCallbackHandler for tracking LLM calls, tool calls, and token
usage (see the sketch after this list)
- Integrate callbacks into TradingAgentsGraph and all LLM clients
- Dynamic agent/report counts based on selected analysts
- Fix report completion counting (now tied to agent completion)
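
A minimal sketch of what such a handler can look like on top of LangChain's
callback interface; the real StatsCallbackHandler's fields and its graph
integration are specific to this change:

    from langchain_core.callbacks import BaseCallbackHandler

    class StatsCallbackSketch(BaseCallbackHandler):
        """Counts LLM calls, tool calls, and reported tokens (sketch only)."""

        def __init__(self):
            self.llm_calls = 0
            self.tool_calls = 0
            self.total_tokens = 0

        def on_llm_start(self, serialized, prompts, **kwargs):
            self.llm_calls += 1

        def on_tool_start(self, serialized, input_str, **kwargs):
            self.tool_calls += 1

        def on_llm_end(self, response, **kwargs):
            # token_usage is only populated when the provider reports it
            usage = (response.llm_output or {}).get("token_usage", {})
            self.total_tokens += usage.get("total_tokens", 0)

LangChain delivers these events to any handler registered via callbacks,
which is how the graph and LLM-client integration above hooks in.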