- openai_client.py: don't use self.base_url for llamacpp (it would
inherit the OpenAI default); read BACKEND_URL, then LLAMACPP_BASE_URL,
directly (see the sketch below)
- default_config.py: remove the redundant LLAMACPP_BASE_URL fallback
from backend_url; keep only the generic BACKEND_URL env var
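A minimal sketch of the intended lookup order, assuming a small helper
in openai_client.py (the function name and the default URL are
illustrative, not the repo's exact code):

```python
import os

# Sketch: resolve the llama.cpp server URL without touching self.base_url,
# so the OpenAI default is never inherited. Names here are assumptions.
def resolve_llamacpp_base_url(config: dict) -> str:
    return (
        os.getenv("BACKEND_URL")           # generic override, checked first
        or os.getenv("LLAMACPP_BASE_URL")  # provider-specific fallback
        or config.get("backend_url", "http://localhost:8080/v1")  # llama-server default
    )
```
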
Add 'llamacpp' as a new provider for running TradingAgents fully
offline with a local llama-server (llama.cpp).
Changes:
- factory.py: register 'llamacpp' provider alongside openai/ollama
- validators.py: accept any model name for llamacpp (like ollama)
- openai_client.py: llamacpp branch sets base_url from env/config and
uses a placeholder api_key so no auth error is raised (sketch after
this list)
- default_config.py: load .env via python-dotenv (optional dependency);
LLM_PROVIDER, BACKEND_URL, DEEP_THINK_LLM, QUICK_THINK_LLM are all
overridable via environment variables (config sketch further below)
- .env.example: document llamacpp setup alongside cloud providers
- .gitignore: ensure .env is ignored, .env.example is tracked
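An illustrative sketch of the llamacpp branch (the factory name and
config keys are assumptions, not the repo's exact code); the
placeholder api_key satisfies the OpenAI SDK's missing-key check, and
llama-server simply ignores it:

```python
import os
from openai import OpenAI

def make_client(provider: str, config: dict) -> OpenAI:
    # Hypothetical factory: 'llamacpp' points the stock OpenAI SDK at a
    # local llama-server; other providers keep their existing behavior.
    if provider == "llamacpp":
        base_url = os.getenv(
            "BACKEND_URL", config.get("backend_url", "http://localhost:8080/v1")
        )
        return OpenAI(base_url=base_url, api_key="sk-no-key-required")
    return OpenAI()
```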
Fully backward compatible: OpenAI remains the default when no .env is
present. The same setup also works for LM Studio, vLLM, or any other
OpenAI-compatible local server via BACKEND_URL + LLM_PROVIDER=openai.
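For example, default_config.py can pick these up roughly like this (the
env-var keys come from this change; the default values shown are
illustrative):

```python
import os

# python-dotenv is an optional dependency: degrade gracefully if absent.
try:
    from dotenv import load_dotenv
    load_dotenv()  # reads a .env file from the working directory, if present
except ImportError:
    pass

DEFAULT_CONFIG = {
    "llm_provider": os.getenv("LLM_PROVIDER", "openai"),
    "backend_url": os.getenv("BACKEND_URL", "https://api.openai.com/v1"),
    "deep_think_llm": os.getenv("DEEP_THINK_LLM", "o4-mini"),
    "quick_think_llm": os.getenv("QUICK_THINK_LLM", "gpt-4o-mini"),
}
```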
Tested with: llama.cpp llama-server + Qwen3.5-35B-A3B-Q3_K_M

- Replace FinnHub with Alpha Vantage API in README documentation
- Implement comprehensive Alpha Vantage modules:
- Stock data (daily OHLCV with date filtering; fetch sketch after this list)
- Technical indicators (SMA, EMA, MACD, RSI, Bollinger Bands, ATR)
- Fundamental data (overview, balance sheet, cash flow, income statement)
- News and sentiment data with insider transactions
- Update news analyst tools to use ticker-based news search
- Integrate Alpha Vantage vendor methods into interface routing
- Maintain backward compatibility with existing vendor system
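As a rough illustration of the stock-data path: the endpoint and JSON
keys below follow Alpha Vantage's documented TIME_SERIES_DAILY format,
but the function name and the ALPHA_VANTAGE_API_KEY variable are
assumptions, not this module's exact API:

```python
import os
import requests

def get_daily_ohlcv(symbol: str, start: str, end: str) -> dict:
    # Fetch the full daily series so date filtering can reach back in time.
    resp = requests.get(
        "https://www.alphavantage.co/query",
        params={
            "function": "TIME_SERIES_DAILY",
            "symbol": symbol,
            "outputsize": "full",
            "apikey": os.environ["ALPHA_VANTAGE_API_KEY"],  # assumed env var
        },
        timeout=30,
    )
    resp.raise_for_status()
    series = resp.json().get("Time Series (Daily)", {})
    # Keys are ISO dates (YYYY-MM-DD), so string comparison orders by date.
    return {day: bar for day, bar in series.items() if start <= day <= end}
```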