- Added Z.AI GLM 4.5 Air (free) and GLM 4.6 to quick-thinking options
- Added Z.AI GLM 4.5 Air (free) and GLM 4.6 to deep-thinking options
- Added TNG DeepSeek R1T2 Chimera (free and paid) for advanced reasoning
- Improved Google Gemini 2.0 Flash Exp description for consistency
These models provide additional OpenRouter options for:
- Cost-effective reasoning with free GLM 4.5 Air
- Advanced versatile performance with GLM 4.6
- Specialized reasoning capabilities with DeepSeek R1T2 Chimera
- Enhanced cli/main.py with structured logging throughout
- Added logging to MessageBuffer class for agent tracking
- Integrated performance monitoring and API call tracking
- Enhanced error handling with full stack traces
- Added contextual logging to all key operations
Features:
- Session start/end logging with statistics
- User selection and configuration logging
- Agent status transition tracking
- Tool call logging with arguments
- Report generation tracking with metadata
- Performance timing for operations
- Automatic log rotation to prevent disk space issues
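The combination of contextual logging and automatic rotation described above can be sketched as follows. This is a minimal illustration, not the actual `cli/main.py` implementation; the logger name, file name, and format string are assumptions.

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

def get_logger(name: str, log_file: str) -> logging.Logger:
    """Structured, contextual logger with size-based rotation to cap disk usage."""
    logger = logging.getLogger(name)
    if logger.handlers:              # already configured; avoid duplicate handlers
        return logger
    logger.setLevel(logging.DEBUG)
    handler = RotatingFileHandler(log_file, maxBytes=5 * 1024 * 1024, backupCount=3)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s | %(name)s | %(levelname)s | %(message)s"))
    logger.addHandler(handler)
    return logger

# Example: logging an agent status transition with context in the message
log_path = os.path.join(tempfile.gettempdir(), "tradingagents_cli.log")
log = get_logger("cli.agents", log_path)
log.info("agent=%s status=%s -> %s", "Market Analyst", "pending", "running")
```

Calling `get_logger` with the same name returns the already-configured logger, so modules can share one rotating log file safely.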
Documentation:
- Created comprehensive CLI logging integration guide
- Added logging quick reference card
- Included integration summary with examples
Testing:
- Added test_cli_logging.py for validation
- All tests passing with proper log file generation
Benefits:
- Better debugging with structured, contextual logs
- Performance monitoring and bottleneck identification
- Complete audit trail for compliance
- API cost tracking and optimization
- Production-ready with enterprise-grade logging
- Maintains backward compatibility with legacy log files
- Add explicit API key retrieval and validation for OpenRouter, OpenAI, Anthropic, and Google providers
- Pass api_key parameter explicitly to ChatOpenAI, ChatAnthropic, and ChatGoogleGenerativeAI constructors
- Provide helpful error messages with instructions when API keys are missing
- Fixes 401 Authentication Error when using OpenRouter without OPENROUTER_API_KEY set
Previously, ChatOpenAI would default to looking for OPENAI_API_KEY even when using OpenRouter,
causing authentication failures. Now each provider correctly uses its own API key.
This resolves authentication issues across all supported LLM providers.
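The per-provider key lookup described above might look like the following sketch. The mapping and error text are illustrative, not the project's exact code; the commented `ChatOpenAI` call shows where the key would be passed explicitly.

```python
import os

# Provider name -> environment variable holding its API key
PROVIDER_KEY_ENV = {
    "openrouter": "OPENROUTER_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def get_api_key(provider: str) -> str:
    """Look up the provider-specific key and fail with actionable guidance."""
    env_var = PROVIDER_KEY_ENV[provider.lower()]
    api_key = os.getenv(env_var)
    if not api_key:
        raise ValueError(
            f"{env_var} is not set. Export it (e.g. `export {env_var}=...`) "
            f"or add it to your .env file before using the {provider} provider."
        )
    return api_key

# The key is then passed explicitly rather than relying on implicit defaults, e.g.:
# llm = ChatOpenAI(model=..., base_url=backend_url, api_key=get_api_key("openrouter"))
```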
- Now checks for .env file existence and location
- Loads environment variables from .env file automatically
- Provides a fallback .env parser if python-dotenv is not installed
- Shows which API keys are configured from .env
- Gives specific guidance based on what's found/missing
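A fallback .env parser of the kind described above can be written in a few lines. This is a sketch under the stated assumptions (KEY=VALUE lines, `#` comments, optional quotes); the real fallback may differ.

```python
import os

def load_dotenv_fallback(path: str = ".env") -> dict:
    """Minimal .env parser for when python-dotenv is not installed.

    Supports KEY=VALUE lines, skips blanks and comments, strips quotes.
    Existing environment variables take precedence over file values.
    """
    loaded = {}
    if not os.path.exists(path):
        return loaded
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip().strip("'\"")
            os.environ.setdefault(key, value)   # real env vars win
            loaded[key] = value
    return loaded
```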
- Checks which API keys are configured
- Shows status for OpenAI, OpenRouter, Anthropic, Google
- Provides setup instructions for common scenarios
- Helps users debug authentication errors
The _configure_embeddings method incorrectly tried to initialize
graph components (conditional_logic, graph_setup, etc.), which caused
an AttributeError because tool_nodes had not been created yet.
This fix:
- Moves component initialization back to __init__ method
- Keeps only embedding configuration logic in _configure_embeddings
- Maintains correct initialization order
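The corrected ordering can be sketched as below. Attribute names (`tool_nodes`, `conditional_logic`, `graph_setup`) come from the fix description; the method bodies are stubs, not the real implementation.

```python
class TradingAgentsGraph:
    """Sketch of the corrected initialization order."""

    def __init__(self, config: dict):
        self.config = config
        self._configure_embeddings()                  # embedding settings only
        self.tool_nodes = self._create_tool_nodes()   # must exist before graph components
        # Graph components initialized last, back in __init__ (not in
        # _configure_embeddings, which caused the AttributeError)
        self.conditional_logic = {"uses": self.tool_nodes}
        self.graph_setup = {"uses": self.tool_nodes}

    def _configure_embeddings(self):
        # Touches no graph components -- only embedding configuration
        self.embedding_provider = self.config.get("embedding_provider", "openai")

    def _create_tool_nodes(self) -> dict:
        return {"market": "stub_tool_node"}   # stand-in for the real tool nodes
```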
This commit implements a comprehensive solution for separating embedding
and chat model configurations, enabling flexible provider combinations
and graceful handling of embedding failures.
## Problem Statement
Previously, the TradingAgents memory system used the same backend_url for
both chat models and embeddings. This caused critical failures when:
- Using OpenRouter for chat (doesn't support OpenAI embedding endpoints)
- Using Anthropic/Google for chat (don't provide embeddings)
- The embedding endpoint returned HTML error pages instead of JSON
- Users wanted to mix providers (e.g., OpenRouter chat + OpenAI embeddings)
Error example:
```
AttributeError: 'str' object has no attribute 'data'
# Caused by: OpenRouter returned an HTML page instead of embedding JSON
```
## Solution
Implemented three key features:
1. **Separate Embedding Client Configuration**
- New config parameters independent of chat LLM settings
- embedding_provider: "openai", "ollama", or "none"
- embedding_backend_url: Separate API endpoint
- embedding_model: Specific model to use
- enable_memory: Boolean flag to enable/disable memory
2. **Multiple Provider Support**
- OpenAI: Production-grade embeddings (recommended)
- Ollama: Local embeddings for offline/development
- None: Disable memory system entirely
3. **Graceful Fallback**
- System continues when embeddings fail
- Comprehensive error logging
- Memory operations return empty results instead of crashing
- Agents function without historical context when memory disabled
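The graceful-fallback behaviour can be sketched as follows. Class and method names are illustrative (the real code lives in `tradingagents/agents/utils/memory.py`); the vector-store lookup is stubbed out.

```python
import logging

logger = logging.getLogger("tradingagents.memory")

class FinancialSituationMemory:
    """Illustrative sketch of memory with graceful degradation."""

    def __init__(self, embed_fn=None, enable_memory=True):
        self.embed_fn = embed_fn
        self.enabled = enable_memory and embed_fn is not None

    def get_memories(self, situation: str, n_matches: int = 2) -> list:
        if not self.enabled:
            return []                        # memory disabled: agents run without history
        try:
            vector = self.embed_fn(situation)   # may raise if the endpoint returns HTML
            return self._similarity_search(vector, n_matches)
        except Exception as exc:
            logger.error("Embedding lookup failed; returning no memories: %s", exc)
            return []                        # graceful fallback instead of a crash

    def _similarity_search(self, vector, n_matches) -> list:
        return []   # stub: the real version queries the vector store
```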
## Changes
### Core Framework
- tradingagents/default_config.py: Added 4 new embedding config params
- tradingagents/agents/utils/memory.py: Complete refactor with error handling
- tradingagents/graph/trading_graph.py: Separated embedding initialization
### CLI/User Interface
- cli/utils.py: Added select_embedding_provider() function
- cli/main.py: Added Step 7 for embedding provider selection
### Documentation (New Files)
- docs/EMBEDDING_CONFIGURATION.md: Complete usage guide (381 lines)
- docs/EMBEDDING_MIGRATION.md: Implementation details (374 lines)
- CHANGELOG_EMBEDDING.md: Release notes (225 lines)
- FEATURE_EMBEDDING_README.md: Branch overview (418 lines)
### Testing & Verification
- tests/test_embedding_config.py: Comprehensive test suite
- verify_config.py: Simple config verification script
## Example Usage
```python
# OpenRouter for chat, OpenAI for embeddings
config = {
    "llm_provider": "openrouter",
    "backend_url": "https://openrouter.ai/api/v1",
    "deep_think_llm": "deepseek/deepseek-chat-v3-0324:free",
    "embedding_provider": "openai",
    "embedding_backend_url": "https://api.openai.com/v1",
    "embedding_model": "text-embedding-3-small",
    "enable_memory": True,
}
```
## Backward Compatibility
✅ 100% Backward Compatible - No breaking changes!
Existing configurations work without modification. Smart defaults
applied when embedding settings are omitted.
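The smart-defaults behaviour can be sketched like this. The key names match the feature description above; the default values and helper name are illustrative, not necessarily the shipped defaults.

```python
# Defaults merged into any user config that omits the new embedding keys
EMBEDDING_DEFAULTS = {
    "embedding_provider": "openai",
    "embedding_backend_url": "https://api.openai.com/v1",
    "embedding_model": "text-embedding-3-small",
    "enable_memory": True,
}

def with_embedding_defaults(user_config: dict) -> dict:
    """Legacy configs (no embedding keys) keep working unchanged;
    explicit user values always override the defaults."""
    return {**EMBEDDING_DEFAULTS, **user_config}
```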
## Testing
- All core files pass diagnostics with no errors
- Configuration verification script passes all checks
- Supports scenarios: OpenRouter+OpenAI, All Ollama, Disabled Memory
- Graceful fallback tested for invalid URLs and missing API keys
## Benefits
- Enables using OpenRouter/other providers for chat
- Reduces costs (can use local embeddings or disable memory)
- Improves reliability (graceful degradation on failures)
- Maintains full backward compatibility
- Comprehensive documentation and examples
Fixes: OpenRouter compatibility issues
Closes: Embedding/chat provider coupling
Implements: Graceful fallback for memory operations
- Add .env.example file with API key placeholders
- Update README.md with .env file setup instructions
- Add dotenv loading in main.py for environment variables
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add data vendor configuration examples in README and main.py showing how to configure Alpha Vantage as the primary data provider. Update documentation to reflect the current default behavior of using Alpha Vantage for real-time market data access.
- Replace hardcoded column indices with column name lookup
- Add mapping for all supported indicators to their expected CSV column names
- Handle missing columns gracefully with descriptive error messages
- Strip whitespace from header parsing for reliability
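The name-based column lookup can be sketched as follows. The indicator mapping and function name are illustrative; the real mapping covers all supported indicators.

```python
import csv

# Indicator -> expected CSV column name (illustrative subset of the real mapping)
INDICATOR_COLUMNS = {
    "sma": "SMA",
    "rsi": "RSI",
    "macd": "MACD",
}

def read_indicator(csv_path: str, indicator: str) -> list:
    column = INDICATOR_COLUMNS.get(indicator)
    if column is None:
        raise ValueError(f"Unsupported indicator: {indicator!r}")
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = [h.strip() for h in next(reader)]   # strip whitespace for reliability
        if column not in header:
            raise ValueError(
                f"Column {column!r} not found in {csv_path}; available: {header}"
            )
        idx = header.index(column)                   # name lookup, not a hardcoded index
        return [float(row[idx]) for row in reader if row]
```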
- Replace FinnHub with Alpha Vantage API in README documentation
- Implement comprehensive Alpha Vantage modules:
- Stock data (daily OHLCV with date filtering)
- Technical indicators (SMA, EMA, MACD, RSI, Bollinger Bands, ATR)
- Fundamental data (overview, balance sheet, cashflow, income statement)
- News and sentiment data with insider transactions
- Update news analyst tools to use ticker-based news search
- Integrate Alpha Vantage vendor methods into interface routing
- Maintain backward compatibility with existing vendor system
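The daily-OHLCV fetch with date filtering might look like the sketch below, using Alpha Vantage's public `TIME_SERIES_DAILY` endpoint. The function names are assumptions, not the project's actual vendor methods.

```python
import json
import urllib.parse
import urllib.request

ALPHA_VANTAGE_URL = "https://www.alphavantage.co/query"

def fetch_daily_ohlcv(symbol: str, api_key: str, start: str, end: str) -> dict:
    """Fetch TIME_SERIES_DAILY bars and keep only dates within [start, end]."""
    params = urllib.parse.urlencode({
        "function": "TIME_SERIES_DAILY",
        "symbol": symbol,
        "outputsize": "full",
        "apikey": api_key,
    })
    with urllib.request.urlopen(f"{ALPHA_VANTAGE_URL}?{params}", timeout=30) as resp:
        payload = json.load(resp)
    return filter_by_date(payload.get("Time Series (Daily)", {}), start, end)

def filter_by_date(series: dict, start: str, end: str) -> dict:
    # ISO dates (YYYY-MM-DD) compare correctly as plain strings
    return {day: bars for day, bars in series.items() if start <= day <= end}
```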
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER