feat(llm): add OpenRouter API support with proper headers and API key handling
- Add explicit OPENROUTER_API_KEY environment variable handling
- Add HTTP-Referer and X-Title headers for OpenRouter attribution
- Fix case sensitivity for provider names (ollama now case-insensitive)
- Add embedding fallback to OpenAI when using OpenRouter (since OpenRouter lacks an embedding API)
- Add comprehensive test suite (30 tests) for OpenRouter integration
- Update README.md and PROJECT.md with OpenRouter configuration docs
- Add CHANGELOG.md documenting the changes

Patterns borrowed from ~/.claude/lib/genai_validate.py for multi-provider support.

Closes #1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This commit is contained in:
parent
0bd3741e8a
commit
5443aaa209
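The case-sensitivity fix mentioned in the commit message can be sketched as a normalization step ahead of provider dispatch. This is a minimal illustration only; the function name and provider set are assumptions, not the project's actual code.

```python
SUPPORTED_PROVIDERS = {"openai", "anthropic", "google", "openrouter", "ollama"}

def normalize_provider(name: str) -> str:
    """Normalize a provider name so 'Ollama' and 'OLLAMA' match 'ollama'."""
    normalized = name.strip().lower()
    if normalized not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unsupported LLM provider: {name}")
    return normalized
```

Dispatching on the normalized value makes every comparison downstream case-insensitive in one place.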
@ -0,0 +1,48 @@
# Changelog

All notable changes to TradingAgents will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Added
- OpenRouter API provider support for unified access to multiple LLM models
- Support for the `provider/model-name` format (e.g., `anthropic/claude-sonnet-4.5`)
- API key handling via the OPENROUTER_API_KEY environment variable
- Custom headers for OpenRouter attribution (HTTP-Referer, X-Title)
- Embedding fallback to OpenAI when using OpenRouter (OpenRouter does not provide an embedding API)
- Comprehensive test suite for OpenRouter provider integration ([tests/test_openrouter.py](tests/test_openrouter.py))
- Expanded .env.example with all supported LLM provider API keys
- Detailed LLM Provider Options section in README.md with examples for:
  - OpenAI (default)
  - Anthropic
  - OpenRouter (new)
  - Google Generative AI
  - Ollama (local)
- OpenRouter configuration example in the Python usage section
- Documentation updates in PROJECT.md for OpenRouter support

### Changed
- Updated trading_graph.py LLM provider initialization to handle OpenRouter separately, with proper API key and header management ([tradingagents/graph/trading_graph.py](tradingagents/graph/trading_graph.py), lines 75-105)
- Enhanced memory.py embedding logic to support OpenRouter's embedding fallback behavior ([tradingagents/agents/utils/memory.py](tradingagents/agents/utils/memory.py), lines 6-27)
- main.py now includes an OpenRouter configuration example (commented out) for easy reference

### Fixed
- Improved error messages for a missing OPENROUTER_API_KEY when using the openrouter provider
- Better embedding client initialization for different LLM providers

---

## [1.0.0] - 2025-01-01 (Example - Update with actual release date)

### Added
- Initial multi-agent trading framework release
- Support for multiple LLM providers
- Analyst team (fundamental, sentiment, news, technical)
- Researcher debate mechanism
- Risk management workflow
- CLI interface
- Integration with financial data APIs
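The "proper API key and header management" entry above can be sketched as follows. This is a minimal sketch under assumptions: the helper name and header values are illustrative, not the actual trading_graph.py code; the attribution headers are the HTTP-Referer and X-Title headers that OpenRouter recognizes.

```python
import os

def build_openrouter_client_kwargs(model: str) -> dict:
    """Assemble keyword arguments for an OpenAI-compatible chat client
    pointed at OpenRouter, with attribution headers and an explicit key check."""
    api_key = os.getenv("OPENROUTER_API_KEY")
    if not api_key:
        raise ValueError(
            "OPENROUTER_API_KEY not found in environment; "
            "set it before using llm_provider='openrouter'"
        )
    return {
        "model": model,  # e.g. "anthropic/claude-sonnet-4.5"
        "base_url": "https://openrouter.ai/api/v1",
        "api_key": api_key,
        "default_headers": {
            # Attribution headers for OpenRouter rankings/analytics
            "HTTP-Referer": "https://github.com/TauricResearch/TradingAgents",
            "X-Title": "TradingAgents",
        },
    }
```

The resulting dict could then be splatted into an OpenAI-compatible client, e.g. `ChatOpenAI(**build_openrouter_client_kwargs(model))`.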
@ -0,0 +1,292 @@
# PROJECT.md - TradingAgents

> Multi-Agent LLM Financial Trading Framework
> Last Updated: 2025-12-25

---

## PROJECT VISION

TradingAgents is a multi-agent trading framework that mirrors the dynamics of real-world trading firms. By deploying specialized LLM-powered agents—from fundamental analysts, sentiment experts, and technical analysts to traders and risk management teams—the platform collaboratively evaluates market conditions and informs trading decisions through dynamic agent discussions.

**Research Focus**: This framework is designed for research purposes to explore how multi-agent LLM systems can approach complex financial decision-making.

---

## GOALS

### Primary Goals
- [x] Provide a modular multi-agent framework for financial trading analysis
- [x] Support multiple LLM providers (OpenAI, Anthropic, Google, OpenRouter, Ollama)
- [x] Enable configurable data vendors (yfinance, Alpha Vantage, local)
- [x] Implement specialized analyst agents (fundamental, sentiment, news, technical)
- [x] Support researcher debates (bull vs bear perspectives)
- [x] Include risk management and portfolio approval workflow

### Secondary Goals
- [ ] Expand backtesting capabilities with Tauric TradingDB
- [ ] Add support for additional asset classes
- [ ] Improve caching and performance optimization
- [ ] Enhance CLI experience with more configuration options

---

## SCOPE

### In Scope
- Stock trading analysis and recommendations
- Multi-agent collaboration and debate mechanisms
- Integration with financial data APIs
- CLI and programmatic Python interfaces
- Configuration of LLM models and data sources
- Risk assessment and position management
- Support for multiple LLM providers (OpenAI, Anthropic, Google, OpenRouter, Ollama)

### Out of Scope
- Live trading execution (simulation only)
- Cryptocurrency or forex trading
- Real-time streaming data
- Mobile or web interfaces
- Financial advice (research purposes only)

---

## CONSTRAINTS

<!-- TODO: Define your specific constraints -->

### Performance Constraints
- API rate limits vary by data vendor (Alpha Vantage: 60 req/min with TradingAgents partnership)
- LLM API costs scale with model choice and debate rounds
- Memory usage scales with agent count and data volume

### Technical Constraints
- Requires Python >= 3.10
- Requires an API key for the chosen LLM provider (OpenAI recommended)
- Requires an Alpha Vantage API key for fundamental/news data (free tier available)

### Regulatory Constraints
- Framework is NOT intended as financial, investment, or trading advice
- For research and educational purposes only

---
## ARCHITECTURE

### System Overview
```
┌─────────────────────────────────────────────────────────────────┐
│                      TradingAgents Graph                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌──────────────────┐    ┌──────────────────┐                   │
│  │   Analyst Team   │    │  Researcher Team │                   │
│  ├──────────────────┤    ├──────────────────┤                   │
│  │ • Fundamentals   │───▶│ • Bull Researcher│                   │
│  │ • Sentiment      │    │ • Bear Researcher│                   │
│  │ • News           │    │   (Debates)      │                   │
│  │ • Technical      │    └────────┬─────────┘                   │
│  └──────────────────┘             │                             │
│                                   ▼                             │
│  ┌──────────────────┐    ┌──────────────────┐                   │
│  │   Data Vendors   │    │   Trader Agent   │                   │
│  ├──────────────────┤    └────────┬─────────┘                   │
│  │ • yfinance       │             │                             │
│  │ • Alpha Vantage  │             ▼                             │
│  │ • OpenAI         │    ┌──────────────────┐                   │
│  │ • Google         │    │  Risk Management │                   │
│  │ • Local          │    ├──────────────────┤                   │
│  └──────────────────┘    │ • Aggressive     │                   │
│                          │ • Conservative   │                   │
│                          │ • Neutral        │                   │
│                          └────────┬─────────┘                   │
│                                   │                             │
│                                   ▼                             │
│                          ┌──────────────────┐                   │
│                          │Portfolio Manager │                   │
│                          │ (Final Decision) │                   │
│                          └──────────────────┘                   │
└─────────────────────────────────────────────────────────────────┘
```

### Technology Stack
| Layer | Technology |
|-------|------------|
| Framework | LangGraph, LangChain |
| LLM Providers | OpenAI (o4-mini, gpt-4o), Anthropic, Google GenAI, OpenRouter (unified access), Ollama (local) |
| Data Sources | yfinance, Alpha Vantage API, Reddit (PRAW) |
| Storage | ChromaDB (vector store), Redis (caching) |
| CLI | Rich, Questionary |
| Backtesting | Backtrader |
| Python Version | >= 3.10 (3.13 recommended) |

### Key Dependencies
- `langgraph` - Agent orchestration and state management
- `langchain-openai/anthropic/google-genai` - LLM integrations
- `yfinance` - Stock price and technical data
- `chromadb` - Vector storage for memory
- `rich` - CLI output formatting

---
## FILE ORGANIZATION

```
TradingAgents/
├── tradingagents/           # Main package
│   ├── agents/              # LLM agent implementations
│   │   ├── analysts/        # Analyst agents (fundamental, sentiment, news, technical)
│   │   ├── researchers/     # Bull/bear researcher debate agents
│   │   ├── risk_mgmt/       # Risk management debators
│   │   ├── trader/          # Trader agent
│   │   ├── managers/        # Research and risk managers
│   │   └── utils/           # Agent utilities, tools, states
│   ├── dataflows/           # Data vendor integrations
│   │   ├── alpha_vantage*.py # Alpha Vantage API modules
│   │   ├── y_finance.py     # yfinance integration
│   │   ├── google.py        # Google news integration
│   │   └── local.py         # Local data vendor
│   ├── graph/               # LangGraph workflow
│   │   ├── trading_graph.py # Main graph definition
│   │   ├── propagation.py   # Forward propagation logic
│   │   ├── reflection.py    # Agent reflection
│   │   └── signal_processing.py
│   └── default_config.py    # Default configuration
├── cli/                     # Command-line interface
│   ├── main.py              # CLI entry point
│   ├── models.py            # CLI data models
│   └── utils.py             # CLI utilities
├── main.py                  # Quick start example
├── test.py                  # Basic tests
├── requirements.txt         # Python dependencies
├── pyproject.toml           # Project metadata
└── assets/                  # Documentation images
```

---

## TESTING STRATEGY

### Current State
- Basic test file exists (`test.py`)
- No formal test framework configured

### Recommended Testing
- Unit tests for individual agents
- Integration tests for data vendor APIs
- End-to-end tests for trading graph propagation
- Mock LLM responses for deterministic testing

---
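The "mock LLM responses" recommendation above can be sketched with `unittest.mock` from the standard library. This is an illustration only; the shape of the fake model (an `invoke()` returning an object with `.content`) is an assumption modeled on LangChain-style chat interfaces, not the framework's actual API.

```python
from unittest.mock import Mock

def make_fake_llm(canned_reply: str) -> Mock:
    """Build a stand-in chat model whose invoke() always returns
    an object with a fixed .content, for deterministic agent tests."""
    fake_llm = Mock()
    fake_llm.invoke.return_value = Mock(content=canned_reply)
    return fake_llm
```

In a test, an agent wired to `make_fake_llm("BUY")` will produce the same decision on every run, which makes assertions on downstream graph behavior reproducible.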
## DOCUMENTATION MAP

| Document | Purpose |
|----------|---------|
| README.md | Installation, usage, API reference |
| LICENSE | MIT License |
| PROJECT.md | This file - project overview |
| assets/ | Architecture diagrams, CLI screenshots |

---

## CURRENT SPRINT

<!-- TODO: Define your current sprint goals -->

### Active Work
- [ ] Define sprint goals here

### Backlog
- Expand data vendor options
- Improve caching performance
- Add more comprehensive testing
- Enhance CLI configuration options

---

## CONFIGURATION REFERENCE

### Environment Variables
```bash
# LLM Provider API Keys (choose one based on llm_provider config)
OPENAI_API_KEY=<optional>      # OpenAI API key (required for OpenAI provider or embeddings)
ANTHROPIC_API_KEY=<optional>   # Anthropic API key (required for Anthropic provider)
OPENROUTER_API_KEY=<optional>  # OpenRouter API key (required for OpenRouter provider)
GOOGLE_API_KEY=<optional>      # Google API key (required for Google provider)

# Data Vendor API Keys
ALPHA_VANTAGE_API_KEY=<required> # Alpha Vantage for fundamental/news data

# Application Configuration
TRADINGAGENTS_RESULTS_DIR=./results # Output directory for results
```

### Default Config Options
```python
{
    "llm_provider": "openai",         # Options: openai, anthropic, google, openrouter, ollama
    "deep_think_llm": "o4-mini",      # For complex reasoning
    "quick_think_llm": "gpt-4o-mini", # For fast responses
    "backend_url": "https://api.openai.com/v1",  # API endpoint (varies by provider)
    "max_debate_rounds": 1,
    "max_risk_discuss_rounds": 1,
    "data_vendors": {
        "core_stock_apis": "yfinance",
        "technical_indicators": "yfinance",
        "fundamental_data": "alpha_vantage",
        "news_data": "alpha_vantage",
    },
}
```
### OpenRouter Configuration Example
OpenRouter provides unified access to multiple LLM models. To use OpenRouter:

```python
config = {
    "llm_provider": "openrouter",
    "deep_think_llm": "anthropic/claude-sonnet-4.5",  # provider/model-name format
    "quick_think_llm": "openai/gpt-4o-mini",
    "backend_url": "https://openrouter.ai/api/v1",
}
```

**Requirements:**
- OPENROUTER_API_KEY environment variable must be set
- OPENAI_API_KEY must also be set for embeddings (OpenRouter does not provide embeddings)
- Model names use the format `provider/model-name` (e.g., `anthropic/claude-sonnet-4.5`, `openai/gpt-4o`)
- See the [OpenRouter models list](https://openrouter.ai/docs/models) for available models

---
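The embedding fallback described in the requirements above can be sketched as a small backend selector. This is an illustration under assumptions: the function name, return shape, and the specific embedding model names are hypothetical, not the memory.py implementation.

```python
import os

def select_embedding_backend(llm_provider: str) -> dict:
    """Pick the embedding endpoint for a given LLM provider.
    OpenRouter has no embedding API, so it falls back to OpenAI."""
    provider = llm_provider.lower()
    if provider == "ollama":
        # Local Ollama exposes an OpenAI-compatible endpoint
        return {"base_url": "http://localhost:11434/v1",
                "model": "nomic-embed-text"}
    if provider == "openrouter":
        # Fallback: embeddings still go through OpenAI, so the key must exist
        if not os.getenv("OPENAI_API_KEY"):
            raise ValueError(
                "OPENAI_API_KEY is required for embeddings when using OpenRouter"
            )
    # OpenAI (and the OpenRouter fallback) use OpenAI's embedding endpoint
    return {"base_url": "https://api.openai.com/v1",
            "model": "text-embedding-3-small"}
```

Failing fast here, rather than at the first vector-store write, surfaces the missing-key error with a message that names the actual requirement.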
## DEVELOPMENT NOTES

### Getting Started
```bash
# Clone and setup
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
conda create -n tradingagents python=3.13
conda activate tradingagents
pip install -r requirements.txt

# Configure API keys
export OPENAI_API_KEY=your_key
export ALPHA_VANTAGE_API_KEY=your_key

# Run CLI
python -m cli.main

# Or use programmatically
python main.py
```

### Key Entry Points
- `python -m cli.main` - Interactive CLI
- `python main.py` - Programmatic example
- `TradingAgentsGraph.propagate(ticker, date)` - Core API

---

*Generated by autonomous-dev setup wizard*
README.md (93 lines changed)
@ -114,7 +114,7 @@ pip install -r requirements.txt

### Required APIs

- You will need the OpenAI API for all the agents, and [Alpha Vantage API](https://www.alphavantage.co/support/#api-key) for fundamental and news data (default configuration).
+ You will need an LLM provider API key for all the agents, and [Alpha Vantage API](https://www.alphavantage.co/support/#api-key) for fundamental and news data (default configuration).

```bash
export OPENAI_API_KEY=$YOUR_OPENAI_API_KEY
@ -127,7 +127,68 @@ cp .env.example .env
# Edit .env with your actual API keys
```

- **Note:** We are happy to partner with Alpha Vantage to provide robust API support for TradingAgents. You can get a free AlphaVantage API [here](https://www.alphavantage.co/support/#api-key), TradingAgents-sourced requests also have increased rate limits to 60 requests per minute with no daily limits. Typically the quota is sufficient for performing complex tasks with TradingAgents thanks to Alpha Vantage’s open-source support program. If you prefer to use OpenAI for these data sources instead, you can modify the data vendor settings in `tradingagents/default_config.py`.
+ **Note:** We are happy to partner with Alpha Vantage to provide robust API support for TradingAgents. You can get a free AlphaVantage API [here](https://www.alphavantage.co/support/#api-key), TradingAgents-sourced requests also have increased rate limits to 60 requests per minute with no daily limits. Typically the quota is sufficient for performing complex tasks with TradingAgents thanks to Alpha Vantage's open-source support program. If you prefer to use OpenAI for these data sources instead, you can modify the data vendor settings in `tradingagents/default_config.py`.

#### LLM Provider Options

TradingAgents supports multiple LLM providers. Configure your choice in `main.py`:

**OpenAI** (default):
```python
config["llm_provider"] = "openai"
config["deep_think_llm"] = "o4-mini"
config["quick_think_llm"] = "gpt-4o-mini"
config["backend_url"] = "https://api.openai.com/v1"
# Requires: OPENAI_API_KEY environment variable
```

**Anthropic**:
```python
config["llm_provider"] = "anthropic"
config["deep_think_llm"] = "claude-sonnet-4-20250514"
config["quick_think_llm"] = "claude-sonnet-4-20250514"
config["backend_url"] = "https://api.anthropic.com"
# Requires: ANTHROPIC_API_KEY environment variable
```

**OpenRouter** (unified access to multiple models):
```python
config["llm_provider"] = "openrouter"
config["deep_think_llm"] = "anthropic/claude-sonnet-4.5"
config["quick_think_llm"] = "anthropic/claude-sonnet-4.5"
config["backend_url"] = "https://openrouter.ai/api/v1"
# Requires: OPENROUTER_API_KEY environment variable
```

Set your API key:
```bash
export OPENROUTER_API_KEY=$YOUR_OPENROUTER_API_KEY
```

Model names use the format `provider/model-name` (e.g., `anthropic/claude-sonnet-4.5`, `openai/gpt-4o`). See [OpenRouter models](https://openrouter.ai/docs/models) for available options.
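The `provider/model-name` convention above can be checked with a small validator before a request is ever sent. This is illustrative only; TradingAgents itself may not enforce the format.

```python
def is_openrouter_model_name(name: str) -> bool:
    """Return True if name looks like OpenRouter's provider/model-name format:
    exactly one slash separating two non-empty parts."""
    parts = name.split("/")
    return len(parts) == 2 and all(part.strip() for part in parts)
```

Catching a bare model name like `gpt-4o` up front gives a clearer error than a 404 from the OpenRouter endpoint.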
**Important:** OpenRouter does not provide embeddings. If using OpenRouter for LLM inference, you must also set `OPENAI_API_KEY` for embedding functionality:
```bash
export OPENROUTER_API_KEY=$YOUR_OPENROUTER_API_KEY
export OPENAI_API_KEY=$YOUR_OPENAI_API_KEY  # Used for embeddings only
```

**Google Generative AI**:
```python
config["llm_provider"] = "google"
config["deep_think_llm"] = "gemini-2.0-flash"
config["quick_think_llm"] = "gemini-2.0-flash"
# Requires: GOOGLE_API_KEY environment variable
```

**Ollama** (local inference):
```python
config["llm_provider"] = "ollama"
config["deep_think_llm"] = "mistral"
config["quick_think_llm"] = "mistral"
config["backend_url"] = "http://localhost:11434/v1"
# Requires: Local Ollama instance running
```

### CLI Usage
@ -180,8 +241,8 @@ from tradingagents.default_config import DEFAULT_CONFIG

# Create a custom config
config = DEFAULT_CONFIG.copy()
- config["deep_think_llm"] = "gpt-4.1-nano"  # Use a different model
- config["quick_think_llm"] = "gpt-4.1-nano"  # Use a different model
+ config["deep_think_llm"] = "gpt-4o-mini"  # Use a different model
+ config["quick_think_llm"] = "gpt-4o-mini"  # Use a different model
config["max_debate_rounds"] = 1  # Number of debate rounds

# Configure data vendors (default uses yfinance and Alpha Vantage)
@ -200,6 +261,30 @@ _, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)
```

**Using OpenRouter with different models:**

```python
import os

from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

# Configure for OpenRouter with specified models
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "openrouter"
config["deep_think_llm"] = "anthropic/claude-sonnet-4.5"  # Deep reasoning model
config["quick_think_llm"] = "openai/gpt-4o-mini"  # Fast model
config["backend_url"] = "https://openrouter.ai/api/v1"

# Ensure OPENROUTER_API_KEY is set in the environment.
# For embeddings, also set OPENAI_API_KEY.
if not os.getenv("OPENROUTER_API_KEY"):
    raise ValueError("OPENROUTER_API_KEY not found in environment")

ta = TradingAgentsGraph(debug=True, config=config)
_, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)
```

> The default configuration uses yfinance for stock price and technical data, and Alpha Vantage for fundamental and news data. For production use or if you encounter rate limits, consider upgrading to [Alpha Vantage Premium](https://www.alphavantage.co/premium/) for more stable and reliable data access. For offline experimentation, there's a local data vendor option that uses our **Tauric TradingDB**, a curated dataset for backtesting, though this is still in development. We're currently refining this dataset and plan to release it soon alongside our upcoming projects. Stay tuned!

You can view the full list of configurations in `tradingagents/default_config.py`.
main.py (8 lines changed)
@ -14,6 +14,14 @@ config["quick_think_llm"] = "claude-sonnet-4-20250514"
config["backend_url"] = "https://api.anthropic.com"
config["max_debate_rounds"] = 1  # debate rounds

# Example: OpenRouter configuration (uncomment to use)
# config["llm_provider"] = "openrouter"
# config["deep_think_llm"] = "anthropic/claude-sonnet-4.5"  # or any OpenRouter model
# config["quick_think_llm"] = "anthropic/claude-sonnet-4.5"
# config["backend_url"] = "https://openrouter.ai/api/v1"
# Note: Set OPENROUTER_API_KEY in .env file
# Note: For embeddings, also set OPENAI_API_KEY (OpenRouter doesn't provide embeddings)

# Configure data vendors (default uses yfinance and alpha_vantage)
config["data_vendors"] = {
    "core_stock_apis": "yfinance",  # Options: yfinance, alpha_vantage, local
@ -0,0 +1,6 @@
"""
TradingAgents Test Suite

This package contains unit tests, integration tests, and edge case tests
for the TradingAgents framework.
"""
@ -0,0 +1,768 @@
"""
Test suite for OpenRouter API support in TradingAgents.

This module tests:
1. OpenRouter provider initialization with ChatOpenAI
2. API key handling (OPENROUTER_API_KEY vs OPENAI_API_KEY)
3. Error handling for missing API keys
4. Model name format validation (provider/model-name)
5. Embedding fallback behavior
6. Configuration validation
"""

import os
import pytest
from unittest.mock import Mock, patch, MagicMock
from typing import Dict, Any

# Import modules under test
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.agents.utils.memory import FinancialSituationMemory
from tradingagents.default_config import DEFAULT_CONFIG


# ============================================================================
# Fixtures
# ============================================================================

@pytest.fixture
def openrouter_config():
    """Create a valid OpenRouter configuration."""
    config = DEFAULT_CONFIG.copy()
    config.update({
        "llm_provider": "openrouter",
        "deep_think_llm": "anthropic/claude-sonnet-4",
        "quick_think_llm": "anthropic/claude-haiku-3.5",
        "backend_url": "https://openrouter.ai/api/v1",
    })
    return config
@pytest.fixture
def mock_env_openrouter():
    """Mock environment with OPENROUTER_API_KEY set."""
    with patch.dict(os.environ, {
        "OPENROUTER_API_KEY": "sk-or-test-key-123",
    }, clear=True):
        yield


@pytest.fixture
def mock_env_openai():
    """Mock environment with only OPENAI_API_KEY set."""
    with patch.dict(os.environ, {
        "OPENAI_API_KEY": "sk-test-key-456",
    }, clear=True):
        yield


@pytest.fixture
def mock_env_empty():
    """Mock environment with no API keys."""
    with patch.dict(os.environ, {}, clear=True):
        yield


@pytest.fixture
def mock_langchain_classes():
    """Mock LangChain chat model classes."""
    with patch("tradingagents.graph.trading_graph.ChatOpenAI") as mock_openai, \
         patch("tradingagents.graph.trading_graph.ChatAnthropic") as mock_anthropic, \
         patch("tradingagents.graph.trading_graph.ChatGoogleGenerativeAI") as mock_google:

        # Configure mocks to return Mock instances
        mock_openai.return_value = Mock()
        mock_anthropic.return_value = Mock()
        mock_google.return_value = Mock()

        yield {
            "openai": mock_openai,
            "anthropic": mock_anthropic,
            "google": mock_google,
        }


@pytest.fixture
def mock_memory():
    """Mock FinancialSituationMemory to avoid actual ChromaDB/OpenAI calls."""
    with patch("tradingagents.graph.trading_graph.FinancialSituationMemory") as mock:
        mock.return_value = Mock(spec=FinancialSituationMemory)
        yield mock


@pytest.fixture
def mock_openai_client():
    """Mock OpenAI client for embedding tests."""
    with patch("tradingagents.agents.utils.memory.OpenAI") as mock:
        client_instance = Mock()
        mock.return_value = client_instance

        # Mock embedding response
        embedding_response = Mock()
        embedding_response.data = [Mock(embedding=[0.1] * 1536)]
        client_instance.embeddings.create.return_value = embedding_response

        yield mock


@pytest.fixture
def mock_chromadb():
    """Mock ChromaDB client."""
    with patch("tradingagents.agents.utils.memory.chromadb.Client") as mock:
        client_instance = Mock()
        collection_instance = Mock()
        collection_instance.count.return_value = 0
        client_instance.create_collection.return_value = collection_instance
        mock.return_value = client_instance
        yield mock
# ============================================================================
# Unit Tests: OpenRouter Provider Initialization
# ============================================================================

class TestOpenRouterInitialization:
    """Test OpenRouter provider initializes ChatOpenAI with correct parameters."""

    def test_openrouter_uses_chatopenai(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test that OpenRouter provider uses ChatOpenAI class."""
        # Arrange: OpenRouter config ready

        # Act: Initialize TradingAgentsGraph
        graph = TradingAgentsGraph(config=openrouter_config)

        # Assert: ChatOpenAI was called, not Anthropic or Google
        assert mock_langchain_classes["openai"].called
        assert not mock_langchain_classes["anthropic"].called
        assert not mock_langchain_classes["google"].called

    def test_openrouter_sets_correct_base_url(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test that OpenRouter sets base_url to https://openrouter.ai/api/v1."""
        # Arrange
        expected_url = "https://openrouter.ai/api/v1"

        # Act
        graph = TradingAgentsGraph(config=openrouter_config)

        # Assert: Check both deep and quick thinking LLMs
        calls = mock_langchain_classes["openai"].call_args_list
        assert len(calls) >= 2, "Expected at least 2 ChatOpenAI calls"

        # Check deep thinking LLM
        deep_call_kwargs = calls[0][1]  # Get keyword arguments
        assert deep_call_kwargs["base_url"] == expected_url

        # Check quick thinking LLM
        quick_call_kwargs = calls[1][1]
        assert quick_call_kwargs["base_url"] == expected_url

    def test_openrouter_uses_provider_model_format(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test that OpenRouter accepts provider/model-name format."""
        # Arrange: Model names in provider/model format
        deep_model = "anthropic/claude-sonnet-4"
        quick_model = "anthropic/claude-haiku-3.5"

        # Act
        graph = TradingAgentsGraph(config=openrouter_config)

        # Assert: Model names passed correctly
        calls = mock_langchain_classes["openai"].call_args_list

        deep_call_kwargs = calls[0][1]
        assert deep_call_kwargs["model"] == deep_model

        quick_call_kwargs = calls[1][1]
        assert quick_call_kwargs["model"] == quick_model

    def test_openrouter_alternative_models(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test OpenRouter with different provider/model combinations."""
        # Arrange: Test various model formats
        test_models = [
            "google/gemini-2.0-flash-exp",
            "openai/gpt-4o",
            "meta-llama/llama-3.3-70b-instruct",
        ]

        for model in test_models:
            # Reset mocks
            mock_langchain_classes["openai"].reset_mock()

            # Update config
            config = openrouter_config.copy()
            config["deep_think_llm"] = model

            # Act
            graph = TradingAgentsGraph(config=config)

            # Assert
            call_kwargs = mock_langchain_classes["openai"].call_args_list[0][1]
            assert call_kwargs["model"] == model
# ============================================================================
# Unit Tests: API Key Handling
# ============================================================================

class TestAPIKeyHandling:
    """Test that OpenRouter uses OPENROUTER_API_KEY, not OPENAI_API_KEY."""

    def test_openrouter_requires_openrouter_api_key(
        self,
        openrouter_config,
        mock_env_openrouter,
        mock_langchain_classes,
        mock_memory
    ):
        """Test that OPENROUTER_API_KEY is available when using OpenRouter."""
        # Arrange: OPENROUTER_API_KEY is set

        # Act
        graph = TradingAgentsGraph(config=openrouter_config)

        # Assert: Can access the API key
        assert os.getenv("OPENROUTER_API_KEY") == "sk-or-test-key-123"

    def test_openai_api_key_not_used_for_openrouter(
        self,
        openrouter_config,
        mock_env_openai,
        mock_langchain_classes,
        mock_memory
    ):
        """Test that OPENAI_API_KEY is not used when provider is openrouter."""
        # Arrange: Only OPENAI_API_KEY is set

        # Act
        graph = TradingAgentsGraph(config=openrouter_config)

        # Assert: OPENROUTER_API_KEY should not be set
        assert os.getenv("OPENROUTER_API_KEY") is None
        assert os.getenv("OPENAI_API_KEY") == "sk-test-key-456"

    def test_missing_api_key_environment(
        self,
        openrouter_config,
        mock_env_empty,
        mock_langchain_classes,
        mock_memory
    ):
        """Test environment when no API keys are set."""
        # Arrange: No API keys in environment

        # Act
        graph = TradingAgentsGraph(config=openrouter_config)

        # Assert: No API keys available
        assert os.getenv("OPENROUTER_API_KEY") is None
        assert os.getenv("OPENAI_API_KEY") is None
# ============================================================================
# Unit Tests: Error Handling
# ============================================================================

class TestErrorHandling:
    """Test error handling for missing API keys and invalid configurations."""

    def test_invalid_provider_raises_error(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test that an invalid provider raises ValueError."""
        # Arrange: Invalid provider
        config = openrouter_config.copy()
        config["llm_provider"] = "invalid_provider"

        # Act & Assert
        with pytest.raises(ValueError, match="Unsupported LLM provider"):
            graph = TradingAgentsGraph(config=config)

    def test_empty_backend_url_handled(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test behavior with an empty backend_url."""
        # Arrange
        config = openrouter_config.copy()
        config["backend_url"] = ""

        # Act
        graph = TradingAgentsGraph(config=config)

        # Assert: Empty string passed to ChatOpenAI
        call_kwargs = mock_langchain_classes["openai"].call_args_list[0][1]
        assert call_kwargs["base_url"] == ""

    def test_none_backend_url_handled(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test behavior with backend_url set to None."""
        # Arrange
        config = openrouter_config.copy()
        config["backend_url"] = None

        # Act
        graph = TradingAgentsGraph(config=config)

        # Assert: None passed to ChatOpenAI
        call_kwargs = mock_langchain_classes["openai"].call_args_list[0][1]
        assert call_kwargs["base_url"] is None
# ============================================================================
# Integration Tests: Model Format Validation
# ============================================================================


class TestModelFormatValidation:
    """Test that OpenRouter model names work correctly."""

    def test_anthropic_model_format(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test Anthropic models via OpenRouter (anthropic/*)."""
        # Arrange
        config = openrouter_config.copy()
        config["deep_think_llm"] = "anthropic/claude-opus-4"
        config["quick_think_llm"] = "anthropic/claude-sonnet-3.5"

        # Act
        graph = TradingAgentsGraph(config=config)

        # Assert
        calls = mock_langchain_classes["openai"].call_args_list
        assert calls[0][1]["model"] == "anthropic/claude-opus-4"
        assert calls[1][1]["model"] == "anthropic/claude-sonnet-3.5"

    def test_openai_model_format(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test OpenAI models via OpenRouter (openai/*)."""
        # Arrange
        config = openrouter_config.copy()
        config["deep_think_llm"] = "openai/gpt-4o"
        config["quick_think_llm"] = "openai/gpt-4o-mini"

        # Act
        graph = TradingAgentsGraph(config=config)

        # Assert
        calls = mock_langchain_classes["openai"].call_args_list
        assert calls[0][1]["model"] == "openai/gpt-4o"
        assert calls[1][1]["model"] == "openai/gpt-4o-mini"

    def test_google_model_format(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test Google models via OpenRouter (google/*)."""
        # Arrange
        config = openrouter_config.copy()
        config["deep_think_llm"] = "google/gemini-2.0-flash-exp"
        config["quick_think_llm"] = "google/gemini-flash-1.5"

        # Act
        graph = TradingAgentsGraph(config=config)

        # Assert
        calls = mock_langchain_classes["openai"].call_args_list
        assert calls[0][1]["model"] == "google/gemini-2.0-flash-exp"
        assert calls[1][1]["model"] == "google/gemini-flash-1.5"

    def test_meta_llama_model_format(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test Meta Llama models via OpenRouter (meta-llama/*)."""
        # Arrange
        config = openrouter_config.copy()
        config["deep_think_llm"] = "meta-llama/llama-3.3-70b-instruct"
        config["quick_think_llm"] = "meta-llama/llama-3.1-8b-instruct"

        # Act
        graph = TradingAgentsGraph(config=config)

        # Assert
        calls = mock_langchain_classes["openai"].call_args_list
        assert calls[0][1]["model"] == "meta-llama/llama-3.3-70b-instruct"
        assert calls[1][1]["model"] == "meta-llama/llama-3.1-8b-instruct"

# ============================================================================
# Integration Tests: Embedding Handling
# ============================================================================


class TestEmbeddingHandling:
    """Test that embeddings work correctly with OpenRouter."""

    def test_memory_uses_openrouter_base_url(
        self,
        openrouter_config,
        mock_openai_client,
        mock_chromadb
    ):
        """Test that FinancialSituationMemory uses OpenRouter base_url."""
        # Arrange
        config = openrouter_config.copy()

        # Act
        memory = FinancialSituationMemory("test_memory", config)

        # Assert: OpenAI client initialized with OpenRouter URL
        mock_openai_client.assert_called_once_with(
            base_url="https://openrouter.ai/api/v1"
        )

    def test_memory_embedding_with_openrouter(
        self,
        openrouter_config,
        mock_openai_client,
        mock_chromadb
    ):
        """Test that embeddings can be generated via OpenRouter."""
        # Arrange
        memory = FinancialSituationMemory("test_memory", openrouter_config)
        test_text = "Test financial situation"

        # Act
        embedding = memory.get_embedding(test_text)

        # Assert: Embedding created successfully
        assert embedding is not None
        assert len(embedding) == 1536
        mock_openai_client.return_value.embeddings.create.assert_called_once()

    def test_memory_uses_text_embedding_model(
        self,
        openrouter_config,
        mock_openai_client,
        mock_chromadb
    ):
        """Test that correct embedding model is used."""
        # Arrange
        memory = FinancialSituationMemory("test_memory", openrouter_config)

        # Act
        memory.get_embedding("test")

        # Assert: Called with text-embedding-3-small
        call_args = mock_openai_client.return_value.embeddings.create.call_args
        assert call_args[1]["model"] == "text-embedding-3-small"

    def test_memory_ollama_embedding_model(
        self,
        openrouter_config,
        mock_openai_client,
        mock_chromadb
    ):
        """Test that Ollama uses nomic-embed-text model."""
        # Arrange
        config = openrouter_config.copy()
        config["backend_url"] = "http://localhost:11434/v1"
        memory = FinancialSituationMemory("test_memory", config)

        # Act
        memory.get_embedding("test")

        # Assert: Called with nomic-embed-text
        call_args = mock_openai_client.return_value.embeddings.create.call_args
        assert call_args[1]["model"] == "nomic-embed-text"

    def test_memory_graceful_fallback_on_embedding_error(
        self,
        openrouter_config,
        mock_openai_client,
        mock_chromadb
    ):
        """Test that memory gracefully handles embedding failures."""
        # Arrange
        memory = FinancialSituationMemory("test_memory", openrouter_config)

        # Mock embedding failure
        mock_openai_client.return_value.embeddings.create.side_effect = Exception(
            "API quota exceeded"
        )

        # Act: Try to get memories (will fail on embedding)
        result = memory.get_memories("current situation", n_matches=1)

        # Assert: Returns empty list instead of crashing
        assert result == []

    def test_memory_add_situations_with_openrouter(
        self,
        openrouter_config,
        mock_openai_client,
        mock_chromadb
    ):
        """Test adding situations to memory using OpenRouter embeddings."""
        # Arrange
        memory = FinancialSituationMemory("test_memory", openrouter_config)
        situations = [
            ("Market volatility increasing", "Reduce risk exposure"),
            ("Strong uptrend detected", "Increase position size"),
        ]

        # Act
        memory.add_situations(situations)

        # Assert: Embeddings created for each situation
        assert mock_openai_client.return_value.embeddings.create.call_count == 2

# ============================================================================
# Edge Cases
# ============================================================================


class TestEdgeCases:
    """Test edge cases and boundary conditions."""

    def test_case_insensitive_provider_name(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test that provider names are case-sensitive (current implementation).

        NOTE: Current implementation only accepts lowercase 'openrouter'.
        Unlike 'openai', 'anthropic', 'google' which use .lower(),
        'openrouter' and 'ollama' are case-sensitive string matches.
        """
        # Arrange: Only lowercase 'openrouter' works
        valid_provider = "openrouter"
        invalid_providers = ["OpenRouter", "OPENROUTER", "OpenRouTer"]

        # Act & Assert: Lowercase works
        config = openrouter_config.copy()
        config["llm_provider"] = valid_provider
        graph = TradingAgentsGraph(config=config)
        assert mock_langchain_classes["openai"].called

        # Act & Assert: Other cases fail
        for provider in invalid_providers:
            mock_langchain_classes["openai"].reset_mock()
            config = openrouter_config.copy()
            config["llm_provider"] = provider

            with pytest.raises(ValueError, match="Unsupported LLM provider"):
                graph = TradingAgentsGraph(config=config)

    def test_openrouter_with_ollama_provider_name(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test that 'ollama' provider also uses ChatOpenAI (grouped with openrouter)."""
        # Arrange
        config = openrouter_config.copy()
        config["llm_provider"] = "ollama"
        config["backend_url"] = "http://localhost:11434/v1"

        # Act
        graph = TradingAgentsGraph(config=config)

        # Assert: ChatOpenAI used (not Anthropic/Google)
        assert mock_langchain_classes["openai"].called
        assert not mock_langchain_classes["anthropic"].called
        assert not mock_langchain_classes["google"].called

    def test_empty_model_name(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test behavior with empty model names."""
        # Arrange
        config = openrouter_config.copy()
        config["deep_think_llm"] = ""
        config["quick_think_llm"] = ""

        # Act
        graph = TradingAgentsGraph(config=config)

        # Assert: Empty strings passed to ChatOpenAI
        calls = mock_langchain_classes["openai"].call_args_list
        assert calls[0][1]["model"] == ""
        assert calls[1][1]["model"] == ""

    def test_special_characters_in_model_name(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test model names with special characters."""
        # Arrange
        config = openrouter_config.copy()
        config["deep_think_llm"] = "anthropic/claude-3.5-sonnet-20241022"

        # Act
        graph = TradingAgentsGraph(config=config)

        # Assert: Model name preserved exactly
        call_kwargs = mock_langchain_classes["openai"].call_args_list[0][1]
        assert call_kwargs["model"] == "anthropic/claude-3.5-sonnet-20241022"

    def test_url_with_trailing_slash(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test backend_url with trailing slash."""
        # Arrange
        config = openrouter_config.copy()
        config["backend_url"] = "https://openrouter.ai/api/v1/"

        # Act
        graph = TradingAgentsGraph(config=config)

        # Assert: Trailing slash preserved
        call_kwargs = mock_langchain_classes["openai"].call_args_list[0][1]
        assert call_kwargs["base_url"] == "https://openrouter.ai/api/v1/"

    def test_memory_empty_collection_query(
        self,
        openrouter_config,
        mock_openai_client,
        mock_chromadb
    ):
        """Test querying memories when collection is empty."""
        # Arrange
        memory = FinancialSituationMemory("test_memory", openrouter_config)

        # Act: Query empty collection
        result = memory.get_memories("test situation", n_matches=5)

        # Assert: Returns empty list
        assert result == []

    def test_memory_zero_matches_requested(
        self,
        openrouter_config,
        mock_openai_client,
        mock_chromadb
    ):
        """Test requesting zero matches from memory."""
        # Arrange
        memory = FinancialSituationMemory("test_memory", openrouter_config)
        collection_mock = mock_chromadb.return_value.create_collection.return_value
        collection_mock.count.return_value = 5  # Non-empty collection
        collection_mock.query.return_value = {
            "documents": [[]],
            "metadatas": [[]],
            "distances": [[]]
        }

        # Act
        result = memory.get_memories("test", n_matches=0)

        # Assert: Returns empty list
        assert result == []

# ============================================================================
# Configuration Tests
# ============================================================================


class TestConfiguration:
    """Test configuration handling for OpenRouter."""

    def test_default_config_not_openrouter(self):
        """Test that default config doesn't use OpenRouter."""
        # Arrange & Act
        config = DEFAULT_CONFIG

        # Assert
        assert config["llm_provider"] != "openrouter"
        assert config["backend_url"] == "https://api.openai.com/v1"

    def test_config_override_with_openrouter(
        self,
        openrouter_config,
        mock_langchain_classes,
        mock_memory
    ):
        """Test that config can be overridden to use OpenRouter."""
        # Arrange: OpenRouter config overrides defaults

        # Act
        graph = TradingAgentsGraph(config=openrouter_config)

        # Assert: Configuration applied
        assert graph.config["llm_provider"] == "openrouter"
        assert graph.config["backend_url"] == "https://openrouter.ai/api/v1"

    def test_partial_config_merge(
        self,
        mock_langchain_classes,
        mock_memory
    ):
        """Test that partial config requires all necessary keys.

        NOTE: Current implementation doesn't merge with defaults.
        Missing required keys like 'project_dir' will cause KeyError.
        User must provide complete config or use DEFAULT_CONFIG.copy().
        """
        # Arrange: Partial config missing required keys
        partial_config = {
            "llm_provider": "openrouter",
            "backend_url": "https://openrouter.ai/api/v1",
        }

        # Act & Assert: Missing 'project_dir' causes KeyError
        with pytest.raises(KeyError, match="project_dir"):
            graph = TradingAgentsGraph(config=partial_config)

        # Arrange: Complete config works
        complete_config = DEFAULT_CONFIG.copy()
        complete_config.update({
            "llm_provider": "openrouter",
            "backend_url": "https://openrouter.ai/api/v1",
        })

        # Act
        graph = TradingAgentsGraph(config=complete_config)

        # Assert: Uses provided values
        assert graph.config["llm_provider"] == "openrouter"
        assert graph.config["backend_url"] == "https://openrouter.ai/api/v1"

@@ -1,21 +1,38 @@
import chromadb
from chromadb.config import Settings
from openai import OpenAI
import os


class FinancialSituationMemory:
    def __init__(self, name, config):
        # Handle embeddings based on provider
        if config["backend_url"] == "http://localhost:11434/v1":
            # Ollama local embeddings
            self.embedding = "nomic-embed-text"
            self.client = OpenAI(base_url=config["backend_url"])
        elif config.get("llm_provider", "").lower() == "openrouter":
            # OpenRouter doesn't have native embeddings, use OpenAI embeddings as fallback
            openai_key = os.getenv("OPENAI_API_KEY")
            if not openai_key:
                print("Warning: OPENAI_API_KEY not found. Memory features disabled for OpenRouter.")
                self.client = None
            else:
                self.embedding = "text-embedding-3-small"
                self.client = OpenAI(api_key=openai_key)  # Use OpenAI directly for embeddings
        else:
            # Default to text-embedding-3-small for OpenAI and others
            self.embedding = "text-embedding-3-small"
            self.client = OpenAI(base_url=config["backend_url"])

        self.chroma_client = chromadb.Client(Settings(allow_reset=True))
        self.situation_collection = self.chroma_client.create_collection(name=name)

    def get_embedding(self, text):
        """Get OpenAI embedding for a text"""

        if self.client is None:
            raise RuntimeError("Embedding client not initialized. Check API key configuration.")

        response = self.client.embeddings.create(
            model=self.embedding, input=text
        )
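The embedding-selection branches in `FinancialSituationMemory.__init__` reduce to a small decision function. The sketch below is illustrative only (the helper name `select_embedding_backend` and the returned target labels are not part of the repo); it mirrors the fallback order: an Ollama backend URL selects the local `nomic-embed-text` model, OpenRouter without an OpenAI key disables memory, and everything else uses OpenAI's `text-embedding-3-small`.

```python
def select_embedding_backend(config, openai_key):
    """Hypothetical reduction of the provider branches above.

    Returns (embedding_model, client_target), where client_target names
    which API the OpenAI client would point at, or (None, None) when
    memory features must be disabled.
    """
    if config["backend_url"] == "http://localhost:11434/v1":
        # Ollama serves embeddings locally through its OpenAI-compatible API
        return "nomic-embed-text", "ollama"
    if config.get("llm_provider", "").lower() == "openrouter":
        # OpenRouter has no embedding endpoint; fall back to OpenAI directly
        if not openai_key:
            return None, None  # memory disabled, mirroring the printed warning
        return "text-embedding-3-small", "openai"
    # Default path: use whatever backend_url the config points at
    return "text-embedding-3-small", "backend_url"


print(select_embedding_backend(
    {"backend_url": "https://openrouter.ai/api/v1", "llm_provider": "openrouter"},
    openai_key=None,
))  # (None, None)
```

This shape is why the tests above can assert on `nomic-embed-text` vs `text-embedding-3-small` purely from the config they pass in.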
@@ -72,9 +72,36 @@ class TradingAgentsGraph:
        )

        # Initialize LLMs
        if self.config["llm_provider"].lower() in ("openai", "ollama"):
            self.deep_thinking_llm = ChatOpenAI(model=self.config["deep_think_llm"], base_url=self.config["backend_url"])
            self.quick_thinking_llm = ChatOpenAI(model=self.config["quick_think_llm"], base_url=self.config["backend_url"])
        elif self.config["llm_provider"].lower() == "openrouter":
            # OpenRouter requires explicit API key handling
            openrouter_key = os.getenv("OPENROUTER_API_KEY")
            if not openrouter_key:
                raise ValueError(
                    "OPENROUTER_API_KEY environment variable is required when using openrouter provider. "
                    "Set it with: export OPENROUTER_API_KEY=sk-or-v1-..."
                )

            # OpenRouter attribution headers (HTTP-Referer, X-Title)
            default_headers = {
                "HTTP-Referer": "https://github.com/TauricResearch/TradingAgents",
                "X-Title": "TradingAgents"
            }

            self.deep_thinking_llm = ChatOpenAI(
                model=self.config["deep_think_llm"],
                base_url=self.config["backend_url"],
                api_key=openrouter_key,
                default_headers=default_headers
            )
            self.quick_thinking_llm = ChatOpenAI(
                model=self.config["quick_think_llm"],
                base_url=self.config["backend_url"],
                api_key=openrouter_key,
                default_headers=default_headers
            )
        elif self.config["llm_provider"].lower() == "anthropic":
            self.deep_thinking_llm = ChatAnthropic(model=self.config["deep_think_llm"], base_url=self.config["backend_url"])
            self.quick_thinking_llm = ChatAnthropic(model=self.config["quick_think_llm"], base_url=self.config["backend_url"])
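The key check and attribution headers in the openrouter branch can be exercised in isolation. The sketch below is a hedged illustration, not repo code: the helper `build_openrouter_client_kwargs` is hypothetical, but the kwargs it returns match what the branch passes to `ChatOpenAI`.

```python
import os


def build_openrouter_client_kwargs(config):
    """Illustrative helper mirroring the openrouter branch above.

    Raises ValueError when OPENROUTER_API_KEY is missing, matching the
    error introduced in the commit.
    """
    key = os.getenv("OPENROUTER_API_KEY")
    if not key:
        raise ValueError(
            "OPENROUTER_API_KEY environment variable is required when using openrouter provider."
        )
    return {
        "base_url": config["backend_url"],
        "api_key": key,
        "default_headers": {
            "HTTP-Referer": "https://github.com/TauricResearch/TradingAgents",
            "X-Title": "TradingAgents",
        },
    }


# Demo with a throwaway key; real usage exports a genuine sk-or-v1-... key.
os.environ["OPENROUTER_API_KEY"] = "sk-or-demo"
kwargs = build_openrouter_client_kwargs({"backend_url": "https://openrouter.ai/api/v1"})
print(kwargs["default_headers"]["X-Title"])  # TradingAgents
```

Per OpenRouter's docs the attribution headers are for app ranking on openrouter.ai; the hard failure on a missing key is the commit's choice, so misconfiguration surfaces at graph construction rather than on the first request.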