feat(docs): add comprehensive documentation structure - Fixes #52
- Add docs/ directory with 17 documentation files
- Architecture: multi-agent-system, data-flow, llm-integration
- API Reference: trading-graph, agents, dataflows
- Guides: adding-new-analyst, adding-llm-provider, adding-data-vendor, configuration
- Testing: README, running-tests, writing-tests
- Development: setup, contributing
- Update PROJECT.md with TESTING STRATEGY requirements
- Add test_documentation_structure.py for validation
🤖 Generated with Claude Code
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
parent 436f6cc092
commit c0dfb21c00
**CHANGELOG.md** (11 changed lines)

@@ -8,6 +8,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

### Added

- Comprehensive documentation structure (Issue #52)
  - Organized `docs/` directory with structured documentation sections
  - Quick start guide at `docs/QUICKSTART.md`
  - Architecture documentation in `docs/architecture/` (multi-agent-system, data-flow, llm-integration)
  - API reference documentation in `docs/api/` (trading-graph, agents, dataflows)
  - Developer guides in `docs/guides/` (adding-new-analyst, adding-llm-provider, adding-data-vendor, configuration)
  - Testing documentation in `docs/testing/` (README, running-tests, writing-tests)
  - Development setup guide in `docs/development/`
  - Central documentation index at `docs/README.md` with navigation and key concepts
  - Updated PROJECT.md DOCUMENTATION MAP section to reference the new `docs/` structure
  - Added a Documentation section to README.md with links to key guides
- Export reports to file with metadata (Issue #21)
  - YAML frontmatter formatting for report metadata [file:tradingagents/utils/report_exporter.py:63-111](tradingagents/utils/report_exporter.py)
  - Report creation with combined YAML frontmatter and markdown content [file:tradingagents/utils/report_exporter.py:112-136](tradingagents/utils/report_exporter.py)
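The report exporter entries above combine YAML frontmatter with markdown content. A rough sketch of the idea (function names here are illustrative; the actual implementation lives in `tradingagents/utils/report_exporter.py`):

```python
from datetime import date

def format_frontmatter(metadata: dict) -> str:
    """Render metadata as a YAML frontmatter block."""
    lines = ["---"]
    for key, value in metadata.items():
        lines.append(f"{key}: {value}")
    lines.append("---")
    return "\n".join(lines)

def create_report(metadata: dict, body: str) -> str:
    """Combine YAML frontmatter with markdown report content."""
    return f"{format_frontmatter(metadata)}\n\n{body}\n"

report = create_report(
    {"ticker": "NVDA", "analysis_date": date(2024, 5, 10), "decision": "BUY"},
    "# Analysis Report\n\nDetails here.",
)
print(report.splitlines()[0])  # prints "---"
```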
**PROJECT.md** (397 changed lines)

@@ -1,53 +1,83 @@
# PROJECT.md - TradingAgents
# PROJECT.md - TradingAgents Investment Platform

> Multi-Agent LLM Financial Trading Framework
> Last Updated: 2025-12-25
> Multi-Agent LLM Investment Platform with Execution Capabilities
> Last Updated: 2025-12-26

---

## PROJECT VISION

TradingAgents is a multi-agent trading framework that mirrors the dynamics of real-world trading firms. By deploying specialized LLM-powered agents—from fundamental analysts, sentiment experts, and technical analysts to traders and risk management teams—the platform collaboratively evaluates market conditions and informs trading decisions through dynamic agent discussions.

TradingAgents is evolving from a signal-generation research framework into a **complete investment platform** that:

**Research Focus**: This framework is designed for research purposes to explore how multi-agent LLM systems can approach complex financial decision-making.

1. **Analyzes markets** using multi-agent LLM collaboration (existing capability)
2. **Executes trades** via broker APIs (Alpaca, Interactive Brokers)
3. **Manages portfolios** with performance tracking and Australian CGT compliance
4. **Simulates strategies** to compare effectiveness before risking capital
5. **Learns from outcomes** using a layered memory system (FinMem pattern)

**Target Markets:** US Stocks, ETFs, Crypto, Futures, Australian Equities

**Patterns Borrowed From:**
- **FinMem**: Layered memory system (recency, relevancy, importance scoring)
- **Microsoft Qlib**: Modular loose-coupled architecture
- **Alpaca Bots**: Order execution, position tracking, risk controls
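The FinMem-style layered memory scores each stored memory on recency, relevancy, and importance. A minimal sketch of how a combined retrieval score might be computed (the function name, half-life, and equal weighting are assumptions here, not the FinMem specification):

```python
import math
from datetime import datetime, timedelta, timezone

def memory_score(event_time, relevancy, importance, now=None, half_life_days=30.0):
    """Blend recency, relevancy, and importance into one retrieval score.

    relevancy and importance are assumed pre-normalized to [0, 1];
    recency decays exponentially with the age of the memory.
    """
    now = now or datetime.now(timezone.utc)
    age_days = (now - event_time).total_seconds() / 86400
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return (recency + relevancy + importance) / 3

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
# A 30-day-old memory has its recency halved relative to a fresh one.
fresh = memory_score(now, 0.6, 0.8, now=now)
month_old = memory_score(now - timedelta(days=30), 0.6, 0.8, now=now)
assert fresh > month_old
```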
---

## GOALS

### Primary Goals
- [x] Provide a modular multi-agent framework for financial trading analysis
- [x] Support multiple LLM providers (OpenAI, Anthropic, Google, OpenRouter, Ollama)
- [x] Enable configurable data vendors (yfinance, Alpha Vantage, local)
- [x] Implement specialized analyst agents (fundamental, sentiment, news, technical)
- [x] Support researcher debates (bull vs bear perspectives)
- [x] Include risk management and portfolio approval workflow

### Phase 1: Foundation (Current)
- [x] Multi-agent framework for financial analysis
- [x] Multiple LLM providers (OpenAI, Anthropic, Google, OpenRouter, Ollama)
- [x] Data vendors (yfinance, Alpha Vantage, Google News)
- [x] Analyst agents (fundamental, sentiment, news, technical)
- [x] Researcher debates (bull vs bear)
- [x] Risk management workflow
- [ ] **Database layer** for user persistence (#2-7)
- [ ] **Enhanced data layer** - FRED, multi-timeframe, benchmarks (#8-12)

### Secondary Goals
- [ ] Expand backtesting capabilities with Tauric TradingDB
- [ ] Add support for additional asset classes
- [ ] Improve caching and performance optimization
- [ ] Enhance CLI experience with more configuration options

### Phase 2: Enhanced Analysis
- [ ] **Momentum Analyst** - multi-timeframe momentum, ROC, ADX (#13)
- [ ] **Macro Analyst** - FRED interpretation, regime detection (#14)
- [ ] **Correlation Analyst** - cross-asset, sector rotation (#15)
- [ ] **Position Sizing Manager** - Kelly, risk parity, ATR (#16)
- [ ] **Memory System** - FinMem pattern for learning (#18-21)
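The planned Momentum Analyst relies on indicators such as rate of change (ROC), which is just the percent move over a lookback window. A stdlib-only sketch (the function name and default period are illustrative):

```python
def rate_of_change(prices: list[float], periods: int = 12) -> float:
    """Momentum via Rate of Change: percent move over `periods` bars."""
    if len(prices) <= periods:
        raise ValueError("not enough price history")
    current, past = prices[-1], prices[-1 - periods]
    return (current - past) / past * 100

# Flat for 12 bars, then a jump to 110: ROC is +10%.
print(rate_of_change([100.0] * 12 + [110.0]))  # prints 10.0
```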
### Phase 3: Execution & Portfolio
- [ ] **Broker Integration** - Alpaca (US), IBKR (futures, ASX) (#22-28)
- [ ] **Portfolio Management** - positions, performance, CGT (#29-32)
- [ ] **Simulation Mode** - strategy comparison without real money (#33-37)

### Phase 4: Alerts & Polish
- [ ] **Alert System** - Email, Slack, SMS (#38-41)
- [ ] **Backtest Engine** - historical simulation (#42-44)
- [ ] **REST API** - external access (#45-48)

---
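The Position Sizing Manager (#16) lists Kelly sizing among its methods. The Kelly criterion itself is a one-line formula, f* = p - (1 - p)/b; a sketch under stated assumptions (clamping at zero is a design choice here, not part of the roadmap):

```python
def kelly_fraction(win_prob: float, win_loss_ratio: float) -> float:
    """Kelly criterion: f* = p - (1 - p) / b.

    win_prob: probability of a winning trade (p)
    win_loss_ratio: average win divided by average loss (b)
    Returns the suggested fraction of capital; clamped at 0 (no edge, no bet).
    """
    f = win_prob - (1 - win_prob) / win_loss_ratio
    return max(0.0, f)

# 55% win rate with a 1.5:1 payoff suggests betting 25% of capital.
print(round(kelly_fraction(0.55, 1.5), 4))  # prints 0.25
```

In practice the raw Kelly fraction is usually scaled down ("half Kelly") to reduce drawdowns, which is one reason the roadmap pairs it with risk parity and ATR-based sizing.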
## SCOPE

### In Scope
- Stock trading analysis and recommendations
- Multi-agent collaboration and debate mechanisms
- Integration with financial data APIs
- Multi-agent LLM analysis (fundamentals, sentiment, news, technical, momentum, macro)
- **Live trade execution** via Alpaca and Interactive Brokers
- **Paper trading / simulation mode** for strategy testing
- Multi-asset support: US stocks, ETFs, crypto, futures, Australian equities
- Portfolio tracking with mark-to-market valuation
- **Australian CGT calculations** with 50% discount for >12 month holdings
- Multi-timeframe analysis (daily, weekly, monthly)
- Macro-economic data integration (FRED)
- User database for profiles, portfolios, settings
- Alert notifications (email, Slack, SMS)
- Backtesting with historical data
- CLI and programmatic Python interfaces
- Configuration of LLM models and data sources
- Risk assessment and position management
- Support for multiple LLM providers (OpenAI, Anthropic, Google, OpenRouter, Ollama)
- Optional REST API
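The Australian CGT item above hinges on one rule: capital gains on assets held longer than 12 months receive a 50% discount for individual taxpayers, while losses are never discounted. An informational sketch (not tax advice; the function name is hypothetical):

```python
from datetime import date

def cgt_taxable_gain(buy_date: date, sell_date: date,
                     cost_base: float, proceeds: float) -> float:
    """Taxable capital gain under the Australian CGT discount rule."""
    gain = proceeds - cost_base
    held_over_year = (sell_date - buy_date).days > 365
    if gain > 0 and held_over_year:
        return gain * 0.5  # 50% discount for >12 month holdings
    return gain

# A $10,000 gain held ~18 months leaves $5,000 taxable.
print(cgt_taxable_gain(date(2023, 1, 10), date(2024, 7, 10), 20_000.0, 30_000.0))  # prints 5000.0
```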
### Out of Scope
- Live trading execution (simulation only)
- Cryptocurrency or forex trading
- Real-time streaming data
- Mobile or web interfaces
- Financial advice (research purposes only)
- Mobile or web UI (API only for now)
- Real-time streaming data (polling-based)
- Options trading
- Financial advice (investment decisions are user's responsibility)
- Tax advice (CGT calculations are informational only)

---
@@ -73,40 +103,67 @@ TradingAgents is a multi-agent trading framework that mirrors the dynamics of re
## ARCHITECTURE

### System Overview
### System Overview (Extended)

```
┌─────────────────────────────────────────────────────────────────┐
│ TradingAgents Graph │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────────┐ ┌──────────────────┐ │
│ │ Analyst Team │ │ Researcher Team │ │
│ ├──────────────────┤ ├──────────────────┤ │
│ │ • Fundamentals │───▶│ • Bull Researcher│ │
│ │ • Sentiment │ │ • Bear Researcher│ │
│ │ • News │ │ (Debates) │ │
│ │ • Technical │ └────────┬─────────┘ │
│ └──────────────────┘ │ │
│ ▼ │
│ ┌──────────────────┐ ┌──────────────────┐ │
│ │ Data Vendors │ │ Trader Agent │ │
│ ├──────────────────┤ └────────┬─────────┘ │
│ │ • yfinance │ │ │
│ │ • Alpha Vantage │ ▼ │
│ │ • OpenAI │ ┌──────────────────┐ │
│ │ • Google │ │ Risk Management │ │
│ │ • Local │ ├──────────────────┤ │
│ └──────────────────┘ │ • Aggressive │ │
│ │ • Conservative │ │
│ │ • Neutral │ │
│ └────────┬─────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────┐ │
│ │Portfolio Manager │ │
│ │ (Final Decision) │ │
│ └──────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
```

```
┌─────────────────────────────────────────────────────────────────────┐
│ DATA LAYER │
├─────────────────────────────────────────────────────────────────────┤
│ yfinance │ Alpha Vantage │ FRED (NEW) │ Alpaca │ Multi-Timeframe │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ ANALYSIS LAYER │
├─────────────────────────────────────────────────────────────────────┤
│ Market │ Momentum │ Macro │ Correlation │ News │ Fundamentals │
│ Analyst │ Analyst │ Analyst │ Analyst │ │ │
│ │ (NEW) │ (NEW) │ (NEW) │ │ │
├─────────────────────────────────────────────────────────────────────┤
│ Bull ←── Debate ──→ Bear → Research Manager │
├─────────────────────────────────────────────────────────────────────┤
│ Trader → Signal + Confidence Score │
├─────────────────────────────────────────────────────────────────────┤
│ Risk Debate → Position Sizing Manager (NEW) │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ STRATEGY LAYER (NEW) │
├─────────────────────────────────────────────────────────────────────┤
│ Signal-to-Order │ Position Sizing │ Timeframe Coordinator │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ EXECUTION LAYER (NEW) │
├─────────────────────────────────────────────────────────────────────┤
│ Order Validator │ Risk Controls │ Broker Router │
│ │ │ (Paper / Alpaca / IBKR) │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ PORTFOLIO LAYER (NEW) │
├─────────────────────────────────────────────────────────────────────┤
│ Position Tracker │ Portfolio State │ Performance │ CGT Calculator │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ MEMORY & LEARNING (Enhanced) │
├─────────────────────────────────────────────────────────────────────┤
│ Layered Memory (FinMem) │ Trade History │ Risk Profiles │
└─────────────────────────────────────────────────────────────────────┘
```
### Broker Routing
```
Asset Type        → Broker Selection
─────────────────────────────────────
US Stocks/ETFs    → Alpaca
Crypto            → Alpaca
Futures (GC, SI)  → Interactive Brokers
ASX (Australia)   → Interactive Brokers
```
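The routing table above can be captured as a small lookup. A provisional sketch (the asset-type keys and `route_order` name are assumptions, not the planned `broker_router.py` API):

```python
from enum import Enum

class Broker(Enum):
    ALPACA = "alpaca"
    IBKR = "interactive_brokers"

# Hypothetical routing table mirroring the mapping above.
ROUTES = {
    "us_stock": Broker.ALPACA,
    "etf": Broker.ALPACA,
    "crypto": Broker.ALPACA,
    "futures": Broker.IBKR,
    "asx": Broker.IBKR,
}

def route_order(asset_type: str) -> Broker:
    """Pick a broker for an asset type; unknown types fail loudly."""
    try:
        return ROUTES[asset_type]
    except KeyError:
        raise ValueError(f"No broker configured for asset type: {asset_type}")
```

Failing loudly on unknown asset types keeps a misconfigured order from silently reaching the wrong broker, which matters once live execution is in scope.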
### Technology Stack

@@ -133,49 +190,121 @@ TradingAgents is a multi-agent trading framework that mirrors the dynamics of re
```
TradingAgents/
├── tradingagents/               # Main package
│   ├── agents/                  # LLM agent implementations
│   │   ├── analysts/            # Analyst agents (fundamental, sentiment, news, technical)
│   │   ├── researchers/         # Bull/bear researcher debate agents
│   │   ├── risk_mgmt/           # Risk management debaters
│   │   ├── trader/              # Trader agent
│   │   ├── managers/            # Research and risk managers
│   │   └── utils/               # Agent utilities, tools, states
│   ├── dataflows/               # Data vendor integrations
│   │   ├── alpha_vantage*.py    # Alpha Vantage API modules
│   │   ├── y_finance.py         # yfinance integration
│   │   ├── google.py            # Google news integration
│   │   └── local.py             # Local data vendor
│   ├── graph/                   # LangGraph workflow
│   │   ├── trading_graph.py     # Main graph definition
│   │   ├── propagation.py       # Forward propagation logic
│   │   ├── reflection.py        # Agent reflection
│   │   └── signal_processing.py
│   └── default_config.py        # Default configuration
├── cli/                         # Command-line interface
│   ├── main.py                  # CLI entry point
│   ├── models.py                # CLI data models
│   └── utils.py                 # CLI utilities
├── main.py                      # Quick start example
├── test.py                      # Basic tests
├── requirements.txt             # Python dependencies
├── pyproject.toml               # Project metadata
└── assets/                      # Documentation images
├── tradingagents/               # Main package (existing + enhanced)
│   ├── agents/
│   │   ├── analysts/            # Analyst agents
│   │   │   ├── fundamentals_analyst.py
│   │   │   ├── sentiment_analyst.py
│   │   │   ├── news_analyst.py
│   │   │   ├── market_analyst.py (technical)
│   │   │   ├── momentum_analyst.py      # NEW
│   │   │   ├── macro_analyst.py         # NEW
│   │   │   └── correlation_analyst.py   # NEW
│   │   ├── managers/
│   │   │   └── position_sizing_manager.py  # NEW
│   │   └── ...
│   ├── dataflows/
│   │   ├── fred.py              # NEW - Federal Reserve data
│   │   ├── multi_timeframe.py   # NEW - Weekly/Monthly
│   │   ├── benchmark.py         # NEW - SPY, sectors
│   │   └── ...
│   └── memory/                  # NEW - FinMem pattern
│       ├── layered_memory.py
│       ├── trade_history.py
│       └── risk_profiles.py
│
├── execution/                   # NEW - Broker integration
│   ├── brokers/
│   │   ├── base.py
│   │   ├── broker_router.py
│   │   ├── alpaca_broker.py
│   │   ├── ibkr_broker.py
│   │   └── paper_broker.py
│   ├── orders/
│   └── risk_controls/
│
├── portfolio/                   # NEW - Portfolio management
│   ├── portfolio_state.py
│   ├── position_tracker.py
│   ├── performance.py
│   └── tax_calculator.py        # Australian CGT
│
├── simulation/                  # NEW - Strategy testing
│   ├── scenario_runner.py
│   ├── strategy_comparator.py
│   └── economic_conditions.py
│
├── strategy/                    # NEW
│   ├── signal_to_order.py
│   ├── position_sizing.py
│   └── strategy_executor.py
│
├── backtest/                    # NEW
│   ├── backtest_engine.py
│   └── results_analyzer.py
│
├── alerts/                      # NEW
│   ├── alert_manager.py
│   └── channels/
│
├── database/                    # NEW - User persistence
│   ├── models/
│   │   ├── user.py
│   │   ├── portfolio.py
│   │   ├── settings.py
│   │   └── trade.py
│   ├── migrations/
│   └── db.py
│
├── api/                         # NEW - REST API (optional)
│   └── app.py
│
├── cli/                         # Existing CLI
├── main.py
└── scripts/
    └── create_issues.py         # GitHub issue creation
```
---

## TESTING STRATEGY

### Current State
- Basic test file exists (`test.py`)
- No formal test framework configured

### Test Organization (REQUIRED)
All new code MUST include tests organized as follows:

### Recommended Testing
- Unit tests for individual agents
- Integration tests for data vendor APIs
- End-to-end tests for trading graph propagation
- Mock LLM responses for deterministic testing

```
tests/
├── conftest.py          # Shared fixtures (LLM mocks, env mocks)
├── unit/                # Fast, mocked tests
│   ├── conftest.py      # Unit-specific fixtures
│   └── test_*.py
├── integration/         # Tests with real internal components
│   ├── conftest.py      # Integration fixtures
│   └── test_*.py
└── e2e/                 # End-to-end tests
    └── test_*.py
```
### Testing Requirements
1. **TDD Approach**: Write tests BEFORE implementation
2. **Unit Tests**: All new functions must have unit tests
3. **Integration Tests**: All new features must have integration tests
4. **Mocking**: Use fixtures in conftest.py for LLM and API mocking
5. **Markers**: Use `@pytest.mark.unit`, `@pytest.mark.integration`, `@pytest.mark.e2e`
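A unit test following these requirements might look like the sketch below (the test body is illustrative; real tests would exercise project code through the conftest fixtures):

```python
import pytest

@pytest.mark.unit
def test_confidence_score_in_range():
    # Stand-in for a mocked LLM trading decision.
    decision = {"action": "BUY", "confidence_score": 0.8}
    assert decision["action"] in {"BUY", "SELL", "HOLD"}
    assert 0.0 <= decision["confidence_score"] <= 1.0
```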
### Test Fixtures (conftest.py)
Standard fixtures that MUST be used:
- `mock_env_openrouter`, `mock_env_openai`, `mock_env_anthropic` - Environment isolation
- `mock_langchain_classes` - LLM class mocking
- `mock_chromadb` - Database mocking (uses `get_or_create_collection`)
- `mock_yfinance`, `mock_alpha_vantage` - Data vendor mocking
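Conceptually, the `mock_env_*` fixtures swap in fake credentials for the duration of a test. A stdlib-only sketch of that idea (the real fixtures would use pytest's `monkeypatch`; this name is illustrative):

```python
import os
from contextlib import contextmanager

@contextmanager
def fake_env(**overrides):
    """Temporarily override environment variables, restoring them on exit."""
    saved = {k: os.environ.get(k) for k in overrides}
    os.environ.update(overrides)
    try:
        yield
    finally:
        for k, v in saved.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v

with fake_env(OPENAI_API_KEY="test-key"):
    assert os.environ["OPENAI_API_KEY"] == "test-key"
```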
### Running Tests
```bash
pytest tests/unit -m unit                # Fast unit tests only
pytest tests/integration -m integration  # Integration tests
pytest tests/ --tb=short                 # All tests
```

---
@@ -183,25 +312,81 @@ TradingAgents/
| Document | Purpose |
|----------|---------|
| README.md | Installation, usage, API reference |
| README.md | Installation, usage, API reference, feature overview |
| PROJECT.md | This file - project roadmap, architecture, configuration |
| LICENSE | MIT License |
| PROJECT.md | This file - project overview |
| docs/ | Comprehensive documentation structure (see below) |
| assets/ | Architecture diagrams, CLI screenshots |
### Documentation Structure (`docs/`)
Located in the `/docs/` directory with the following sections:

- **Getting Started**
  - `QUICKSTART.md` - Get up and running with TradingAgents
  - `development/setup.md` - Development environment setup
  - `guides/configuration.md` - Configuration reference for LLM providers and data vendors

- **Architecture & Design**
  - `architecture/multi-agent-system.md` - Agent roles and collaboration patterns
  - `architecture/data-flow.md` - System data flow and integrations
  - `architecture/llm-integration.md` - LLM provider abstraction and selection

- **API Reference**
  - `api/trading-graph.md` - Core TradingGraph orchestration API
  - `api/agents.md` - Agent interfaces and implementations
  - `api/dataflows.md` - Data vendor integrations and APIs

- **Developer Guides**
  - `guides/adding-new-analyst.md` - Extend the framework with custom analysts
  - `guides/adding-llm-provider.md` - Integrate new LLM providers
  - `guides/adding-data-vendor.md` - Add new data vendor integrations

- **Testing**
  - `testing/README.md` - Testing philosophy and overview
  - `testing/running-tests.md` - Test suite execution guide
  - `testing/writing-tests.md` - Guidelines for writing new tests

- **Development**
  - `development/setup.md` - Development environment configuration
  - `development/contributing.md` - Contributing guidelines

**For the full documentation index, see `docs/README.md`.**

---
## CURRENT SPRINT

<!-- TODO: Define your current sprint goals -->
### Sprint: Platform Foundation

**Goal:** Build the foundation for the investment platform

### Active Work
- [ ] Define sprint goals here

See [GitHub Issues](https://github.com/akaszubski/TradingAgents/issues) for the full backlog.

### Backlog
- Expand data vendor options
- Improve caching performance
- Add more comprehensive testing
- Enhance CLI configuration options

**Phase 1: Database (Issues #2-7)**
- [ ] #2 Database setup - SQLAlchemy + PostgreSQL/SQLite
- [ ] #3 User model - profiles, tax jurisdiction
- [ ] #4 Portfolio model - live, paper, backtest
- [ ] #5 Settings model - risk profiles, alerts
- [ ] #6 Trade model - CGT tracking
- [ ] #7 Alembic migrations

**Phase 2: Data Layer (Issues #8-12)**
- [ ] #8 FRED API integration
- [ ] #9 Multi-timeframe aggregation
- [ ] #10 Benchmark data
- [ ] #11 Interface routing
- [ ] #12 Data caching
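Multi-timeframe aggregation (#9) mostly means collapsing daily bars into weekly or monthly ones. A stdlib sketch of the weekly case (the real implementation would likely use pandas resampling; the name here is illustrative):

```python
from datetime import date

def weekly_closes(daily: dict[date, float]) -> dict[date, float]:
    """Aggregate daily closes to weekly closes (last close of each ISO week)."""
    latest_per_week: dict[tuple[int, int], tuple[date, float]] = {}
    for d in sorted(daily):
        # Key by (ISO year, ISO week); later days in the week overwrite earlier ones.
        latest_per_week[d.isocalendar()[:2]] = (d, daily[d])
    return {d: px for d, px in latest_per_week.values()}
```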
### Backlog (47 total issues)
- Phase 3: New Analysts (#13-17)
- Phase 4: Memory System (#18-21)
- Phase 5: Execution Layer (#22-28)
- Phase 6: Portfolio Management (#29-32)
- Phase 7: Simulation (#33-37)
- Phase 8: Alerts (#38-41)
- Phase 9: Backtest (#42-44)
- Phase 10: API & Docs (#45-48)

---
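The Trade model (#6) has to carry enough information for CGT tracking. A provisional sketch of the shape as a plain dataclass (the real model would be SQLAlchemy, and the field names are assumptions):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Trade:
    """Illustrative shape of a CGT-aware trade record."""
    ticker: str
    quantity: float
    buy_date: date
    buy_price: float
    sell_date: Optional[date] = None
    sell_price: Optional[float] = None

    @property
    def cgt_discount_eligible(self) -> bool:
        # Australian CGT: holdings over 12 months qualify for the 50% discount.
        if self.sell_date is None:
            return False
        return (self.sell_date - self.buy_date).days > 365
```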
**README.md** (11 changed lines)

@@ -335,6 +335,17 @@ grep ERROR ./logs/tradingagents.log
If an error occurs during analysis, partial results are automatically saved, allowing you to inspect completed work and resume processing. Partial results are saved to the results directory in JSON format.

## Documentation

For comprehensive documentation, guides, and API references, please visit the [docs/](docs/) directory:

- **[Quick Start Guide](docs/QUICKSTART.md)** - Get up and running quickly
- **[Architecture Documentation](docs/architecture/)** - Understand system design and components
- **[API Reference](docs/api/)** - Detailed API documentation
- **[Developer Guides](docs/guides/)** - How-to guides for extending the framework
- **[Testing Guide](docs/testing/)** - Testing infrastructure and best practices
- **[Complete Documentation Index](docs/README.md)** - Full table of contents

## Contributing

We welcome contributions from the community! Whether it's fixing a bug, improving documentation, or suggesting a new feature, your input helps make this project better. If you are interested in this line of research, please consider joining our open-source financial AI research community [Tauric Research](https://tauric.ai/).
**docs/QUICKSTART.md** (new file, 241 lines)

@@ -0,0 +1,241 @@
# Quick Start Guide

Get started with TradingAgents in under 10 minutes.

## Installation

### Prerequisites

- Python >= 3.10 (Python 3.13 recommended)
- pip package manager
- Conda or virtualenv (recommended)

### Step 1: Clone the Repository

```bash
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
```

### Step 2: Create Virtual Environment

Using conda (recommended):

```bash
conda create -n tradingagents python=3.13
conda activate tradingagents
```

Or using venv:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

### Step 3: Install Dependencies

```bash
pip install -r requirements.txt
```

## Required APIs

TradingAgents requires API keys for LLM providers and data sources.

### LLM Provider (choose one)

**Option 1: OpenAI (default)**

```bash
export OPENAI_API_KEY=your_api_key_here
```

Get your key at: [https://platform.openai.com/api-keys](https://platform.openai.com/api-keys)

**Option 2: Anthropic**

```bash
export ANTHROPIC_API_KEY=your_api_key_here
```

Get your key at: [https://console.anthropic.com/](https://console.anthropic.com/)

**Option 3: OpenRouter (unified access)**

```bash
export OPENROUTER_API_KEY=your_api_key_here
export OPENAI_API_KEY=your_api_key_here  # Still needed for embeddings
```

Get your key at: [https://openrouter.ai/keys](https://openrouter.ai/keys)

**Option 4: Google Generative AI**

```bash
export GOOGLE_API_KEY=your_api_key_here
```

Get your key at: [https://makersuite.google.com/app/apikey](https://makersuite.google.com/app/apikey)

### Data Vendor

**Alpha Vantage (required for fundamental and news data)**

```bash
export ALPHA_VANTAGE_API_KEY=your_api_key_here
```

Get a free key at: [https://www.alphavantage.co/support/#api-key](https://www.alphavantage.co/support/#api-key)

TradingAgents users get increased rate limits (60 requests/minute, no daily limits) thanks to Alpha Vantage's open-source support program.

### Using .env File

Alternatively, create a `.env` file in the project root:

```bash
cp .env.example .env
# Edit .env with your actual API keys
```

Example `.env`:

```env
OPENAI_API_KEY=your_openai_key_here
ALPHA_VANTAGE_API_KEY=your_alpha_vantage_key_here
TRADINGAGENTS_RESULTS_DIR=./results
```

## Your First Analysis

### CLI Mode

Run the interactive CLI:

```bash
python -m cli.main
```

You'll see a menu where you can:
- Select ticker symbols (e.g., NVDA, AAPL, TSLA)
- Choose analysis date
- Configure LLM models
- Set research depth (debate rounds)

The CLI will display real-time progress as agents analyze the market and generate trading signals.

### Programmatic Mode

Create a Python script:

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

# Initialize the trading graph
ta = TradingAgentsGraph(debug=True, config=DEFAULT_CONFIG.copy())

# Run analysis for NVDA on a specific date
_, decision = ta.propagate("NVDA", "2024-05-10")

# Print the trading decision
print(f"Decision: {decision['action']}")
print(f"Confidence: {decision['confidence_score']}")
print(f"Reasoning: {decision['reasoning']}")
```

Run your script:

```bash
python your_script.py
```

## Configuration

### Using Different LLM Providers

**OpenAI (default):**

```python
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "openai"
config["deep_think_llm"] = "o4-mini"
config["quick_think_llm"] = "gpt-4o-mini"
config["backend_url"] = "https://api.openai.com/v1"
```

**Anthropic:**

```python
config["llm_provider"] = "anthropic"
config["deep_think_llm"] = "claude-sonnet-4-20250514"
config["quick_think_llm"] = "claude-sonnet-4-20250514"
config["backend_url"] = "https://api.anthropic.com"
```

**OpenRouter:**

```python
config["llm_provider"] = "openrouter"
config["deep_think_llm"] = "anthropic/claude-sonnet-4.5"
config["quick_think_llm"] = "openai/gpt-4o-mini"
config["backend_url"] = "https://openrouter.ai/api/v1"
```

### Customizing Data Vendors

```python
config["data_vendors"] = {
    "core_stock_apis": "yfinance",        # Stock prices
    "technical_indicators": "yfinance",   # Technical analysis
    "fundamental_data": "alpha_vantage",  # Company fundamentals
    "news_data": "alpha_vantage",         # News and sentiment
}
```

See [Configuration Guide](guides/configuration.md) for all available options.

## Next Steps

- **[Architecture Overview](architecture/multi-agent-system.md)** - Understand how agents work together
- **[API Reference](api/trading-graph.md)** - Explore the full API
- **[Adding New Analysts](guides/adding-new-analyst.md)** - Extend the framework
- **[Configuration Guide](guides/configuration.md)** - Advanced configuration options

## Troubleshooting

### Common Issues

**API Rate Limits**

If you hit rate limits, the framework will automatically save partial analysis state. Wait for the suggested retry time and re-run.

**Missing API Keys**

Ensure environment variables are set:

```bash
echo $OPENAI_API_KEY
echo $ALPHA_VANTAGE_API_KEY
```

**Import Errors**

Ensure you're in the correct virtual environment:

```bash
conda activate tradingagents  # or source venv/bin/activate
```

**Data Vendor Errors**

Check that your Alpha Vantage API key is valid and has remaining quota. The free tier allows 25 requests/day; TradingAgents users get 60 requests/minute.

## Getting Help

- **Documentation**: Browse the [full documentation](README.md)
- **Discord**: Join our [Discord community](https://discord.com/invite/hk9PGKShPK)
- **GitHub Issues**: [Report bugs or ask questions](https://github.com/TauricResearch/TradingAgents/issues)

Happy trading!
**docs/README.md** (new file, 105 lines)

@@ -0,0 +1,105 @@
# TradingAgents Documentation

Welcome to the TradingAgents documentation. This guide will help you understand, use, and extend the TradingAgents multi-agent financial trading framework.

## Overview

TradingAgents is a multi-agent trading framework that mirrors the dynamics of real-world trading firms. By deploying specialized LLM-powered agents - from fundamental analysts, sentiment experts, and technical analysts to traders and risk management teams - the platform collaboratively evaluates market conditions and informs trading decisions.

## Documentation Structure

### Getting Started

- **[Quick Start Guide](QUICKSTART.md)** - Get up and running quickly with TradingAgents
- **[Development Setup](development/setup.md)** - Set up your development environment
- **[Configuration Guide](guides/configuration.md)** - Configure LLM providers, data vendors, and system settings

### Architecture

Understand the system design and how components interact:

- **[Multi-Agent System](architecture/multi-agent-system.md)** - Agent roles, responsibilities, and collaboration patterns
- **[Data Flow](architecture/data-flow.md)** - How data moves through the system
- **[LLM Integration](architecture/llm-integration.md)** - Provider abstraction and model selection

### API Reference

Detailed API documentation for developers:

- **[TradingGraph API](api/trading-graph.md)** - Core orchestration API
- **[Agents API](api/agents.md)** - Agent interfaces and implementations
- **[Data Flows API](api/dataflows.md)** - Data vendor integrations

### Guides

Step-by-step tutorials for common tasks:

- **[Adding a New Analyst](guides/adding-new-analyst.md)** - Extend the framework with custom analyst agents
- **[Adding an LLM Provider](guides/adding-llm-provider.md)** - Integrate new language model providers
- **[Configuration Options](guides/configuration.md)** - Comprehensive configuration reference

### Testing

Learn about the testing infrastructure:

- **[Testing Overview](testing/README.md)** - Testing philosophy and structure
- **[Running Tests](testing/running-tests.md)** - How to run the test suite
- **[Writing Tests](testing/writing-tests.md)** - Guidelines for writing new tests

### Development

Contributing and development guidelines:

- **[Development Setup](development/setup.md)** - Set up your development environment
- **[Contributing Guide](development/contributing.md)** - How to contribute to TradingAgents

## Key Concepts

### Multi-Agent Architecture

TradingAgents decomposes complex trading tasks into specialized roles:

- **Analyst Team**: Fundamentals, Sentiment, News, and Technical analysts
- **Researcher Team**: Bull and Bear researchers who debate insights
- **Trader Agent**: Makes trading decisions based on comprehensive analysis
- **Risk Management**: Evaluates portfolio risk and validates strategies
- **Portfolio Manager**: Final approval and execution oversight

### LangGraph Framework

Built on LangGraph for:
|
||||
- State management across agent workflows
|
||||
- Tool orchestration for data access
|
||||
- Conditional routing based on agent outputs
|
||||
- Memory persistence for context retention
|
||||
|
||||
### Data Vendor Abstraction
|
||||
|
||||
Flexible data sourcing through configurable vendors:
|
||||
- **yfinance**: Stock prices and technical indicators
|
||||
- **Alpha Vantage**: Fundamental data and news
|
||||
- **Google News**: Alternative news sources
|
||||
- **Local**: Offline backtesting data
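Vendor selection is configuration-driven rather than hard-coded. A sketch of the relevant config fragment (the keys mirror the data-vendor settings documented in the configuration guide):

```python
# Each data category maps to one of the vendors listed above.
data_vendors = {
    "core_stock_apis": "yfinance",        # prices and OHLCV
    "technical_indicators": "yfinance",
    "fundamental_data": "alpha_vantage",
    "news_data": "google",                # swap in Google News for Alpha Vantage
}
print(data_vendors["news_data"])
```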
|
||||
|
||||
## Quick Links
|
||||
|
||||
- [Installation Instructions](QUICKSTART.md#installation)
|
||||
- [API Keys Setup](QUICKSTART.md#required-apis)
|
||||
- [First Analysis](QUICKSTART.md#your-first-analysis)
|
||||
- [Configuration Options](guides/configuration.md)
|
||||
- [GitHub Repository](https://github.com/TauricResearch/TradingAgents)
|
||||
- [Research Paper](https://arxiv.org/abs/2412.20138)
|
||||
|
||||
## Support
|
||||
|
||||
- **Discord**: [Join our community](https://discord.com/invite/hk9PGKShPK)
|
||||
- **GitHub Issues**: [Report bugs or request features](https://github.com/TauricResearch/TradingAgents/issues)
|
||||
- **Twitter**: [@TauricResearch](https://x.com/TauricResearch)
|
||||
|
||||
## License
|
||||
|
||||
TradingAgents is released under the MIT License. See the [LICENSE](../LICENSE) file for details.
|
||||
|
||||
## Disclaimer
|
||||
|
||||
TradingAgents is designed for research and educational purposes. It is not intended as financial, investment, or trading advice. Trading performance may vary based on many factors including model selection, data quality, and market conditions. See [Tauric AI Disclaimer](https://tauric.ai/disclaimer/) for full details.
|
||||
|
|
@ -0,0 +1,391 @@
|
|||
# Agents API Reference
|
||||
|
||||
This document provides API reference for all agent types in the TradingAgents framework.
|
||||
|
||||
## Agent Types
|
||||
|
||||
All agents are located in `tradingagents/agents/`

### Analyst Agents

Analysts conduct specialized analysis on market data.

#### Base Analyst Interface

All analysts follow a common interface pattern:

```python
class BaseAnalyst:
    def __init__(self, llm, tools):
        self.llm = llm
        self.tools = tools

    def analyze(self, ticker: str, date: str) -> str:
        """Perform analysis and return a report."""
        raise NotImplementedError
```

#### Market Analyst

**Location**: `tradingagents/agents/analysts/market_analyst.py`

**Purpose**: Technical analysis using price patterns and indicators

**Tools**:
- `get_stock_data()` - Historical prices
- `get_indicators()` - Technical indicators (MACD, RSI, Bollinger Bands)

**Output**: Technical analysis report with trend identification and signals

**Example**:
```python
report = market_analyst.analyze("NVDA", "2024-05-10")
# Returns: "Technical analysis shows bullish MACD crossover..."
```

#### Fundamentals Analyst

**Location**: `tradingagents/agents/analysts/fundamentals_analyst.py`

**Purpose**: Company financial health and valuation analysis

**Tools**:
- `get_fundamentals()` - Financial ratios and metrics
- `get_balance_sheet()` - Balance sheet data
- `get_income_statement()` - Income statement
- `get_cashflow()` - Cash flow statement

**Output**: Financial health assessment and valuation analysis

**Example**:
```python
report = fundamentals_analyst.analyze("NVDA", "2024-05-10")
# Returns: "Strong balance sheet with P/E ratio of 35..."
```

#### Sentiment Analyst

**Location**: `tradingagents/agents/analysts/sentiment_analyst.py`

**Purpose**: Social media and public sentiment analysis

**Tools**:
- Reddit data via PRAW
- Sentiment scoring algorithms

**Output**: Sentiment score and trending topics

**Example**:
```python
report = sentiment_analyst.analyze("NVDA", "2024-05-10")
# Returns: "Positive social sentiment with score 0.75..."
```

#### News Analyst

**Location**: `tradingagents/agents/analysts/news_analyst.py`

**Purpose**: News and macroeconomic event analysis

**Tools**:
- `get_news()` - Company-specific news
- `get_global_news()` - Market-wide news

**Output**: Event impact assessment

**Example**:
```python
report = news_analyst.analyze("NVDA", "2024-05-10")
# Returns: "Recent product launch expected to boost revenue..."
```

### Researcher Agents

Researchers debate analyst findings to evaluate opportunities and risks.

#### Bull Researcher

**Purpose**: Identify bullish opportunities and positive catalysts

**Input**: Analyst reports

**Output**: Bull case arguments with supporting evidence

#### Bear Researcher

**Purpose**: Identify risks and potential downsides

**Input**: Analyst reports

**Output**: Bear case arguments with risk assessments

#### Research Manager

**Purpose**: Moderate debates and synthesize perspectives

**Input**: Bull/bear arguments from debate rounds

**Output**: Balanced research synthesis
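The debate protocol above can be sketched as a simple loop. The researcher calls are stubbed with canned strings here, since the real agents are LLM-backed:

```python
def run_debate(bull, bear, moderator, rounds=1):
    """Alternate bull/bear arguments, then hand both sides to the moderator."""
    bull_args, bear_args = [], []
    for _ in range(rounds):
        bull_args.append(bull(bear_args))   # bull sees the bear's prior arguments
        bear_args.append(bear(bull_args))   # bear rebuts the bull case so far
    return moderator(bull_args, bear_args)

# Stubbed researchers (hypothetical; the real ones consume analyst reports).
bull = lambda opposing: "Strong fundamentals support upside potential."
bear = lambda opposing: "High valuation creates downside risk."
moderator = lambda b, r: f"Synthesis of {len(b)} bull and {len(r)} bear arguments."

print(run_debate(bull, bear, moderator, rounds=2))
```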

### Trader Agent

**Location**: `tradingagents/agents/trader.py`

**Purpose**: Make final trading decisions based on comprehensive analysis

**Input**:
- Analyst reports
- Research synthesis
- Market conditions

**Output**:
```python
{
    "action": str,              # "BUY", "SELL", or "HOLD"
    "confidence_score": float,  # 0.0 to 1.0
    "reasoning": str,
    "position_size": float
}
```

**Example**:
```python
decision = trader.decide(state)
print(decision["action"])            # "BUY"
print(decision["confidence_score"])  # 0.75
```

### Risk Management Agents

#### Risk Analysts

**Purpose**: Assess portfolio risk (volatility, liquidity, correlation)

**Tools**: Risk metrics, scenario analysis

**Output**: Risk assessment with mitigation recommendations
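As an illustration of the kind of metric the risk analysts work with, here is a minimal annualized-volatility helper (illustrative only; the actual risk toolkit is not shown in this reference):

```python
import statistics

def annualized_volatility(daily_returns, trading_days=252):
    """Annualize the standard deviation of daily returns."""
    return statistics.stdev(daily_returns) * trading_days ** 0.5

print(round(annualized_volatility([0.010, -0.005, 0.020, -0.010, 0.003]), 4))
```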

#### Portfolio Manager

**Location**: `tradingagents/agents/portfolio_manager.py`

**Purpose**: Final approval or rejection of trading proposals

**Input**:
- Trading decision
- Risk assessment

**Output**:
```python
{
    "approved": bool,
    "reasoning": str,
    "modifications": dict  # Suggested changes if not approved
}
```

## Agent Tools

Location: `tradingagents/agents/utils/agent_utils.py`

### Data Access Tools

```python
get_stock_data(ticker: str, start_date: str, end_date: str) -> dict
```
Get historical stock prices (OHLCV data).

```python
get_indicators(ticker: str, indicators: List[str]) -> dict
```
Calculate technical indicators. Available: MACD, RSI, BollingerBands, SMA, EMA.

```python
get_fundamentals(ticker: str) -> dict
```
Get company fundamental metrics (P/E, P/B, ROE, etc.).

```python
get_balance_sheet(ticker: str) -> dict
```
Get balance sheet data.

```python
get_income_statement(ticker: str) -> dict
```
Get income statement data.

```python
get_cashflow(ticker: str) -> dict
```
Get cash flow statement data.

```python
get_news(ticker: str, date: str) -> dict
```
Get company-specific news articles.

```python
get_global_news(date: str) -> dict
```
Get market-wide news and events.

```python
get_insider_sentiment(ticker: str) -> dict
```
Get insider trading sentiment.

```python
get_insider_transactions(ticker: str) -> dict
```
Get insider transaction history.

## Agent State

Location: `tradingagents/agents/utils/agent_states.py`

### AgentState

Main state object passed through the workflow:

```python
@dataclass
class AgentState:
    ticker: str
    date: str
    analyst_reports: Dict[str, str]
    research_synthesis: str
    trading_decision: Dict[str, Any]
    risk_assessment: Dict[str, Any]
    final_decision: Dict[str, Any]
```

### InvestDebateState

State for research debate rounds:

```python
@dataclass
class InvestDebateState:
    bull_arguments: List[str]
    bear_arguments: List[str]
    debate_round: int
    synthesis: str
```

### RiskDebateState

State for risk management discussions:

```python
@dataclass
class RiskDebateState:
    risk_assessments: List[str]
    discussion_round: int
    final_recommendation: str
```
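To show how debate state accumulates across rounds, here is a self-contained sketch that re-declares `InvestDebateState` locally (default factories are added here so it can be constructed empty; they are an assumption, not part of the definition above):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InvestDebateState:
    bull_arguments: List[str] = field(default_factory=list)
    bear_arguments: List[str] = field(default_factory=list)
    debate_round: int = 0
    synthesis: str = ""

# One debate round appends an argument for each side.
state = InvestDebateState()
state.bull_arguments.append("Earnings momentum is accelerating.")
state.bear_arguments.append("Valuation leaves little margin of safety.")
state.debate_round += 1
print(state.debate_round, len(state.bull_arguments), len(state.bear_arguments))
```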

## Memory System

Location: `tradingagents/agents/utils/memory.py`

### FinancialSituationMemory

Vector-based memory for storing and retrieving analysis context:

```python
class FinancialSituationMemory:
    def __init__(self, persist_directory: str = "./memory_cache"):
        """Initialize memory with ChromaDB backend."""

    def add_situation(
        self,
        ticker: str,
        date: str,
        analysis: dict,
        metadata: dict = None
    ):
        """Store an analysis in memory."""

    def search_similar(
        self,
        query: str,
        k: int = 5,
        filter: dict = None
    ) -> List[dict]:
        """Search for similar past analyses."""

    def get_by_ticker(self, ticker: str, limit: int = 10) -> List[dict]:
        """Get all analyses for a specific ticker."""
```

**Example**:
```python
memory = FinancialSituationMemory()

# Store an analysis
memory.add_situation(
    ticker="NVDA",
    date="2024-05-10",
    analysis=final_state,
    metadata={"confidence": 0.75}
)

# Retrieve similar analyses
similar = memory.search_similar(
    query="NVDA technical bullish",
    k=5
)
```

## Creating Custom Agents

### Step 1: Define the Agent Class

```python
class CustomAnalyst:
    def __init__(self, llm, tools: dict):
        self.llm = llm
        self.tools = tools

    def analyze(self, ticker: str, date: str) -> str:
        # Your analysis logic; a single-day window is used here
        data = self.tools["get_stock_data"](ticker, date, date)
        prompt = f"Analyze {ticker} data: {data}"
        response = self.llm.invoke(prompt)
        return response.content
```

### Step 2: Register Tools

```python
from tradingagents.agents.utils.agent_utils import get_stock_data

tools = {
    "get_stock_data": get_stock_data,
    # Add more tools as needed
}

analyst = CustomAnalyst(llm, tools)
```

### Step 3: Integrate into the Graph

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph

# Register the custom analyst
ta = TradingAgentsGraph()
ta.add_analyst("custom", custom_analyst)
```

See the [Adding New Analyst Guide](../guides/adding-new-analyst.md) for complete details.

## See Also

- [Multi-Agent System Architecture](../architecture/multi-agent-system.md)
- [TradingGraph API](trading-graph.md)
- [Data Flows API](dataflows.md)
- [Adding New Analyst Guide](../guides/adding-new-analyst.md)

@@ -0,0 +1,368 @@
# Data Flows API Reference

This document describes the data vendor abstraction layer and the available data sources.

## Overview

TradingAgents uses a vendor-agnostic interface for data access, allowing seamless switching between data providers.

Location: `tradingagents/dataflows/`

## Configuration

### Setting Data Vendors

```python
from tradingagents.dataflows.config import set_config

config = {
    "data_vendors": {
        "core_stock_apis": "yfinance",
        "technical_indicators": "yfinance",
        "fundamental_data": "alpha_vantage",
        "news_data": "alpha_vantage"
    }
}

set_config(config)
```

### Getting the Current Configuration

```python
from tradingagents.dataflows.config import get_config

config = get_config()
print(config["data_vendors"])
```

## Data Vendors

### yfinance

**Location**: `tradingagents/dataflows/yfinance.py`

**Capabilities**:
- Stock prices (OHLCV)
- Technical indicators
- Basic company information

**Setup**: No API key required

**Rate Limits**: None (public data)

**Example**:
```python
from tradingagents.dataflows.yfinance import (
    yfinance_get_stock_data,
    yfinance_get_indicators
)

# Get stock data
data = yfinance_get_stock_data("NVDA", "2024-01-01", "2024-12-31")

# Get indicators
indicators = yfinance_get_indicators("NVDA", ["MACD", "RSI"])
```

### Alpha Vantage

**Location**: `tradingagents/dataflows/alpha_vantage.py`

**Capabilities**:
- Fundamental data
- Company financials
- News articles
- Economic indicators

**Setup**:
```bash
export ALPHA_VANTAGE_API_KEY=your_key_here
```

**Rate Limits**: 60 requests/minute for TradingAgents users

**Example**:
```python
from tradingagents.dataflows.alpha_vantage import (
    alphavantage_get_fundamentals,
    alphavantage_get_news
)

# Get fundamentals
fundamentals = alphavantage_get_fundamentals("NVDA")

# Get news
news = alphavantage_get_news("NVDA", "2024-01-15")
```

### Google News

**Location**: `tradingagents/dataflows/google.py`

**Capabilities**:
- Real-time news articles
- Global news search

**Setup**:
```bash
export GOOGLE_API_KEY=your_key_here  # Optional for basic usage
```

**Example**:
```python
from tradingagents.dataflows.google import google_get_news

news = google_get_news("NVDA", "2024-01-15")
```

### Local Cache

**Location**: `tradingagents/dataflows/local.py`

**Capabilities**:
- Offline backtesting
- Pre-downloaded data access

**Setup**: Place data files in `data_cache_dir`

**Example**:
```python
from tradingagents.dataflows.local import local_get_stock_data

data = local_get_stock_data("NVDA", "2024-01-01", "2024-12-31")
```

## Interface Layer

**Location**: `tradingagents/dataflows/interface.py`

### Unified Data Access

The interface layer provides vendor-agnostic functions:

```python
from tradingagents.dataflows.interface import (
    get_stock_data,
    get_indicators,
    get_fundamentals,
    get_news
)
```

### Automatic Routing

Requests are routed automatically based on the configuration:

```python
# Config says: "core_stock_apis": "yfinance"
data = get_stock_data("NVDA", "2024-01-01", "2024-12-31")
# Automatically calls yfinance_get_stock_data()
```

## Data Schemas

### Stock Data (OHLCV)

```python
{
    "ticker": "NVDA",
    "dates": ["2024-01-01", "2024-01-02", ...],
    "open": [150.0, 151.2, ...],
    "high": [152.5, 153.0, ...],
    "low": [149.8, 150.5, ...],
    "close": [151.0, 152.0, ...],
    "volume": [1000000, 1200000, ...],
    "adj_close": [151.0, 152.0, ...]  # Optional
}
```
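The schema is column-oriented, so downstream computations index into parallel lists. For example, a small helper deriving daily returns from the `close` column (a sketch; the sample values are made up):

```python
def daily_returns(data):
    """Compute simple day-over-day returns from the OHLCV schema above."""
    closes = data["close"]
    return [round(today / prev - 1, 4) for prev, today in zip(closes, closes[1:])]

sample = {"ticker": "NVDA", "close": [151.0, 152.0, 150.5]}
print(daily_returns(sample))  # → [0.0066, -0.0099]
```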

### Technical Indicators

```python
{
    "MACD": {
        "macd": [0.5, 0.6, ...],
        "signal": [0.4, 0.5, ...],
        "histogram": [0.1, 0.1, ...]
    },
    "RSI": {
        "rsi": [65.0, 67.5, ...]
    },
    "BollingerBands": {
        "upper": [155.0, 156.0, ...],
        "middle": [150.0, 151.0, ...],
        "lower": [145.0, 146.0, ...]
    }
}
```

### Fundamental Data

```python
{
    "Symbol": "NVDA",
    "MarketCapitalization": 2800000000000,
    "PERatio": 35.2,
    "PEGRatio": 1.8,
    "BookValue": 25.50,
    "DividendYield": 0.005,
    "EPS": 4.25,
    "ProfitMargin": 0.25,
    "OperatingMarginTTM": 0.30,
    "ReturnOnAssetsTTM": 0.22,
    "ReturnOnEquityTTM": 0.45,
    "RevenueTTM": 60000000000,
    "GrossProfitTTM": 45000000000
}
```

### News Data

```python
{
    "ticker": "NVDA",
    "date": "2024-01-15",
    "articles": [
        {
            "title": "Company Announces Record Earnings",
            "source": "Reuters",
            "url": "https://...",
            "published_at": "2024-01-15T10:30:00Z",
            "sentiment": 0.8,  # -1 to 1
            "summary": "Full article summary...",
            "authors": ["John Doe"],
            "time_published": "20240115T103000"
        },
        ...
    ]
}
```

## Error Handling

### VendorError

Base exception for data vendor errors:

```python
from tradingagents.dataflows.exceptions import VendorError

try:
    data = get_stock_data("INVALID", "2024-01-01", "2024-12-31")
except VendorError as e:
    print(f"Vendor error: {e}")
```

### RateLimitError

Raised when API rate limits are exceeded:

```python
import time

from tradingagents.dataflows.exceptions import RateLimitError

try:
    data = get_fundamentals("NVDA")
except RateLimitError as e:
    print(f"Rate limit hit. Retry after {e.retry_after}s")
    time.sleep(e.retry_after)
```

### DataUnavailableError

Raised when the requested data is not available:

```python
from tradingagents.dataflows.exceptions import DataUnavailableError

try:
    data = get_stock_data("NVDA", "1900-01-01", "1900-12-31")
except DataUnavailableError:
    print("Historical data not available for this date range")
```

## Caching

### Cache Configuration

```python
config = {
    "data_cache_dir": "./dataflows/data_cache",
    "cache_ttl": {
        "stock_data": 3600,     # 1 hour
        "fundamentals": 86400,  # 1 day
        "news": 3600            # 1 hour
    }
}
```

### Cache Functions

```python
from tradingagents.dataflows.cache import (
    get_cached,
    save_cache,
    clear_cache
)

# Get from cache
cached_data = get_cached("nvda_stock_2024")

# Save to cache
save_cache("nvda_stock_2024", data, ttl=3600)

# Clear the cache
clear_cache()
```
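The helpers above behave like a TTL-bounded key/value store. A self-contained sketch of those semantics (the real backend lives in `tradingagents/dataflows/cache.py` and persists to disk; this in-memory version is for illustration only):

```python
import time

_store = {}  # key -> (value, expiry timestamp)

def save_cache(key, value, ttl=3600):
    _store[key] = (value, time.time() + ttl)

def get_cached(key):
    entry = _store.get(key)
    if entry is None:
        return None
    value, expires = entry
    if time.time() > expires:
        del _store[key]  # lazily evict expired entries
        return None
    return value

save_cache("nvda_stock_2024", {"close": [151.0]}, ttl=60)
print(get_cached("nvda_stock_2024"))
```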

## Best Practices

1. **Use Configuration**: Don't hardcode vendors.
2. **Handle Errors**: Implement retry logic for rate limits.
3. **Cache Data**: Avoid redundant API calls.
4. **Validate Inputs**: Check ticker symbols and dates before calling a vendor.
5. **Use Fallbacks**: Keep backup vendors configured.
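Input validation in particular is cheap to do up front. A minimal validator to run before any vendor call (a sketch; the ticker rule is simplified to uppercase US symbols):

```python
import re
from datetime import datetime

def validate_request(ticker: str, start: str, end: str) -> None:
    """Raise ValueError for malformed tickers or date ranges."""
    if not re.fullmatch(r"[A-Z]{1,5}", ticker):
        raise ValueError(f"invalid ticker: {ticker!r}")
    start_d = datetime.strptime(start, "%Y-%m-%d")
    end_d = datetime.strptime(end, "%Y-%m-%d")
    if start_d > end_d:
        raise ValueError("start date is after end date")

validate_request("NVDA", "2024-01-01", "2024-12-31")  # passes silently
```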

## Examples

### Basic Data Retrieval

```python
from tradingagents.dataflows.interface import get_stock_data

data = get_stock_data("NVDA", "2024-01-01", "2024-12-31")
print(f"Close prices: {data['close']}")
```

### Multiple Indicators

```python
from tradingagents.dataflows.interface import get_indicators

indicators = get_indicators("NVDA", ["MACD", "RSI", "BollingerBands"])
print(f"RSI: {indicators['RSI']['rsi'][-1]}")
```

### With Error Handling

```python
import time

from tradingagents.dataflows.interface import get_news
from tradingagents.dataflows.exceptions import VendorError

def get_news_with_retry(ticker, date, max_retries=3):
    for attempt in range(max_retries):
        try:
            return get_news(ticker, date)
        except VendorError:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # Exponential backoff
            else:
                raise
```

## See Also

- [Data Flow Architecture](../architecture/data-flow.md)
- [Configuration Guide](../guides/configuration.md)
- [Adding Data Vendor Guide](../guides/adding-data-vendor.md)

@@ -0,0 +1,457 @@
# TradingGraph API Reference

The `TradingAgentsGraph` class is the main entry point for the TradingAgents framework. It orchestrates all agents, manages state, and coordinates the analysis workflow.

## Class: TradingAgentsGraph

Location: `tradingagents/graph/trading_graph.py`

### Constructor

```python
TradingAgentsGraph(
    selected_analysts: List[str] = ["market", "social", "news", "fundamentals"],
    debug: bool = False,
    config: Dict[str, Any] = None
)
```

#### Parameters

- **selected_analysts** (List[str], optional): List of analyst types to include in the analysis
  - Default: `["market", "social", "news", "fundamentals"]`
  - Available: `"market"`, `"social"`, `"news"`, `"fundamentals"`
  - Example: `["market", "fundamentals"]` for technical and fundamental analysis only

- **debug** (bool, optional): Enable debug mode with verbose logging
  - Default: `False`
  - When `True`: prints detailed execution traces and intermediate states

- **config** (Dict[str, Any], optional): Configuration dictionary
  - Default: `None` (uses `DEFAULT_CONFIG`)
  - See the [Configuration Reference](../guides/configuration.md) for all options

#### Returns

- An instance of `TradingAgentsGraph`

#### Example

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

# Basic initialization
ta = TradingAgentsGraph()

# With custom analysts
ta = TradingAgentsGraph(
    selected_analysts=["market", "fundamentals"],
    debug=True
)

# With a custom configuration
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "anthropic"
config["deep_think_llm"] = "claude-sonnet-4-20250514"

ta = TradingAgentsGraph(config=config)
```

### Methods

#### propagate()

Run the complete trading analysis workflow for a company on a specific date.

```python
propagate(
    company_name: str,
    trade_date: str
) -> Tuple[Dict[str, Any], Dict[str, Any]]
```

##### Parameters

- **company_name** (str): Ticker symbol of the company to analyze
  - Example: `"NVDA"`, `"AAPL"`, `"TSLA"`
  - Must be a valid US stock ticker

- **trade_date** (str): Date for analysis in YYYY-MM-DD format
  - Example: `"2024-05-10"`
  - Must be a valid trading day (not a weekend or holiday)

##### Returns

A tuple of two dictionaries:

1. **Final State** (Dict[str, Any]): Complete agent state after analysis
   - Contains all analyst reports, debate outcomes, and risk assessments
   - Useful for debugging and detailed inspection

2. **Trading Decision** (Dict[str, Any]): The final trading recommendation
   - `action`: `"BUY"`, `"SELL"`, or `"HOLD"`
   - `confidence_score`: Float between 0.0 and 1.0
   - `reasoning`: Detailed explanation of the decision
   - `position_size`: Recommended position size (if applicable)
   - `risk_assessment`: Risk evaluation summary

##### Example

```python
ta = TradingAgentsGraph(debug=True)

# Run the analysis
final_state, decision = ta.propagate("NVDA", "2024-05-10")

# Access the decision
print(f"Action: {decision['action']}")
print(f"Confidence: {decision['confidence_score']:.2%}")
print(f"Reasoning: {decision['reasoning']}")

# Access the detailed state
print(f"Analyst Reports: {final_state['analyst_reports']}")
print(f"Research Synthesis: {final_state['research_synthesis']}")
```

##### Raises

- **ValueError**: Invalid ticker or date format
- **LLMRateLimitError**: LLM API rate limit exceeded
- **DataUnavailableError**: Required data not available for the ticker/date
- **APIError**: Generic API error from the LLM or data provider

## Configuration

### Default Configuration

The default configuration is defined in `tradingagents/default_config.py`:

```python
DEFAULT_CONFIG = {
    # Directories
    "project_dir": "<auto-detected>",
    "results_dir": "./results",
    "data_cache_dir": "./dataflows/data_cache",

    # LLM settings
    "llm_provider": "openai",
    "deep_think_llm": "o4-mini",
    "quick_think_llm": "gpt-4o-mini",
    "backend_url": "https://api.openai.com/v1",

    # Workflow settings
    "max_debate_rounds": 1,
    "max_risk_discuss_rounds": 1,
    "max_recur_limit": 100,

    # Data vendors
    "data_vendors": {
        "core_stock_apis": "yfinance",
        "technical_indicators": "yfinance",
        "fundamental_data": "alpha_vantage",
        "news_data": "alpha_vantage"
    }
}
```

### Customizing the Configuration

```python
import copy

from tradingagents.default_config import DEFAULT_CONFIG

# Deep-copy so mutating nested keys like "data_vendors" does not
# modify DEFAULT_CONFIG itself (a shallow .copy() would share them)
config = copy.deepcopy(DEFAULT_CONFIG)

# Change the LLM provider
config["llm_provider"] = "anthropic"
config["deep_think_llm"] = "claude-sonnet-4-20250514"

# Increase debate rounds
config["max_debate_rounds"] = 2

# Change data vendors
config["data_vendors"]["news_data"] = "google"

# Initialize with the custom config
ta = TradingAgentsGraph(config=config)
```

## Workflow Stages

The `propagate()` method executes the following stages:

### 1. Data Collection

All selected analysts collect relevant data in parallel:

- **Market Analyst**: Stock prices, technical indicators
- **Fundamentals Analyst**: Financial statements, ratios
- **Sentiment Analyst**: Social media sentiment
- **News Analyst**: News articles, events
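The fan-out step can be sketched with stubbed analysts (illustrative only; the real implementation dispatches LangGraph nodes rather than raw threads):

```python
from concurrent.futures import ThreadPoolExecutor

def collect_reports(analysts, ticker, date):
    """Run every analyst concurrently and gather reports by name."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, ticker, date)
                   for name, fn in analysts.items()}
        return {name: future.result() for name, future in futures.items()}

# Stub analysts standing in for the agents listed above.
analysts = {
    "market": lambda t, d: f"technical report for {t} on {d}",
    "news": lambda t, d: f"news report for {t} on {d}",
}
reports = collect_reports(analysts, "NVDA", "2024-05-10")
print(sorted(reports))
```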

### 2. Analyst Reports

Each analyst generates a specialized report:

```python
state.analyst_reports = {
    "market": "Technical analysis shows bullish MACD crossover...",
    "fundamentals": "Strong balance sheet with P/E ratio of 35...",
    "sentiment": "Positive social sentiment with score 0.75...",
    "news": "Recent product launch expected to boost revenue..."
}
```

### 3. Research Debate

Bull and Bear researchers debate the analyst findings:

```text
Round 1
  Bull Researcher: "Strong fundamentals support upside potential..."
  Bear Researcher: "High valuation creates downside risk..."

Round 2 (if configured)
  Bull Researcher: "Growth prospects justify premium valuation..."
  Bear Researcher: "Market volatility increases uncertainty..."

Synthesis
  Research Manager: "Balanced view: bullish bias with risk management..."
```

### 4. Trading Decision

The Trader agent synthesizes all inputs and makes a decision:

```python
decision = {
    "action": "BUY",
    "confidence_score": 0.75,
    "reasoning": "Strong fundamentals and positive momentum outweigh valuation concerns...",
    "position_size": 0.05  # 5% of portfolio
}
```

### 5. Risk Validation

The risk management team evaluates the proposal:

```python
risk_assessment = {
    "approved": True,
    "risk_score": 0.3,  # Low to medium risk
    "recommendations": [
        "Set stop-loss at -5%",
        "Monitor volatility",
        "Review position after earnings"
    ]
}
```

### 6. Final Decision

The Portfolio Manager approves or rejects:

```python
final_decision = {
    "approved": True,
    "action": "BUY",
    "confidence_score": 0.75,
    "execution_details": {
        "position_size": 0.05,
        "stop_loss": -0.05,
        "take_profit": 0.15
    }
}
```
|
||||
|
||||
## State Management

The graph maintains state through the `AgentState` class:

```python
@dataclass
class AgentState:
    ticker: str
    date: str
    analyst_reports: Dict[str, str]
    research_synthesis: str
    trading_decision: Dict[str, Any]
    risk_assessment: Dict[str, Any]
    final_decision: Dict[str, Any]
```

Location: `tradingagents/agents/utils/agent_states.py`

## Memory System

The graph uses `FinancialSituationMemory` for context retention:

```python
from tradingagents.agents.utils.memory import FinancialSituationMemory

memory = FinancialSituationMemory(
    persist_directory="./memory_cache"
)

# Store analysis
memory.add_situation(
    ticker="NVDA",
    date="2024-05-10",
    analysis=state
)

# Retrieve similar past analyses
similar = memory.search_similar(
    query="NVDA technical analysis",
    k=5
)
```
## Error Handling

### Handling Rate Limits

```python
from tradingagents.utils.exceptions import LLMRateLimitError
import time

def run_with_retry(ta, ticker, date, max_retries=3):
    for attempt in range(max_retries):
        try:
            return ta.propagate(ticker, date)
        except LLMRateLimitError as e:
            if attempt < max_retries - 1:
                wait_time = e.retry_after or 60
                print(f"Rate limit hit. Waiting {wait_time}s...")
                time.sleep(wait_time)
            else:
                raise
```

### Handling Missing Data

```python
from tradingagents.utils.exceptions import DataUnavailableError

try:
    state, decision = ta.propagate("INVALID", "2024-05-10")
except DataUnavailableError as e:
    print(f"Data not available: {e}")
    # Fall back to alternative ticker or date
```
## Performance Considerations

### Execution Time

Typical execution times (single ticker):

- **1 debate round**: 30-60 seconds
- **2 debate rounds**: 60-120 seconds
- **3 debate rounds**: 120-180 seconds

Factors affecting speed:

- Number of selected analysts
- Number of debate rounds
- LLM provider and model choice
- Data vendor API latency
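The factors above can be combined into a rough back-of-envelope estimate. This is a hypothetical helper; the per-analyst and per-round constants are illustrative assumptions calibrated to the timing ranges listed above, not measured values from the codebase.

```python
# Hypothetical runtime estimator. All constants are illustrative assumptions
# roughly matching the 30-60s single-round range above.
def estimate_runtime_seconds(num_analysts: int, debate_rounds: int,
                             seconds_per_analyst: float = 5.0,
                             seconds_per_round: float = 20.0) -> float:
    """Rough lower-bound estimate for a single-ticker analysis."""
    base_overhead = 10.0  # trader, risk, and portfolio-manager steps
    return (base_overhead
            + num_analysts * seconds_per_analyst
            + debate_rounds * seconds_per_round)

# e.g. 4 analysts with 1 debate round lands inside the 30-60s band.
```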
### Cost Optimization

Estimated costs per analysis:

| Configuration | LLM Calls | Cost (USD) |
|---------------|-----------|------------|
| Minimal (1 round, 2 analysts) | ~10-15 | $0.05-0.10 |
| Standard (1 round, 4 analysts) | ~20-25 | $0.10-0.20 |
| Deep (2 rounds, 4 analysts) | ~35-45 | $0.20-0.40 |

Cost reduction strategies:

- Use `gpt-4o-mini` instead of `o4-mini` for testing
- Reduce debate rounds
- Select only necessary analysts
- Enable caching
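The strategies above can be combined into a single low-cost configuration. A minimal sketch, assuming the config keys shown elsewhere in this documentation; verify the exact key names against your version before relying on them.

```python
# Sketch: a cost-minimizing configuration combining the strategies above.
# Key names follow the examples in this document and are assumptions.
cheap_config = {
    "quick_think_llm": "gpt-4o-mini",                  # cheaper model
    "max_debate_rounds": 1,                            # fewer rounds
    "selected_analysts": ["market", "fundamentals"],   # only what you need
    "data_cache_dir": "./dataflows/data_cache",        # enable caching
}
```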
### Parallel Execution

Analyze multiple tickers in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

tickers = ["NVDA", "AAPL", "MSFT", "TSLA"]
date = "2024-05-10"

def analyze_ticker(ticker):
    # Each thread gets its own graph instance to avoid shared state
    ta = TradingAgentsGraph()
    return ta.propagate(ticker, date)

with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(analyze_ticker, tickers))
```
## Examples

### Basic Usage

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph

ta = TradingAgentsGraph(debug=True)
state, decision = ta.propagate("NVDA", "2024-05-10")

print(f"Decision: {decision['action']}")
print(f"Confidence: {decision['confidence_score']:.2%}")
```

### Custom Analysts

```python
# Only technical and fundamental analysis
ta = TradingAgentsGraph(
    selected_analysts=["market", "fundamentals"],
    debug=True
)

state, decision = ta.propagate("AAPL", "2024-05-10")
```

### Multiple LLM Providers

```python
from tradingagents.default_config import DEFAULT_CONFIG

# Use different models for deep vs. quick thinking
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "openrouter"
config["deep_think_llm"] = "anthropic/claude-sonnet-4.5"
config["quick_think_llm"] = "openai/gpt-4o-mini"

ta = TradingAgentsGraph(config=config)
state, decision = ta.propagate("TSLA", "2024-05-10")
```

### Batch Analysis

```python
tickers = ["NVDA", "AAPL", "MSFT"]
date = "2024-05-10"

results = {}
ta = TradingAgentsGraph()

for ticker in tickers:
    state, decision = ta.propagate(ticker, date)
    results[ticker] = decision

# Compare decisions
for ticker, decision in results.items():
    print(f"{ticker}: {decision['action']} ({decision['confidence_score']:.2%})")
```

## See Also

- [Multi-Agent System Architecture](../architecture/multi-agent-system.md)
- [Agents API Reference](agents.md)
- [Configuration Guide](../guides/configuration.md)
- [Adding New Analyst](../guides/adding-new-analyst.md)

@ -0,0 +1,408 @@

# Data Flow Architecture

This document describes how data flows through the TradingAgents system, from external data sources to final trading decisions.

## Overview

TradingAgents implements a flexible data abstraction layer that allows seamless switching between data vendors without changing agent code.

## Data Flow Diagram

```
External Sources      Abstraction Layer      Agents                Decision
─────────────────     ──────────────────     ────────              ─────────

yfinance      ─┐
Alpha Vantage ─┼→  Interface Layer  →  Analysts → Researchers →  Trader
Google News   ─┤   (config-driven)        ↓           ↓             ↓
Local Cache   ─┘                       Reports     Debates       Decision
                                          ↓           ↓             ↓
                                   Vector Memory  Synthesis    Risk Check
```
## Data Vendors

### Core Data Vendors

TradingAgents supports multiple data vendors, configurable per data category:

#### yfinance
- **Purpose**: Stock prices, technical indicators
- **Pros**: Free, reliable, comprehensive market data
- **Cons**: Limited fundamental data
- **Rate Limits**: None (public data)
- **Location**: `tradingagents/dataflows/yfinance.py`

#### Alpha Vantage
- **Purpose**: Fundamental data, news, company financials
- **Pros**: Rich fundamental data, partnership with TradingAgents for enhanced limits
- **Cons**: Requires API key
- **Rate Limits**: 60 requests/minute for TradingAgents users (normally 25/day free tier)
- **Location**: `tradingagents/dataflows/alpha_vantage.py`

#### Google News
- **Purpose**: News articles and headlines
- **Pros**: Real-time news, comprehensive coverage
- **Cons**: Requires API key for full access
- **Location**: `tradingagents/dataflows/google.py`

#### Local Cache
- **Purpose**: Offline backtesting, development
- **Pros**: Fast, no API limits, reproducible
- **Cons**: Data must be pre-downloaded
- **Location**: `tradingagents/dataflows/local.py`

### Data Categories

Data vendor configuration is organized by category:

```python
config["data_vendors"] = {
    "core_stock_apis": "yfinance",         # Price data, quotes
    "technical_indicators": "yfinance",    # MACD, RSI, etc.
    "fundamental_data": "alpha_vantage",   # Financials, ratios
    "news_data": "alpha_vantage",          # News and events
}
```
## Interface Layer

### Unified Interface

All agents access data through a unified interface:

```python
from tradingagents.agents.utils.agent_utils import (
    get_stock_data,
    get_indicators,
    get_fundamentals,
    get_news
)
```

### Interface Routing

The interface layer routes requests to the configured vendor:

**Configuration:**
```python
from tradingagents.dataflows.config import set_config

config = {
    "data_vendors": {
        "core_stock_apis": "yfinance"
    }
}
set_config(config)
```

**Usage:**
```python
# Automatically routes to yfinance based on config
data = get_stock_data("NVDA", "2024-01-01", "2024-12-31")
```

**Implementation:**
```python
def get_stock_data(ticker: str, start_date: str, end_date: str):
    vendor = get_vendor_for_category("core_stock_apis")

    if vendor == "yfinance":
        return yfinance_get_stock_data(ticker, start_date, end_date)
    elif vendor == "alpha_vantage":
        return alphavantage_get_stock_data(ticker, start_date, end_date)
    elif vendor == "local":
        return local_get_stock_data(ticker, start_date, end_date)
```

Location: `tradingagents/dataflows/interface.py`

## Data Types

### Price Data

Historical stock prices (OHLCV):

```python
{
    "dates": ["2024-01-01", "2024-01-02", ...],
    "open": [150.0, 151.2, ...],
    "high": [152.5, 153.0, ...],
    "low": [149.8, 150.5, ...],
    "close": [151.0, 152.0, ...],
    "volume": [1000000, 1200000, ...]
}
```

### Technical Indicators

Calculated technical analysis metrics:

```python
{
    "MACD": {
        "macd": [...],
        "signal": [...],
        "histogram": [...]
    },
    "RSI": {
        "rsi": [...]
    },
    "BollingerBands": {
        "upper": [...],
        "middle": [...],
        "lower": [...]
    }
}
```

### Fundamental Data

Company financial metrics:

```python
{
    "MarketCapitalization": 2800000000000,
    "PERatio": 35.2,
    "PEGRatio": 1.8,
    "BookValue": 25.50,
    "DividendYield": 0.005,
    "ProfitMargin": 0.25,
    "OperatingMarginTTM": 0.30,
    "ReturnOnAssetsTTM": 0.22,
    "ReturnOnEquityTTM": 0.45
}
```

### News Data

News articles and headlines:

```python
{
    "articles": [
        {
            "title": "Company Announces Record Earnings",
            "source": "Reuters",
            "published_at": "2024-01-15T10:30:00Z",
            "sentiment": 0.8,  # -1 to 1
            "summary": "..."
        },
        ...
    ]
}
```
## Data Caching

### Cache Strategy

TradingAgents implements multi-level caching:

1. **Memory Cache**: In-process cache for repeated requests within a session
2. **Disk Cache**: Persistent cache for expensive API calls
3. **Vector Store**: Semantic cache for analysis results
### Cache Configuration

```python
config["data_cache_dir"] = "./dataflows/data_cache"
```

### Cache Keys

Cache keys are generated from request parameters:

```python
cache_key = f"{vendor}_{function}_{ticker}_{start_date}_{end_date}"
```
### Cache Invalidation

Caches expire based on data freshness requirements:

- **Price Data**: 1 hour (intraday), 1 day (historical)
- **Fundamental Data**: 1 day
- **News Data**: 1 hour
- **Technical Indicators**: Based on underlying price data

Location: `tradingagents/dataflows/cache.py`
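The TTLs above can be expressed as a simple lookup. A hypothetical helper for illustration; the category names and the real cache module's organization may differ.

```python
# Hypothetical TTL table (seconds) mirroring the freshness rules above.
TTL_SECONDS = {
    "price_intraday": 60 * 60,          # 1 hour
    "price_historical": 24 * 60 * 60,   # 1 day
    "fundamental": 24 * 60 * 60,        # 1 day
    "news": 60 * 60,                    # 1 hour
}

def is_stale(category: str, age_seconds: float) -> bool:
    """True if a cached entry of this category should be refetched.

    Unknown categories default to a TTL of 0, i.e. always stale.
    """
    return age_seconds > TTL_SECONDS.get(category, 0)
```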
## Data Validation

### Input Validation

All data inputs are validated before processing:

```python
import re
from datetime import datetime

def validate_ticker(ticker: str) -> bool:
    """Validate ticker symbol format."""
    return bool(re.match(r'^[A-Z]{1,5}$', ticker))

def validate_date(date_str: str) -> bool:
    """Validate date format (YYYY-MM-DD)."""
    try:
        datetime.strptime(date_str, '%Y-%m-%d')
        return True
    except ValueError:
        return False
```

### Output Validation

Data vendor responses are validated for completeness:

```python
def validate_stock_data(data: dict) -> bool:
    """Ensure stock data has required fields."""
    required = ["dates", "open", "high", "low", "close", "volume"]
    return all(field in data for field in required)
```
## Error Handling

### Vendor Fallback

If a vendor fails, the system can fall back to alternatives:

```python
def get_stock_data_with_fallback(ticker, start_date, end_date):
    vendors = ["yfinance", "alpha_vantage", "local"]

    for vendor in vendors:
        try:
            return get_stock_data(ticker, start_date, end_date, vendor=vendor)
        except VendorError:
            continue

    raise DataUnavailableError(f"No vendor could provide data for {ticker}")
```

### Rate Limit Handling

Automatic retry with exponential backoff for rate limits:

```python
import time

def handle_rate_limit(func, max_retries=3):
    for attempt in range(max_retries):
        try:
            return func()
        except RateLimitError as e:
            wait_time = e.retry_after or (2 ** attempt)
            time.sleep(wait_time)

    raise RateLimitExceeded("Max retries exceeded")
```
## Data Flow Examples

### Market Analyst Workflow

```python
# 1. Market Analyst requests technical data
data = get_stock_data("NVDA", "2024-01-01", "2024-12-31")
indicators = get_indicators("NVDA", ["MACD", "RSI", "BollingerBands"])

# 2. Interface routes to configured vendor (yfinance)
# 3. Data is fetched, cached, and validated
# 4. Analyst processes data and generates report
report = analyze_technical_signals(data, indicators)

# 5. Report is stored in agent state
state.analyst_reports["market"] = report
```

### Fundamentals Analyst Workflow

```python
# 1. Fundamentals Analyst requests financial data
fundamentals = get_fundamentals("NVDA")
balance_sheet = get_balance_sheet("NVDA")
income_statement = get_income_statement("NVDA")

# 2. Interface routes to configured vendor (alpha_vantage)
# 3. Data is fetched from Alpha Vantage API
# 4. Analyst evaluates financial health
report = analyze_financial_health(fundamentals, balance_sheet, income_statement)

# 5. Report is stored in agent state
state.analyst_reports["fundamentals"] = report
```

### News Analyst Workflow

```python
# 1. News Analyst requests news data
company_news = get_news("NVDA", "2024-01-15")
global_news = get_global_news("2024-01-15")

# 2. Interface routes to configured vendor (alpha_vantage or google)
# 3. News articles are fetched and sentiment scored
# 4. Analyst identifies market-moving events
report = analyze_news_impact(company_news, global_news)

# 5. Report is stored in agent state
state.analyst_reports["news"] = report
```
## Performance Optimization

### Batch Requests

Request multiple data points in a single API call:

```python
# Bad: Multiple API calls
data1 = get_stock_data("NVDA", "2024-01-01", "2024-01-02")
data2 = get_stock_data("NVDA", "2024-01-03", "2024-01-04")

# Good: Single API call
data = get_stock_data("NVDA", "2024-01-01", "2024-01-04")
```
### Parallel Requests

Fetch data for multiple tickers in parallel:

```python
import asyncio

async def fetch_multiple_tickers(tickers, start_date, end_date):
    tasks = [get_stock_data_async(ticker, start_date, end_date)
             for ticker in tickers]
    return await asyncio.gather(*tasks)
```
### Data Preprocessing

Preprocess data once and cache results:

```python
def get_preprocessed_indicators(ticker, start_date, end_date):
    cache_key = f"preprocessed_{ticker}_{start_date}_{end_date}"

    if cached := get_from_cache(cache_key):
        return cached

    data = get_stock_data(ticker, start_date, end_date)
    indicators = calculate_all_indicators(data)

    save_to_cache(cache_key, indicators)
    return indicators
```
## Best Practices

1. **Use Configuration**: Always configure vendors through config, not hardcoded
2. **Handle Errors Gracefully**: Implement fallbacks and retries
3. **Cache Aggressively**: Cache expensive API calls with appropriate TTL
4. **Validate Data**: Check data completeness before using
5. **Monitor Usage**: Track API quotas and rate limits
6. **Batch When Possible**: Minimize API calls through batching
7. **Use Async for Parallelism**: Fetch multiple resources concurrently

## References

- [Multi-Agent System](multi-agent-system.md)
- [Data Flows API](../api/dataflows.md)
- [Configuration Guide](../guides/configuration.md)
- [Adding Data Vendor Guide](../guides/adding-data-vendor.md)

@ -0,0 +1,451 @@

# LLM Integration Architecture

This document describes how TradingAgents integrates with different Large Language Model (LLM) providers through a unified abstraction layer.

## Overview

TradingAgents supports multiple LLM providers through a flexible configuration system that allows switching between providers without code changes.

## Supported Providers

### OpenAI
- **Models**: GPT-4o, GPT-4o-mini, o4-mini (default), o1-preview
- **Strengths**: Strong reasoning, reliable, extensive fine-tuning
- **Use Case**: Default choice for production
- **API Key**: `OPENAI_API_KEY`
- **Endpoint**: `https://api.openai.com/v1`

### Anthropic
- **Models**: Claude Sonnet 4, Claude Opus 4
- **Strengths**: Strong reasoning, long context windows, excellent instruction following
- **Use Case**: Alternative to OpenAI, good for complex analysis
- **API Key**: `ANTHROPIC_API_KEY`
- **Endpoint**: `https://api.anthropic.com`

### OpenRouter
- **Models**: Unified access to 100+ models from multiple providers
- **Strengths**: Single API for multiple providers, competitive pricing
- **Use Case**: Flexibility, cost optimization, accessing diverse models
- **API Key**: `OPENROUTER_API_KEY` (plus `OPENAI_API_KEY` for embeddings)
- **Endpoint**: `https://openrouter.ai/api/v1`

### Google Generative AI
- **Models**: Gemini 2.0 Flash, Gemini Pro
- **Strengths**: Fast inference, multimodal capabilities
- **Use Case**: Cost-effective alternative, multimodal analysis
- **API Key**: `GOOGLE_API_KEY`
- **Endpoint**: Built-in (no custom endpoint)

### Ollama
- **Models**: Local models (Llama, Mistral, etc.)
- **Strengths**: No API costs, data privacy, offline operation
- **Use Case**: Development, experimentation, privacy-sensitive analysis
- **API Key**: None (local)
- **Endpoint**: `http://localhost:11434/v1`
## Provider Abstraction

### Configuration-Driven Selection

LLM providers are selected through configuration:

```python
config = {
    "llm_provider": "openai",          # Provider selection
    "deep_think_llm": "o4-mini",       # Model for complex reasoning
    "quick_think_llm": "gpt-4o-mini",  # Model for fast tasks
    "backend_url": "https://api.openai.com/v1"
}
```
### Initialization Logic

The `TradingAgentsGraph` class handles provider initialization:

```python
if config["llm_provider"].lower() in ("openai", "ollama"):
    from langchain_openai import ChatOpenAI

    self.deep_thinking_llm = ChatOpenAI(
        model=config["deep_think_llm"],
        base_url=config["backend_url"]
    )
    self.quick_thinking_llm = ChatOpenAI(
        model=config["quick_think_llm"],
        base_url=config["backend_url"]
    )

elif config["llm_provider"].lower() == "anthropic":
    from langchain_anthropic import ChatAnthropic

    self.deep_thinking_llm = ChatAnthropic(
        model=config["deep_think_llm"],
        base_url=config["backend_url"]
    )
    self.quick_thinking_llm = ChatAnthropic(
        model=config["quick_think_llm"],
        base_url=config["backend_url"]
    )

elif config["llm_provider"].lower() == "openrouter":
    from langchain_openai import ChatOpenAI

    openrouter_key = os.getenv("OPENROUTER_API_KEY")
    if not openrouter_key:
        raise ValueError("OPENROUTER_API_KEY required")

    default_headers = {
        "HTTP-Referer": "https://github.com/TauricResearch/TradingAgents",
        "X-Title": "TradingAgents"
    }

    self.deep_thinking_llm = ChatOpenAI(
        model=config["deep_think_llm"],
        base_url=config["backend_url"],
        api_key=openrouter_key,
        default_headers=default_headers
    )
    self.quick_thinking_llm = ChatOpenAI(
        model=config["quick_think_llm"],
        base_url=config["backend_url"],
        api_key=openrouter_key,
        default_headers=default_headers
    )

elif config["llm_provider"].lower() == "google":
    from langchain_google_genai import ChatGoogleGenerativeAI

    self.deep_thinking_llm = ChatGoogleGenerativeAI(
        model=config["deep_think_llm"]
    )
    self.quick_thinking_llm = ChatGoogleGenerativeAI(
        model=config["quick_think_llm"]
    )
```

Location: `tradingagents/graph/trading_graph.py`
## Model Selection Strategy

### Two-Tier Model Approach

TradingAgents uses two types of LLMs for different tasks:

#### Deep Thinking LLM
- **Purpose**: Complex reasoning, strategic analysis, debate moderation
- **Characteristics**: Larger models, slower, more expensive, higher quality
- **Use Cases**:
  - Researcher debate moderation
  - Trading decision synthesis
  - Risk assessment evaluation
- **Recommended Models**:
  - OpenAI: o4-mini, o1-preview
  - Anthropic: claude-sonnet-4, claude-opus-4
  - OpenRouter: anthropic/claude-sonnet-4.5

#### Quick Thinking LLM
- **Purpose**: Fast analysis, data summarization, routine tasks
- **Characteristics**: Smaller models, faster, cost-effective
- **Use Cases**:
  - Analyst report generation
  - Data interpretation
  - Tool calling
- **Recommended Models**:
  - OpenAI: gpt-4o-mini, gpt-4o
  - Anthropic: claude-sonnet-4
  - OpenRouter: openai/gpt-4o-mini
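The two-tier split above can be summarized as a routing rule. A minimal sketch with illustrative task names; the real graph wires each agent to its LLM directly rather than through a lookup like this.

```python
# Sketch: routing tasks to the two LLM tiers. Task names are assumptions
# for illustration, not identifiers from the codebase.
DEEP_TASKS = {"debate_moderation", "trading_decision", "risk_assessment"}

def select_llm(task: str, deep_llm, quick_llm):
    """Return the deep-thinking LLM for strategic tasks, else the quick one."""
    return deep_llm if task in DEEP_TASKS else quick_llm
```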
### Model Selection Guidelines

**For Production:**
```python
config["deep_think_llm"] = "o1-preview"   # Best reasoning
config["quick_think_llm"] = "gpt-4o-mini" # Cost-effective
```

**For Development/Testing:**
```python
config["deep_think_llm"] = "o4-mini"      # Fast and cheaper
config["quick_think_llm"] = "gpt-4o-mini" # Consistent quality
```

**For Cost Optimization:**
```python
config["llm_provider"] = "openrouter"
config["deep_think_llm"] = "anthropic/claude-sonnet-4.5"
config["quick_think_llm"] = "openai/gpt-4o-mini"
```
## Provider-Specific Configuration

### OpenAI Configuration

```python
config = {
    "llm_provider": "openai",
    "deep_think_llm": "o4-mini",
    "quick_think_llm": "gpt-4o-mini",
    "backend_url": "https://api.openai.com/v1"
}
```

Environment:
```bash
export OPENAI_API_KEY=sk-your_key_here
```

### Anthropic Configuration

```python
config = {
    "llm_provider": "anthropic",
    "deep_think_llm": "claude-sonnet-4-20250514",
    "quick_think_llm": "claude-sonnet-4-20250514",
    "backend_url": "https://api.anthropic.com"
}
```

Environment:
```bash
export ANTHROPIC_API_KEY=sk-ant-your_key_here
```

### OpenRouter Configuration

```python
config = {
    "llm_provider": "openrouter",
    "deep_think_llm": "anthropic/claude-sonnet-4.5",
    "quick_think_llm": "openai/gpt-4o-mini",
    "backend_url": "https://openrouter.ai/api/v1"
}
```

Environment:
```bash
export OPENROUTER_API_KEY=sk-or-v1-your_key_here
export OPENAI_API_KEY=sk-your_key_here  # Required for embeddings
```

**Note**: OpenRouter uses `provider/model-name` format:
- `anthropic/claude-sonnet-4.5`
- `openai/gpt-4o`
- `google/gemini-pro`

### Google Generative AI Configuration

```python
config = {
    "llm_provider": "google",
    "deep_think_llm": "gemini-2.0-flash",
    "quick_think_llm": "gemini-2.0-flash"
}
```

Environment:
```bash
export GOOGLE_API_KEY=your_key_here
```

### Ollama Configuration

```python
config = {
    "llm_provider": "ollama",
    "deep_think_llm": "mistral",
    "quick_think_llm": "mistral",
    "backend_url": "http://localhost:11434/v1"
}
```

Prerequisites:
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull model
ollama pull mistral

# Start Ollama server
ollama serve
```
## Error Handling

### Rate Limit Handling

Unified rate limit error handling across providers:

```python
from tradingagents.utils.exceptions import LLMRateLimitError

try:
    response = llm.invoke(messages)
except LLMRateLimitError as e:
    print(f"Rate limit hit: {e.message}")
    if e.retry_after:
        print(f"Retry after {e.retry_after} seconds")
```

Location: `tradingagents/utils/exceptions.py`
### Provider-Specific Errors

Each provider may raise different errors:

**OpenAI:**
- `RateLimitError` → Retry after specified time
- `InvalidRequestError` → Check model name, parameters
- `AuthenticationError` → Verify API key

**Anthropic:**
- `RateLimitError` → Retry with backoff
- `InvalidRequestError` → Check message format
- `APIError` → Server-side issues

**OpenRouter:**
- Follows OpenAI error format
- Additional headers required for attribution
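Normalizing these heterogeneous exceptions into a single type is what makes the unified `LLMRateLimitError` handling possible. A sketch under stated assumptions: the provider exception class here is a stand-in, where real code would catch the actual SDK exceptions (e.g. the OpenAI and Anthropic `RateLimitError` classes).

```python
# Sketch: normalizing provider-specific errors into one unified exception.
# OpenAIRateLimitError is a stand-in for a real SDK exception class.
class LLMRateLimitError(Exception):
    def __init__(self, message, retry_after=None):
        super().__init__(message)
        self.message = message
        self.retry_after = retry_after

class OpenAIRateLimitError(Exception):  # stand-in for openai.RateLimitError
    retry_after = 30

def invoke_normalized(call):
    """Run an LLM call, translating provider errors to the unified type."""
    try:
        return call()
    except OpenAIRateLimitError as e:
        raise LLMRateLimitError("rate limited", retry_after=e.retry_after) from e
```

Agent code can then catch `LLMRateLimitError` alone, regardless of which provider is configured.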
### Fallback Strategy

Implement provider fallback for resilience:

```python
providers = ["openai", "anthropic", "openrouter"]

for provider in providers:
    try:
        config["llm_provider"] = provider
        ta = TradingAgentsGraph(config=config)
        result = ta.propagate(ticker, date)
        break
    except LLMRateLimitError:
        continue
```
## Cost Optimization

### Model Cost Comparison

**Deep Thinking Tasks:**

| Provider | Model | Cost/1M Tokens (Input/Output) |
|----------|-------|-------------------------------|
| OpenAI | o4-mini | $1.50 / $6.00 |
| OpenAI | o1-preview | $15.00 / $60.00 |
| Anthropic | claude-sonnet-4 | $3.00 / $15.00 |
| OpenRouter | Varies by model | Check OpenRouter pricing |

**Quick Thinking Tasks:**

| Provider | Model | Cost/1M Tokens (Input/Output) |
|----------|-------|-------------------------------|
| OpenAI | gpt-4o-mini | $0.15 / $0.60 |
| OpenAI | gpt-4o | $2.50 / $10.00 |
| Google | gemini-2.0-flash | Free tier available |
| Ollama | Local models | Free (local) |
### Cost Reduction Strategies

1. **Use Smaller Models for Simple Tasks**

   ```python
   config["quick_think_llm"] = "gpt-4o-mini"  # Instead of gpt-4o
   ```

2. **Reduce Debate Rounds**

   ```python
   config["max_debate_rounds"] = 1  # Instead of 2-3
   ```

3. **Use OpenRouter for Competitive Pricing**

   ```python
   config["llm_provider"] = "openrouter"
   ```

4. **Cache LLM Responses**

   ```python
   # Implemented in agent memory system
   memory.store_analysis(ticker, date, result)
   ```

5. **Use Ollama for Development**

   ```python
   config["llm_provider"] = "ollama"  # No API costs
   ```
## Embeddings

### Embedding Provider

TradingAgents uses OpenAI embeddings for vector storage (memory system):

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
```

**Important**: Even when using non-OpenAI LLM providers (Anthropic, Google, etc.), `OPENAI_API_KEY` is still required for embeddings.

### Alternative Embedding Providers

For fully offline operation, consider:

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
```

Note: This requires updating the memory initialization code.
## Performance Considerations

### Latency

**Provider Latency (Approximate):**

- OpenAI: 1-3 seconds per request
- Anthropic: 1-2 seconds per request
- Google: 0.5-1.5 seconds per request
- OpenRouter: Varies by underlying model
- Ollama: 0.5-5 seconds (depends on local hardware)
### Throughput

**Concurrent Requests:**

- OpenAI: Tier-based limits (20-5000 RPM)
- Anthropic: Tier-based limits (50-2000 RPM)
- OpenRouter: Model-specific limits
- Ollama: Limited by local GPU/CPU
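To stay under these per-tier RPM limits when fanning out requests, a semaphore can cap concurrency. A generic sketch; the coroutine factories stand in for whatever async LLM calls your client exposes.

```python
import asyncio

# Sketch: cap the number of in-flight requests with a semaphore so bursts
# stay under the provider's concurrency/RPM limits.
async def limited_gather(coro_factories, max_concurrent=5):
    sem = asyncio.Semaphore(max_concurrent)

    async def run(factory):
        async with sem:          # at most max_concurrent coroutines inside
            return await factory()

    return await asyncio.gather(*(run(f) for f in coro_factories))
```

`asyncio.gather` preserves input order, so results line up with the submitted calls.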
### Caching

LangChain provides built-in caching:

```python
from langchain.cache import SQLiteCache
from langchain.globals import set_llm_cache

set_llm_cache(SQLiteCache(database_path=".langchain.db"))
```
## Best Practices
|
||||
|
||||
1. **Set API Keys as Environment Variables**: Never hardcode keys
|
||||
2. **Use Two-Tier Model Strategy**: Deep/quick thinking separation
|
||||
3. **Implement Error Handling**: Catch rate limits and retry
|
||||
4. **Monitor Costs**: Track token usage and expenses
|
||||
5. **Test with Cheaper Models**: Use o4-mini/gpt-4o-mini for development
|
||||
6. **Cache When Possible**: Avoid redundant API calls
|
||||
7. **Use OpenRouter for Flexibility**: Easy switching between providers
|
||||
8. **Implement Timeouts**: Prevent hanging requests
|
||||
9. **Log API Usage**: Track which models are called
|
||||
10. **Consider Local Models**: Ollama for sensitive data or development
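
Items 3 and 8 can be combined in one small wrapper. A minimal sketch, assuming a caller-supplied function that accepts a `timeout` keyword (the helper name and parameters are illustrative, not part of the TradingAgents API):

```python
import random
import time


def with_retries(fn, max_attempts=4, base_delay=1.0, timeout=30.0):
    """Call fn with a timeout, retrying transient errors with backoff."""
    for attempt in range(max_attempts):
        try:
            return fn(timeout=timeout)
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

In practice you would also catch your provider's rate-limit exception alongside `TimeoutError`.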

## References

- [Multi-Agent System](multi-agent-system.md)
- [Configuration Guide](../guides/configuration.md)
- [Adding LLM Provider Guide](../guides/adding-llm-provider.md)
- [TradingGraph API](../api/trading-graph.md)

# Multi-Agent System Architecture

TradingAgents implements a multi-agent architecture that mirrors real-world trading firms, where specialized teams collaborate to make informed investment decisions.

## System Overview

The framework decomposes complex trading analysis into specialized agent roles, each with specific responsibilities and expertise. Agents collaborate through structured workflows orchestrated by LangGraph.

```
┌─────────────────────────────────────────────────────────────────────┐
│                             DATA LAYER                              │
├─────────────────────────────────────────────────────────────────────┤
│ yfinance │ Alpha Vantage │ FRED (NEW) │ Alpaca │ Multi-Timeframe    │
└─────────────────────────────────────────────────────────────────────┘
                                   │
                                   ▼
┌─────────────────────────────────────────────────────────────────────┐
│                           ANALYSIS LAYER                            │
├─────────────────────────────────────────────────────────────────────┤
│ Market  │ Momentum │ Macro   │ Correlation │ News │ Fundamentals    │
│ Analyst │ Analyst  │ Analyst │ Analyst     │      │                 │
│         │ (NEW)    │ (NEW)   │ (NEW)       │      │                 │
├─────────────────────────────────────────────────────────────────────┤
│ Bull ←── Debate ──→ Bear → Research Manager                         │
├─────────────────────────────────────────────────────────────────────┤
│ Trader → Signal + Confidence Score                                  │
├─────────────────────────────────────────────────────────────────────┤
│ Risk Debate → Position Sizing Manager (NEW)                         │
└─────────────────────────────────────────────────────────────────────┘
```

## Agent Roles

### Analyst Team

The analyst team conducts specialized analysis, each agent focusing on a specific domain:

#### Market Analyst (Technical)
- **Responsibility**: Technical analysis using price patterns and indicators
- **Tools**: MACD, RSI, Bollinger Bands, moving averages
- **Output**: Technical trends, support/resistance levels, momentum signals
- **Location**: `tradingagents/agents/analysts/market_analyst.py`

#### Fundamentals Analyst
- **Responsibility**: Company financial health and valuation analysis
- **Tools**: Balance sheet, income statement, cash flow, financial ratios
- **Output**: Intrinsic value estimates, financial health assessment
- **Location**: `tradingagents/agents/analysts/fundamentals_analyst.py`

#### Sentiment Analyst
- **Responsibility**: Social media and public sentiment analysis
- **Tools**: Reddit data (PRAW), sentiment scoring algorithms
- **Output**: Public sentiment scores, trending topics, investor mood
- **Location**: `tradingagents/agents/analysts/sentiment_analyst.py`

#### News Analyst
- **Responsibility**: Global news and macroeconomic event analysis
- **Tools**: News APIs, event impact models
- **Output**: Event impact assessments, market-moving news identification
- **Location**: `tradingagents/agents/analysts/news_analyst.py`

### Researcher Team

Researchers engage in structured debates to evaluate analyst insights:

#### Bull Researcher
- **Responsibility**: Identify bullish opportunities and positive catalysts
- **Approach**: Seeks upside potential, growth drivers, favorable trends
- **Output**: Bull case arguments with supporting evidence

#### Bear Researcher
- **Responsibility**: Identify risks and potential downsides
- **Approach**: Seeks red flags, overvaluation signals, adverse conditions
- **Output**: Bear case arguments with risk assessments

#### Research Manager
- **Responsibility**: Moderate debates, synthesize perspectives
- **Process**: Coordinates debate rounds, ensures balanced analysis
- **Output**: Balanced research report with bull/bear synthesis

### Trader Agent

The trader makes final trading decisions based on comprehensive analysis:

- **Input**: Analyst reports, researcher debates, market conditions
- **Process**: Weighs evidence, assesses conviction levels
- **Output**: Trading signal (BUY/SELL/HOLD) with confidence score
- **Location**: `tradingagents/agents/trader.py`

### Risk Management Team

Risk agents evaluate portfolio impact and validate strategies:

#### Risk Analysts
- **Responsibility**: Assess volatility, liquidity, correlation risks
- **Tools**: Risk metrics, scenario analysis, stress testing
- **Output**: Risk assessments with mitigation recommendations

#### Portfolio Manager
- **Responsibility**: Final approval/rejection of trading proposals
- **Process**: Reviews risk reports, validates against portfolio constraints
- **Output**: Approved orders or rejection with reasoning
- **Location**: `tradingagents/agents/portfolio_manager.py`

## Agent Workflow

### 1. Data Collection

All analysts access data through the unified data vendor interface:

```python
from tradingagents.agents.utils.agent_utils import (
    get_stock_data,
    get_indicators,
    get_fundamentals,
    get_news,
)
```

### 2. Parallel Analysis

Analysts work in parallel, each producing specialized reports:

```
Market Analyst → Technical Report
Fundamentals   → Financial Report
Sentiment      → Sentiment Report
News Analyst   → Event Report
```

### 3. Research Debate

Researchers debate analyst findings over multiple rounds:

```
Round 1: Bull presents arguments → Bear counters
Round 2: Bear presents risks → Bull defends
...
Final: Research Manager synthesizes
```

Configuration: `config["max_debate_rounds"]` (default: 1)

### 4. Trading Decision

Trader evaluates research synthesis:

```python
decision = {
    "action": "BUY",           # one of "BUY", "SELL", "HOLD"
    "confidence_score": 0.72,  # float in [0.0, 1.0]
    "reasoning": "...",
    "position_size": 0.05,     # float, fraction of portfolio
}
```

### 5. Risk Validation

Risk team reviews the trading proposal:

```
Risk Analysts → Risk Assessment
Portfolio Manager → Approve or Reject
```

Configuration: `config["max_risk_discuss_rounds"]` (default: 1)

## State Management

TradingAgents uses LangGraph for state management across the agent workflow.

### AgentState

The main state object carries information through the graph:

```python
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class AgentState:
    ticker: str
    date: str
    analyst_reports: Dict[str, str]
    research_synthesis: str
    trading_decision: Dict[str, Any]
    risk_assessment: str
    final_decision: Dict[str, Any]
```

Location: `tradingagents/agents/utils/agent_states.py`

### InvestDebateState

Manages researcher debate rounds:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class InvestDebateState:
    bull_arguments: List[str]
    bear_arguments: List[str]
    debate_round: int
    synthesis: str
```

### RiskDebateState

Manages risk team discussions:

```python
@dataclass
class RiskDebateState:
    risk_assessments: List[str]
    discussion_round: int
    final_recommendation: str
```

## Memory System

Agents maintain context through a vector-based memory system:

### FinancialSituationMemory

- **Purpose**: Store and retrieve historical analysis context
- **Backend**: ChromaDB vector store
- **Features**:
  - Semantic search for relevant past analyses
  - Recency, relevancy, and importance scoring (FinMem pattern)
  - Persistent storage across runs
- **Location**: `tradingagents/agents/utils/memory.py`
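
The FinMem-style retrieval score can be sketched as a weighted sum of the three factors. The weights, half-life, and normalization below are illustrative assumptions, not the exact values used in `memory.py`:

```python
import math
import time


def memory_score(similarity, importance, stored_at,
                 now=None, half_life_days=30.0,
                 w_rel=0.5, w_imp=0.3, w_rec=0.2):
    """Combine relevancy, importance, and recency into one retrieval score.

    similarity and importance are assumed to be in [0, 1];
    recency decays exponentially with the age of the memory.
    """
    now = time.time() if now is None else now
    age_days = max(0.0, (now - stored_at) / 86400.0)
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return w_rel * similarity + w_imp * importance + w_rec * recency
```

A 60-day-old memory with the same similarity and importance scores lower than a fresh one, so recent context wins ties during retrieval.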

## Tool Integration

Agents access data through a unified tool interface:

### Data Tools

Available to all analyst agents:

- `get_stock_data(ticker, start_date, end_date)` - Historical prices
- `get_indicators(ticker, indicators_list)` - Technical indicators
- `get_fundamentals(ticker)` - Financial metrics
- `get_balance_sheet(ticker)` - Balance sheet data
- `get_cashflow(ticker)` - Cash flow statements
- `get_income_statement(ticker)` - Income statements
- `get_news(ticker, date)` - Company-specific news
- `get_global_news(date)` - Market-wide news

Location: `tradingagents/agents/utils/agent_utils.py`

### Tool Nodes

LangGraph ToolNodes wrap data access functions:

```python
from langgraph.prebuilt import ToolNode

analyst_tools = ToolNode([
    get_stock_data,
    get_indicators,
    get_fundamentals,
])
```

## Conditional Routing

The graph uses conditional logic to route between agents:

### Debate Continuation

```python
def should_continue_debate(state: InvestDebateState) -> str:
    if state.debate_round >= config["max_debate_rounds"]:
        return "finalize"
    return "continue_debate"
```

### Risk Approval

```python
def check_risk_approval(state: AgentState) -> str:
    if state.risk_assessment["approved"]:
        return "execute"
    return "reject"
```

Location: `tradingagents/graph/conditional_logic.py`

## Extensibility

The multi-agent architecture is designed for extensibility:

### Adding New Analysts

1. Create analyst class inheriting from base analyst
2. Implement `analyze()` method
3. Register in analyst list
4. Agent automatically joins parallel analysis

See [Adding New Analyst Guide](../guides/adding-new-analyst.md)
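
A minimal sketch of steps 1-3; the `BaseAnalyst` interface and registration dict shown here are illustrative, not the project's actual base class (see the guide above for that):

```python
class BaseAnalyst:
    """Illustrative base interface for analyst agents."""
    name = "base"

    def analyze(self, ticker: str, date: str) -> str:
        raise NotImplementedError


class VolatilityAnalyst(BaseAnalyst):
    """Hypothetical new analyst: reports realized volatility."""
    name = "volatility"

    def analyze(self, ticker: str, date: str) -> str:
        # A real implementation would fetch prices via the data tools.
        return f"Volatility report for {ticker} on {date}"


# Step 3: register the analyst so it joins the parallel analysis phase.
ANALYSTS = {a.name: a for a in [VolatilityAnalyst()]}
```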

### Adding Custom Workflows

1. Define new state classes
2. Create agent nodes
3. Add conditional routing logic
4. Integrate into main graph

## Performance Considerations

### Parallel Execution

Analysts run in parallel to minimize latency:

```python
# Analysts execute simultaneously
analyst_nodes = {
    "market": market_analyst,
    "fundamentals": fundamentals_analyst,
    "sentiment": sentiment_analyst,
    "news": news_analyst,
}
```

### Debate Rounds

More debate rounds increase analysis depth but also API costs:

- 1 round: Fast, lower cost, adequate for most cases
- 2-3 rounds: Deeper analysis, higher confidence
- 4+ rounds: Diminishing returns, significantly higher cost
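
Cost grows roughly linearly with the number of researcher calls per round. A back-of-envelope sketch (calls per round, token counts, and prices are illustrative assumptions, not measured values):

```python
def debate_cost(rounds, calls_per_round=2, tokens_per_call=3000,
                price_per_1k_tokens=0.01):
    """Rough LLM cost of the research debate phase, in dollars."""
    calls = rounds * calls_per_round + 1  # +1 for the manager's synthesis
    return calls * tokens_per_call / 1000 * price_per_1k_tokens
```

Under these assumptions one round costs about $0.09 and each extra round adds two more researcher calls, which is why the returns diminish quickly.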

### Memory Optimization

Vector store queries are batched and cached:

```python
memory = FinancialSituationMemory(
    persist_directory="./memory_cache"
)
```

## Best Practices

1. **Select Relevant Analysts**: Only enable analysts needed for your strategy
2. **Tune Debate Rounds**: Start with 1 round, increase only if needed
3. **Monitor API Usage**: Track LLM API calls and costs
4. **Use Memory Wisely**: Leverage past analyses for similar contexts
5. **Test Incrementally**: Validate each agent's output before full integration

## References

- [Data Flow Architecture](data-flow.md)
- [LLM Integration](llm-integration.md)
- [TradingGraph API](../api/trading-graph.md)
- [Agent APIs](../api/agents.md)

# Contributing to TradingAgents

Thank you for your interest in contributing to TradingAgents! This guide will help you get started.

## Code of Conduct

By participating in this project, you agree to maintain a respectful and collaborative environment.

## How to Contribute

### Reporting Bugs

1. **Check existing issues**: Search [GitHub Issues](https://github.com/TauricResearch/TradingAgents/issues) first
2. **Create detailed report**: Include:
   - Clear description of the bug
   - Steps to reproduce
   - Expected vs actual behavior
   - Environment details (Python version, OS, etc.)
   - Relevant code snippets or logs
   - Screenshots if applicable

Example bug report:

````markdown
**Description**: Market analyst fails when ticker has no data

**Steps to Reproduce**:
1. Initialize TradingAgentsGraph
2. Call propagate("INVALID", "2024-01-01")
3. Error occurs in market analyst

**Expected**: Graceful error handling
**Actual**: Unhandled exception

**Environment**:
- Python 3.13
- macOS 14.0
- TradingAgents v0.1.0

**Error**:
```python
KeyError: 'close'
```
````

### Requesting Features

1. **Check existing requests**: Search issues for similar requests
2. **Create feature request**: Include:
   - Clear use case
   - Proposed solution
   - Alternative approaches considered
   - Impact on existing functionality

Example feature request:

```markdown
**Feature**: Add momentum analyst for multi-timeframe analysis

**Use Case**: Traders need to analyze momentum across daily, weekly, and monthly timeframes

**Proposed Solution**: Create MomentumAnalyst that:
- Calculates ROC, ADX across timeframes
- Identifies trend strength
- Generates momentum-based signals

**Alternatives**: Could extend existing MarketAnalyst

**Impact**: Adds new optional analyst, no breaking changes
```

### Contributing Code

#### 1. Fork and Clone

```bash
# Fork repository on GitHub
# Then clone your fork
git clone https://github.com/YOUR_USERNAME/TradingAgents.git
cd TradingAgents
```

#### 2. Set Up Development Environment

Follow the [Development Setup Guide](setup.md):

```bash
# Create virtual environment
conda create -n tradingagents python=3.13
conda activate tradingagents

# Install dependencies
pip install -r requirements.txt
pip install -e .

# Install development tools
pip install pytest black isort flake8 mypy pre-commit
pre-commit install
```
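
If the repository does not yet ship a `.pre-commit-config.yaml`, a minimal one wiring up the tools above might look like this (hook revisions are illustrative; pin them to the versions your team actually uses):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2
    hooks:
      - id: isort
        args: ["--profile", "black"]
  - repo: https://github.com/PyCQA/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
```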

#### 3. Create Feature Branch

```bash
git checkout -b feature/your-feature-name
```

Branch naming conventions:
- `feature/` - New features
- `fix/` - Bug fixes
- `docs/` - Documentation updates
- `test/` - Test additions/improvements
- `refactor/` - Code refactoring

#### 4. Make Changes

Follow coding standards:
- **PEP 8**: Python style guide
- **Type Hints**: Add type annotations
- **Docstrings**: Google-style docstrings
- **Tests**: Write tests for new code
- **Documentation**: Update relevant docs

Example with type hints and docstrings:

```python
from typing import Any, Dict, List


def analyze_momentum(
    ticker: str,
    date: str,
    timeframes: List[str]
) -> Dict[str, Any]:
    """
    Analyze momentum across multiple timeframes.

    Args:
        ticker: Stock ticker symbol (e.g., "NVDA")
        date: Analysis date in YYYY-MM-DD format
        timeframes: List of timeframes ("daily", "weekly", "monthly")

    Returns:
        Dictionary containing momentum analysis:
        - trend_strength: float between 0.0 and 1.0
        - direction: "bullish" or "bearish"
        - signals: list of identified signals

    Raises:
        ValueError: If ticker or date format is invalid
        DataUnavailableError: If data cannot be retrieved

    Example:
        >>> result = analyze_momentum("NVDA", "2024-05-10", ["daily", "weekly"])
        >>> print(result["trend_strength"])
        0.75
    """
    # Implementation
    pass
```

#### 5. Write Tests

Create tests following TDD approach:

```python
# tests/unit/test_momentum_analyst.py

from unittest.mock import Mock

from tradingagents.agents.analysts.momentum_analyst import MomentumAnalyst


def test_momentum_analyst_initialization():
    """Test MomentumAnalyst can be initialized."""
    llm = Mock()
    tools = []

    analyst = MomentumAnalyst(llm, tools)

    assert analyst.name == "momentum"
    assert analyst.llm == llm


def test_momentum_analyst_analyze():
    """Test analyst generates momentum analysis."""
    # Arrange
    llm = Mock()
    llm.invoke.return_value = Mock(
        content="Momentum analysis: Strong uptrend..."
    )
    tools = [Mock(name="get_stock_data")]

    analyst = MomentumAnalyst(llm, tools)

    # Act
    report = analyst.analyze("NVDA", "2024-05-10")

    # Assert
    assert "momentum" in report.lower()
    assert llm.invoke.called
```

Run tests:
```bash
pytest tests/ -v
pytest tests/ --cov=tradingagents --cov-report=term-missing
```

#### 6. Update Documentation

Update relevant documentation:
- **API docs**: Add new classes/functions to API reference
- **Guides**: Create/update guides for new features
- **README**: Update if adding major features
- **Docstrings**: Ensure all public APIs have docstrings

#### 7. Format and Lint

```bash
# Format code
black tradingagents/ tests/

# Sort imports
isort tradingagents/ tests/

# Check linting
flake8 tradingagents/ tests/

# Type checking
mypy tradingagents/
```

#### 8. Commit Changes

Follow conventional commits format:

```bash
git add .
git commit -m "feat(analysts): add momentum analyst for multi-timeframe analysis"
```

Commit message format:
```
<type>(<scope>): <subject>

<body>

<footer>
```

Types:
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation changes
- `test`: Test changes
- `refactor`: Code refactoring
- `style`: Code style changes (formatting)
- `perf`: Performance improvements
- `chore`: Maintenance tasks

Examples:
```bash
feat(agents): add momentum analyst
fix(dataflows): handle missing Alpha Vantage data
docs(guides): add configuration examples
test(analysts): add tests for news analyst
refactor(graph): simplify state management
```
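
The subject-line format can be checked mechanically, for instance in a commit-msg hook. A hedged sketch whose accepted types mirror the list above:

```python
import re

# Matches "<type>(<scope>): <subject>" with an optional scope.
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|test|refactor|style|perf|chore)"
    r"(\([a-z0-9-]+\))?: .+"
)


def is_conventional(subject: str) -> bool:
    """Return True if a commit subject follows the format above."""
    return bool(COMMIT_RE.match(subject))
```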

#### 9. Push and Create Pull Request

```bash
git push origin feature/your-feature-name
```

Create pull request on GitHub with:
- Clear title and description
- Reference related issues
- List changes made
- Add screenshots if UI changes
- Mention breaking changes

Pull request template:
```markdown
## Description
Brief description of changes

## Related Issues
Fixes #123

## Changes Made
- Added MomentumAnalyst class
- Integrated multi-timeframe data access
- Added comprehensive tests
- Updated documentation

## Testing
- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] Manual testing completed

## Documentation
- [ ] API docs updated
- [ ] Guide created/updated
- [ ] Docstrings added

## Checklist
- [ ] Code follows style guidelines
- [ ] Tests written and passing
- [ ] Documentation updated
- [ ] No breaking changes (or documented)
```

## Development Guidelines

### Code Style

- **PEP 8**: Follow Python style guide
- **Line Length**: Maximum 100 characters (configure black's `line-length` accordingly; its default is 88)
- **Imports**: Sorted with isort
- **Type Hints**: Add to all public functions
- **Docstrings**: Google-style for all public APIs

### Testing

- **Coverage**: Aim for 80%+ overall
- **Test First**: Write tests before implementation (TDD)
- **Test Tiers**: Place tests in correct directories
  - `tests/unit/` - Fast, isolated tests
  - `tests/integration/` - Component interaction tests
  - `tests/regression/smoke/` - Critical path tests
- **Naming**: `test_<function>_<scenario>_<expected>`
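
The naming convention in practice, using a hypothetical helper to keep the example self-contained:

```python
def normalize_ticker(ticker: str) -> str:
    """Hypothetical helper: strip and uppercase a ticker symbol."""
    return ticker.strip().upper()


# test_<function>_<scenario>_<expected>
def test_normalize_ticker_lowercase_input_returns_uppercase():
    assert normalize_ticker(" nvda ") == "NVDA"


test_normalize_ticker_lowercase_input_returns_uppercase()
```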

### Documentation

- **API Docs**: Update when adding public APIs
- **Guides**: Create for significant features
- **Inline Comments**: Explain complex logic
- **Examples**: Provide working code examples

### Git Workflow

1. Create feature branch from `main`
2. Make focused, logical commits
3. Keep commits small and atomic
4. Write clear commit messages
5. Rebase on main before PR
6. Squash commits if requested

## Review Process

### What Reviewers Look For

1. **Code Quality**
   - Follows style guidelines
   - Proper error handling
   - Clear variable/function names
   - No unnecessary complexity

2. **Tests**
   - Comprehensive coverage
   - Tests are clear and maintainable
   - Edge cases covered

3. **Documentation**
   - All public APIs documented
   - Guides updated if needed
   - Examples provided

4. **Compatibility**
   - No breaking changes (or properly documented)
   - Works with existing features
   - Backwards compatible if possible

### Addressing Feedback

- Respond to all comments
- Make requested changes
- Ask questions if unclear
- Push updates to same branch
- Mark conversations as resolved

## Release Process

Maintainers handle releases:

1. Update version in `setup.py`
2. Update CHANGELOG.md
3. Create release tag
4. Publish to PyPI
5. Create GitHub release

## Community

- **Discord**: [Join our community](https://discord.com/invite/hk9PGKShPK)
- **GitHub Discussions**: Ask questions, share ideas
- **Twitter**: [@TauricResearch](https://x.com/TauricResearch)

## Recognition

Contributors are recognized in:
- CONTRIBUTORS.md file
- Release notes
- Project documentation

## Questions?

- Check [Development Setup](setup.md)
- Read [Architecture Docs](../architecture/multi-agent-system.md)
- Ask on Discord or GitHub Discussions

Thank you for contributing to TradingAgents!

# Development Environment Setup

Complete guide for setting up a TradingAgents development environment.

## Prerequisites

- Python >= 3.10 (Python 3.13 recommended)
- Git
- Conda or virtualenv
- Text editor or IDE (VS Code, PyCharm recommended)

## Step 1: Clone Repository

```bash
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
```

## Step 2: Create Virtual Environment

### Using Conda (Recommended)

```bash
conda create -n tradingagents python=3.13
conda activate tradingagents
```

### Using venv

```bash
python -m venv venv
source venv/bin/activate   # macOS/Linux
# or
venv\Scripts\activate      # Windows
```

## Step 3: Install Dependencies

### Production Dependencies

```bash
pip install -r requirements.txt
```

### Development Dependencies

```bash
# Install package in editable mode
pip install -e .

# Install testing dependencies
pip install pytest pytest-cov pytest-xdist pytest-mock

# Install linting/formatting tools
pip install black isort flake8 mypy

# Install pre-commit hooks
pip install pre-commit
pre-commit install
```

## Step 4: Configure Environment Variables

Create `.env` file:

```bash
cp .env.example .env
```

Edit `.env` with your API keys:

```env
# LLM Provider (choose one or more)
OPENAI_API_KEY=sk-your_key_here
ANTHROPIC_API_KEY=sk-ant-your_key_here
OPENROUTER_API_KEY=sk-or-v1-your_key_here
GOOGLE_API_KEY=your_key_here

# Data Vendor
ALPHA_VANTAGE_API_KEY=your_key_here

# Application Settings
TRADINGAGENTS_RESULTS_DIR=./results
```
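
Ad-hoc scripts can read these values the same way the application does. A minimal sketch; whether your entry point auto-loads `.env` via python-dotenv depends on how you invoke TradingAgents, so loading it explicitly is a safe assumption:

```python
import os

# Optional: load .env explicitly if your entry point does not.
try:
    from dotenv import load_dotenv  # pip install python-dotenv
    load_dotenv()
except ImportError:
    pass  # fall back to whatever is already in the environment

results_dir = os.getenv("TRADINGAGENTS_RESULTS_DIR", "./results")
missing = [k for k in ("OPENAI_API_KEY", "ALPHA_VANTAGE_API_KEY")
           if not os.getenv(k)]
if missing:
    print(f"Warning: missing keys: {', '.join(missing)}")
```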

## Step 5: Verify Installation

Run basic tests:

```bash
# Run smoke tests
pytest tests/regression/smoke/ -v

# Run unit tests
pytest tests/unit/ -v

# Quick integration test
python -c "from tradingagents.graph.trading_graph import TradingAgentsGraph; print('Import successful')"
```

## Development Tools

### Code Formatting

```bash
# Format with black
black tradingagents/ tests/

# Sort imports with isort
isort tradingagents/ tests/
```

### Linting

```bash
# Check style with flake8
flake8 tradingagents/ tests/

# Type checking with mypy
mypy tradingagents/
```

### Testing

```bash
# Run all tests
pytest tests/

# Run with coverage
pytest tests/ --cov=tradingagents --cov-report=html

# Run specific test file
pytest tests/unit/test_analysts.py -v
```

## IDE Configuration

### VS Code

Create `.vscode/settings.json`:

```json
{
  "python.defaultInterpreterPath": "${workspaceFolder}/venv/bin/python",
  "python.linting.enabled": true,
  "python.linting.flake8Enabled": true,
  "python.linting.mypyEnabled": true,
  "python.formatting.provider": "black",
  "python.sortImports.args": ["--profile", "black"],
  "editor.formatOnSave": true,
  "editor.codeActionsOnSave": {
    "source.organizeImports": true
  },
  "[python]": {
    "editor.defaultFormatter": "ms-python.black-formatter"
  }
}
```

Create `.vscode/launch.json` for debugging:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Current File",
      "type": "python",
      "request": "launch",
      "program": "${file}",
      "console": "integratedTerminal",
      "justMyCode": true,
      "envFile": "${workspaceFolder}/.env"
    },
    {
      "name": "Python: Pytest",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "args": [
        "tests/",
        "-v"
      ],
      "console": "integratedTerminal",
      "justMyCode": false
    }
  ]
}
```

### PyCharm

1. Open project in PyCharm
2. Configure interpreter:
   - File → Settings → Project → Python Interpreter
   - Select the virtual environment you created
3. Enable pytest:
   - File → Settings → Tools → Python Integrated Tools
   - Set "Default test runner" to pytest
4. Configure black:
   - File → Settings → Tools → External Tools
   - Add black as external tool

## Development Workflow

### 1. Create Feature Branch

```bash
git checkout -b feature/my-new-feature
```

### 2. Make Changes

Edit code following project conventions:
- Follow PEP 8 style guide
- Add docstrings to functions
- Write tests for new code
- Update documentation

### 3. Run Tests

```bash
# Run all tests
pytest tests/

# Run with coverage
pytest tests/ --cov=tradingagents --cov-report=term-missing
```

### 4. Format Code

```bash
# Format code
black tradingagents/ tests/

# Sort imports
isort tradingagents/ tests/

# Check linting
flake8 tradingagents/ tests/
```

### 5. Commit Changes

```bash
git add .
git commit -m "feat: Add new feature description"
```

Commit message format:
- `feat:` New feature
- `fix:` Bug fix
- `docs:` Documentation changes
- `test:` Test changes
- `refactor:` Code refactoring

### 6. Push and Create PR

```bash
git push origin feature/my-new-feature
```

Then create a pull request on GitHub.

## Project Structure

```
TradingAgents/
├── tradingagents/          # Main package
│   ├── agents/             # Agent implementations
│   │   ├── analysts/       # Analyst agents
│   │   ├── utils/          # Agent utilities
│   │   └── ...
│   ├── dataflows/          # Data vendor integrations
│   ├── graph/              # LangGraph workflow
│   ├── utils/              # Shared utilities
│   └── default_config.py   # Default configuration
├── tests/                  # Test suite
│   ├── unit/               # Unit tests
│   ├── integration/        # Integration tests
│   └── regression/         # Regression tests
├── cli/                    # CLI interface
├── docs/                   # Documentation
├── examples/               # Example scripts
├── requirements.txt        # Dependencies
└── setup.py                # Package setup
```
|
||||
|
||||
## Debugging

### Using Python Debugger

```python
# Add a breakpoint in code
import pdb; pdb.set_trace()

# Or use the built-in breakpoint()
breakpoint()
```

### Using pytest with debugger

```bash
# Drop into the debugger on failure
pytest tests/ --pdb

# Stop after the first failure and drop into the debugger
pytest tests/ --pdb -x
```

### Logging

Enable debug logging:

```python
import logging
logging.basicConfig(level=logging.DEBUG)
```

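Root-level DEBUG also floods the output with messages from HTTP client libraries. A narrower sketch, assuming modules create their loggers via `logging.getLogger(__name__)`:

```python
import logging

# Keep third-party libraries at WARNING, but show all
# tradingagents.* messages in full detail
logging.basicConfig(level=logging.WARNING)
logging.getLogger("tradingagents").setLevel(logging.DEBUG)
```
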
## Common Issues

### Import Errors

**Issue**: `ModuleNotFoundError: No module named 'tradingagents'`

**Solution**: Install the package in editable mode
```bash
pip install -e .
```

### API Key Errors

**Issue**: `ValueError: OPENAI_API_KEY not found`

**Solution**: Check that the `.env` file exists and is loaded
```bash
# Verify the environment variable
echo $OPENAI_API_KEY
```

```python
# Or in Python
import os
print(os.getenv("OPENAI_API_KEY"))
```

### Test Failures

**Issue**: Tests fail with mocked data

**Solution**: Check that the mock data format matches the expected schema

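One way to catch schema drift early is to assert the mock's shape in the test itself. A sketch; the key set here is an assumption about the standard stock-data format, not a documented contract:

```python
from unittest.mock import Mock

# Assumed key set of the standard stock-data format
EXPECTED_KEYS = {"ticker", "dates", "open", "high", "low", "close", "volume"}

mock_get_stock_data = Mock(return_value={
    "ticker": "NVDA",
    "dates": ["2024-01-02"],
    "open": [495.2], "high": [499.0], "low": [492.1],
    "close": [497.5], "volume": [41_000_000],
})

data = mock_get_stock_data("NVDA", "2024-01-01", "2024-01-10")
missing = EXPECTED_KEYS - data.keys()
assert not missing, f"mock data missing keys: {missing}"
```
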
## Best Practices

1. **Use a Virtual Environment**: Always work in a virtual environment
2. **Run Tests Frequently**: Run tests before committing
3. **Format Code**: Use black and isort consistently
4. **Write Tests**: Add tests for new features
5. **Update Documentation**: Keep docs in sync with code
6. **Small Commits**: Make focused, logical commits
7. **Branch Strategy**: Create feature branches for new work
8. **Code Review**: Get code reviewed before merging

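Setting up the virtual environment from item 1 typically looks like this (the `.venv` directory name is a common convention, not a project requirement):

```shell
# Create and activate an isolated environment
python3 -m venv .venv
. .venv/bin/activate
# Then install the project into it:
#   pip install -e . && pip install -r requirements.txt
```
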
## Resources

- [Contributing Guide](contributing.md)
- [Testing Guide](../testing/README.md)
- [Architecture Documentation](../architecture/multi-agent-system.md)
- [GitHub Repository](https://github.com/TauricResearch/TradingAgents)

## Getting Help

- **Discord**: [Join our community](https://discord.com/invite/hk9PGKShPK)
- **GitHub Issues**: [Report issues](https://github.com/TauricResearch/TradingAgents/issues)
- **Documentation**: [Read the docs](../README.md)

# Guide: Adding a New Data Vendor

This guide shows you how to add support for a new data vendor to TradingAgents.

## Overview

Adding a new data vendor involves:
1. Creating the vendor implementation
2. Adding it to the interface router
3. Configuring vendor selection
4. Testing the integration
5. Updating documentation

## Step 1: Create Vendor Implementation

Create a new file in `tradingagents/dataflows/`:

```python
# tradingagents/dataflows/new_vendor.py

import os
from typing import Any, Dict

import requests


def newvendor_get_stock_data(
    ticker: str,
    start_date: str,
    end_date: str
) -> Dict[str, Any]:
    """
    Get historical stock data from the NewVendor API.

    Args:
        ticker: Stock ticker symbol
        start_date: Start date (YYYY-MM-DD)
        end_date: End date (YYYY-MM-DD)

    Returns:
        Dictionary with stock data
    """
    api_key = os.getenv("NEWVENDOR_API_KEY")
    if not api_key:
        raise ValueError("NEWVENDOR_API_KEY environment variable required")

    url = f"https://api.newvendor.com/stocks/{ticker}"
    params = {
        "start": start_date,
        "end": end_date,
        "apikey": api_key
    }

    response = requests.get(url, params=params)
    response.raise_for_status()

    data = response.json()

    # Transform to the standard format
    return {
        "ticker": ticker,
        "dates": data["timestamps"],
        "open": data["open_prices"],
        "high": data["high_prices"],
        "low": data["low_prices"],
        "close": data["close_prices"],
        "volume": data["volumes"]
    }
```

## Step 2: Add to Interface Router

Modify `tradingagents/dataflows/interface.py`:

```python
from tradingagents.dataflows.new_vendor import (
    newvendor_get_stock_data,
    newvendor_get_fundamentals
)


def get_stock_data(ticker: str, start_date: str, end_date: str):
    """Get stock data with vendor routing."""
    vendor = get_vendor_for_category("core_stock_apis")

    if vendor == "yfinance":
        return yfinance_get_stock_data(ticker, start_date, end_date)
    elif vendor == "alpha_vantage":
        return alphavantage_get_stock_data(ticker, start_date, end_date)
    elif vendor == "newvendor":  # Add the new vendor
        return newvendor_get_stock_data(ticker, start_date, end_date)
    elif vendor == "local":
        return local_get_stock_data(ticker, start_date, end_date)
    else:
        raise ValueError(f"Unknown vendor: {vendor}")
```

## Step 3: Configure Vendor Selection

Update the configuration to select the new vendor:

```python
# In usage code
config = DEFAULT_CONFIG.copy()
config["data_vendors"]["core_stock_apis"] = "newvendor"
```

## Step 4: Add Error Handling

Implement vendor-specific error handling:

```python
import requests

from tradingagents.dataflows.exceptions import (
    VendorError,
    RateLimitError,
    DataUnavailableError
)


def newvendor_get_stock_data(ticker, start_date, end_date):
    try:
        # API call (url and params built as in Step 1)
        response = requests.get(url, params=params)
        response.raise_for_status()
        return response.json()

    except requests.exceptions.HTTPError as e:
        if e.response.status_code == 429:
            # Rate limit
            retry_after = int(e.response.headers.get("Retry-After", 60))
            raise RateLimitError(
                vendor="newvendor",
                message="Rate limit exceeded",
                retry_after=retry_after
            )
        elif e.response.status_code == 404:
            # Data not available
            raise DataUnavailableError(
                f"Data not available for {ticker}"
            )
        else:
            raise VendorError(f"NewVendor API error: {e}")

    except requests.exceptions.RequestException as e:
        raise VendorError(f"NewVendor connection error: {e}")
```

## Step 5: Test Integration

Create tests for your vendor:

```python
# tests/integration/test_newvendor.py

import pytest
import os
from tradingagents.dataflows.new_vendor import newvendor_get_stock_data


@pytest.fixture
def mock_newvendor_key(monkeypatch):
    """Mock NewVendor API key."""
    monkeypatch.setenv("NEWVENDOR_API_KEY", "test_key")


def test_newvendor_get_stock_data(mock_newvendor_key):
    """Test NewVendor returns stock data."""
    # This test requires the actual API, or mocking of the HTTP layer
    data = newvendor_get_stock_data("NVDA", "2024-01-01", "2024-01-10")

    assert "dates" in data
    assert "close" in data
    assert len(data["close"]) > 0
```

## Step 6: Update Documentation

After implementing the vendor, update the documentation:

1. **Add to data-flow.md**: Document the vendor in `docs/architecture/data-flow.md`
2. **Update configuration.md**: Add environment variable requirements
3. **Add API docs**: Document the functions in `docs/api/dataflows.md`

## Best Practices

1. **Follow the Interface Pattern**: Implement all required methods matching the interface
2. **Error Handling**: Map vendor-specific errors to unified exceptions
3. **Testing**: Write both unit tests (mocked) and integration tests
4. **Rate Limiting**: Implement retry logic with exponential backoff
5. **Caching**: Consider caching responses to reduce API calls
6. **Logging**: Use structured logging for debugging

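The retry-with-backoff practice (item 4) can be sketched vendor-agnostically; `with_backoff` and its retriable-exception tuple are illustrative, not part of the codebase:

```python
import random
import time


def with_backoff(fetch, max_retries=4, base_delay=1.0,
                 retriable=(ConnectionError, TimeoutError)):
    """Call `fetch()` and retry transient errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except retriable:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # 1s, 2s, 4s, ... plus jitter so clients don't retry in lockstep
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
```

A call site would wrap the vendor function, e.g. `with_backoff(lambda: newvendor_get_stock_data("NVDA", start, end))`.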
## Common Patterns

### Handling Pagination

```python
import requests


def get_all_pages(endpoint, params):
    """Fetch all pages of a paginated API."""
    all_data = []
    page = 1

    while True:
        params["page"] = page
        response = requests.get(endpoint, params=params)
        response.raise_for_status()
        data = response.json()

        if not data["results"]:
            break

        all_data.extend(data["results"])
        page += 1

    return all_data
```

### Caching Responses

```python
from functools import lru_cache


@lru_cache(maxsize=100)
def cached_get_stock_data(ticker: str, date: str):
    """Cache stock data to reduce API calls."""
    return newvendor_get_stock_data(ticker, date, date)
```

## Troubleshooting

### Import Errors
- Ensure the vendor module is in `tradingagents/dataflows/`
- Check that `__init__.py` exports the functions

### API Authentication Errors
- Verify the environment variable is set correctly
- Check the API key has the required permissions
- Ensure the API key is not expired

### Data Format Mismatches
- Transform the vendor response to the standard format
- Handle missing fields gracefully
- Validate data types before returning

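The last three points can be centralized in one helper. A sketch; the vendor-side field names mirror the hypothetical NewVendor payload from Step 1:

```python
REQUIRED = ("dates", "open", "high", "low", "close", "volume")


def normalize_payload(ticker, raw):
    """Map a raw NewVendor payload to the standard format and validate it."""
    out = {
        "ticker": ticker,
        "dates": list(raw.get("timestamps", [])),
        "open": [float(v) for v in raw.get("open_prices", [])],
        "high": [float(v) for v in raw.get("high_prices", [])],
        "low": [float(v) for v in raw.get("low_prices", [])],
        "close": [float(v) for v in raw.get("close_prices", [])],
        "volume": [int(v) for v in raw.get("volumes", [])],
    }
    # All series must have the same length, or downstream indexing breaks
    lengths = {len(out[key]) for key in REQUIRED}
    if len(lengths) != 1:
        raise ValueError(f"ragged series for {ticker}: {sorted(lengths)}")
    return out
```
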
## See Also

- [Data Flow Architecture](../architecture/data-flow.md)
- [Data Flows API Reference](../api/dataflows.md)
- [Configuration Guide](configuration.md)
- [Error Handling](adding-llm-provider.md#step-6-add-error-handling)

# Guide: Adding a New LLM Provider

This guide shows you how to add support for a new LLM provider to TradingAgents.

## Overview

Adding a new LLM provider involves:
1. Installing the provider's LangChain integration
2. Adding initialization logic
3. Configuring API keys
4. Testing the integration
5. Updating documentation

## Step 1: Install LangChain Integration

Most providers have official LangChain integrations:

```bash
# Example: Adding Cohere
pip install langchain-cohere

# Example: Adding Mistral
pip install langchain-mistralai

# Example: Adding HuggingFace
pip install langchain-huggingface
```

Add the dependency to `requirements.txt`:

```txt
langchain-cohere>=0.1.0
```

## Step 2: Add Initialization Logic

Modify `tradingagents/graph/trading_graph.py`:

```python
# Add imports at the top of the file
import os

from langchain_cohere import ChatCohere  # Example for Cohere


class TradingAgentsGraph:
    def __init__(self, selected_analysts=None, debug=False, config=None):
        # ... existing initialization ...

        # Add your provider to the initialization logic
        elif config["llm_provider"].lower() == "cohere":
            self.deep_thinking_llm = ChatCohere(
                model=config["deep_think_llm"],
                cohere_api_key=os.getenv("COHERE_API_KEY")
            )
            self.quick_thinking_llm = ChatCohere(
                model=config["quick_think_llm"],
                cohere_api_key=os.getenv("COHERE_API_KEY")
            )

        # ... rest of initialization ...
```

## Step 3: Configure API Keys

### Add Environment Variable

Update `.env.example`:

```env
# LLM Provider API Keys
OPENAI_API_KEY=your_openai_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
COHERE_API_KEY=your_cohere_key_here  # NEW
```

### Validate API Key

Add validation in the initialization:

```python
elif config["llm_provider"].lower() == "cohere":
    cohere_key = os.getenv("COHERE_API_KEY")
    if not cohere_key:
        raise ValueError(
            "COHERE_API_KEY environment variable is required when using the cohere provider. "
            "Set it with: export COHERE_API_KEY=your_key_here"
        )

    self.deep_thinking_llm = ChatCohere(
        model=config["deep_think_llm"],
        cohere_api_key=cohere_key
    )
```

## Step 4: Update Configuration

Add a default configuration for your provider:

```python
# In a configuration example or documentation

config = {
    "llm_provider": "cohere",
    "deep_think_llm": "command-r-plus",  # Cohere model for deep thinking
    "quick_think_llm": "command-r",      # Cohere model for quick tasks
    "backend_url": None  # If the provider doesn't need a custom endpoint
}
```

## Step 5: Handle Provider-Specific Features

### Custom Headers

Some providers require specific headers:

```python
elif config["llm_provider"].lower() == "cohere":
    default_headers = {
        "X-Client-Name": "TradingAgents"
    }

    self.deep_thinking_llm = ChatCohere(
        model=config["deep_think_llm"],
        cohere_api_key=cohere_key,
        headers=default_headers
    )
```

### Model Name Formats

Handle provider-specific model naming:

```python
def _format_model_name(self, provider: str, model: str) -> str:
    """Format the model name based on provider conventions."""
    if provider == "openrouter":
        # OpenRouter uses "provider/model" format
        return model if "/" in model else f"default/{model}"
    elif provider == "cohere":
        # Cohere uses simple model names
        return model.replace("cohere/", "")
    return model
```

### Rate Limiting

Implement provider-specific rate limit handling:

```python
from tradingagents.utils.exceptions import LLMRateLimitError

try:
    response = llm.invoke(messages)
except Exception as e:
    # Map provider-specific errors to unified exceptions
    if "rate_limit" in str(e).lower():
        raise LLMRateLimitError(
            provider="cohere",
            message=str(e),
            retry_after=60  # Default retry time
        )
    raise
```

## Step 6: Add Error Handling

Create unified error handling for your provider:

```python
# In tradingagents/utils/exceptions.py


class CohereLLMError(LLMError):
    """Cohere-specific LLM errors."""
    provider = "cohere"


def handle_cohere_error(error):
    """Convert Cohere errors to unified exceptions."""
    if "rate limit" in str(error).lower():
        return LLMRateLimitError(
            provider="cohere",
            message=str(error),
            retry_after=extract_retry_time(error)
        )
    return CohereLLMError(str(error))
```

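`extract_retry_time` is referenced above but not defined; a minimal sketch, assuming the hint appears in the error message text (real providers may expose it via response headers instead):

```python
import re


def extract_retry_time(error, default=60):
    """Best-effort parse of a retry-after hint from an error message."""
    match = re.search(r"retry[\s_-]*after[:\s]*(\d+)", str(error), re.IGNORECASE)
    return int(match.group(1)) if match else default
```
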
## Step 7: Test Integration

Create tests for your provider:

```python
# tests/integration/test_cohere_provider.py

import pytest
import os
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG


@pytest.fixture
def cohere_config():
    """Configuration for the Cohere provider."""
    config = DEFAULT_CONFIG.copy()
    config["llm_provider"] = "cohere"
    config["deep_think_llm"] = "command-r-plus"
    config["quick_think_llm"] = "command-r"
    return config


@pytest.fixture
def mock_env_cohere(monkeypatch):
    """Mock Cohere API key."""
    monkeypatch.setenv("COHERE_API_KEY", "test_key")


def test_cohere_initialization(cohere_config, mock_env_cohere):
    """Test the Cohere provider can be initialized."""
    ta = TradingAgentsGraph(config=cohere_config)

    assert ta.deep_thinking_llm is not None
    assert ta.quick_thinking_llm is not None


def test_cohere_missing_api_key(cohere_config, monkeypatch):
    """Test the error when the API key is missing."""
    monkeypatch.delenv("COHERE_API_KEY", raising=False)
    with pytest.raises(ValueError, match="COHERE_API_KEY"):
        TradingAgentsGraph(config=cohere_config)


@pytest.mark.integration
def test_cohere_analysis(cohere_config):
    """Test a full analysis with the Cohere provider."""
    # This requires an actual API key; skip otherwise
    if not os.getenv("COHERE_API_KEY"):
        pytest.skip("COHERE_API_KEY not set")

    ta = TradingAgentsGraph(
        selected_analysts=["market"],
        config=cohere_config
    )

    state, decision = ta.propagate("NVDA", "2024-05-10")

    assert decision["action"] in ["BUY", "SELL", "HOLD"]
    assert 0.0 <= decision["confidence_score"] <= 1.0
```

## Step 8: Update Documentation

### Update Configuration Guide

Add provider details to `docs/guides/configuration.md`:

````markdown
### Cohere Configuration

```python
config = {
    "llm_provider": "cohere",
    "deep_think_llm": "command-r-plus",
    "quick_think_llm": "command-r"
}
```

Environment:
```bash
export COHERE_API_KEY=your_key_here
```

**Models Available**:
- command-r-plus: Advanced reasoning
- command-r: Fast, cost-effective
- command: Basic model
````

### Update LLM Integration Docs

Add to `docs/architecture/llm-integration.md`:

```markdown
### Cohere

- **Models**: Command-R-Plus, Command-R, Command
- **Strengths**: Fast inference, multilingual support
- **Use Case**: Cost-effective alternative with good performance
- **API Key**: `COHERE_API_KEY`
- **Endpoint**: Built-in
```

## Step 9: Add Example Usage

Create an example script `examples/cohere_example.py`:

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

# Configure for Cohere
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "cohere"
config["deep_think_llm"] = "command-r-plus"
config["quick_think_llm"] = "command-r"

# Initialize
ta = TradingAgentsGraph(config=config)

# Run an analysis
state, decision = ta.propagate("NVDA", "2024-05-10")

print(f"Decision: {decision['action']}")
print(f"Confidence: {decision['confidence_score']:.2%}")
```

## Provider-Specific Considerations

### OpenAI-Compatible APIs

For OpenAI-compatible APIs (e.g., local models):

```python
elif config["llm_provider"].lower() == "custom_openai":
    from langchain_openai import ChatOpenAI

    self.deep_thinking_llm = ChatOpenAI(
        model=config["deep_think_llm"],
        base_url=config["backend_url"],  # Custom endpoint
        api_key=os.getenv("CUSTOM_API_KEY")
    )
```

### Azure OpenAI

```python
elif config["llm_provider"].lower() == "azure":
    from langchain_openai import AzureChatOpenAI

    self.deep_thinking_llm = AzureChatOpenAI(
        deployment_name=config["deep_think_llm"],
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        api_version="2024-02-15-preview"
    )
```

### HuggingFace

`ChatHuggingFace` wraps an underlying endpoint rather than taking a model name directly:

```python
elif config["llm_provider"].lower() == "huggingface":
    from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

    endpoint = HuggingFaceEndpoint(
        repo_id=config["deep_think_llm"],
        huggingfacehub_api_token=os.getenv("HUGGINGFACE_API_KEY")
    )
    self.deep_thinking_llm = ChatHuggingFace(llm=endpoint)
```

## Best Practices

1. **Follow LangChain Patterns**: Use official LangChain integrations when available
2. **Unified Error Handling**: Map provider errors to TradingAgents exceptions
3. **Environment Variables**: Always use environment variables for API keys
4. **Validation**: Validate API keys before usage
5. **Testing**: Write comprehensive tests for the integration
6. **Documentation**: Update all relevant documentation
7. **Examples**: Provide working examples
8. **Defaults**: Set sensible default models
9. **Rate Limits**: Implement retry logic
10. **Logging**: Add debug logging for troubleshooting

## Troubleshooting

### Import Errors

**Issue**: `ModuleNotFoundError: No module named 'langchain_cohere'`

**Solution**: Install the provider package
```bash
pip install langchain-cohere
```

### API Key Errors

**Issue**: `ValueError: COHERE_API_KEY environment variable is required`

**Solution**: Set the API key
```bash
export COHERE_API_KEY=your_key_here
```

### Model Name Errors

**Issue**: `Invalid model name: 'command-r-plus'`

**Solution**: Check the provider documentation for correct model names

### Rate Limit Handling

**Issue**: Provider rate limits not being handled

**Solution**: Implement provider-specific error mapping
```python
except ProviderError as e:
    if "rate limit" in str(e).lower():
        raise LLMRateLimitError(provider="cohere", ...)
```

## See Also

- [LLM Integration Architecture](../architecture/llm-integration.md)
- [Configuration Guide](configuration.md)
- [Error Handling Patterns](error-handling.md)
- [Testing Guide](../testing/writing-tests.md)

# Guide: Adding a New Analyst

This guide shows you how to extend TradingAgents with a custom analyst agent.

## Overview

Creating a new analyst involves:
1. Creating the analyst class
2. Defining analysis logic
3. Integrating data access tools
4. Registering the analyst
5. Testing the implementation

## Step 1: Create Analyst Class

Create a new file in `tradingagents/agents/analysts/`:

```python
# tradingagents/agents/analysts/momentum_analyst.py

from datetime import datetime, timedelta
from typing import Any, Dict, List

from langchain.schema import HumanMessage


class MomentumAnalyst:
    """Analyzes multi-timeframe momentum and trend strength."""

    def __init__(self, llm, tools: List):
        """
        Initialize the momentum analyst.

        Args:
            llm: Language model for analysis
            tools: List of data access tools
        """
        self.llm = llm
        self.tools = {tool.name: tool for tool in tools}
        self.name = "momentum"

    def analyze(self, ticker: str, date: str) -> str:
        """
        Perform momentum analysis.

        Args:
            ticker: Stock ticker symbol
            date: Analysis date (YYYY-MM-DD)

        Returns:
            Analysis report as a string
        """
        # Step 1: Gather data
        data = self._gather_data(ticker, date)

        # Step 2: Create the analysis prompt
        prompt = self._create_prompt(ticker, date, data)

        # Step 3: Generate the analysis
        response = self.llm.invoke([HumanMessage(content=prompt)])

        return response.content

    def _gather_data(self, ticker: str, date: str) -> Dict[str, Any]:
        """Gather the required data for analysis."""
        # Get stock data covering multiple timeframes
        stock_data = self.tools["get_stock_data"](
            ticker,
            start_date=self._get_start_date(date, days=90),
            end_date=date
        )

        # Get momentum indicators
        indicators = self.tools["get_indicators"](
            ticker,
            indicators=["MACD", "RSI", "ADX"]
        )

        return {
            "stock_data": stock_data,
            "indicators": indicators
        }

    def _create_prompt(self, ticker: str, date: str, data: Dict[str, Any]) -> str:
        """Create the analysis prompt for the LLM."""
        return f"""
You are a Momentum Analyst specializing in multi-timeframe trend analysis.

Analyze the momentum and trend strength for {ticker} as of {date}.

Data provided:
- Stock prices (90 days): {data['stock_data']}
- MACD: {data['indicators']['MACD']}
- RSI: {data['indicators']['RSI']}
- ADX: {data['indicators']['ADX']}

Provide analysis covering:
1. Short-term momentum (daily/weekly)
2. Medium-term trend (monthly)
3. Trend strength assessment
4. Potential reversal signals
5. Momentum-based trading recommendation

Format your response as a concise report.
"""

    def _get_start_date(self, end_date: str, days: int) -> str:
        """Calculate the start date for data retrieval."""
        end = datetime.strptime(end_date, "%Y-%m-%d")
        start = end - timedelta(days=days)
        return start.strftime("%Y-%m-%d")
```

## Step 2: Register Data Tools

Ensure your analyst has access to the required data tools:

```python
from tradingagents.agents.utils.agent_utils import (
    get_stock_data,
    get_indicators,
    get_fundamentals,
    get_news
)

# Tools will be passed to the analyst constructor
tools = [
    get_stock_data,
    get_indicators,
    # Add other tools as needed
]
```

## Step 3: Integrate into TradingGraph

Modify `tradingagents/graph/trading_graph.py` to include your analyst:

```python
# Import your analyst
from tradingagents.agents.analysts.momentum_analyst import MomentumAnalyst


class TradingAgentsGraph:
    def __init__(self, selected_analysts=None, debug=False, config=None):
        # ... existing initialization ...
        selected_analysts = selected_analysts or []  # Guard against None

        # Initialize analysts
        self.analysts = {}

        if "momentum" in selected_analysts:
            self.analysts["momentum"] = MomentumAnalyst(
                llm=self.quick_thinking_llm,
                tools=self.analyst_tools
            )

        # ... rest of initialization ...
```

## Step 4: Update Analyst Selection

Allow users to select your analyst:

```python
# In main.py or the CLI
selected_analysts = ["market", "fundamentals", "momentum"]

ta = TradingAgentsGraph(
    selected_analysts=selected_analysts,
    debug=True
)
```

## Step 5: Test Your Analyst

Create a test file `tests/unit/test_momentum_analyst.py`:

```python
from unittest.mock import Mock

import pytest

from tradingagents.agents.analysts.momentum_analyst import MomentumAnalyst


def test_momentum_analyst_initialization():
    """Test the analyst can be initialized."""
    llm = Mock()
    tools = []

    analyst = MomentumAnalyst(llm, tools)

    assert analyst.name == "momentum"
    assert analyst.llm == llm


def test_momentum_analyst_analyze():
    """Test the analyst can perform analysis."""
    # Mock LLM
    llm = Mock()
    llm.invoke.return_value = Mock(
        content="Momentum analysis: Strong uptrend..."
    )

    # Mock tools; set .name after creation, since Mock(name=...)
    # would configure the mock's repr rather than the attribute
    get_stock_data = Mock(return_value={
        "dates": ["2024-01-01", "2024-01-02"],
        "close": [150.0, 152.0]
    })
    get_indicators = Mock(return_value={
        "MACD": {"macd": [0.5], "signal": [0.4]},
        "RSI": {"rsi": [65.0]},
        "ADX": {"adx": [30.0]}
    })

    get_stock_data.name = "get_stock_data"
    get_indicators.name = "get_indicators"

    tools = [get_stock_data, get_indicators]

    # Create the analyst
    analyst = MomentumAnalyst(llm, tools)

    # Run the analysis
    report = analyst.analyze("NVDA", "2024-01-02")

    # Verify
    assert "Momentum analysis" in report
    assert llm.invoke.called
    assert get_stock_data.called
    assert get_indicators.called
```

Run the tests:
```bash
pytest tests/unit/test_momentum_analyst.py -v
```

## Advanced Features

### Multi-Timeframe Analysis

```python
def _gather_multi_timeframe_data(self, ticker: str, date: str):
    """Get data for multiple timeframes."""
    return {
        "daily": self.tools["get_stock_data"](
            ticker,
            self._get_start_date(date, days=30),
            date
        ),
        "weekly": self._aggregate_weekly(
            self.tools["get_stock_data"](
                ticker,
                self._get_start_date(date, days=90),
                date
            )
        ),
        "monthly": self._aggregate_monthly(
            self.tools["get_stock_data"](
                ticker,
                self._get_start_date(date, days=365),
                date
            )
        )
    }
```

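`_aggregate_weekly` and `_aggregate_monthly` are left undefined above. A sketch of the weekly case over the standard parallel-list format, bucketing by ISO week (the monthly variant would bucket by `(year, month)` instead); on the class this would be a method rather than a free function:

```python
from collections import defaultdict
from datetime import date


def aggregate_weekly(daily):
    """Collapse daily OHLCV bars into weekly bars, bucketed by ISO week."""
    buckets = defaultdict(list)
    for i, day in enumerate(daily["dates"]):
        year, week, _ = date.fromisoformat(day).isocalendar()
        buckets[(year, week)].append(i)

    weekly = {key: [] for key in ("dates", "open", "high", "low", "close", "volume")}
    for key in sorted(buckets):
        idx = buckets[key]
        weekly["dates"].append(daily["dates"][idx[0]])   # week start
        weekly["open"].append(daily["open"][idx[0]])     # first bar's open
        weekly["high"].append(max(daily["high"][i] for i in idx))
        weekly["low"].append(min(daily["low"][i] for i in idx))
        weekly["close"].append(daily["close"][idx[-1]])  # last bar's close
        weekly["volume"].append(sum(daily["volume"][i] for i in idx))
    return weekly
```
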
### Custom Indicators

```python
def _calculate_custom_indicators(self, data):
    """Calculate custom momentum indicators."""
    import numpy as np

    prices = np.array(data["close"])

    # Rate of Change over 20 bars, as a percentage
    roc = (prices[-1] - prices[-20]) / prices[-20] * 100

    # Momentum: absolute price change over 10 bars
    momentum = prices[-1] - prices[-10]

    return {
        "roc": roc,
        "momentum": momentum
    }
```

### Caching Analysis

```python
def analyze(self, ticker: str, date: str) -> str:
    """Analyze with caching."""
    # Check the cache first
    cache_key = f"momentum_{ticker}_{date}"
    if cached := self._get_from_cache(cache_key):
        return cached

    # Perform the analysis
    result = self._perform_analysis(ticker, date)

    # Save to the cache
    self._save_to_cache(cache_key, result)

    return result
```

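The `_get_from_cache`/`_save_to_cache` helpers above are assumed; a minimal file-backed sketch (the directory name and 24-hour TTL are illustrative choices, not project conventions):

```python
import json
import time
from pathlib import Path


class ReportCache:
    """Tiny file-backed cache for analyst reports."""

    def __init__(self, cache_dir=".analyst_cache", ttl_seconds=86_400):
        self.dir = Path(cache_dir)
        self.dir.mkdir(exist_ok=True)
        self.ttl = ttl_seconds

    def get(self, key):
        path = self.dir / f"{key}.json"
        if not path.exists():
            return None
        entry = json.loads(path.read_text())
        if time.time() - entry["saved_at"] > self.ttl:
            return None  # entry expired
        return entry["report"]

    def set(self, key, report):
        entry = {"saved_at": time.time(), "report": report}
        (self.dir / f"{key}.json").write_text(json.dumps(entry))
```

An analyst could then implement `_get_from_cache(key)` as `self.cache.get(key)` and `_save_to_cache(key, report)` as `self.cache.set(key, report)`.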
## Best Practices

1. **Clear Responsibility**: Each analyst should have a focused domain
2. **Consistent Interface**: Follow the `analyze(ticker, date)` pattern
3. **Tool Usage**: Use the unified data interface for vendor independence
4. **Error Handling**: Handle missing data and API failures gracefully
5. **Structured Output**: Return well-formatted reports
6. **Testing**: Write unit tests for your analyst
7. **Documentation**: Add docstrings to all methods
8. **Performance**: Cache expensive calculations
9. **Logging**: Use logging for debugging
10. **Configuration**: Make analyst behavior configurable

|
||||
|
||||
### Comparative Analysis
|
||||
|
||||
```python
|
||||
def compare_to_benchmark(self, ticker: str, benchmark: str, date: str):
|
||||
"""Compare ticker performance to benchmark."""
|
||||
ticker_data = self.tools["get_stock_data"](ticker, ...)
|
||||
benchmark_data = self.tools["get_stock_data"](benchmark, ...)
|
||||
|
||||
# Calculate relative strength
|
||||
relative_strength = self._calculate_relative_strength(
|
||||
ticker_data,
|
||||
benchmark_data
|
||||
)
|
||||
|
||||
return relative_strength
|
||||
```
|
||||
|
||||
### Sector Analysis
|
||||
|
||||
```python
|
||||
def analyze_sector_context(self, ticker: str, date: str):
|
||||
"""Analyze ticker in sector context."""
|
||||
sector = self._get_sector(ticker)
|
||||
peers = self._get_sector_peers(sector)
|
||||
|
||||
# Compare to sector average
|
||||
sector_analysis = self._compare_to_peers(ticker, peers, date)
|
||||
|
||||
return sector_analysis
|
||||
```
|
||||
|
||||
### Historical Patterns
|
||||
|
||||
```python
|
||||
def find_historical_patterns(self, ticker: str, date: str):
|
||||
"""Find similar historical patterns."""
|
||||
current_pattern = self._extract_pattern(ticker, date)
|
||||
|
||||
# Search memory for similar patterns
|
||||
similar = self.memory.search_similar(
|
||||
query=f"{ticker} pattern {current_pattern}",
|
||||
k=5
|
||||
)
|
||||
|
||||
return similar
|
||||
```
|
||||
|
||||
## Troubleshooting

### Analyst Not Running

**Issue**: Analyst not included in the workflow

**Solution**: Check that `selected_analysts` includes your analyst name

```python
selected_analysts = ["market", "fundamentals", "momentum"]
```

### Data Access Errors

**Issue**: Tools not available or returning errors

**Solution**: Verify tool registration and vendor configuration

```python
# Check that tools are available
print(self.tools.keys())

# Verify vendor config
from tradingagents.dataflows.config import get_config
print(get_config()["data_vendors"])
```

### LLM Errors

**Issue**: LLM returning unexpected responses

**Solution**: Improve prompt clarity and structure

```python
def _create_prompt(self, ticker, date, data):
    """Create a clear, structured prompt."""
    return f"""
You are a {self.name} analyst.

Task: Analyze {ticker} as of {date}

Data:
{self._format_data(data)}

Required output format:
1. Key findings
2. Specific metrics
3. Recommendation

Be concise and specific.
"""
```

## See Also

- [Multi-Agent System Architecture](../architecture/multi-agent-system.md)
- [Agents API Reference](../api/agents.md)
- [Data Flows API](../api/dataflows.md)
- [Testing Guide](../testing/writing-tests.md)

@ -0,0 +1,447 @@
# Configuration Guide

Complete reference for configuring TradingAgents.

## Configuration File

Location: `tradingagents/default_config.py`

## Default Configuration

```python
DEFAULT_CONFIG = {
    # Directories
    "project_dir": "<auto-detected>",
    "results_dir": "./results",
    "data_cache_dir": "./dataflows/data_cache",

    # LLM settings
    "llm_provider": "openai",
    "deep_think_llm": "o4-mini",
    "quick_think_llm": "gpt-4o-mini",
    "backend_url": "https://api.openai.com/v1",

    # Workflow settings
    "max_debate_rounds": 1,
    "max_risk_discuss_rounds": 1,
    "max_recur_limit": 100,

    # Data vendors
    "data_vendors": {
        "core_stock_apis": "yfinance",
        "technical_indicators": "yfinance",
        "fundamental_data": "alpha_vantage",
        "news_data": "alpha_vantage"
    },

    # Tool-level overrides (optional)
    "tool_vendors": {}
}
```

## Configuration Options

### Directory Settings

#### project_dir
- **Type**: str
- **Default**: Auto-detected from the package location
- **Description**: Root directory of the TradingAgents package

#### results_dir
- **Type**: str
- **Default**: `"./results"`
- **Environment Variable**: `TRADINGAGENTS_RESULTS_DIR`
- **Description**: Directory for storing analysis results
- **Example**:

```python
config["results_dir"] = "/path/to/results"
```

```bash
# Or set the environment variable
export TRADINGAGENTS_RESULTS_DIR=/path/to/results
```

#### data_cache_dir
- **Type**: str
- **Default**: `"./dataflows/data_cache"`
- **Description**: Directory for caching data vendor responses

### LLM Settings

#### llm_provider
- **Type**: str
- **Options**: `"openai"`, `"anthropic"`, `"google"`, `"openrouter"`, `"ollama"`
- **Default**: `"openai"`
- **Description**: LLM provider selection

**Examples**:
```python
# OpenAI (default)
config["llm_provider"] = "openai"

# Anthropic
config["llm_provider"] = "anthropic"

# OpenRouter (unified access)
config["llm_provider"] = "openrouter"

# Google Generative AI
config["llm_provider"] = "google"

# Ollama (local)
config["llm_provider"] = "ollama"
```

#### deep_think_llm
- **Type**: str
- **Default**: `"o4-mini"`
- **Description**: Model for complex reasoning tasks
- **Use Cases**: Research debates, trading decisions, risk assessment

**Recommended Models by Provider**:
```python
# OpenAI
config["deep_think_llm"] = "o4-mini"      # Fast, affordable
config["deep_think_llm"] = "o1-preview"   # Best reasoning

# Anthropic
config["deep_think_llm"] = "claude-sonnet-4-20250514"

# OpenRouter
config["deep_think_llm"] = "anthropic/claude-sonnet-4.5"

# Google
config["deep_think_llm"] = "gemini-2.0-flash"

# Ollama
config["deep_think_llm"] = "mistral"
```

#### quick_think_llm
- **Type**: str
- **Default**: `"gpt-4o-mini"`
- **Description**: Model for fast, routine tasks
- **Use Cases**: Analyst reports, data summarization, tool calling

**Recommended Models**:
```python
# OpenAI
config["quick_think_llm"] = "gpt-4o-mini"  # Most cost-effective

# Anthropic
config["quick_think_llm"] = "claude-sonnet-4-20250514"

# OpenRouter
config["quick_think_llm"] = "openai/gpt-4o-mini"
```

#### backend_url
- **Type**: str
- **Default**: `"https://api.openai.com/v1"`
- **Description**: API endpoint for the LLM provider

**Examples**:
```python
# OpenAI
config["backend_url"] = "https://api.openai.com/v1"

# Anthropic
config["backend_url"] = "https://api.anthropic.com"

# OpenRouter
config["backend_url"] = "https://openrouter.ai/api/v1"

# Ollama (local)
config["backend_url"] = "http://localhost:11434/v1"
```

### Workflow Settings

#### max_debate_rounds
- **Type**: int
- **Default**: `1`
- **Range**: 1-5
- **Description**: Number of bull/bear debate rounds
- **Impact**:
  - More rounds = deeper analysis
  - More rounds = higher cost and latency
  - Diminishing returns after 2-3 rounds

**Examples**:
```python
# Fast, cost-effective
config["max_debate_rounds"] = 1

# Balanced
config["max_debate_rounds"] = 2

# Deep analysis
config["max_debate_rounds"] = 3
```

#### max_risk_discuss_rounds
- **Type**: int
- **Default**: `1`
- **Range**: 1-3
- **Description**: Number of risk management discussion rounds

#### max_recur_limit
- **Type**: int
- **Default**: `100`
- **Description**: Maximum recursion limit for graph execution

### Data Vendor Settings

#### data_vendors
- **Type**: Dict[str, str]
- **Description**: Category-level data vendor configuration

**Available Categories**:

##### core_stock_apis
- **Options**: `"yfinance"`, `"alpha_vantage"`, `"local"`
- **Default**: `"yfinance"`
- **Purpose**: Stock prices and quotes

##### technical_indicators
- **Options**: `"yfinance"`, `"alpha_vantage"`, `"local"`
- **Default**: `"yfinance"`
- **Purpose**: Technical indicators (MACD, RSI, etc.)

##### fundamental_data
- **Options**: `"openai"`, `"alpha_vantage"`, `"local"`
- **Default**: `"alpha_vantage"`
- **Purpose**: Company financials and ratios

##### news_data
- **Options**: `"openai"`, `"alpha_vantage"`, `"google"`, `"local"`
- **Default**: `"alpha_vantage"`
- **Purpose**: News articles and events

**Example**:
```python
config["data_vendors"] = {
    "core_stock_apis": "yfinance",
    "technical_indicators": "yfinance",
    "fundamental_data": "alpha_vantage",
    "news_data": "google"  # Use Google for news
}
```

#### tool_vendors
- **Type**: Dict[str, str]
- **Description**: Tool-level vendor overrides (take precedence over category-level settings)

**Example**:
```python
config["tool_vendors"] = {
    "get_stock_data": "alpha_vantage",  # Override category default
    "get_news": "google"                # Override category default
}
```

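The precedence rule can be pictured with a small sketch; `resolve_vendor` is a hypothetical helper written for illustration, not the library's actual function:

```python
def resolve_vendor(config, tool_name, category):
    """Tool-level override wins; otherwise fall back to the category default."""
    return config.get("tool_vendors", {}).get(
        tool_name,
        config["data_vendors"][category],
    )

config = {
    "data_vendors": {"news_data": "alpha_vantage"},
    "tool_vendors": {"get_news": "google"},
}

resolve_vendor(config, "get_news", "news_data")            # "google" (override)
resolve_vendor(config, "get_news_sentiment", "news_data")  # "alpha_vantage" (default)
```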
## Environment Variables

### LLM Provider API Keys

```bash
# OpenAI (required for the OpenAI provider or embeddings)
export OPENAI_API_KEY=sk-your_key_here

# Anthropic (required for the Anthropic provider)
export ANTHROPIC_API_KEY=sk-ant-your_key_here

# OpenRouter (required for the OpenRouter provider)
export OPENROUTER_API_KEY=sk-or-v1-your_key_here

# Google (required for the Google provider)
export GOOGLE_API_KEY=your_key_here
```

### Data Vendor API Keys

```bash
# Alpha Vantage (required for fundamental and news data)
export ALPHA_VANTAGE_API_KEY=your_key_here
```

### Application Settings

```bash
# Results directory
export TRADINGAGENTS_RESULTS_DIR=./results
```

## Configuration Examples

### Production Configuration

```python
from tradingagents.default_config import DEFAULT_CONFIG

config = DEFAULT_CONFIG.copy()

# Use production-grade models
config["llm_provider"] = "openai"
config["deep_think_llm"] = "o1-preview"  # Best reasoning
config["quick_think_llm"] = "gpt-4o"     # High quality

# Deep analysis
config["max_debate_rounds"] = 2
config["max_risk_discuss_rounds"] = 2

# Reliable data sources
config["data_vendors"] = {
    "core_stock_apis": "alpha_vantage",
    "technical_indicators": "alpha_vantage",
    "fundamental_data": "alpha_vantage",
    "news_data": "alpha_vantage"
}
```

### Development/Testing Configuration

```python
config = DEFAULT_CONFIG.copy()

# Use cost-effective models
config["llm_provider"] = "openai"
config["deep_think_llm"] = "o4-mini"
config["quick_think_llm"] = "gpt-4o-mini"

# Fast analysis
config["max_debate_rounds"] = 1
config["max_risk_discuss_rounds"] = 1

# Free data sources
config["data_vendors"] = {
    "core_stock_apis": "yfinance",
    "technical_indicators": "yfinance",
    "fundamental_data": "alpha_vantage",  # Free tier
    "news_data": "google"
}
```

### Cost-Optimized Configuration

```python
config = DEFAULT_CONFIG.copy()

# Use OpenRouter for competitive pricing
config["llm_provider"] = "openrouter"
config["deep_think_llm"] = "anthropic/claude-sonnet-4.5"
config["quick_think_llm"] = "openai/gpt-4o-mini"
config["backend_url"] = "https://openrouter.ai/api/v1"

# Minimal debate rounds
config["max_debate_rounds"] = 1
config["max_risk_discuss_rounds"] = 1
```

### Offline/Local Configuration

```python
config = DEFAULT_CONFIG.copy()

# Use local Ollama models
config["llm_provider"] = "ollama"
config["deep_think_llm"] = "mistral"
config["quick_think_llm"] = "mistral"
config["backend_url"] = "http://localhost:11434/v1"

# Use the local data cache
config["data_vendors"] = {
    "core_stock_apis": "local",
    "technical_indicators": "local",
    "fundamental_data": "local",
    "news_data": "local"
}
```

## Using Configuration

### In Code

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

# Create a custom config
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "anthropic"
config["max_debate_rounds"] = 2

# Initialize with the config
ta = TradingAgentsGraph(config=config)

# Run analysis
state, decision = ta.propagate("NVDA", "2024-05-10")
```

### In CLI

The CLI reads configuration from `default_config.py` and allows runtime overrides through the interactive menu.

### With .env File

Create a `.env` file:

```env
# LLM Provider
OPENAI_API_KEY=sk-your_key_here
ANTHROPIC_API_KEY=sk-ant-your_key_here

# Data Vendor
ALPHA_VANTAGE_API_KEY=your_key_here

# Application
TRADINGAGENTS_RESULTS_DIR=./results
```

Load it in Python:

```python
from dotenv import load_dotenv

load_dotenv()
# Environment variables are now available
```

## Best Practices

1. **Never Hardcode Keys**: Use environment variables
2. **Copy the Default Config**: Always start from `config = DEFAULT_CONFIG.copy()`
3. **Start Minimal**: Use 1 debate round initially
4. **Test Locally**: Use Ollama for development
5. **Monitor Costs**: Track LLM API usage
6. **Cache Aggressively**: Use the local data vendor when possible
7. **Validate Configuration**: Check keys before running

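The last practice can be sketched as a small pre-flight check; `validate_config` is a hypothetical helper and the `required` mapping is an assumption based on the providers listed above, not an exhaustive rule:

```python
import os

def validate_config(config):
    """Fail fast if required API keys are missing (illustrative helper)."""
    # Map each provider to the environment variable it needs.
    required = {
        "openai": "OPENAI_API_KEY",
        "anthropic": "ANTHROPIC_API_KEY",
        "openrouter": "OPENROUTER_API_KEY",
        "google": "GOOGLE_API_KEY",
    }
    missing = []
    key = required.get(config.get("llm_provider"))
    if key and not os.environ.get(key):
        missing.append(key)
    # Alpha Vantage is needed when any vendor category uses it.
    if "alpha_vantage" in config.get("data_vendors", {}).values():
        if not os.environ.get("ALPHA_VANTAGE_API_KEY"):
            missing.append("ALPHA_VANTAGE_API_KEY")
    if missing:
        raise ValueError(f"Missing environment variables: {', '.join(missing)}")
```

Calling this once before `TradingAgentsGraph` is built turns a mid-run API failure into an immediate, readable error.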
## Troubleshooting

### Missing API Keys

**Error**: `ValueError: OPENAI_API_KEY environment variable is required`

**Solution**:
```bash
export OPENAI_API_KEY=your_key_here
```

### Invalid Model Names

**Error**: `Invalid model name: 'gpt-5'`

**Solution**: Check the provider documentation for valid model names

### Data Vendor Errors

**Error**: `VendorError: Alpha Vantage API key invalid`

**Solution**: Verify the API key is correct and has remaining quota

## See Also

- [LLM Integration Architecture](../architecture/llm-integration.md)
- [Data Flow Architecture](../architecture/data-flow.md)
- [Adding LLM Provider](adding-llm-provider.md)
- [Quick Start Guide](../QUICKSTART.md)

@ -0,0 +1,6 @@
{
  "session_id": "20251225-143717",
  "started": "2025-12-25T14:37:17.406980",
  "github_issue": null,
  "agents": []
}

@ -0,0 +1,124 @@
# Session 20251225-143717

**Started**: 2025-12-25 14:37:17

---

**14:37:17 - unknown**: Completed

**14:37:19 - unknown**: Completed

**14:39:05 - unknown**: Completed

**14:39:29 - unknown**: Completed

**14:39:29 - unknown**: Completed

**14:43:00 - unknown**: Completed

**14:43:24 - unknown**: Completed

**14:44:10 - unknown**: Completed

**14:45:32 - unknown**: Completed

**14:46:13 - unknown**: Completed

**14:47:55 - unknown**: Completed

**14:53:19 - unknown**: Completed

**14:56:45 - unknown**: Completed

**14:57:50 - unknown**: Completed

**15:07:08 - unknown**: Completed

**15:07:10 - unknown**: Completed

**15:08:10 - unknown**: Completed

**15:09:59 - unknown**: Completed

**15:12:02 - unknown**: Completed

**15:12:17 - unknown**: Completed

**15:13:25 - unknown**: Completed

**15:15:12 - unknown**: Completed

**15:15:49 - unknown**: Completed

**15:16:53 - unknown**: Completed

**15:17:08 - unknown**: Completed

**15:17:13 - unknown**: Completed

**15:19:32 - unknown**: Completed

**15:21:30 - unknown**: Completed

**15:21:40 - unknown**: Completed

**15:21:57 - unknown**: Completed

**15:21:59 - unknown**: Completed

**15:24:02 - unknown**: Completed

**15:25:23 - unknown**: Completed

**15:26:17 - unknown**: Completed

**15:39:51 - unknown**: Completed

**15:40:02 - unknown**: Completed

**15:41:57 - unknown**: Completed

**15:43:18 - unknown**: Completed

**15:44:21 - unknown**: Completed

**15:47:55 - unknown**: Completed

**15:50:00 - unknown**: Completed

**15:50:18 - unknown**: Completed

**15:50:28 - unknown**: Completed

**15:52:04 - unknown**: Completed

**15:55:31 - unknown**: Completed

**16:01:59 - unknown**: Completed

**16:02:27 - unknown**: Completed

**16:09:11 - unknown**: Completed

**16:09:48 - unknown**: Completed

**16:09:50 - unknown**: Completed

**16:12:38 - unknown**: Completed

**16:27:00 - unknown**: Completed

**23:04:21 - unknown**: Completed

**23:05:12 - unknown**: Completed

**23:06:13 - unknown**: Completed

**23:38:16 - unknown**: Completed

**23:40:55 - unknown**: Completed

**23:41:45 - unknown**: Completed

**23:50:42 - unknown**: Completed

@ -0,0 +1,122 @@
# Session 20251226-075746

**Started**: 2025-12-26 07:57:46

---

**07:57:46 - unknown**: Completed

**08:05:25 - unknown**: Completed

**08:07:12 - unknown**: Completed

**08:09:57 - unknown**: Completed

**08:10:13 - unknown**: Completed

**08:11:48 - unknown**: Completed

**08:14:02 - unknown**: Completed

**08:14:21 - unknown**: Completed

**08:14:30 - unknown**: Completed

**08:19:36 - unknown**: Completed

**08:34:07 - unknown**: Completed

**08:57:24 - unknown**: Completed

**09:01:49 - unknown**: Completed

**09:08:29 - unknown**: Completed

**09:08:55 - unknown**: Completed

**09:11:01 - unknown**: Completed

**09:15:14 - unknown**: Completed

**09:17:01 - unknown**: Completed

**09:17:54 - unknown**: Completed

**09:17:55 - unknown**: Completed

**09:19:24 - unknown**: Completed

**09:20:38 - unknown**: Completed

**09:20:40 - unknown**: Completed

**09:22:32 - unknown**: Completed

**09:22:41 - unknown**: Completed

**09:22:59 - unknown**: Completed

**09:26:03 - unknown**: Completed

**09:30:19 - unknown**: Completed

**09:31:50 - unknown**: Completed

**09:37:36 - unknown**: Completed

**09:41:02 - unknown**: Completed

**09:43:19 - unknown**: Completed

**09:44:27 - unknown**: Completed

**09:45:18 - unknown**: Completed

**09:47:57 - unknown**: Completed

**09:47:58 - unknown**: Completed

**09:49:48 - unknown**: Completed

**09:50:00 - unknown**: Completed

**09:53:25 - unknown**: Completed

**09:54:08 - unknown**: Completed

**09:54:59 - unknown**: Completed

**09:57:00 - unknown**: Completed

**09:57:07 - unknown**: Completed

**09:58:12 - unknown**: Completed

**10:00:02 - unknown**: Completed

**10:01:05 - unknown**: Completed

**10:03:34 - unknown**: Completed

**10:07:50 - unknown**: Completed

**10:10:14 - unknown**: Completed

**10:10:38 - unknown**: Completed

**10:11:31 - unknown**: Completed

**10:13:43 - unknown**: Completed

**10:13:45 - unknown**: Completed

**10:15:12 - unknown**: Completed

**10:16:06 - unknown**: Completed

**10:17:25 - unknown**: Completed

**10:17:27 - unknown**: Completed

**10:17:30 - unknown**: Completed

@ -0,0 +1,204 @@
# Testing Overview

TradingAgents uses a comprehensive testing strategy to ensure code quality and reliability.

## Testing Philosophy

Our testing approach combines:

- **Unit Tests**: Fast, isolated tests for individual components
- **Integration Tests**: Tests for component interactions
- **End-to-End Tests**: Full workflow validation
- **Regression Tests**: Prevent fixed bugs from returning

## Test Structure

```
tests/
├── unit/              # Unit tests (fast, isolated)
│   ├── test_analysts.py
│   ├── test_dataflows.py
│   └── test_utils.py
├── integration/       # Integration tests (medium speed)
│   ├── test_graph.py
│   ├── test_llm_providers.py
│   └── test_data_vendors.py
├── regression/        # Regression tests
│   └── smoke/         # Critical path tests (CI gate)
├── fixtures/          # Shared test fixtures
└── conftest.py        # pytest configuration
```

## Running Tests

### All Tests

```bash
pytest tests/
```

### Specific Test Categories

```bash
# Unit tests only
pytest tests/unit/

# Integration tests only
pytest tests/integration/

# Regression tests only
pytest tests/regression/

# Smoke tests (critical path)
pytest -m smoke
```

### With Coverage

```bash
pytest tests/ --cov=tradingagents --cov-report=html
```

### Specific Test File

```bash
pytest tests/unit/test_analysts.py -v
```

### Specific Test Function

```bash
pytest tests/unit/test_analysts.py::test_market_analyst_initialization -v
```

## Test Categories

### Unit Tests

**Purpose**: Test individual functions and classes in isolation

**Characteristics**:
- Fast (<1 second per test)
- No external dependencies
- Use mocks for LLMs and data vendors
- High coverage target (90%+)

**Example**:
```python
from unittest.mock import Mock

def test_analyst_initialization():
    """Test that an analyst can be initialized."""
    llm = Mock()
    tools = []

    analyst = MarketAnalyst(llm, tools)

    assert analyst.name == "market"
    assert analyst.llm == llm
```

### Integration Tests

**Purpose**: Test component interactions

**Characteristics**:
- Medium speed (1-30 seconds)
- May use test APIs or mocks
- Validate workflows
- Coverage target (70%+)

**Example**:
```python
def test_data_vendor_integration():
    """Test that a data vendor can provide data."""
    interface = DataInterface()
    data = interface.get_stock_data("NVDA", "2024-01-01", "2024-01-10")

    assert "close" in data
    assert len(data["close"]) > 0
```

### End-to-End Tests

**Purpose**: Test complete workflows

**Characteristics**:
- Slow (30+ seconds)
- Use real or test LLM APIs
- Validate the full system
- Minimal count (critical paths only)

**Example**:
```python
import pytest

@pytest.mark.integration
def test_full_analysis_workflow():
    """Test a complete trading analysis."""
    ta = TradingAgentsGraph()
    state, decision = ta.propagate("NVDA", "2024-05-10")

    assert decision["action"] in ["BUY", "SELL", "HOLD"]
    assert 0.0 <= decision["confidence_score"] <= 1.0
```

## Test Fixtures

Common fixtures are defined in `tests/conftest.py`:

```python
from unittest.mock import Mock

import pytest

@pytest.fixture
def mock_llm():
    """Mock LLM for testing."""
    llm = Mock()
    llm.invoke.return_value = Mock(content="Test response")
    return llm

@pytest.fixture
def mock_data_tools():
    """Mock data access tools."""
    return {
        "get_stock_data": Mock(return_value={"close": [150, 151, 152]}),
        "get_indicators": Mock(return_value={"RSI": {"rsi": [65]}}),
    }

@pytest.fixture
def test_config():
    """Test configuration."""
    from tradingagents.default_config import DEFAULT_CONFIG

    config = DEFAULT_CONFIG.copy()
    config["max_debate_rounds"] = 1
    return config
```

## Writing Tests

See the [Writing Tests Guide](writing-tests.md) for detailed patterns and examples.

## Coverage Goals

- **Overall**: 80%+
- **Unit Tests**: 90%+
- **Integration Tests**: 70%+
- **Critical Paths**: 100%

## Continuous Integration

Tests run automatically on:

- Pull requests
- Pushes to the main branch
- Pre-commit hooks (optional)

## Best Practices

1. **Write Tests First**: Use a TDD approach when possible
2. **One Assertion**: Focus each test on a single behavior
3. **Clear Names**: `test_<function>_<scenario>_<expected>`
4. **Use Fixtures**: Apply the DRY principle to setup code
5. **Mock External Calls**: Don't hit real APIs in unit tests
6. **Fast Tests**: Keep unit tests under 1 second
7. **Isolation**: Tests should not depend on each other
8. **Documentation**: Add docstrings to complex tests

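Practices 2 and 3 can be sketched together: a name that encodes function, scenario, and expectation reads as a sentence, and the body makes one focused assertion. The function under test here is a toy stand-in, not real project code:

```python
from unittest.mock import Mock

def fetch_close_prices(tools, ticker):
    """Toy function under test (stands in for real analyst code)."""
    return tools["get_stock_data"](ticker).get("close", [])

def test_fetch_close_prices_missing_field_returns_empty_list():
    # scenario: the vendor response has no "close" field
    tools = {"get_stock_data": Mock(return_value={})}
    # expected: graceful empty result, checked with one assertion
    assert fetch_close_prices(tools, "NVDA") == []
```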
## See Also

- [Running Tests](running-tests.md)
- [Writing Tests](writing-tests.md)
- [Test Organization Best Practices](../architecture/multi-agent-system.md#testing)

@ -0,0 +1,376 @@
# Running Tests

Complete guide for running the TradingAgents test suite.

## Prerequisites

Install test dependencies:

```bash
pip install -r requirements.txt
pytest --version  # Verify pytest is installed
```

## Basic Usage

### Run All Tests

```bash
pytest tests/
```

### Run with Verbose Output

```bash
pytest tests/ -v
```

### Run with Coverage

```bash
pytest tests/ --cov=tradingagents --cov-report=html
```

View the coverage report:

```bash
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
start htmlcov/index.html     # Windows
```

## Test Selection

### By Directory

```bash
# Unit tests only
pytest tests/unit/

# Integration tests only
pytest tests/integration/

# Regression tests only
pytest tests/regression/
```

### By File

```bash
pytest tests/unit/test_analysts.py
```

### By Test Function

```bash
pytest tests/unit/test_analysts.py::test_market_analyst_initialization
```

### By Test Class

```bash
pytest tests/unit/test_analysts.py::TestMarketAnalyst
```

### By Pattern

```bash
# Run all tests matching a pattern
pytest -k "analyst"

# Run tests NOT matching a pattern
pytest -k "not integration"

# Combine patterns
pytest -k "analyst and not integration"
```

### By Markers

```bash
# Smoke tests (critical path)
pytest -m smoke

# Integration tests
pytest -m integration

# Skip slow tests
pytest -m "not slow"
```

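Marker-based selection assumes the markers are registered with pytest; otherwise recent pytest versions warn about unknown marks. A minimal registration sketch, with marker names and descriptions assumed from the commands above:

```ini
# pytest.ini (hypothetical) — registers the custom markers used in this guide
[pytest]
markers =
    smoke: critical-path tests run as a CI gate
    integration: tests that exercise component interactions or external services
    regression: tests that guard previously fixed bugs
    slow: long-running tests excluded from quick runs
    api: tests that require real API keys
```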
## Output Options

### Minimal Output

```bash
pytest tests/ -q
```

### Show All Output

```bash
pytest tests/ -v -s
```

### Shorten Failure Tracebacks

```bash
pytest tests/ --tb=short
```

### Run Previously Failed Tests First

```bash
pytest tests/ --failed-first
```

### Stop on First Failure

```bash
pytest tests/ -x
```

### Stop After N Failures

```bash
pytest tests/ --maxfail=3
```

## Parallel Execution

Run tests in parallel for faster execution:

```bash
# Install pytest-xdist
pip install pytest-xdist

# Run with 4 workers
pytest tests/ -n 4

# Auto-detect the number of workers
pytest tests/ -n auto
```

## Test Coverage

### Generate Coverage Report

```bash
pytest tests/ --cov=tradingagents --cov-report=term-missing
```

### Coverage with HTML Report

```bash
pytest tests/ --cov=tradingagents --cov-report=html
```

### Coverage for a Specific Module

```bash
pytest tests/ --cov=tradingagents.agents --cov-report=term
```

### Minimum Coverage Threshold

```bash
pytest tests/ --cov=tradingagents --cov-fail-under=80
```

## Debugging Tests

### Drop into the Debugger on Failure

```bash
pytest tests/ --pdb
```

### Debug and Stop After the First Failure

```bash
pytest tests/ --pdb -x
```

### Show Local Variables on Failure

```bash
pytest tests/ -l
```

### Show Print Statements

```bash
pytest tests/ -s
```

## Environment Setup
|
||||
|
||||
### Set Environment Variables
|
||||
|
||||
```bash
|
||||
# For single test run
|
||||
OPENAI_API_KEY=test_key pytest tests/
|
||||
|
||||
# Or create .env.test
|
||||
cat > .env.test <<EOF
|
||||
OPENAI_API_KEY=test_key
|
||||
ALPHA_VANTAGE_API_KEY=test_key
|
||||
EOF
|
||||
|
||||
# Load in tests
|
||||
pytest tests/
|
||||
```

### Skip Tests Requiring API Keys

```bash
# Skip integration tests
pytest tests/ -m "not integration"
```

## Continuous Integration

### Pre-Commit Testing

Create `.git/hooks/pre-commit`:

```bash
#!/bin/bash
pytest tests/regression/smoke/ --maxfail=1 || exit 1
```

Make it executable:

```bash
chmod +x .git/hooks/pre-commit
```

### GitHub Actions

Tests run automatically on push and pull requests via `.github/workflows/tests.yml`.
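For reference, a workflow of roughly this shape does the job (hypothetical file content; the repository's actual `.github/workflows/tests.yml` is authoritative):

```bash
# Hypothetical minimal workflow -- illustration only; do not overwrite
# the repository's real .github/workflows/tests.yml with this.
mkdir -p .github/workflows
cat > .github/workflows/tests.yml <<'EOF'
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -e . pytest
      - run: pytest tests/ -m "not api"
EOF
```

Skipping `api`-marked tests in CI keeps the pipeline independent of external services.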

## Test Filtering

### By Test Tier

```bash
# Smoke tests only (fastest)
pytest -m smoke

# Regression tests
pytest -m regression

# Unit tests
pytest tests/unit/
```

### Skip Slow Tests

```bash
pytest -m "not slow"
```

### Skip External API Tests

```bash
pytest -m "not api"
```

## Common Workflows

### Quick Check

```bash
# Fast smoke tests
pytest -m smoke -v
```

### Pre-Commit

```bash
# Critical-path tests
pytest tests/regression/smoke/ -v
```

### Full Validation

```bash
# All tests with coverage
pytest tests/ --cov=tradingagents --cov-report=html -v
```

### Debugging a Failure

```bash
# Run a specific test under the debugger
pytest tests/unit/test_analysts.py::test_failing_test --pdb -s
```

## Performance

### Show Slowest Tests

```bash
pytest tests/ --durations=10
```

### Show All Test Durations

```bash
pytest tests/ --durations=0
```

## Output Formats

### JUnit XML (for CI)

```bash
pytest tests/ --junit-xml=test-results.xml
```

### JSON Report

```bash
pip install pytest-json-report
pytest tests/ --json-report --json-report-file=report.json
```

## Troubleshooting

### Tests Not Found

**Issue**: `ERROR: file or directory not found: tests/`

**Solution**: Run from the project root:

```bash
cd /path/to/TradingAgents
pytest tests/
```

### Import Errors

**Issue**: `ModuleNotFoundError: No module named 'tradingagents'`

**Solution**: Install the package in editable mode:

```bash
pip install -e .
```

### Fixture Not Found

**Issue**: `fixture 'mock_llm' not found`

**Solution**: Check that a `conftest.py` defining the fixture exists in the test directory (or a parent directory).
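A minimal `conftest.py` providing the missing fixture looks like this (the fixture body is illustrative, not the project's actual `mock_llm`):

```python
# tests/conftest.py -- sketch: fixtures defined here are discovered
# automatically by pytest for every test in this directory tree.
from unittest.mock import Mock

import pytest


def make_mock_llm():
    """Build a mocked LLM with a canned invoke() response."""
    llm = Mock()
    llm.invoke.return_value = Mock(content="mocked analysis")
    return llm


@pytest.fixture
def mock_llm():
    """Mocked LLM shared across tests."""
    return make_mock_llm()
```

Any test in the same directory tree can now take `mock_llm` as a parameter without importing it.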

### Slow Tests

**Issue**: Tests taking too long

**Solution**: Run specific categories or use parallel execution:

```bash
pytest tests/unit/ -n auto
```

## See Also

- [Testing Overview](README.md)
- [Writing Tests](writing-tests.md)
- [Test Organization Best Practices](../../.claude/skills/testing-guide/docs/test-organization-best-practices.md)

# Writing Tests

Guide for writing effective tests for TradingAgents.

## Test Structure

### Basic Test Pattern

```python
def test_function_name_scenario_expected():
    """Test description."""
    # Arrange - set up test data
    input_data = prepare_test_data()

    # Act - execute the code under test
    result = function_under_test(input_data)

    # Assert - verify the result
    assert result == expected_value
```

### Test Class Pattern

```python
class TestComponentName:
    """Test suite for ComponentName."""

    def test_initialization(self):
        """Test component can be initialized."""
        component = ComponentName()
        assert component is not None

    def test_specific_behavior(self):
        """Test specific behavior works correctly."""
        component = ComponentName()
        result = component.method()
        assert result == expected
```

## Unit Test Examples

### Testing Analyst Initialization

```python
from unittest.mock import Mock

from tradingagents.agents.analysts.market_analyst import MarketAnalyst


def test_market_analyst_initialization():
    """Test MarketAnalyst can be initialized."""
    # Arrange
    llm = Mock()
    tools = []

    # Act
    analyst = MarketAnalyst(llm, tools)

    # Assert
    assert analyst.name == "market"
    assert analyst.llm == llm
    assert analyst.tools == tools
```

### Testing with Mocked LLM

```python
def test_analyst_generates_report():
    """Test analyst generates analysis report."""
    # Arrange
    llm = Mock()
    llm.invoke.return_value = Mock(
        content="Technical analysis shows bullish trend..."
    )

    tools = [Mock(name="get_stock_data")]
    analyst = MarketAnalyst(llm, tools)

    # Act
    report = analyst.analyze("NVDA", "2024-05-10")

    # Assert
    assert "bullish" in report.lower()
    assert llm.invoke.called
```

### Testing Data Flows

```python
import pytest

from tradingagents.dataflows.yfinance import yfinance_get_stock_data


@pytest.mark.api  # hits the real yfinance API; exclude with -m "not api"
def test_yfinance_get_stock_data():
    """Test yfinance returns stock data in the expected format."""
    # Act
    data = yfinance_get_stock_data("NVDA", "2024-01-01", "2024-01-10")

    # Assert
    assert "dates" in data
    assert "close" in data
    assert len(data["close"]) > 0
    assert all(isinstance(price, (int, float)) for price in data["close"])
```

## Integration Test Examples

### Testing Graph Workflow

```python
import pytest

from tradingagents.graph.trading_graph import TradingAgentsGraph


@pytest.fixture
def minimal_config():
    """Minimal configuration for testing."""
    from tradingagents.default_config import DEFAULT_CONFIG

    config = DEFAULT_CONFIG.copy()
    config["max_debate_rounds"] = 1
    return config


def test_graph_propagation(minimal_config):
    """Test graph can run a full propagation."""
    # Arrange
    ta = TradingAgentsGraph(
        selected_analysts=["market"],
        config=minimal_config
    )

    # Act
    state, decision = ta.propagate("NVDA", "2024-05-10")

    # Assert
    assert decision is not None
    assert decision["action"] in ["BUY", "SELL", "HOLD"]
    assert 0.0 <= decision["confidence_score"] <= 1.0
```

### Testing LLM Provider Integration

```python
@pytest.fixture
def openrouter_config():
    """Configuration for the OpenRouter provider."""
    from tradingagents.default_config import DEFAULT_CONFIG

    config = DEFAULT_CONFIG.copy()
    config["llm_provider"] = "openrouter"
    config["deep_think_llm"] = "anthropic/claude-sonnet-4.5"
    return config


@pytest.fixture
def mock_env_openrouter(monkeypatch):
    """Mock OpenRouter API key."""
    monkeypatch.setenv("OPENROUTER_API_KEY", "test_key")
    monkeypatch.setenv("OPENAI_API_KEY", "test_key")


def test_openrouter_initialization(openrouter_config, mock_env_openrouter):
    """Test OpenRouter provider can be initialized."""
    ta = TradingAgentsGraph(config=openrouter_config)

    assert ta.deep_thinking_llm is not None
    assert ta.quick_thinking_llm is not None
```

## Using Fixtures

### Simple Fixture

```python
@pytest.fixture
def sample_stock_data():
    """Sample stock data for testing."""
    return {
        "dates": ["2024-01-01", "2024-01-02", "2024-01-03"],
        "close": [150.0, 151.0, 152.0],
        "volume": [1000000, 1100000, 1200000]
    }


def test_using_fixture(sample_stock_data):
    """Test using a fixture."""
    assert len(sample_stock_data["close"]) == 3
```

### Fixture with Cleanup

```python
@pytest.fixture
def temp_cache_dir(tmp_path):
    """Temporary cache directory."""
    cache_dir = tmp_path / "cache"
    cache_dir.mkdir()

    yield cache_dir

    # Cleanup happens automatically with tmp_path
```

### Parametrized Fixture

```python
import os


@pytest.fixture(params=["openai", "anthropic", "google"])
def llm_provider(request):
    """Test with multiple LLM providers."""
    return request.param


def test_provider_initialization(llm_provider):
    """Test all providers can be initialized."""
    config = DEFAULT_CONFIG.copy()
    config["llm_provider"] = llm_provider

    # Skip if the API key is not available
    if llm_provider == "anthropic" and not os.getenv("ANTHROPIC_API_KEY"):
        pytest.skip("ANTHROPIC_API_KEY not set")

    ta = TradingAgentsGraph(config=config)
    assert ta.deep_thinking_llm is not None
```

## Mocking

### Mocking LLM Responses

```python
from unittest.mock import Mock, patch


def test_with_mocked_llm():
    """Test with mocked LLM responses."""
    llm = Mock()
    llm.invoke.return_value = Mock(content="Mocked response")

    analyst = Analyst(llm)
    response = analyst.analyze("NVDA", "2024-05-10")

    assert "Mocked response" in response
    llm.invoke.assert_called_once()
```

### Mocking Data Vendors

```python
# Patch the vendor function where the code under test looks it up;
# get_stock_data is the caller that routes to the vendor internally.
@patch('tradingagents.dataflows.yfinance.yfinance_get_stock_data')
def test_with_mocked_data(mock_get_stock_data):
    """Test with mocked data vendor."""
    # Set up the mock
    mock_get_stock_data.return_value = {
        "dates": ["2024-01-01"],
        "close": [150.0]
    }

    # Test
    data = get_stock_data("NVDA", "2024-01-01", "2024-01-01")

    assert data["close"] == [150.0]
    mock_get_stock_data.assert_called_once()
```

### Mocking Environment Variables

```python
import os


def test_with_mocked_env(monkeypatch):
    """Test with mocked environment variables."""
    monkeypatch.setenv("OPENAI_API_KEY", "test_key")
    monkeypatch.setenv("ALPHA_VANTAGE_API_KEY", "test_key")

    # The variables are set for the duration of this test only
    assert os.getenv("OPENAI_API_KEY") == "test_key"
```

## Test Markers

### Marking Tests

```python
@pytest.mark.smoke
def test_critical_path():
    """Critical path test."""
    pass


@pytest.mark.integration
def test_integration_workflow():
    """Integration test."""
    pass


@pytest.mark.slow
def test_expensive_operation():
    """Slow test."""
    pass


@pytest.mark.skip(reason="Not implemented yet")
def test_future_feature():
    """Test for future feature."""
    pass
```
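Custom markers such as `smoke` should be registered so `pytest -m smoke` runs without `PytestUnknownMarkWarning`. One way (a sketch, assuming these marker names) is a `pytest_configure` hook in `conftest.py`:

```python
# conftest.py -- register the custom markers used above so pytest
# does not warn about unknown marks.
def pytest_configure(config):
    for line in (
        "smoke: critical-path smoke tests",
        "integration: tests spanning multiple components",
        "slow: long-running tests",
        "api: tests that hit external APIs",
    ):
        config.addinivalue_line("markers", line)
```

The same registrations can live under a `markers =` key in `pytest.ini` instead.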

### Conditional Skip

```python
import os


@pytest.mark.skipif(
    not os.getenv("OPENAI_API_KEY"),
    reason="OPENAI_API_KEY not set"
)
def test_requiring_api_key():
    """Test that requires an API key."""
    pass
```

## Parameterized Tests

### Simple Parameterization

```python
@pytest.mark.parametrize("ticker,expected_valid", [
    ("NVDA", True),
    ("AAPL", True),
    ("INVALID123", False),
    ("", False),
])
def test_ticker_validation(ticker, expected_valid):
    """Test ticker validation with multiple inputs."""
    result = validate_ticker(ticker)
    assert result == expected_valid
```

### Multiple Parameters

```python
@pytest.mark.parametrize("provider,model", [
    ("openai", "gpt-4o-mini"),
    ("anthropic", "claude-sonnet-4-20250514"),
    ("google", "gemini-2.0-flash"),
])
def test_llm_provider_models(provider, model):
    """Test different provider/model combinations."""
    config = DEFAULT_CONFIG.copy()
    config["llm_provider"] = provider
    config["quick_think_llm"] = model

    ta = TradingAgentsGraph(config=config)
    assert ta.quick_thinking_llm is not None
```

## Error Testing

### Testing Exceptions

```python
def test_missing_api_key_raises_error():
    """Test error when API key is missing."""
    config = DEFAULT_CONFIG.copy()
    config["llm_provider"] = "openai"

    # Clear environment variables
    with patch.dict(os.environ, {}, clear=True):
        with pytest.raises(ValueError, match="OPENAI_API_KEY"):
            TradingAgentsGraph(config=config)
```

### Testing Error Messages

```python
def test_invalid_ticker_error_message():
    """Test error message for an invalid ticker."""
    with pytest.raises(ValueError) as exc_info:
        validate_ticker("123INVALID")

    assert "Invalid ticker format" in str(exc_info.value)
```

## Best Practices

1. **Clear Test Names**: Use descriptive names following `test_<what>_<scenario>_<expected>`
2. **One Assertion Per Test**: Focus on a single behavior
3. **Use Fixtures**: Avoid code duplication in setup
4. **Mock External Dependencies**: Don't hit real APIs in unit tests
5. **Test Edge Cases**: Include boundary conditions
6. **Document Complex Tests**: Add docstrings explaining what's being tested
7. **Keep Tests Fast**: Unit tests should run in under one second
8. **Independent Tests**: Each test should run in isolation
9. **Meaningful Assertions**: Assert specific values, not just "not None"
10. **Clean Up**: Use fixtures for setup/teardown
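
Put together, a test following these practices might look like this (the names are illustrative, not actual project APIs):

```python
# Illustrative only: validate_ticker stands in for any small pure function
# under test; the test name follows test_<what>_<scenario>_<expected>.
import pytest


def validate_ticker(ticker: str) -> bool:
    """Toy implementation used only for this example."""
    return ticker.isalpha() and 1 <= len(ticker) <= 5


@pytest.mark.parametrize("ticker", ["", "123", "TOOLONGNAME"])
def test_validate_ticker_malformed_input_returns_false(ticker):
    """Malformed tickers are rejected (edge cases, one behavior per test)."""
    assert validate_ticker(ticker) is False
```

Each case exercises a single boundary condition, and parametrization keeps the edge cases visible at a glance.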

## Common Patterns

### Testing Configuration

```python
def test_configuration_override():
    """Test configuration can be overridden."""
    config = DEFAULT_CONFIG.copy()
    config["max_debate_rounds"] = 5

    ta = TradingAgentsGraph(config=config)

    assert ta.config["max_debate_rounds"] == 5
```

### Testing State Management

```python
def test_agent_state_update():
    """Test agent state is updated correctly."""
    state = AgentState(ticker="NVDA", date="2024-05-10")

    state.analyst_reports["market"] = "Test report"

    assert "market" in state.analyst_reports
    assert state.analyst_reports["market"] == "Test report"
```

### Testing Retry Logic

```python
def test_retry_on_rate_limit():
    """Test retry logic on a rate-limit error."""
    from tradingagents.utils.exceptions import LLMRateLimitError

    llm = Mock()
    llm.invoke.side_effect = [
        LLMRateLimitError(provider="openai", retry_after=1),
        Mock(content="Success")
    ]

    # The function should retry and succeed
    result = function_with_retry(llm)

    assert "Success" in result
    assert llm.invoke.call_count == 2
```

## See Also

- [Testing Overview](README.md)
- [Running Tests](running-tests.md)
- [Test Organization Best Practices](../../.claude/skills/testing-guide/docs/test-organization-best-practices.md)

"""
|
||||
Test suite for documentation structure validation.
|
||||
|
||||
This module tests:
|
||||
1. Documentation directory structure exists
|
||||
2. Required documentation files are present
|
||||
3. Documentation files have valid markdown structure
|
||||
4. Internal links resolve correctly
|
||||
5. No sensitive information (API keys, secrets) in docs
|
||||
6. Documentation follows consistent formatting
|
||||
7. Code examples in docs are valid
|
||||
|
||||
Tests are written TDD-style and will fail until documentation is created.
|
||||
"""
|
||||
|
||||
import os
|
||||
import re
|
||||
from pathlib import Path
|
||||
from typing import List, Set, Tuple
|
||||
import pytest
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Fixtures and Constants
|
||||
# ============================================================================
|
||||
|
||||
@pytest.fixture
|
||||
def project_root() -> Path:
|
||||
"""Get the project root directory."""
|
||||
# Navigate up from tests/ to project root
|
||||
return Path(__file__).parent.parent
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def docs_root(project_root: Path) -> Path:
|
||||
"""Get the documentation root directory."""
|
||||
return project_root / "docs"
|
||||
|
||||
|
||||
# Expected documentation structure
|
||||
REQUIRED_DOCS_STRUCTURE = {
|
||||
# Root documentation files
|
||||
"docs/README.md": "Main documentation index",
|
||||
"docs/QUICKSTART.md": "Quick start guide",
|
||||
|
||||
# Architecture documentation
|
||||
"docs/architecture/multi-agent-system.md": "Multi-agent system architecture",
|
||||
"docs/architecture/data-flow.md": "Data flow documentation",
|
||||
"docs/architecture/llm-integration.md": "LLM integration architecture",
|
||||
|
||||
# API documentation
|
||||
"docs/api/trading-graph.md": "Trading graph API reference",
|
||||
"docs/api/agents.md": "Agents API reference",
|
||||
"docs/api/dataflows.md": "Data flows API reference",
|
||||
|
||||
# User guides
|
||||
"docs/guides/adding-new-analyst.md": "Guide for adding new analyst agents",
|
||||
"docs/guides/adding-llm-provider.md": "Guide for adding new LLM providers",
|
||||
"docs/guides/configuration.md": "Configuration guide",
|
||||
|
||||
# Testing documentation
|
||||
"docs/testing/README.md": "Testing documentation index",
|
||||
"docs/testing/running-tests.md": "Guide for running tests",
|
||||
"docs/testing/writing-tests.md": "Guide for writing tests",
|
||||
|
||||
# Development documentation
|
||||
"docs/development/setup.md": "Development environment setup",
|
||||
"docs/development/contributing.md": "Contribution guidelines",
|
||||
}
|
||||
|
||||
# Patterns for detecting sensitive information
|
||||
SENSITIVE_PATTERNS = [
|
||||
(r"sk-[a-zA-Z0-9]{32,}", "OpenAI API key"),
|
||||
(r"sk-or-v1-[a-zA-Z0-9]{32,}", "OpenRouter API key"),
|
||||
(r"sk-ant-[a-zA-Z0-9]{32,}", "Anthropic API key"),
|
||||
(r"ghp_[a-zA-Z0-9]{36,}", "GitHub Personal Access Token"),
|
||||
(r"gho_[a-zA-Z0-9]{36,}", "GitHub OAuth Token"),
|
||||
(r"[a-zA-Z0-9]{40}", "Generic 40-char secret (potential GitHub token)"),
|
||||
(r"(?i)password\s*[=:]\s*['\"][^'\"]+['\"]", "Hardcoded password"),
|
||||
(r"(?i)secret\s*[=:]\s*['\"][^'\"]+['\"]", "Hardcoded secret"),
|
||||
(r"(?i)api[_-]?key\s*[=:]\s*['\"][^'\"]+['\"]", "Hardcoded API key"),
|
||||
]
|
||||
|
||||
# Required markdown headers for each document type
|
||||
REQUIRED_HEADERS = {
|
||||
"README.md": ["# ", "## "], # Must have at least h1 and h2
|
||||
".md": ["# "], # All other markdown files must have at least h1
|
||||
}


# ============================================================================
# Structure Tests
# ============================================================================

class TestDocumentationStructure:
    """Test that the documentation directory structure exists and is complete."""

    def test_docs_root_exists(self, docs_root: Path):
        """Test that the docs/ directory exists."""
        assert docs_root.exists(), (
            f"Documentation root directory not found at {docs_root}. "
            "Create docs/ directory to start."
        )
        assert docs_root.is_dir(), f"{docs_root} exists but is not a directory"

    def test_all_required_files_exist(self, docs_root: Path):
        """Test that all required documentation files exist."""
        missing_files = []

        for doc_path, description in REQUIRED_DOCS_STRUCTURE.items():
            full_path = docs_root.parent / doc_path
            if not full_path.exists():
                missing_files.append(f"{doc_path} - {description}")

        assert not missing_files, (
            f"Missing {len(missing_files)} required documentation files:\n" +
            "\n".join(f"  - {f}" for f in missing_files)
        )

    def test_all_required_directories_exist(self, docs_root: Path):
        """Test that all required documentation subdirectories exist."""
        required_dirs = [
            "architecture",
            "api",
            "guides",
            "testing",
            "development",
        ]

        missing_dirs = []
        for dir_name in required_dirs:
            dir_path = docs_root / dir_name
            if not dir_path.exists():
                missing_dirs.append(dir_name)
            elif not dir_path.is_dir():
                missing_dirs.append(f"{dir_name} (exists but not a directory)")

        assert not missing_dirs, (
            "Missing required documentation directories:\n" +
            "\n".join(f"  - docs/{d}" for d in missing_dirs)
        )

    def test_no_empty_files(self, docs_root: Path):
        """Test that no documentation files are empty."""
        empty_files = []

        for doc_path in REQUIRED_DOCS_STRUCTURE.keys():
            full_path = docs_root.parent / doc_path
            if full_path.exists() and full_path.stat().st_size == 0:
                empty_files.append(doc_path)

        assert not empty_files, (
            f"Found {len(empty_files)} empty documentation files:\n" +
            "\n".join(f"  - {f}" for f in empty_files)
        )


# ============================================================================
# Content Validation Tests
# ============================================================================

class TestMarkdownStructure:
    """Test that documentation files have valid markdown structure."""

    def test_all_files_have_required_headers(self, docs_root: Path):
        """Test that all markdown files have required header levels."""
        files_missing_headers = []

        for doc_path in REQUIRED_DOCS_STRUCTURE.keys():
            full_path = docs_root.parent / doc_path
            if not full_path.exists():
                continue  # Skip missing files (covered by structure tests)

            content = full_path.read_text(encoding="utf-8")

            # Determine required headers based on the filename
            filename = full_path.name
            required = REQUIRED_HEADERS.get(filename, REQUIRED_HEADERS[".md"])

            missing_headers = []
            for header_prefix in required:
                if not any(line.startswith(header_prefix) for line in content.splitlines()):
                    missing_headers.append(header_prefix.strip())

            if missing_headers:
                files_missing_headers.append(
                    f"{doc_path}: missing {', '.join(missing_headers)}"
                )

        assert not files_missing_headers, (
            "Files with missing required headers:\n" +
            "\n".join(f"  - {f}" for f in files_missing_headers)
        )

    def test_markdown_has_valid_code_blocks(self, docs_root: Path):
        """Test that markdown code blocks are properly closed."""
        files_with_unclosed_blocks = []

        for doc_path in REQUIRED_DOCS_STRUCTURE.keys():
            full_path = docs_root.parent / doc_path
            if not full_path.exists():
                continue

            content = full_path.read_text(encoding="utf-8")

            # Count code block delimiters (```)
            code_block_count = content.count("```")

            # Delimiters must come in pairs
            if code_block_count % 2 != 0:
                files_with_unclosed_blocks.append(
                    f"{doc_path} (found {code_block_count} ``` markers)"
                )

        assert not files_with_unclosed_blocks, (
            "Files with unclosed code blocks:\n" +
            "\n".join(f"  - {f}" for f in files_with_unclosed_blocks)
        )

    def test_readme_has_table_of_contents(self, docs_root: Path):
        """Test that the main README has a table of contents."""
        readme_path = docs_root / "README.md"

        if not readme_path.exists():
            pytest.skip("README.md does not exist yet")

        content = readme_path.read_text(encoding="utf-8").lower()

        # Look for common TOC indicators
        has_toc = any(
            indicator in content
            for indicator in [
                "table of contents",
                "## contents",
                "## overview",
                "[architecture]",
                "[api reference]",
                "[guides]",
            ]
        )

        assert has_toc, (
            "docs/README.md should include a table of contents or overview section "
            "linking to major documentation sections"
        )

    def test_quickstart_has_installation_steps(self, docs_root: Path):
        """Test that QUICKSTART has installation/setup steps."""
        quickstart_path = docs_root / "QUICKSTART.md"

        if not quickstart_path.exists():
            pytest.skip("QUICKSTART.md does not exist yet")

        content = quickstart_path.read_text(encoding="utf-8").lower()

        # Look for installation-related content
        has_installation = any(
            keyword in content
            for keyword in [
                "install",
                "pip install",
                "setup",
                "requirements",
                "getting started",
            ]
        )

        assert has_installation, (
            "docs/QUICKSTART.md should include installation or setup instructions"
        )

# ============================================================================
# Cross-Reference Tests
# ============================================================================

class TestDocumentationLinks:
    """Test that internal documentation links are valid."""

    def _extract_markdown_links(self, content: str) -> List[Tuple[str, str]]:
        """Extract all markdown links from content.

        Returns:
            List of (link_text, link_url) tuples
        """
        # Match the [text](url) pattern
        link_pattern = r'\[([^\]]+)\]\(([^)]+)\)'
        return re.findall(link_pattern, content)

    def _is_external_link(self, url: str) -> bool:
        """Check whether a URL is external (http/https/mailto)."""
        return url.startswith(('http://', 'https://', 'mailto:'))

    def _resolve_relative_link(
        self, base_path: Path, link_url: str
    ) -> Path:
        """Resolve a relative link from a base document path.

        Args:
            base_path: Path to the document containing the link
            link_url: The relative URL from the link

        Returns:
            Resolved absolute path
        """
        # Remove anchor fragments
        link_url = link_url.split('#')[0]

        if not link_url:  # Just an anchor link
            return base_path

        # Resolve relative to the directory containing the base file
        base_dir = base_path.parent
        return (base_dir / link_url).resolve()

    def test_internal_links_resolve(self, docs_root: Path):
        """Test that all internal documentation links resolve to existing files."""
        broken_links = []

        for doc_path in REQUIRED_DOCS_STRUCTURE.keys():
            full_path = docs_root.parent / doc_path
            if not full_path.exists():
                continue

            content = full_path.read_text(encoding="utf-8")
            links = self._extract_markdown_links(content)

            for link_text, link_url in links:
                # Skip external links
                if self._is_external_link(link_url):
                    continue

                # Resolve the relative link
                target_path = self._resolve_relative_link(full_path, link_url)

                # Check whether the target exists
                if not target_path.exists():
                    broken_links.append(
                        f"{doc_path}: [{link_text}]({link_url}) -> {target_path}"
                    )

        assert not broken_links, (
            f"Found {len(broken_links)} broken internal links:\n" +
            "\n".join(f"  - {link}" for link in broken_links)
        )

    def test_readme_links_to_main_sections(self, docs_root: Path):
        """Test that the main README links to all major documentation sections."""
        readme_path = docs_root / "README.md"

        if not readme_path.exists():
            pytest.skip("README.md does not exist yet")

        content = readme_path.read_text(encoding="utf-8")
        links = self._extract_markdown_links(content)
        link_urls = [url for _, url in links]

        # Required sections that should be linked
        required_links = [
            ("architecture", "Architecture documentation"),
            ("api", "API documentation"),
            ("guides", "User guides"),
            ("testing", "Testing documentation"),
        ]

        missing_links = []
        for section, description in required_links:
            # Check whether any link points to this section
            has_link = any(section in url.lower() for url in link_urls)
            if not has_link:
                missing_links.append(f"{section}/ - {description}")

        assert not missing_links, (
            "README.md missing links to major sections:\n" +
            "\n".join(f"  - {link}" for link in missing_links)
        )


# ============================================================================
# Security Tests
# ============================================================================

class TestDocumentationSecurity:
    """Test that documentation contains no sensitive information."""

    def test_no_api_keys_in_docs(self, docs_root: Path):
        """Test that documentation files contain no API keys or secrets."""
        files_with_secrets = []

        for doc_path in REQUIRED_DOCS_STRUCTURE.keys():
            full_path = docs_root.parent / doc_path
            if not full_path.exists():
                continue

            content = full_path.read_text(encoding="utf-8")

            # Check against all sensitive patterns
            for pattern, secret_type in SENSITIVE_PATTERNS:
                matches = re.finditer(pattern, content)
                for match in matches:
                    # Skip if it's clearly an example/placeholder
                    matched_text = match.group(0)
                    if self._is_placeholder(matched_text):
                        continue

                    files_with_secrets.append(
                        f"{doc_path}: Found {secret_type}: {matched_text[:20]}..."
                    )

        assert not files_with_secrets, (
            "Found potential secrets in documentation:\n" +
            "\n".join(f"  - {s}" for s in files_with_secrets) +
            "\n\nUse placeholders like 'your-api-key-here' or 'sk-xxx' instead."
        )

    def _is_placeholder(self, text: str) -> bool:
        """Check whether text is likely a placeholder rather than a real secret.

        Args:
            text: The potentially sensitive text

        Returns:
            True if the text appears to be a placeholder
        """
        placeholder_indicators = [
            "xxx",
            "your-",
            "example",
            "placeholder",
            "replace",
            "insert",
            "paste",
            "...",
        ]

        text_lower = text.lower()
        return any(indicator in text_lower for indicator in placeholder_indicators)

    def test_env_examples_use_placeholders(self, docs_root: Path):
        """Test that .env examples in docs use placeholders, not real values."""
        files_with_real_values = []

        # Pattern to match environment variable assignments
        env_var_pattern = r'^([A-Z_]+)=(.+)$'

        for doc_path in REQUIRED_DOCS_STRUCTURE.keys():
            full_path = docs_root.parent / doc_path
            if not full_path.exists():
                continue

            content = full_path.read_text(encoding="utf-8")

            # Find code blocks that might contain .env examples
            code_blocks = re.findall(r'```(?:bash|shell|env)?\n(.*?)```', content, re.DOTALL)

            for block in code_blocks:
                for line in block.splitlines():
                    match = re.match(env_var_pattern, line.strip())
                    if match:
                        var_name, var_value = match.groups()

                        # Check whether the value looks like a real key
                        if (
                            var_name.endswith(('_KEY', '_TOKEN', '_SECRET'))
                            and not self._is_placeholder(var_value)
                            and len(var_value) > 20  # Real keys are typically longer
                        ):
                            files_with_real_values.append(
                                f"{doc_path}: {var_name}={var_value[:20]}..."
                            )

        assert not files_with_real_values, (
            "Found environment variables with potentially real values:\n" +
            "\n".join(f"  - {v}" for v in files_with_real_values) +
            "\n\nUse placeholders in documentation."
        )


# ============================================================================
# Code Example Tests
# ============================================================================

class TestCodeExamples:
    """Test that code examples in documentation are valid."""

    def _extract_code_blocks(self, content: str, language: str = None) -> List[str]:
|
||||
"""Extract code blocks from markdown content.
|
||||
|
||||
Args:
|
||||
content: Markdown content
|
||||
language: Optional language filter (e.g., 'python')
|
||||
|
||||
Returns:
|
||||
List of code block contents
|
||||
"""
|
||||
if language:
|
||||
pattern = rf'```{language}\n(.*?)```'
|
||||
else:
|
||||
pattern = r'```(?:\w+)?\n(.*?)```'
|
||||
|
||||
return re.findall(pattern, content, re.DOTALL)
|
||||
|
||||
def test_python_code_examples_have_valid_syntax(self, docs_root: Path):
|
||||
"""Test that Python code examples have valid syntax."""
|
||||
files_with_syntax_errors = []
|
||||
|
||||
for doc_path in REQUIRED_DOCS_STRUCTURE.keys():
|
||||
full_path = docs_root.parent / doc_path
|
||||
if not full_path.exists():
|
||||
continue
|
||||
|
||||
content = full_path.read_text(encoding="utf-8")
|
||||
python_blocks = self._extract_code_blocks(content, "python")
|
||||
|
||||
for i, code_block in enumerate(python_blocks):
|
||||
try:
|
||||
# Try to compile the code (doesn't execute it)
|
||||
compile(code_block, f"{doc_path}:block{i}", "exec")
|
||||
except SyntaxError as e:
|
||||
files_with_syntax_errors.append(
|
||||
f"{doc_path} (block {i}): {e.msg} at line {e.lineno}"
|
||||
)
|
||||
|
||||
assert not files_with_syntax_errors, (
|
||||
f"Found Python code blocks with syntax errors:\n" +
|
||||
"\n".join(f" - {err}" for err in files_with_syntax_errors)
|
||||
)
|
||||
|
||||
def test_code_examples_use_project_imports(self, docs_root: Path):
|
||||
"""Test that code examples use correct import paths."""
|
||||
files_with_wrong_imports = []
|
||||
|
||||
# Expected import prefix for this project
|
||||
expected_prefix = "tradingagents"
|
||||
|
||||
for doc_path in REQUIRED_DOCS_STRUCTURE.keys():
|
||||
full_path = docs_root.parent / doc_path
|
||||
if not full_path.exists():
|
||||
continue
|
||||
|
||||
content = full_path.read_text(encoding="utf-8")
|
||||
python_blocks = self._extract_code_blocks(content, "python")
|
||||
|
||||
for i, code_block in enumerate(python_blocks):
|
||||
# Look for import statements
|
||||
import_lines = [
|
||||
line for line in code_block.splitlines()
|
||||
if line.strip().startswith(('import ', 'from '))
|
||||
]
|
||||
|
||||
for line in import_lines:
|
||||
# Check if it's importing from this project
|
||||
if 'tradingagents' in line.lower() and expected_prefix not in line:
|
||||
files_with_wrong_imports.append(
|
||||
f"{doc_path} (block {i}): {line.strip()}"
|
||||
)
|
||||
|
||||
assert not files_with_wrong_imports, (
|
||||
f"Found code examples with incorrect import paths:\n" +
|
||||
"\n".join(f" - {imp}" for imp in files_with_wrong_imports) +
|
||||
f"\n\nAll imports should use '{expected_prefix}' prefix."
|
||||
)
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Content Quality Tests
|
||||
# ============================================================================
|
||||
|
||||
class TestDocumentationQuality:
|
||||
"""Test documentation quality and completeness."""
|
||||
|
||||
def test_architecture_docs_describe_key_components(self, docs_root: Path):
|
||||
"""Test that architecture docs describe key system components."""
|
||||
arch_path = docs_root / "architecture" / "multi-agent-system.md"
|
||||
|
||||
if not arch_path.exists():
|
||||
pytest.skip("Architecture documentation does not exist yet")
|
||||
|
||||
content = arch_path.read_text(encoding="utf-8").lower()
|
||||
|
||||
# Key components that should be documented
|
||||
required_components = [
|
||||
"agent",
|
||||
"graph",
|
||||
"state",
|
||||
"workflow",
|
||||
]
|
||||
|
||||
missing_components = []
|
||||
for component in required_components:
|
||||
if component not in content:
|
||||
missing_components.append(component)
|
||||
|
||||
assert not missing_components, (
|
||||
f"Architecture documentation missing key components:\n" +
|
||||
"\n".join(f" - {c}" for c in missing_components)
|
||||
)
|
||||
|
||||
def test_api_docs_include_code_examples(self, docs_root: Path):
|
||||
"""Test that API documentation includes code examples."""
|
||||
api_files = [
|
||||
"docs/api/trading-graph.md",
|
||||
"docs/api/agents.md",
|
||||
"docs/api/dataflows.md",
|
||||
]
|
||||
|
||||
files_without_examples = []
|
||||
|
||||
for doc_path in api_files:
|
||||
full_path = docs_root.parent / doc_path
|
||||
if not full_path.exists():
|
||||
continue
|
||||
|
||||
content = full_path.read_text(encoding="utf-8")
|
||||
|
||||
# Check for code blocks
|
||||
has_code = "```" in content
|
||||
|
||||
if not has_code:
|
||||
files_without_examples.append(doc_path)
|
||||
|
||||
assert not files_without_examples, (
|
||||
f"API documentation files missing code examples:\n" +
|
||||
"\n".join(f" - {f}" for f in files_without_examples)
|
||||
)
|
||||
|
||||
def test_guides_have_step_by_step_instructions(self, docs_root: Path):
|
||||
"""Test that guides include step-by-step instructions."""
|
||||
guide_files = [
|
||||
"docs/guides/adding-new-analyst.md",
|
||||
"docs/guides/adding-llm-provider.md",
|
||||
"docs/guides/configuration.md",
|
||||
]
|
||||
|
||||
files_without_steps = []
|
||||
|
||||
for doc_path in guide_files:
|
||||
full_path = docs_root.parent / doc_path
|
||||
if not full_path.exists():
|
||||
continue
|
||||
|
||||
content = full_path.read_text(encoding="utf-8").lower()
|
||||
|
||||
# Look for step indicators
|
||||
has_steps = any(
|
||||
indicator in content
|
||||
for indicator in [
|
||||
"step 1",
|
||||
"1.",
|
||||
"first,",
|
||||
"## setup",
|
||||
"## installation",
|
||||
]
|
||||
)
|
||||
|
||||
if not has_steps:
|
||||
files_without_steps.append(doc_path)
|
||||
|
||||
assert not files_without_steps, (
|
||||
f"Guide files missing step-by-step instructions:\n" +
|
||||
"\n".join(f" - {f}" for f in files_without_steps)
|
||||
)
|
||||
|
||||
def test_contributing_guide_exists_and_complete(self, docs_root: Path):
|
||||
"""Test that contributing guide exists and covers key topics."""
|
||||
contrib_path = docs_root / "development" / "contributing.md"
|
||||
|
||||
if not contrib_path.exists():
|
||||
pytest.skip("Contributing guide does not exist yet")
|
||||
|
||||
content = contrib_path.read_text(encoding="utf-8").lower()
|
||||
|
||||
# Key topics for contributing guide
|
||||
required_topics = [
|
||||
("pull request", "Pull request guidelines"),
|
||||
("test", "Testing requirements"),
|
||||
("code", "Code standards"),
|
||||
]
|
||||
|
||||
missing_topics = []
|
||||
for keyword, topic in required_topics:
|
||||
if keyword not in content:
|
||||
missing_topics.append(topic)
|
||||
|
||||
assert not missing_topics, (
|
||||
f"Contributing guide missing key topics:\n" +
|
||||
"\n".join(f" - {t}" for t in missing_topics)
|
||||
)
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Integration Tests
|
||||
# ============================================================================
|
||||
|
||||
class TestDocumentationIntegration:
|
||||
"""Test documentation integrates properly with project."""
|
||||
|
||||
def test_docs_referenced_in_main_readme(self, project_root: Path):
|
||||
"""Test that main project README references the documentation."""
|
||||
main_readme = project_root / "README.md"
|
||||
|
||||
if not main_readme.exists():
|
||||
pytest.skip("Main README.md does not exist")
|
||||
|
||||
content = main_readme.read_text(encoding="utf-8").lower()
|
||||
|
||||
# Should reference docs directory
|
||||
has_docs_reference = any(
|
||||
ref in content
|
||||
for ref in [
|
||||
"docs/",
|
||||
"documentation",
|
||||
"[docs]",
|
||||
"see docs",
|
||||
]
|
||||
)
|
||||
|
||||
assert has_docs_reference, (
|
||||
"Main README.md should reference the docs/ directory or documentation"
|
||||
)
|
||||
|
||||
def test_all_public_apis_documented(self, project_root: Path, docs_root: Path):
|
||||
"""Test that all public APIs have corresponding documentation."""
|
||||
# This is a basic check - could be enhanced with AST parsing
|
||||
api_doc_path = docs_root / "api"
|
||||
|
||||
if not api_doc_path.exists():
|
||||
pytest.skip("API documentation directory does not exist yet")
|
||||
|
||||
# Check that major modules have API docs
|
||||
major_modules = [
|
||||
("graph/trading_graph.py", "trading-graph.md"),
|
||||
("agents/", "agents.md"),
|
||||
("dataflows/", "dataflows.md"),
|
||||
]
|
||||
|
||||
missing_docs = []
|
||||
for module_path, expected_doc in major_modules:
|
||||
module_full_path = project_root / "tradingagents" / module_path
|
||||
doc_full_path = api_doc_path / expected_doc
|
||||
|
||||
if module_full_path.exists() and not doc_full_path.exists():
|
||||
missing_docs.append(f"{expected_doc} for {module_path}")
|
||||
|
||||
assert not missing_docs, (
|
||||
f"Missing API documentation for modules:\n" +
|
||||
"\n".join(f" - {d}" for d in missing_docs)
|
||||
)
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Performance Tests
|
||||
# ============================================================================
|
||||
|
||||
class TestDocumentationSize:
|
||||
"""Test that documentation files are reasonable in size."""
|
||||
|
||||
def test_no_excessively_large_files(self, docs_root: Path):
|
||||
"""Test that no documentation files are excessively large."""
|
||||
max_size_kb = 500 # 500 KB max per file
|
||||
large_files = []
|
||||
|
||||
for doc_path in REQUIRED_DOCS_STRUCTURE.keys():
|
||||
full_path = docs_root.parent / doc_path
|
||||
if not full_path.exists():
|
||||
continue
|
||||
|
||||
size_kb = full_path.stat().st_size / 1024
|
||||
if size_kb > max_size_kb:
|
||||
large_files.append(f"{doc_path}: {size_kb:.1f} KB")
|
||||
|
||||
assert not large_files, (
|
||||
f"Found excessively large documentation files (>{max_size_kb} KB):\n" +
|
||||
"\n".join(f" - {f}" for f in large_files) +
|
||||
f"\n\nConsider splitting large files into smaller documents."
|
||||
)
|
||||
|
||||
def test_reasonable_line_length(self, docs_root: Path):
|
||||
"""Test that documentation lines are reasonable length."""
|
||||
max_line_length = 120
|
||||
files_with_long_lines = []
|
||||
|
||||
for doc_path in REQUIRED_DOCS_STRUCTURE.keys():
|
||||
full_path = docs_root.parent / doc_path
|
||||
if not full_path.exists():
|
||||
continue
|
||||
|
||||
content = full_path.read_text(encoding="utf-8")
|
||||
long_lines = []
|
||||
|
||||
for i, line in enumerate(content.splitlines(), 1):
|
||||
# Skip code blocks and URLs
|
||||
if line.strip().startswith(('```', 'http://', 'https://')):
|
||||
continue
|
||||
|
||||
if len(line) > max_line_length:
|
||||
long_lines.append(i)
|
||||
|
||||
if long_lines:
|
||||
files_with_long_lines.append(
|
||||
f"{doc_path}: lines {long_lines[:3]}{'...' if len(long_lines) > 3 else ''}"
|
||||
)
|
||||
|
||||
assert not files_with_long_lines, (
|
||||
f"Found files with lines exceeding {max_line_length} characters:\n" +
|
||||
"\n".join(f" - {f}" for f in files_with_long_lines) +
|
||||
f"\n\nConsider breaking long lines for better readability."
|
||||
)