feat: Modernize config API and enhance documentation structure
- Replace deprecated DEFAULT_CONFIG with TradingAgentsConfig pattern
- Add comprehensive docs/ folder with specialized documentation
- Streamline CLAUDE.md for CLI optimization
- Add Quick Start guide and environment variables reference
- Update examples to use modern configuration API
- Add LiteLLM proxy support for flexible LLM providers
- Fix AgentToolkit constructor to use required config parameter

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
parent
775258b950
commit
c4cea772da
17 .mise.toml
@@ -5,6 +5,7 @@ ruff = "latest"
```toml
"npm:pyright" = "latest"

[env]
_.file = ".env"
# Python environment settings
PYTHONPATH = "."
PYTHONDONTWRITEBYTECODE = "1"
```
@@ -48,11 +49,7 @@ run = "ruff check --fix ."
```toml
[tasks.all]
description = "Run format, lint, and typecheck"
run = [
    "ruff format .",
    "ruff check .",
    "pyright"
]
run = ["ruff format .", "ruff check .", "pyright"]

[tasks.clean]
description = "Clean up cache and build artifacts"
```
@ -66,10 +63,6 @@ run = [
|
|||
"rm -rf *.egg-info"
|
||||
]
|
||||
|
||||
[tasks.setup]
|
||||
description = "Initial project setup"
|
||||
run = [
|
||||
"mise install",
|
||||
"uv sync --dev",
|
||||
"echo 'Setup complete! Run mise run --help to see available tasks.'"
|
||||
]
|
||||
[tasks.litellm]
|
||||
run = "uvx --from litellm[proxy] litellm --config litellm.yml --port 4000"
|
||||
description = "Start LiteLLM proxy for Claude Code → OpenRouter"
|
||||
|
|
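The `[tasks.litellm]` task above exposes an OpenAI-compatible endpoint on port 4000, so any OpenAI-style client can point at it. A minimal standard-library sketch of building such a request; the `/v1/chat/completions` path follows the OpenAI wire format, and the model id is a placeholder resolved by `litellm.yml`:

```python
import json
import urllib.request

# Build an OpenAI-style chat request against the local LiteLLM proxy.
# (The request is constructed but not sent; sending requires the proxy running.)
payload = {
    "model": "gpt-4o-mini",  # placeholder; actual routing is defined in litellm.yml
    "messages": [{"role": "user", "content": "ping"}],
}
req = urllib.request.Request(
    "http://localhost:4000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
```

In practice you would pass the same base URL to an OpenAI SDK client instead of hand-rolling requests; the sketch only shows the endpoint shape.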
@@ -0,0 +1,432 @@
<p align="center">
<img src="assets/TauricResearch.png" style="width: 60%; height: auto;">
</p>

<div align="center" style="line-height: 1;">
<a href="https://arxiv.org/abs/2412.20138" target="_blank"><img alt="arXiv" src="https://img.shields.io/badge/arXiv-2412.20138-B31B1B?logo=arxiv"/></a>
<a href="https://discord.com/invite/hk9PGKShPK" target="_blank"><img alt="Discord" src="https://img.shields.io/badge/Discord-TradingResearch-7289da?logo=discord&logoColor=white&color=7289da"/></a>
<a href="./assets/wechat.png" target="_blank"><img alt="WeChat" src="https://img.shields.io/badge/WeChat-TauricResearch-brightgreen?logo=wechat&logoColor=white"/></a>
<a href="https://x.com/TauricResearch" target="_blank"><img alt="X Follow" src="https://img.shields.io/badge/X-TauricResearch-white?logo=x&logoColor=white"/></a>
<br>
<a href="https://github.com/TauricResearch/" target="_blank"><img alt="Community" src="https://img.shields.io/badge/Join_GitHub_Community-TauricResearch-14C290?logo=discourse"/></a>
</div>

<div align="center">
<!-- Keep these links. Translations will automatically update with the README. -->
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=de">Deutsch</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=es">Español</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=fr">français</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ja">日本語</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ko">한국어</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=pt">Português</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ru">Русский</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=zh">中文</a>
</div>

---

# TradingAgents: Multi-Agents LLM Financial Trading Framework

<div align="center">

> 🎉 **TradingAgents** officially released! We have received numerous inquiries about the work, and we would like to express our thanks for the enthusiasm in our community.
>
> So we decided to fully open-source the framework. Looking forward to building impactful projects with you!

</div>

<div align="center">

🚀 [TradingAgents](#tradingagents-framework) | ⚡ [Installation & CLI](#installation-and-cli) | 🎬 [Demo](https://www.youtube.com/watch?v=90gr5lwjIho) | 📦 [Package Usage](#tradingagents-package) | 📚 [API Docs](./docs/api-reference.md) | 🔧 [Troubleshooting](./docs/troubleshooting.md) | 👥 [Agent Dev](./docs/agent-development.md) | 🤝 [Contributing](#contributing) | 📄 [Citation](#citation)

</div>

<div align="center">
<a href="https://www.star-history.com/#TauricResearch/TradingAgents&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&type=Date" />
<img alt="TradingAgents Star History" src="https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&type=Date" style="width: 80%; height: auto;" />
</picture>
</a>
</div>

## TradingAgents Framework

TradingAgents is a multi-agent trading framework that mirrors the dynamics of real-world trading firms. By deploying specialized LLM-powered agents (fundamentals analysts, sentiment experts, technical analysts, traders, and a risk management team), the platform collaboratively evaluates market conditions and informs trading decisions. These agents also engage in dynamic discussions to pinpoint the optimal strategy.

<p align="center">
<img src="assets/schema.png" style="width: 100%; height: auto;">
</p>

> The TradingAgents framework is designed for research purposes. Trading performance may vary based on many factors, including the chosen backbone language models, model temperature, trading periods, the quality of data, and other non-deterministic factors. [It is not intended as financial, investment, or trading advice.](https://tauric.ai/disclaimer/)

Our framework decomposes complex trading tasks into specialized roles, ensuring a robust, scalable approach to market analysis and decision-making.

### Analyst Team
- Fundamentals Analyst: Evaluates company financials and performance metrics, identifying intrinsic values and potential red flags.
- Sentiment Analyst: Analyzes social media and public sentiment using sentiment scoring algorithms to gauge short-term market mood.
- News Analyst: Monitors global news and macroeconomic indicators, interpreting the impact of events on market conditions.
- Technical Analyst: Utilizes technical indicators (like MACD and RSI) to detect trading patterns and forecast price movements.

<p align="center">
<img src="assets/analyst.png" width="100%" style="display: inline-block; margin: 0 2%;">
</p>

### Researcher Team
- Comprises both bullish and bearish researchers who critically assess the insights provided by the Analyst Team. Through structured debates, they balance potential gains against inherent risks.

<p align="center">
<img src="assets/researcher.png" width="70%" style="display: inline-block; margin: 0 2%;">
</p>

### Trader Agent
- Composes reports from the analysts and researchers to make informed trading decisions. It determines the timing and magnitude of trades based on comprehensive market insights.

<p align="center">
<img src="assets/trader.png" width="70%" style="display: inline-block; margin: 0 2%;">
</p>

### Risk Management and Portfolio Manager
- Continuously evaluates portfolio risk by assessing market volatility, liquidity, and other risk factors. The risk management team evaluates and adjusts trading strategies, providing assessment reports to the Portfolio Manager for the final decision.
- The Portfolio Manager approves or rejects the transaction proposal. If approved, the order is sent to the simulated exchange and executed.

<p align="center">
<img src="assets/risk.png" width="70%" style="display: inline-block; margin: 0 2%;">
</p>

## Installation and CLI

### Installation

Clone TradingAgents:
```bash
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
```

Create a virtual environment in any of your favorite environment managers:
```bash
conda create -n tradingagents python=3.13
conda activate tradingagents
```

Install dependencies:
```bash
pip install -r requirements.txt
```

### Required APIs

You will need a FinnHub API key for financial data. All of our code works with the free tier.
```bash
export FINNHUB_API_KEY=$YOUR_FINNHUB_API_KEY
```

You will also need an OpenAI API key for all the agents.
```bash
export OPENAI_API_KEY=$YOUR_OPENAI_API_KEY
```
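Before running anything, it can help to confirm the keys above are actually set. A small illustrative check (the variable names come from this section; the check itself is not part of the library):

```python
import os

# Report which of the documented API keys are present in the environment.
required = ["OPENAI_API_KEY"]
optional = ["FINNHUB_API_KEY"]

missing = [name for name in required if not os.environ.get(name)]
unset_optional = [name for name in optional if not os.environ.get(name)]
print("missing required keys:", missing or "none")
print("unset optional keys:", unset_optional or "none")
```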

### CLI Usage

You can also try out the CLI directly by running:
```bash
python -m cli.main
```
You will see a screen where you can select your desired tickers, date, LLMs, research depth, etc.

<p align="center">
<img src="assets/cli/cli_init.png" width="100%" style="display: inline-block; margin: 0 2%;">
</p>

An interface will appear showing results as they load, letting you track the agents' progress as they run.

<p align="center">
<img src="assets/cli/cli_news.png" width="100%" style="display: inline-block; margin: 0 2%;">
</p>

<p align="center">
<img src="assets/cli/cli_transaction.png" width="100%" style="display: inline-block; margin: 0 2%;">
</p>

## Quick Start

Get up and running with TradingAgents in 3 simple steps:

### Step 1: Set API Keys
```bash
export OPENAI_API_KEY="your_openai_api_key"
export FINNHUB_API_KEY="your_finnhub_api_key"  # Optional for financial data
```

### Step 2: Run Your First Analysis
```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.config import TradingAgentsConfig

# Create configuration (uses environment variables)
config = TradingAgentsConfig.from_env()

# Initialize the trading graph
ta = TradingAgentsGraph(debug=True, config=config)

# Analyze a stock
result, decision = ta.propagate("AAPL", "2024-01-15")
print(f"Decision: {decision}")
```

### Step 3: Explore Results
The analysis returns:
- **Decision**: `BUY`, `SELL`, or `HOLD`
- **Result**: Detailed analysis from all agents including market data, news sentiment, and risk assessment

**Next Steps**: Explore the [CLI interface](#cli-usage), check out [usage examples](#multi-llm-provider-examples), or dive into the [API documentation](./docs/api-reference.md).
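The `BUY`/`SELL`/`HOLD` vocabulary above is easy to consume programmatically. A hedged sketch of turning the decision string into a position change (the sizing logic is illustrative, not part of the library):

```python
def position_delta(decision: str, size: int = 100) -> int:
    """Translate the graph's final decision string into a signed share delta."""
    actions = {"BUY": size, "SELL": -size, "HOLD": 0}
    if decision not in actions:
        raise ValueError(f"unexpected decision: {decision!r}")
    return actions[decision]

print(position_delta("BUY"))  # → 100
```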

## TradingAgents Package

### Implementation Details

We built TradingAgents with LangGraph to ensure flexibility and modularity. We utilize `o1-preview` and `gpt-4o` as our deep-thinking and fast-thinking LLMs for our experiments. However, for testing purposes, we recommend you use `o4-mini` and `gpt-4.1-mini` to save on costs, as our framework makes **lots of** API calls.

### Python Usage

To use TradingAgents inside your code, import the `tradingagents` module and initialize a `TradingAgentsGraph()` object. The `.propagate()` function returns a decision. You can run `main.py`, or start from this quick example:

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.config import TradingAgentsConfig

config = TradingAgentsConfig.from_env()
ta = TradingAgentsGraph(debug=True, config=config)

# forward propagate
_, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)
```

You can also adjust the default configuration to set your own choice of LLMs, debate rounds, etc.

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.config import TradingAgentsConfig

# Create a custom config
config = TradingAgentsConfig(
    deep_think_llm="gpt-4.1-nano",   # Use a different model
    quick_think_llm="gpt-4.1-nano",  # Use a different model
    max_debate_rounds=3,             # Increase debate rounds
    online_tools=True                # Use online tools or cached data
)

# Initialize with custom config
ta = TradingAgentsGraph(debug=True, config=config)

# forward propagate
_, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)
```

> For `online_tools`, we recommend enabling them for experimentation, as they provide access to real-time data. The agents' offline tools rely on cached data from our **Tauric TradingDB**, a curated dataset we use for backtesting. We're currently in the process of refining this dataset, and we plan to release it soon alongside our upcoming projects. Stay tuned!

You can view the full list of configurations in `tradingagents/config.py`.

### Complete Environment Variables Reference

| Variable | Description | Default | Example |
|----------|-------------|---------|---------|
| `LLM_PROVIDER` | LLM provider to use | `openai` | `anthropic` |
| `DEEP_THINK_LLM` | Model for complex analysis | `o4-mini` | `claude-3-5-sonnet-latest` |
| `QUICK_THINK_LLM` | Model for fast responses | `gpt-4o-mini` | `gpt-4o-mini` |
| `BACKEND_URL` | API endpoint | `https://api.openai.com/v1` | `https://api.anthropic.com` |
| `MAX_DEBATE_ROUNDS` | Investment debate rounds | `1` | `3` |
| `MAX_RISK_DISCUSS_ROUNDS` | Risk discussion rounds | `1` | `2` |
| `ONLINE_TOOLS` | Use live APIs vs cached data | `true` | `false` |
| `DEFAULT_LOOKBACK_DAYS` | Historical data range | `30` | `60` |
| `TRADINGAGENTS_RESULTS_DIR` | Output directory | `./results` | `./my_results` |
| `TRADINGAGENTS_DATA_DIR` | Data storage directory | System default | `./data` |
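The variables above are read when you call `TradingAgentsConfig.from_env()`. A sketch of the kind of parsing involved, purely illustrative (the library's actual parsing may differ):

```python
import os

# Seed two documented variables, then parse them the way an
# env-driven config plausibly would. The parsing shown is illustrative.
os.environ.setdefault("MAX_DEBATE_ROUNDS", "1")
os.environ.setdefault("ONLINE_TOOLS", "true")

max_debate_rounds = int(os.environ["MAX_DEBATE_ROUNDS"])
online_tools = os.environ["ONLINE_TOOLS"].lower() == "true"
```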

### Multi-LLM Provider Examples

**Using Anthropic Claude:**
```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.config import TradingAgentsConfig

config = TradingAgentsConfig(
    llm_provider="anthropic",
    deep_think_llm="claude-3-5-sonnet-latest",
    quick_think_llm="claude-3-haiku-latest",
    max_debate_rounds=2
)

ta = TradingAgentsGraph(debug=True, config=config)
_, decision = ta.propagate("TSLA", "2024-01-15")
```

**Using Google Gemini:**
```python
config = TradingAgentsConfig(
    llm_provider="google",
    deep_think_llm="gemini-1.5-pro",
    quick_think_llm="gemini-1.5-flash"
)
```

See [docs/api-reference.md](./docs/api-reference.md) for complete API documentation.
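The configuration class also lists Ollama and OpenRouter as providers, both of which speak the OpenAI wire format. A hedged sketch of the keyword arguments one might pass for a local Ollama endpoint; `backend_url` mirrors the `BACKEND_URL` variable, and the exact parameter name, like the model ids, is an assumption rather than a documented signature:

```python
# Keyword arguments for a hypothetical local-Ollama configuration.
# `backend_url` mirrors the BACKEND_URL environment variable; the exact
# constructor parameter name is an assumption, as are the model ids.
ollama_kwargs = {
    "llm_provider": "ollama",
    "backend_url": "http://localhost:11434/v1",
    "deep_think_llm": "llama3.1",
    "quick_think_llm": "llama3.1",
}
# config = TradingAgentsConfig(**ollama_kwargs)
```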

## Development Guide

This section provides comprehensive development guidance for contributors working on the TradingAgents codebase.

### Common Development Commands

This project uses [mise](https://mise.jdx.dev/) for tool and task management. All development tasks are managed through mise.

#### Essential Commands
- **CLI Application**: `mise run dev` - Interactive CLI for running trading analysis
- **Direct Python Usage**: `mise run run` - Run main.py programmatically
- **Format code**: `mise run format` - Auto-format with ruff
- **Lint code**: `mise run lint` - Check code quality with ruff
- **Type checking**: `mise run typecheck` - Run pyright type checker
- **Run all tests**: `mise run test` - Run tests with pytest

#### Initial Setup
- **First-time setup**: `mise run setup` - Install tools and dependencies
- **Install tools only**: `mise install` - Install Python, uv, ruff, pyright
- **Install dependencies**: `mise run install` - Install project dependencies with uv

### Configuration

The TradingAgents framework uses a centralized `TradingAgentsConfig` class for all configuration management.

#### Core Configuration Options

**LLM Settings**:
- `llm_provider`: OpenAI, Anthropic, Google, Ollama, or OpenRouter (default: "openai")
- `deep_think_llm`: Model for complex reasoning tasks (default: "o4-mini")
- `quick_think_llm`: Model for fast responses (default: "gpt-4o-mini")

**Debate Parameters**:
- `max_debate_rounds`: Maximum rounds in investment debates (default: 1)
- `max_risk_discuss_rounds`: Maximum rounds in risk discussions (default: 1)

**Data Management**:
- `online_tools`: Enable/disable live API calls vs cached data (default: True)
- `default_lookback_days`: Historical data range for analysis (default: 30)
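Collecting the documented defaults in one place makes overrides explicit. A sketch (the values are copied from the bullets above; the merge idiom is illustrative, not the library's own code):

```python
# Documented defaults for TradingAgentsConfig, gathered from the bullets above.
defaults = {
    "llm_provider": "openai",
    "deep_think_llm": "o4-mini",
    "quick_think_llm": "gpt-4o-mini",
    "max_debate_rounds": 1,
    "max_risk_discuss_rounds": 1,
    "online_tools": True,
    "default_lookback_days": 30,
}

# Override any subset before constructing the config:
overrides = {"max_debate_rounds": 2}
settings = {**defaults, **overrides}
# config = TradingAgentsConfig(**settings)
```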

#### Required API Keys

```bash
# For OpenAI (default)
export OPENAI_API_KEY="your_openai_api_key"

# For Anthropic Claude
export ANTHROPIC_API_KEY="your_anthropic_api_key"

# For Google Gemini
export GOOGLE_API_KEY="your_google_api_key"

# For financial data (optional)
export FINNHUB_API_KEY="your_finnhub_api_key"
```

## Architecture Overview

### Multi-Agent Trading System
TradingAgents uses specialized LLM agents that work together in a trading firm structure:

**Agent Workflow**: `Analysts → Researchers → Trader → Risk Management`

### Core Components

#### 1. Domain-Driven Architecture
Three main domains with clean separation:
- **Financial Data** (`tradingagents/domains/marketdata/`): Market prices, technical analysis, fundamentals
- **News** (`tradingagents/domains/news/`): News articles and sentiment analysis
- **Social Media** (`tradingagents/domains/socialmedia/`): Social sentiment from Reddit/Twitter

#### 2. Repository-First Data Strategy
- Services read from local repositories (cached data)
- Separate update operations fetch fresh data from APIs
- Smart caching with gap detection and deduplication

#### 3. Agent Integration (Anti-Corruption Layer)
- `AgentToolkit` mediates between agents and domain services
- Converts rich domain models to structured JSON for LLM consumption
- Handles parameter validation and error recovery
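The JSON-conversion step of this anti-corruption layer can be sketched generically. `AgentToolkit`'s real interface is not shown in this document, so the type and function names below are illustrative stand-ins for the pattern, not the library's API:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class PricePoint:
    """Illustrative rich domain model (not the library's actual type)."""
    symbol: str
    close: float

def to_llm_payload(points: list[PricePoint]) -> str:
    # Flatten rich domain objects into plain JSON the LLM can consume.
    return json.dumps([asdict(p) for p in points])
```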

### Key Design Patterns

1. **Debate-Driven Decisions**: Bull/bear researchers debate before trading
2. **Memory-Augmented Learning**: ChromaDB stores past decisions for context
3. **Quality-Aware Data**: All contexts include data quality metadata
4. **Structured Outputs**: Pydantic models replace error-prone string parsing

### File Structure
```
tradingagents/
├── agents/           # Agent implementations
│   └── libs/         # AgentToolkit and utilities
├── domains/          # Domain-specific services
│   ├── marketdata/   # Financial data domain
│   ├── news/         # News domain
│   └── socialmedia/  # Social media domain
├── graph/            # LangGraph workflow orchestration
└── config.py         # Configuration management
```

### Performance Optimization

**Caching Strategy:**
- Repository-first data access minimizes API calls
- Smart caching with automatic invalidation
- Gap detection for missing data ranges

**Model Selection:**
- `quick_think_llm` for data retrieval and formatting
- `deep_think_llm` for complex analysis and decisions

**Cost Optimization:**
```python
config = TradingAgentsConfig(
    deep_think_llm="gpt-4o-mini",  # Lower cost
    max_debate_rounds=1,           # Fewer debates
    online_tools=False,            # Use cached data
    default_lookback_days=30       # Limit data range
)
```

## Need Help?

- **Detailed Architecture**: `docs/architecture.md`
- **API Documentation**: `docs/api-reference.md`
- **Troubleshooting**: `docs/troubleshooting.md`
- **Agent Development**: `docs/agent-development.md`

## Contributing

We welcome contributions from the community! Whether it's fixing a bug, improving documentation, or suggesting a new feature, your input helps make this project better. If you are interested in this line of research, please consider joining our open-source financial AI research community [Tauric Research](https://tauric.ai/).

## Citation

Please reference our work if you find *TradingAgents* helpful :)

```
@misc{xiao2025tradingagentsmultiagentsllmfinancial,
      title={TradingAgents: Multi-Agents LLM Financial Trading Framework},
      author={Yijia Xiao and Edward Sun and Di Luo and Wei Wang},
      year={2025},
      eprint={2412.20138},
      archivePrefix={arXiv},
      primaryClass={q-fin.TR},
      url={https://arxiv.org/abs/2412.20138},
}
```
843 README.md
@@ -31,6 +31,14 @@
>
> So we decided to fully open-source the framework. Looking forward to building impactful projects with you!

</div>

<div align="center">

🚀 [TradingAgents](#tradingagents-framework) | ⚡ [Installation & CLI](#installation-and-cli) | 🎬 [Demo](https://www.youtube.com/watch?v=90gr5lwjIho) | 📦 [Package Usage](#tradingagents-package) | 📚 [API Docs](./docs/api-reference.md) | 🔧 [Troubleshooting](./docs/troubleshooting.md) | 👥 [Agent Dev](./docs/agent-development.md) | 🤝 [Contributing](#contributing) | 📄 [Citation](#citation)

</div>

<div align="center">
<a href="https://www.star-history.com/#TauricResearch/TradingAgents&Date">
<picture>
@@ -41,12 +49,6 @@
</a>
</div>

<div align="center">

🚀 [TradingAgents](#tradingagents-framework) | ⚡ [Installation & CLI](#installation-and-cli) | 🎬 [Demo](https://www.youtube.com/watch?v=90gr5lwjIho) | 📦 [Package Usage](#tradingagents-package) | 🤝 [Contributing](#contributing) | 📄 [Citation](#citation)

</div>

## TradingAgents Framework

TradingAgents is a multi-agent trading framework that mirrors the dynamics of real-world trading firms. By deploying specialized LLM-powered agents: from fundamental analysts, sentiment experts, and technical analysts, to trader, risk management team, the platform collaboratively evaluates market conditions and informs trading decisions. Moreover, these agents engage in dynamic discussions to pinpoint the optimal strategy.
@@ -146,6 +148,39 @@ An interface will appear showing results as they load, letting you track the age
<img src="assets/cli/cli_transaction.png" width="100%" style="display: inline-block; margin: 0 2%;">
</p>

## Quick Start

Get up and running with TradingAgents in 3 simple steps:

### Step 1: Set API Keys
```bash
export OPENAI_API_KEY="your_openai_api_key"
export FINNHUB_API_KEY="your_finnhub_api_key"  # Optional for financial data
```

### Step 2: Run Your First Analysis
```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.config import TradingAgentsConfig

# Create configuration (uses environment variables)
config = TradingAgentsConfig.from_env()

# Initialize the trading graph
ta = TradingAgentsGraph(debug=True, config=config)

# Analyze a stock
result, decision = ta.propagate("AAPL", "2024-01-15")
print(f"Decision: {decision}")
```

### Step 3: Explore Results
The analysis returns:
- **Decision**: `BUY`, `SELL`, or `HOLD`
- **Result**: Detailed analysis from all agents including market data, news sentiment, and risk assessment

**Next Steps**: Explore the [CLI interface](#cli-usage), check out [usage examples](#multi-llm-provider-examples), or dive into the [API documentation](./docs/api-reference.md).

## TradingAgents Package

### Implementation Details
@@ -158,9 +193,10 @@ To use TradingAgents inside your code, you can import the `tradingagents` module

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.config import TradingAgentsConfig

ta = TradingAgentsGraph(debug=True, config=DEFAULT_CONFIG.copy())
config = TradingAgentsConfig.from_env()
ta = TradingAgentsGraph(debug=True, config=config)

# forward propagate
_, decision = ta.propagate("NVDA", "2024-05-10")
```
@@ -171,14 +207,15 @@ You can also adjust the default configuration to set your own choice of LLMs, de

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.config import TradingAgentsConfig

# Create a custom config
config = DEFAULT_CONFIG.copy()
config["deep_think_llm"] = "gpt-4.1-nano"  # Use a different model
config["quick_think_llm"] = "gpt-4.1-nano"  # Use a different model
config["max_debate_rounds"] = 1  # Increase debate rounds
config["online_tools"] = True  # Use online tools or cached data
config = TradingAgentsConfig(
    deep_think_llm="gpt-4.1-nano",   # Use a different model
    quick_think_llm="gpt-4.1-nano",  # Use a different model
    max_debate_rounds=3,             # Increase debate rounds
    online_tools=True                # Use online tools or cached data
)

# Initialize with custom config
ta = TradingAgentsGraph(debug=True, config=config)
```
@@ -190,7 +227,51 @@ print(decision)

> For `online_tools`, we recommend enabling them for experimentation, as they provide access to real-time data. The agents' offline tools rely on cached data from our **Tauric TradingDB**, a curated dataset we use for backtesting. We're currently in the process of refining this dataset, and we plan to release it soon alongside our upcoming projects. Stay tuned!

You can view the full list of configurations in `tradingagents/default_config.py`.
You can view the full list of configurations in `tradingagents/config.py`.

### Complete Environment Variables Reference

| Variable | Description | Default | Example |
|----------|-------------|---------|---------|
| `LLM_PROVIDER` | LLM provider to use | `openai` | `anthropic` |
| `DEEP_THINK_LLM` | Model for complex analysis | `o4-mini` | `claude-3-5-sonnet-latest` |
| `QUICK_THINK_LLM` | Model for fast responses | `gpt-4o-mini` | `gpt-4o-mini` |
| `BACKEND_URL` | API endpoint | `https://api.openai.com/v1` | `https://api.anthropic.com` |
| `MAX_DEBATE_ROUNDS` | Investment debate rounds | `1` | `3` |
| `MAX_RISK_DISCUSS_ROUNDS` | Risk discussion rounds | `1` | `2` |
| `ONLINE_TOOLS` | Use live APIs vs cached data | `true` | `false` |
| `DEFAULT_LOOKBACK_DAYS` | Historical data range | `30` | `60` |
| `TRADINGAGENTS_RESULTS_DIR` | Output directory | `./results` | `./my_results` |
| `TRADINGAGENTS_DATA_DIR` | Data storage directory | System default | `./data` |

### Multi-LLM Provider Examples

**Using Anthropic Claude:**
```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.config import TradingAgentsConfig

config = TradingAgentsConfig(
    llm_provider="anthropic",
    deep_think_llm="claude-3-5-sonnet-latest",
    quick_think_llm="claude-3-haiku-latest",
    max_debate_rounds=2
)

ta = TradingAgentsGraph(debug=True, config=config)
_, decision = ta.propagate("TSLA", "2024-01-15")
```

**Using Google Gemini:**
```python
config = TradingAgentsConfig(
    llm_provider="google",
    deep_think_llm="gemini-1.5-pro",
    quick_think_llm="gemini-1.5-flash"
)
```

See [docs/api-reference.md](./docs/api-reference.md) for complete API documentation.

## Development Guide
@ -200,97 +281,39 @@ This section provides comprehensive development guidance for contributors workin
|
|||
|
||||
This project uses [mise](https://mise.jdx.dev/) for tool and task management. All development tasks are managed through mise.
|
||||
|
||||
#### Initial Setup
|
||||
- **First-time setup**: `mise run setup` - Install tools and dependencies
|
||||
- **Install tools only**: `mise install` - Install Python, uv, ruff, pyright
|
||||
- **Install dependencies**: `mise run install` - Install project dependencies with uv
|
||||
|
||||
#### Development Workflow
|
||||
#### Essential Commands
|
||||
- **CLI Application**: `mise run dev` - Interactive CLI for running trading analysis
|
||||
- **Direct Python Usage**: `mise run run` - Run main.py programmatically
|
||||
- **Format code**: `mise run format` - Auto-format with ruff
|
||||
- **Lint code**: `mise run lint` - Check code quality with ruff
|
||||
- **Type checking**: `mise run typecheck` - Run pyright type checker
|
||||
- **Fix lint issues**: `mise run fix` - Auto-fix linting issues
|
||||
- **Run all checks**: `mise run all` - Format, lint, and typecheck
|
||||
- **Clean artifacts**: `mise run clean` - Remove cache and build files
|
||||
|
||||
#### Testing
|
||||
|
||||
##### Running Tests
|
||||
- **Run all tests**: `mise run test` - Run tests with pytest
|
||||
- **Run specific test file**: `uv run pytest test_social_media_service.py` - Run individual test file
|
||||
- **Verbose output**: `uv run pytest -v` - Run tests with detailed output
|
||||
- **Run with output**: `uv run pytest -s` - Show print statements and debug output
|
||||
- **Test coverage**: `uv run pytest --cov=tradingagents` - Run tests with coverage report
|
||||
|
||||

##### Test Development (TDD Approach)

This project follows **Test-Driven Development (TDD)** for service layer development:

1. **Write the test first**: Create `{component}_service_test.py` with comprehensive test cases
2. **Run the test (it should fail)**: Verify the test fails with appropriate error messages
3. **Implement the minimum code**: Write just enough code to make the test pass
4. **Refactor**: Improve the code while keeping tests passing
5. **Repeat**: Add more test cases and implement additional functionality

##### Test Structure and Conventions

- **Test files**: Named `{component}_service_test.py` and placed next to the source code (not in a separate tests/ directory)
- **Test functions**: Named `test_{functionality}()`; they should not return values (use `assert` statements)
- **Mock clients**: Use `unittest.mock.Mock()` objects for testing services
- **Real repositories**: Use actual repository implementations (don't mock the repository layer)
- **Test data**: Use realistic mock data that matches expected API responses
- **Date handling**: Use fixed dates (e.g., `datetime(2024, 1, 2)`) in mocks for predictable filtering

##### Service Testing Pattern

Example test structure for services:

```python
from unittest.mock import Mock


def test_online_mode_with_mock_client():
    """Test service in online mode with a mock client."""
    # Mock the client
    mock_client = Mock()
    mock_client.get_data.return_value = {"data": [{"symbol": "TEST", "price": 100.0}]}

    real_repo = ServiceRepository("test_data")

    service = ServiceClass(
        client=mock_client,
        repository=real_repo,
        online_mode=True,
    )

    context = service.get_context("TEST", "2024-01-01", "2024-01-05")

    # Validate structure
    assert isinstance(context, ContextModel)
    assert context.symbol == "TEST"
    assert len(context.data) > 0

    # Test JSON serialization
    json_output = context.model_dump_json()
    assert len(json_output) > 0

    # Verify the client was called
    mock_client.get_data.assert_called_once()
```

##### Mock Client Guidelines

- **Use unittest.mock**: Use `Mock()` objects instead of custom mock classes
- **Realistic data**: Return data structures that match actual API responses
- **Date consistency**: Use fixed dates that work with the test date ranges
- **Error simulation**: Configure mocks to raise exceptions for testing error-handling paths
- **Multiple scenarios**: Use different return values for different test cases
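The error-simulation and multiple-scenario guidelines can be combined with `Mock.side_effect`, which raises exception instances and consumes one queued item per call. A minimal self-contained sketch (the `get_data` method name is illustrative):

```python
from unittest.mock import Mock

# Simulate an API failure on the first call, then a successful
# response on the retry (side_effect consumes one item per call).
mock_client = Mock()
mock_client.get_data.side_effect = [
    ConnectionError("API unavailable"),
    {"data": [{"symbol": "TEST", "price": 100.0}]},
]

try:
    mock_client.get_data("TEST")
except ConnectionError:
    recovered = mock_client.get_data("TEST")  # second call succeeds

assert recovered["data"][0]["symbol"] == "TEST"
```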

### Configuration

- **Environment Variables**: Create a `.env` file with API keys (see `.env.example`)
- **Config Class**: `TradingAgentsConfig` in `tradingagents/config.py` handles all configuration
- **Tool Configuration**: `.mise.toml` manages Python 3.13, uv, ruff, pyright
- **Code Quality**: `pyproject.toml` contains the ruff and pyright configurations

The TradingAgents framework uses a centralized `TradingAgentsConfig` class for all configuration management.

#### Core Configuration Options

**LLM Settings**:
- `llm_provider`: OpenAI, Anthropic, Google, Ollama, or OpenRouter (default: "openai")
- `deep_think_llm`: Model for complex reasoning tasks (default: "o4-mini")
- `quick_think_llm`: Model for fast responses (default: "gpt-4o-mini")

**Debate Parameters**:
- `max_debate_rounds`: Maximum rounds in investment debates (default: 1)
- `max_risk_discuss_rounds`: Maximum rounds in risk discussions (default: 1)

**Data Management**:
- `online_tools`: Enable/disable live API calls vs cached data (default: True)
- `default_lookback_days`: Historical data range for analysis (default: 30)
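Taken together, these options can be pictured as a small configuration object. The sketch below is a simplified stand-in built only from the fields and defaults listed above; the real `TradingAgentsConfig` in `tradingagents/config.py` is authoritative and may differ:

```python
from dataclasses import dataclass


@dataclass
class TradingAgentsConfigSketch:
    """Illustrative stand-in mirroring the documented defaults."""
    llm_provider: str = "openai"
    deep_think_llm: str = "o4-mini"
    quick_think_llm: str = "gpt-4o-mini"
    max_debate_rounds: int = 1
    max_risk_discuss_rounds: int = 1
    online_tools: bool = True
    default_lookback_days: int = 30


# Override only what differs from the defaults.
config = TradingAgentsConfigSketch(llm_provider="anthropic", online_tools=False)
assert config.deep_think_llm == "o4-mini"
```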

#### Required API Keys

##### Core LLM APIs (Choose One)

```bash
# For OpenAI (default)
export OPENAI_API_KEY="your_openai_api_key"

# For Anthropic
export ANTHROPIC_API_KEY="your_anthropic_api_key"

# For Google Gemini
export GOOGLE_API_KEY="your_google_api_key"
```

##### Data Sources (Optional)

```bash
# For financial data (optional)
export FINNHUB_API_KEY="your_finnhub_api_key"

# For Reddit data
export REDDIT_CLIENT_ID="your_reddit_client_id"
export REDDIT_CLIENT_SECRET="your_reddit_client_secret"
export REDDIT_USER_AGENT="your_app_name"
```

## Architecture Overview

### Multi-Agent Trading System

TradingAgents uses specialized LLM agents that work together in a trading-firm structure, mirroring real-world firms with specialized roles and structured workflows.

### Core Components

**Agent Workflow** (sequential): `Analysts → Researchers → Trader → Risk Management`

**Analyst Team** (`tradingagents/agents/analysts/`)
- **Market Analyst**: Technical analysis using Yahoo Finance and StockStats
- **Fundamentals Analyst**: Financial statements and company fundamentals via SimFin/Finnhub
- **News Analyst**: News sentiment analysis and world-affairs impact
- **Social Media Analyst**: Reddit and social-platform sentiment analysis

**Research Team** (`tradingagents/agents/researchers/`)
- **Bull Researcher**: Advocates for investment opportunities and growth potential
- **Bear Researcher**: Highlights risks and argues against investments
- **Research Manager**: Synthesizes debates and creates investment recommendations

**Trading Team** (`tradingagents/agents/trader/`)
- **Trader**: Converts investment plans into specific trading decisions

**Risk Management Team** (`tradingagents/agents/risk_mgmt/`)
- **Aggressive/Conservative/Neutral Debators**: Different risk perspectives
- **Risk Manager**: Final decision maker balancing risk and reward

#### 1. Domain-Driven Architecture

The system is structured using Domain-Driven Design (DDD) principles, with three cleanly separated bounded contexts:

- **Financial Data Domain** (`tradingagents/domains/marketdata/`): Market prices, technical indicators, fundamentals, insider data
- **News Domain** (`tradingagents/domains/news/`): News articles, sentiment analysis, content aggregation
- **Social Media Domain** (`tradingagents/domains/socialmedia/`): Social media posts, engagement metrics, sentiment analysis

**DDD Tactical Patterns per Domain:**
- **Domain Services**: Business logic encapsulated in domain-specific services (`MarketDataService`, `NewsService`, `SocialMediaService`)
- **Value Objects**: Immutable data structures (`SentimentScore`, `TechnicalIndicatorData`, `PostMetadata`)
- **Entities**: Objects with identity and lifecycle (`NewsArticle`, `PostData`)
- **Repository Pattern**: Domain-specific data access with smart caching, deduplication, and gap detection
- **Context Objects**: Structured domain data containers (`MarketDataContext`, `NewsContext`, `SocialContext`)

**Domain Infrastructure per Bounded Context:**
```
marketdata/
├── clients/        # YFinanceClient, FinnhubClient (domain-specific)
├── repos/          # MarketDataRepository, FundamentalRepository
├── services/       # MarketDataService, FundamentalDataService, InsiderDataService
└── models/         # Domain Value Objects and Entities

news/
├── clients/        # GoogleNewsClient (domain-specific)
├── repositories/   # NewsRepository with article deduplication
├── services/       # NewsService with sentiment analysis
└── models/         # NewsArticle, SentimentScore

socialmedia/
├── clients/        # RedditClient (domain-specific)
├── repositories/   # SocialMediaRepository with engagement tracking
├── services/       # SocialMediaService with sentiment analysis
└── models/         # PostData, EngagementMetrics
```

#### 2. Repository-First Data Strategy

- Services read from local repositories (cached data)
- Separate update operations fetch fresh data from the APIs
- Smart caching with gap detection and deduplication

#### 3. Graph Orchestration (`tradingagents/graph/`)

LangGraph-based workflow management:

- **TradingAgentsGraph**: Main orchestrator class
- **State Management**: `AgentState`, `InvestDebateState`, `RiskDebateState` track workflow progress
- **Conditional Logic**: Dynamic routing based on tool usage and debate completion
- **Memory System**: ChromaDB-based vector memory for learning from past decisions

#### 4. Configuration System

- **TradingAgentsConfig**: Centralized configuration with environment-variable support
- **Multi-LLM Support**: OpenAI, Anthropic, Google, Ollama, OpenRouter
- **Data Modes**: Online (live APIs) vs offline (cached data)

#### 5. Agent Integration (Anti-Corruption Layer)

- **AgentToolkit as ACL**: Mediates between agents (string-based, procedural) and domains (object-oriented, rich models)
- **Data Translation**: Converts rich Pydantic domain models to structured JSON strings for LLM consumption
- **Parameter Adaptation**: Handles interface mismatches (single date → date ranges, etc.), parameter validation, and error recovery
- **Backward Compatibility**: Preserves the existing agent tool interface while providing domain-service benefits
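The gap detection used by the repository layer can be sketched as a date-range diff against the cache. This is an illustrative helper, not code from the repository (the function name and signature are hypothetical):

```python
from datetime import date, timedelta


def find_missing_ranges(
    start: date, end: date, cached_days: set[date]
) -> list[tuple[date, date]]:
    """Return contiguous [start, end] date ranges absent from the cache."""
    gaps: list[tuple[date, date]] = []
    gap_start: date | None = None
    day = start
    while day <= end:
        if day not in cached_days and gap_start is None:
            gap_start = day  # a gap opens
        elif day in cached_days and gap_start is not None:
            gaps.append((gap_start, day - timedelta(days=1)))  # the gap closes
            gap_start = None
        day += timedelta(days=1)
    if gap_start is not None:
        gaps.append((gap_start, end))  # a gap runs to the end of the window
    return gaps


cached = {date(2024, 1, 1), date(2024, 1, 2), date(2024, 1, 5)}
gaps = find_missing_ranges(date(2024, 1, 1), date(2024, 1, 6), cached)
assert gaps == [
    (date(2024, 1, 3), date(2024, 1, 4)),
    (date(2024, 1, 6), date(2024, 1, 6)),
]
```

Only the returned ranges need to be fetched from the upstream API, which is what keeps repository-first reads cheap.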

### Key Design Patterns

1. **Debate-Driven Decision Making**: Critical decisions emerge from structured bull/bear agent debates
2. **Memory-Augmented Learning**: Agents learn from past similar situations using ChromaDB vector similarity
3. **Repository-First Data Strategy**: Services always read from repositories, with separate update operations
4. **Structured JSON Contexts**: Rich Pydantic models replace error-prone string parsing
5. **Factory Pattern**: Agent creation via factory functions for flexible configuration
6. **Signal Processing**: Final trading decisions are processed into clean BUY/SELL/HOLD signals
7. **Quality-Aware Data**: All contexts include quality metadata to help agents make better decisions

### Code Style Guidelines

#### General Style

- **Functions**: snake_case naming (e.g., `fundamentals_analyst_node`, `create_fundamentals_analyst`)
- **Classes**: PascalCase (e.g., `TradingAgentsGraph`, `MessageBuffer`)
- **Variables**: snake_case (e.g., `current_date`, `company_of_interest`)
- **Constants**: UPPER_CASE (e.g., `DEFAULT_CONFIG`)
- **Imports**: Standard library first, then third-party, then local imports (langchain, tradingagents modules)

#### Data Structure Guidelines

**MANDATORY: Always use dataclasses for method returns.**

- **Never return**: `dict`, `str`, `Any`, or unstructured data from public methods
- **Always return**: Properly typed dataclasses with clear field definitions
- **Rationale**: Provides type safety, IDE support, and clear contracts, and prevents runtime errors

**Examples**:

```python
# ❌ BAD - Dictionary return
def update_news() -> dict[str, Any]:
    return {"status": "completed", "count": 5}


# ✅ GOOD - Dataclass return
@dataclass
class NewsUpdateResult:
    status: str
    articles_found: int
    articles_scraped: int
    articles_failed: int


def update_news() -> NewsUpdateResult:
    return NewsUpdateResult(
        status="completed",
        articles_found=10,
        articles_scraped=8,
        articles_failed=2,
    )
```

### File Structure

```
tradingagents/
├── agents/          # Agent implementations
│   └── libs/        # AgentToolkit and utilities
├── domains/         # Domain-specific services
│   ├── marketdata/  # Financial data domain
│   ├── news/        # News domain
│   └── socialmedia/ # Social media domain
├── graph/           # LangGraph workflow orchestration
└── config.py        # Configuration management
```

**Dataclass Best Practices**:
- Use the `@dataclass` decorator for all return-value structures
- Include type hints for all fields
- Use `| None` for optional fields (modern Python 3.10+ syntax)
- Group related dataclasses in the same module
- Prefer immutable dataclasses with `frozen=True` for value objects

### Performance Optimization

**Caching Strategy:**
- Repository-first data access minimizes API calls
- Smart caching with automatic invalidation
- Gap detection for missing data ranges

#### Ruff Formatting & Linting Rules

**Formatting** (`mise run format`):
- **Line length**: 88 characters maximum
- **Quote style**: Double quotes (`"string"`)
- **Indentation**: 4 spaces (no tabs)
- **Trailing commas**: Preserved for multi-line structures
- **Line endings**: Auto-detected based on platform

**Linting** (`mise run lint`):
- **Selected rules**:
  - `E`, `W`: pycodestyle errors and warnings
  - `F`: pyflakes (undefined names, unused imports)
  - `I`: isort (import sorting)
  - `B`: flake8-bugbear (common bugs)
  - `C4`: flake8-comprehensions (list/dict comprehensions)
  - `UP`: pyupgrade (Python syntax modernization)
  - `ARG`: flake8-unused-arguments
  - `SIM`: flake8-simplify (code simplification)
  - `TCH`: flake8-type-checking (type annotation imports)
- **Ignored rules**:
  - `E501`: Line too long (handled by the formatter)
  - `B008`: Function calls in argument defaults (allowed for LangChain)
  - `C901`: Complex functions (legacy-code tolerance)
  - `ARG001`, `ARG002`: Unused arguments (common in callbacks)
- **Import sorting**: `tradingagents` and `cli` are treated as first-party modules

**Model Selection:**
- `quick_think_llm` for data retrieval and formatting
- `deep_think_llm` for complex analysis and decisions

#### Pyright Type Checking Rules

**Configuration** (`mise run typecheck`):
- **Tool**: pyright 1.1.390+ with standard type-checking mode
- **Python version**: 3.10+ (configured for compatibility with modern syntax)
- **Coverage**: Includes `tradingagents/`, `cli/`, and `main.py`
- **Exclusions**: `__pycache__`, `node_modules`, `.venv`, `venv`, `build`, `dist`

**Type Annotation Guidelines**:
- Use modern Python 3.10+ union syntax: `str | None` instead of `Optional[str]`
- Use built-in generics: `list[str]` instead of `List[str]`
- Use `dict[str, Any]` for flexible dictionaries
- Import `from typing import Any` for untyped data structures
- Prefer explicit return types on public functions
- Use `# type: ignore` sparingly, with explanatory comments

### Development Guidelines

#### Working with Agents

**Current Approach** (AgentToolkit as Anti-Corruption Layer):
- Use `AgentToolkit` from `tradingagents.agents.libs.agent_toolkit`
- The toolkit injects all domain services via dependency injection
- Provides LangChain `@tool`-decorated methods for agent consumption
- Returns rich Pydantic domain models directly to agents
- Handles parameter validation, date calculations, and error handling

**Agent Integration Pattern**:

```python
from tradingagents.agents.libs.agent_toolkit import AgentToolkit

# AgentToolkit acts as an Anti-Corruption Layer
toolkit = AgentToolkit(
    news_service=news_service,
    marketdata_service=marketdata_service,
    fundamentaldata_service=fundamentaldata_service,
    socialmedia_service=socialmedia_service,
    insiderdata_service=insiderdata_service,
)


# Agents use toolkit tools that return rich domain contexts
@tool
def analyze_stock(symbol: str, date: str):
    # Get structured contexts from domain services via the toolkit
    market_data = toolkit.get_market_data(symbol, start_date, end_date)
    social_data = toolkit.get_socialmedia_stock_info(symbol, date)
    news_data = toolkit.get_news(symbol, start_date, end_date)

    # Work with rich Pydantic models
    price = market_data.latest_price
    sentiment = social_data.sentiment_summary.score
    article_count = news_data.article_count
```

**Cost Optimization:**

```python
config = TradingAgentsConfig(
    deep_think_llm="gpt-4o-mini",  # Lower cost
    max_debate_rounds=1,           # Fewer debates
    online_tools=False,            # Use cached data
    default_lookback_days=30,      # Limit data range
)
```

#### Working with Data Sources

**Current Domain Service Approach**:
- **Repository-First**: Services always read data from repositories (local storage)
- **Separate Update Operations**: Dedicated update methods fetch fresh data from the APIs and store it in repositories
- **Clear Separation**: Reading data and updating data are separate concerns
- **Structured Contexts**: Services return rich Pydantic models with metadata
- **Quality Awareness**: All contexts include data quality and source information

**Service Usage Pattern**:

```python
# Services use dependency injection
service = MarketDataService(
    yfin_client=YFinanceClient(),
    repo=MarketDataRepository("cache_dir"),
)

# Always read from the repository
context = service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")

# Separate update operation to refresh repository data
service.update_market_data("AAPL", "2024-01-01", "2024-01-31")
```

#### Configuration Management

- Use `TradingAgentsConfig.from_env()` for environment-based configuration
- Key settings: `max_debate_rounds`, `llm_provider`, `online_tools`
- Results are saved to `results_dir/{ticker}/{date}/` with structured reports
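The `from_env()` pattern can be sketched as a classmethod that reads the process environment. This is a hypothetical stand-in (the `ConfigSketch` class and the `LLM_PROVIDER`/`ONLINE_TOOLS` variable names are illustrative assumptions; the real implementation lives in `tradingagents/config.py`):

```python
import os
from dataclasses import dataclass


@dataclass
class ConfigSketch:
    llm_provider: str
    online_tools: bool

    @classmethod
    def from_env(cls) -> "ConfigSketch":
        # Fall back to documented defaults when a variable is unset.
        return cls(
            llm_provider=os.environ.get("LLM_PROVIDER", "openai"),
            online_tools=os.environ.get("ONLINE_TOOLS", "true").lower() == "true",
        )


os.environ["LLM_PROVIDER"] = "anthropic"
config = ConfigSketch.from_env()
assert config.llm_provider == "anthropic"
```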

#### CLI Development

- The CLI uses Rich for a terminal UI with live-updating displays
- Agent progress tracking through the `MessageBuffer` class
- Questionnaire-driven configuration collection
- Real-time streaming of analysis results

### Progressive Development Framework

This framework ensures agents create type-safe, testable code through incremental development. It emphasizes building one component at a time with proper testing and type safety.

#### Core Principles

1. **Service-First Development**: Start with the business logic in the service layer
2. **Stub Dependencies**: Create placeholder methods that return proper dataclasses
3. **Progressive Implementation**: Implement one dependency (client OR repository) at a time
4. **Constructor Injection**: Dependencies are passed through the constructor for testability
5. **Dataclass Returns**: All public methods return properly typed dataclasses
6. **Test-Driven Development**: Write tests first, then implement to make them pass

#### Development Process

**Step 1: Design Domain Models**

```python
# models.py - Define all dataclasses first
from dataclasses import dataclass
from datetime import datetime
from typing import Any


@dataclass
class DomainEntity:
    id: str
    name: str
    created_at: datetime


@dataclass
class DomainContext:
    entities: list[DomainEntity]
    metadata: dict[str, Any]
    quality_score: float


@dataclass
class UpdateResult:
    status: str
    entities_processed: int
    entities_failed: int
```

**Step 2: Create Service with Business Logic**

```python
# service.py - Main business logic with stub dependencies
class DomainService:
    def __init__(self, client: DomainClient, repository: DomainRepository):
        self.client = client
        self.repository = repository

    def get_context(self, symbol: str, start_date: str, end_date: str) -> DomainContext:
        # Implement the business logic flow
        entities = self.repository.get_entities(symbol, start_date, end_date)

        # Process and transform the data
        processed_entities = self._process_entities(entities)

        # Calculate quality metrics
        quality_score = self._calculate_quality(processed_entities)

        return DomainContext(
            entities=processed_entities,
            metadata={"symbol": symbol, "date_range": f"{start_date} to {end_date}"},
            quality_score=quality_score,
        )

    def update_data(self, symbol: str, start_date: str, end_date: str) -> UpdateResult:
        # Business logic for updating data
        raw_data = self.client.fetch_data(symbol, start_date, end_date)
        entities = self._transform_raw_data(raw_data)

        processed = 0
        failed = 0
        for entity in entities:
            try:
                self.repository.save_entity(entity)
                processed += 1
            except Exception:
                failed += 1

        return UpdateResult(
            status="completed",
            entities_processed=processed,
            entities_failed=failed,
        )

    def _process_entities(self, entities: list[DomainEntity]) -> list[DomainEntity]:
        # Private method for business logic
        return entities  # Stub implementation

    def _calculate_quality(self, entities: list[DomainEntity]) -> float:
        # Private method for quality calculation
        return 1.0  # Stub implementation
```

**Step 3: Create Stub Dependencies**

```python
# client.py - Stub client that returns realistic raw structures
class DomainClient:
    def fetch_data(self, symbol: str, start_date: str, end_date: str) -> list[dict[str, Any]]:
        # Stub implementation - returns a realistic structure
        return [
            {"id": "1", "name": f"{symbol}_entity", "created_at": "2024-01-01T00:00:00Z"},
            {"id": "2", "name": f"{symbol}_entity_2", "created_at": "2024-01-02T00:00:00Z"},
        ]


# repository.py - Stub repository that returns proper dataclasses
class DomainRepository:
    def __init__(self, cache_dir: str):
        self.cache_dir = cache_dir

    def get_entities(self, symbol: str, start_date: str, end_date: str) -> list[DomainEntity]:
        # Stub implementation - returns proper dataclasses
        return [
            DomainEntity(id="1", name=f"{symbol}_cached", created_at=datetime.now()),
            DomainEntity(id="2", name=f"{symbol}_cached_2", created_at=datetime.now()),
        ]

    def save_entity(self, entity: DomainEntity) -> None:
        # Stub implementation
        pass
```

**Step 4: Write Comprehensive Tests**

```python
# service_test.py - Test the service with mock dependencies
from datetime import datetime
from unittest.mock import Mock


def test_get_context_with_mock_dependencies():
    """Test service business logic with mocked dependencies."""
    # Mock the dependencies
    mock_client = Mock()
    mock_repository = Mock()

    # Configure mock returns
    mock_repository.get_entities.return_value = [
        DomainEntity(id="1", name="TEST_entity", created_at=datetime(2024, 1, 1))
    ]

    # Create the service with mocks
    service = DomainService(client=mock_client, repository=mock_repository)

    # Test the business logic
    context = service.get_context("TEST", "2024-01-01", "2024-01-31")

    # Validate structure and business logic
    assert isinstance(context, DomainContext)
    assert context.metadata["symbol"] == "TEST"
    assert context.quality_score > 0
    assert len(context.entities) > 0

    # Verify the repository was called correctly
    mock_repository.get_entities.assert_called_once_with("TEST", "2024-01-01", "2024-01-31")


def test_update_data_with_mock_dependencies():
    """Test update business logic with mocked dependencies."""
    mock_client = Mock()
    mock_repository = Mock()

    # Configure the mock client to return raw data
    mock_client.fetch_data.return_value = [
        {"id": "1", "name": "TEST_raw", "created_at": "2024-01-01T00:00:00Z"}
    ]

    service = DomainService(client=mock_client, repository=mock_repository)

    result = service.update_data("TEST", "2024-01-01", "2024-01-31")

    # Validate business logic results
    assert isinstance(result, UpdateResult)
    assert result.status == "completed"
    assert result.entities_processed >= 0

    # Verify client and repository interactions
    mock_client.fetch_data.assert_called_once()
    mock_repository.save_entity.assert_called()
```

**Step 5: Implement One Dependency at a Time**

Choose either the client OR the repository to implement first:

```python
# Option A: Implement the client first
import requests


class DomainClient:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.session = requests.Session()
        self.session.headers.update({"User-Agent": "TradingAgents/1.0"})

    def fetch_data(self, symbol: str, start_date: str, end_date: str) -> list[dict[str, Any]]:
        # Real implementation with error handling
        try:
            response = self.session.get(
                f"https://api.example.com/data/{symbol}",
                params={"start": start_date, "end": end_date},
                timeout=30,
            )
            response.raise_for_status()
            return response.json()["data"]
        except requests.RequestException as e:
            raise DomainClientError(f"Failed to fetch data: {e}") from e


# Option B: Implement the repository first
import json
from pathlib import Path


class DomainRepository:
    def __init__(self, cache_dir: str):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def get_entities(self, symbol: str, start_date: str, end_date: str) -> list[DomainEntity]:
        # Real implementation with file I/O
        cache_file = self.cache_dir / f"{symbol}_{start_date}_{end_date}.json"

        if not cache_file.exists():
            return []

        try:
            with open(cache_file) as f:
                data = json.load(f)

            return [
                DomainEntity(
                    id=item["id"],
                    name=item["name"],
                    created_at=datetime.fromisoformat(item["created_at"]),
                )
                for item in data
            ]
        except (json.JSONDecodeError, KeyError) as e:
            raise DomainRepositoryError(f"Failed to load cached data: {e}") from e
```

**Step 6: Test the Real Implementation**

```python
import tempfile

import responses


def test_real_client_integration():
    """Test the real client implementation."""
    client = DomainClient(api_key="test_key")

    # Use the responses library to mock the HTTP calls
    with responses.RequestsMock() as rsps:
        rsps.add(
            responses.GET,
            "https://api.example.com/data/TEST",
            json={"data": [{"id": "1", "name": "TEST", "created_at": "2024-01-01T00:00:00Z"}]},
            status=200,
        )

        result = client.fetch_data("TEST", "2024-01-01", "2024-01-31")

        assert len(result) == 1
        assert result[0]["id"] == "1"


def test_real_repository_integration():
    """Test the real repository implementation."""
    with tempfile.TemporaryDirectory() as temp_dir:
        repo = DomainRepository(temp_dir)

        # Test saving and loading
        entity = DomainEntity(id="1", name="TEST", created_at=datetime.now())
        repo.save_entity(entity)

        entities = repo.get_entities("TEST", "2024-01-01", "2024-01-31")
        assert len(entities) == 1
        assert entities[0].id == "1"
```

**Step 7: Iterate and Refine**

1. Run the tests after each implementation
2. Refactor business logic as needed
3. Add error handling and edge cases
4. Implement the remaining dependency
5. Add integration tests with both real dependencies

#### Directory Structure

```
domain_name/
├── models.py        # Dataclasses only - no business logic
├── client.py        # External API integration
├── repository.py    # Data persistence and caching
├── service.py       # Main business logic coordinator
└── service_test.py  # Comprehensive test suite
```

#### Benefits of This Approach

1. **Type Safety**: All interfaces are defined upfront with dataclasses
2. **Testability**: Business logic is tested independently of external dependencies
3. **Incremental Development**: One component at a time reduces complexity
4. **Clear Contracts**: Dataclass returns make interfaces explicit
5. **Error Isolation**: Issues are contained within single components
6. **Refactoring Safety**: The type system catches interface changes
7. **Documentation**: Dataclasses serve as living documentation

#### Anti-Patterns to Avoid

❌ **Don't return dictionaries or strings from public methods**
❌ **Don't implement all dependencies simultaneously**
❌ **Don't skip writing tests first**
❌ **Don't mix business logic with I/O operations**
❌ **Don't use inheritance for dependency injection**
❌ **Don't create circular dependencies between components**

✅ **Do use dataclasses for all return values**
✅ **Do implement one dependency at a time**
✅ **Do write tests before implementation**
✅ **Do separate business logic from I/O**
✅ **Do use constructor injection**
✅ **Do maintain a clear separation of concerns**

### File Structure Context

- **`cli/`**: Interactive command-line interface
- **`tradingagents/agents/`**: All agent implementations
  - **`libs/agent_toolkit.py`**: AgentToolkit Anti-Corruption Layer with LangChain @tool decorators
  - **`libs/context_helpers.py`**: Helper functions for parsing structured JSON data
  - **`libs/agent_utils.py`**: Legacy Toolkit (being phased out)
- **`tradingagents/domains/`**: Domain-Driven Design bounded contexts
  - **`marketdata/`**: Financial data domain (prices, indicators, fundamentals, insider data)
  - **`news/`**: News domain (articles, sentiment analysis)
  - **`socialmedia/`**: Social media domain (posts, engagement, sentiment)
- **`tradingagents/dataflows/`**: Legacy data source integrations (being phased out)
- **`tradingagents/graph/`**: LangGraph workflow orchestration
- **`tradingagents/config.py`**: Configuration management
- **`main.py`**: Direct Python usage example
- **`AGENTS.md`**: Detailed agent documentation

Further documentation:

- **Detailed Architecture**: `docs/architecture.md`
- **API Documentation**: `docs/api-reference.md`
- **Troubleshooting**: `docs/troubleshooting.md`
- **Agent Development**: `docs/agent-development.md`
## Contributing

Please reference our work if you find *TradingAgents* provides you with some help:

      url={https://arxiv.org/abs/2412.20138},
}
```

# Agent Development Guide

This guide covers how to develop, modify, and extend agents in the TradingAgents framework.

## Agent Architecture Overview

The TradingAgents framework uses a multi-agent system where each agent has specific responsibilities in the trading decision workflow.

### Agent Categories

1. **Analyst Team** (`tradingagents/agents/analysts/`): Data analysis and market intelligence
2. **Research Team** (`tradingagents/agents/researchers/`): Investment debate and recommendation synthesis
3. **Trading Team** (`tradingagents/agents/trader/`): Trading decision execution
4. **Risk Management** (`tradingagents/agents/risk_mgmt/`): Risk assessment and portfolio management

### Agent Implementation Pattern

All agents follow a consistent implementation pattern:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import SystemMessage
from tradingagents.agents.libs.agent_toolkit import AgentToolkit
from tradingagents.agents.libs.agent_base import create_agent, get_llm

def create_market_analyst(toolkit: AgentToolkit, config: dict) -> dict:
    """Factory function to create a market analyst agent."""

    # Define agent's role and responsibilities
    system_prompt = """You are a Market Analyst specializing in technical analysis.

Your responsibilities:
- Analyze price trends and trading patterns
- Calculate and interpret technical indicators
- Identify support and resistance levels
- Assess market momentum and volatility

Use the available tools to gather market data and provide actionable insights."""

    # Create prompt template
    prompt = ChatPromptTemplate.from_messages([
        SystemMessage(content=system_prompt),
        ("human", "{input}")
    ])

    # Create the agent with tools
    agent = create_agent(
        llm=get_llm(config.get("quick_think_llm", "gpt-4o-mini")),
        tools=[
            toolkit.get_market_data,
            toolkit.get_ta_report,
        ],
        prompt=prompt
    )

    return agent
```

## Adding New Agents

### Step 1: Define Agent Role

Create a new file for your agent (e.g., `custom_analyst.py`):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import SystemMessage
from tradingagents.agents.libs.agent_toolkit import AgentToolkit
from tradingagents.agents.libs.agent_base import create_agent, get_llm

def create_custom_analyst(toolkit: AgentToolkit, config: dict) -> dict:
    """Create a custom analyst with specific expertise."""

    system_prompt = """You are a Custom Market Analyst specializing in [specific domain].

Your responsibilities:
- [List specific responsibilities]
- [Define analysis focus]
- [Specify output format]

Always provide:
1. Clear analysis summary
2. Key findings with evidence
3. Confidence level in your assessment
4. Risk factors to consider
"""

    prompt = ChatPromptTemplate.from_messages([
        SystemMessage(content=system_prompt),
        ("human", "Analyze {symbol} for date {date}. Context: {context}")
    ])

    # Select appropriate tools for this agent
    agent_tools = [
        toolkit.get_market_data,
        toolkit.get_news,
        toolkit.get_socialmedia_stock_info,
        # Add other relevant tools
    ]

    agent = create_agent(
        llm=get_llm(config.get("deep_think_llm", "o4-mini")),
        tools=agent_tools,
        prompt=prompt
    )

    return agent
```

### Step 2: Create Agent Node Function

Create a node function for LangGraph integration:

```python
from langchain_core.messages import HumanMessage
from tradingagents.graph.state_models import AgentState

def custom_analyst_node(state: AgentState, config: dict) -> dict:
    """Node function for custom analyst in the trading workflow."""

    # Extract relevant information from state
    symbol = state.get("symbol", "")
    date = state.get("date", "")
    context = state.get("context", {})

    # Get the agent instance
    toolkit = state.get("toolkit")
    agent = create_custom_analyst(toolkit, config)

    # Prepare input message
    input_msg = f"""Analyze {symbol} for {date}.

Previous analysis context:
{context}

Provide detailed analysis focusing on [specific aspects].
"""

    # Execute agent
    response = agent.invoke({
        "input": input_msg,
        "symbol": symbol,
        "date": date,
        "context": context
    })

    # Extract analysis from response
    analysis = response.get("output", "")

    # Update state with results
    return {
        "custom_analysis": analysis,
        "agents_completed": state.get("agents_completed", []) + ["custom_analyst"]
    }
```

### Step 3: Integrate into Workflow

Add your agent to the trading graph:

```python
# In tradingagents/graph/trading_graph.py

from tradingagents.agents.analysts.custom_analyst import custom_analyst_node

class TradingAgentsGraph:
    def _build_graph(self):
        # ... existing code ...

        # Add your custom analyst node
        graph.add_node("custom_analyst", custom_analyst_node)

        # Define connections in the workflow
        graph.add_edge("market_analyst", "custom_analyst")
        graph.add_edge("custom_analyst", "research_team")

        # ... rest of graph construction ...
```

## Extending Existing Agents

### Modifying Agent Prompts

To customize an existing agent's behavior, modify its system prompt:

```python
def create_enhanced_fundamentals_analyst(toolkit: AgentToolkit, config: dict) -> dict:
    """Enhanced version of fundamentals analyst with additional capabilities."""

    enhanced_prompt = """You are a Senior Fundamentals Analyst with expertise in financial modeling.

Your enhanced responsibilities:
- Analyze financial statements with deep ratio analysis
- Build DCF models when appropriate
- Compare metrics against industry benchmarks
- Assess management quality and corporate governance
- Evaluate ESG factors and sustainability metrics

Analysis Framework:
1. Financial Health Assessment (40%)
2. Valuation Analysis (30%)
3. Growth Potential (20%)
4. Risk Assessment (10%)

Always provide quantitative metrics with qualitative insights.
"""

    # ... rest of implementation
```

### Adding New Tools

Create custom tools for specialized data sources:

```python
from typing import Annotated

from langchain_core.tools import tool
from tradingagents.agents.libs.agent_toolkit import AgentToolkit

@tool
def get_industry_comparison(
    symbol: Annotated[str, "Stock symbol to analyze"],
    metrics: Annotated[list[str], "List of metrics to compare"]
) -> str:
    """Compare stock metrics against industry averages."""

    # Implementation to fetch industry data (placeholder helpers below).
    # This could integrate with additional APIs or databases.
    industry_data = fetch_industry_metrics(symbol, metrics)
    stock_data = fetch_stock_metrics(symbol, metrics)

    comparison = compare_metrics(stock_data, industry_data)

    return format_comparison_report(comparison)

# Add to agent toolkit
def create_enhanced_toolkit(config: dict) -> AgentToolkit:
    """Create toolkit with additional custom tools."""

    toolkit = AgentToolkit.build(config)

    # Add custom tools
    toolkit.add_tool(get_industry_comparison)

    return toolkit
```

## Agent Communication Patterns

### Structured Information Passing

Agents communicate through structured state objects:

```python
from dataclasses import dataclass

@dataclass
class AnalysisState:
    symbol: str
    date: str
    market_analysis: str | None = None
    fundamental_analysis: str | None = None
    sentiment_analysis: str | None = None
    risk_assessment: str | None = None
    final_recommendation: str | None = None
    confidence_score: float | None = None

def analyst_coordination_node(state: AnalysisState, config: dict) -> dict:
    """Coordinate multiple analysts and synthesize results."""

    analyses = {
        "market": state.market_analysis,
        "fundamental": state.fundamental_analysis,
        "sentiment": state.sentiment_analysis
    }

    # Synthesis logic
    synthesized_analysis = synthesize_analyses(analyses)

    return {
        "synthesized_analysis": synthesized_analysis,
        "confidence_score": calculate_confidence(analyses)
    }
```

### Debate Mechanisms

Implement structured debates between agents:

```python
def investment_debate_node(state: AgentState, config: dict) -> dict:
    """Facilitate debate between bull and bear researchers."""

    max_rounds = config.get("max_debate_rounds", 3)
    current_round = state.get("debate_round", 0)

    if current_round >= max_rounds:
        # End debate and synthesize
        return finalize_debate(state)

    if current_round % 2 == 0:
        # Bull researcher's turn
        response = bull_researcher.invoke(state)
        return {
            "bull_arguments": state.get("bull_arguments", []) + [response],
            "debate_round": current_round + 1,
            "current_speaker": "bear"
        }
    else:
        # Bear researcher's turn
        response = bear_researcher.invoke(state)
        return {
            "bear_arguments": state.get("bear_arguments", []) + [response],
            "debate_round": current_round + 1,
            "current_speaker": "bull"
        }
```

## Agent Testing

### Unit Testing Agents

Create comprehensive tests for agent behavior:

```python
import pytest
from unittest.mock import Mock, patch
from tradingagents.agents.analysts.custom_analyst import (
    create_custom_analyst,
    custom_analyst_node,
)

def test_custom_analyst_creation():
    """Test agent creation with mock toolkit."""
    mock_toolkit = Mock()
    config = {"deep_think_llm": "gpt-4o-mini"}

    agent = create_custom_analyst(mock_toolkit, config)

    assert agent is not None
    assert hasattr(agent, 'invoke')

def test_custom_analyst_analysis():
    """Test agent analysis with mock data."""
    mock_toolkit = Mock()

    # Configure mock responses
    mock_toolkit.get_market_data.return_value = "Mock market data"
    mock_toolkit.get_news.return_value = "Mock news data"

    # Patch the LLM factory *before* creating the agent, so the factory
    # picks up the mock instead of constructing a real LLM client
    with patch('tradingagents.agents.libs.agent_base.get_llm') as mock_llm:
        mock_llm.return_value.invoke.return_value = {"output": "Test analysis"}

        agent = create_custom_analyst(mock_toolkit, {})

        result = agent.invoke({
            "input": "Analyze AAPL",
            "symbol": "AAPL",
            "date": "2024-01-15"
        })

    assert "output" in result
    assert result["output"] == "Test analysis"

def test_agent_node_integration():
    """Test agent node function."""
    state = {
        "symbol": "AAPL",
        "date": "2024-01-15",
        "toolkit": Mock(),
        "agents_completed": []
    }

    result = custom_analyst_node(state, {})

    assert "custom_analysis" in result
    assert "custom_analyst" in result["agents_completed"]
```

### Integration Testing

Test agent interactions within the workflow:

```python
def test_agent_workflow_integration():
    """Test agent integration in trading workflow."""
    from tradingagents.graph.trading_graph import TradingAgentsGraph
    from tradingagents.config import TradingAgentsConfig

    config = TradingAgentsConfig(
        online_tools=False,  # Use cached data for testing
        max_debate_rounds=1
    )

    graph = TradingAgentsGraph(debug=True, config=config)

    # Test with known stock and date
    result, decision = graph.propagate("AAPL", "2024-01-15")

    assert result is not None
    assert decision in ["BUY", "SELL", "HOLD"]
    assert "custom_analysis" in result  # If your agent was integrated
```

## Agent Performance Optimization

### Model Selection Strategy

Choose appropriate models for different agent types:

```python
def get_optimal_model(agent_type: str, config: dict) -> str:
    """Select optimal model based on agent requirements."""

    model_mapping = {
        # Fast models for data retrieval and formatting
        "data_fetchers": config.get("quick_think_llm", "gpt-4o-mini"),

        # Powerful models for complex analysis
        "analysts": config.get("deep_think_llm", "o4-mini"),

        # Balanced models for decision making
        "traders": config.get("deep_think_llm", "o4-mini"),

        # Conservative models for risk assessment
        "risk_managers": config.get("deep_think_llm", "o4-mini")
    }

    return model_mapping.get(agent_type, config.get("quick_think_llm"))
```

### Caching Agent Responses

Implement caching for expensive agent operations:

```python
import hashlib
from functools import lru_cache

def cache_key(symbol: str, date: str, analysis_type: str) -> str:
    """Generate cache key for agent analysis."""
    return hashlib.md5(f"{symbol}_{date}_{analysis_type}".encode()).hexdigest()

@lru_cache(maxsize=100)
def cached_analysis(cache_key: str, symbol: str, date: str) -> str:
    """Cache expensive analysis operations."""
    # This would be called by agents that perform expensive operations
    ...

def create_cached_agent(base_agent, cache_size: int = 100):
    """Wrap agent with caching functionality."""

    # Keep a reference to the original method; reassigning base_agent.invoke
    # below would otherwise make the wrapper call itself recursively.
    original_invoke = base_agent.invoke
    cache = {}

    def invoke(inputs: dict):
        # Hash the inputs for the cache key (input dicts are unhashable,
        # so functools.lru_cache cannot be applied to invoke directly)
        input_str = str(sorted(inputs.items()))
        input_hash = hashlib.md5(input_str.encode()).hexdigest()

        if input_hash not in cache:
            if len(cache) >= cache_size:
                cache.pop(next(iter(cache)))  # evict the oldest entry
            cache[input_hash] = original_invoke(inputs)
        return cache[input_hash]

    base_agent.invoke = invoke
    return base_agent
```

### Parallel Agent Execution

Implement parallel execution for independent agents:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def parallel_analysis_node(state: AgentState, config: dict) -> dict:
    """Execute multiple analysts in parallel."""

    symbol = state.get("symbol")
    date = state.get("date")
    toolkit = state.get("toolkit")

    # Create analysts
    market_analyst = create_market_analyst(toolkit, config)
    fundamentals_analyst = create_fundamentals_analyst(toolkit, config)
    sentiment_analyst = create_sentiment_analyst(toolkit, config)

    # Define analysis tasks
    async def run_analyst(analyst, input_data):
        loop = asyncio.get_event_loop()
        with ThreadPoolExecutor() as executor:
            return await loop.run_in_executor(executor, analyst.invoke, input_data)

    # Execute in parallel
    tasks = [
        run_analyst(market_analyst, {"symbol": symbol, "date": date}),
        run_analyst(fundamentals_analyst, {"symbol": symbol, "date": date}),
        run_analyst(sentiment_analyst, {"symbol": symbol, "date": date})
    ]

    results = await asyncio.gather(*tasks)

    return {
        "market_analysis": results[0]["output"],
        "fundamental_analysis": results[1]["output"],
        "sentiment_analysis": results[2]["output"]
    }
```

## Best Practices

### Agent Design Principles

1. **Single Responsibility**: Each agent should have one clear purpose
2. **Clear Communication**: Use structured inputs/outputs
3. **Error Handling**: Gracefully handle missing data or API failures
4. **Testability**: Design agents to be easily testable with mocks
5. **Performance**: Consider token usage and API costs

### Code Organization

```
tradingagents/agents/
├── analysts/
│   ├── __init__.py
│   ├── market_analyst.py
│   ├── fundamentals_analyst.py
│   ├── custom_analyst.py
│   └── analyst_base.py        # Common analyst functionality
├── researchers/
│   ├── __init__.py
│   ├── bull_researcher.py
│   ├── bear_researcher.py
│   └── researcher_base.py
├── libs/
│   ├── agent_toolkit.py       # Anti-corruption layer
│   ├── agent_base.py          # Common agent utilities
│   └── context_helpers.py     # Helper functions
└── tests/
    ├── test_analysts.py
    ├── test_researchers.py
    └── test_integration.py
```

### Error Handling

Implement robust error handling in agents:

```python
import logging
import time

logger = logging.getLogger(__name__)

def safe_agent_invoke(agent, inputs: dict, default_response: str = "Analysis unavailable") -> str:
    """Safely invoke agent with error handling."""
    try:
        result = agent.invoke(inputs)
        return result.get("output", default_response)
    except Exception as e:
        logger.error(f"Agent invocation failed: {e}")
        return f"{default_response}. Error: {str(e)}"

def create_resilient_analyst(toolkit: AgentToolkit, config: dict) -> dict:
    """Create analyst with built-in resilience."""

    base_agent = create_custom_analyst(toolkit, config)
    # Capture the original method so the retry wrapper does not call itself
    # once base_agent.invoke is reassigned below.
    original_invoke = base_agent.invoke

    def resilient_invoke(inputs: dict):
        # Add retry logic
        max_retries = 3
        for attempt in range(max_retries):
            try:
                return original_invoke(inputs)
            except Exception as e:
                if attempt == max_retries - 1:
                    # Final attempt failed
                    return {
                        "output": "Analysis failed after multiple attempts",
                        "error": str(e),
                        "confidence": 0.0
                    }
                # Wait before retry
                time.sleep(2 ** attempt)  # Exponential backoff

    base_agent.invoke = resilient_invoke
    return base_agent
```

This guide provides a comprehensive foundation for developing and extending agents in the TradingAgents framework. Follow these patterns and best practices to create robust, testable, and performant agents.

# TradingAgents API Reference

This document provides comprehensive API documentation for the TradingAgents framework.

## Core Classes

### TradingAgentsGraph

Main orchestrator class for running trading analysis workflows.

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.config import TradingAgentsConfig

# Initialize with default configuration
ta = TradingAgentsGraph(debug=True, config=TradingAgentsConfig.from_env())

# Run analysis for a specific stock and date
result, decision = ta.propagate("AAPL", "2024-01-15")
```

**Methods:**

- `propagate(symbol: str, date: str) -> tuple[dict, str]`: Execute full trading analysis workflow
- `get_memory()`: Access the vector memory system for past decisions

### TradingAgentsConfig

Configuration management class with environment variable support.

```python
from tradingagents.config import TradingAgentsConfig

# Create from environment variables
config = TradingAgentsConfig.from_env()

# Create with custom values
config = TradingAgentsConfig(
    llm_provider="anthropic",
    deep_think_llm="claude-3-5-sonnet-latest",
    max_debate_rounds=3,
    online_tools=True
)
```

**Configuration Options:**

| Parameter | Type | Default | Environment Variable | Description |
|-----------|------|---------|----------------------|-------------|
| `project_dir` | str | Current directory | `TRADINGAGENTS_PROJECT_DIR` | Base project directory |
| `results_dir` | str | "./results" | `TRADINGAGENTS_RESULTS_DIR` | Output directory for analysis results |
| `data_dir` | str | "/Users/yluo/Documents/Code/ScAI/FR1-data" | `TRADINGAGENTS_DATA_DIR` | Directory for local data storage |
| `llm_provider` | Literal | "openai" | `LLM_PROVIDER` | LLM provider (openai, anthropic, google, ollama, openrouter) |
| `deep_think_llm` | str | "o4-mini" | `DEEP_THINK_LLM` | Model for complex reasoning tasks |
| `quick_think_llm` | str | "gpt-4o-mini" | `QUICK_THINK_LLM` | Model for fast responses |
| `backend_url` | str | "https://api.openai.com/v1" | `BACKEND_URL` | API endpoint for LLM providers |
| `max_debate_rounds` | int | 1 | `MAX_DEBATE_ROUNDS` | Maximum rounds in investment debates |
| `max_risk_discuss_rounds` | int | 1 | `MAX_RISK_DISCUSS_ROUNDS` | Maximum rounds in risk discussions |
| `max_recur_limit` | int | 100 | `MAX_RECUR_LIMIT` | Maximum recursion depth for workflows |
| `online_tools` | bool | True | `ONLINE_TOOLS` | Enable live API calls vs cached data |
| `default_lookback_days` | int | 30 | `DEFAULT_LOOKBACK_DAYS` | Historical data range for analysis |
| `default_ta_lookback_days` | int | 30 | `DEFAULT_TA_LOOKBACK_DAYS` | Technical analysis data range |

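The precedence implied by the table — environment variable if set, otherwise the built-in default — can be sketched with a plain dataclass. `SketchConfig` and `_env` are illustrative stand-ins, not the real `TradingAgentsConfig` implementation:

```python
import os
from dataclasses import dataclass, field

def _env(name: str, default: str) -> str:
    # Illustrative helper: an environment variable wins over the default.
    return os.environ.get(name, default)

@dataclass
class SketchConfig:
    llm_provider: str = field(default_factory=lambda: _env("LLM_PROVIDER", "openai"))
    deep_think_llm: str = field(default_factory=lambda: _env("DEEP_THINK_LLM", "o4-mini"))
    max_debate_rounds: int = field(
        default_factory=lambda: int(_env("MAX_DEBATE_ROUNDS", "1"))
    )

# Demonstrate the precedence in a controlled environment.
os.environ.pop("LLM_PROVIDER", None)
os.environ.pop("DEEP_THINK_LLM", None)
os.environ["MAX_DEBATE_ROUNDS"] = "3"

config = SketchConfig()
assert config.max_debate_rounds == 3   # env var overrides the default of 1
assert config.llm_provider == "openai"  # no env var set, default applies
```
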
## Domain Services

### MarketDataService

Provides market data and technical analysis.

```python
from tradingagents.domains.marketdata.market_data_service import MarketDataService

service = MarketDataService.build(config)

# Get market data context
context = service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")
print(f"Latest price: ${context.latest_price}")
print(f"Price change: {context.price_change_percent}%")

# Update market data
service.update_market_data("AAPL", "2024-01-01", "2024-01-31")
```

**Methods:**

- `get_market_data_context(symbol, start_date, end_date) -> PriceDataContext`: Get price and volume data
- `get_ta_report(symbol, start_date, end_date) -> TAReportContext`: Get technical analysis indicators
- `update_market_data(symbol, start_date, end_date)`: Fetch and cache fresh market data

### NewsService

Provides news analysis and sentiment scoring.

```python
from tradingagents.domains.news.news_service import NewsService

service = NewsService.build(config)

# Get stock-specific news
context = service.get_news_context("AAPL", "2024-01-01", "2024-01-31")
print(f"Articles found: {context.article_count}")
print(f"Overall sentiment: {context.sentiment_summary.label}")

# Get global news context
global_context = service.get_global_news_context("2024-01-15")
```

**Methods:**

- `get_news_context(symbol, start_date, end_date) -> NewsContext`: Get stock-specific news
- `get_global_news_context(date) -> GlobalNewsContext`: Get general market news
- `update_news(symbol, start_date, end_date)`: Fetch and cache fresh news articles

### SocialMediaService

Provides social media sentiment analysis.

```python
from tradingagents.domains.socialmedia.social_media_service import SocialMediaService

service = SocialMediaService.build(config)

# Get social media sentiment
context = service.get_socialmedia_stock_info("AAPL", "2024-01-15")
print(f"Posts analyzed: {context.post_count}")
print(f"Sentiment score: {context.sentiment_summary.score}")
```

**Methods:**

- `get_socialmedia_stock_info(symbol, date) -> StockSocialContext`: Get social media analysis
- `update_socialmedia_data(symbol, date)`: Fetch and cache fresh social media data

### FundamentalDataService

Provides financial statements and fundamental analysis.

```python
from tradingagents.domains.marketdata.fundamental_data_service import FundamentalDataService

service = FundamentalDataService.build(config)

# Get financial statements
income_stmt = service.get_income_statement_context("AAPL", "2024-01-15")
balance_sheet = service.get_balance_sheet_context("AAPL", "2024-01-15")
cash_flow = service.get_cash_flow_context("AAPL", "2024-01-15")
```

**Methods:**

- `get_income_statement_context(symbol, date) -> IncomeStatementContext`: Get income statement data
- `get_balance_sheet_context(symbol, date) -> BalanceSheetContext`: Get balance sheet data
- `get_cash_flow_context(symbol, date) -> CashFlowContext`: Get cash flow data

### InsiderDataService

Provides insider trading data and sentiment analysis.

```python
from tradingagents.domains.marketdata.insider_data_service import InsiderDataService

service = InsiderDataService.build(config)

# Get insider transactions
transactions = service.get_insider_transaction_context("AAPL", "2024-01-01", "2024-01-31")
sentiment = service.get_insider_sentiment_context("AAPL", "2024-01-15")
```

**Methods:**

- `get_insider_transaction_context(symbol, start_date, end_date) -> InsiderTransactionContext`: Get insider trading data
- `get_insider_sentiment_context(symbol, date) -> InsiderSentimentContext`: Get insider sentiment analysis

## AgentToolkit (Anti-Corruption Layer)

The AgentToolkit mediates between agents and domain services, providing LangChain tool decorators.

```python
from tradingagents.agents.libs.agent_toolkit import AgentToolkit

# AgentToolkit is injected into agents automatically
# Provides @tool decorated methods for LangChain agent consumption

# Available tools:
# - get_market_data(symbol, start_date, end_date)
# - get_ta_report(symbol, start_date, end_date)
# - get_news(symbol, start_date, end_date)
# - get_global_news(date)
# - get_socialmedia_stock_info(symbol, date)
# - get_income_statement(symbol, date)
# - get_balance_sheet(symbol, date)
# - get_cash_flow_statement(symbol, date)
# - get_insider_transactions(symbol, start_date, end_date)
# - get_insider_sentiment(symbol, date)
```

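The anti-corruption idea can be sketched without LangChain: a toolkit method calls a domain service and flattens its typed context into the plain text an agent consumes, so domain types never leak into prompts. All class and method bodies below are illustrative, not the real implementation:

```python
from dataclasses import dataclass

@dataclass
class PriceDataContext:
    # Trimmed version of the context model documented below.
    symbol: str
    latest_price: float
    price_change_percent: float

class FakeMarketDataService:
    # Stand-in for MarketDataService that returns canned data.
    def get_market_data_context(self, symbol, start_date, end_date):
        return PriceDataContext(symbol, 187.5, 2.3)

class SketchToolkit:
    def __init__(self, market_data_service):
        self._market_data = market_data_service

    def get_market_data(self, symbol: str, start_date: str, end_date: str) -> str:
        # Domain types stay on this side of the boundary; the agent
        # only ever sees formatted text.
        ctx = self._market_data.get_market_data_context(symbol, start_date, end_date)
        return f"{ctx.symbol}: latest ${ctx.latest_price:.2f} ({ctx.price_change_percent:+.1f}%)"

toolkit = SketchToolkit(FakeMarketDataService())
report = toolkit.get_market_data("AAPL", "2024-01-01", "2024-01-31")
assert report == "AAPL: latest $187.50 (+2.3%)"
```
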
## Context Models

### PriceDataContext

Market data context returned by MarketDataService.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class PriceDataContext:
    symbol: str
    start_date: str
    end_date: str
    price_data: list[dict]
    latest_price: float
    price_change_percent: float
    volume_data: list[dict]
    metadata: dict[str, Any]
```

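For intuition, a `price_change_percent` of this shape can be derived from `price_data`; the `"close"` key and the rounding below are assumptions for illustration, not the documented row format:

```python
def price_change_percent(price_data: list[dict]) -> float:
    # Compare the first and last closes over the window (assumed schema).
    first = price_data[0]["close"]
    last = price_data[-1]["close"]
    return round((last - first) / first * 100, 2)

rows = [{"close": 180.0}, {"close": 185.4}, {"close": 187.5}]
assert price_change_percent(rows) == 4.17
```
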
### TAReportContext

Technical analysis context with indicators.

```python
@dataclass
class TAReportContext:
    symbol: str
    start_date: str
    end_date: str
    indicators: dict[str, Any]
    signals: list[str]
    metadata: dict[str, Any]
```

### NewsContext

News analysis context with sentiment scoring.

```python
@dataclass
class NewsContext:
    symbol: str
    start_date: str
    end_date: str
    articles: list[NewsArticle]
    article_count: int
    sentiment_summary: SentimentScore
    metadata: dict[str, Any]
```

### StockSocialContext

Social media analysis context.

```python
@dataclass
class StockSocialContext:
    symbol: str
    date: str
    posts: list[PostData]
    post_count: int
    sentiment_summary: SentimentScore
    engagement_metrics: EngagementMetrics
    metadata: dict[str, Any]
```

## Error Handling

All services implement consistent error handling patterns:

```python
import logging

logger = logging.getLogger(__name__)

try:
    context = service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")
except ServiceException as e:
    # Service-level errors (API failures, data validation)
    logger.error(f"Service error: {e}")
except ClientException as e:
    # Client-level errors (network, authentication)
    logger.error(f"Client error: {e}")
except RepositoryException as e:
    # Repository-level errors (file I/O, cache corruption)
    logger.error(f"Repository error: {e}")
```

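A standalone sketch of how such an exception hierarchy keeps the three `except` arms distinct: only the three exception names come from the example above, and the common `TradingAgentsError` base is an assumption for illustration:

```python
class TradingAgentsError(Exception):
    """Hypothetical shared base; the real hierarchy may differ."""

class ServiceException(TradingAgentsError):
    """Service-level failure (API errors, data validation)."""

class ClientException(TradingAgentsError):
    """Client-level failure (network, authentication)."""

class RepositoryException(TradingAgentsError):
    """Repository-level failure (file I/O, cache corruption)."""

def load_prices(symbol: str):
    # Simulate a network failure in the client layer.
    raise ClientException(f"network error while fetching {symbol}")

try:
    load_prices("AAPL")
except ServiceException:
    handled = "service"
except ClientException:
    # Sibling classes don't catch each other, so this arm fires.
    handled = "client"
assert handled == "client"
```

Because each exception type maps to one architectural layer, a caller can also catch `TradingAgentsError` alone when it only cares that *some* framework-level failure occurred.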
## Multi-LLM Provider Support

### OpenAI Configuration

```bash
export LLM_PROVIDER="openai"
export OPENAI_API_KEY="your_openai_api_key"
export DEEP_THINK_LLM="gpt-4"
export QUICK_THINK_LLM="gpt-4o-mini"
export BACKEND_URL="https://api.openai.com/v1"
```

### Anthropic Configuration

```bash
export LLM_PROVIDER="anthropic"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export DEEP_THINK_LLM="claude-3-5-sonnet-latest"
export QUICK_THINK_LLM="claude-3-haiku-latest"
export BACKEND_URL="https://api.anthropic.com"
```

### Google Configuration

```bash
export LLM_PROVIDER="google"
export GOOGLE_API_KEY="your_google_api_key"
export DEEP_THINK_LLM="gemini-1.5-pro"
export QUICK_THINK_LLM="gemini-1.5-flash"
export BACKEND_URL="https://generativelanguage.googleapis.com"
```


## Memory System

The framework includes a ChromaDB-based vector memory system for learning from past decisions.

```python
# Memory is automatically managed by TradingAgentsGraph
graph = TradingAgentsGraph(debug=True, config=config)

# Access memory directly if needed
memory = graph.get_memory()
similar_situations = memory.search("AAPL analysis", k=5)
```

**Memory Features:**

- Automatic storage of decision contexts and outcomes
- Vector similarity search for similar market situations
- Learning from historical performance to improve future decisions
- Configurable retention policies
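The "vector similarity search" above can be sketched with cosine similarity over embedding vectors. This is a minimal, dependency-free illustration of the idea, not ChromaDB's implementation; the memory texts and embeddings are made up for the example.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine of the angle between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query: list[float], memories: dict[str, list[float]], k: int) -> list[str]:
    # Return the k stored situations most similar to the query embedding
    ranked = sorted(memories, key=lambda m: cosine_similarity(query, memories[m]), reverse=True)
    return ranked[:k]

# Toy 3-dimensional "embeddings" of past situations
memories = {
    "AAPL earnings beat": [1.0, 0.0, 0.2],
    "TSLA recall news": [0.0, 1.0, 0.0],
    "AAPL supply chain": [0.9, 0.1, 0.3],
}
print(top_k([1.0, 0.0, 0.25], memories, k=2))
```

Real embeddings have hundreds of dimensions and the store uses an approximate nearest-neighbor index, but the ranking principle is the same.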

## Best Practices

### Service Usage Patterns

```python
# Always read from repository first (cached data)
context = service.get_context(symbol, start_date, end_date)

# Use separate update operations for fresh data
if context.metadata.get("data_quality") == "LOW":
    service.update_data(symbol, start_date, end_date)
    context = service.get_context(symbol, start_date, end_date)
```

### Error Resilience

```python
# Services return contexts with quality metadata
context = service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")

if context.metadata.get("data_quality") == "HIGH":
    # Use data confidently
    latest_price = context.latest_price
else:
    # Handle degraded data quality
    logger.warning("Using cached/partial data due to API issues")
```

### Configuration Management

```python
# Use environment-based configuration for different environments
config = TradingAgentsConfig.from_env()

# Override specific settings for testing
test_config = config.copy()
test_config.online_tools = False  # Use cached data only
test_config.max_debate_rounds = 1  # Speed up tests
```

@ -0,0 +1,313 @@

# TradingAgents Architecture Documentation

## Multi-Agent Trading Framework

TradingAgents implements a sophisticated multi-agent system that mirrors real-world trading firms with specialized roles and structured workflows.

## Core Architecture Components

### 1. **Agent Teams** (Sequential Workflow)

```
Analyst Team → Research Team → Trading Team → Risk Management Team
```

**Analyst Team** (`tradingagents/agents/analysts/`)
- **Market Analyst**: Technical analysis using Yahoo Finance and StockStats
- **Fundamentals Analyst**: Financial statements and company fundamentals via SimFin/Finnhub
- **News Analyst**: News sentiment analysis and world affairs impact
- **Social Media Analyst**: Reddit and social platform sentiment analysis

**Research Team** (`tradingagents/agents/researchers/`)
- **Bull Researcher**: Advocates for investment opportunities and growth potential
- **Bear Researcher**: Highlights risks and argues against investments
- **Research Manager**: Synthesizes debates and creates investment recommendations

**Trading Team** (`tradingagents/agents/trader/`)
- **Trader**: Converts investment plans into specific trading decisions

**Risk Management Team** (`tradingagents/agents/risk_mgmt/`)
- **Aggressive/Conservative/Neutral Debaters**: Different risk perspectives
- **Risk Manager**: Final decision maker balancing risk and reward
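The sequential hand-off between the four teams can be sketched as a pipeline of functions that each enrich a shared state. This is a deliberately simplified illustration of the workflow shape only; the function names, state keys, and decision rules here are invented for the example, not the framework's actual node implementations.

```python
# Hedged sketch of the team hand-off: each "team" is a stage that reads and
# enriches a shared state dict (names and logic are illustrative only).
def analyst_team(state: dict) -> dict:
    state["reports"] = ["market", "fundamentals", "news", "social"]
    return state

def research_team(state: dict) -> dict:
    state["recommendation"] = "bullish" if "market" in state["reports"] else "neutral"
    return state

def trading_team(state: dict) -> dict:
    state["plan"] = f"enter position ({state['recommendation']})"
    return state

def risk_team(state: dict) -> dict:
    state["decision"] = "BUY" if state["recommendation"] == "bullish" else "HOLD"
    return state

PIPELINE = [analyst_team, research_team, trading_team, risk_team]

def run(symbol: str) -> dict:
    state: dict = {"symbol": symbol}
    for team in PIPELINE:  # strict sequential ordering, as in the diagram above
        state = team(state)
    return state

print(run("AAPL")["decision"])
```

In the real system this ordering is expressed as a LangGraph graph with conditional edges rather than a flat list.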

### 2. **Domain-Driven Architecture** (`tradingagents/domains/`)

**Domain-Driven Design (DDD) Architecture** (Current):
The system has been restructured using Domain-Driven Design principles with three main bounded contexts:

**Domain Boundaries & Bounded Contexts:**
- **Financial Data Domain** (`tradingagents/domains/marketdata/`): Market prices, technical indicators, fundamentals, insider data
- **News Domain** (`tradingagents/domains/news/`): News articles, sentiment analysis, content aggregation
- **Social Media Domain** (`tradingagents/domains/socialmedia/`): Social media posts, engagement metrics, sentiment analysis

**DDD Tactical Patterns per Domain:**
- **Domain Services**: Business logic encapsulated in domain-specific services (`MarketDataService`, `NewsService`, `SocialMediaService`)
- **Value Objects**: Immutable data structures (`SentimentScore`, `TechnicalIndicatorData`, `PostMetadata`)
- **Entities**: Objects with identity and lifecycle (`NewsArticle`, `PostData`)
- **Repository Pattern**: Domain-specific data access with smart caching, deduplication, and gap detection
- **Context Objects**: Structured domain data containers (`MarketDataContext`, `NewsContext`, `SocialContext`)
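The value-object pattern in the list above can be sketched with a frozen dataclass: immutable, validated at construction, and compared by value rather than identity. The field names below are illustrative, assuming a shape for `SentimentScore`, not the framework's exact model.

```python
from dataclasses import dataclass

# Hedged sketch of a value object: frozen (immutable), compared by value.
# Field names and ranges are assumptions for illustration.
@dataclass(frozen=True)
class SentimentScore:
    score: float       # -1.0 (bearish) .. 1.0 (bullish)
    confidence: float  # 0.0 .. 1.0

    def __post_init__(self) -> None:
        # Value objects validate their invariants at construction time
        if not -1.0 <= self.score <= 1.0:
            raise ValueError("score out of range")

a = SentimentScore(0.6, 0.9)
b = SentimentScore(0.6, 0.9)
print(a == b)  # equal by value, even though they are distinct objects
```

Because the object is frozen, any "change" produces a new instance, which makes value objects safe to cache and share across agents.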

**Domain Infrastructure per Bounded Context:**
```
marketdata/
├── clients/        # YFinanceClient, FinnhubClient (domain-specific)
├── repos/          # MarketDataRepository, FundamentalRepository
├── services/       # MarketDataService, FundamentalDataService, InsiderDataService
└── models/         # Domain Value Objects and Entities

news/
├── clients/        # GoogleNewsClient (domain-specific)
├── repositories/   # NewsRepository with article deduplication
├── services/       # NewsService with sentiment analysis
└── models/         # NewsArticle, SentimentScore

socialmedia/
├── clients/        # RedditClient (domain-specific)
├── repositories/   # SocialMediaRepository with engagement tracking
├── services/       # SocialMediaService with sentiment analysis
└── models/         # PostData, EngagementMetrics
```

**Agent Integration Strategy - Anti-Corruption Layer (ACL):**
- **AgentToolkit as ACL**: Mediates between agents (string-based, procedural) and domains (object-oriented, rich models)
- **Data Translation**: Converts rich Pydantic domain models to structured JSON strings for LLM consumption
- **Parameter Adaptation**: Handles interface mismatches (single date → date ranges, etc.)
- **Backward Compatibility**: Preserves existing agent tool interface while providing domain service benefits
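The "Data Translation" step of the ACL can be sketched as flattening a rich domain object into a structured JSON string before it reaches the LLM. The sketch below uses a plain dataclass as a simplified stand-in for the Pydantic models the framework actually uses; the payload shape is an assumption.

```python
import json
from dataclasses import asdict, dataclass

# Simplified stand-in for a rich domain entity (the real one is a Pydantic model)
@dataclass
class NewsArticle:
    title: str
    sentiment: float

def to_llm_payload(articles: list[NewsArticle]) -> str:
    # ACL translation: rich objects -> structured JSON string for LLM consumption
    return json.dumps({
        "article_count": len(articles),
        "articles": [asdict(a) for a in articles],
    })

payload = to_llm_payload([NewsArticle("AAPL beats estimates", 0.7)])
print(payload)
```

The agent side only ever sees the string, so domain model refactors stay contained behind the toolkit boundary.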

### 3. **Graph Orchestration** (`tradingagents/graph/`)

LangGraph-based workflow management:

- **TradingAgentsGraph**: Main orchestrator class
- **State Management**: `AgentState`, `InvestDebateState`, `RiskDebateState` track workflow progress
- **Conditional Logic**: Dynamic routing based on tool usage and debate completion
- **Memory System**: ChromaDB-based vector memory for learning from past decisions

### 4. **Configuration System**
- **TradingAgentsConfig**: Centralized configuration with environment variable support
- **Multi-LLM Support**: OpenAI, Anthropic, Google, Ollama, OpenRouter
- **Data Modes**: Online (live APIs) vs offline (cached data)

## Architecture Design Patterns and Principles

### Core Design Principles

1. **Separation of Concerns**: Each domain has clear boundaries and responsibilities
2. **Single Responsibility Principle**: Each class and module has one reason to change
3. **Dependency Inversion**: High-level modules depend on abstractions, not concrete implementations
4. **Open/Closed Principle**: Modules are open for extension but closed for modification
5. **Interface Segregation**: Clients only depend on methods they actually use

### Key Architectural Patterns

1. **Domain-Driven Design (DDD)**:
   - Bounded contexts ensure clear separation between domains
   - Ubiquitous language ensures consistent terminology across code and business
   - Entities, Value Objects, and Aggregates provide rich domain models
   - Repositories abstract data persistence concerns

2. **Anti-Corruption Layer (ACL)**:
   - AgentToolkit protects domain models from agent implementation details
   - Translation layer ensures clean integration between procedural agents and object-oriented domains
   - Backward compatibility maintained while improving architecture

3. **Repository Pattern**:
   - Smart caching reduces API calls and improves performance
   - Data deduplication ensures consistency
   - Gap detection identifies missing data ranges
   - Local storage provides offline capability

4. **Service Layer Pattern**:
   - Business logic encapsulated in domain services
   - Thin client implementations for API integrations
   - Rich context objects provide structured data for agents
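The gap detection mentioned under the Repository Pattern can be sketched as follows. This is a hedged illustration under the assumption that cached coverage is stored as inclusive `(start, end)` date pairs; the repository's actual bookkeeping may differ.

```python
from datetime import date, timedelta

def find_gaps(
    cached: list[tuple[date, date]], start: date, end: date
) -> list[tuple[date, date]]:
    """Return the inclusive date ranges in [start, end] not covered by `cached`."""
    gaps: list[tuple[date, date]] = []
    cursor = start
    for c_start, c_end in sorted(cached):
        if c_start > cursor:
            # Uncovered stretch before this cached range begins
            gaps.append((cursor, min(c_start - timedelta(days=1), end)))
        cursor = max(cursor, c_end + timedelta(days=1))
        if cursor > end:
            break
    if cursor <= end:
        # Tail stretch after the last cached range
        gaps.append((cursor, end))
    return gaps

cached = [(date(2024, 1, 5), date(2024, 1, 10))]
print(find_gaps(cached, date(2024, 1, 1), date(2024, 1, 15)))
```

Only the returned gap ranges need to be fetched from the external API, which is what keeps repeat queries cheap.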

### Data Flow Architecture

1. **Request Processing**:
   - Agents request data through AgentToolkit
   - Toolkit translates requests to domain service calls
   - Services retrieve data from repositories or clients
   - Results are formatted as structured context objects

2. **Data Updates**:
   - Services can update repositories with fresh data
   - Clients fetch data from external APIs
   - Data is cached with metadata for quality assessment

3. **Memory Integration**:
   - ChromaDB stores vector embeddings of past decisions
   - Similar situations are retrieved for context-aware decision making
   - Learning from historical performance improves future decisions

## Key Design Patterns

1. **Debate-Driven Decision Making**: Critical decisions emerge from structured agent debates
2. **Memory-Augmented Learning**: Agents learn from past similar situations using vector similarity
3. **Repository-First Data Strategy**: Services always read from repositories with separate update operations
4. **Structured JSON Contexts**: Replace error-prone string parsing with rich Pydantic models
5. **Factory Pattern**: Agent creation via factory functions for flexible configuration
6. **Signal Processing**: Final trading decisions processed into clean BUY/SELL/HOLD signals
7. **Quality-Aware Data**: All contexts include quality metadata to help agents make better decisions
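The signal-processing pattern (item 6 above) can be sketched as reducing a free-text final decision to a clean signal. The regex heuristic below is an assumption for illustration; the framework's actual parser may be more elaborate.

```python
import re

def extract_signal(decision_text: str) -> str:
    """Reduce a free-text decision to BUY/SELL/HOLD (heuristic sketch)."""
    match = re.search(r"\b(BUY|SELL|HOLD)\b", decision_text.upper())
    # Default conservatively to HOLD when no explicit signal word is found
    return match.group(1) if match else "HOLD"

print(extract_signal("Given strong momentum, the final recommendation is Buy."))
```

Downstream execution code then only ever has to branch on three well-defined values instead of parsing prose.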

## Working with Agents

**Current Approach** (AgentToolkit as Anti-Corruption Layer):
- Use `AgentToolkit` from `tradingagents.agents.libs.agent_toolkit`
- Toolkit injects all domain services via dependency injection
- Provides LangChain `@tool` decorated methods for agent consumption
- Returns rich Pydantic domain models directly to agents
- Handles parameter validation, date calculations, and error handling

**Agent Integration Pattern**:
```python
from datetime import datetime, timedelta

from langchain_core.tools import tool

from tradingagents.agents.libs.agent_toolkit import AgentToolkit

# AgentToolkit acts as Anti-Corruption Layer
toolkit = AgentToolkit(
    news_service=news_service,
    marketdata_service=marketdata_service,
    fundamentaldata_service=fundamentaldata_service,
    socialmedia_service=socialmedia_service,
    insiderdata_service=insiderdata_service,
)

# Agents use toolkit tools that return rich domain contexts
@tool
def analyze_stock(symbol: str, date: str):
    # Derive a lookback window from the analysis date
    end_date = date
    start_date = (datetime.strptime(date, "%Y-%m-%d") - timedelta(days=30)).strftime("%Y-%m-%d")

    # Get structured contexts from domain services via toolkit
    market_data = toolkit.get_market_data(symbol, start_date, end_date)
    social_data = toolkit.get_socialmedia_stock_info(symbol, date)
    news_data = toolkit.get_news(symbol, start_date, end_date)

    # Work with rich Pydantic models
    price = market_data.latest_price
    sentiment = social_data.sentiment_summary.score
    article_count = news_data.article_count
```

## Working with Data Sources

**Current Domain Service Approach**:
- **Repository-First**: Services always read data from repositories (local storage)
- **Separate Update Operations**: Use dedicated update methods to fetch fresh data from APIs and store it in repositories
- **Clear Separation**: Reading data and updating data are separate concerns
- **Structured Contexts**: Services return rich Pydantic models with metadata
- **Quality Awareness**: All contexts include data quality and source information

**Service Usage Pattern**:
```python
# Services use dependency injection
service = MarketDataService(
    yfin_client=YFinanceClient(),
    repo=MarketDataRepository("cache_dir"),
)

# Always read from repository
context = service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")

# Separate update operation to refresh repository data
service.update_market_data("AAPL", "2024-01-01", "2024-01-31")
```

## Performance Optimization Guidelines

The TradingAgents framework is designed for efficiency in financial analysis workflows. Here are key optimization strategies:

### 1. Caching and Data Management

**Repository Pattern Caching:**
- All domain services use a repository-first data access pattern
- Data is cached locally to minimize API calls
- Smart caching with automatic invalidation based on data freshness
- Deduplication and gap detection in stored data
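The "automatic invalidation based on data freshness" above can be sketched as a time-to-live (TTL) cache: entries carry their storage time and are discarded once stale. This is a simplified stand-in for the repository's actual policy; the key format and TTL value are assumptions.

```python
import time

class TTLCache:
    """Minimal freshness-based cache sketch: stale entries force a refresh."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def set(self, key: str, value: object) -> None:
        self._store[key] = (time.monotonic(), value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # stale: caller should fetch fresh data
            return None
        return value

cache = TTLCache(ttl_seconds=300)
cache.set("AAPL:2024-01", {"latest_price": 185.92})
print(cache.get("AAPL:2024-01"))
```

A `None` return signals the service to fall back to the external API and repopulate the cache.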

**Best Practices:**
```python
# Efficient data access pattern
service = MarketDataService.build(config)

# Always read from repository first (cached data)
context = service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")

# Only update when fresh data is needed
service.update_market_data("AAPL", "2024-01-01", "2024-01-31")
```

### 2. LLM Cost and Performance Optimization

**Model Selection Strategy:**
- Use `quick_think_llm` for simple data retrieval and formatting tasks
- Reserve `deep_think_llm` for complex analysis and decision-making
- Configure appropriate models based on your cost/performance requirements

**API Call Minimization:**
- Batch similar requests when possible
- Cache LLM responses for identical queries
- Use structured outputs to reduce the need for follow-up clarifications
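"Cache LLM responses for identical queries" can be sketched as memoization keyed on a hash of the (model, prompt) pair, so the provider is only called on a cache miss. The `fake_llm` function below is a stand-in for a real provider call; nothing here is the framework's actual caching layer.

```python
import hashlib

_response_cache: dict[str, str] = {}
calls = 0

def fake_llm(prompt: str) -> str:
    """Stand-in for a real (billable) provider call."""
    global calls
    calls += 1
    return f"analysis of: {prompt}"

def cached_completion(model: str, prompt: str) -> str:
    # Key on a stable hash of model + prompt so identical queries hit the cache
    key = hashlib.sha256(f"{model}\n{prompt}".encode()).hexdigest()
    if key not in _response_cache:
        _response_cache[key] = fake_llm(prompt)
    return _response_cache[key]

cached_completion("gpt-4o-mini", "Summarize AAPL news")
cached_completion("gpt-4o-mini", "Summarize AAPL news")
print(calls)  # the second, identical query was served from the cache
```

This only helps for exact-duplicate queries; sampling temperature and date-sensitive prompts reduce hit rates in practice.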

### 3. Memory Management

**Vector Memory Optimization:**
- ChromaDB-based memory system automatically manages vector storage
- Configure memory retention policies to balance performance and storage
- Use memory efficiently by storing only key decision points and learnings

### 4. Parallel Processing

**Graph Execution Optimization:**
- LangGraph workflows can execute independent nodes in parallel
- Configure appropriate concurrency limits to avoid API rate limiting
- Monitor and optimize critical paths in the workflow graph
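Bounding concurrency to respect rate limits can be sketched with a fixed-size thread pool: independent analyses run in parallel, but never more than `max_workers` at once. The `analyze` function here is a hypothetical stand-in for an independent graph node, not a framework API.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(symbol: str) -> str:
    """Stand-in for an independent, I/O-bound analysis node."""
    return f"{symbol}: done"

symbols = ["AAPL", "GOOGL", "MSFT", "NVDA"]

# At most 2 analyses in flight at any moment, regardless of list length
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(analyze, symbols))

print(results)
```

`pool.map` preserves input order, so results line up with `symbols` even though execution interleaves.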

**Configuration for Performance:**
```python
# Optimize for cost-sensitive environments
config = TradingAgentsConfig(
    deep_think_llm="gpt-4.1-mini",   # Lower cost model
    quick_think_llm="gpt-4.1-mini",  # Lower cost model
    max_debate_rounds=1,             # Reduce debate rounds
    online_tools=False,              # Use cached data when possible
    default_lookback_days=30,        # Limit data range
)
```

## Data Directory Structure

The TradingAgents framework expects a specific directory structure for data storage:

```
project_dir/
├── results/              # Analysis results (configurable via results_dir)
├── dataflows/
│   └── data_cache/       # Cached data (automatically managed)
├── tradingagents/        # Core framework code
└── cli/                  # Command-line interface
```

For custom data directories, ensure the path exists and is writable. The framework will automatically create necessary subdirectories for caching.
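Preparing that layout under a custom project directory can be sketched with `pathlib`; the example uses a temporary directory so it is side-effect free, and the helper name is illustrative rather than a framework API.

```python
import tempfile
from pathlib import Path

def ensure_layout(project_dir: Path) -> None:
    """Create the expected writable data subdirectories (idempotent)."""
    for sub in ("results", "dataflows/data_cache"):
        (project_dir / sub).mkdir(parents=True, exist_ok=True)

root = Path(tempfile.mkdtemp())
ensure_layout(root)
print((root / "dataflows" / "data_cache").is_dir())
```

`exist_ok=True` makes the call safe to repeat, matching the framework's behavior of creating missing cache subdirectories automatically.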

## File Structure Context
- **`cli/`**: Interactive command-line interface
- **`tradingagents/agents/`**: All agent implementations
  - **`libs/agent_toolkit.py`**: AgentToolkit Anti-Corruption Layer with LangChain @tool decorators
  - **`libs/context_helpers.py`**: Helper functions for parsing structured JSON data
  - **`libs/agent_utils.py`**: Legacy Toolkit (being phased out)
- **`tradingagents/domains/`**: Domain-Driven Design bounded contexts
  - **`marketdata/`**: Financial data domain (prices, indicators, fundamentals, insider data)
  - **`news/`**: News domain (articles, sentiment analysis)
  - **`socialmedia/`**: Social media domain (posts, engagement, sentiment)
- **`tradingagents/dataflows/`**: Legacy data source integrations (being phased out)
- **`tradingagents/graph/`**: LangGraph workflow orchestration
- **`tradingagents/config.py`**: Configuration management
- **`main.py`**: Direct Python usage example
- **`docs/agent-development.md`**: Detailed agent documentation

## Trading Strategy Implementation

The strategy is implemented through a graph-based workflow using LangGraph:

1. **Sequential Processing**: Analyst teams process data in sequence
2. **Debate-Driven Decision Making**: Researchers engage in structured debates
3. **Risk Assessment**: Risk managers evaluate potential downside scenarios
4. **Signal Generation**: Final trading decisions are processed into clean signals

@ -0,0 +1,443 @@

# TradingAgents Troubleshooting Guide

This guide covers common issues and solutions when working with the TradingAgents framework.

## Common Issues

### Installation and Setup Issues

#### Missing Dependencies
**Problem:** `ModuleNotFoundError` when importing TradingAgents components.

**Solution:**
```bash
# Ensure all dependencies are installed
pip install -r requirements.txt

# Or use mise for development setup
mise run setup
```

#### Python Version Compatibility
**Problem:** Type annotation errors or syntax issues.

**Solution:** TradingAgents requires Python 3.10+. Check your version:
```bash
python --version  # Should be 3.10 or higher

# If using conda
conda create -n tradingagents python=3.13
conda activate tradingagents
```

### Configuration Issues

#### API Key Not Found
**Problem:** `API key not found` errors when running analysis.

**Solutions:**
```bash
# Set required API keys
export OPENAI_API_KEY="your_openai_api_key"
export FINNHUB_API_KEY="your_finnhub_api_key"

# For other providers
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export GOOGLE_API_KEY="your_google_api_key"

# Verify keys are set
echo $OPENAI_API_KEY
```

#### Invalid LLM Provider
**Problem:** `Invalid LLM_PROVIDER` error.

**Solution:** Check valid providers:
```bash
# Valid options: openai, anthropic, google, ollama, openrouter
export LLM_PROVIDER="openai"  # or anthropic, google, etc.
```

#### Directory Permission Issues
**Problem:** `Permission denied` when accessing data directories.

**Solutions:**
```bash
# Check directory permissions
ls -la ./results ./dataflows

# Create directories with proper permissions
mkdir -p results dataflows/data_cache
chmod 755 results dataflows/data_cache

# Or use environment variables for custom paths
export TRADINGAGENTS_RESULTS_DIR="./my_results"
export TRADINGAGENTS_DATA_DIR="./my_data"
```

### Data and API Issues

#### Rate Limiting
**Problem:** API rate limit exceeded errors.

**Solutions:**
```python
# Use cached data to reduce API calls
config = TradingAgentsConfig(online_tools=False)

# Reduce debate rounds to minimize LLM API calls
config = TradingAgentsConfig(
    max_debate_rounds=1,
    max_risk_discuss_rounds=1,
)

# Use smaller/cheaper models
config = TradingAgentsConfig(
    deep_think_llm="gpt-4o-mini",
    quick_think_llm="gpt-4o-mini",
)
```

#### Empty or Invalid Data
**Problem:** Services return empty contexts or "LOW" data quality.

**Solutions:**
```python
# Check data quality in context metadata
context = service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")
if context.metadata.get("data_quality") == "LOW":
    # Try updating data first
    service.update_market_data("AAPL", "2024-01-01", "2024-01-31")
    context = service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")

# Check for weekend/holiday dates
from datetime import datetime
date_obj = datetime.strptime("2024-01-15", "%Y-%m-%d")
if date_obj.weekday() >= 5:  # Saturday=5, Sunday=6
    print("Markets are closed on weekends")
```

#### Network/Connectivity Issues
**Problem:** Network timeouts or connection errors.

**Solutions:**
```python
# Increase timeout settings (implementation dependent)
# Use offline mode during network issues
config = TradingAgentsConfig(online_tools=False)

# Check network connectivity
import requests
try:
    requests.get("https://api.openai.com/v1/models", timeout=10)
    print("Network connectivity OK")
except requests.RequestException as e:
    print(f"Network issue: {e}")
```

### Performance Issues

#### Slow Analysis Performance
**Problem:** Trading analysis takes too long to complete.

**Optimizations:**
```python
# Use faster models for non-critical operations
config = TradingAgentsConfig(
    quick_think_llm="gpt-4o-mini",  # Fastest model for simple tasks
    deep_think_llm="gpt-4o",        # Balance of speed and quality
)

# Reduce debate complexity
config = TradingAgentsConfig(
    max_debate_rounds=1,
    max_risk_discuss_rounds=1,
)

# Use cached data when possible
config = TradingAgentsConfig(online_tools=False)

# Reduce data range
config = TradingAgentsConfig(
    default_lookback_days=14,
    default_ta_lookback_days=14,
)
```

#### High Memory Usage
**Problem:** Memory usage grows during analysis.

**Solutions:**
```python
# Clear memory between analyses
import gc
gc.collect()

# Limit vector memory size (if using memory features)
# Configure ChromaDB retention policies in graph initialization

# Process stocks one at a time instead of in batches
for symbol in ["AAPL", "GOOGL", "MSFT"]:
    result, decision = ta.propagate(symbol, "2024-01-15")
    # Process result immediately
    gc.collect()  # Clear memory between iterations
```

#### High API Costs
**Problem:** LLM API costs are higher than expected.

**Cost Optimization:**
```python
# Use smaller models
config = TradingAgentsConfig(
    deep_think_llm="gpt-4o-mini",  # Much cheaper than gpt-4
    quick_think_llm="gpt-4o-mini",
)

# Reduce agent interactions
config = TradingAgentsConfig(
    max_debate_rounds=1,  # Fewer back-and-forth discussions
    max_risk_discuss_rounds=1,
)

# Use cached data to reduce context size
config = TradingAgentsConfig(online_tools=False)

# Monitor token usage
ta = TradingAgentsGraph(debug=True, config=config)  # Enable debug for token tracking
```

### Code Development Issues

#### Type Checking Errors
**Problem:** `pyright` or type checker complaints.

**Solutions:**
```bash
# Run type checker to see specific issues
mise run typecheck
```

Common fixes:
```python
# 1. Add type annotations
def get_data(symbol: str, date: str) -> dict[str, Any]:
    return {}

# 2. Use proper imports
from typing import Any, Optional
from datetime import datetime

# 3. Handle optional types
def process_data(data: dict[str, Any] | None) -> str:
    if data is None:
        return "No data"
    return str(data)
```

#### Import Errors
**Problem:** Cannot import TradingAgents modules.

**Solutions:**
```python
# Ensure you're in the correct directory
import os
print(os.getcwd())  # Should be /path/to/TradingAgents

# Add project root to Python path if needed
import sys
sys.path.append("/path/to/TradingAgents")

# Use absolute imports
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.config import TradingAgentsConfig
```

#### Test Failures
**Problem:** Tests fail when running `mise run test`.

**Solutions:**
```bash
# Run tests with verbose output
mise run test -v

# Run specific test file
uv run pytest tradingagents/domains/marketdata/market_data_service_test.py -v

# Run tests with debug output
uv run pytest -s  # Shows print statements

# Check test dependencies
pip install pytest pytest-cov responses  # Common test dependencies
```

### Data Quality Issues

#### Inconsistent Results
**Problem:** The same analysis produces different results.

**Causes and Solutions:**
```python
# 1. Non-deterministic LLM responses
config = TradingAgentsConfig(
    # Use temperature settings if your LLM provider supports it
    # This varies by provider
)

# 2. Real-time data changes
config = TradingAgentsConfig(online_tools=False)  # Use cached data for consistent results

# 3. Date/time sensitivity
# Always use consistent date formats and timezones
from datetime import datetime
date_str = datetime.now().strftime("%Y-%m-%d")
```

#### Missing Historical Data
**Problem:** No data available for specific dates or symbols.

**Solutions:**
```python
# Check if symbol exists and markets were open
def is_valid_trading_day(date_str: str) -> bool:
    from datetime import datetime
    date_obj = datetime.strptime(date_str, "%Y-%m-%d")
    # Skip weekends
    if date_obj.weekday() >= 5:
        return False
    # Add holiday checking logic here
    return True

# Use broader date ranges instead of a single day, which might be missing
service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")  # Month range
```

## Debugging Strategies

### Enable Debug Mode
```python
# Enable debug logging
import logging
logging.basicConfig(level=logging.DEBUG)

# Enable debug in TradingAgentsGraph
ta = TradingAgentsGraph(debug=True, config=config)
```

### Check Configuration
```python
# Print current configuration
config = TradingAgentsConfig.from_env()
print("Current config:")
for key, value in config.to_dict().items():
    print(f"  {key}: {value}")
```

### Validate Data Quality
```python
# Always check context metadata
context = service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")
print(f"Data quality: {context.metadata.get('data_quality', 'UNKNOWN')}")
print(f"Data source: {context.metadata.get('source', 'UNKNOWN')}")
print(f"Last updated: {context.metadata.get('last_updated', 'UNKNOWN')}")
```

### Test Individual Components
```python
# Test services individually
from tradingagents.domains.marketdata.market_data_service import MarketDataService

service = MarketDataService.build(config)
context = service.get_market_data_context("AAPL", "2024-01-01", "2024-01-31")
print(f"Service working: {len(context.price_data) > 0}")
```

### Monitor API Usage
```python
# Track API calls in debug mode
# Many providers show token usage in debug output
ta = TradingAgentsGraph(debug=True, config=config)
result, decision = ta.propagate("AAPL", "2024-01-15")
# Check debug output for token usage statistics
```

## Performance Monitoring

### Measure Execution Time
```python
import time
from tradingagents.graph.trading_graph import TradingAgentsGraph

start_time = time.time()
ta = TradingAgentsGraph(debug=True, config=config)
result, decision = ta.propagate("AAPL", "2024-01-15")
end_time = time.time()

print(f"Analysis completed in {end_time - start_time:.2f} seconds")
```

### Memory Monitoring
```python
import os
import psutil

def get_memory_usage():
    process = psutil.Process(os.getpid())
    return process.memory_info().rss / 1024 / 1024  # MB

print(f"Memory before: {get_memory_usage():.1f} MB")
result, decision = ta.propagate("AAPL", "2024-01-15")
print(f"Memory after: {get_memory_usage():.1f} MB")
```

### Cost Tracking
```python
# Track API costs (varies by provider)
# Most LLM providers show token usage in their API responses
# Monitor your provider's dashboard for detailed cost tracking

# For OpenAI, costs can be estimated:
# gpt-4o-mini: ~$0.15 per 1M input tokens, ~$0.60 per 1M output tokens
# gpt-4: ~$30 per 1M input tokens, ~$60 per 1M output tokens
```
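Those per-million-token prices turn into a simple worked estimate. Treat the figures as illustrative (provider prices change); the token counts below are an assumed example run, not measured framework usage.

```python
# Worked cost estimate using the approximate per-1M-token prices quoted above
PRICES = {  # model: (input $/1M tokens, output $/1M tokens) -- illustrative
    "gpt-4o-mini": (0.15, 0.60),
    "gpt-4": (30.0, 60.0),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# e.g. a hypothetical analysis run: 50k input tokens, 5k output tokens
print(round(estimate_cost("gpt-4o-mini", 50_000, 5_000), 4))
```

At these rates the same run on `gpt-4` costs roughly $1.80, which is why the cost-optimization advice above favors the smaller model for routine steps.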

## Getting Help

### Check Logs

```bash
# Enable Python logging
export PYTHONPATH="."
export LOGLEVEL="DEBUG"
python -m cli.main
```
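
If the CLI does not pick up `LOGLEVEL` from your shell, the standard-library `logging` module can be configured directly before running an analysis. This assumes the package logs through `logging`, which is a guess rather than a documented guarantee:

```python
import logging
import os

# Honor the LOGLEVEL variable from the shell, defaulting to INFO;
# force=True overrides any handler configuration already installed
logging.basicConfig(
    level=os.environ.get("LOGLEVEL", "INFO"),
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
    force=True,
)

logging.getLogger("tradingagents").debug("debug logging enabled")
```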

### Community Resources

- **GitHub Issues**: [TauricResearch/TradingAgents/issues](https://github.com/TauricResearch/TradingAgents/issues)
- **Discord**: [TradingResearch Community](https://discord.com/invite/hk9PGKShPK)
- **Documentation**: [API Reference](./API_REFERENCE.md) and [CLAUDE.md](./CLAUDE.md)

### Reporting Issues

When reporting issues, include:

1. Python version (`python --version`)
2. Operating system
3. Full error traceback
4. Configuration used (remove API keys)
5. Steps to reproduce
6. Expected vs actual behavior
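
Items 1 and 2 can be collected automatically; the snippet below is a small convenience, and the `report` field names are illustrative rather than a required format:

```python
import platform
import sys

report = {
    "python_version": sys.version.split()[0],  # item 1
    "operating_system": platform.platform(),   # item 2
}

for key, value in report.items():
    print(f"{key}: {value}")
```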

### Emergency Recovery

If the system becomes completely unusable:

```bash
# Reset to clean state
rm -rf dataflows/data_cache/  # Clear cached data
rm -rf results/               # Clear result files

# Reinstall dependencies
pip uninstall tradingagents -y
pip install -r requirements.txt

# Use minimal configuration
export ONLINE_TOOLS=false
export MAX_DEBATE_ROUNDS=1
export DEEP_THINK_LLM="gpt-4o-mini"
export QUICK_THINK_LLM="gpt-4o-mini"
```
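
The same minimal settings can be inspected from Python before starting an analysis. The variable names match the exports above, but the parsed dictionary is only a sketch: it is not automatically consumed by `TradingAgentsConfig`.

```python
import os

# Parse the minimal-configuration environment variables exported above
minimal = {
    "online_tools": os.environ.get("ONLINE_TOOLS", "false").lower() == "true",
    "max_debate_rounds": int(os.environ.get("MAX_DEBATE_ROUNDS", "1")),
    "deep_think_llm": os.environ.get("DEEP_THINK_LLM", "gpt-4o-mini"),
    "quick_think_llm": os.environ.get("QUICK_THINK_LLM", "gpt-4o-mini"),
}

print(minimal)
```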

@@ -0,0 +1,17 @@
model_list:
  - model_name: "*"  # Catches any model request
    litellm_params:
      model: "openrouter/qwen/qwen3-coder"
      api_key: os.environ/OPENROUTER_API_KEY
      stream: false
      timeout: 600  # 10 minutes total - complex code can take time
      stop: []

general_settings:
  drop_params: true
  stream: false

router_settings:
  num_retries: 10
  retry_after: 2
  allowed_fails: 100
@@ -5,7 +5,7 @@ from typing import Annotated
 
 from langchain_core.tools import tool
 
-from tradingagents.config import DEFAULT_CONFIG, TradingAgentsConfig
+from tradingagents.config import TradingAgentsConfig
 from tradingagents.domains.marketdata.fundamental_data_service import (
     BalanceSheetContext,
     CashFlowContext,

@@ -45,7 +45,7 @@ class AgentToolkit:
         marketdata_service: MarketDataService,
         fundamentaldata_service: FundamentalDataService,
         insiderdata_service: InsiderDataService,
-        config: TradingAgentsConfig = DEFAULT_CONFIG,
+        config: TradingAgentsConfig,
     ):
         self._news_service = news_service
         self._marketdata_service = marketdata_service

@@ -95,6 +95,7 @@ class TradingAgentsGraph:
             market_data_service,
             fundamental_data_service,
             insider_data_service,
+            self.config,
         )
 
         # Initialize memories