feat: Initial setup improvements and architecture planning

- Add .env file support with automatic environment loading
- Create comprehensive architecture planning document
- Improve main.py with user-friendly CLI interface
- Fix hardcoded paths in default_config.py
- Set cost-efficient testing defaults (gpt-4o-mini, offline mode)
- Add python-dotenv dependency for secure API key management
- Update .gitignore to exclude .env files
- Add six-month development roadmap from testing to production
- Ready for cost-efficient architecture testing

This commit is contained in:
Marc Platt 2025-06-09 18:41:38 -04:00
parent 0fdfd35867
commit 2af03ecf69
7 changed files with 574 additions and 21 deletions

.gitignore vendored

@@ -6,3 +6,4 @@ src/
eval_results/
eval_data/
*.egg-info/
.env

ARCHITECTURE_PLAN.md Normal file

@@ -0,0 +1,442 @@
# TradingAgents: Master Architecture & Development Plan
## 📋 Project Overview
**TradingAgents** is a multi-agent LLM framework for financial trading that simulates real-world trading firms through specialized AI agents collaborating on market analysis and trading decisions.
**Current Status**: ✅ Basic Setup Complete | 🧪 Architecture Testing Phase
**Last Updated**: 2025-01-27
**Version**: v0.1.0-alpha
---
## 🏗️ Current Architecture
```mermaid
graph TB
subgraph "Input Layer"
A[Market Data APIs]
B[News Sources]
C[Social Media]
end
subgraph "Data Processing Layer"
D[FinnHub API]
E[Yahoo Finance]
F[Reddit API]
G[Google News]
end
subgraph "Agent Framework (LangGraph)"
H[Analyst Team]
I[Research Team]
J[Trading Team]
K[Risk Management]
end
subgraph "Analyst Team"
H1[Market Analyst]
H2[Sentiment Analyst]
H3[News Analyst]
H4[Fundamentals Analyst]
end
subgraph "Research Team"
I1[Bull Researcher]
I2[Bear Researcher]
I3[Research Manager]
end
subgraph "Trading Team"
J1[Trader Agent]
end
subgraph "Risk Management"
K1[Risk Analyst]
K2[Portfolio Manager]
end
subgraph "LLM Backend"
L[OpenAI API]
M[gpt-4o-mini]
end
subgraph "Memory & State"
N[Financial Memory]
O[Agent State]
P[Trading History]
end
subgraph "Output Layer"
Q[Trading Decisions]
R[Risk Assessments]
S[Reports]
end
A --> D
B --> G
C --> F
D --> H
E --> H
F --> H
G --> H
H --> I
I --> J
J --> K
H1 --> H
H2 --> H
H3 --> H
H4 --> H
I1 --> I
I2 --> I
I3 --> I
J1 --> J
K1 --> K
K2 --> K
L --> H
L --> I
L --> J
L --> K
N --> H
N --> I
N --> J
N --> K
K --> Q
K --> R
K --> S
```
### 🔧 Technical Stack
- **Framework**: LangGraph for agent orchestration
- **LLMs**: OpenAI GPT-4o-mini (testing), planned DeepSeek (production)
- **Data Sources**: FinnHub, Yahoo Finance, Reddit, Google News
- **Memory**: Custom FinancialSituationMemory
- **Environment**: Python 3.9+, .env configuration
---
## 📊 Current State Assessment
### ✅ Completed Components
- [x] **Environment Setup**: API keys, dependencies, .env configuration
- [x] **Core Agent Framework**: LangGraph-based multi-agent system
- [x] **Data Integration**: FinnHub, Yahoo Finance, Reddit APIs
- [x] **Basic Agent Types**: Market, Sentiment, News, Fundamentals analysts
- [x] **Memory System**: Agent memory for learning from past decisions
- [x] **CLI Interface**: Interactive command-line interface
### 🔄 In Progress
- [ ] **Architecture Testing**: Validating agent interactions and decision flow
- [ ] **Cost Optimization**: Minimizing API calls for testing phase
- [ ] **Error Handling**: Robust error handling for API failures
### ❌ Pending
- [ ] **Historical Backtesting**: Testing against historical market data
- [ ] **Performance Metrics**: Quantitative evaluation of trading decisions
- [ ] **Local Model Integration**: DeepSeek model deployment
- [ ] **Scalability Testing**: Multi-asset, multi-timeframe analysis
- [ ] **Production Infrastructure**: Raspberry Pi cluster setup
---
## 📝 Changelog
### v0.1.0-alpha (2025-01-27)
- **Added**: Initial project setup and environment configuration
- **Added**: .env file support for secure API key management
- **Added**: Complete dependency installation and verification
- **Added**: LangGraph-based agent framework
- **Added**: Multi-agent analyst team (Market, Sentiment, News, Fundamentals)
- **Added**: Research team with Bull/Bear researchers
- **Added**: Trading and Risk Management agents
- **Added**: CLI interface for interactive testing
- **Fixed**: Environment variable loading issues
- **Changed**: Model configuration to use gpt-4o-mini for cost efficiency
---
## 🧪 Phase 1: Architecture Testing & Validation
### Immediate Next Steps (Week 1-2)
1. **OpenAI API Resolution**
- [ ] Add payment method to OpenAI account
- [ ] Verify API quota and rate limits
- [ ] Test minimal API calls for functionality
2. **Agent Flow Testing**
- [ ] Test single agent execution (Market Analyst)
- [ ] Test agent-to-agent communication
- [ ] Validate decision propagation through the pipeline
- [ ] Test memory persistence between runs
3. **Data Pipeline Validation**
- [ ] Test offline data sources (cached financial data)
- [ ] Validate data format consistency
- [ ] Test error handling for missing data
- [ ] Verify date range handling
4. **Cost Optimization Testing**
- [ ] Measure token usage per agent
- [ ] Optimize prompt efficiency
- [ ] Test reduced debate rounds (current: 1)
- [ ] Implement request batching where possible
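The token-usage measurement above can start as a simple per-call cost estimate. A minimal sketch, assuming illustrative gpt-4o-mini prices; verify against OpenAI's current pricing page before relying on the numbers:
```python
# Rough per-call cost estimator for the testing phase.
# Per-million-token prices are placeholder assumptions, not live rates.
PRICES_PER_M_TOKENS = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of a single LLM call."""
    p = PRICES_PER_M_TOKENS[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000

# e.g. a quick-think call with 2,000 prompt and 500 completion tokens
cost = estimate_cost("gpt-4o-mini", 2000, 500)
```
Summing these estimates per agent across one full analysis run gives the cost-per-cycle figure used in the success metrics below.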
### Testing Scenarios (Week 2-3)
1. **Single Stock Analysis**
- [ ] Test NVDA analysis (current test case)
- [ ] Test AAPL analysis for comparison
- [ ] Test volatile stock (e.g., meme stock)
- [ ] Test stable stock (e.g., utility)
2. **Market Condition Testing**
- [ ] Bull market scenario
- [ ] Bear market scenario
- [ ] Sideways market scenario
- [ ] High volatility events
3. **Decision Quality Assessment**
- [ ] Track decision consistency
- [ ] Measure reasoning quality
- [ ] Test risk assessment accuracy
- [ ] Validate portfolio management logic
---
## 🚀 Major Project Phases
### Phase 2: Local Model Integration (Month 1-2)
**Objective**: Replace OpenAI API with local DeepSeek models
#### 2.1 Local Model Setup
- [ ] **Research DeepSeek Model Variants**
- Evaluate DeepSeek-R1, DeepSeek-V3 for financial tasks
- Compare model sizes vs. performance trade-offs
- Test quantization options for Raspberry Pi deployment
- [ ] **Local Inference Setup**
- Install and configure Ollama or similar framework
- Test model performance on development machine
- Optimize inference parameters for speed/quality balance
- [ ] **API Compatibility Layer**
- Create OpenAI-compatible API wrapper
- Implement model switching configuration
- Test seamless transition between local/remote models
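The compatibility layer could be as small as a function that builds client kwargs per backend. A sketch under the assumption that the local endpoint is Ollama's OpenAI-compatible API; the `llm_backend` and `local_model` config keys are hypothetical names, not part of the current config:
```python
# Sketch of the planned model-switching layer. The Ollama URL and the
# "llm_backend"/"local_model" keys are illustrative assumptions; any
# OpenAI-compatible endpoint would plug in the same way.
def llm_client_kwargs(config: dict) -> dict:
    """Build constructor kwargs for an OpenAI-compatible client."""
    if config.get("llm_backend", "openai") == "local":
        return {
            "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
            "api_key": "ollama",                      # placeholder; Ollama ignores it
            "model": config.get("local_model", "deepseek-r1:7b"),
        }
    return {"model": config.get("deep_think_llm", "gpt-4o-mini")}

kwargs = llm_client_kwargs({"llm_backend": "local"})
```
Because both backends speak the same API, agents switch between local and remote inference by changing one config key rather than any agent code.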
#### 2.2 Raspberry Pi Cluster Architecture
- [ ] **Hardware Planning**
- Calculate compute requirements for multi-agent system
- Plan Raspberry Pi cluster configuration
- Design power, cooling, and networking setup
- [ ] **Distributed Processing**
- Design agent-to-Pi assignment strategy
- Implement load balancing for inference requests
- Create failover mechanisms for hardware failures
### Phase 3: MCP Server Integration (Month 2-3)
**Objective**: Implement Model Context Protocol for enhanced capabilities
#### 3.1 MCP Server Setup
- [ ] **Server Architecture Design**
- Design MCP server for financial data access
- Plan secure API endpoints for agent communication
- Implement authentication and rate limiting
- [ ] **Financial Data MCP Tools**
- Real-time market data feeds
- Economic calendar integration
- News sentiment analysis tools
- Technical indicator calculators
- [ ] **Agent-MCP Integration**
- Modify agents to use MCP tools
- Implement tool discovery and capability negotiation
- Add error handling for MCP communication
#### 3.2 Enhanced Capabilities
- [ ] **Advanced Data Sources**
- SEC filing analysis
- Earnings call transcripts
- Insider trading data
- Options flow data
- [ ] **Real-time Processing**
- Streaming market data integration
- Event-driven analysis triggers
- Real-time risk monitoring
### Phase 4: Agent Specialization & Expansion (Month 3-4)
**Objective**: Create more specialized and sophisticated agents
#### 4.1 Specialized Analyst Agents
- [ ] **Technical Analysis Specialists**
- Chart pattern recognition agent
- Options flow analysis agent
- Momentum/trend analysis agent
- Support/resistance level agent
- [ ] **Fundamental Analysis Specialists**
- Earnings analysis agent
- Sector rotation agent
- Economic indicator agent
- Company comparison agent
- [ ] **Alternative Data Agents**
- Satellite imagery analysis (retail foot traffic)
- Social media sentiment (beyond Reddit)
- Patent filing analysis
- Supply chain analysis
#### 4.2 Advanced Trading Agents
- [ ] **Strategy Specialists**
- Pairs trading agent
- Arbitrage opportunity agent
- Options strategy agent
- Swing trading agent
- [ ] **Risk Management Specialists**
- VaR calculation agent
- Correlation analysis agent
- Portfolio optimization agent
- Black swan detection agent
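As a starting point for the VaR calculation agent, a minimal historical-simulation sketch; pure Python, and a real agent would of course work on a much longer returns series:
```python
# Minimal historical Value-at-Risk: sort observed returns and read off
# the loss at the (1 - confidence) quantile of the left tail.
def historical_var(returns: list[float], confidence: float = 0.95) -> float:
    """One-period VaR: the loss not exceeded with probability `confidence`."""
    ordered = sorted(returns)                   # worst returns first
    idx = int((1 - confidence) * len(ordered))  # cut-off index in the left tail
    return -ordered[idx]                        # report the loss as a positive number

daily = [0.01, -0.02, 0.005, -0.015, 0.012, -0.03, 0.002, -0.001, 0.02, -0.01]
var_95 = historical_var(daily, 0.95)
```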
### Phase 5: Historical Backtesting & Validation (Month 4-5)
**Objective**: Comprehensive testing against historical market data
#### 5.1 Backtesting Infrastructure
- [ ] **Data Pipeline**
- Historical data ingestion (10+ years)
- Data quality validation and cleaning
- Event timeline reconstruction
- News/sentiment historical matching
- [ ] **Simulation Engine**
- Multi-timeframe simulation capability
- Transaction cost modeling
- Slippage and market impact simulation
- Portfolio rebalancing logic
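Transaction cost and slippage modeling can begin with a half-spread plus a linear market-impact adjustment. The basis-point parameters below are placeholder assumptions for the simulation engine, not calibrated values:
```python
# Simulated fill price for the backtesting engine: pay half the quoted
# spread, plus impact that grows linearly with order size.
def fill_price(mid: float, qty: int, side: str,
               spread_bps: float = 2.0, impact_bps_per_1k: float = 0.5) -> float:
    """Return the simulated execution price for a market order."""
    bps = spread_bps / 2 + impact_bps_per_1k * abs(qty) / 1_000
    adj = mid * bps / 10_000
    return mid + adj if side == "buy" else mid - adj

px = fill_price(100.0, 2_000, "buy")  # buys fill above mid, sells below
```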
#### 5.2 Performance Analysis
- [ ] **Quantitative Metrics**
- Sharpe ratio calculation
- Maximum drawdown analysis
- Win/loss ratio tracking
- Risk-adjusted returns
- [ ] **Comparative Analysis**
- Benchmark comparison (S&P 500, etc.)
- Strategy performance across market regimes
- Agent contribution analysis
- Decision quality metrics
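The Sharpe ratio and maximum drawdown metrics above need only a few lines of stdlib Python. A sketch, assuming daily returns and 252 trading periods per year:
```python
import math

def sharpe_ratio(daily_returns: list[float], risk_free_daily: float = 0.0,
                 periods: int = 252) -> float:
    """Annualized Sharpe ratio from a series of daily returns."""
    excess = [r - risk_free_daily for r in daily_returns]
    mean = sum(excess) / len(excess)
    var = sum((r - mean) ** 2 for r in excess) / (len(excess) - 1)
    return mean / math.sqrt(var) * math.sqrt(periods)

def max_drawdown(equity_curve: list[float]) -> float:
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak, worst = equity_curve[0], 0.0
    for v in equity_curve:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

mdd = max_drawdown([100, 110, 105, 120, 90, 95])
```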
### Phase 6: Production Deployment (Month 5-6)
**Objective**: Deploy robust, scalable production system
#### 6.1 Infrastructure Scaling
- [ ] **Multi-Asset Support**
- Stocks, ETFs, options, futures
- Multiple market coverage (US, EU, Asia)
- Currency and commodity analysis
- Crypto market integration
- [ ] **High Availability Setup**
- Redundant system architecture
- Automatic failover mechanisms
- Data backup and recovery
- Monitoring and alerting systems
#### 6.2 Real-World Integration
- [ ] **Broker Integration**
- Paper trading implementation
- Real money trading (small scale)
- Order management system
- Trade execution optimization
- [ ] **Regulatory Compliance**
- Trading regulations adherence
- Risk management requirements
- Audit trail implementation
- Compliance monitoring
---
## 🎯 Success Metrics
### Testing Phase Metrics
- **System Stability**: >95% uptime during testing
- **Decision Consistency**: <10% variance in similar scenarios
- **Cost Efficiency**: <$0.10 per analysis cycle
- **Response Time**: <2 minutes for complete analysis
### Production Phase Metrics
- **Return Performance**: Target 15%+ annual return
- **Risk Management**: Maximum 10% drawdown
- **Sharpe Ratio**: Target >1.5
- **Win Rate**: Target >55% profitable trades
---
## 🔧 Development Workflow
### Testing Protocol
1. **Feature Branch Development**: All new features in separate branches
2. **Unit Testing**: Each agent component tested individually
3. **Integration Testing**: Full pipeline testing before merge
4. **Performance Testing**: Token usage and response time monitoring
### Documentation Standards
- **Code Documentation**: Inline comments for all agent logic
- **API Documentation**: Complete endpoint documentation
- **User Guides**: Setup and usage instructions
- **Architecture Updates**: This document updated with each major change
---
## 🚨 Risk Management
### Technical Risks
- **Model Reliability**: Implement multiple model fallbacks
- **Data Quality**: Comprehensive data validation
- **System Failures**: Redundant infrastructure planning
- **Security**: Secure API key and data handling
### Financial Risks
- **Backtesting Limitations**: Over-optimization awareness
- **Market Regime Changes**: Adaptive strategy implementation
- **Regulatory Changes**: Compliance monitoring
- **Capital Protection**: Strict risk limits and stop-losses
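A hard stop-loss check for capital protection can start this small; the 5% default below is an illustrative assumption, not a recommended limit:
```python
# Hard stop-loss sketch for an open long position. The risk agents
# would call a check like this before holding through the next cycle.
def breaches_risk_limits(entry_price: float, last_price: float,
                         stop_loss_pct: float = 0.05) -> bool:
    """True when the position's loss meets or exceeds the stop-loss limit."""
    drawdown = (entry_price - last_price) / entry_price
    return drawdown >= stop_loss_pct

hit = breaches_risk_limits(100.0, 94.0)  # a 6% loss against a 5% stop
```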
---
## 📞 Next Actions Summary
### This Week (High Priority)
1. 🔴 **Resolve OpenAI API access** - Add payment method
2. 🟡 **Complete agent flow testing** - Validate end-to-end pipeline
3. 🟡 **Implement cost monitoring** - Track and optimize token usage
4. 🟢 **Document test results** - Update this plan with findings
### Next Week (Medium Priority)
1. **Expand testing scenarios** - Multiple stocks and market conditions
2. **Performance optimization** - Reduce latency and costs
3. **Error handling** - Robust failure recovery
4. **Local model research** - DeepSeek evaluation and setup planning
---
*This document serves as the living roadmap for TradingAgents development. Update regularly as progress is made and new insights are gained.*


@@ -1,3 +1,6 @@
# Load environment variables from .env file
import tradingagents.env_loader
from typing import Optional
import datetime
import typer

main.py

@@ -1,19 +1,76 @@
#!/usr/bin/env python3
"""
TradingAgents - Multi-Agent Financial Trading System
Main entry point for running standard trading analysis.
Usage:
python main.py [TICKER] [DATE]
Examples:
python main.py # Analyze NVDA on default date
python main.py AAPL # Analyze AAPL on default date
python main.py TSLA 2024-01-15 # Analyze TSLA on specific date
"""
import sys
from datetime import datetime, timedelta
# Load environment variables from .env file
import tradingagents.env_loader
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
# Create a custom config
config = DEFAULT_CONFIG.copy()
config["deep_think_llm"] = "gpt-4.1-nano" # Use a different model
config["quick_think_llm"] = "gpt-4.1-nano" # Use a different model
config["max_debate_rounds"] = 1 # Keep debate rounds minimal for cost efficiency
config["online_tools"] = True # Use online data tools
# Initialize with custom config
ta = TradingAgentsGraph(debug=True, config=config)
def main():
"""Run TradingAgents analysis with command line arguments."""
# Parse command line arguments
ticker = sys.argv[1] if len(sys.argv) > 1 else "NVDA"
# Default to a recent trading date if none provided
if len(sys.argv) > 2:
trade_date = sys.argv[2]
else:
# Use a date from a few days ago (to ensure market data is available)
default_date = datetime.now() - timedelta(days=3)
trade_date = default_date.strftime("%Y-%m-%d")
print(f"🚀 TradingAgents Analysis")
print(f"📊 Ticker: {ticker}")
print(f"📅 Date: {trade_date}")
print("=" * 50)
try:
# Initialize TradingAgents system
ta = TradingAgentsGraph(debug=False, config=DEFAULT_CONFIG)
print("🤖 Running multi-agent analysis...")
# Run the complete analysis pipeline
final_state, decision = ta.propagate(ticker, trade_date)
# Display results
print("\n" + "=" * 50)
print("📋 FINAL TRADING DECISION")
print("=" * 50)
print(decision)
# Optional: Save results for later review
# ta.reflect_and_remember(1000) # Implement after backtesting
print(f"\n✅ Analysis complete for {ticker} on {trade_date}")
except Exception as e:
print(f"❌ Error during analysis: {str(e)}")
print("\n💡 Troubleshooting:")
print(" - Check your API keys in .env file")
print(" - Verify internet connection for data sources")
print(" - Try: python -m cli.main for interactive mode")
return 1
return 0
# forward propagate
_, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)
# Memorize mistakes and reflect
# ta.reflect_and_remember(1000) # parameter is the position returns
if __name__ == "__main__":
exit_code = main()
sys.exit(exit_code)


@@ -22,3 +22,4 @@ redis
chainlit
rich
questionary
python-dotenv


@@ -1,19 +1,19 @@
import os
# Get the project root directory (TradingAgents folder)
PROJECT_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
DEFAULT_CONFIG = {
"project_dir": os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
"data_dir": "/Users/yluo/Documents/Code/ScAI/FR1-data",
"data_cache_dir": os.path.join(
os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
"dataflows/data_cache",
),
"project_dir": PROJECT_ROOT,
"data_dir": os.path.join(PROJECT_ROOT, "data"), # Use local data directory
"data_cache_dir": os.path.join(PROJECT_ROOT, "tradingagents", "dataflows", "data_cache"),
# LLM settings
"deep_think_llm": "o4-mini",
"deep_think_llm": "gpt-4o-mini", # Fixed model name
"quick_think_llm": "gpt-4o-mini",
# Debate and discussion settings
"max_debate_rounds": 1,
"max_risk_discuss_rounds": 1,
"max_recur_limit": 100,
# Tool settings
"online_tools": True,
"online_tools": False, # Start with offline for testing
}


@@ -0,0 +1,49 @@
"""
Environment variable loader for TradingAgents.
Automatically loads environment variables from .env file if it exists.
"""
import os
from pathlib import Path
def load_environment():
"""Load environment variables from .env file if it exists."""
try:
from dotenv import load_dotenv
# Find the project root (where .env should be located)
current_dir = Path(__file__).parent
project_root = current_dir.parent # Go up one level from tradingagents/
env_file = project_root / ".env"
if env_file.exists():
load_dotenv(env_file)
print(f"✅ Environment variables loaded from {env_file}")
# Verify required API keys are loaded
required_keys = ["FINNHUB_API_KEY", "OPENAI_API_KEY"]
missing_keys = []
for key in required_keys:
if not os.getenv(key) or os.getenv(key) == f"your_{key.lower()}_here":
missing_keys.append(key)
if missing_keys:
print("⚠️ Warning: the following API keys still need to be set in your .env file:")
for key in missing_keys:
print(f" - {key}")
print(f" Please edit {env_file} and replace the placeholder values with your actual API keys.")
else:
print("✅ All required API keys are configured!")
else:
print(f" No .env file found at {env_file}")
print(" You can create one with your API keys or use environment variables directly.")
except ImportError:
print("⚠️ python-dotenv not installed. Install it with: pip install python-dotenv")
except Exception as e:
print(f"❌ Error loading .env file: {e}")
# Call load_environment when this module is imported
load_environment()