Deploy_prepare

This commit is contained in:
Jiahao Zhang 2025-07-10 22:42:25 -07:00
parent b34113f420
commit 68a58c3a59
15 changed files with 1258 additions and 19 deletions

183
backend/DEPLOYMENT_GUIDE.md Normal file
View File

@ -0,0 +1,183 @@
# TradingAgents API Deployment Guide
## 🍎 App Store Deployment Strategy
This guide provides deployment instructions for publishing the TradingAgents iOS app to the Apple App Store.
### **Phase 1: Railway Deployment (App Store Submission)**
Railway provides the quickest path to production with automatic HTTPS - perfect for App Store submission.
## 🚀 Quick Start (5 minutes)
### **Step 1: Prepare Repository**
1. **Ensure all files are committed to GitHub:**
```bash
git add .
git commit -m "Prepare for Railway deployment"
git push origin main
```
### **Step 2: Deploy to Railway**
1. **Go to [Railway.app](https://railway.app)**
2. **Sign up/Login** with your GitHub account
3. **Click "New Project"****"Deploy from GitHub repo"**
4. **Select your TradingAgents repository**
5. **Railway will auto-detect** the Python app and start building
### **Step 3: Configure Environment Variables**
In the Railway dashboard:
1. **Go to your project** → **Variables tab**
2. **Add these required variables:**
```
OPENAI_API_KEY=your_actual_openai_key
FINNHUB_API_KEY=your_actual_finnhub_key
SERPAPI_API_KEY=your_actual_serpapi_key (optional but recommended)
```
3. **Optional variables for better performance:**
```
DEEP_THINK_MODEL=gpt-4o
QUICK_THINK_MODEL=gpt-4o-mini
MAX_DEBATE_ROUNDS=3
MAX_RISK_DISCUSS_ROUNDS=2
```
### **Step 4: Get Your Production URL**
1. **In Railway dashboard**, go to **Settings** → **Domains**
2. **Copy the railway.app URL** (e.g., `https://tradingagents-production.up.railway.app`)
3. **Optional:** Add a custom domain later
### **Step 5: Test Your Deployed API**
```bash
# Test the health endpoint
curl https://your-app.railway.app/health
# Test analysis endpoint
curl -X POST https://your-app.railway.app/analyze \
-H "Content-Type: application/json" \
-d '{"ticker": "AAPL"}'
```
## 📱 Update iOS App for Production
### **Step 1: Update API Configuration**
In your iOS project, update the `TradingAgentsService.swift`:
```swift
// Replace localhost URL with your Railway URL
private let baseURL = "https://your-app.railway.app"
```
### **Step 2: Configure for App Store**
1. **Update app version** in Xcode
2. **Test with production API** thoroughly
3. **Ensure all network calls use HTTPS**
4. **Update app privacy settings** if needed
## 🔒 Production Security Setup
### **Step 1: Configure CORS (Optional)**
To restrict API access to your app only:
1. **In Railway dashboard**, add variable:
```
CORS_ORIGINS=https://your-custom-domain.com
```
### **Step 2: Monitor Usage**
1. **Check Railway dashboard** for usage metrics
2. **Monitor API response times**
3. **Set up alerts** for downtime
## 📊 Performance Optimization
### **Railway Configuration**
Railway automatically handles:
- ✅ **HTTPS/SSL certificates**
- ✅ **Auto-scaling based on usage**
- ✅ **Health checks and restarts**
- ✅ **CDN for faster global access**
### **Expected Performance**
- **Startup time:** ~30-60 seconds (cold start)
- **Analysis time:** 2-8 minutes per request
- **Concurrent users:** Railway free tier handles 10-20 concurrent users
## 🔧 Troubleshooting
### **Common Issues**
1. **Build failures:**
- Check requirements.txt is valid
- Ensure Python version compatibility
2. **Runtime errors:**
- Verify all environment variables are set
- Check Railway logs for detailed errors
3. **API timeouts:**
- Railway has 100-second request timeout
- Consider implementing streaming for long analyses
### **Monitoring Commands**
```bash
# Check Railway logs
railway logs
# Check service status
railway status
# Restart service
railway redeploy
```
## 📈 Scaling for App Store Success
### **Free Tier Limits**
- **Execution time:** 500 hours/month
- **Memory:** 512MB
- **Storage:** 1GB
### **When to Upgrade**
- **> 100 daily users:** Consider Pro plan ($5/month)
- **> 1000 daily users:** Plan migration to VPS
### **Migration Path**
When your app grows, migrate to:
1. **Railway Pro** (simple upgrade)
2. **Docker + VPS** (full control)
3. **AWS/GCP** (enterprise scale)
## ✅ Pre-App Store Checklist
- [ ] API deployed and accessible via HTTPS
- [ ] All endpoints return expected responses
- [ ] iOS app updated with production URL
- [ ] App tested with production API
- [ ] Error handling tested (network issues, API timeouts)
- [ ] App Store privacy policy updated
- [ ] Terms of service mention third-party APIs
## 🆘 Support
- **Railway Documentation:** [docs.railway.app](https://docs.railway.app)
- **Railway Discord:** Join for community support
- **GitHub Issues:** Report bugs in your repository
---
**🎉 Congratulations!** Your TradingAgents API is now ready for App Store submission with a production-grade backend hosted on Railway.

1
backend/Procfile Normal file
View File

@ -0,0 +1 @@
web: uvicorn api:app --host 0.0.0.0 --port $PORT --workers 1

View File

@ -0,0 +1,155 @@
# Tool Call Fix Summary
## Problem Description
The TradingAgents system was experiencing a critical error during execution:
```
Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_Ai2XeBuqwYn44GC5ogLZxzsG", 'type': 'invalid_request_error', 'param': 'messages', 'code': None}}
```
## Root Cause Analysis
The error occurred in the Market Analyst when it made multiple tool calls, but one of them failed:
1. **Market Analyst** made 8 tool calls including `get_stockstats_indicators_report_online` with `macd_signal` indicator
2. **Tool execution failed** because `macd_signal` is not supported (only `macds` is available)
3. **No ToolMessage created** for the failed tool call
4. **OpenAI API rejected** the conversation because tool call `call_Ai2XeBuqwYn44GC5ogLZxzsG` had no response
From the logs:
```
ERROR:tradingagents.graph.setup:❌ market tools: Error executing get_stockstats_indicators_report_online: Indicator macd_signal is not supported. Please choose from: ['close_50_sma', 'close_200_sma', 'close_10_ema', 'macd', 'macds', 'macdh', 'rsi', 'boll', 'boll_ub', 'boll_lb', 'atr', 'vwma', 'mfi']
```
## Solution Implemented
Modified `backend/tradingagents/graph/setup.py` in the `_wrap_tool_node_for_channel` method to ensure **every tool call gets a ToolMessage response**, even when errors occur.
### Changes Made
1. **Failed Tool Execution**: Create error ToolMessage
```python
except Exception as e:
logger.error(f"❌ {analyst_type} tools: Error executing {tool_name}: {str(e)}")
# Create an error ToolMessage to maintain conversation flow
error_message = ToolMessage(
content=f"Error executing {tool_name}: {str(e)}",
tool_call_id=tool_call_id
)
updated_messages.append(error_message)
logger.info(f"🔧 {analyst_type} tools: ✅ Added error ToolMessage for {tool_name}")
tools_executed += 1
```
2. **Skipped Tool Calls**: Create skip ToolMessage
```python
if not can_call:
logger.warning(f"🔧 {analyst_type} tools: SKIPPING - {reason}")
# Create a skip ToolMessage to maintain conversation flow
skip_message = ToolMessage(
content=f"Tool call skipped: {reason}",
tool_call_id=tool_call_id
)
updated_messages.append(skip_message)
logger.info(f"🔧 {analyst_type} tools: ✅ Added skip ToolMessage for {tool_name}")
tools_executed += 1
continue
```
3. **Unknown Tool Call Format**: Create error ToolMessage
```python
else:
logger.error(f"❌ {analyst_type} tools: Unknown tool call format")
# Create an error ToolMessage even for unknown format
unknown_tool_call_id = f'unknown_format_{i}'
error_message = ToolMessage(
content=f"Error: Unknown tool call format at index {i}",
tool_call_id=unknown_tool_call_id
)
updated_messages.append(error_message)
logger.info(f"🔧 {analyst_type} tools: ✅ Added error ToolMessage for unknown format")
tools_executed += 1
continue
```
4. **Empty Tool Name**: Create error ToolMessage
```python
if not tool_name:
logger.error(f"❌ {analyst_type} tools: Empty tool name")
# Create an error ToolMessage for empty tool name
error_message = ToolMessage(
content=f"Error: Empty tool name at index {i}",
tool_call_id=tool_call_id
)
updated_messages.append(error_message)
logger.info(f"🔧 {analyst_type} tools: ✅ Added error ToolMessage for empty tool name")
tools_executed += 1
continue
```
5. **Improved Tool Call ID Handling**: Ensure unique IDs
```python
tool_call_id = tool_call.get('id', f'unknown_{i}') # For dict format
tool_call_id = tool_call.id if hasattr(tool_call, 'id') else f'unknown_{i}' # For object format
```
## Testing Results
### Test 1: Simple Fix Verification
```bash
python test_simple_fix.py
```
**Result**: ✅ PASS - No tool call errors detected in first 15 chunks
### Test 2: Extended Analysis Test
```bash
python test_extended_fix.py
```
**Result**: ✅ PASS - No tool call errors detected during 90-second execution
### Test 3: API Health Check
```bash
curl http://localhost:8000/health
```
**Result**: ✅ {"status":"healthy"}
## Impact
- **✅ Fixed**: Critical OpenAI API error that was breaking the analysis flow
- **✅ Improved**: Error handling and logging for tool execution
- **✅ Enhanced**: Robustness of the conversation flow with OpenAI API
- **✅ Maintained**: All existing functionality while adding error resilience
## Verification Commands
To verify the fix is working:
1. Start the API server:
```bash
cd backend
uvicorn api:app --host 0.0.0.0 --port 8000
```
2. Run the verification test:
```bash
python test_simple_fix.py
```
3. Test with real endpoint:
```bash
curl -X GET "http://localhost:8000/analyze/stream?ticker=AAPL"
```
The system should now run without the tool call error and handle failed tool executions gracefully.
## Files Modified
- `backend/tradingagents/graph/setup.py` - Main fix implementation
- `backend/test_simple_fix.py` - Simple verification test (created)
- `backend/test_extended_fix.py` - Extended verification test (created)
- `backend/test_tool_call_fix.py` - Comprehensive test script (created)
## Status
🎉 **RESOLVED** - Tool call error fix successfully implemented and tested.

View File

@ -26,11 +26,14 @@ app = FastAPI(
)
# Add CORS middleware for Swift app
# Get allowed origins from environment variable or use defaults
cors_origins = os.getenv("CORS_ORIGINS", "*").split(",") if os.getenv("CORS_ORIGINS") else ["*"]
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # In production, replace with your Swift app's URL
allow_origins=cors_origins, # Production: Use specific domains
allow_credentials=True,
allow_methods=["*"],
allow_methods=["GET", "POST", "OPTIONS"],
allow_headers=["*"],
)

View File

@ -923,10 +923,14 @@ def run_analysis(advanced_mode=False):
# Stream the analysis
trace = []
for chunk in graph.graph.stream(init_agent_state, **args):
if len(chunk["messages"]) > 0:
# Get the last message from the chunk
# Handle the new parallel execution structure
messages_found = False
# Check for messages in the old format (for backward compatibility)
if "messages" in chunk and len(chunk["messages"]) > 0:
messages_found = True
last_message = chunk["messages"][-1]
# Extract message content and type
if hasattr(last_message, "content"):
content = extract_content_string(last_message.content) # Use the helper function
@ -948,6 +952,49 @@ def run_analysis(advanced_mode=False):
)
else:
message_buffer.add_tool_call(tool_call.name, tool_call.args)
# Check for messages in the new parallel execution format
else:
# Look for messages in analyst channels
message_channels = ["market_messages", "social_messages", "news_messages", "fundamentals_messages"]
for node_name, node_data in chunk.items():
if isinstance(node_data, dict):
for channel in message_channels:
if channel in node_data and node_data[channel]:
messages_found = True
# Get the last message from this channel
last_message = node_data[channel][-1]
# Extract message content and type
if hasattr(last_message, "content"):
content = extract_content_string(last_message.content)
msg_type = f"{channel.replace('_messages', '').title()} Analyst"
else:
content = str(last_message)
msg_type = f"{channel.replace('_messages', '').title()} System"
# Add message to buffer
message_buffer.add_message(msg_type, content)
# If it's a tool call, add it to tool calls
if hasattr(last_message, "tool_calls"):
for tool_call in last_message.tool_calls:
# Handle both dictionary and object tool calls
if isinstance(tool_call, dict):
message_buffer.add_tool_call(
tool_call["name"], tool_call["args"]
)
else:
message_buffer.add_tool_call(tool_call.name, tool_call.args)
# Only process the first message channel found to avoid duplicates
break
if messages_found:
break
# Continue with the rest of the processing (reports, etc.)
if True: # Always process chunk for reports regardless of messages
# Update reports and agent status based on chunk content
# Analyst Team Reports
@ -1154,7 +1201,18 @@ def run_analysis(advanced_mode=False):
# Get final state and decision
final_state = trace[-1]
decision = graph.process_signal(final_state["final_trade_decision"])
# Extract the final trade decision from the correct location
final_trade_decision = None
if "Risk Judge" in final_state and "final_trade_decision" in final_state["Risk Judge"]:
final_trade_decision = final_state["Risk Judge"]["final_trade_decision"]
elif "final_trade_decision" in final_state:
final_trade_decision = final_state["final_trade_decision"]
if final_trade_decision:
decision = graph.process_signal(final_trade_decision)
else:
decision = "No trade decision available"
# Update all agent statuses to completed
for agent in message_buffer.agent_status:
@ -1172,6 +1230,124 @@ def run_analysis(advanced_mode=False):
# Display the complete final report
display_complete_report(final_state)
# Save the final complete report and decision
# Save the final trade decision
if final_trade_decision:
decision_file = results_dir / "final_trade_decision.md"
with open(decision_file, "w") as f:
f.write(f"# Final Trading Decision\n\n")
f.write(f"**Ticker:** {selections['ticker']}\n")
f.write(f"**Analysis Date:** {selections['analysis_date']}\n")
f.write(f"**Decision:** {decision}\n\n")
f.write("## Raw Decision Text\n\n")
f.write(final_trade_decision)
# Save the complete final report
complete_report_file = results_dir / "complete_analysis_report.md"
with open(complete_report_file, "w") as f:
f.write(f"# Complete Analysis Report\n\n")
f.write(f"**Ticker:** {selections['ticker']}\n")
f.write(f"**Analysis Date:** {selections['analysis_date']}\n")
f.write(f"**Analysis Time:** {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
# Add analyst reports
if final_state.get("market_report"):
f.write("## Market Analysis\n\n")
f.write(final_state["market_report"])
f.write("\n\n")
if final_state.get("sentiment_report"):
f.write("## Social Media Sentiment Analysis\n\n")
f.write(final_state["sentiment_report"])
f.write("\n\n")
if final_state.get("news_report"):
f.write("## News Analysis\n\n")
f.write(final_state["news_report"])
f.write("\n\n")
if final_state.get("fundamentals_report"):
f.write("## Fundamentals Analysis\n\n")
f.write(final_state["fundamentals_report"])
f.write("\n\n")
# Add research team analysis
if final_state.get("investment_debate_state"):
debate_state = final_state["investment_debate_state"]
f.write("## Investment Research Analysis\n\n")
if debate_state.get("bull_history"):
f.write("### Bull Researcher Analysis\n\n")
f.write(debate_state["bull_history"])
f.write("\n\n")
if debate_state.get("bear_history"):
f.write("### Bear Researcher Analysis\n\n")
f.write(debate_state["bear_history"])
f.write("\n\n")
if debate_state.get("judge_decision"):
f.write("### Research Manager Decision\n\n")
f.write(debate_state["judge_decision"])
f.write("\n\n")
# Add trading analysis
if final_state.get("trader_investment_plan"):
f.write("## Trading Plan\n\n")
f.write(final_state["trader_investment_plan"])
f.write("\n\n")
# Add risk analysis
if final_state.get("risk_debate_state"):
risk_state = final_state["risk_debate_state"]
f.write("## Risk Management Analysis\n\n")
if risk_state.get("risky_history"):
f.write("### Aggressive Risk Analysis\n\n")
f.write(risk_state["risky_history"])
f.write("\n\n")
if risk_state.get("safe_history"):
f.write("### Conservative Risk Analysis\n\n")
f.write(risk_state["safe_history"])
f.write("\n\n")
if risk_state.get("neutral_history"):
f.write("### Neutral Risk Analysis\n\n")
f.write(risk_state["neutral_history"])
f.write("\n\n")
if risk_state.get("judge_decision"):
f.write("### Risk Manager Final Decision\n\n")
f.write(risk_state["judge_decision"])
f.write("\n\n")
# Add final decision
if final_trade_decision:
f.write("## Final Trading Decision\n\n")
f.write(f"**Decision:** {decision}\n\n")
f.write("### Detailed Decision\n\n")
f.write(final_trade_decision)
# Save final state as JSON for programmatic access
final_state_file = results_dir / "final_state.json"
with open(final_state_file, "w") as f:
import json
# Convert final_state to JSON-serializable format
json_state = {}
for key, value in final_state.items():
try:
json.dumps(value) # Test if it's JSON serializable
json_state[key] = value
except:
json_state[key] = str(value) # Convert to string if not serializable
json.dump(json_state, f, indent=2)
print(f"\n✅ Analysis complete! Results saved to: {results_dir}")
print(f"📄 Complete report: {complete_report_file}")
print(f"🎯 Final decision: {decision_file}")
print(f"📊 Final state: {final_state_file}")
update_display(layout)
@ -1703,7 +1879,17 @@ def run_analysis_streaming(advanced_mode=False):
# Get final state and decision
final_state = trace[-1]
decision = graph.process_signal(final_state["final_trade_decision"])
# Extract the final trade decision from the correct location
final_trade_decision = None
if "Risk Judge" in final_state and "final_trade_decision" in final_state["Risk Judge"]:
final_trade_decision = final_state["Risk Judge"]["final_trade_decision"]
elif "final_trade_decision" in final_state:
final_trade_decision = final_state["final_trade_decision"]
if final_trade_decision:
decision = graph.process_signal(final_trade_decision)
else:
decision = "No trade decision available"
# Update all agent statuses to completed
for agent in streaming_buffer.agent_status:
@ -1721,6 +1907,124 @@ def run_analysis_streaming(advanced_mode=False):
# Display the complete final report
display_complete_report(final_state)
# Save the final complete report and decision
# Save the final trade decision
if final_trade_decision:
decision_file = results_dir / "final_trade_decision.md"
with open(decision_file, "w") as f:
f.write(f"# Final Trading Decision\n\n")
f.write(f"**Ticker:** {selections['ticker']}\n")
f.write(f"**Analysis Date:** {selections['analysis_date']}\n")
f.write(f"**Decision:** {decision}\n\n")
f.write("## Raw Decision Text\n\n")
f.write(final_trade_decision)
# Save the complete final report
complete_report_file = results_dir / "complete_analysis_report.md"
with open(complete_report_file, "w") as f:
f.write(f"# Complete Analysis Report\n\n")
f.write(f"**Ticker:** {selections['ticker']}\n")
f.write(f"**Analysis Date:** {selections['analysis_date']}\n")
f.write(f"**Analysis Time:** {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
# Add analyst reports
if final_state.get("market_report"):
f.write("## Market Analysis\n\n")
f.write(final_state["market_report"])
f.write("\n\n")
if final_state.get("sentiment_report"):
f.write("## Social Media Sentiment Analysis\n\n")
f.write(final_state["sentiment_report"])
f.write("\n\n")
if final_state.get("news_report"):
f.write("## News Analysis\n\n")
f.write(final_state["news_report"])
f.write("\n\n")
if final_state.get("fundamentals_report"):
f.write("## Fundamentals Analysis\n\n")
f.write(final_state["fundamentals_report"])
f.write("\n\n")
# Add research team analysis
if final_state.get("investment_debate_state"):
debate_state = final_state["investment_debate_state"]
f.write("## Investment Research Analysis\n\n")
if debate_state.get("bull_history"):
f.write("### Bull Researcher Analysis\n\n")
f.write(debate_state["bull_history"])
f.write("\n\n")
if debate_state.get("bear_history"):
f.write("### Bear Researcher Analysis\n\n")
f.write(debate_state["bear_history"])
f.write("\n\n")
if debate_state.get("judge_decision"):
f.write("### Research Manager Decision\n\n")
f.write(debate_state["judge_decision"])
f.write("\n\n")
# Add trading analysis
if final_state.get("trader_investment_plan"):
f.write("## Trading Plan\n\n")
f.write(final_state["trader_investment_plan"])
f.write("\n\n")
# Add risk analysis
if final_state.get("risk_debate_state"):
risk_state = final_state["risk_debate_state"]
f.write("## Risk Management Analysis\n\n")
if risk_state.get("risky_history"):
f.write("### Aggressive Risk Analysis\n\n")
f.write(risk_state["risky_history"])
f.write("\n\n")
if risk_state.get("safe_history"):
f.write("### Conservative Risk Analysis\n\n")
f.write(risk_state["safe_history"])
f.write("\n\n")
if risk_state.get("neutral_history"):
f.write("### Neutral Risk Analysis\n\n")
f.write(risk_state["neutral_history"])
f.write("\n\n")
if risk_state.get("judge_decision"):
f.write("### Risk Manager Final Decision\n\n")
f.write(risk_state["judge_decision"])
f.write("\n\n")
# Add final decision
if final_trade_decision:
f.write("## Final Trading Decision\n\n")
f.write(f"**Decision:** {decision}\n\n")
f.write("### Detailed Decision\n\n")
f.write(final_trade_decision)
# Save final state as JSON for programmatic access
final_state_file = results_dir / "final_state.json"
with open(final_state_file, "w") as f:
import json
# Convert final_state to JSON-serializable format
json_state = {}
for key, value in final_state.items():
try:
json.dumps(value) # Test if it's JSON serializable
json_state[key] = value
except:
json_state[key] = str(value) # Convert to string if not serializable
json.dump(json_state, f, indent=2)
print(f"\n✅ Analysis complete! Results saved to: {results_dir}")
print(f"📄 Complete report: {complete_report_file}")
print(f"🎯 Final decision: {decision_file}")
print(f"📊 Final state: {final_state_file}")
update_streaming_display(layout, streaming_buffer)

170
backend/deploy-to-railway.sh Executable file
View File

@ -0,0 +1,170 @@
#!/bin/bash
# TradingAgents Railway Deployment Script
# This script helps deploy the TradingAgents API to Railway for App Store submission
set -e # Exit on any error
echo "🚀 TradingAgents Railway Deployment Script"
echo "========================================="
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
echo -e "${BLUE} $1${NC}"
}
print_success() {
echo -e "${GREEN}$1${NC}"
}
print_warning() {
echo -e "${YELLOW}⚠️ $1${NC}"
}
print_error() {
echo -e "${RED}$1${NC}"
}
# Check if we're in the right directory
if [ ! -f "api.py" ]; then
print_error "This script must be run from the backend directory"
exit 1
fi
print_status "Step 1: Pre-deployment Checklist"
echo
# Check for required files
print_status "Checking required deployment files..."
if [ -f "railway.json" ]; then
print_success "railway.json found"
else
print_error "railway.json not found"
exit 1
fi
if [ -f "Procfile" ]; then
print_success "Procfile found"
else
print_error "Procfile not found"
exit 1
fi
if [ -f "requirements.txt" ]; then
print_success "requirements.txt found"
else
print_error "requirements.txt not found"
exit 1
fi
if [ -f "production.env.example" ]; then
print_success "production.env.example found"
else
print_warning "production.env.example not found (optional)"
fi
echo
print_status "Step 2: Environment Variables Check"
echo
print_warning "Make sure you have these API keys ready for Railway:"
echo " 📋 OPENAI_API_KEY=your_openai_key"
echo " 📋 FINNHUB_API_KEY=your_finnhub_key"
echo " 📋 SERPAPI_API_KEY=your_serpapi_key (optional but recommended)"
echo
print_status "Step 3: Git Repository Status"
echo
# Check git status
if git status --porcelain | grep -q .; then
print_warning "You have uncommitted changes:"
git status --short
echo
read -p "Do you want to commit these changes? (y/n): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
echo
read -p "Enter commit message: " commit_message
git add .
git commit -m "$commit_message"
print_success "Changes committed"
else
print_warning "Proceeding with uncommitted changes..."
fi
else
print_success "Working directory is clean"
fi
# Check if we're on a branch
current_branch=$(git branch --show-current)
print_status "Current branch: $current_branch"
# Push to remote
print_status "Pushing to remote repository..."
git push origin $current_branch
print_success "Code pushed to remote"
echo
print_status "Step 4: Railway Deployment Instructions"
echo
print_success "🎯 Ready for Railway deployment!"
echo
echo "Next steps:"
echo "1. 🌐 Go to https://railway.app"
echo "2. 🔑 Sign up/Login with your GitHub account"
echo "3. Click 'New Project' → 'Deploy from GitHub repo'"
echo "4. 📁 Select your TradingAgents repository"
echo "5. ⚙️ Railway will auto-detect Python and start building"
echo
echo "After deployment starts:"
echo "6. 🔧 Go to project → Variables tab"
echo "7. Add your environment variables:"
echo " OPENAI_API_KEY=your_actual_key"
echo " FINNHUB_API_KEY=your_actual_key"
echo " SERPAPI_API_KEY=your_actual_key"
echo "8. 🌍 Go to Settings → Domains to get your public URL"
echo "9. 🧪 Test your API at https://your-app.railway.app/health"
echo
print_status "Step 5: Post-Deployment Tasks"
echo
echo "After successful Railway deployment:"
echo "1. 📝 Copy your Railway URL (e.g., https://tradingagents-prod.up.railway.app)"
echo "2. 📱 Update iOS app AppConfig.swift with the new URL"
echo "3. 🧪 Test iOS app with production API"
echo "4. 🍎 Submit to App Store"
echo
print_status "Helpful Railway Commands (install CLI first)"
echo
echo "Install Railway CLI:"
echo " npm install -g @railway/cli"
echo
echo "Useful commands:"
echo " railway login # Login to Railway"
echo " railway status # Check deployment status"
echo " railway logs # View application logs"
echo " railway shell # Access deployment shell"
echo " railway redeploy # Redeploy current version"
echo
print_success "🎉 Deployment script completed!"
print_warning "📋 Don't forget to update the iOS app with your Railway URL!"
echo
echo "For detailed instructions, see: DEPLOYMENT_GUIDE.md"
echo "Good luck with your App Store submission! 🍎"

View File

@ -0,0 +1,31 @@
# TradingAgents Production Environment Configuration
# Copy this file to .env and fill in your actual values for Railway deployment
# Required API Keys
OPENAI_API_KEY=your_openai_api_key_here
FINNHUB_API_KEY=your_finnhub_api_key_here
# Optional but Recommended
SERPAPI_API_KEY=your_serpapi_key_here
REDDIT_CLIENT_ID=your_reddit_client_id
REDDIT_CLIENT_SECRET=your_reddit_client_secret
# Production Server Configuration
TRADINGAGENTS_API_HOST=0.0.0.0
TRADINGAGENTS_API_PORT=8000
# LLM Configuration for Production
DEEP_THINK_MODEL=gpt-4o
QUICK_THINK_MODEL=gpt-4o-mini
BACKEND_URL=https://api.openai.com/v1
# Data Directories (Railway will use default paths)
# TRADINGAGENTS_RESULTS_DIR=/app/results
# TRADINGAGENTS_DATA_DIR=/app/data
# Security Settings for Production
CORS_ORIGINS=https://your-domain.com,https://your-app.railway.app
# Optional: Performance Settings
MAX_DEBATE_ROUNDS=3
MAX_RISK_DISCUSS_ROUNDS=2

21
backend/railway.json Normal file
View File

@ -0,0 +1,21 @@
{
"$schema": "https://railway.app/railway.schema.json",
"build": {
"builder": "nixpacks",
"buildCommand": "pip install -r requirements.txt"
},
"deploy": {
"startCommand": "uvicorn api:app --host 0.0.0.0 --port $PORT",
"restartPolicyType": "on_failure",
"restartPolicyMaxRetries": 3
},
"environments": {
"production": {
"variables": {
"PYTHONPATH": "/app",
"TRADINGAGENTS_API_HOST": "0.0.0.0",
"TRADINGAGENTS_API_PORT": "$PORT"
}
}
}
}

View File

@ -29,3 +29,4 @@ pydantic
uvicorn[standard]
python-dotenv
google-search-results
serpapi

View File

@ -0,0 +1,113 @@
#!/usr/bin/env python3
"""
Extended test to verify tool call fix works through analysis phase.
"""
import requests
import json
import time
import signal
import sys
def test_extended_analysis():
"""Test tool call fix through extended analysis."""
print("🧪 Extended Tool Call Fix Test")
print("=" * 50)
# Set up timeout handler
def timeout_handler(signum, frame):
print("\n⏰ Test timeout - but no tool call errors detected!")
print("✅ Fix appears to be working")
sys.exit(0)
signal.signal(signal.SIGALRM, timeout_handler)
signal.alarm(90) # 90 second timeout
try:
response = requests.get(
'http://localhost:8000/analyze/stream',
params={'ticker': 'AAPL'},
stream=True,
timeout=90
)
if response.status_code != 200:
print(f"❌ API error: {response.status_code}")
return False
print("✅ API accessible")
print("📊 Monitoring for tool call errors...")
chunk_count = 0
tool_call_error_found = False
agents_seen = set()
for line in response.iter_lines():
if line:
try:
decoded_line = line.decode('utf-8')
if decoded_line.startswith('data: '):
data = json.loads(decoded_line[6:])
chunk_count += 1
msg_type = data.get('type', 'unknown')
# Track agent activity
if msg_type == 'agent_status':
agent = data.get('agent', '')
status = data.get('status', '')
if agent and status == 'active':
agents_seen.add(agent)
# Check for tool call errors
if msg_type == 'error':
error_msg = data.get('message', '')
print(f"❌ Error detected: {error_msg[:100]}...")
if 'tool_calls' in error_msg and 'must be followed by tool messages' in error_msg:
print("❌ TOOL CALL ERROR DETECTED!")
tool_call_error_found = True
break
# Progress reporting
if chunk_count % 20 == 0:
print(f"📦 Processed {chunk_count} chunks, agents seen: {len(agents_seen)}")
# Success condition
if msg_type == 'final_result':
print(f"✅ Analysis completed successfully after {chunk_count} chunks!")
break
# Stop after reasonable time if no errors
if chunk_count >= 200:
print(f"🛑 Stopping after {chunk_count} chunks - no tool call errors detected")
break
except Exception as e:
print(f"⚠️ Parse error: {e}")
continue
signal.alarm(0) # Cancel timeout
print(f"\n📊 Test Summary:")
print(f" Chunks processed: {chunk_count}")
print(f" Agents seen: {agents_seen}")
print(f" Tool call errors: {'YES' if tool_call_error_found else 'NO'}")
if tool_call_error_found:
print("❌ RESULT: Tool call error detected - fix failed")
return False
else:
print("✅ RESULT: No tool call errors - fix is working!")
return True
except Exception as e:
print(f"❌ Test error: {e}")
return False
finally:
signal.alarm(0)
if __name__ == "__main__":
success = test_extended_analysis()
print(f"\n{'🎉 PASS' if success else '💥 FAIL'}")

View File

@ -0,0 +1,80 @@
#!/usr/bin/env python3
"""
Simple test to verify tool call fix is working.
"""
import requests
import json
import time
def test_tool_call_fix():
"""Test if tool call fix prevents the specific error."""
print("🧪 Simple Tool Call Fix Test")
print("=" * 40)
try:
# Test with a simple ticker
response = requests.get(
'http://localhost:8000/analyze/stream',
params={'ticker': 'AAPL'},
stream=True,
timeout=30
)
if response.status_code != 200:
print(f"❌ API error: {response.status_code}")
return False
print("✅ API accessible")
chunk_count = 0
tool_call_error_found = False
# Read first 20 chunks to check for tool call errors
for line in response.iter_lines():
if line and chunk_count < 20:
try:
decoded_line = line.decode('utf-8')
if decoded_line.startswith('data: '):
data = json.loads(decoded_line[6:])
chunk_count += 1
msg_type = data.get('type', 'unknown')
print(f"📦 Chunk {chunk_count}: {msg_type}")
# Check for the specific tool call error
if msg_type == 'error':
error_msg = data.get('message', '')
print(f"❌ Error: {error_msg}")
if 'tool_calls' in error_msg and 'must be followed by tool messages' in error_msg:
print("❌ TOOL CALL ERROR DETECTED - Fix failed!")
tool_call_error_found = True
break
# If we get through some chunks without the error, the fix is working
if chunk_count >= 15:
break
except Exception as e:
print(f"⚠️ Parse error: {e}")
continue
else:
break
if tool_call_error_found:
print("❌ RESULT: Tool call error still occurs")
return False
else:
print("✅ RESULT: No tool call errors detected in first 15 chunks")
print(" Fix appears to be working!")
return True
except Exception as e:
print(f"❌ Test error: {e}")
return False
if __name__ == "__main__":
success = test_tool_call_fix()
print(f"\n{'🎉 PASS' if success else '💥 FAIL'}")

View File

@ -0,0 +1,147 @@
#!/usr/bin/env python3
"""
Test script to verify the tool call fix works properly.
This script tests the API endpoint and checks for tool call errors.
"""
import requests
import json
import time
import sys
from datetime import datetime
def test_api_endpoint():
"""Test the API endpoint for tool call errors."""
print("🧪 Testing TradingAgents API for tool call fix...")
print("=" * 60)
try:
# Test health endpoint first
print("📋 Testing health endpoint...")
health_response = requests.get('http://localhost:8000/health', timeout=5)
if health_response.status_code == 200:
print("✅ Health endpoint OK")
else:
print(f"❌ Health endpoint failed: {health_response.status_code}")
return False
# Test streaming analysis
print("📋 Testing streaming analysis...")
response = requests.get(
'http://localhost:8000/analyze/stream',
params={'ticker': 'AAPL'},
stream=True,
timeout=120 # 2 minute timeout
)
if response.status_code != 200:
print(f"❌ API returned status code: {response.status_code}")
print(f"Response: {response.text}")
return False
print("✅ API endpoint accessible")
# Process streaming response
chunk_count = 0
error_found = False
success_found = False
start_time = time.time()
print("📦 Processing chunks...")
for line in response.iter_lines():
if line:
try:
decoded_line = line.decode('utf-8')
if decoded_line.startswith('data: '):
data = json.loads(decoded_line[6:])
chunk_count += 1
msg_type = data.get('type', 'unknown')
# Check for specific error patterns
if msg_type == 'error':
error_msg = data.get('message', 'Unknown error')
print(f"❌ ERROR DETECTED: {error_msg}")
# Check if it's the specific tool call error we're fixing
if 'tool_calls' in error_msg and 'must be followed by tool messages' in error_msg:
print("❌ TOOL CALL ERROR: The fix didn't work!")
error_found = True
break
else:
print("⚠️ Other error detected (not tool call related)")
elif msg_type == 'final_result':
print(f"✅ SUCCESS: Final result received after {chunk_count} chunks")
success_found = True
break
elif msg_type in ['status', 'agent_status', 'progress', 'reasoning']:
if chunk_count <= 10 or chunk_count % 10 == 0:
print(f"📦 Chunk {chunk_count}: {msg_type}")
# Safety timeout
elapsed = time.time() - start_time
if elapsed > 120: # 2 minutes
print("⏰ Test timeout reached")
break
# Stop after reasonable number of chunks
if chunk_count >= 100:
print("🛑 Stopping after 100 chunks")
break
except json.JSONDecodeError as e:
print(f"⚠️ JSON decode error: {e}")
continue
except Exception as e:
print(f"⚠️ Error processing chunk: {e}")
continue
# Summary
print("\n" + "=" * 60)
print("📊 TEST SUMMARY:")
print(f" Total chunks processed: {chunk_count}")
print(f" Tool call errors found: {'YES' if error_found else 'NO'}")
print(f" Analysis completed: {'YES' if success_found else 'NO'}")
if error_found:
print("❌ RESULT: Tool call fix did NOT work")
return False
elif success_found:
print("✅ RESULT: Tool call fix works - analysis completed successfully")
return True
else:
print("⚠️ RESULT: Tool call fix appears to work - no errors detected")
print(" (Analysis didn't complete but no tool call errors found)")
return True
except requests.exceptions.ConnectionError:
print("❌ Connection refused - API server not running")
print(" Start the server with: uvicorn api:app --host 0.0.0.0 --port 8000")
return False
except requests.exceptions.Timeout:
print("⏰ Request timed out")
return False
except Exception as e:
print(f"❌ Unexpected error: {e}")
return False
def main():
"""Main test function."""
print(f"🕒 Test started at: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
success = test_api_endpoint()
print(f"\n🕒 Test finished at: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
if success:
print("🎉 Overall result: PASS")
sys.exit(0)
else:
print("💥 Overall result: FAIL")
sys.exit(1)
if __name__ == "__main__":
main()

View File

@ -1,5 +1,4 @@
from .finnhub_utils import get_data_in_range
from .googlenews_utils import getNewsData
from .serpapi_utils import getNewsDataSerpAPI
from .yfin_utils import YFinanceUtils
from .reddit_utils import fetch_top_from_category

View File

@ -2,7 +2,7 @@ from typing import Annotated, Dict
from .reddit_utils import fetch_top_from_category
from .yfin_utils import *
from .stockstats_utils import *
from .googlenews_utils import *
from .serpapi_utils import getNewsDataSerpAPI
from .finnhub_utils import get_data_in_range
from dateutil.relativedelta import relativedelta
@ -312,15 +312,13 @@ def get_google_news(
before = start_date - relativedelta(days=look_back_days)
before = before.strftime("%Y-%m-%d")
# Log the API call - try SerpAPI first, fallback to web scraping
# Use SerpAPI exclusively - no fallback
serpapi_key = DEFAULT_CONFIG.get("serpapi_key", "")
if serpapi_key:
logger.info(f"🌐 Calling SerpAPI with query='{query}', start='{before}', end='{curr_date}'")
news_results = getNewsDataSerpAPI(query, before, curr_date, serpapi_key)
else:
logger.info(f"🌐 SerpAPI key not found, falling back to web scraping")
logger.info(f"🌐 Calling getNewsData with query='{query}', start='{before}', end='{curr_date}'")
news_results = getNewsData(query, before, curr_date)
if not serpapi_key:
raise ValueError("SerpAPI key is required. Please set SERPAPI_API_KEY in your environment variables.")
logger.info(f"🌐 Calling SerpAPI with query='{query}', start='{before}', end='{curr_date}'")
news_results = getNewsDataSerpAPI(query, before, curr_date, serpapi_key)
# Enhanced logging - Raw response
logger.info(f"🌐 RAW RESPONSE TYPE: {type(news_results)}")

View File

@ -620,23 +620,48 @@ class GraphSetup:
if isinstance(tool_call, dict):
tool_name = tool_call.get('name', '')
tool_args = tool_call.get('args', {})
tool_call_id = tool_call.get('id', 'unknown')
tool_call_id = tool_call.get('id', f'unknown_{i}')
elif hasattr(tool_call, 'name'):
tool_name = tool_call.name
tool_args = tool_call.args if hasattr(tool_call, 'args') else {}
tool_call_id = tool_call.id if hasattr(tool_call, 'id') else 'unknown'
tool_call_id = tool_call.id if hasattr(tool_call, 'id') else f'unknown_{i}'
else:
logger.error(f"{analyst_type} tools: Unknown tool call format")
# Create an error ToolMessage even for unknown format
unknown_tool_call_id = f'unknown_format_{i}'
error_message = ToolMessage(
content=f"Error: Unknown tool call format at index {i}",
tool_call_id=unknown_tool_call_id
)
updated_messages.append(error_message)
logger.info(f"🔧 {analyst_type} tools: ✅ Added error ToolMessage for unknown format")
tools_executed += 1
continue
if not tool_name:
logger.error(f"{analyst_type} tools: Empty tool name")
# Create an error ToolMessage for empty tool name
error_message = ToolMessage(
content=f"Error: Empty tool name at index {i}",
tool_call_id=tool_call_id
)
updated_messages.append(error_message)
logger.info(f"🔧 {analyst_type} tools: ✅ Added error ToolMessage for empty tool name")
tools_executed += 1
continue
# Check if the tool can be called
can_call, reason = self.tool_tracker.can_call_tool(analyst_type, tool_name, tool_args)
if not can_call:
logger.warning(f"🔧 {analyst_type} tools: SKIPPING - {reason}")
# Create a skip ToolMessage to maintain conversation flow
skip_message = ToolMessage(
content=f"Tool call skipped: {reason}",
tool_call_id=tool_call_id
)
updated_messages.append(skip_message)
logger.info(f"🔧 {analyst_type} tools: ✅ Added skip ToolMessage for {tool_name}")
tools_executed += 1
continue
logger.info(f"🔧 {analyst_type} tools: [{i+1}/{len(last_msg.tool_calls)}] Executing {tool_name}")
@ -667,6 +692,14 @@ class GraphSetup:
except Exception as e:
logger.error(f"{analyst_type} tools: Error executing {tool_name}: {str(e)}")
# Create an error ToolMessage to maintain conversation flow
error_message = ToolMessage(
content=f"Error executing {tool_name}: {str(e)}",
tool_call_id=tool_call_id
)
updated_messages.append(error_message)
logger.info(f"🔧 {analyst_type} tools: ✅ Added error ToolMessage for {tool_name}")
tools_executed += 1
logger.info(f"🔧 {analyst_type} tools: Executed {tools_executed} tools")
logger.info(f"🔧 {analyst_type} tools: Total calls for {analyst_type}: {self.tool_tracker.total_calls.get(analyst_type, 0)}")