v0 iOS app

Jiahao Zhang 2025-07-01 23:10:44 -07:00
parent f8dc1fca25
commit 4b12480559
83 changed files with 4775 additions and 5604 deletions

.gitignore (vendored)

@@ -1,16 +1,226 @@
env/
# Python
__pycache__/
.DS_Store
*.csv
src/
eval_results/
eval_data/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
# Environment variables
.installed.cfg
*.egg
MANIFEST
.env
.env.local
.env.*.local
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
.pytest_cache/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.mypy_cache/
.dmypy.json
dmypy.json
.tox/
.nox/
# Results folder - ignore all analysis outputs
# UV Package Manager
uv.lock
# Project specific - Results & Evaluation Data
results/
eval_results/
backend/eval_results/
backend/results/
data_cache/
backend/tradingagents/dataflows/data_cache/
*.log
logs/
*.out
*.pid
# API & Runtime Files
*.sock
.restart_api_*
api_*.log
server_*.log
# Configuration & Secrets
config.local.*
*.key
*.pem
*.p12
*.p8
api_keys.txt
secrets.json
# macOS
.DS_Store
.AppleDouble
.LSOverride
Icon?
._*
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
# Xcode & iOS
*.xcuserstate
*.xcuserdatad
project.xcworkspace/
xcuserdata/
*.xccheckout
*.moved-aside
DerivedData/
*.hmap
*.ipa
*.dSYM.zip
*.dSYM
*.xcscmblueprint
*.xcarchive
timeline.xctimeline
playground.xcworkspace
.build/
# Swift Package Manager
.build/
.swiftpm/
Package.resolved
# CocoaPods
Pods/
*.podspec
Podfile.lock
# Carthage
Carthage/Build/
# Accio dependency management
Dependencies/
.accio/
# fastlane
fastlane/report.xml
fastlane/Preview.html
fastlane/screenshots/**/*.png
fastlane/test_output
fastlane/readme.md
# Code Injection
iOSInjectionProject/
# IDE & Editors
.idea/
.vscode/
*.swp
*.swo
*~
.sublime-workspace
.sublime-project
*.code-workspace
# JetBrains
.idea/
*.iws
*.iml
*.ipr
out/
# Vim
*.swp
*.swo
.netrwhist
# Emacs
*~
\#*\#
/.emacs.desktop
/.emacs.desktop.lock
*.elc
auto-save-list
tramp
.\#*
# Backup files
*.bak
*.backup
*.old
*.orig
*.rej
*.tmp
# Temporary files
.tmp/
temp/
tmp/
*.temp
# Database
*.db
*.sqlite
*.sqlite3
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# pipenv
Pipfile.lock
# PEP 582
__pypackages__/
# Celery
celerybeat-schedule
celerybeat.pid
# SageMath
*.sage.py
# Spyder
.spyderproject
.spyproject
# Rope
.ropeproject
# mkdocs
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre
.pyre/


@@ -1,6 +1,41 @@
<p align="center">
<img src="assets/TauricResearch.png" style="width: 60%; height: auto;">
</p>
# 🚀 Trading Agents
<div align="center">
<img src="assets/TauricResearch.png" alt="Tauric Logo" width="400"/>
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
![Platform](https://img.shields.io/badge/platform-Windows%20|%20macOS%20|%20Linux-lightgrey.svg)
</div>
## 📁 Project Structure
The project is organized into clearly separated components:
```
TradingAgents/
├── backend/ # Python backend server and trading logic
│ ├── api.py # FastAPI server
│ ├── tradingagents/ # Core trading agents and logic
│ ├── cli/ # Command-line interface
│ └── requirements.txt # Python dependencies
├── ios/ # iOS mobile application
│ └── TradingDummy/ # Swift/SwiftUI app project
├── docs/ # All documentation
│ ├── PRD.md # Product requirements
│ ├── DOCUMENTATION.md # Technical documentation
│ └── FASTAPI_SETUP.md # API setup guide
├── assets/ # Images and media assets
├── README.md # This file
└── LICENSE # MIT license
```
### Quick Navigation
- 🐍 [Backend Setup](backend/README.md) - Python server and trading agents
- 📱 [iOS App Setup](ios/README.md) - Mobile application
- 📚 [Documentation](docs/README.md) - All project documentation
## Overview
<div align="center" style="line-height: 1;">
<a href="https://arxiv.org/abs/2412.20138" target="_blank"><img alt="arXiv" src="https://img.shields.io/badge/arXiv-2412.20138-B31B1B?logo=arxiv"/></a>
@@ -93,7 +128,7 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
## Installation and CLI
### Installation
### Backend Installation
Clone TradingAgents:
```bash
@@ -107,8 +142,9 @@ conda create -n tradingagents python=3.13
conda activate tradingagents
```
Install dependencies:
Install backend dependencies:
```bash
cd backend
pip install -r requirements.txt
```
@@ -126,12 +162,37 @@ export OPENAI_API_KEY=$YOUR_OPENAI_API_KEY
### CLI Usage
You can also try out the CLI directly by running:
From the backend directory, you can try out the CLI directly by running:
```bash
python -m cli.main
```
You will see a screen where you can select your desired tickers, date, LLMs, research depth, etc.
### FastAPI Server
Alternatively, you can run the FastAPI server for REST API access:
```bash
python run_api.py
```
The server will start at `http://localhost:8000`. You can then use the iOS app or make API calls directly.
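If you want to call the API without the app, the request shape can be sketched from the app's `AnalysisRequest` model. A minimal Swift sketch follows; note that the exact endpoint path (`/analyze` here) is an assumption for illustration, not something this commit confirms:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// Mirrors the app's request model: a JSON body with a single "ticker" field.
struct AnalysisRequest: Codable {
    let ticker: String
}

// Builds a POST request against the local server.
// The "/analyze" path is a hypothetical placeholder.
func makeAnalysisRequest(baseURL: String, ticker: String) -> URLRequest? {
    guard let url = URL(string: baseURL + "/analyze") else { return nil }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(AnalysisRequest(ticker: ticker))
    return request
}

let request = makeAnalysisRequest(baseURL: "http://localhost:8000", ticker: "AAPL")
```

Sending the request with `URLSession.shared.dataTask(with:)` against the running server should return the same JSON the iOS app decodes.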
### iOS App
To use the iOS companion app:
1. Navigate to the iOS directory:
```bash
cd ios/TradingDummy
```
2. Open the project in Xcode:
```bash
open TradingDummy.xcodeproj
```
3. Make sure the backend server is running, then build and run the app (⌘+R)
<p align="center">
<img src="assets/cli/cli_init.png" width="100%" style="display: inline-block; margin: 0 2%;">
</p>

TradingDummy/README.md (new file)

@@ -0,0 +1,73 @@
# TradingDummy iOS App - Project Structure
## Overview
The iOS app is organized following the MVVM architecture, with a clear separation of concerns.
## Directory Structure
```
TradingDummy/
├── Configuration/
│ └── AppConfig.swift # App-wide configuration (API URLs, timeouts)
├── Models/
│ └── AnalysisModels.swift # Data models (Request/Response)
├── Services/
│ └── TradingAgentsService.swift # API service layer
├── ViewModels/
│ └── TradingAnalysisViewModel.swift # Business logic
├── Views/
│ ├── TradingAnalysisView.swift # Main view
│ ├── AnalysisResultView.swift # Result display components
│ └── SupportingViews.swift # Loading, Error, Welcome views
├── TradingDummyApp.swift # App entry point
└── ContentView.swift # Original demo view (can be removed)
```
## Architecture
### Models
- `AnalysisRequest`: Request payload for API
- `AnalysisResponse`: Response data structure with all analysis reports
### Services
- `TradingAgentsService`: Singleton service for API communication
- `APIError`: Custom error types for better error handling
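As an illustration of the service-layer pattern described above, here is a minimal sketch (names follow this README, but the bodies are hypothetical, not the code from this commit):

```swift
import Foundation

// Hypothetical sketch of the custom error type the README describes.
enum APIError: Error, LocalizedError {
    case invalidURL
    case serverError(statusCode: Int)
    case decodingFailed

    var errorDescription: String? {
        switch self {
        case .invalidURL:
            return "The API URL is invalid."
        case .serverError(let code):
            return "Server returned status \(code)."
        case .decodingFailed:
            return "Could not decode the server response."
        }
    }
}

// Singleton service skeleton; the real implementation would wrap URLSession calls.
final class TradingAgentsService {
    static let shared = TradingAgentsService()
    private init() {}

    // In the actual app this would come from AppConfig.apiBaseURL.
    let baseURL = "http://localhost:8000"
}
```

A singleton keeps one shared base URL (and, in a fuller version, one URLSession); call sites simply use `TradingAgentsService.shared`.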
### ViewModels
- `TradingAnalysisViewModel`: Manages state and business logic
- Uses `@Published` properties for SwiftUI binding
- Handles async API calls with proper error handling
### Views
- `TradingAnalysisView`: Main screen with ticker input
- `AnalysisResultView`: Displays analysis results with expandable cards
- Supporting views for different states (loading, error, welcome)
## Configuration
### Development
- API URL: `http://localhost:8000`
- Timeout: 60 seconds
### Production
- Update `AppConfig.swift` with your production URL
- Build with Release configuration
## Usage
1. Open `TradingDummy.xcodeproj` in Xcode
2. Build and run (⌘+R)
3. The app uses Xcode 16's file system synchronized groups, so all Swift files are included automatically
## Testing
1. Ensure backend is running: `cd backend && uv run python3 run_api.py`
2. Run the app in simulator
3. Enter a ticker symbol and tap "Analyze"
## Notes
- All files in subdirectories are automatically included by Xcode 16
- No need to manually add files to the project
- The modular structure makes it easy to add new features or modify existing ones


@@ -0,0 +1,639 @@
// !$*UTF8*$!
{
archiveVersion = 1;
classes = {
};
objectVersion = 77;
objects = {
/* Begin PBXBuildFile section */
2B8FC87F2E1077CC000CC9A0 /* ReSwift in Frameworks */ = {isa = PBXBuildFile; productRef = 2B8FC87E2E1077CC000CC9A0 /* ReSwift */; };
/* End PBXBuildFile section */
/* Begin PBXContainerItemProxy section */
2B8FC8612E1077B6000CC9A0 /* PBXContainerItemProxy */ = {
isa = PBXContainerItemProxy;
containerPortal = 2B8FC8472E1077B4000CC9A0 /* Project object */;
proxyType = 1;
remoteGlobalIDString = 2B8FC84E2E1077B4000CC9A0;
remoteInfo = TradingDummy;
};
2B8FC86B2E1077B6000CC9A0 /* PBXContainerItemProxy */ = {
isa = PBXContainerItemProxy;
containerPortal = 2B8FC8472E1077B4000CC9A0 /* Project object */;
proxyType = 1;
remoteGlobalIDString = 2B8FC84E2E1077B4000CC9A0;
remoteInfo = TradingDummy;
};
/* End PBXContainerItemProxy section */
/* Begin PBXFileReference section */
2B8FC84F2E1077B5000CC9A0 /* TradingDummy.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = TradingDummy.app; sourceTree = BUILT_PRODUCTS_DIR; };
2B8FC8602E1077B6000CC9A0 /* TradingDummyTests.xctest */ = {isa = PBXFileReference; explicitFileType = wrapper.cfbundle; includeInIndex = 0; path = TradingDummyTests.xctest; sourceTree = BUILT_PRODUCTS_DIR; };
2B8FC86A2E1077B6000CC9A0 /* TradingDummyUITests.xctest */ = {isa = PBXFileReference; explicitFileType = wrapper.cfbundle; includeInIndex = 0; path = TradingDummyUITests.xctest; sourceTree = BUILT_PRODUCTS_DIR; };
/* End PBXFileReference section */
/* Begin PBXFileSystemSynchronizedBuildFileExceptionSet section */
2B1579CB2E12F4EB000B3E8E /* Exceptions for "TradingDummy" folder in "TradingDummy" target */ = {
isa = PBXFileSystemSynchronizedBuildFileExceptionSet;
membershipExceptions = (
Info.plist,
);
target = 2B8FC84E2E1077B4000CC9A0 /* TradingDummy */;
};
/* End PBXFileSystemSynchronizedBuildFileExceptionSet section */
/* Begin PBXFileSystemSynchronizedRootGroup section */
2B8FC8512E1077B5000CC9A0 /* TradingDummy */ = {
isa = PBXFileSystemSynchronizedRootGroup;
exceptions = (
2B1579CB2E12F4EB000B3E8E /* Exceptions for "TradingDummy" folder in "TradingDummy" target */,
);
path = TradingDummy;
sourceTree = "<group>";
};
2B8FC8632E1077B6000CC9A0 /* TradingDummyTests */ = {
isa = PBXFileSystemSynchronizedRootGroup;
path = TradingDummyTests;
sourceTree = "<group>";
};
2B8FC86D2E1077B6000CC9A0 /* TradingDummyUITests */ = {
isa = PBXFileSystemSynchronizedRootGroup;
path = TradingDummyUITests;
sourceTree = "<group>";
};
/* End PBXFileSystemSynchronizedRootGroup section */
/* Begin PBXFrameworksBuildPhase section */
2B8FC84C2E1077B4000CC9A0 /* Frameworks */ = {
isa = PBXFrameworksBuildPhase;
buildActionMask = 2147483647;
files = (
2B8FC87F2E1077CC000CC9A0 /* ReSwift in Frameworks */,
);
runOnlyForDeploymentPostprocessing = 0;
};
2B8FC85D2E1077B6000CC9A0 /* Frameworks */ = {
isa = PBXFrameworksBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
2B8FC8672E1077B6000CC9A0 /* Frameworks */ = {
isa = PBXFrameworksBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
/* End PBXFrameworksBuildPhase section */
/* Begin PBXGroup section */
2B8FC8462E1077B4000CC9A0 = {
isa = PBXGroup;
children = (
2B8FC8512E1077B5000CC9A0 /* TradingDummy */,
2B8FC8632E1077B6000CC9A0 /* TradingDummyTests */,
2B8FC86D2E1077B6000CC9A0 /* TradingDummyUITests */,
2B8FC8502E1077B5000CC9A0 /* Products */,
);
sourceTree = "<group>";
};
2B8FC8502E1077B5000CC9A0 /* Products */ = {
isa = PBXGroup;
children = (
2B8FC84F2E1077B5000CC9A0 /* TradingDummy.app */,
2B8FC8602E1077B6000CC9A0 /* TradingDummyTests.xctest */,
2B8FC86A2E1077B6000CC9A0 /* TradingDummyUITests.xctest */,
);
name = Products;
sourceTree = "<group>";
};
/* End PBXGroup section */
/* Begin PBXNativeTarget section */
2B8FC84E2E1077B4000CC9A0 /* TradingDummy */ = {
isa = PBXNativeTarget;
buildConfigurationList = 2B8FC8742E1077B6000CC9A0 /* Build configuration list for PBXNativeTarget "TradingDummy" */;
buildPhases = (
2B8FC84B2E1077B4000CC9A0 /* Sources */,
2B8FC84C2E1077B4000CC9A0 /* Frameworks */,
2B8FC84D2E1077B4000CC9A0 /* Resources */,
);
buildRules = (
);
dependencies = (
);
fileSystemSynchronizedGroups = (
2B8FC8512E1077B5000CC9A0 /* TradingDummy */,
);
name = TradingDummy;
packageProductDependencies = (
2B8FC87E2E1077CC000CC9A0 /* ReSwift */,
);
productName = TradingDummy;
productReference = 2B8FC84F2E1077B5000CC9A0 /* TradingDummy.app */;
productType = "com.apple.product-type.application";
};
2B8FC85F2E1077B6000CC9A0 /* TradingDummyTests */ = {
isa = PBXNativeTarget;
buildConfigurationList = 2B8FC8772E1077B6000CC9A0 /* Build configuration list for PBXNativeTarget "TradingDummyTests" */;
buildPhases = (
2B8FC85C2E1077B6000CC9A0 /* Sources */,
2B8FC85D2E1077B6000CC9A0 /* Frameworks */,
2B8FC85E2E1077B6000CC9A0 /* Resources */,
);
buildRules = (
);
dependencies = (
2B8FC8622E1077B6000CC9A0 /* PBXTargetDependency */,
);
fileSystemSynchronizedGroups = (
2B8FC8632E1077B6000CC9A0 /* TradingDummyTests */,
);
name = TradingDummyTests;
packageProductDependencies = (
);
productName = TradingDummyTests;
productReference = 2B8FC8602E1077B6000CC9A0 /* TradingDummyTests.xctest */;
productType = "com.apple.product-type.bundle.unit-test";
};
2B8FC8692E1077B6000CC9A0 /* TradingDummyUITests */ = {
isa = PBXNativeTarget;
buildConfigurationList = 2B8FC87A2E1077B6000CC9A0 /* Build configuration list for PBXNativeTarget "TradingDummyUITests" */;
buildPhases = (
2B8FC8662E1077B6000CC9A0 /* Sources */,
2B8FC8672E1077B6000CC9A0 /* Frameworks */,
2B8FC8682E1077B6000CC9A0 /* Resources */,
);
buildRules = (
);
dependencies = (
2B8FC86C2E1077B6000CC9A0 /* PBXTargetDependency */,
);
fileSystemSynchronizedGroups = (
2B8FC86D2E1077B6000CC9A0 /* TradingDummyUITests */,
);
name = TradingDummyUITests;
packageProductDependencies = (
);
productName = TradingDummyUITests;
productReference = 2B8FC86A2E1077B6000CC9A0 /* TradingDummyUITests.xctest */;
productType = "com.apple.product-type.bundle.ui-testing";
};
/* End PBXNativeTarget section */
/* Begin PBXProject section */
2B8FC8472E1077B4000CC9A0 /* Project object */ = {
isa = PBXProject;
attributes = {
BuildIndependentTargetsInParallel = 1;
LastSwiftUpdateCheck = 1610;
LastUpgradeCheck = 1610;
TargetAttributes = {
2B8FC84E2E1077B4000CC9A0 = {
CreatedOnToolsVersion = 16.1;
};
2B8FC85F2E1077B6000CC9A0 = {
CreatedOnToolsVersion = 16.1;
TestTargetID = 2B8FC84E2E1077B4000CC9A0;
};
2B8FC8692E1077B6000CC9A0 = {
CreatedOnToolsVersion = 16.1;
TestTargetID = 2B8FC84E2E1077B4000CC9A0;
};
};
};
buildConfigurationList = 2B8FC84A2E1077B4000CC9A0 /* Build configuration list for PBXProject "TradingDummy" */;
developmentRegion = en;
hasScannedForEncodings = 0;
knownRegions = (
en,
Base,
);
mainGroup = 2B8FC8462E1077B4000CC9A0;
minimizedProjectReferenceProxies = 1;
packageReferences = (
2B8FC87D2E1077CC000CC9A0 /* XCRemoteSwiftPackageReference "ReSwift" */,
);
preferredProjectObjectVersion = 77;
productRefGroup = 2B8FC8502E1077B5000CC9A0 /* Products */;
projectDirPath = "";
projectRoot = "";
targets = (
2B8FC84E2E1077B4000CC9A0 /* TradingDummy */,
2B8FC85F2E1077B6000CC9A0 /* TradingDummyTests */,
2B8FC8692E1077B6000CC9A0 /* TradingDummyUITests */,
);
};
/* End PBXProject section */
/* Begin PBXResourcesBuildPhase section */
2B8FC84D2E1077B4000CC9A0 /* Resources */ = {
isa = PBXResourcesBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
2B8FC85E2E1077B6000CC9A0 /* Resources */ = {
isa = PBXResourcesBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
2B8FC8682E1077B6000CC9A0 /* Resources */ = {
isa = PBXResourcesBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
/* End PBXResourcesBuildPhase section */
/* Begin PBXSourcesBuildPhase section */
2B8FC84B2E1077B4000CC9A0 /* Sources */ = {
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
2B8FC85C2E1077B6000CC9A0 /* Sources */ = {
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
2B8FC8662E1077B6000CC9A0 /* Sources */ = {
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
/* End PBXSourcesBuildPhase section */
/* Begin PBXTargetDependency section */
2B8FC8622E1077B6000CC9A0 /* PBXTargetDependency */ = {
isa = PBXTargetDependency;
target = 2B8FC84E2E1077B4000CC9A0 /* TradingDummy */;
targetProxy = 2B8FC8612E1077B6000CC9A0 /* PBXContainerItemProxy */;
};
2B8FC86C2E1077B6000CC9A0 /* PBXTargetDependency */ = {
isa = PBXTargetDependency;
target = 2B8FC84E2E1077B4000CC9A0 /* TradingDummy */;
targetProxy = 2B8FC86B2E1077B6000CC9A0 /* PBXContainerItemProxy */;
};
/* End PBXTargetDependency section */
/* Begin XCBuildConfiguration section */
2B8FC8722E1077B6000CC9A0 /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
ALWAYS_SEARCH_USER_PATHS = NO;
ASSETCATALOG_COMPILER_GENERATE_SWIFT_ASSET_SYMBOL_EXTENSIONS = YES;
CLANG_ANALYZER_NONNULL = YES;
CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE;
CLANG_CXX_LANGUAGE_STANDARD = "gnu++20";
CLANG_ENABLE_MODULES = YES;
CLANG_ENABLE_OBJC_ARC = YES;
CLANG_ENABLE_OBJC_WEAK = YES;
CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES;
CLANG_WARN_BOOL_CONVERSION = YES;
CLANG_WARN_COMMA = YES;
CLANG_WARN_CONSTANT_CONVERSION = YES;
CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES;
CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;
CLANG_WARN_DOCUMENTATION_COMMENTS = YES;
CLANG_WARN_EMPTY_BODY = YES;
CLANG_WARN_ENUM_CONVERSION = YES;
CLANG_WARN_INFINITE_RECURSION = YES;
CLANG_WARN_INT_CONVERSION = YES;
CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES;
CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES;
CLANG_WARN_OBJC_LITERAL_CONVERSION = YES;
CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;
CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES;
CLANG_WARN_RANGE_LOOP_ANALYSIS = YES;
CLANG_WARN_STRICT_PROTOTYPES = YES;
CLANG_WARN_SUSPICIOUS_MOVE = YES;
CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE;
CLANG_WARN_UNREACHABLE_CODE = YES;
CLANG_WARN__DUPLICATE_METHOD_MATCH = YES;
COPY_PHASE_STRIP = NO;
DEBUG_INFORMATION_FORMAT = dwarf;
ENABLE_STRICT_OBJC_MSGSEND = YES;
ENABLE_TESTABILITY = YES;
ENABLE_USER_SCRIPT_SANDBOXING = YES;
GCC_C_LANGUAGE_STANDARD = gnu17;
GCC_DYNAMIC_NO_PIC = NO;
GCC_NO_COMMON_BLOCKS = YES;
GCC_OPTIMIZATION_LEVEL = 0;
GCC_PREPROCESSOR_DEFINITIONS = (
"DEBUG=1",
"$(inherited)",
);
GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;
GCC_WARN_UNDECLARED_SELECTOR = YES;
GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;
GCC_WARN_UNUSED_FUNCTION = YES;
GCC_WARN_UNUSED_VARIABLE = YES;
LOCALIZATION_PREFERS_STRING_CATALOGS = YES;
MTL_ENABLE_DEBUG_INFO = INCLUDE_SOURCE;
MTL_FAST_MATH = YES;
ONLY_ACTIVE_ARCH = YES;
SWIFT_ACTIVE_COMPILATION_CONDITIONS = "DEBUG $(inherited)";
SWIFT_OPTIMIZATION_LEVEL = "-Onone";
};
name = Debug;
};
2B8FC8732E1077B6000CC9A0 /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
ALWAYS_SEARCH_USER_PATHS = NO;
ASSETCATALOG_COMPILER_GENERATE_SWIFT_ASSET_SYMBOL_EXTENSIONS = YES;
CLANG_ANALYZER_NONNULL = YES;
CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE;
CLANG_CXX_LANGUAGE_STANDARD = "gnu++20";
CLANG_ENABLE_MODULES = YES;
CLANG_ENABLE_OBJC_ARC = YES;
CLANG_ENABLE_OBJC_WEAK = YES;
CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES;
CLANG_WARN_BOOL_CONVERSION = YES;
CLANG_WARN_COMMA = YES;
CLANG_WARN_CONSTANT_CONVERSION = YES;
CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES;
CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;
CLANG_WARN_DOCUMENTATION_COMMENTS = YES;
CLANG_WARN_EMPTY_BODY = YES;
CLANG_WARN_ENUM_CONVERSION = YES;
CLANG_WARN_INFINITE_RECURSION = YES;
CLANG_WARN_INT_CONVERSION = YES;
CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES;
CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES;
CLANG_WARN_OBJC_LITERAL_CONVERSION = YES;
CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;
CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES;
CLANG_WARN_RANGE_LOOP_ANALYSIS = YES;
CLANG_WARN_STRICT_PROTOTYPES = YES;
CLANG_WARN_SUSPICIOUS_MOVE = YES;
CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE;
CLANG_WARN_UNREACHABLE_CODE = YES;
CLANG_WARN__DUPLICATE_METHOD_MATCH = YES;
COPY_PHASE_STRIP = NO;
DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym";
ENABLE_NS_ASSERTIONS = NO;
ENABLE_STRICT_OBJC_MSGSEND = YES;
ENABLE_USER_SCRIPT_SANDBOXING = YES;
GCC_C_LANGUAGE_STANDARD = gnu17;
GCC_NO_COMMON_BLOCKS = YES;
GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;
GCC_WARN_UNDECLARED_SELECTOR = YES;
GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;
GCC_WARN_UNUSED_FUNCTION = YES;
GCC_WARN_UNUSED_VARIABLE = YES;
LOCALIZATION_PREFERS_STRING_CATALOGS = YES;
MTL_ENABLE_DEBUG_INFO = NO;
MTL_FAST_MATH = YES;
SWIFT_COMPILATION_MODE = wholemodule;
};
name = Release;
};
2B8FC8752E1077B6000CC9A0 /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
ASSETCATALOG_COMPILER_GLOBAL_ACCENT_COLOR_NAME = AccentColor;
CODE_SIGN_ENTITLEMENTS = TradingDummy/TradingDummy.entitlements;
CODE_SIGN_STYLE = Automatic;
CURRENT_PROJECT_VERSION = 1;
DEVELOPMENT_ASSET_PATHS = "\"TradingDummy/Preview Content\"";
DEVELOPMENT_TEAM = 7T2V4G2PCV;
ENABLE_HARDENED_RUNTIME = YES;
ENABLE_PREVIEWS = YES;
GENERATE_INFOPLIST_FILE = YES;
INFOPLIST_FILE = TradingDummy/Info.plist;
"INFOPLIST_KEY_UIApplicationSceneManifest_Generation[sdk=iphoneos*]" = YES;
"INFOPLIST_KEY_UIApplicationSceneManifest_Generation[sdk=iphonesimulator*]" = YES;
"INFOPLIST_KEY_UIApplicationSupportsIndirectInputEvents[sdk=iphoneos*]" = YES;
"INFOPLIST_KEY_UIApplicationSupportsIndirectInputEvents[sdk=iphonesimulator*]" = YES;
"INFOPLIST_KEY_UILaunchScreen_Generation[sdk=iphoneos*]" = YES;
"INFOPLIST_KEY_UILaunchScreen_Generation[sdk=iphonesimulator*]" = YES;
"INFOPLIST_KEY_UIStatusBarStyle[sdk=iphoneos*]" = UIStatusBarStyleDefault;
"INFOPLIST_KEY_UIStatusBarStyle[sdk=iphonesimulator*]" = UIStatusBarStyleDefault;
INFOPLIST_KEY_UISupportedInterfaceOrientations_iPad = "UIInterfaceOrientationPortrait UIInterfaceOrientationPortraitUpsideDown UIInterfaceOrientationLandscapeLeft UIInterfaceOrientationLandscapeRight";
INFOPLIST_KEY_UISupportedInterfaceOrientations_iPhone = "UIInterfaceOrientationPortrait UIInterfaceOrientationLandscapeLeft UIInterfaceOrientationLandscapeRight";
IPHONEOS_DEPLOYMENT_TARGET = 18.1;
LD_RUNPATH_SEARCH_PATHS = "@executable_path/Frameworks";
"LD_RUNPATH_SEARCH_PATHS[sdk=macosx*]" = "@executable_path/../Frameworks";
MACOSX_DEPLOYMENT_TARGET = 15.1;
MARKETING_VERSION = 1.0;
PRODUCT_BUNDLE_IDENTIFIER = zjh08177.TradingDummy;
PRODUCT_NAME = "$(TARGET_NAME)";
SDKROOT = auto;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator macosx xros xrsimulator";
SWIFT_EMIT_LOC_STRINGS = YES;
SWIFT_VERSION = 5.0;
TARGETED_DEVICE_FAMILY = "1,2,7";
XROS_DEPLOYMENT_TARGET = 2.1;
};
name = Debug;
};
2B8FC8762E1077B6000CC9A0 /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
ASSETCATALOG_COMPILER_GLOBAL_ACCENT_COLOR_NAME = AccentColor;
CODE_SIGN_ENTITLEMENTS = TradingDummy/TradingDummy.entitlements;
CODE_SIGN_STYLE = Automatic;
CURRENT_PROJECT_VERSION = 1;
DEVELOPMENT_ASSET_PATHS = "\"TradingDummy/Preview Content\"";
DEVELOPMENT_TEAM = 7T2V4G2PCV;
ENABLE_HARDENED_RUNTIME = YES;
ENABLE_PREVIEWS = YES;
GENERATE_INFOPLIST_FILE = YES;
INFOPLIST_FILE = TradingDummy/Info.plist;
"INFOPLIST_KEY_UIApplicationSceneManifest_Generation[sdk=iphoneos*]" = YES;
"INFOPLIST_KEY_UIApplicationSceneManifest_Generation[sdk=iphonesimulator*]" = YES;
"INFOPLIST_KEY_UIApplicationSupportsIndirectInputEvents[sdk=iphoneos*]" = YES;
"INFOPLIST_KEY_UIApplicationSupportsIndirectInputEvents[sdk=iphonesimulator*]" = YES;
"INFOPLIST_KEY_UILaunchScreen_Generation[sdk=iphoneos*]" = YES;
"INFOPLIST_KEY_UILaunchScreen_Generation[sdk=iphonesimulator*]" = YES;
"INFOPLIST_KEY_UIStatusBarStyle[sdk=iphoneos*]" = UIStatusBarStyleDefault;
"INFOPLIST_KEY_UIStatusBarStyle[sdk=iphonesimulator*]" = UIStatusBarStyleDefault;
INFOPLIST_KEY_UISupportedInterfaceOrientations_iPad = "UIInterfaceOrientationPortrait UIInterfaceOrientationPortraitUpsideDown UIInterfaceOrientationLandscapeLeft UIInterfaceOrientationLandscapeRight";
INFOPLIST_KEY_UISupportedInterfaceOrientations_iPhone = "UIInterfaceOrientationPortrait UIInterfaceOrientationLandscapeLeft UIInterfaceOrientationLandscapeRight";
IPHONEOS_DEPLOYMENT_TARGET = 18.1;
LD_RUNPATH_SEARCH_PATHS = "@executable_path/Frameworks";
"LD_RUNPATH_SEARCH_PATHS[sdk=macosx*]" = "@executable_path/../Frameworks";
MACOSX_DEPLOYMENT_TARGET = 15.1;
MARKETING_VERSION = 1.0;
PRODUCT_BUNDLE_IDENTIFIER = zjh08177.TradingDummy;
PRODUCT_NAME = "$(TARGET_NAME)";
SDKROOT = auto;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator macosx xros xrsimulator";
SWIFT_EMIT_LOC_STRINGS = YES;
SWIFT_VERSION = 5.0;
TARGETED_DEVICE_FAMILY = "1,2,7";
XROS_DEPLOYMENT_TARGET = 2.1;
};
name = Release;
};
2B8FC8782E1077B6000CC9A0 /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
BUNDLE_LOADER = "$(TEST_HOST)";
CODE_SIGN_STYLE = Automatic;
CURRENT_PROJECT_VERSION = 1;
DEVELOPMENT_TEAM = 7T2V4G2PCV;
GENERATE_INFOPLIST_FILE = YES;
IPHONEOS_DEPLOYMENT_TARGET = 18.1;
MACOSX_DEPLOYMENT_TARGET = 15.1;
MARKETING_VERSION = 1.0;
PRODUCT_BUNDLE_IDENTIFIER = zjh08177.TradingDummyTests;
PRODUCT_NAME = "$(TARGET_NAME)";
SDKROOT = auto;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator macosx xros xrsimulator";
SWIFT_EMIT_LOC_STRINGS = NO;
SWIFT_VERSION = 5.0;
TARGETED_DEVICE_FAMILY = "1,2,7";
TEST_HOST = "$(BUILT_PRODUCTS_DIR)/TradingDummy.app/$(BUNDLE_EXECUTABLE_FOLDER_PATH)/TradingDummy";
XROS_DEPLOYMENT_TARGET = 2.1;
};
name = Debug;
};
2B8FC8792E1077B6000CC9A0 /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
BUNDLE_LOADER = "$(TEST_HOST)";
CODE_SIGN_STYLE = Automatic;
CURRENT_PROJECT_VERSION = 1;
DEVELOPMENT_TEAM = 7T2V4G2PCV;
GENERATE_INFOPLIST_FILE = YES;
IPHONEOS_DEPLOYMENT_TARGET = 18.1;
MACOSX_DEPLOYMENT_TARGET = 15.1;
MARKETING_VERSION = 1.0;
PRODUCT_BUNDLE_IDENTIFIER = zjh08177.TradingDummyTests;
PRODUCT_NAME = "$(TARGET_NAME)";
SDKROOT = auto;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator macosx xros xrsimulator";
SWIFT_EMIT_LOC_STRINGS = NO;
SWIFT_VERSION = 5.0;
TARGETED_DEVICE_FAMILY = "1,2,7";
TEST_HOST = "$(BUILT_PRODUCTS_DIR)/TradingDummy.app/$(BUNDLE_EXECUTABLE_FOLDER_PATH)/TradingDummy";
XROS_DEPLOYMENT_TARGET = 2.1;
};
name = Release;
};
2B8FC87B2E1077B6000CC9A0 /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
CODE_SIGN_STYLE = Automatic;
CURRENT_PROJECT_VERSION = 1;
DEVELOPMENT_TEAM = 7T2V4G2PCV;
GENERATE_INFOPLIST_FILE = YES;
IPHONEOS_DEPLOYMENT_TARGET = 18.1;
MACOSX_DEPLOYMENT_TARGET = 15.1;
MARKETING_VERSION = 1.0;
PRODUCT_BUNDLE_IDENTIFIER = zjh08177.TradingDummyUITests;
PRODUCT_NAME = "$(TARGET_NAME)";
SDKROOT = auto;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator macosx xros xrsimulator";
SWIFT_EMIT_LOC_STRINGS = NO;
SWIFT_VERSION = 5.0;
TARGETED_DEVICE_FAMILY = "1,2,7";
TEST_TARGET_NAME = TradingDummy;
XROS_DEPLOYMENT_TARGET = 2.1;
};
name = Debug;
};
2B8FC87C2E1077B6000CC9A0 /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
CODE_SIGN_STYLE = Automatic;
CURRENT_PROJECT_VERSION = 1;
DEVELOPMENT_TEAM = 7T2V4G2PCV;
GENERATE_INFOPLIST_FILE = YES;
IPHONEOS_DEPLOYMENT_TARGET = 18.1;
MACOSX_DEPLOYMENT_TARGET = 15.1;
MARKETING_VERSION = 1.0;
PRODUCT_BUNDLE_IDENTIFIER = zjh08177.TradingDummyUITests;
PRODUCT_NAME = "$(TARGET_NAME)";
SDKROOT = auto;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator macosx xros xrsimulator";
SWIFT_EMIT_LOC_STRINGS = NO;
SWIFT_VERSION = 5.0;
TARGETED_DEVICE_FAMILY = "1,2,7";
TEST_TARGET_NAME = TradingDummy;
XROS_DEPLOYMENT_TARGET = 2.1;
};
name = Release;
};
/* End XCBuildConfiguration section */
/* Begin XCConfigurationList section */
2B8FC84A2E1077B4000CC9A0 /* Build configuration list for PBXProject "TradingDummy" */ = {
isa = XCConfigurationList;
buildConfigurations = (
2B8FC8722E1077B6000CC9A0 /* Debug */,
2B8FC8732E1077B6000CC9A0 /* Release */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Release;
};
2B8FC8742E1077B6000CC9A0 /* Build configuration list for PBXNativeTarget "TradingDummy" */ = {
isa = XCConfigurationList;
buildConfigurations = (
2B8FC8752E1077B6000CC9A0 /* Debug */,
2B8FC8762E1077B6000CC9A0 /* Release */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Release;
};
2B8FC8772E1077B6000CC9A0 /* Build configuration list for PBXNativeTarget "TradingDummyTests" */ = {
isa = XCConfigurationList;
buildConfigurations = (
2B8FC8782E1077B6000CC9A0 /* Debug */,
2B8FC8792E1077B6000CC9A0 /* Release */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Release;
};
2B8FC87A2E1077B6000CC9A0 /* Build configuration list for PBXNativeTarget "TradingDummyUITests" */ = {
isa = XCConfigurationList;
buildConfigurations = (
2B8FC87B2E1077B6000CC9A0 /* Debug */,
2B8FC87C2E1077B6000CC9A0 /* Release */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Release;
};
/* End XCConfigurationList section */
/* Begin XCRemoteSwiftPackageReference section */
2B8FC87D2E1077CC000CC9A0 /* XCRemoteSwiftPackageReference "ReSwift" */ = {
isa = XCRemoteSwiftPackageReference;
repositoryURL = "https://github.com/ReSwift/ReSwift.git";
requirement = {
kind = upToNextMajorVersion;
minimumVersion = 6.1.1;
};
};
/* End XCRemoteSwiftPackageReference section */
/* Begin XCSwiftPackageProductDependency section */
2B8FC87E2E1077CC000CC9A0 /* ReSwift */ = {
isa = XCSwiftPackageProductDependency;
package = 2B8FC87D2E1077CC000CC9A0 /* XCRemoteSwiftPackageReference "ReSwift" */;
productName = ReSwift;
};
/* End XCSwiftPackageProductDependency section */
};
rootObject = 2B8FC8472E1077B4000CC9A0 /* Project object */;
}


@@ -0,0 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<Workspace
version = "1.0">
<FileRef
location = "self:">
</FileRef>
</Workspace>


@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>SchemeUserState</key>
<dict>
<key>TradingDummy.xcscheme_^#shared#^_</key>
<dict>
<key>orderHint</key>
<integer>0</integer>
</dict>
</dict>
</dict>
</plist>


@@ -0,0 +1,11 @@
{
"colors" : [
{
"idiom" : "universal"
}
],
"info" : {
"author" : "xcode",
"version" : 1
}
}


@@ -0,0 +1,85 @@
{
"images" : [
{
"idiom" : "universal",
"platform" : "ios",
"size" : "1024x1024"
},
{
"appearances" : [
{
"appearance" : "luminosity",
"value" : "dark"
}
],
"idiom" : "universal",
"platform" : "ios",
"size" : "1024x1024"
},
{
"appearances" : [
{
"appearance" : "luminosity",
"value" : "tinted"
}
],
"idiom" : "universal",
"platform" : "ios",
"size" : "1024x1024"
},
{
"idiom" : "mac",
"scale" : "1x",
"size" : "16x16"
},
{
"idiom" : "mac",
"scale" : "2x",
"size" : "16x16"
},
{
"idiom" : "mac",
"scale" : "1x",
"size" : "32x32"
},
{
"idiom" : "mac",
"scale" : "2x",
"size" : "32x32"
},
{
"idiom" : "mac",
"scale" : "1x",
"size" : "128x128"
},
{
"idiom" : "mac",
"scale" : "2x",
"size" : "128x128"
},
{
"idiom" : "mac",
"scale" : "1x",
"size" : "256x256"
},
{
"idiom" : "mac",
"scale" : "2x",
"size" : "256x256"
},
{
"idiom" : "mac",
"scale" : "1x",
"size" : "512x512"
},
{
"idiom" : "mac",
"scale" : "2x",
"size" : "512x512"
}
],
"info" : {
"author" : "xcode",
"version" : 1
}
}


@@ -0,0 +1,6 @@
{
"info" : {
"author" : "xcode",
"version" : 1
}
}


@@ -0,0 +1,33 @@
import Foundation
enum AppConfig {
// API Configuration
static let apiBaseURL: String = {
// Check for environment variable first (useful for CI/CD)
if let envURL = ProcessInfo.processInfo.environment["TRADINGAGENTS_API_URL"] {
return envURL
}
// Default URLs for different environments
#if DEBUG
#if targetEnvironment(simulator)
// For iOS Simulator
return "http://localhost:8000"
#else
// For real device - UPDATE THIS WITH YOUR MAC'S IP
return "http://192.168.4.223:8000"
#endif
#else
// For production, update this to your deployed server URL
return "https://api.tradingagents.com"
#endif
}()
// Network Configuration
static let requestTimeout: TimeInterval = 600.0
static let maxRetries = 3
// UI Configuration
static let defaultTicker = "AAPL"
static let animationDuration = 0.3
}


@ -0,0 +1,30 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>NSLocalNetworkUsageDescription</key>
<string>This app needs to connect to your local trading analysis server to provide market insights and trading analysis.</string>
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<false/>
<key>NSExceptionDomains</key>
<dict>
<key>localhost</key>
<dict>
<key>NSExceptionAllowsInsecureHTTPLoads</key>
<true/>
<key>NSExceptionMinimumTLSVersion</key>
<string>TLSv1.0</string>
</dict>
<key>192.168.4.223</key>
<dict>
<key>NSExceptionAllowsInsecureHTTPLoads</key>
<true/>
<key>NSExceptionMinimumTLSVersion</key>
<string>TLSv1.0</string>
</dict>
</dict>
</dict>
</dict>
</plist>


@ -0,0 +1,35 @@
import Foundation
// MARK: - Request Model
struct AnalysisRequest: Codable {
let ticker: String
}
// MARK: - Response Model
struct AnalysisResponse: Codable {
let ticker: String
let analysisDate: String
let marketReport: String?
let sentimentReport: String?
let newsReport: String?
let fundamentalsReport: String?
let investmentPlan: String?
let traderInvestmentPlan: String?
let finalTradeDecision: String?
let processedSignal: String?
let error: String?
enum CodingKeys: String, CodingKey {
case ticker
case analysisDate = "analysis_date"
case marketReport = "market_report"
case sentimentReport = "sentiment_report"
case newsReport = "news_report"
case fundamentalsReport = "fundamentals_report"
case investmentPlan = "investment_plan"
case traderInvestmentPlan = "trader_investment_plan"
case finalTradeDecision = "final_trade_decision"
case processedSignal = "processed_signal"
case error
}
}


@ -0,0 +1,6 @@
{
"info" : {
"author" : "xcode",
"version" : 1
}
}


@ -0,0 +1,357 @@
import Foundation
import Combine
import os.log
// MARK: - SSE Event Models
public struct SSEEvent: Codable {
let type: String
let message: String?
let agent: String?
let section: String?
let content: String?
let status: String?
}
// MARK: - Progress Models
public struct AnalysisProgress {
public let currentAgent: String
public let message: String
public let reports: [String: String]
public let isComplete: Bool
public let error: String?
public init(currentAgent: String, message: String, reports: [String: String], isComplete: Bool, error: String?) {
self.currentAgent = currentAgent
self.message = message
self.reports = reports
self.isComplete = isComplete
self.error = error
}
}
// MARK: - TradingAgentsService
public class TradingAgentsService: ObservableObject {
internal let logger = Logger(subsystem: "com.tradingagents.app", category: "TradingAgentsService")
// Reuse the shared AppConfig value so the endpoint logic lives in one place
private let baseURL = AppConfig.apiBaseURL
private var eventSource: URLSessionDataTask?
private var streamingDelegate: SSEStreamDelegate?
private let session: URLSession
@Published var isAnalyzing = false
@Published var progress = AnalysisProgress(
currentAgent: "",
message: "",
reports: [:],
isComplete: false,
error: nil
)
public init() {
self.session = URLSession.shared
logger.info("🚀 TradingAgentsService initialized with baseURL: \(self.baseURL)")
}
public func streamAnalysis(for ticker: String) -> AnyPublisher<AnalysisProgress, Never> {
let subject = PassthroughSubject<AnalysisProgress, Never>()
logger.info("📡 Starting stream analysis for ticker: \(ticker)")
let urlString = "\(baseURL)/analyze/stream?ticker=\(ticker)"
logger.info("🌐 Request URL: \(urlString)")
guard let url = URL(string: urlString) else {
logger.error("❌ Invalid URL: \(urlString)")
subject.send(AnalysisProgress(
currentAgent: "",
message: "",
reports: [:],
isComplete: true,
error: "Invalid URL: \(urlString)"
))
return subject.eraseToAnyPublisher()
}
var request = URLRequest(url: url)
request.setValue("text/event-stream", forHTTPHeaderField: "Accept")
request.setValue("no-cache", forHTTPHeaderField: "Cache-Control")
request.setValue("keep-alive", forHTTPHeaderField: "Connection")
request.timeoutInterval = 600.0 // 10 minutes
logger.info("📋 Request headers: \(request.allHTTPHeaderFields ?? [:])")
// Use direct streaming approach
self.streamWithCustomSession(request: request, subject: subject)
return subject.eraseToAnyPublisher()
}
private func streamWithCustomSession(request: URLRequest, subject: PassthroughSubject<AnalysisProgress, Never>) {
logger.info("🔄 Starting SSE streaming with delegate")
// Create delegate for real-time streaming
let delegate = SSEStreamDelegate(subject: subject, service: self)
// Create session with delegate for streaming
let config = URLSessionConfiguration.default
config.timeoutIntervalForRequest = 600.0
config.timeoutIntervalForResource = 600.0
let delegateSession = URLSession(configuration: config, delegate: delegate, delegateQueue: nil)
let task = delegateSession.dataTask(with: request)
delegate.task = task
// Store delegate and task
self.streamingDelegate = delegate
self.eventSource = task
task.resume()
logger.info("🚀 SSE Stream task started with delegate")
}
internal func formatAgentName(_ agent: String) -> String {
switch agent.lowercased() {
case "market": return "Market Analyst"
case "social": return "Social Media Analyst"
case "news": return "News Analyst"
case "fundamentals": return "Fundamentals Analyst"
case "bull_researcher": return "Bull Researcher"
case "bear_researcher": return "Bear Researcher"
case "trader": return "Trading Team"
default: return agent.capitalized
}
}
internal func formatSectionName(_ section: String) -> String {
switch section {
case "market_report": return "Market Analysis"
case "sentiment_report": return "Sentiment Analysis"
case "news_report": return "News Analysis"
case "fundamentals_report": return "Fundamentals Analysis"
case "investment_plan": return "Investment Plan"
case "trader_investment_plan": return "Trading Plan"
case "final_trade_decision": return "Final Decision"
default: return section.replacingOccurrences(of: "_", with: " ").capitalized
}
}
public func stopAnalysis() {
eventSource?.cancel()
eventSource = nil
streamingDelegate = nil
isAnalyzing = false
}
}
// MARK: - SSE Stream Delegate for Real-time Streaming
private class SSEStreamDelegate: NSObject, URLSessionDataDelegate {
private let subject: PassthroughSubject<AnalysisProgress, Never>
private weak var service: TradingAgentsService?
private var buffer = ""
private var currentReports: [String: String] = [:]
var task: URLSessionDataTask?
init(subject: PassthroughSubject<AnalysisProgress, Never>, service: TradingAgentsService) {
self.subject = subject
self.service = service
super.init()
}
func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive response: URLResponse, completionHandler: @escaping (URLSession.ResponseDisposition) -> Void) {
guard let httpResponse = response as? HTTPURLResponse else {
completionHandler(.cancel)
return
}
service?.logger.info("📶 SSE HTTP Response Status: \(httpResponse.statusCode)")
if httpResponse.statusCode == 200 {
completionHandler(.allow)
} else {
service?.logger.error("❌ SSE HTTP Error: \(httpResponse.statusCode)")
DispatchQueue.main.async {
self.subject.send(AnalysisProgress(
currentAgent: "",
message: "",
reports: self.currentReports,
isComplete: true,
error: "HTTP \(httpResponse.statusCode)"
))
}
completionHandler(.cancel)
}
}
func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive data: Data) {
guard let service = service else { return }
let newData = String(data: data, encoding: .utf8) ?? ""
service.logger.info("📦 Received \(data.count) bytes: \(String(newData.prefix(100)))...")
// Add new data to buffer
buffer += newData
// Process complete lines
let lines = buffer.components(separatedBy: .newlines)
buffer = lines.last ?? "" // Keep incomplete line in buffer
// Process all complete lines except the last (incomplete) one
for line in lines.dropLast() {
if line.hasPrefix("data: ") {
let jsonString = String(line.dropFirst(6))
service.logger.info("🔍 Processing JSON: \(String(jsonString.prefix(50)))...")
if let jsonData = jsonString.data(using: .utf8),
let event = try? JSONDecoder().decode(SSEEvent.self, from: jsonData) {
service.logger.info("✅ Decoded event - Type: \(event.type)")
DispatchQueue.main.async {
self.processSSEEvent(event, service: service)
}
} else {
service.logger.warning("⚠️ Failed to decode JSON: \(jsonString)")
}
}
}
}
func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
if let error = error {
service?.logger.error("❌ SSE Connection error: \(error.localizedDescription)")
DispatchQueue.main.async {
self.subject.send(AnalysisProgress(
currentAgent: "",
message: "",
reports: self.currentReports,
isComplete: true,
error: "Connection error: \(error.localizedDescription)"
))
}
} else {
service?.logger.info("✅ SSE Connection completed successfully")
}
}
private func processSSEEvent(_ event: SSEEvent, service: TradingAgentsService) {
switch event.type {
case "status":
service.logger.info("📢 Status: \(event.message ?? "")")
subject.send(AnalysisProgress(
currentAgent: "Starting",
message: event.message ?? "Starting analysis...",
reports: currentReports,
isComplete: false,
error: nil
))
case "agent_status":
let agentName = service.formatAgentName(event.agent ?? "")
let statusMessage = event.status == "completed" ?
"\(agentName) completed" :
"🔄 Analyzing with \(agentName)..."
service.logger.info("👤 Agent: \(agentName), Status: \(event.status ?? "")")
subject.send(AnalysisProgress(
currentAgent: agentName,
message: statusMessage,
reports: currentReports,
isComplete: false,
error: nil
))
case "progress":
if let content = event.content, let percentage = Int(content) {
service.logger.info("📊 Progress: \(percentage)%")
subject.send(AnalysisProgress(
currentAgent: service.progress.currentAgent,
message: "Progress: \(percentage)%",
reports: currentReports,
isComplete: false,
error: nil
))
}
case "report":
if let section = event.section, let content = event.content {
service.logger.info("📊 Report: \(section)")
currentReports[section] = content
subject.send(AnalysisProgress(
currentAgent: service.progress.currentAgent,
message: "📊 Updated \(service.formatSectionName(section))",
reports: currentReports,
isComplete: false,
error: nil
))
}
case "complete":
service.logger.info("✅ Analysis completed")
subject.send(AnalysisProgress(
currentAgent: "Complete",
message: "✅ Analysis completed successfully",
reports: currentReports,
isComplete: true,
error: nil
))
case "error":
service.logger.error("❌ Error: \(event.message ?? "")")
subject.send(AnalysisProgress(
currentAgent: "",
message: "",
reports: currentReports,
isComplete: true,
error: event.message ?? "Unknown error"
))
default:
service.logger.info("Unknown event type: \(event.type)")
}
}
}
// MARK: - API Errors
enum APIError: LocalizedError {
case invalidURL
case invalidResponse
case httpError(statusCode: Int)
case serverError(message: String)
var errorDescription: String? {
switch self {
case .invalidURL:
return "Invalid API URL"
case .invalidResponse:
return "Invalid server response"
case .httpError(let statusCode):
return "HTTP error: \(statusCode)"
case .serverError(let message):
return "Server error: \(message)"
}
}
}


@ -0,0 +1,10 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.files.user-selected.read-only</key>
<true/>
</dict>
</plist>


@ -0,0 +1,17 @@
//
// TradingDummyApp.swift
// TradingDummy
//
// Created by ByteDance on 6/28/25.
//
import SwiftUI
@main
struct TradingDummyApp: App {
var body: some Scene {
WindowGroup {
TradingAnalysisView()
}
}
}


@ -0,0 +1,141 @@
import Foundation
import Combine
// MARK: - View Model
@MainActor
class TradingAnalysisViewModel: ObservableObject {
// MARK: - Published Properties
@Published var ticker: String = ""
@Published var isAnalyzing: Bool = false
@Published var showingResults: Bool = false
@Published var errorMessage: String?
// MARK: - Streaming Properties
@Published var currentAgent: String = ""
@Published var statusMessage: String = ""
@Published var analysisProgress: Double = 0.0
@Published var reports: [String: String] = [:]
@Published var finalDecision: String = ""
// MARK: - Services
private let tradingService = TradingAgentsService()
private var cancellables = Set<AnyCancellable>()
// MARK: - Constants
private let agentSteps = [
"Starting", "Market Analyst", "Social Media Analyst",
"News Analyst", "Fundamentals Analyst", "Bull Researcher",
"Bear Researcher", "Trading Team", "Complete"
]
init() {
setupSubscriptions()
}
private func setupSubscriptions() {
// Subscribe to service progress updates
tradingService.$progress
.receive(on: DispatchQueue.main)
.sink { [weak self] progress in
self?.updateProgress(progress)
}
.store(in: &cancellables)
}
private func updateProgress(_ progress: AnalysisProgress) {
currentAgent = progress.currentAgent
statusMessage = progress.message
reports = progress.reports
// Update progress percentage based on current agent
if let stepIndex = agentSteps.firstIndex(of: progress.currentAgent) {
analysisProgress = Double(stepIndex) / Double(agentSteps.count - 1)
}
// Handle completion
if progress.isComplete {
isAnalyzing = false
if progress.error == nil {
showingResults = true
finalDecision = reports["final_trade_decision"] ?? ""
} else {
errorMessage = progress.error
}
}
// Handle errors
if let error = progress.error {
errorMessage = error
isAnalyzing = false
}
}
func startAnalysis() {
guard !ticker.isEmpty else {
errorMessage = "Please enter a ticker symbol"
return
}
// Reset state
isAnalyzing = true
showingResults = false
errorMessage = nil
currentAgent = ""
statusMessage = ""
analysisProgress = 0.0
reports = [:]
finalDecision = ""
// Start streaming analysis
tradingService.streamAnalysis(for: ticker.uppercased())
.receive(on: DispatchQueue.main)
.sink { [weak self] progress in
self?.updateProgress(progress)
}
.store(in: &cancellables)
}
func stopAnalysis() {
tradingService.stopAnalysis()
isAnalyzing = false
currentAgent = ""
statusMessage = "Analysis stopped"
}
func resetAnalysis() {
stopAnalysis()
showingResults = false
errorMessage = nil
ticker = ""
currentAgent = ""
statusMessage = ""
analysisProgress = 0.0
reports = [:]
finalDecision = ""
}
// MARK: - Computed Properties
var formattedReports: [(title: String, content: String)] {
let reportOrder = [
("market_report", "Market Analysis"),
("sentiment_report", "Sentiment Analysis"),
("news_report", "News Analysis"),
("fundamentals_report", "Fundamentals Analysis"),
("investment_plan", "Investment Plan"),
("trader_investment_plan", "Trading Plan")
]
return reportOrder.compactMap { key, title in
guard let content = reports[key], !content.isEmpty else { return nil }
return (title: title, content: content)
}
}
var hasReports: Bool {
!reports.isEmpty
}
var progressPercentage: Int {
Int(analysisProgress * 100)
}
}


@ -0,0 +1,139 @@
import SwiftUI
// MARK: - Analysis Result View
struct AnalysisResultView: View {
let ticker: String
let reports: [(title: String, content: String)]
let finalDecision: String
let onDismiss: () -> Void
var body: some View {
NavigationView {
ScrollView {
VStack(alignment: .leading, spacing: 20) {
// Header
HeaderView(ticker: ticker)
.padding(.horizontal)
// Reports
VStack(spacing: 16) {
ForEach(reports, id: \.title) { report in
ReportCard(
title: report.title,
icon: iconForReport(report.title),
content: report.content
)
}
// Final Decision
if !finalDecision.isEmpty {
ReportCard(
title: "Final Decision",
icon: "checkmark.seal.fill",
content: finalDecision,
isHighlighted: true
)
}
}
.padding(.horizontal)
}
.padding(.vertical)
}
.navigationTitle("Analysis Results")
.toolbar {
ToolbarItem(placement: .primaryAction) {
Button("Done") {
onDismiss()
}
}
}
}
}
private func iconForReport(_ title: String) -> String {
switch title {
case "Market Analysis": return "chart.line.uptrend.xyaxis"
case "Sentiment Analysis": return "bubble.left.and.bubble.right"
case "News Analysis": return "newspaper"
case "Fundamentals Analysis": return "doc.text"
case "Investment Plan": return "lightbulb"
case "Trading Plan": return "chart.bar.fill"
default: return "doc.text"
}
}
}
// MARK: - Header View
struct HeaderView: View {
let ticker: String
var body: some View {
VStack(alignment: .leading, spacing: 8) {
HStack(alignment: .top) {
VStack(alignment: .leading, spacing: 4) {
Text(ticker)
.font(.largeTitle)
.fontWeight(.bold)
Text("Analysis Date: \(DateFormatter.shortDate.string(from: Date()))")
.font(.caption)
.foregroundStyle(.secondary)
}
Spacer()
}
}
}
}
// MARK: - Report Card
struct ReportCard: View {
let title: String
let icon: String
let content: String
var isHighlighted: Bool = false
@State private var isExpanded: Bool = false
var body: some View {
VStack(alignment: .leading, spacing: 12) {
Button(action: { withAnimation { isExpanded.toggle() } }) {
HStack {
Image(systemName: icon)
.font(.title3)
.foregroundStyle(isHighlighted ? .white : .blue)
Text(title)
.font(.headline)
.foregroundStyle(isHighlighted ? .white : .primary)
Spacer()
Image(systemName: "chevron.right")
.rotationEffect(.degrees(isExpanded ? 90 : 0))
.foregroundStyle(isHighlighted ? .white : .secondary)
}
}
.buttonStyle(.plain)
if isExpanded {
Text(content)
.font(.body)
.foregroundStyle(isHighlighted ? .white : .primary)
.fixedSize(horizontal: false, vertical: true)
}
}
.padding()
.background(isHighlighted ? Color.blue : Color.gray.opacity(0.1))
.cornerRadius(12)
}
}
// MARK: - Extensions
extension DateFormatter {
static let shortDate: DateFormatter = {
let formatter = DateFormatter()
formatter.dateStyle = .short
return formatter
}()
}


@ -0,0 +1,64 @@
import SwiftUI
// MARK: - Loading View
struct LoadingView: View {
let ticker: String
var body: some View {
VStack(spacing: 20) {
ProgressView()
.scaleEffect(1.5)
Text("Analyzing \(ticker)...")
.font(.headline)
.foregroundStyle(.secondary)
}
.padding()
}
}
// MARK: - Welcome View
struct WelcomeView: View {
var body: some View {
VStack(spacing: 20) {
Image(systemName: "chart.line.uptrend.xyaxis")
.font(.system(size: 60))
.foregroundStyle(.blue)
Text("Trading Agents Analysis")
.font(.title)
.fontWeight(.bold)
Text("Enter a stock ticker to get AI-powered trading analysis")
.font(.subheadline)
.foregroundStyle(.secondary)
.multilineTextAlignment(.center)
.padding(.horizontal)
}
.padding()
}
}
// MARK: - Error View
struct ErrorView: View {
let error: String
var body: some View {
VStack(spacing: 15) {
Image(systemName: "exclamationmark.triangle.fill")
.font(.system(size: 50))
.foregroundStyle(.red)
Text("Error")
.font(.title2)
.fontWeight(.bold)
Text(error)
.font(.body)
.foregroundStyle(.secondary)
.multilineTextAlignment(.center)
.padding(.horizontal)
}
.padding()
}
}


@ -0,0 +1,203 @@
import SwiftUI
struct TradingAnalysisView: View {
@StateObject private var viewModel = TradingAnalysisViewModel()
var body: some View {
NavigationView {
VStack(spacing: 20) {
// Header
headerSection
// Input Section
inputSection
// Progress Section (shown during analysis)
if viewModel.isAnalyzing {
progressSection
}
// Reports Section (shown as reports come in)
if viewModel.hasReports && !viewModel.showingResults {
reportsSection
}
Spacer()
}
.padding()
.navigationTitle("Trading Analysis")
.alert("Error", isPresented: .constant(viewModel.errorMessage != nil)) {
Button("OK") {
viewModel.errorMessage = nil
}
} message: {
Text(viewModel.errorMessage ?? "")
}
.sheet(isPresented: $viewModel.showingResults) {
AnalysisResultView(
ticker: viewModel.ticker,
reports: viewModel.formattedReports,
finalDecision: viewModel.finalDecision,
onDismiss: {
viewModel.resetAnalysis()
}
)
}
}
}
// MARK: - View Components
private var headerSection: some View {
VStack(spacing: 8) {
Text("TradingAgents")
.font(.largeTitle)
.fontWeight(.bold)
.foregroundColor(.primary)
Text("AI-Powered Stock Analysis")
.font(.subheadline)
.foregroundColor(.secondary)
}
}
private var inputSection: some View {
VStack(spacing: 16) {
HStack {
TextField("Enter ticker symbol (e.g., AAPL)", text: $viewModel.ticker)
.textFieldStyle(RoundedBorderTextFieldStyle())
.textInputAutocapitalization(.characters)
.autocorrectionDisabled(true)
.disabled(viewModel.isAnalyzing)
if viewModel.isAnalyzing {
Button("Stop") {
viewModel.stopAnalysis()
}
.foregroundColor(.red)
} else {
Button("Analyze") {
viewModel.startAnalysis()
}
.disabled(viewModel.ticker.isEmpty)
}
}
if !viewModel.ticker.isEmpty && !viewModel.isAnalyzing {
Text("Tap 'Analyze' to start real-time analysis")
.font(.footnote)
.foregroundColor(.secondary)
}
}
}
private var progressSection: some View {
VStack(spacing: 16) {
// Progress Bar
VStack(spacing: 8) {
HStack {
Text("Analysis Progress")
.font(.headline)
Spacer()
Text("\(viewModel.progressPercentage)%")
.font(.caption)
.foregroundColor(.secondary)
}
ProgressView(value: viewModel.analysisProgress)
.progressViewStyle(LinearProgressViewStyle(tint: .blue))
}
// Current Agent Status
if !viewModel.currentAgent.isEmpty {
HStack {
Image(systemName: "brain.head.profile")
.foregroundColor(.blue)
VStack(alignment: .leading) {
Text(viewModel.currentAgent)
.font(.subheadline)
.fontWeight(.medium)
if !viewModel.statusMessage.isEmpty {
Text(viewModel.statusMessage)
.font(.caption)
.foregroundColor(.secondary)
}
}
Spacer()
}
.padding()
.background(Color.gray.opacity(0.1))
.cornerRadius(8)
}
}
}
private var reportsSection: some View {
VStack(spacing: 12) {
HStack {
Text("Live Reports")
.font(.headline)
Spacer()
Text("\(viewModel.formattedReports.count) sections")
.font(.caption)
.foregroundColor(.secondary)
}
ScrollView {
LazyVStack(spacing: 8) {
ForEach(viewModel.formattedReports, id: \.title) { report in
ReportCardView(title: report.title, content: report.content)
}
}
}
.frame(maxHeight: 300)
}
}
}
// MARK: - Supporting Views
struct ReportCardView: View {
let title: String
let content: String
@State private var isExpanded = false
var body: some View {
VStack(alignment: .leading, spacing: 8) {
HStack {
Text(title)
.font(.subheadline)
.fontWeight(.medium)
Spacer()
Image(systemName: isExpanded ? "chevron.up" : "chevron.down")
.font(.caption)
.foregroundColor(.secondary)
}
.onTapGesture {
withAnimation(.easeInOut(duration: 0.2)) {
isExpanded.toggle()
}
}
if isExpanded {
Text(content)
.font(.caption)
.foregroundColor(.secondary)
.lineLimit(nil)
.transition(.opacity.combined(with: .slide))
} else {
Text(content)
.font(.caption)
.foregroundColor(.secondary)
.lineLimit(2)
}
}
.padding()
.background(Color(.systemGray6))
.cornerRadius(8)
}
}
#Preview {
TradingAnalysisView()
}


@ -0,0 +1,16 @@
//
// TradingDummyTests.swift
// TradingDummyTests
//
// Created by ByteDance on 6/28/25.
//
import Testing
struct TradingDummyTests {
@Test func example() async throws {
// Write your test here and use APIs like `#expect(...)` to check expected conditions.
}
}


@ -0,0 +1,43 @@
//
// TradingDummyUITests.swift
// TradingDummyUITests
//
// Created by ByteDance on 6/28/25.
//
import XCTest
final class TradingDummyUITests: XCTestCase {
override func setUpWithError() throws {
// Put setup code here. This method is called before the invocation of each test method in the class.
// In UI tests it is usually best to stop immediately when a failure occurs.
continueAfterFailure = false
// In UI tests it's important to set the initial state - such as interface orientation - required for your tests before they run. The setUp method is a good place to do this.
}
override func tearDownWithError() throws {
// Put teardown code here. This method is called after the invocation of each test method in the class.
}
@MainActor
func testExample() throws {
// UI tests must launch the application that they test.
let app = XCUIApplication()
app.launch()
// Use XCTAssert and related functions to verify your tests produce the correct results.
}
@MainActor
func testLaunchPerformance() throws {
if #available(macOS 10.15, iOS 13.0, tvOS 13.0, watchOS 7.0, *) {
// This measures how long it takes to launch your application.
measure(metrics: [XCTApplicationLaunchMetric()]) {
XCUIApplication().launch()
}
}
}
}


@ -0,0 +1,33 @@
//
// TradingDummyUITestsLaunchTests.swift
// TradingDummyUITests
//
// Created by ByteDance on 6/28/25.
//
import XCTest
final class TradingDummyUITestsLaunchTests: XCTestCase {
override class var runsForEachTargetApplicationUIConfiguration: Bool {
true
}
override func setUpWithError() throws {
continueAfterFailure = false
}
@MainActor
func testLaunch() throws {
let app = XCUIApplication()
app.launch()
// Insert steps here to perform after app launch but before taking a screenshot,
// such as logging into a test account or navigating somewhere in the app
let attachment = XCTAttachment(screenshot: app.screenshot())
attachment.name = "Launch Screen"
attachment.lifetime = .keepAlways
add(attachment)
}
}

backend/README.md (Normal file, 143 lines)

@ -0,0 +1,143 @@
# Trading Agents Backend
This directory contains the Python backend for the Trading Agents system.
## Structure
- **`api.py`** - FastAPI server that exposes trading analysis endpoints
- **`run_api.py`** - Script to run the FastAPI server
- **`main.py`** - Main entry point for the trading system
- **`tradingagents/`** - Core trading logic and agents
- `agents/` - Various AI agents (analysts, researchers, traders, risk managers)
- `dataflows/` - Data fetching and processing utilities
- `graph/` - Trading decision graph and workflow
- **`cli/`** - Command-line interface for the trading system
- **`results/`** - Output directory for analysis results
- **`requirements.txt`** - Python dependencies
- **`setup.py`** - Package setup configuration
- **`pyproject.toml`** - Modern Python project configuration
## Quick Start
1. Install dependencies:
```bash
pip install -r requirements.txt
```
2. Run the FastAPI server:
```bash
python run_api.py
```
3. Or use the CLI:
```bash
python -m cli.main
```
## API Endpoints
- `GET /` - Root endpoint (API status)
- `GET /health` - Health check
- `POST /analyze` - Analyze a stock ticker
- Request: `{"ticker": "AAPL"}`
- Response: Comprehensive analysis including market, sentiment, news, and trading decisions
- `GET /docs` - Interactive API documentation (Swagger UI)
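As a quick sanity check, `POST /analyze` can be exercised from Python's standard library. The sketch below is illustrative (the `build_payload` helper and `API_BASE` constant are assumptions, not part of the backend):

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"  # matches the default host/port above

def build_payload(ticker: str) -> bytes:
    """Encode the request body the /analyze endpoint expects."""
    return json.dumps({"ticker": ticker.upper()}).encode("utf-8")

def analyze(ticker: str) -> dict:
    """POST {"ticker": ...} to /analyze and decode the JSON response."""
    req = urllib.request.Request(
        f"{API_BASE}/analyze",
        data=build_payload(ticker),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=600) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The returned JSON uses snake_case keys (`analysis_date`, `market_report`, ...), which the iOS client maps to camelCase via `CodingKeys`.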
## Testing the API
### Quick Test
```bash
# Run the test script
python test_api.py
```
### Manual Testing
1. **Using Browser**: Navigate to http://localhost:8000/docs for interactive testing
2. **Using curl**: See `test_api.md` for detailed curl commands
3. **Using Python**: Use the provided `test_api.py` script
For detailed testing instructions, see [test_api.md](test_api.md).
## Environment Variables
Create a `.env` file in the project root with:
### Required
- `OPENAI_API_KEY` - Your OpenAI API key for AI agents
- `FINNHUB_API_KEY` - For market data (free tier available)
### Optional
- `REDDIT_CLIENT_ID` - For Reddit sentiment analysis
- `REDDIT_CLIENT_SECRET` - For Reddit sentiment analysis
- `TRADINGAGENTS_RESULTS_DIR` - Custom results directory (default: `backend/results`)
- `TRADINGAGENTS_DATA_DIR` - Custom data directory (default: `backend/data`)
- `TRADINGAGENTS_API_HOST` - API host (default: `localhost`)
- `TRADINGAGENTS_API_PORT` - API port (default: `8000`)
### Example `.env` file:
```bash
# API Keys
OPENAI_API_KEY=sk-your-key-here
FINNHUB_API_KEY=your-finnhub-key
# Optional Reddit API
REDDIT_CLIENT_ID=your-client-id
REDDIT_CLIENT_SECRET=your-client-secret
# Optional custom directories
# TRADINGAGENTS_RESULTS_DIR=/custom/path/to/results
# TRADINGAGENTS_API_PORT=3000
```
## Features
### Streaming Analysis (NEW!)
The TradingAgents CLI now supports real-time streaming of analysis reports as they're being generated by the AI agents. Instead of waiting for complete sections to be delivered, you can see the analysis content flowing in real-time.
#### Usage Options
1. **Dedicated Streaming Command:**
```bash
python -m cli.main stream
python -m cli.main stream --advanced # With advanced configuration
```
2. **Streaming Flag with Analyze Command:**
```bash
python -m cli.main analyze --streaming
python -m cli.main analyze --streaming --advanced
```
3. **Default Command with Streaming:**
```bash
python -m cli.main --streaming
python -m cli.main --streaming --advanced
```
#### Streaming vs. Regular Analysis
- **Regular Analysis**: Shows progress updates and agent statuses, but reports are delivered only when complete sections are finished
- **Streaming Analysis**: Shows the same progress updates PLUS real-time streaming of report content as agents generate it
#### What You'll See in Streaming Mode
- 🔄 **Agent Progress**: Same as regular mode - shows which agents are working
- 📡 **Live Streaming Panel**: Real-time content from the currently active agent
- 🔴 **Live Indicator**: Shows which agent is currently generating content
- ⚡ **Higher Refresh Rate**: Updates 8 times per second for smooth streaming
#### Benefits
- **Real-time Insights**: See analysis developing as it happens
- **Better Understanding**: Watch the thought process of each agent
- **Immediate Feedback**: Know immediately when agents start working on different aspects
- **Enhanced Experience**: More engaging and informative than batch delivery
#### Technical Details
The streaming implementation:
- Uses a `StreamingMessageBuffer` that extends the regular `MessageBuffer`
- Detects agent transitions and content generation in real-time
- Maintains the same final report quality while providing a streaming experience
- Automatically saves all content to report files as in regular mode
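In outline, the buffer extension might look like the following. This is a hypothetical sketch: the method names and the internals of `MessageBuffer` are assumptions, not the project's actual implementation.

```python
class MessageBuffer:
    """Minimal stand-in for the regular buffer: stores completed sections."""

    def __init__(self):
        self.sections = {}

    def set_section(self, name, content):
        self.sections[name] = content


class StreamingMessageBuffer(MessageBuffer):
    """Extends MessageBuffer with incremental, token-level updates."""

    def __init__(self):
        super().__init__()
        self.live_agent = None
        self.live_content = ""

    def stream_token(self, agent, token):
        # An agent change marks a transition; reset the live panel.
        if agent != self.live_agent:
            self.live_agent = agent
            self.live_content = ""
        self.live_content += token

    def finalize(self, section):
        # Same final report quality: the completed content is stored
        # exactly as in regular mode, then the live panel is cleared.
        self.set_section(section, self.live_content)
        self.live_content = ""
```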


@ -0,0 +1,188 @@
# TradingAgents Streaming Analysis - Usage Examples
## Quick Start
### 1. Basic Streaming Analysis
```bash
# Run streaming analysis with default settings
python -m cli.main stream
# Or use the streaming flag with analyze command
python -m cli.main analyze --streaming
```
### 2. Advanced Streaming Analysis
```bash
# Run streaming analysis with advanced configuration
python -m cli.main stream --advanced
# Or combine flags
python -m cli.main analyze --streaming --advanced
```
### 3. Default Command with Streaming
```bash
# Use streaming as default behavior
python -m cli.main --streaming
```
## What to Expect
### Regular Analysis Output
```
data: {"type": "status", "message": "Starting analysis for TEM..."}
data: {"type": "agent_status", "agent": "Market Analyst"}
data: {"type": "progress", "percentage": 16}
...
[Wait for completion, then get all reports at once]
```
### Streaming Analysis Output
```
data: {"type": "status", "message": "Starting analysis for TEM..."}
data: {"type": "agent_status", "agent": "Market Analyst"}
data: {"type": "progress", "percentage": 16}
🔴 Live: Market Analyst
Technical Analysis for TEM:
Based on the recent price action and volume patterns...
[Content streams in real-time as the agent thinks and writes]
Moving to fundamental analysis...
The company's recent earnings report shows...
[More content streaming live]
🔴 Live: Social Media Analyst
Social sentiment analysis reveals...
[Content continues streaming from next agent]
```
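A non-Swift client can consume the same `data:`-prefixed stream. The function below is a minimal sketch of the line-buffering approach the iOS client uses: only complete lines are parsed, and a trailing partial line is carried forward into the next chunk (the function name is illustrative):

```python
import json

def parse_sse_chunk(buffer: str, chunk: str):
    """Return (events, remaining_buffer) for one chunk of an SSE stream."""
    buffer += chunk
    lines = buffer.split("\n")
    buffer = lines.pop()  # keep the incomplete last line for the next chunk
    events = []
    for line in lines:
        if line.startswith("data: "):
            try:
                events.append(json.loads(line[len("data: "):]))
            except json.JSONDecodeError:
                pass  # skip malformed payloads rather than aborting the stream
    return events, buffer
```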
## UI Layout in Streaming Mode
```
┌─ Welcome to TradingAgents ─────────────────┐
│ Welcome to TradingAgents CLI │
│ © Tauric Research │
└────────────────────────────────────────────┘
┌─ Agent Progress ──┬─ Recent Messages ──────┐
│ Market Analyst ✅ │ 14:30:22 [System] TEM │
│ Social Analyst 🔄 │ 14:30:25 [Reasoning] │
│ News Analyst ⏳ │ 14:30:28 [Tool Call] │
└───────────────────┴────────────────────────┘
┌─ 🔴 Live: Social Media Analyst ────────────┐
│ Analyzing social media sentiment for TEM: │
│ │
│ Recent Twitter mentions show mixed │
│ sentiment with 60% positive mentions... │
│ [Content streaming in real-time] │
└────────────────────────────────────────────┘
┌─ Latest Report Section ────────────────────┐
│ ### Market Analysis │
│ Technical indicators suggest... │
│ [Shows most recently completed section] │
└────────────────────────────────────────────┘
┌─ TradingAgents Streaming Analysis ─────────┐
│ Press Ctrl+C to stop │
└────────────────────────────────────────────┘
```
## Advanced Usage Scenarios
### 1. Development and Debugging
When developing or debugging agent behaviors, streaming allows you to:
- See exactly where agents get stuck
- Monitor real-time thought processes
- Identify performance bottlenecks
- Watch agent decision-making flow
### 2. Educational Use
For learning how the system works:
- Observe AI reasoning in real-time
- Understand multi-agent collaboration
- See how different analysis types build on each other
- Learn from the agents' analytical approaches
### 3. Production Monitoring
In production environments:
- Monitor system health in real-time
- Detect anomalies early
- Provide better user experience with live updates
- Enable real-time decision making
## Comparison: Regular vs Streaming
| Aspect | Regular Analysis | Streaming Analysis |
|--------|------------------|-------------------|
| **Content Delivery** | Batch (sections at once) | Real-time (as generated) |
| **User Experience** | Wait then receive | Continuous feedback |
| **Debugging** | Post-mortem only | Live debugging |
| **Monitoring** | End result only | Process visibility |
| **Engagement** | Passive waiting | Active observation |
| **Resource Usage** | Lower CPU (4 FPS) | Higher CPU (8 FPS) |
## Technical Architecture
### StreamingMessageBuffer
Extends the regular MessageBuffer with:
- Real-time content streaming capabilities
- Agent transition detection
- Content buffering and delivery
- Callback system for live updates
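The callback mechanism can be pictured with a minimal sketch. This is a simplified stand-in for `StreamingMessageBuffer` (the `MiniStreamingBuffer` name is illustrative; the real class also tracks agent status and report sections):

```python
class MiniStreamingBuffer:
    """Simplified model of the streaming buffer's callback system."""

    def __init__(self):
        self.content_buffer = ""
        self.callbacks = []

    def add_content_callback(self, callback):
        # Register a listener for live content updates.
        self.callbacks.append(callback)

    def stream_content(self, agent_name, chunk):
        # Append the new chunk and notify every registered listener,
        # passing both the delta and the accumulated buffer.
        self.content_buffer += chunk
        for callback in self.callbacks:
            callback(agent_name, chunk, self.content_buffer)


received = []
buf = MiniStreamingBuffer()
buf.add_content_callback(lambda agent, chunk, full: received.append((agent, chunk)))
buf.stream_content("Market Analyst", "RSI is oversold. ")
buf.stream_content("Market Analyst", "MACD turning up.")
```

In the real implementation the registered callback drives the live Rich panel, so every chunk appears on screen as soon as the agent produces it.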
### Layout Differences
- Additional "streaming_content" panel for live updates
- Higher refresh rate (8 FPS vs 4 FPS)
- Enhanced agent status indicators
- Real-time content formatting
### Agent Detection
The system detects agent transitions by:
- Analyzing message content for agent keywords
- Monitoring state changes in the graph
- Tracking section completions
- Managing content buffers per agent
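The keyword-based part of this detection can be sketched as follows. The mapping mirrors the `agent_mapping` dict used by the CLI; the helper name `detect_agent` is illustrative:

```python
# Keys are short keywords; values are the display names used in the UI.
AGENT_KEYWORDS = {
    "market": "Market Analyst",
    "social": "Social Media Analyst",
    "news": "News Analyst",
    "fundamentals": "Fundamentals Analyst",
}


def detect_agent(content: str):
    """Return the agent a message chunk appears to come from, or None."""
    text = content.lower()
    for key, agent_name in AGENT_KEYWORDS.items():
        # Match either the short keyword or the full display name.
        if key in text or agent_name.lower() in text:
            return agent_name
    return None
```

Because this is substring matching, false positives are possible (e.g. a market analyst mentioning "news"), which is why the system also cross-checks graph state changes and section completions.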
## Best Practices
### 1. When to Use Streaming
- **Development**: Always use streaming for development
- **Production**: Use for high-engagement scenarios
- **Debugging**: Essential for troubleshooting
- **Demos**: Great for showing system capabilities
### 2. When to Use Regular Analysis
- **Batch Processing**: When processing many symbols
- **Resource Constrained**: On slower systems
- **Automated Systems**: When only final results matter
- **Background Processing**: For scheduled runs
### 3. Performance Considerations
- Streaming uses more CPU due to higher refresh rate
- Network usage is similar (same data, different timing)
- Memory usage slightly higher due to buffering
- Terminal performance may vary based on content length
## Troubleshooting
### Common Issues
1. **Slow Terminal**: Reduce content display or use regular mode
2. **Missing Content**: Check agent keyword detection
3. **Jumbled Output**: Ensure terminal supports rich formatting
4. **Performance**: Lower refresh rate or use regular analysis
### Debug Mode
```bash
# Enable debug output
export DEBUG=1
python -m cli.main stream --advanced
```
This will show additional information about:
- Agent detection logic
- Content buffering
- State transitions
- Performance metrics
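The `data: {...}` frames shown in the output examples above are standard SSE, so a client can decode them with a few lines. A hedged sketch (the helper name `parse_sse_lines` is illustrative, not part of the codebase):

```python
import json


def parse_sse_lines(lines):
    """Yield decoded event dicts from an iterable of raw SSE lines."""
    for line in lines:
        line = line.strip()
        # Each event the backend emits is a single "data: <json>" frame;
        # blank lines are frame separators and are skipped.
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])


raw = [
    'data: {"type": "status", "message": "Starting analysis for TEM..."}',
    "",
    'data: {"type": "progress", "content": "25"}',
]
events = list(parse_sse_lines(raw))
```

A real client would read these lines from the open HTTP response to `/analyze/stream` and dispatch on the `type` field (`status`, `agent_status`, `reasoning`, `report`, `progress`, `complete`, `error`).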

backend/api.py
@@ -0,0 +1,473 @@
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from typing import Dict, Any, Optional
import datetime
import os
import json
import asyncio
from pathlib import Path
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
# Import trading agents
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
# Create FastAPI app
app = FastAPI(
title="TradingAgents API",
description="API for TradingAgents financial analysis",
version="1.0.0"
)
# Add CORS middleware for Swift app
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # In production, replace with your Swift app's URL
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# Request model
class AnalysisRequest(BaseModel):
ticker: str
# Response model
class AnalysisResponse(BaseModel):
ticker: str
analysis_date: str
market_report: Optional[str] = None
sentiment_report: Optional[str] = None
news_report: Optional[str] = None
fundamentals_report: Optional[str] = None
investment_plan: Optional[str] = None
trader_investment_plan: Optional[str] = None
final_trade_decision: Optional[str] = None
processed_signal: Optional[str] = None
error: Optional[str] = None
# Simple configuration
def get_config():
config = DEFAULT_CONFIG.copy()
config.update({
"llm_provider": "openai",
"deep_think_llm": os.getenv("DEEP_THINK_MODEL", "o3"),
"quick_think_llm": os.getenv("QUICK_THINK_MODEL", "gpt-4o"),
"backend_url": os.getenv("BACKEND_URL", "https://api.openai.com/v1"),
"max_debate_rounds": 5,
"max_risk_discuss_rounds": 3,
"online_tools": True,
})
return config
# Shared OpenAI client factory (reused by interface.py tools)
_shared_openai_client = None
def get_shared_openai_client():
"""Get a shared OpenAI client with proper configuration"""
global _shared_openai_client
if _shared_openai_client is None:
config = get_config()
from openai import OpenAI
_shared_openai_client = OpenAI(base_url=config["backend_url"])
return _shared_openai_client
def get_compatible_model_for_tools():
"""Get a model that's compatible with web_search_preview tools"""
config = get_config()
model = config["quick_think_llm"]
# Models that don't support web_search_preview
incompatible_models = ["gpt-4.1-nano", "gpt-4.1-mini"]
if model in incompatible_models:
# Fallback to a compatible model
fallback_model = "gpt-4o-mini"
print(f"⚠️ Model {model} doesn't support web_search_preview. Using {fallback_model} for tools.")
return fallback_model
return model
def save_results_to_disk(ticker: str, analysis_date: str, results: dict, config: dict):
"""Save analysis results to disk like the CLI does"""
results_dir = Path(config["results_dir"]) / ticker / analysis_date
results_dir.mkdir(parents=True, exist_ok=True)
# Save full results as JSON
results_file = results_dir / "api_analysis_results.json"
with open(results_file, 'w') as f:
json.dump(results, f, indent=2)
# Save individual reports
reports_dir = results_dir / "reports"
reports_dir.mkdir(exist_ok=True)
# Save each report as a separate file
report_types = [
('market_report', 'market_analysis.txt'),
('sentiment_report', 'sentiment_analysis.txt'),
('news_report', 'news_analysis.txt'),
('fundamentals_report', 'fundamentals_analysis.txt'),
('investment_plan', 'investment_plan.txt'),
('trader_investment_plan', 'trader_investment_plan.txt'),
('final_trade_decision', 'final_trade_decision.txt'),
('processed_signal', 'signal.txt')
]
for key, filename in report_types:
if results.get(key):
report_file = reports_dir / filename
with open(report_file, 'w') as f:
f.write(str(results[key]))
return str(results_dir)
@app.get("/")
async def root():
return {"message": "TradingAgents API is running"}
@app.post("/analyze", response_model=AnalysisResponse)
async def analyze_ticker(request: AnalysisRequest):
"""Analyze a stock ticker and return trading recommendations"""
try:
# Validate ticker
ticker = request.ticker.strip().upper()
if not ticker:
raise HTTPException(status_code=400, detail="Ticker cannot be empty")
# Use current date
analysis_date = datetime.datetime.now().strftime("%Y-%m-%d")
# Initialize trading graph with all analysts
config = get_config()
graph = TradingAgentsGraph(
selected_analysts=["market", "social", "news", "fundamentals"],
debug=False,
config=config
)
# Run analysis
final_state, processed_signal = graph.propagate(ticker, analysis_date)
# Prepare results
results = {
"ticker": ticker,
"analysis_date": analysis_date,
"market_report": final_state.get("market_report"),
"sentiment_report": final_state.get("sentiment_report"),
"news_report": final_state.get("news_report"),
"fundamentals_report": final_state.get("fundamentals_report"),
"investment_plan": final_state.get("investment_plan"),
"trader_investment_plan": final_state.get("trader_investment_plan"),
"final_trade_decision": final_state.get("final_trade_decision"),
"processed_signal": processed_signal
}
# Save results to disk
saved_path = save_results_to_disk(ticker, analysis_date, results, config)
print(f"✅ Results saved to: {saved_path}")
# Return API response
return AnalysisResponse(**results)
except Exception as e:
# Return error in response
return AnalysisResponse(
ticker=request.ticker,
analysis_date=datetime.datetime.now().strftime("%Y-%m-%d"),
error=str(e)
)
# Health check endpoint
@app.get("/health")
async def health_check():
return {"status": "healthy"}
# Simple SSE test endpoint
@app.get("/test-stream")
async def test_stream():
"""Simple SSE test endpoint"""
def event_stream():
import time
for i in range(5):
yield f"data: {json.dumps({'count': i, 'message': f'Test message {i}'})}\n\n"
time.sleep(1)
yield f"data: {json.dumps({'message': 'Test complete'})}\n\n"
return StreamingResponse(
event_stream(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*"
}
)
@app.get("/analyze/stream")
async def stream_analysis(ticker: str):
"""Stream real-time analysis updates using SSE"""
print(f"\n🚀 NEW STREAM REQUEST: ticker={ticker}")
try:
# Validate ticker
ticker = ticker.strip().upper()
if not ticker:
print("❌ Empty ticker provided")
raise HTTPException(status_code=400, detail="Ticker cannot be empty")
print(f"✅ Validated ticker: {ticker}")
# Use current date
analysis_date = datetime.datetime.now().strftime("%Y-%m-%d")
print(f"📅 Analysis date: {analysis_date}")
async def event_stream():
try:
print(f"📡 Starting event stream for {ticker}")
# Send initial status immediately
initial_event = json.dumps({'type': 'status', 'message': f'Starting analysis for {ticker}...'})
print(f"📤 Sending initial status: {initial_event}")
yield f"data: {initial_event}\n\n"
# Initialize trading graph with all analysts
print("🔧 Initializing trading graph...")
config = get_config()
print(f"📋 Config: {config}")
graph = TradingAgentsGraph(
selected_analysts=["market", "social", "news", "fundamentals"],
debug=True, # Enable debug mode
config=config
)
print("✅ Trading graph initialized")
# Initialize state and get graph args
print("🔧 Creating initial state...")
init_agent_state = graph.propagator.create_initial_state(ticker, analysis_date)
print(f"📊 Initial state keys: {list(init_agent_state.keys()) if init_agent_state else 'None'}")
args = graph.propagator.get_graph_args()
print(f"🔧 Graph args: {args}")
# Track progress and reports
agent_progress = {
"Market Analyst": "pending",
"Social Media Analyst": "pending",
"News Analyst": "pending",
"Fundamentals Analyst": "pending",
"Bull Researcher": "pending",
"Bear Researcher": "pending",
"Research Manager": "pending",
"Trading Team": "pending",
"Portfolio Manager": "pending"
}
print(f"📊 Initial agent progress: {agent_progress}")
reports_completed = []
trace = []
chunk_count = 0
print("🔄 Starting real-time streaming using graph.graph.stream()...")
# Real-time streaming using graph.stream()
for chunk in graph.graph.stream(init_agent_state, **args):
chunk_count += 1
print(f"\n📦 CHUNK {chunk_count}: {list(chunk.keys()) if chunk else 'Empty'}")
trace.append(chunk)
# Allow async event loop to process
await asyncio.sleep(0.1)
if len(chunk.get("messages", [])) > 0:
print(f"💬 Processing {len(chunk['messages'])} messages")
# Process messages for agent detection
last_message = chunk["messages"][-1]
print(f"📨 Last message type: {type(last_message)}")
if hasattr(last_message, "content"):
content = str(last_message.content)
# Extract text content if it's a list
if isinstance(last_message.content, list):
text_parts = []
for part in last_message.content:
if hasattr(part, 'text'):
text_parts.append(part.text)
elif isinstance(part, str):
text_parts.append(part)
else:
text_parts.append(str(part))
content = " ".join(text_parts)
# Send reasoning updates
reasoning_event = json.dumps({'type': 'reasoning', 'content': content[:500]})
print(f"📤 Sending reasoning: {reasoning_event[:100]}...")
yield f"data: {reasoning_event}\n\n"
# Handle section completions and send progress updates
if "market_report" in chunk and chunk["market_report"] and "market_report" not in reports_completed:
print("✅ Market report completed!")
agent_progress["Market Analyst"] = "completed"
agent_progress["Social Media Analyst"] = "in_progress"
reports_completed.append("market_report")
events = [
json.dumps({'type': 'agent_status', 'agent': 'market', 'status': 'completed'}),
json.dumps({'type': 'agent_status', 'agent': 'social', 'status': 'in_progress'}),
json.dumps({'type': 'report', 'section': 'market_report', 'content': chunk['market_report']}),
json.dumps({'type': 'progress', 'content': '25'})
]
for event in events:
print(f"📤 Sending: {event[:100]}...")
yield f"data: {event}\n\n"
if "sentiment_report" in chunk and chunk["sentiment_report"] and "sentiment_report" not in reports_completed:
print("✅ Sentiment report completed!")
agent_progress["Social Media Analyst"] = "completed"
agent_progress["News Analyst"] = "in_progress"
reports_completed.append("sentiment_report")
events = [
json.dumps({'type': 'agent_status', 'agent': 'social', 'status': 'completed'}),
json.dumps({'type': 'agent_status', 'agent': 'news', 'status': 'in_progress'}),
json.dumps({'type': 'report', 'section': 'sentiment_report', 'content': chunk['sentiment_report']}),
json.dumps({'type': 'progress', 'content': '40'})
]
for event in events:
print(f"📤 Sending: {event[:100]}...")
yield f"data: {event}\n\n"
if "news_report" in chunk and chunk["news_report"] and "news_report" not in reports_completed:
print("✅ News report completed!")
agent_progress["News Analyst"] = "completed"
agent_progress["Fundamentals Analyst"] = "in_progress"
reports_completed.append("news_report")
events = [
json.dumps({'type': 'agent_status', 'agent': 'news', 'status': 'completed'}),
json.dumps({'type': 'agent_status', 'agent': 'fundamentals', 'status': 'in_progress'}),
json.dumps({'type': 'report', 'section': 'news_report', 'content': chunk['news_report']}),
json.dumps({'type': 'progress', 'content': '55'})
]
for event in events:
print(f"📤 Sending: {event[:100]}...")
yield f"data: {event}\n\n"
if "fundamentals_report" in chunk and chunk["fundamentals_report"] and "fundamentals_report" not in reports_completed:
print("✅ Fundamentals report completed!")
agent_progress["Fundamentals Analyst"] = "completed"
agent_progress["Bull Researcher"] = "in_progress"
agent_progress["Bear Researcher"] = "in_progress"
reports_completed.append("fundamentals_report")
events = [
json.dumps({'type': 'agent_status', 'agent': 'fundamentals', 'status': 'completed'}),
json.dumps({'type': 'agent_status', 'agent': 'bull_researcher', 'status': 'in_progress'}),
json.dumps({'type': 'agent_status', 'agent': 'bear_researcher', 'status': 'in_progress'}),
json.dumps({'type': 'report', 'section': 'fundamentals_report', 'content': chunk['fundamentals_report']}),
json.dumps({'type': 'progress', 'content': '70'})
]
for event in events:
print(f"📤 Sending: {event[:100]}...")
yield f"data: {event}\n\n"
# Handle research team debates
if "investment_debate_state" in chunk and chunk["investment_debate_state"]:
print("🔄 Processing investment debate state...")
debate_state = chunk["investment_debate_state"]
if "judge_decision" in debate_state and debate_state["judge_decision"] and "investment_plan" not in reports_completed:
print("✅ Investment plan completed!")
agent_progress["Bull Researcher"] = "completed"
agent_progress["Bear Researcher"] = "completed"
agent_progress["Research Manager"] = "completed"
agent_progress["Trading Team"] = "in_progress"
reports_completed.append("investment_plan")
events = [
json.dumps({'type': 'agent_status', 'agent': 'bull_researcher', 'status': 'completed'}),
json.dumps({'type': 'agent_status', 'agent': 'bear_researcher', 'status': 'completed'}),
json.dumps({'type': 'agent_status', 'agent': 'trader', 'status': 'in_progress'}),
json.dumps({'type': 'report', 'section': 'investment_plan', 'content': debate_state['judge_decision']}),
json.dumps({'type': 'progress', 'content': '85'})
]
for event in events:
print(f"📤 Sending: {event[:100]}...")
yield f"data: {event}\n\n"
# Handle trading team
if "trader_investment_plan" in chunk and chunk["trader_investment_plan"] and "trader_investment_plan" not in reports_completed:
print("✅ Trading plan completed!")
agent_progress["Trading Team"] = "completed"
reports_completed.append("trader_investment_plan")
events = [
json.dumps({'type': 'agent_status', 'agent': 'trader', 'status': 'completed'}),
json.dumps({'type': 'report', 'section': 'trader_investment_plan', 'content': chunk['trader_investment_plan']}),
json.dumps({'type': 'progress', 'content': '95'})
]
for event in events:
print(f"📤 Sending: {event[:100]}...")
yield f"data: {event}\n\n"
# Handle final decision
if "final_trade_decision" in chunk and chunk["final_trade_decision"] and "final_trade_decision" not in reports_completed:
print("✅ Final decision completed!")
reports_completed.append("final_trade_decision")
events = [
json.dumps({'type': 'report', 'section': 'final_trade_decision', 'content': chunk['final_trade_decision']}),
json.dumps({'type': 'progress', 'content': '100'})
]
for event in events:
print(f"📤 Sending: {event[:100]}...")
yield f"data: {event}\n\n"
print(f"🔄 Streaming completed. Processed {chunk_count} chunks, {len(reports_completed)} reports completed")
# Get final state and process signal
final_state = trace[-1] if trace else {}
processed_signal = graph.process_signal(final_state.get("final_trade_decision", ""))
# Send completion
completion_event = json.dumps({'type': 'complete', 'message': 'Analysis completed successfully', 'signal': processed_signal})
print(f"📤 Sending completion: {completion_event}")
yield f"data: {completion_event}\n\n"
except Exception as e:
print(f"💥 Error in streaming: {str(e)}")
import traceback
traceback.print_exc()
error_event = json.dumps({'type': 'error', 'message': str(e)})
print(f"📤 Sending error: {error_event}")
yield f"data: {error_event}\n\n"
return StreamingResponse(
event_stream(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Headers": "Cache-Control",
"X-Accel-Buffering": "no" # Disable nginx buffering
}
)
except Exception as e:
print(f"💥 Error in stream_analysis: {str(e)}")
raise HTTPException(status_code=500, detail=str(e))

backend/check_setup.py
@@ -0,0 +1,75 @@
#!/usr/bin/env python3
"""
Check if the environment is properly set up for TradingAgents
"""
import os
import sys
print("🔍 TradingAgents Environment Check")
print("=" * 50)
# Check Python version
print(f"✓ Python version: {sys.version.split()[0]}")
# Check required packages
required_packages = [
'fastapi',
'uvicorn',
'pydantic',
'openai',
'langchain',
'langchain_openai',
'langgraph'
]
print("\nChecking required packages:")
missing_packages = []
for package in required_packages:
try:
__import__(package.replace('-', '_'))
print(f"✅ {package} is installed")
except ImportError:
print(f"❌ {package} is NOT installed")
missing_packages.append(package)
# Check environment variables
print("\nChecking environment variables:")
env_vars = {
'OPENAI_API_KEY': 'Required for AI agents',
'FINNHUB_API_KEY': 'Required for market data',
'REDDIT_CLIENT_ID': 'Optional for sentiment analysis',
'REDDIT_CLIENT_SECRET': 'Optional for sentiment analysis'
}
missing_required = []
for var, description in env_vars.items():
value = os.getenv(var)
if value:
print(f"✅ {var} is set ({description})")
else:
if 'Required' in description:
print(f"❌ {var} is NOT set ({description})")
missing_required.append(var)
else:
print(f"⚠️ {var} is not set ({description})")
# Summary
print("\n" + "=" * 50)
if missing_packages:
print("❌ Missing packages:")
for pkg in missing_packages:
print(f" - {pkg}")
print("\nInstall with: pip install -r requirements.txt")
if missing_required:
print("❌ Missing required environment variables:")
for var in missing_required:
print(f" - {var}")
print("\nAdd these to your .env file")
if not missing_packages and not missing_required:
print("✅ Environment is properly configured!")
print("\nYou can now run:")
print(" uv run python3 run_api.py")
else:
print("\n⚠️ Fix the issues above before running the API")

@@ -1182,10 +1182,32 @@ def analyze(
"--advanced",
"-a",
help="Use advanced configuration mode with full customization options"
),
streaming: bool = typer.Option(
False,
"--streaming",
"-s",
help="Enable real-time streaming of analysis reports as they're generated"
)
):
"""Run trading analysis with simplified or advanced configuration."""
if streaming:
run_analysis_streaming(advanced_mode=advanced)
else:
run_analysis(advanced_mode=advanced)
@app.command()
def stream(
advanced: bool = typer.Option(
False,
"--advanced",
"-a",
help="Use advanced configuration mode with full customization options"
)
):
"""Run real-time streaming trading analysis."""
run_analysis_streaming(advanced_mode=advanced)
@app.callback(invoke_without_command=True)
@@ -1196,12 +1218,510 @@ def main(
"--advanced",
"-a",
help="Use advanced configuration mode with full customization options"
),
streaming: bool = typer.Option(
False,
"--streaming",
"-s",
help="Enable real-time streaming of analysis reports as they're generated"
)
):
"""TradingAgents CLI: Multi-Agents LLM Financial Trading Framework"""
if ctx.invoked_subcommand is None:
# Default behavior - run analysis
if streaming:
run_analysis_streaming(advanced_mode=advanced)
else:
run_analysis(advanced_mode=advanced)
class StreamingMessageBuffer(MessageBuffer):
"""Enhanced MessageBuffer for real-time content streaming"""
def __init__(self, max_length=100):
super().__init__(max_length)
self.streaming_content = {
"current_agent": None,
"current_content": "",
"content_buffer": "",
"last_streamed_length": 0
}
self.content_callbacks = []
def add_content_callback(self, callback):
"""Add a callback to be called when new content is streamed"""
self.content_callbacks.append(callback)
def stream_content(self, agent_name, content_chunk):
"""Stream content in real-time"""
self.streaming_content["current_agent"] = agent_name
self.streaming_content["content_buffer"] += content_chunk
# Call registered callbacks with new content
for callback in self.content_callbacks:
callback(agent_name, content_chunk, self.streaming_content["content_buffer"])
def finalize_streaming_content(self, section_name):
"""Finalize the streaming content into a report section"""
if self.streaming_content["content_buffer"]:
self.update_report_section(section_name, self.streaming_content["content_buffer"])
self.streaming_content["content_buffer"] = ""
self.streaming_content["last_streamed_length"] = 0
def create_streaming_layout():
"""Create layout optimized for streaming content"""
layout = Layout()
layout.split_column(
Layout(name="header", size=3),
Layout(name="main"),
Layout(name="footer", size=3),
)
layout["main"].split_column(
Layout(name="upper", ratio=2),
Layout(name="streaming_content", ratio=4),
Layout(name="analysis", ratio=3)
)
layout["upper"].split_row(
Layout(name="progress", ratio=2), Layout(name="messages", ratio=3)
)
return layout
def update_streaming_display(layout, streaming_buffer, spinner_text=None):
"""Update display with streaming content"""
# Update header
layout["header"].update(
Panel(
"[bold green]Welcome to TradingAgents CLI[/bold green]\n"
"[dim]© [Tauric Research](https://github.com/TauricResearch)[/dim]",
title="Welcome to TradingAgents",
border_style="green",
)
)
# Update progress panel using streaming_buffer
progress_table = Table(show_header=False, box=box.MINIMAL)
progress_table.add_column("Agent", style="cyan", no_wrap=True)
progress_table.add_column("Status", style="magenta")
for agent, status in streaming_buffer.agent_status.items():
if status == "completed":
status_icon = "✅"
elif status == "in_progress":
status_icon = "🔄"
else:
status_icon = "⏳"
progress_table.add_row(agent, f"{status_icon} {status.title()}")
layout["progress"].update(
Panel(
progress_table,
title="Agent Progress",
border_style="blue"
)
)
# Update messages panel using streaming_buffer
messages_content = []
for timestamp, msg_type, content in list(streaming_buffer.messages)[-10:]: # Show last 10 messages
messages_content.append(f"[dim]{timestamp}[/dim] [{msg_type}] {content}")
if spinner_text:
messages_content.append(f"[yellow]⚡ {spinner_text}[/yellow]")
layout["messages"].update(
Panel(
"\n".join(messages_content),
title="Recent Messages",
border_style="yellow"
)
)
# Add streaming content panel
if streaming_buffer.streaming_content["current_agent"] and streaming_buffer.streaming_content["content_buffer"]:
agent_name = streaming_buffer.streaming_content["current_agent"]
content = streaming_buffer.streaming_content["content_buffer"]
# Limit display content to prevent overwhelming the terminal
display_content = content[-2000:] if len(content) > 2000 else content
if len(content) > 2000:
display_content = "...\n" + display_content
streaming_panel = Panel(
Markdown(display_content),
title=f"🔴 Live: {agent_name}",
border_style="red",
expand=True
)
layout["streaming_content"].update(streaming_panel)
else:
layout["streaming_content"].update(
Panel(
"[dim]Waiting for content to stream...[/dim]",
title="📡 Streaming Content",
border_style="dim"
)
)
# Update analysis panel using streaming_buffer
if streaming_buffer.current_report:
layout["analysis"].update(
Panel(
Markdown(streaming_buffer.current_report),
title="Latest Report Section",
border_style="green"
)
)
else:
layout["analysis"].update(
Panel(
"[dim]Analysis reports will appear here...[/dim]",
title="Analysis Reports",
border_style="dim"
)
)
# Footer with instructions
layout["footer"].update(
Panel(
"[bold]TradingAgents Streaming Analysis[/bold] | Press Ctrl+C to stop",
style="bold white on blue"
)
)
def update_research_team_status_streaming(streaming_buffer, status):
"""Update all research team agent statuses for streaming"""
research_agents = ["Bull Researcher", "Bear Researcher", "Research Manager"]
for agent in research_agents:
streaming_buffer.update_agent_status(agent, status)
def run_analysis_streaming(advanced_mode=False):
"""
Streaming version of run_analysis that delivers reports in real-time
"""
# Get user selections based on mode
if advanced_mode:
selections = get_user_selections_advanced()
else:
selections = get_user_selections()
# Create config with selected research depth
config = DEFAULT_CONFIG.copy()
config["max_debate_rounds"] = selections["research_depth"]
config["max_risk_discuss_rounds"] = selections["research_depth"]
config["quick_think_llm"] = selections["shallow_thinker"]
config["deep_think_llm"] = selections["deep_thinker"]
config["backend_url"] = selections["backend_url"]
config["llm_provider"] = selections["llm_provider"].lower()
# Initialize the graph
graph = TradingAgentsGraph(
[analyst.value for analyst in selections["analysts"]], config=config, debug=True
)
# Create result directory
results_dir = Path(config["results_dir"]) / selections["ticker"] / selections["analysis_date"]
results_dir.mkdir(parents=True, exist_ok=True)
report_dir = results_dir / "reports"
report_dir.mkdir(parents=True, exist_ok=True)
log_file = results_dir / "message_tool.log"
log_file.touch(exist_ok=True)
# Use streaming message buffer instead of regular one
streaming_buffer = StreamingMessageBuffer()
def save_message_decorator(obj, func_name):
func = getattr(obj, func_name)
@wraps(func)
def wrapper(*args, **kwargs):
func(*args, **kwargs)
timestamp, message_type, content = obj.messages[-1]
content = content.replace("\n", " ") # Replace newlines with spaces
with open(log_file, "a") as f:
f.write(f"{timestamp} [{message_type}] {content}\n")
return wrapper
def save_tool_call_decorator(obj, func_name):
func = getattr(obj, func_name)
@wraps(func)
def wrapper(*args, **kwargs):
func(*args, **kwargs)
timestamp, tool_name, args = obj.tool_calls[-1]
args_str = ", ".join(f"{k}={v}" for k, v in args.items())
with open(log_file, "a") as f:
f.write(f"{timestamp} [Tool Call] {tool_name}({args_str})\n")
return wrapper
def save_report_section_decorator(obj, func_name):
func = getattr(obj, func_name)
@wraps(func)
def wrapper(section_name, content):
func(section_name, content)
if section_name in obj.report_sections and obj.report_sections[section_name] is not None:
content = obj.report_sections[section_name]
if content:
file_name = f"{section_name}.md"
with open(report_dir / file_name, "w") as f:
f.write(content)
return wrapper
streaming_buffer.add_message = save_message_decorator(streaming_buffer, "add_message")
streaming_buffer.add_tool_call = save_tool_call_decorator(streaming_buffer, "add_tool_call")
streaming_buffer.update_report_section = save_report_section_decorator(streaming_buffer, "update_report_section")
# Create streaming layout
layout = create_streaming_layout()
# Agent mapping for streaming
agent_mapping = {
"market": "Market Analyst",
"social": "Social Media Analyst",
"news": "News Analyst",
"fundamentals": "Fundamentals Analyst",
"bull": "Bull Researcher",
"bear": "Bear Researcher",
"research_manager": "Research Manager",
"trader": "Trading Team",
"risky": "Risky Analyst",
"safe": "Safe Analyst",
"neutral": "Neutral Analyst",
"portfolio": "Portfolio Manager"
}
with Live(layout, refresh_per_second=8) as live: # Higher refresh rate for streaming
# Initial display
update_streaming_display(layout, streaming_buffer)
# Add initial messages
streaming_buffer.add_message("System", f"Selected ticker: {selections['ticker']}")
streaming_buffer.add_message(
"System", f"Analysis date: {selections['analysis_date']}"
)
streaming_buffer.add_message(
"System",
f"Selected analysts: {', '.join(analyst.value for analyst in selections['analysts'])}",
)
update_streaming_display(layout, streaming_buffer)
# Reset agent statuses
for agent in streaming_buffer.agent_status:
streaming_buffer.update_agent_status(agent, "pending")
# Reset report sections
for section in streaming_buffer.report_sections:
streaming_buffer.report_sections[section] = None
streaming_buffer.current_report = None
streaming_buffer.final_report = None
# Update agent status to in_progress for the first analyst
first_analyst = f"{selections['analysts'][0].value.capitalize()} Analyst"
streaming_buffer.update_agent_status(first_analyst, "in_progress")
update_streaming_display(layout, streaming_buffer)
# Create spinner text
spinner_text = (
f"Analyzing {selections['ticker']} on {selections['analysis_date']}..."
)
update_streaming_display(layout, streaming_buffer, spinner_text)
# Initialize state and get graph args
init_agent_state = graph.propagator.create_initial_state(
selections["ticker"], selections["analysis_date"]
)
args = graph.propagator.get_graph_args()
# Stream the analysis with real-time content delivery
trace = []
current_streaming_agent = None
for chunk in graph.graph.stream(init_agent_state, **args):
if len(chunk["messages"]) > 0:
# Get the last message from the chunk
last_message = chunk["messages"][-1]
# Extract message content and type
if hasattr(last_message, "content"):
content = extract_content_string(last_message.content)
msg_type = "Reasoning"
# Detect which agent is currently speaking and stream content
agent_detected = None
for key, agent_name in agent_mapping.items():
if any(keyword in content.lower() for keyword in [key, agent_name.lower()]):
agent_detected = agent_name
break
# If we detected an agent or have ongoing streaming
if agent_detected or current_streaming_agent:
if agent_detected and agent_detected != current_streaming_agent:
# New agent started - finalize previous and start new
if current_streaming_agent:
section_map = {
"Market Analyst": "market_report",
"Social Media Analyst": "sentiment_report",
"News Analyst": "news_report",
"Fundamentals Analyst": "fundamentals_report",
"Research Manager": "investment_plan",
"Trading Team": "trader_investment_plan",
"Portfolio Manager": "final_trade_decision"
}
if current_streaming_agent in section_map:
streaming_buffer.finalize_streaming_content(section_map[current_streaming_agent])
current_streaming_agent = agent_detected
streaming_buffer.update_agent_status(agent_detected, "in_progress")
# Stream the content in real-time
if current_streaming_agent:
streaming_buffer.stream_content(current_streaming_agent, content + "\n")
else:
content = str(last_message)
msg_type = "System"
# Add message to buffer
streaming_buffer.add_message(msg_type, content[:200] + "..." if len(content) > 200 else content)
# Handle tool calls
if hasattr(last_message, "tool_calls"):
for tool_call in last_message.tool_calls:
if isinstance(tool_call, dict):
streaming_buffer.add_tool_call(
tool_call["name"], tool_call["args"]
)
else:
streaming_buffer.add_tool_call(tool_call.name, tool_call.args)
# Handle section completions and agent status updates
# Analyst Team Reports
if "market_report" in chunk and chunk["market_report"]:
streaming_buffer.update_report_section("market_report", chunk["market_report"])
streaming_buffer.update_agent_status("Market Analyst", "completed")
current_streaming_agent = None
if "social" in [a.value for a in selections["analysts"]]:
streaming_buffer.update_agent_status("Social Media Analyst", "in_progress")
if "sentiment_report" in chunk and chunk["sentiment_report"]:
streaming_buffer.update_report_section("sentiment_report", chunk["sentiment_report"])
streaming_buffer.update_agent_status("Social Media Analyst", "completed")
current_streaming_agent = None
if "news" in [a.value for a in selections["analysts"]]:
streaming_buffer.update_agent_status("News Analyst", "in_progress")
if "news_report" in chunk and chunk["news_report"]:
streaming_buffer.update_report_section("news_report", chunk["news_report"])
streaming_buffer.update_agent_status("News Analyst", "completed")
current_streaming_agent = None
if "fundamentals" in [a.value for a in selections["analysts"]]:
streaming_buffer.update_agent_status("Fundamentals Analyst", "in_progress")
if "fundamentals_report" in chunk and chunk["fundamentals_report"]:
streaming_buffer.update_report_section("fundamentals_report", chunk["fundamentals_report"])
streaming_buffer.update_agent_status("Fundamentals Analyst", "completed")
current_streaming_agent = None
update_research_team_status_streaming(streaming_buffer, "in_progress")
# Research Team - Handle Investment Debate State with streaming
if "investment_debate_state" in chunk and chunk["investment_debate_state"]:
debate_state = chunk["investment_debate_state"]
if "bull_history" in debate_state and debate_state["bull_history"]:
update_research_team_status_streaming(streaming_buffer, "in_progress")
bull_responses = debate_state["bull_history"].split("\n")
latest_bull = bull_responses[-1] if bull_responses else ""
if latest_bull:
streaming_buffer.stream_content("Bull Researcher", latest_bull + "\n")
if "bear_history" in debate_state and debate_state["bear_history"]:
update_research_team_status_streaming(streaming_buffer, "in_progress")
bear_responses = debate_state["bear_history"].split("\n")
latest_bear = bear_responses[-1] if bear_responses else ""
if latest_bear:
streaming_buffer.stream_content("Bear Researcher", latest_bear + "\n")
if "judge_decision" in debate_state and debate_state["judge_decision"]:
streaming_buffer.stream_content("Research Manager", debate_state["judge_decision"] + "\n")
streaming_buffer.finalize_streaming_content("investment_plan")
update_research_team_status_streaming(streaming_buffer, "completed")
streaming_buffer.update_agent_status("Risky Analyst", "in_progress")
current_streaming_agent = None
# Trading Team with streaming
if "trader_investment_plan" in chunk and chunk["trader_investment_plan"]:
streaming_buffer.update_report_section("trader_investment_plan", chunk["trader_investment_plan"])
streaming_buffer.update_agent_status("Risky Analyst", "in_progress")
current_streaming_agent = None
# Risk Management Team with streaming
if "risk_debate_state" in chunk and chunk["risk_debate_state"]:
risk_state = chunk["risk_debate_state"]
if "current_risky_response" in risk_state and risk_state["current_risky_response"]:
streaming_buffer.update_agent_status("Risky Analyst", "in_progress")
streaming_buffer.stream_content("Risky Analyst", risk_state["current_risky_response"] + "\n")
if "current_safe_response" in risk_state and risk_state["current_safe_response"]:
streaming_buffer.update_agent_status("Safe Analyst", "in_progress")
streaming_buffer.stream_content("Safe Analyst", risk_state["current_safe_response"] + "\n")
if "current_neutral_response" in risk_state and risk_state["current_neutral_response"]:
streaming_buffer.update_agent_status("Neutral Analyst", "in_progress")
streaming_buffer.stream_content("Neutral Analyst", risk_state["current_neutral_response"] + "\n")
if "judge_decision" in risk_state and risk_state["judge_decision"]:
streaming_buffer.stream_content("Portfolio Manager", risk_state["judge_decision"] + "\n")
streaming_buffer.finalize_streaming_content("final_trade_decision")
# Mark all risk team as completed
streaming_buffer.update_agent_status("Risky Analyst", "completed")
streaming_buffer.update_agent_status("Safe Analyst", "completed")
streaming_buffer.update_agent_status("Neutral Analyst", "completed")
streaming_buffer.update_agent_status("Portfolio Manager", "completed")
current_streaming_agent = None
# Update the display with streaming content
update_streaming_display(layout, streaming_buffer)
trace.append(chunk)
# Finalize any remaining streaming content
if current_streaming_agent:
section_map = {
"Market Analyst": "market_report",
"Social Media Analyst": "sentiment_report",
"News Analyst": "news_report",
"Fundamentals Analyst": "fundamentals_report",
"Research Manager": "investment_plan",
"Trading Team": "trader_investment_plan",
"Portfolio Manager": "final_trade_decision"
}
if current_streaming_agent in section_map:
streaming_buffer.finalize_streaming_content(section_map[current_streaming_agent])
# Get final state and decision
final_state = trace[-1]
decision = graph.process_signal(final_state["final_trade_decision"])
# Update all agent statuses to completed
for agent in streaming_buffer.agent_status:
streaming_buffer.update_agent_status(agent, "completed")
streaming_buffer.add_message(
"Analysis", f"Completed streaming analysis for {selections['analysis_date']}"
)
# Update final report sections
for section in streaming_buffer.report_sections.keys():
if section in final_state:
streaming_buffer.update_report_section(section, final_state[section])
# Display the complete final report
display_complete_report(final_state)
update_streaming_display(layout, streaming_buffer)
if __name__ == "__main__":


@@ -24,3 +24,7 @@ rich
questionary
langchain_anthropic
langchain-google-genai
fastapi
pydantic
uvicorn[standard]
python-dotenv

backend/restart_api.sh Executable file

@@ -0,0 +1,132 @@
#!/bin/bash
# TradingAgents API Restart Script
# This script kills all existing API processes and restarts the server cleanly
echo "🔄 TradingAgents API Restart Script"
echo "=================================="
# Function to kill processes by pattern
kill_processes() {
local pattern=$1
local description=$2
echo "🔍 Looking for $description..."
pids=$(ps aux | grep "$pattern" | grep -v grep | awk '{print $2}')
if [ -n "$pids" ]; then
echo "💀 Killing $description: $pids"
echo "$pids" | xargs kill -9 2>/dev/null || true
sleep 1
else
echo "✅ No $description found"
fi
}
# Function to kill process on specific port
kill_port() {
local port=$1
echo "🔍 Looking for processes on port $port..."
# Try lsof first
if command -v lsof >/dev/null 2>&1; then
pids=$(lsof -ti :$port 2>/dev/null)
if [ -n "$pids" ]; then
echo "💀 Killing processes on port $port: $pids"
echo "$pids" | xargs kill -9 2>/dev/null || true
sleep 1
else
echo "✅ No processes found on port $port"
fi
else
echo "⚠️ lsof not available, skipping port check"
fi
}
# Step 1: Kill all Python API processes
echo
echo "📋 Step 1: Killing existing API processes..."
kill_processes "run_api.py" "run_api.py processes"
kill_processes "uvicorn" "uvicorn processes"
kill_processes "TradingAgents" "TradingAgents processes"
# Step 2: Kill processes on port 8000
echo
echo "📋 Step 2: Killing processes on port 8000..."
kill_port 8000
# Step 3: Wait for cleanup
echo
echo "📋 Step 3: Waiting for cleanup..."
sleep 3
# Step 4: Verify port is free
echo
echo "📋 Step 4: Verifying port 8000 is free..."
if command -v lsof >/dev/null 2>&1; then
if lsof -ti :8000 >/dev/null 2>&1; then
echo "⚠️ Port 8000 is still in use. Trying force kill..."
lsof -ti :8000 | xargs kill -9 2>/dev/null || true
sleep 2
fi
fi
# Step 5: Change to backend directory
echo
echo "📋 Step 5: Changing to backend directory..."
if [ ! -f "run_api.py" ]; then
if [ -f "../backend/run_api.py" ]; then
cd ../backend
echo "✅ Changed to backend directory"
elif [ -f "backend/run_api.py" ]; then
cd backend
echo "✅ Changed to backend directory"
else
echo "❌ Cannot find run_api.py. Please run this script from the project root or backend directory."
exit 1
fi
else
echo "✅ Already in backend directory"
fi
# Step 6: Activate virtual environment
echo
echo "📋 Step 6: Activating virtual environment..."
if [ -f "venv/bin/activate" ]; then
source venv/bin/activate
echo "✅ Virtual environment activated"
elif [ -f "../venv/bin/activate" ]; then
source ../venv/bin/activate
echo "✅ Virtual environment activated"
else
echo "⚠️ Virtual environment not found. Continuing without activation..."
fi
# Step 7: Check if port is really free
echo
echo "📋 Step 7: Final port check..."
if command -v nc >/dev/null 2>&1; then
if nc -z localhost 8000 2>/dev/null; then
echo "❌ Port 8000 is still occupied!"
echo "💡 Try running: sudo lsof -ti :8000 | xargs sudo kill -9"
echo "💡 Or use a different port in the script"
exit 1
else
echo "✅ Port 8000 is free"
fi
fi
# Step 8: Start the API server
echo
echo "📋 Step 8: Starting TradingAgents API server..."
echo "🚀 Server will be available at http://localhost:8000"
echo "🚀 Server will be available at http://192.168.4.223:8000 (for iOS)"
echo "📚 API docs will be at http://localhost:8000/docs"
echo
echo "💡 Press Ctrl+C to stop the server"
echo "💡 Enhanced logging is active - you'll see detailed tool usage"
echo
echo "=================================="
# Start the server (not in background so we can see logs)
python run_api.py

backend/run_api.py Normal file

@@ -0,0 +1,22 @@
#!/usr/bin/env python
"""
Run the TradingAgents FastAPI server
"""
import uvicorn
from tradingagents.default_config import DEFAULT_CONFIG
if __name__ == "__main__":
host = "0.0.0.0" # Allow connections from any interface
port = DEFAULT_CONFIG["api_port"]
print("\n🚀 Starting TradingAgents API server...")
print(f"📍 Server will be available at http://localhost:{port}")
print(f"📚 API docs will be at http://localhost:{port}/docs\n")
uvicorn.run(
"api:app",
host=host,
port=port,
reload=True,
log_level="info"
)

backend/test_api.md Normal file

@@ -0,0 +1,127 @@
# Testing the TradingAgents API
## 1. Start the API Server
First, make sure you're in the backend directory and have your environment activated:
```bash
cd backend
source venv/bin/activate # On Windows: venv\Scripts\activate
python run_api.py
```
You should see:
```
🚀 Starting TradingAgents API server...
📍 Server will be available at http://localhost:8000
📚 API docs will be at http://localhost:8000/docs
```
## 2. Test Using FastAPI Interactive Docs
The easiest way to test is using FastAPI's built-in documentation:
1. Open your browser and go to: http://localhost:8000/docs
2. You'll see the interactive API documentation (Swagger UI)
3. Click on any endpoint to expand it
4. Click "Try it out" to test the endpoint
## 3. Test Individual Endpoints
### A. Test Root Endpoint
```bash
curl http://localhost:8000/
```
Expected response:
```json
{"message": "TradingAgents API is running"}
```
### B. Test Health Check
```bash
curl http://localhost:8000/health
```
Expected response:
```json
{"status": "healthy"}
```
### C. Test Analysis Endpoint (Main Functionality)
```bash
curl -X POST http://localhost:8000/analyze \
-H "Content-Type: application/json" \
-d '{"ticker": "AAPL"}'
```
Expected response structure:
```json
{
"ticker": "AAPL",
"analysis_date": "2024-06-29",
"market_report": "...",
"sentiment_report": "...",
"news_report": "...",
"fundamentals_report": "...",
"investment_plan": "...",
"trader_investment_plan": "...",
"final_trade_decision": "...",
"processed_signal": "BUY/SELL/HOLD",
"error": null
}
```
## 4. Test Using Python
Create a test script `test_api.py`:
```python
import requests
import json
# Test root endpoint
response = requests.get("http://localhost:8000/")
print("Root endpoint:", response.json())
# Test health check
response = requests.get("http://localhost:8000/health")
print("Health check:", response.json())
# Test analysis
data = {"ticker": "AAPL"}
response = requests.post(
"http://localhost:8000/analyze",
json=data
)
print("Analysis response status:", response.status_code)
if response.status_code == 200:
result = response.json()
print(f"Ticker: {result['ticker']}")
print(f"Date: {result['analysis_date']}")
print(f"Signal: {result.get('processed_signal', 'N/A')}")
print(f"Error: {result.get('error', 'None')}")
```
## 5. Common Issues and Solutions
### Issue: Connection Refused
- **Solution**: Make sure the server is running (`python run_api.py`)
### Issue: Missing API Keys
- **Solution**: Ensure your `.env` file has:
```
OPENAI_API_KEY=your_key_here
FINNHUB_API_KEY=your_key_here
```
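To confirm the keys are actually visible to the server process, here is a minimal sanity-check sketch (the key names match the `.env` entries above; the `missing_api_keys` helper is illustrative, not part of the project):

```python
import os

def missing_api_keys(env=None):
    """Return the names of required API keys that are not set."""
    env = os.environ if env is None else env
    required = ("OPENAI_API_KEY", "FINNHUB_API_KEY")
    return [key for key in required if not env.get(key)]

print("Missing keys:", ", ".join(missing_api_keys()) or "none")
```

Run it from the backend directory after loading your `.env`; an empty "missing" list means the server should see both keys.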
### Issue: Timeout or Slow Response
- **Solution**: The analysis can take 30-60 seconds as it involves multiple AI agents
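Because a single analysis can run for a minute, give your client an explicit timeout rather than relying on the default. A minimal sketch with `requests` (the endpoint and payload match the examples above; the 120-second budget is an arbitrary safety margin, not a value from the project):

```python
import requests

def analyze(ticker, base_url="http://localhost:8000", timeout=120):
    """POST to /analyze with an explicit timeout so a slow run fails cleanly."""
    response = requests.post(
        f"{base_url}/analyze", json={"ticker": ticker}, timeout=timeout
    )
    response.raise_for_status()  # surface HTTP errors instead of parsing bad JSON
    return response.json()
```

If a ticker reliably exceeds the budget, raise `timeout` before assuming the server is broken.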
## 6. Test from iOS App
1. Make sure the API server is running
2. Open the iOS app in Xcode
3. Run the app (⌘+R)
4. Enter a ticker and tap "Analyze"
5. Check Xcode console for any network errors

backend/test_api.py Normal file

@@ -0,0 +1,101 @@
#!/usr/bin/env python
"""
Simple script to test the TradingAgents API
"""
import requests
import json
import time
from datetime import datetime
# API base URL
BASE_URL = "http://localhost:8000"
def test_root():
"""Test root endpoint"""
print("Testing root endpoint...")
try:
response = requests.get(f"{BASE_URL}/")
print(f"✅ Status: {response.status_code}")
print(f"✅ Response: {response.json()}")
except Exception as e:
print(f"❌ Error: {e}")
print()
def test_health():
"""Test health check endpoint"""
print("Testing health endpoint...")
try:
response = requests.get(f"{BASE_URL}/health")
print(f"✅ Status: {response.status_code}")
print(f"✅ Response: {response.json()}")
except Exception as e:
print(f"❌ Error: {e}")
print()
def test_analysis(ticker="AAPL"):
"""Test analysis endpoint"""
print(f"Testing analysis endpoint with ticker: {ticker}")
print("⏳ This may take 30-60 seconds...")
try:
start_time = time.time()
response = requests.post(
f"{BASE_URL}/analyze",
json={"ticker": ticker}
)
end_time = time.time()
print(f"✅ Status: {response.status_code}")
print(f"✅ Time taken: {end_time - start_time:.2f} seconds")
if response.status_code == 200:
result = response.json()
print(f"✅ Ticker: {result['ticker']}")
print(f"✅ Date: {result['analysis_date']}")
print(f"✅ Signal: {result.get('processed_signal', 'N/A')}")
if result.get('error'):
print(f"⚠️ Error in analysis: {result['error']}")
else:
print("✅ Analysis completed successfully!")
# Show available reports
reports = [
'market_report', 'sentiment_report', 'news_report',
'fundamentals_report', 'final_trade_decision'
]
available_reports = [r for r in reports if result.get(r)]
print(f"✅ Available reports: {', '.join(available_reports)}")
else:
print(f"❌ Error response: {response.text}")
except requests.exceptions.ConnectionError:
print("❌ Error: Could not connect to API. Is the server running?")
print(" Run: python run_api.py")
except Exception as e:
print(f"❌ Error: {e}")
print()
def main():
print("🚀 TradingAgents API Test Suite")
print(f"📍 Testing API at: {BASE_URL}")
print(f"🕐 Started at: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
print("-" * 50)
# Test endpoints
test_root()
test_health()
# Ask user if they want to test analysis
user_input = input("Do you want to test the analysis endpoint? (y/n): ").lower()
if user_input == 'y':
ticker = input("Enter ticker to analyze (default: AAPL): ").strip().upper()
if not ticker:
ticker = "AAPL"
test_analysis(ticker)
print("-" * 50)
print("✅ Testing complete!")
if __name__ == "__main__":
main()


@@ -11,7 +11,7 @@ class FinancialSituationMemory:
self.embedding = "text-embedding-3-small"
self.client = OpenAI(base_url=config["backend_url"])
self.chroma_client = chromadb.Client(Settings(allow_reset=True))
self.situation_collection = self.chroma_client.create_collection(name=name)
self.situation_collection = self.chroma_client.get_or_create_collection(name=name)
def get_embedding(self, text):
"""Get OpenAI embedding for a text"""


@@ -73,11 +73,28 @@ def getNewsData(query, start_date, end_date):
for el in results_on_page:
try:
link = el.find("a")["href"]
title = el.select_one("div.MBeuO").get_text()
snippet = el.select_one(".GI74Re").get_text()
date = el.select_one(".LfVVr").get_text()
source = el.select_one(".NUnG9d span").get_text()
# Extract link
link_el = el.find("a")
if not link_el or "href" not in link_el.attrs:
continue
link = link_el["href"]
# Extract title with null check
title_el = el.select_one("div.MBeuO")
title = title_el.get_text() if title_el else "No title"
# Extract snippet with null check
snippet_el = el.select_one(".GI74Re")
snippet = snippet_el.get_text() if snippet_el else "No snippet"
# Extract date with null check
date_el = el.select_one(".LfVVr")
date = date_el.get_text() if date_el else "No date"
# Extract source with null check
source_el = el.select_one(".NUnG9d span")
source = source_el.get_text() if source_el else "Unknown source"
news_results.append(
{
"link": link,


@@ -14,6 +14,10 @@ from tqdm import tqdm
import yfinance as yf
from openai import OpenAI
from .config import get_config, set_config, DATA_DIR
from dotenv import load_dotenv
# Load environment variables so OpenAI tools can access API keys
load_dotenv()
def get_finnhub_news(
@@ -287,25 +291,47 @@ def get_google_news(
curr_date: Annotated[str, "Curr date in yyyy-mm-dd format"],
look_back_days: Annotated[int, "how many days to look back"],
) -> str:
query = query.replace(" ", "+")
import logging
import time
logger = logging.getLogger(__name__)
# Enhanced logging - Tool entry (for comparison with failing tools)
start_time = time.time()
logger.info(f"🔧 TOOL START: get_google_news | Agent: News Analyst | Query: {query} | Date: {curr_date}")
try:
query = query.replace(" ", "+")
start_date = datetime.strptime(curr_date, "%Y-%m-%d")
before = start_date - relativedelta(days=look_back_days)
before = before.strftime("%Y-%m-%d")
start_date = datetime.strptime(curr_date, "%Y-%m-%d")
before = start_date - relativedelta(days=look_back_days)
before = before.strftime("%Y-%m-%d")
news_results = getNewsData(query, before, curr_date)
news_results = getNewsData(query, before, curr_date)
news_str = ""
news_str = ""
for news in news_results:
news_str += (
f"### {news['title']} (source: {news['source']}) \n\n{news['snippet']}\n\n"
)
for news in news_results:
news_str += (
f"### {news['title']} (source: {news['source']}) \n\n{news['snippet']}\n\n"
)
if len(news_results) == 0:
return ""
return f"## {query} Google News, from {before} to {curr_date}:\n\n{news_str}"
if len(news_results) == 0:
result = ""
else:
result = f"## {query} Google News, from {before} to {curr_date}:\n\n{news_str}"
# Enhanced logging - Success
duration = time.time() - start_time
logger.info(f"✅ TOOL SUCCESS: get_google_news | Duration: {duration:.2f}s | Results count: {len(news_results)}")
logger.info(f"📋 TOOL OUTPUT LENGTH: {len(result)} characters")
return result
except Exception as e:
# Enhanced logging - Error (for comparison)
duration = time.time() - start_time
logger.error(f"❌ TOOL ERROR: get_google_news | Duration: {duration:.2f}s")
logger.error(f"🚨 FULL ERROR DETAILS: {type(e).__name__}: {str(e)}")
raise e
def get_reddit_global_news(
@@ -630,41 +656,60 @@ def get_YFin_data_online(
start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
end_date: Annotated[str, "End date in yyyy-mm-dd format"],
):
import logging
import time
logger = logging.getLogger(__name__)
# Enhanced logging - Tool entry (for comparison with failing tools)
start_time = time.time()
logger.info(f"🔧 TOOL START: get_YFin_data_online | Agent: Market Analyst | Symbol: {symbol} | Range: {start_date} to {end_date}")
try:
datetime.strptime(start_date, "%Y-%m-%d")
datetime.strptime(end_date, "%Y-%m-%d")
datetime.strptime(start_date, "%Y-%m-%d")
datetime.strptime(end_date, "%Y-%m-%d")
# Create ticker object
ticker = yf.Ticker(symbol.upper())
# Create ticker object
ticker = yf.Ticker(symbol.upper())
# Fetch historical data for the specified date range
data = ticker.history(start=start_date, end=end_date)
# Fetch historical data for the specified date range
data = ticker.history(start=start_date, end=end_date)
# Check if data is empty
if data.empty:
result = f"No data found for symbol '{symbol}' between {start_date} and {end_date}"
else:
# Remove timezone info from index for cleaner output
if data.index.tz is not None:
data.index = data.index.tz_localize(None)
# Check if data is empty
if data.empty:
return (
f"No data found for symbol '{symbol}' between {start_date} and {end_date}"
)
# Round numerical values to 2 decimal places for cleaner display
numeric_columns = ["Open", "High", "Low", "Close", "Adj Close"]
for col in numeric_columns:
if col in data.columns:
data[col] = data[col].round(2)
# Remove timezone info from index for cleaner output
if data.index.tz is not None:
data.index = data.index.tz_localize(None)
# Convert DataFrame to CSV string
csv_string = data.to_csv()
# Round numerical values to 2 decimal places for cleaner display
numeric_columns = ["Open", "High", "Low", "Close", "Adj Close"]
for col in numeric_columns:
if col in data.columns:
data[col] = data[col].round(2)
# Add header information
header = f"# Stock data for {symbol.upper()} from {start_date} to {end_date}\n"
header += f"# Total records: {len(data)}\n"
header += f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
# Convert DataFrame to CSV string
csv_string = data.to_csv()
# Add header information
header = f"# Stock data for {symbol.upper()} from {start_date} to {end_date}\n"
header += f"# Total records: {len(data)}\n"
header += f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
return header + csv_string
result = header + csv_string
# Enhanced logging - Success
duration = time.time() - start_time
logger.info(f"✅ TOOL SUCCESS: get_YFin_data_online | Duration: {duration:.2f}s | Records: {len(data) if not data.empty else 0}")
logger.info(f"📋 TOOL OUTPUT LENGTH: {len(result)} characters")
return result
except Exception as e:
# Enhanced logging - Error (for comparison)
duration = time.time() - start_time
logger.error(f"❌ TOOL ERROR: get_YFin_data_online | Duration: {duration:.2f}s")
logger.error(f"🚨 FULL ERROR DETAILS: {type(e).__name__}: {str(e)}")
raise e
def get_YFin_data(
@@ -703,12 +748,27 @@ def get_YFin_data(
def get_stock_news_openai(ticker, curr_date):
config = get_config()
client = OpenAI(base_url=config["backend_url"])
response = client.responses.create(
model=config["quick_think_llm"],
input=[
import logging
import time
logger = logging.getLogger(__name__)
# Import shared client functions from api.py
import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
from api import get_shared_openai_client, get_compatible_model_for_tools
# Use shared client and compatible model
client = get_shared_openai_client()
model = get_compatible_model_for_tools()
# Enhanced logging - Tool entry
start_time = time.time()
logger.info(f"🔧 TOOL START: get_stock_news_openai | Agent: Social Media Analyst | Ticker: {ticker} | Date: {curr_date} | Model: {model}")
request_params = {
"model": model,
"input": [
{
"role": "system",
"content": [
@@ -719,31 +779,68 @@ def get_stock_news_openai(ticker, curr_date):
],
}
],
text={"format": {"type": "text"}},
reasoning={},
tools=[
"text": {"format": {"type": "text"}},
"reasoning": {},
"tools": [
{
"type": "web_search_preview",
"user_location": {"type": "approximate"},
"search_context_size": "low",
}
],
temperature=1,
max_output_tokens=4096,
top_p=1,
store=True,
)
"temperature": 1,
"max_output_tokens": 4096,
"top_p": 1,
"store": True,
}
# Log full request parameters
logger.info(f"📤 TOOL REQUEST PARAMS: {request_params}")
return response.output[1].content[0].text
try:
response = client.responses.create(**request_params)
# Enhanced logging - Success
duration = time.time() - start_time
logger.info(f"✅ TOOL SUCCESS: get_stock_news_openai | Duration: {duration:.2f}s")
logger.info(f"📥 TOOL RESPONSE STRUCTURE: {type(response)} | Available attrs: {dir(response)}")
result = response.output[1].content[0].text
logger.info(f"📋 TOOL OUTPUT LENGTH: {len(result)} characters")
return result
except Exception as e:
# Enhanced logging - Error
duration = time.time() - start_time
logger.error(f"❌ TOOL ERROR: get_stock_news_openai | Duration: {duration:.2f}s")
logger.error(f"🚨 FULL ERROR DETAILS: {type(e).__name__}: {str(e)}")
if hasattr(e, 'response'):
logger.error(f"🔍 ERROR RESPONSE: {e.response.text if hasattr(e.response, 'text') else 'No response text'}")
raise e
def get_global_news_openai(curr_date):
config = get_config()
client = OpenAI(base_url=config["backend_url"])
response = client.responses.create(
model=config["quick_think_llm"],
input=[
import logging
import time
logger = logging.getLogger(__name__)
# Import shared client functions from api.py
import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
from api import get_shared_openai_client, get_compatible_model_for_tools
# Use shared client and compatible model
client = get_shared_openai_client()
model = get_compatible_model_for_tools()
# Enhanced logging - Tool entry
start_time = time.time()
logger.info(f"🔧 TOOL START: get_global_news_openai | Agent: News Analyst | Date: {curr_date} | Model: {model}")
request_params = {
"model": model,
"input": [
{
"role": "system",
"content": [
@@ -754,31 +851,68 @@ def get_global_news_openai(curr_date):
],
}
],
text={"format": {"type": "text"}},
reasoning={},
tools=[
"text": {"format": {"type": "text"}},
"reasoning": {},
"tools": [
{
"type": "web_search_preview",
"user_location": {"type": "approximate"},
"search_context_size": "low",
}
],
temperature=1,
max_output_tokens=4096,
top_p=1,
store=True,
)
"temperature": 1,
"max_output_tokens": 4096,
"top_p": 1,
"store": True,
}
# Log full request parameters
logger.info(f"📤 TOOL REQUEST PARAMS: {request_params}")
return response.output[1].content[0].text
try:
response = client.responses.create(**request_params)
# Enhanced logging - Success
duration = time.time() - start_time
logger.info(f"✅ TOOL SUCCESS: get_global_news_openai | Duration: {duration:.2f}s")
logger.info(f"📥 TOOL RESPONSE STRUCTURE: {type(response)} | Available attrs: {dir(response)}")
result = response.output[1].content[0].text
logger.info(f"📋 TOOL OUTPUT LENGTH: {len(result)} characters")
return result
except Exception as e:
# Enhanced logging - Error
duration = time.time() - start_time
logger.error(f"❌ TOOL ERROR: get_global_news_openai | Duration: {duration:.2f}s")
logger.error(f"🚨 FULL ERROR DETAILS: {type(e).__name__}: {str(e)}")
if hasattr(e, 'response'):
logger.error(f"🔍 ERROR RESPONSE: {e.response.text if hasattr(e.response, 'text') else 'No response text'}")
raise e
def get_fundamentals_openai(ticker, curr_date):
config = get_config()
client = OpenAI(base_url=config["backend_url"])
response = client.responses.create(
model=config["quick_think_llm"],
input=[
import logging
import time
logger = logging.getLogger(__name__)
# Import shared client functions from api.py
import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
from api import get_shared_openai_client, get_compatible_model_for_tools
# Use shared client and compatible model
client = get_shared_openai_client()
model = get_compatible_model_for_tools()
# Enhanced logging - Tool entry
start_time = time.time()
logger.info(f"🔧 TOOL START: get_fundamentals_openai | Agent: Fundamentals Analyst | Ticker: {ticker} | Date: {curr_date} | Model: {model}")
request_params = {
"model": model,
"input": [
{
"role": "system",
"content": [
@@ -789,19 +923,41 @@ def get_fundamentals_openai(ticker, curr_date):
],
}
],
text={"format": {"type": "text"}},
reasoning={},
tools=[
"text": {"format": {"type": "text"}},
"reasoning": {},
"tools": [
{
"type": "web_search_preview",
"user_location": {"type": "approximate"},
"search_context_size": "low",
}
],
temperature=1,
max_output_tokens=4096,
top_p=1,
store=True,
)
"temperature": 1,
"max_output_tokens": 4096,
"top_p": 1,
"store": True,
}
# Log full request parameters
logger.info(f"📤 TOOL REQUEST PARAMS: {request_params}")
return response.output[1].content[0].text
try:
response = client.responses.create(**request_params)
# Enhanced logging - Success
duration = time.time() - start_time
logger.info(f"✅ TOOL SUCCESS: get_fundamentals_openai | Duration: {duration:.2f}s")
logger.info(f"📥 TOOL RESPONSE STRUCTURE: {type(response)} | Available attrs: {dir(response)}")
result = response.output[1].content[0].text
logger.info(f"📋 TOOL OUTPUT LENGTH: {len(result)} characters")
return result
except Exception as e:
# Enhanced logging - Error
duration = time.time() - start_time
logger.error(f"❌ TOOL ERROR: get_fundamentals_openai | Duration: {duration:.2f}s")
logger.error(f"🚨 FULL ERROR DETAILS: {type(e).__name__}: {str(e)}")
if hasattr(e, 'response'):
logger.error(f"🔍 ERROR RESPONSE: {e.response.text if hasattr(e.response, 'text') else 'No response text'}")
raise e


@@ -0,0 +1,211 @@
import pandas as pd
import yfinance as yf
from stockstats import wrap
from typing import Annotated
import os
import re
from .config import get_config
class StockstatsUtils:
@staticmethod
def clean_date_data(data):
"""Clean malformed date data where years might be duplicated and ensure proper datetime format"""
if 'Date' in data.columns:
# Convert Date column to string if it's not already
data['Date'] = data['Date'].astype(str)
# Fix malformed dates like "20182018-04-02" -> "2018-04-02"
# Pattern: match YYYY + YYYY + rest of date, capture the first year and rest
pattern = r'(\d{4})\1(.*)$' # \1 refers to the first captured group (year)
data['Date'] = data['Date'].str.replace(pattern, r'\1\2', regex=True)
# Also handle any other malformed date patterns
# Remove any duplicate year patterns more broadly
data['Date'] = data['Date'].str.replace(r'(\d{4})(\d{4})-', r'\1-', regex=True)
# Now convert to proper datetime format
try:
data['Date'] = pd.to_datetime(data['Date'], errors='coerce')
# Drop any rows where date conversion failed (NaT values)
data = data.dropna(subset=['Date'])
# Ensure we have a proper datetime column
if not pd.api.types.is_datetime64_any_dtype(data['Date']):
# If still not datetime, try alternative parsing
data['Date'] = pd.to_datetime(data['Date'], format='%Y-%m-%d', errors='coerce')
data = data.dropna(subset=['Date'])
except Exception as e:
print(f"Warning: Could not convert Date column to datetime: {e}")
# Fall back to string format but ensure consistency
data['Date'] = data['Date'].astype(str)
return data
@staticmethod
def prepare_data_for_stockstats(data):
"""Prepare data specifically for stockstats processing"""
if data is None or data.empty:
return data
# Ensure we have the required columns for stockstats
required_columns = ['Open', 'High', 'Low', 'Close', 'Volume']
missing_columns = [col for col in required_columns if col not in data.columns]
if missing_columns:
print(f"Warning: Missing required columns for stockstats: {missing_columns}")
return data
# Clean and prepare data
data = StockstatsUtils.clean_date_data(data)
# If Date is still not datetime, convert it
if 'Date' in data.columns and not pd.api.types.is_datetime64_any_dtype(data['Date']):
try:
data['Date'] = pd.to_datetime(data['Date'], errors='coerce')
data = data.dropna(subset=['Date'])
except Exception as e:
print(f"Final datetime conversion failed: {e}")
# Sort by date to ensure proper order
if 'Date' in data.columns:
data = data.sort_values('Date').reset_index(drop=True)
return data
@staticmethod
def get_stock_stats(
symbol: Annotated[str, "ticker symbol for the company"],
indicator: Annotated[
str, "quantitative indicators based off of the stock data for the company"
],
curr_date: Annotated[
str, "curr date for retrieving stock price data, YYYY-mm-dd"
],
data_dir: Annotated[
str,
"directory where the stock data is stored.",
],
online: Annotated[
bool,
"whether to use online tools to fetch data or offline tools. If True, will use online tools.",
] = False,
):
df = None
data = None
if not online:
try:
data = pd.read_csv(
os.path.join(
data_dir,
f"{symbol}-YFin-data-2015-01-01-2025-03-25.csv",
)
)
# Prepare data for stockstats processing
data = StockstatsUtils.prepare_data_for_stockstats(data)
if data.empty:
return "Error: No valid data after cleaning"
df = wrap(data)
except FileNotFoundError:
raise Exception("Stockstats fail: Yahoo Finance data not fetched yet!")
except Exception as e:
print(f"Error processing offline data for {symbol}: {e}")
return f"Error: {str(e)}"
else:
# Get today's date as YYYY-mm-dd to add to cache
today_date = pd.Timestamp.today()
curr_date_dt = pd.to_datetime(curr_date)
end_date = today_date
start_date = today_date - pd.DateOffset(years=15)
start_date = start_date.strftime("%Y-%m-%d")
end_date = end_date.strftime("%Y-%m-%d")
# Get config and ensure cache directory exists
config = get_config()
os.makedirs(config["data_cache_dir"], exist_ok=True)
data_file = os.path.join(
config["data_cache_dir"],
f"{symbol}-YFin-data-{start_date}-{end_date}.csv",
)
if os.path.exists(data_file):
try:
data = pd.read_csv(data_file)
# Prepare data for stockstats processing
data = StockstatsUtils.prepare_data_for_stockstats(data)
if data.empty:
print(f"No valid data found in cache for {symbol}")
# Remove the corrupted cache file and re-download
os.remove(data_file)
return StockstatsUtils.get_stock_stats(symbol, indicator, curr_date, data_dir, online=True)
except Exception as e:
print(f"Error reading cached data for {symbol}: {e}")
# Remove corrupted cache file and retry
if os.path.exists(data_file):
os.remove(data_file)
return StockstatsUtils.get_stock_stats(symbol, indicator, curr_date, data_dir, online=True)
else:
try:
data = yf.download(
symbol,
start=start_date,
end=end_date,
multi_level_index=False,
progress=False,
auto_adjust=True,
)
if data.empty:
return f"Error: No data available for {symbol}"
data = data.reset_index()
data.to_csv(data_file, index=False)
except Exception as e:
print(f"Error downloading data for {symbol}: {e}")
return f"Error: {str(e)}"
# Prepare data for stockstats processing
data = StockstatsUtils.prepare_data_for_stockstats(data)
if data.empty:
return "Error: No valid data after cleaning"
try:
df = wrap(data)
# Convert curr_date back to string format for comparison
curr_date = curr_date_dt.strftime("%Y-%m-%d")
except Exception as e:
print(f"Error wrapping data with stockstats for {symbol}: {e}")
return f"Error: Failed to process data with stockstats - {str(e)}"
try:
# Trigger stockstats to calculate the indicator
df[indicator]
# Convert Date column to string for comparison if it's datetime
if 'Date' in df.columns and pd.api.types.is_datetime64_any_dtype(df['Date']):
date_strings = df['Date'].dt.strftime('%Y-%m-%d')
matching_rows = df[date_strings == curr_date]
else:
# If Date is already string, use string comparison
matching_rows = df[df["Date"].astype(str).str.startswith(curr_date)]
if not matching_rows.empty:
indicator_value = matching_rows[indicator].values[0]
return indicator_value
else:
return "N/A: Not a trading day (weekend or holiday)"
except KeyError as e:
print(f"Error: Indicator '{indicator}' not found. Available indicators: {list(df.columns)}")
return f"Error: Invalid indicator '{indicator}'"
except Exception as e:
print(f"Error getting stockstats indicator data for indicator {indicator} on {curr_date}: {e}")
return f"Error: {str(e)}"
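As a standalone illustration of the backreference substitution used in `clean_date_data` above (the sample dates here are hypothetical, not from the repo's data):

```python
import pandas as pd

# Hypothetical malformed dates of the kind clean_date_data targets
s = pd.Series(["20182018-04-02", "2018-04-03", "20192019-01-15"])

# \1 backreferences the captured year, so "YYYYYYYY-mm-dd" collapses to "YYYY-mm-dd";
# well-formed dates contain no repeated 4-digit run and pass through unchanged
cleaned = s.str.replace(r"(\d{4})\1(.*)$", r"\1\2", regex=True)

# Anything still malformed becomes NaT and can be dropped downstream
dates = pd.to_datetime(cleaned, errors="coerce")
```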


@@ -1,13 +1,20 @@
import os
# Get the backend directory (parent of tradingagents package)
BACKEND_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".."))
PROJECT_ROOT = os.path.abspath(os.path.join(BACKEND_DIR, ".."))
DEFAULT_CONFIG = {
"project_dir": os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
"results_dir": os.getenv("TRADINGAGENTS_RESULTS_DIR", "./results"),
"data_dir": "/Users/yluo/Documents/Code/ScAI/FR1-data",
"results_dir": os.getenv("TRADINGAGENTS_RESULTS_DIR", os.path.join(BACKEND_DIR, "results")),
"data_dir": os.getenv("TRADINGAGENTS_DATA_DIR", os.path.join(BACKEND_DIR, "data")),
"data_cache_dir": os.path.join(
os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
"dataflows/data_cache",
),
# API Server settings
"api_host": os.getenv("TRADINGAGENTS_API_HOST", "localhost"),
"api_port": int(os.getenv("TRADINGAGENTS_API_PORT", "8000")),
# LLM settings
"llm_provider": "openai",
"deep_think_llm": "o4-mini",

backend/verify_api.sh Executable file

@@ -0,0 +1,62 @@
#!/bin/bash
# Colors for output
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
echo "🚀 TradingAgents API Verification"
echo "================================"
# Check if server is running
echo -n "Checking if API server is running... "
if curl -s http://localhost:8000/ > /dev/null; then
echo -e "${GREEN}✓ Server is running${NC}"
else
echo -e "${RED}✗ Server is not running${NC}"
echo -e "${YELLOW}Please start the server with: python run_api.py${NC}"
exit 1
fi
# Test root endpoint
echo -n "Testing root endpoint... "
ROOT_RESPONSE=$(curl -s http://localhost:8000/)
if [[ $ROOT_RESPONSE == *"TradingAgents API is running"* ]]; then
echo -e "${GREEN}✓ OK${NC}"
else
echo -e "${RED}✗ Failed${NC}"
fi
# Test health endpoint
echo -n "Testing health endpoint... "
HEALTH_RESPONSE=$(curl -s http://localhost:8000/health)
if [[ $HEALTH_RESPONSE == *"healthy"* ]]; then
echo -e "${GREEN}✓ OK${NC}"
else
echo -e "${RED}✗ Failed${NC}"
fi
# Check environment variables
echo ""
echo "Checking environment variables:"
if [ -z "$OPENAI_API_KEY" ]; then
echo -e "${RED}✗ OPENAI_API_KEY not set${NC}"
else
echo -e "${GREEN}✓ OPENAI_API_KEY is set${NC}"
fi
if [ -z "$FINNHUB_API_KEY" ]; then
echo -e "${RED}✗ FINNHUB_API_KEY not set${NC}"
else
echo -e "${GREEN}✓ FINNHUB_API_KEY is set${NC}"
fi
echo ""
echo "================================"
echo -e "${GREEN}API verification complete!${NC}"
echo ""
echo "Next steps:"
echo "1. Visit http://localhost:8000/docs for interactive API docs"
echo "2. Run 'python test_api.py' for comprehensive testing"
echo "3. Test from iOS app or use curl commands"

docs/FASTAPI_SETUP.md Normal file

@@ -0,0 +1,153 @@
# TradingAgents FastAPI Setup Guide
## Overview
This guide explains how to run TradingAgents as a FastAPI server and integrate it with your Swift app.
## Prerequisites
- Python 3.8+
- API keys for OpenAI (or other LLM providers)
- Swift/SwiftUI project with ReSwift
## Python Server Setup
### 1. Install Dependencies
```bash
pip install -r requirements.txt
```
### 2. Set Environment Variables
Create a `.env` file in the project root:
```bash
# Required
OPENAI_API_KEY=your_openai_api_key_here
# Optional (defaults shown)
DEEP_THINK_MODEL=gpt-4o-mini
QUICK_THINK_MODEL=gpt-4o-mini
BACKEND_URL=https://api.openai.com/v1
```
### 3. Run the FastAPI Server
```bash
python run_api.py
```
The server will start at `http://localhost:8000`.
### 4. Test the API
```bash
# Test health endpoint
curl http://localhost:8000/health
# Test analysis endpoint
curl -X POST http://localhost:8000/analyze \
-H "Content-Type: application/json" \
-d '{"ticker": "AAPL"}'
```
## Swift Integration
### 1. Add ReSwift to Your Project
In Xcode, go to File → Add Package Dependencies and add:
```
https://github.com/ReSwift/ReSwift
```
### 2. Copy Swift Files
Copy these files to your Swift project:
- `TradingAgents_Swift_Integration.swift` - Redux architecture and networking
- `TradingAnalysisView.swift` - SwiftUI views
### 3. Update Your App
In your main app file, initialize the store and show the view:
```swift
import SwiftUI
import ReSwift
@main
struct YourApp: App {
var body: some Scene {
WindowGroup {
TradingAnalysisView()
}
}
}
```
### 4. Configure API URL
If running on a real device or on a different network, update the base URL in `TradingAgentsAPIService`:
```swift
private let baseURL = "http://your-server-ip:8000"
```
## API Endpoints
### POST /analyze
Analyzes a stock ticker.
**Request:**
```json
{
"ticker": "AAPL"
}
```
**Response:**
```json
{
"ticker": "AAPL",
"analysis_date": "2024-01-15",
"market_report": "Technical analysis...",
"sentiment_report": "Social sentiment analysis...",
"news_report": "Recent news analysis...",
"fundamentals_report": "Fundamental analysis...",
"investment_plan": "Research team recommendation...",
"trader_investment_plan": "Trading strategy...",
"final_trade_decision": "Final decision...",
"processed_signal": "BUY"
}
```
## Production Deployment
### Docker (Recommended)
Create a `Dockerfile`:
```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "api:app", "--host", "0.0.0.0", "--port", "8000"]
```
Build and run:
```bash
docker build -t tradingagents-api .
docker run -p 8000:8000 --env-file .env tradingagents-api
```
### Security Considerations
1. Use HTTPS in production
2. Add API authentication
3. Implement rate limiting
4. Validate and sanitize inputs
5. Use environment-specific configurations
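For item 3, one in-process approach is a token bucket; a minimal sketch assuming a single-worker server (the class and parameter names are illustrative, not part of this repo):

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests per second, bursting up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A request handler would check `bucket.allow()` and return HTTP 429 when it is `False`; multi-worker deployments would need a shared store instead.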
## Troubleshooting
### Common Issues
1. **Import errors**: Ensure all dependencies are installed
2. **API key errors**: Check your `.env` file
3. **Connection refused**: Verify the server is running and accessible
4. **CORS errors**: Check CORS configuration matches your Swift app's needs
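For the CORS case, FastAPI's `CORSMiddleware` is the usual fix; a sketch under the assumption that the repo's `api.py` creates its own `app` (the allowed origin below is purely illustrative):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the client app to call the API; tighten allow_origins for production
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # illustrative dev origin
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)
```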
### Performance Tips
- The analysis can take 30-60 seconds depending on the LLM models
- Consider implementing caching for repeated requests
- Use background processing for long-running analyses
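Caching repeated requests can be sketched with a small TTL decorator (an illustrative helper, not part of the repo; real deployments might prefer an external cache such as Redis):

```python
import time
from typing import Any, Callable

def ttl_cache(ttl_seconds: float) -> Callable:
    """Cache a function's result per positional-argument tuple for ttl_seconds."""
    def decorator(func: Callable) -> Callable:
        store: dict = {}

        def wrapper(*args) -> Any:
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]  # fresh cached value
            value = func(*args)
            store[args] = (now, value)
            return value

        return wrapper
    return decorator
```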

docs/README.md Normal file

@@ -0,0 +1,39 @@
# Trading Agents Documentation
This directory contains all project documentation.
## Available Documents
### Project Documentation
- **`PRD.md`** - Product Requirements Document
- Project overview and objectives
- System architecture
- Feature specifications
- **`DOCUMENTATION.md`** - Technical Documentation
- Detailed system design
- Component descriptions
- Integration guides
### Setup Guides
- **`FASTAPI_SETUP.md`** - FastAPI Server Setup
- API endpoint documentation
- Server configuration
- Deployment instructions
- **`SIMPLIFIED_CLI_GUIDE.md`** - CLI Usage Guide
- Command-line interface tutorial
- Available commands
- Usage examples
### Development
- **`TODO.md`** - Project Roadmap
- Planned features
- Known issues
- Future improvements
## Quick Links
- [Backend Setup](../backend/README.md)
- [iOS App Setup](../ios/README.md)
- [Main Project README](../README.md)


@ -1,87 +0,0 @@
import pandas as pd
import yfinance as yf
from stockstats import wrap
from typing import Annotated
import os
from .config import get_config
class StockstatsUtils:
@staticmethod
def get_stock_stats(
symbol: Annotated[str, "ticker symbol for the company"],
indicator: Annotated[
str, "quantitative indicators based off of the stock data for the company"
],
curr_date: Annotated[
str, "curr date for retrieving stock price data, YYYY-mm-dd"
],
data_dir: Annotated[
str,
"directory where the stock data is stored.",
],
online: Annotated[
bool,
"whether to use online tools to fetch data or offline tools. If True, will use online tools.",
] = False,
):
df = None
data = None
if not online:
try:
data = pd.read_csv(
os.path.join(
data_dir,
f"{symbol}-YFin-data-2015-01-01-2025-03-25.csv",
)
)
df = wrap(data)
except FileNotFoundError:
raise Exception("Stockstats fail: Yahoo Finance data not fetched yet!")
else:
# Get today's date as YYYY-mm-dd to add to cache
today_date = pd.Timestamp.today()
curr_date = pd.to_datetime(curr_date)
end_date = today_date
start_date = today_date - pd.DateOffset(years=15)
start_date = start_date.strftime("%Y-%m-%d")
end_date = end_date.strftime("%Y-%m-%d")
# Get config and ensure cache directory exists
config = get_config()
os.makedirs(config["data_cache_dir"], exist_ok=True)
data_file = os.path.join(
config["data_cache_dir"],
f"{symbol}-YFin-data-{start_date}-{end_date}.csv",
)
if os.path.exists(data_file):
data = pd.read_csv(data_file)
data["Date"] = pd.to_datetime(data["Date"])
else:
data = yf.download(
symbol,
start=start_date,
end=end_date,
multi_level_index=False,
progress=False,
auto_adjust=True,
)
data = data.reset_index()
data.to_csv(data_file, index=False)
df = wrap(data)
df["Date"] = df["Date"].dt.strftime("%Y-%m-%d")
curr_date = curr_date.strftime("%Y-%m-%d")
df[indicator] # trigger stockstats to calculate the indicator
matching_rows = df[df["Date"].str.startswith(curr_date)]
if not matching_rows.empty:
indicator_value = matching_rows[indicator].values[0]
return indicator_value
else:
return "N/A: Not a trading day (weekend or holiday)"

uv.lock

File diff suppressed because it is too large