Merge branch 'korean' into dev/web

kimheesu 2025-07-01 16:38:31 +09:00
commit dcaa02f26c
32 changed files with 8331 additions and 3111 deletions

.gitignore (vendored)

@@ -1,123 +1,3 @@
# Environment variable files
.env
web/backend/.env
*.env
env_local.txt
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# Django
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
/staticfiles/
/media/
# Virtual Environment
venv/
env/
ENV/
env.bak/
venv.bak/
# IDEs
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Node.js (React)
web/frontend/node_modules/
web/frontend/build/
web/frontend/dist/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Docker
docker-compose.override.yml
# Logs
logs/
*.log
# Coverage reports
htmlcov/
.coverage
.coverage.*
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Trading specific
trading_data/
analysis_results/
temp_files/
env/
__pycache__/
.DS_Store

.python-version (new file)

@@ -0,0 +1 @@
3.10

README.md

@@ -80,7 +80,7 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
- Composes reports from the analysts and researchers to make informed trading decisions. It determines the timing and magnitude of trades based on comprehensive market insights.
<p align="center">
-  <img src="assets/risk.png" width="70%" style="display: inline-block; margin: 0 2%;">
+  <img src="assets/trader.png" width="70%" style="display: inline-block; margin: 0 2%;">
</p>
### Risk Management and Portfolio Manager
@@ -88,7 +88,7 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
- The Portfolio Manager approves/rejects the transaction proposal. If approved, the order will be sent to the simulated exchange and executed.
<p align="center">
-  <img src="assets/trader.png" width="70%" style="display: inline-block; margin: 0 2%;">
+  <img src="assets/risk.png" width="70%" style="display: inline-block; margin: 0 2%;">
</p>
## Installation and CLI
@@ -119,9 +119,10 @@ You will also need the FinnHub API for financial data. All of our code is implem
export FINNHUB_API_KEY=$YOUR_FINNHUB_API_KEY
```
-You will need the OpenAI API for all the agents.
+You will need the OpenAI API or GEMINI API for all the agents.
```bash
export OPENAI_API_KEY=$YOUR_OPENAI_API_KEY
+export GEMINI_API_KEY=$YOUR_GEMINI_API_KEY
```
### CLI Usage
@@ -211,298 +212,3 @@ Please reference our work if you find *TradingAgents* provides you with some hel
url={https://arxiv.org/abs/2412.20138},
}
```
# TradingAgents Web Application
A React + Django web application that brings the CLI features to the web.
## Key Features
1. **User Authentication**
- JWT-based login and registration
- OpenAI API key management (stored encrypted)
- Fallback to a developer default key
2. **Trading Analysis**
- Every CLI analysis feature, available in the browser
- Real-time analysis progress (WebSocket)
- Analysis history management
3. **User Experience**
- Modern React UI (Ant Design)
- Responsive design
- Real-time updates
## Tech Stack
### Backend
- **Django 4.2** - web framework
- **Django REST Framework** - API development
- **Django Channels** - WebSocket support
- **MySQL 8.0** - database (Docker)
- **Redis 7** - WebSocket message broker (Docker)
- **JWT** - authentication
### Frontend
- **React 18** - UI library
- **Ant Design** - UI components
- **Styled Components** - styling
- **Axios** - HTTP client
- **WebSocket** - real-time communication
## Installation and Setup
### 1. Environment Setup
```bash
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
# Install Python dependencies
pip install -r requirements.txt
# Install Node.js dependencies
cd web/frontend
npm install
cd ../..
```
### 2. Database and Redis Setup (Docker)
MySQL and Redis run via Docker and Docker Compose.
```bash
# Verify that Docker and Docker Compose are installed
docker --version
docker-compose --version
# Use the convenience script (recommended)
chmod +x scripts/docker-commands.sh
./scripts/docker-commands.sh start
# Or run Docker Compose directly
docker-compose up -d mysql redis
# Also start phpMyAdmin (for database administration)
./scripts/docker-commands.sh start-all
# Check container status
./scripts/docker-commands.sh status
```
### 3. Environment Variables
Create a `web/backend/.env` file, using `env_example.txt` as a reference:
```bash
# Copy the example file as a starting point
cp web/backend/env_example.txt web/backend/.env
# Edit .env and fill in real values
nano web/backend/.env  # or any other text editor
```
Key settings:
```env
# Django settings
SECRET_KEY=your-secret-key-here-change-this-to-a-random-string
DEBUG=True
ALLOWED_HOSTS=localhost,127.0.0.1
# MySQL database settings (Docker)
DB_NAME=tradingagents_web
DB_USER=root
DB_PASSWORD=your-mysql-password-here
DB_HOST=127.0.0.1
DB_PORT=3306
# Redis settings (Docker)
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
# OpenAI API key (developer default key)
OPENAI_API_KEY=your-openai-api-key-here
```
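For illustration, here is a minimal standalone parser for the `KEY=value` format shown above. The real backend presumably reads `.env` via python-decouple; this sketch only mirrors the file format and is not the project's actual loader.

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

example = """
# Django settings
SECRET_KEY=change-me
DEBUG=True
DB_PORT=3306
"""

config = parse_env(example)
print(config["DB_PORT"])  # prints: 3306 (a string; cast to int where needed)
```

Note that every value comes back as a string; booleans and ports need explicit casting, which is exactly what python-decouple's `config(..., cast=...)` handles in the real setup.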
### 4. Database Migration
```bash
cd web/backend
python manage.py makemigrations
python manage.py migrate
python manage.py createsuperuser  # create an admin account
```
### 5. Running the Development Servers
**Terminal 1 - Docker containers (MySQL + Redis):**
```bash
# Run in the background
docker-compose up -d mysql redis
# Or run in the foreground to watch the logs
docker-compose up mysql redis
```
**Terminal 2 - Django backend:**
```bash
cd web/backend
python manage.py runserver
```
**Terminal 3 - React frontend:**
```bash
cd web/frontend
npm start
```
## Access Points
- **Frontend**: http://localhost:3000
- **Backend API**: http://localhost:8000
- **Django Admin**: http://localhost:8000/admin
- **phpMyAdmin** (optional): http://localhost:8080
## API Endpoints
### Authentication
- `POST /api/auth/register/` - sign up
- `POST /api/auth/login/` - log in
- `GET /api/auth/user/` - user info
- `PUT /api/auth/profile/` - update profile
- `POST /api/auth/check-api-key/` - validate API key
### Trading Analysis
- `GET /api/trading/config/` - analysis configuration
- `POST /api/trading/start/` - start an analysis
- `GET /api/trading/status/{id}/` - analysis status
- `GET /api/trading/history/` - analysis history
- `GET /api/trading/report/{id}/` - analysis report
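As a hedged sketch, a call to the start endpoint above might be assembled like this. The `Bearer` header follows the SimpleJWT convention; the payload field names (`ticker`, `analysis_date`) are assumptions for illustration, not taken from the backend code.

```python
import json

BASE = "http://localhost:8000"

def build_start_request(token: str, ticker: str, analysis_date: str):
    """Return (url, headers, body) for POST /api/trading/start/."""
    url = f"{BASE}/api/trading/start/"
    headers = {
        "Authorization": f"Bearer {token}",  # SimpleJWT-style access token
        "Content-Type": "application/json",
    }
    # Field names below are illustrative assumptions
    body = json.dumps({"ticker": ticker, "analysis_date": analysis_date})
    return url, headers, body

url, headers, body = build_start_request("eyJ...", "NVDA", "2024-05-10")
# e.g. requests.post(url, headers=headers, data=body)
```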
### WebSocket
- `ws://localhost:8000/ws/trading-analysis/` - real-time analysis updates
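A sketch of the kind of progress message this endpoint could carry. The schema (`type`, `stage`, `percent`) is illustrative only; the actual Channels consumer in `apps/websocket` defines its own format.

```python
import json

def progress_message(analysis_id: int, stage: str, percent: int) -> str:
    """Serialize one progress update for ws://.../ws/trading-analysis/."""
    return json.dumps({
        "type": "analysis.progress",  # assumed message type
        "analysis_id": analysis_id,
        "stage": stage,               # e.g. which analyst is currently running
        "percent": percent,
    })

msg = json.loads(progress_message(42, "market_analyst", 25))
```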
## OpenAI API Key Management
1. **Per-user key**: the personal OpenAI API key a user sets in their profile
2. **Developer default key**: `OPENAI_API_KEY` from the `.env` file (used when the user has no key)
3. **Security**: user keys are encrypted before being stored in the database
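The resolution order can be sketched as follows. base64 stands in for the real encryption here (the project encrypts keys before storing them, e.g. with the `cryptography` package), so this is illustrative only; never use base64 as encryption.

```python
import base64

def encrypt(key: str) -> str:
    """Placeholder for real encryption - base64 is NOT encryption."""
    return base64.b64encode(key.encode()).decode()

def decrypt(token: str) -> str:
    return base64.b64decode(token.encode()).decode()

def resolve_api_key(user_encrypted_key, env_key):
    """The user's personal key wins; otherwise fall back to the .env key."""
    if user_encrypted_key:
        return decrypt(user_encrypted_key)
    return env_key

assert resolve_api_key(encrypt("sk-user"), "sk-dev") == "sk-user"
assert resolve_api_key(None, "sk-dev") == "sk-dev"
```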
## Project Structure
```
├── cli/                        # existing CLI code
├── web/
│   ├── backend/                # Django backend
│   │   ├── tradingagents_web/  # project settings
│   │   └── apps/               # Django apps
│   │       ├── authentication/ # user authentication
│   │       ├── trading_api/    # trading analysis API
│   │       └── websocket/      # WebSocket handling
│   └── frontend/               # React frontend
│       ├── public/
│       └── src/
│           ├── components/     # reusable components
│           ├── contexts/       # React Context
│           ├── pages/          # page components
│           ├── services/       # API services
│           └── styles/         # styling
└── requirements.txt            # Python dependencies
```
## Development Guide
### Adding a New Analysis Feature
1. Add a new service in `apps/trading_api/services.py`
2. Add a new view in `apps/trading_api/views.py`
3. Add a URL pattern in `apps/trading_api/urls.py`
4. Call the new API from the frontend
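The layering in the steps above can be sketched without Django; every name below (`run_sentiment_analysis`, the URL path) is invented purely for illustration, not part of the actual codebase.

```python
# services.py - pure business logic, no framework imports
def run_sentiment_analysis(ticker: str) -> dict:
    return {"ticker": ticker, "sentiment": "neutral"}  # stubbed result

# views.py - a thin wrapper, as a DRF view would be
def sentiment_view(params: dict) -> dict:
    return run_sentiment_analysis(params["ticker"])

# urls.py - map a URL pattern to its handler
urlpatterns = {
    "/api/trading/sentiment/": sentiment_view,
}

result = urlpatterns["/api/trading/sentiment/"]({"ticker": "NVDA"})
```

Keeping the service free of framework imports is what lets the same analysis code back both the CLI and the web API.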
### Adding a New Page
1. Create a new page component in the `src/pages/` directory
2. Add a route in `src/App.js`
3. Add a menu entry in the layout if needed
## Deployment
### Docker Compose (recommended)
```bash
# Start every service at once (development environment)
docker-compose up -d
# Start specific services only
docker-compose up -d mysql redis
# For production, a separate docker-compose.prod.yml is recommended
docker-compose -f docker-compose.prod.yml up -d
```
### Manual Deployment
1. **Build the frontend**:
```bash
cd web/frontend
npm run build
```
2. **Collect Django static files**:
```bash
cd web/backend
python manage.py collectstatic
```
3. **Configure the production server** (Nginx + Gunicorn + Daphne)
## Troubleshooting
### Common Issues
1. **Docker containers**
```bash
# Check container status
docker-compose ps
# Check container logs
docker-compose logs mysql
docker-compose logs redis
# Restart containers
docker-compose restart mysql redis
```
2. **WebSocket connection failures**
- Verify the Redis container is running: `docker-compose ps`
- Check firewall settings
3. **API key errors**
- Check `OPENAI_API_KEY` in the `.env` file
- Reset the API key in the user profile
4. **Database connection errors**
- Check the MySQL container: `docker-compose logs mysql`
- Check the database connection settings in the `.env` file
- Check for container port conflicts: `docker port tradingagents_mysql`
5. **MySQL container initialization problems**
```bash
# Delete the volumes and restart (warning: data loss!)
docker-compose down -v
docker-compose up -d mysql redis
```
## License
This project follows the license of the original TradingAgents project.
## Contributing
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

File diff suppressed because it is too large


@@ -1,195 +1,277 @@
import questionary
from typing import List, Optional, Tuple, Dict

from cli.models import AnalystType

ANALYST_ORDER = [
    ("Market Analyst", AnalystType.MARKET),
    ("Social Media Analyst", AnalystType.SOCIAL),
    ("News Analyst", AnalystType.NEWS),
    ("Fundamentals Analyst", AnalystType.FUNDAMENTALS),
]


def get_ticker() -> str:
    """Prompt the user to enter a ticker symbol."""
    ticker = questionary.text(
        "Enter the ticker symbol to analyze:",
        validate=lambda x: len(x.strip()) > 0 or "Please enter a valid ticker symbol.",
        style=questionary.Style(
            [
                ("text", "fg:green"),
                ("highlighted", "noinherit"),
            ]
        ),
    ).ask()

    if not ticker:
        console.print("\n[red]No ticker symbol provided. Exiting...[/red]")
        exit(1)

    return ticker.strip().upper()


def get_analysis_date() -> str:
    """Prompt the user to enter a date in YYYY-MM-DD format."""
    import re
    from datetime import datetime

    def validate_date(date_str: str) -> bool:
        if not re.match(r"^\d{4}-\d{2}-\d{2}$", date_str):
            return False
        try:
            datetime.strptime(date_str, "%Y-%m-%d")
            return True
        except ValueError:
            return False

    date = questionary.text(
        "Enter the analysis date (YYYY-MM-DD):",
        validate=lambda x: validate_date(x.strip())
        or "Please enter a valid date in YYYY-MM-DD format.",
        style=questionary.Style(
            [
                ("text", "fg:green"),
                ("highlighted", "noinherit"),
            ]
        ),
    ).ask()

    if not date:
        console.print("\n[red]No date provided. Exiting...[/red]")
        exit(1)

    return date.strip()


def select_analysts() -> List[AnalystType]:
    """Select analysts using an interactive checkbox."""
    choices = questionary.checkbox(
        "Select Your [Analysts Team]:",
        choices=[
            questionary.Choice(display, value=value) for display, value in ANALYST_ORDER
        ],
        instruction="\n- Press Space to select/unselect analysts\n- Press 'a' to select/unselect all\n- Press Enter when done",
        validate=lambda x: len(x) > 0 or "You must select at least one analyst.",
        style=questionary.Style(
            [
                ("checkbox-selected", "fg:green"),
                ("selected", "fg:green noinherit"),
                ("highlighted", "noinherit"),
                ("pointer", "noinherit"),
            ]
        ),
    ).ask()

    if not choices:
        console.print("\n[red]No analysts selected. Exiting...[/red]")
        exit(1)

    return choices


def select_research_depth() -> int:
    """Select research depth using an interactive selection."""

    # Define research depth options with their corresponding values
    DEPTH_OPTIONS = [
        ("Shallow - Quick research, few debate and strategy discussion rounds", 1),
        ("Medium - Middle ground, moderate debate rounds and strategy discussion", 3),
        ("Deep - Comprehensive research, in depth debate and strategy discussion", 5),
    ]

    choice = questionary.select(
        "Select Your [Research Depth]:",
        choices=[
            questionary.Choice(display, value=value) for display, value in DEPTH_OPTIONS
        ],
        instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
        style=questionary.Style(
            [
                ("selected", "fg:yellow noinherit"),
                ("highlighted", "fg:yellow noinherit"),
                ("pointer", "fg:yellow noinherit"),
            ]
        ),
    ).ask()

    if choice is None:
        console.print("\n[red]No research depth selected. Exiting...[/red]")
        exit(1)

    return choice
-def select_shallow_thinking_agent() -> str:
+def select_shallow_thinking_agent(provider) -> str:
     """Select shallow thinking llm engine using an interactive selection."""

     # Define shallow thinking llm engine options with their corresponding model names
-    SHALLOW_AGENT_OPTIONS = [
-        ("GPT-4o-mini - Fast and efficient for quick tasks", "gpt-4o-mini"),
-        ("GPT-4.1-nano - Ultra-lightweight model for basic operations", "gpt-4.1-nano"),
-        ("GPT-4.1-mini - Compact model with good performance", "gpt-4.1-mini"),
-        ("GPT-4o - Standard model with solid capabilities", "gpt-4o"),
-    ]
+    SHALLOW_AGENT_OPTIONS = {
+        "openai": [
+            ("GPT-4o-mini - Fast and efficient for quick tasks", "gpt-4o-mini"),
+            ("GPT-4.1-nano - Ultra-lightweight model for basic operations", "gpt-4.1-nano"),
+            ("GPT-4.1-mini - Compact model with good performance", "gpt-4.1-mini"),
+            ("GPT-4o - Standard model with solid capabilities", "gpt-4o"),
+        ],
+        "anthropic": [
+            ("Claude Haiku 3.5 - Fast inference and standard capabilities", "claude-3-5-haiku-latest"),
+            ("Claude Sonnet 3.5 - Highly capable standard model", "claude-3-5-sonnet-latest"),
+            ("Claude Sonnet 3.7 - Exceptional hybrid reasoning and agentic capabilities", "claude-3-7-sonnet-latest"),
+            ("Claude Sonnet 4 - High performance and excellent reasoning", "claude-sonnet-4-0"),
+        ],
+        "google": [
+            ("Gemini 2.0 Flash - Next generation features, speed, and thinking", "gemini-2.0-flash"),
+            ("Gemini 2.5 Flash-Lite - Cost efficiency and low latency", "gemini-2.5-flash-lite-preview-06-17"),
+            ("Gemini 2.5 Flash - Adaptive thinking, cost efficiency", "gemini-2.5-flash"),
+        ],
+        "openrouter": [
+            ("Meta: Llama 4 Scout", "meta-llama/llama-4-scout:free"),
+            ("Meta: Llama 3.3 8B Instruct - A lightweight and ultra-fast variant of Llama 3.3 70B", "meta-llama/llama-3.3-8b-instruct:free"),
+            ("google/gemini-2.0-flash-exp:free - Gemini Flash 2.0 offers a significantly faster time to first token", "google/gemini-2.0-flash-exp:free"),
+        ],
+        "ollama": [
+            ("llama3.1 local", "llama3.1"),
+            ("llama3.2 local", "llama3.2"),
+        ],
+    }

     choice = questionary.select(
         "Select Your [Quick-Thinking LLM Engine]:",
         choices=[
             questionary.Choice(display, value=value)
-            for display, value in SHALLOW_AGENT_OPTIONS
+            for display, value in SHALLOW_AGENT_OPTIONS[provider.lower()]
         ],
         instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
         style=questionary.Style(
             [
                 ("selected", "fg:magenta noinherit"),
                 ("highlighted", "fg:magenta noinherit"),
                 ("pointer", "fg:magenta noinherit"),
             ]
         ),
     ).ask()

     if choice is None:
         console.print(
             "\n[red]No shallow thinking llm engine selected. Exiting...[/red]"
         )
         exit(1)

     return choice


-def select_deep_thinking_agent() -> str:
+def select_deep_thinking_agent(provider) -> str:
     """Select deep thinking llm engine using an interactive selection."""

     # Define deep thinking llm engine options with their corresponding model names
-    DEEP_AGENT_OPTIONS = [
-        ("GPT-4.1-nano - Ultra-lightweight model for basic operations", "gpt-4.1-nano"),
-        ("GPT-4.1-mini - Compact model with good performance", "gpt-4.1-mini"),
-        ("GPT-4o - Standard model with solid capabilities", "gpt-4o"),
-        ("o4-mini - Specialized reasoning model (compact)", "o4-mini"),
-        ("o3-mini - Advanced reasoning model (lightweight)", "o3-mini"),
-        ("o3 - Full advanced reasoning model", "o3"),
-        ("o1 - Premier reasoning and problem-solving model", "o1"),
-    ]
+    DEEP_AGENT_OPTIONS = {
+        "openai": [
+            ("GPT-4.1-nano - Ultra-lightweight model for basic operations", "gpt-4.1-nano"),
+            ("GPT-4.1-mini - Compact model with good performance", "gpt-4.1-mini"),
+            ("GPT-4o - Standard model with solid capabilities", "gpt-4o"),
+            ("o4-mini - Specialized reasoning model (compact)", "o4-mini"),
+            ("o3-mini - Advanced reasoning model (lightweight)", "o3-mini"),
+            ("o3 - Full advanced reasoning model", "o3"),
+            ("o1 - Premier reasoning and problem-solving model", "o1"),
+        ],
+        "anthropic": [
+            ("Claude Haiku 3.5 - Fast inference and standard capabilities", "claude-3-5-haiku-latest"),
+            ("Claude Sonnet 3.5 - Highly capable standard model", "claude-3-5-sonnet-latest"),
+            ("Claude Sonnet 3.7 - Exceptional hybrid reasoning and agentic capabilities", "claude-3-7-sonnet-latest"),
+            ("Claude Sonnet 4 - High performance and excellent reasoning", "claude-sonnet-4-0"),
+            ("Claude Opus 4 - Most powerful Anthropic model", "claude-opus-4-0"),
+        ],
+        "google": [
+            ("Gemini 2.0 Flash - Next generation features, speed, and thinking", "gemini-2.0-flash"),
+            ("Gemini 2.5 Flash-Lite - Cost efficiency and low latency", "gemini-2.5-flash-lite-preview-06-17"),
+            ("Gemini 2.5 Flash - Adaptive thinking, cost efficiency", "gemini-2.5-flash"),
+            ("Gemini 2.5 Pro - Most powerful Gemini model", "gemini-2.5-pro"),
+        ],
+        "openrouter": [
+            ("DeepSeek V3 - a 685B-parameter, mixture-of-experts model", "deepseek/deepseek-chat-v3-0324:free"),
+            ("Deepseek - latest iteration of the flagship chat model family from the DeepSeek team.", "deepseek/deepseek-chat-v3-0324:free"),
+        ],
+        "ollama": [
+            ("llama3.1 local", "llama3.1"),
+            ("qwen3", "qwen3"),
+        ],
+    }

     choice = questionary.select(
         "Select Your [Deep-Thinking LLM Engine]:",
         choices=[
             questionary.Choice(display, value=value)
-            for display, value in DEEP_AGENT_OPTIONS
+            for display, value in DEEP_AGENT_OPTIONS[provider.lower()]
         ],
         instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
         style=questionary.Style(
             [
                 ("selected", "fg:magenta noinherit"),
                 ("highlighted", "fg:magenta noinherit"),
                 ("pointer", "fg:magenta noinherit"),
             ]
         ),
     ).ask()

     if choice is None:
         console.print("\n[red]No deep thinking llm engine selected. Exiting...[/red]")
         exit(1)

     return choice


+def select_llm_provider() -> tuple[str, str]:
+    """Select the LLM provider and its API endpoint using interactive selection."""
+    # Define provider options with their corresponding endpoints
+    BASE_URLS = [
+        ("OpenAI", "https://api.openai.com/v1"),
+        ("Anthropic", "https://api.anthropic.com/"),
+        ("Google", "https://generativelanguage.googleapis.com/v1"),
+        ("Openrouter", "https://openrouter.ai/api/v1"),
+        ("Ollama", "http://localhost:11434/v1"),
+    ]
+
+    choice = questionary.select(
+        "Select your LLM Provider:",
+        choices=[
+            questionary.Choice(display, value=(display, value))
+            for display, value in BASE_URLS
+        ],
+        instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
+        style=questionary.Style(
+            [
+                ("selected", "fg:magenta noinherit"),
+                ("highlighted", "fg:magenta noinherit"),
+                ("pointer", "fg:magenta noinherit"),
+            ]
+        ),
+    ).ask()
+
+    if choice is None:
+        console.print("\n[red]No LLM provider selected. Exiting...[/red]")
+        exit(1)
+
+    display_name, url = choice
+    print(f"You selected: {display_name}\tURL: {url}")
+
+    return display_name, url
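The provider-keyed option tables change how a model is chosen: the display name returned by `select_llm_provider()` is lower-cased and used as the dictionary key. A standalone sketch of that lookup, with trimmed-down copies of the tables and the inputs hard-coded instead of prompted interactively:

```python
# Trimmed-down copies of the option tables, one entry per provider
SHALLOW_AGENT_OPTIONS = {
    "openai": [("GPT-4o-mini - Fast and efficient for quick tasks", "gpt-4o-mini")],
    "google": [("Gemini 2.5 Flash - Adaptive thinking, cost efficiency", "gemini-2.5-flash")],
}

provider = "Google"  # display name, as returned by select_llm_provider()
options = SHALLOW_AGENT_OPTIONS[provider.lower()]  # .lower() matches the dict keys
display, model = options[0]
```

A provider name without a matching key would raise `KeyError` here, which is why the selectors only accept providers listed in `BASE_URLS`.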

main.py

@@ -1,19 +1,21 @@
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

# Create a custom config
config = DEFAULT_CONFIG.copy()
-config["deep_think_llm"] = "gpt-4.1-nano"  # Use a different model
-config["quick_think_llm"] = "gpt-4.1-nano"  # Use a different model
+config["llm_provider"] = "google"  # Use a different provider
+config["backend_url"] = "https://generativelanguage.googleapis.com/v1"  # Use a different backend
+config["deep_think_llm"] = "gemini-2.5-pro"  # Use a different model
+config["quick_think_llm"] = "gemini-2.5-flash-lite-preview-06-17"  # Use a different model
config["max_debate_rounds"] = 1  # Increase debate rounds
config["online_tools"] = True  # Use online tools

# Initialize with custom config
ta = TradingAgentsGraph(debug=True, config=config)

# forward propagate
_, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)

# Memorize mistakes and reflect
# ta.reflect_and_remember(1000)  # parameter is the position returns

pyproject.toml (new file)

@@ -0,0 +1,34 @@
[project]
name = "tradingagents"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
"akshare>=1.16.98",
"backtrader>=1.9.78.123",
"chainlit>=2.5.5",
"chromadb>=1.0.12",
"eodhd>=1.0.32",
"feedparser>=6.0.11",
"finnhub-python>=2.4.23",
"langchain-anthropic>=0.3.15",
"langchain-experimental>=0.3.4",
"langchain-google-genai>=2.1.5",
"langchain-openai>=0.3.23",
"langgraph>=0.4.8",
"pandas>=2.3.0",
"parsel>=1.10.0",
"praw>=7.8.1",
"pytz>=2025.2",
"questionary>=2.1.0",
"redis>=6.2.0",
"requests>=2.32.4",
"rich>=14.0.0",
"setuptools>=80.9.0",
"stockstats>=0.6.5",
"tqdm>=4.67.1",
"tushare>=1.4.21",
"typing-extensions>=4.14.0",
"yfinance>=0.2.63",
]


@@ -1,22 +1,3 @@
# Backend dependencies - Django
Django==4.2.7
django-cors-headers==4.3.1
django-rest-framework==0.1.0
djangorestframework==3.14.0
djangorestframework-simplejwt==5.3.0
python-decouple==3.8
cryptography==41.0.7
mysqlclient==2.2.0
channels==4.0.0
channels-redis
# Existing CLI dependencies
typer
questionary
pydantic
# OpenAI and other AI dependencies
openai
typing-extensions
langchain-openai
langchain-experimental
@@ -26,6 +7,7 @@ praw
feedparser
stockstats
eodhd
+langgraph
chromadb
setuptools
backtrader
@@ -40,5 +22,6 @@ redis
chainlit
rich
questionary
-langgraph==0.4.8
-daphne
+langchain_anthropic
+langchain-google-genai
+google-genai


@@ -10,7 +10,7 @@ def create_fundamentals_analyst(llm, toolkit):
        company_name = state["company_of_interest"]

        if toolkit.config["online_tools"]:
-            tools = [toolkit.get_fundamentals_openai]
+            tools = [toolkit.get_fundamentals]
        else:
            tools = [
                toolkit.get_finnhub_company_insider_sentiment,
@@ -21,40 +21,8 @@ def create_fundamentals_analyst(llm, toolkit):
            ]

        system_message = (
-            """You are a fundamental analyst. Your task is to provide a comprehensive report on a given company by analyzing its financial documents, company profile, financial history, insider sentiment, and transactions.
+            "**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)\n\nYou are a researcher tasked with analyzing fundamental information over the past week about a company. Please write a comprehensive report of the company's fundamental information such as financial documents, company profile, basic company financials, company financial history, insider sentiment and insider transactions to gain a full view of the company's fundamental information to inform traders. Make sure to include as much detail as possible. Do not simply state the trends are mixed, provide detailed and finegrained analysis and insights that may help traders make decisions."
+            + " Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read.",
You must output your findings in a structured JSON format. Do not add any text outside the JSON structure.
The JSON object must contain the following keys:
1. `company_overview`: A string with a summary of the company's business and market position.
2. `financial_performance`: An array of objects, each with `metric` and `value` keys (e.g., {"metric": "Earnings Per Share (EPS)", "value": "Increased by 354%"}).
3. `stock_market_info`: An array of objects, each with `metric` and `value` keys (e.g., {"metric": "Current Stock Price", "value": "$380.58"}).
4. `analyst_forecasts`: An array of objects, each with `metric` and `value` keys (e.g., {"metric": "Median Price Target", "value": "$538.00"}).
5. `insider_sentiment`: A string summarizing insider trading activity and sentiment.
6. `summary`: A string providing a final, overall conclusion based on all the fundamental data.
Here is an example of the expected JSON output format:
```json
{
"company_overview": "Applovin Corporation (APP)은 모바일 앱 개발 및 수익화에 특화된 기술 회사입니다. 지난 한 해 동안 괄목할 만한 재무 성과를 보여주며 시장에서 강력한 입지를 나타냈습니다.",
"financial_performance": [
{"metric": "주당 순이익 (EPS)", "value": "지난 1년간 354% 증가"},
{"metric": "매출 성장률", "value": "전년 대비 43.44% 성장"}
],
"stock_market_info": [
{"metric": "현재 주가", "value": "$380.58"},
{"metric": "전일 대비 변동", "value": "-0.74% 감소"}
],
"analyst_forecasts": [
{"metric": "중간 목표 주가", "value": "$538.00 (현재가 대비 약 75.4% 상승 가능성)"}
],
"insider_sentiment": "제공된 데이터에서는 구체적인 내부자 거래 내역이 자세히 설명되지 않았지만, 임원 및 이사회 구성원의 신뢰도에 대한 통찰력을 제공할 수 있습니다.",
"summary": "전반적인 재무 건전성은 긍정적이나, 주가 변동성을 고려할 때 신중한 접근이 필요합니다."
}
```
Please ensure all text content within the JSON is written in Korean.
"""
        )

        prompt = ChatPromptTemplate.from_messages(
@@ -83,9 +51,14 @@ Please ensure all text content within the JSON is written in Korean.

        result = chain.invoke(state["messages"])

+        report = ""
+        if len(result.tool_calls) == 0:
+            report = result.content

        return {
            "messages": [result],
-            "fundamentals_report": result.content,
+            "fundamentals_report": report,
        }

    return fundamentals_analyst_node


@@ -22,65 +22,34 @@ def create_market_analyst(llm, toolkit):
        ]

        system_message = (
-            """You are a trading assistant tasked with analyzing financial markets. Your role is to select the **most relevant indicators** for a given market condition or trading strategy from the following list. The goal is to choose up to **8 indicators** that provide complementary insights without redundancy.
-First, call `get_YFin_data` to retrieve the necessary stock data. Then, use `get_stockstats_indicators_report` with the selected indicators.
-After analyzing the results, you must output your findings in a structured JSON format. Do not add any text outside the JSON structure.
+            """**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)
+
+You are a trading assistant tasked with analyzing financial markets. Your role is to select the **most relevant indicators** for a given market condition or trading strategy from the following list. The goal is to choose up to **8 indicators** that provide complementary insights without redundancy. Categories and each category's indicators are:
The JSON object must contain the following keys:
1. `price_summary`: A string containing a detailed analysis of the stock's price movement (최고가, 최저가, 최근 동향 등).
2. `indicator_analysis`: An array of objects, where each object represents a technical indicator and has the following keys:
- `indicator`: The name of the indicator (e.g., "50 SMA").
- `value`: The calculated value of the indicator.
- `interpretation`: A detailed interpretation of what the indicator's value means in the current market context.
3. `overall_conclusion`: A string providing a comprehensive conclusion based on the combined analysis of price trends and technical indicators.
Here is an example of the expected JSON output format:
```json
{
"price_summary": "APP의 최근 주가는 2025년 6월 12일 기준으로 380.58 달러로 마감하였으며, 최고가는 428.99 달러(2025년 6월 5일), 최저가는 276.8 달러(2025년 5월 1일)입니다. 5월 초에 비해 급격히 상승하였으나, 최근에는 약간의 조정세를 보이고 있습니다.",
"indicator_analysis": [
{
"indicator": "50 SMA",
"value": "319.97",
"interpretation": "중기 추세 지표로, 현재 주가가 이 지표를 상회하고 있어 상승 추세를 나타냅니다."
},
{
"indicator": "MACD",
"value": "18.33",
"interpretation": "모멘텀 지표로, 양수 값을 유지하고 있어 상승 모멘텀을 나타냅니다."
}
],
"overall_conclusion": "APP의 주가는 현재 강한 상승세를 보이고 있으나, 단기 조정 가능성이 존재합니다. 따라서, 투자자들은 시장의 변동성을 고려하여 신중한 접근이 필요합니다."
}
```
Available indicators:
Moving Averages: Moving Averages:
- close_50_sma: 50 SMA - close_50_sma: 50 SMA: A medium-term trend indicator. Usage: Identify trend direction and serve as dynamic support/resistance. Tips: It lags price; combine with faster indicators for timely signals.
- close_200_sma: 200 SMA - close_200_sma: 200 SMA: A long-term trend benchmark. Usage: Confirm overall market trend and identify golden/death cross setups. Tips: It reacts slowly; best for strategic trend confirmation rather than frequent trading entries.
- close_10_ema: 10 EMA - close_10_ema: 10 EMA: A responsive short-term average. Usage: Capture quick shifts in momentum and potential entry points. Tips: Prone to noise in choppy markets; use alongside longer averages for filtering false signals.
MACD Related: MACD Related:
- macd: MACD - macd: MACD: Computes momentum via differences of EMAs. Usage: Look for crossovers and divergence as signals of trend changes. Tips: Confirm with other indicators in low-volatility or sideways markets.
- macds: MACD Signal - macds: MACD Signal: An EMA smoothing of the MACD line. Usage: Use crossovers with the MACD line to trigger trades. Tips: Should be part of a broader strategy to avoid false positives.
- macdh: MACD Histogram - macdh: MACD Histogram: Shows the gap between the MACD line and its signal. Usage: Visualize momentum strength and spot divergence early. Tips: Can be volatile; complement with additional filters in fast-moving markets.
Momentum Indicators: Momentum Indicators:
- rsi: RSI - rsi: RSI: Measures momentum to flag overbought/oversold conditions. Usage: Apply 70/30 thresholds and watch for divergence to signal reversals. Tips: In strong trends, RSI may remain extreme; always cross-check with trend analysis.
Volatility Indicators: Volatility Indicators:
- boll: Bollinger Middle - boll: Bollinger Middle: A 20 SMA serving as the basis for Bollinger Bands. Usage: Acts as a dynamic benchmark for price movement. Tips: Combine with the upper and lower bands to effectively spot breakouts or reversals.
- boll_ub: Bollinger Upper Band - boll_ub: Bollinger Upper Band: Typically 2 standard deviations above the middle line. Usage: Signals potential overbought conditions and breakout zones. Tips: Confirm signals with other tools; prices may ride the band in strong trends.
- boll_lb: Bollinger Lower Band - boll_lb: Bollinger Lower Band: Typically 2 standard deviations below the middle line. Usage: Indicates potential oversold conditions. Tips: Use additional analysis to avoid false reversal signals.
- atr: ATR - atr: ATR: Averages true range to measure volatility. Usage: Set stop-loss levels and adjust position sizes based on current market volatility. Tips: It's a reactive measure, so use it as part of a broader risk management strategy.
Volume-Based Indicators: Volume-Based Indicators:
- vwma: VWMA - vwma: VWMA: A moving average weighted by volume. Usage: Confirm trends by integrating price action with volume data. Tips: Watch for skewed results from volume spikes; use in combination with other volume analyses.
Please write all text content within the JSON in Korean. - Select indicators that provide diverse and complementary information. Avoid redundancy (e.g., do not select both rsi and stochrsi). Also briefly explain why they are suitable for the given market context. When you tool call, please use the exact name of the indicators provided above as they are defined parameters, otherwise your call will fail. Please make sure to call get_YFin_data first to retrieve the CSV that is needed to generate indicators. Write a very detailed and nuanced report of the trends you observe. Do not simply state the trends are mixed, provide detailed and finegrained analysis and insights that may help traders make decisions."""
""" + """ Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read."""
) )
prompt = ChatPromptTemplate.from_messages( prompt = ChatPromptTemplate.from_messages(
@ -109,9 +78,14 @@ Please write all text content within the JSON in Korean.
result = chain.invoke(state["messages"]) result = chain.invoke(state["messages"])
report = ""
if len(result.tool_calls) == 0:
report = result.content
return { return {
"messages": [result], "messages": [result],
"market_report": result.content, "market_report": report,
} }
return market_analyst_node return market_analyst_node
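Every analyst node in this merge adopts the same guard: the report field is filled only once the model has stopped requesting tools. A stdlib-only sketch of that logic, using a hypothetical stand-in class for the LangChain message object (the real node receives an `AIMessage` exposing the same `content` and `tool_calls` attributes):

```python
from dataclasses import dataclass, field


@dataclass
class FakeAIMessage:
    """Hypothetical stand-in for a LangChain AIMessage."""
    content: str = ""
    tool_calls: list = field(default_factory=list)


def extract_report(result: FakeAIMessage) -> str:
    # Mirrors the merged node logic: publish the content as the final
    # report only when no tool calls are pending; otherwise keep the
    # report empty so a later turn can fill it in.
    if len(result.tool_calls) == 0:
        return result.content
    return ""


final = FakeAIMessage(content="final market report")
pending = FakeAIMessage(tool_calls=[{"name": "get_YFin_data"}])
print(extract_report(final))    # the report text
print(extract_report(pending))  # empty string while tools are still pending
```

Without this guard, the intermediate tool-calling turn (whose `content` is empty or partial) would overwrite `market_report` before the tools have run.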
View File

@@ -1,55 +1,60 @@
 from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
 import time
 import json


 def create_news_analyst(llm, toolkit):
     def news_analyst_node(state):
         current_date = state["trade_date"]
         ticker = state["company_of_interest"]

         if toolkit.config["online_tools"]:
-            tools = [toolkit.get_global_news_openai, toolkit.get_google_news]
+            tools = [toolkit.get_global_news, toolkit.get_google_news]
         else:
             tools = [
                 toolkit.get_finnhub_news,
                 toolkit.get_reddit_news,
                 toolkit.get_google_news,
             ]

         system_message = (
-            "You are a news researcher tasked with analyzing recent news and trends over the past week. Please write a comprehensive report of the current state of the world that is relevant for trading and macroeconomics. Look at news from EODHD, and finnhub to be comprehensive. Do not simply state the trends are mixed, provide detailed and finegrained analysis and insights that may help traders make decisions."
-            + """ Make sure to append a Makrdown table at the end of the report to organize key points in the report, organized and easy to read. Please write all responses in Korean."""
+            "**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)\n\nYou are a news researcher tasked with analyzing recent news and trends over the past week. Please write a comprehensive report of the current state of the world that is relevant for trading and macroeconomics. Look at news from EODHD, and finnhub to be comprehensive. Do not simply state the trends are mixed, provide detailed and finegrained analysis and insights that may help traders make decisions."
+            + """ Make sure to append a Makrdown table at the end of the report to organize key points in the report, organized and easy to read."""
         )

         prompt = ChatPromptTemplate.from_messages(
             [
                 (
                     "system",
                     "You are a helpful AI assistant, collaborating with other assistants."
                     " Use the provided tools to progress towards answering the question."
                     " If you are unable to fully answer, that's OK; another assistant with different tools"
                     " will help where you left off. Execute what you can to make progress."
                     " If you or any other assistant has the FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL** or deliverable,"
                     " prefix your response with FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL** so the team knows to stop."
                     " You have access to the following tools: {tool_names}.\n{system_message}"
                     "For your reference, the current date is {current_date}. We are looking at the company {ticker}",
                 ),
                 MessagesPlaceholder(variable_name="messages"),
             ]
         )

         prompt = prompt.partial(system_message=system_message)
         prompt = prompt.partial(tool_names=", ".join([tool.name for tool in tools]))
         prompt = prompt.partial(current_date=current_date)
         prompt = prompt.partial(ticker=ticker)

         chain = prompt | llm.bind_tools(tools)
         result = chain.invoke(state["messages"])
+        report = ""
+
+        if len(result.tool_calls) == 0:
+            report = result.content
+
         return {
             "messages": [result],
-            "news_report": result.content,
+            "news_report": report,
         }

     return news_analyst_node
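The four chained `prompt.partial(...)` calls above each pin one placeholder before the chain runs, leaving the rest for later. A rough stdlib illustration of that staged binding using `string.Formatter` (an analogy only, not the `ChatPromptTemplate` implementation; the tool and ticker values are made up):

```python
from string import Formatter


def partial_format(template: str, **bound) -> str:
    # Fill only the placeholders we already know; leave the rest as
    # {name} so a later call can supply them.
    parts = []
    for literal, name, _spec, _conv in Formatter().parse(template):
        parts.append(literal)
        if name is not None:
            parts.append(str(bound[name]) if name in bound else "{" + name + "}")
    return "".join(parts)


template = ("You have access to the following tools: {tool_names}. "
            "The current date is {current_date}. "
            "We are looking at the company {ticker}")

# Bind placeholders in stages, like the chained prompt.partial() calls.
step1 = partial_format(template, tool_names="get_global_news, get_google_news")
step2 = partial_format(step1, current_date="2025-07-01", ticker="TSLA")
print(step2)
```

Binding the static pieces early means only the conversation messages remain variable at invoke time.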
View File

@@ -1,55 +1,60 @@
 from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
 import time
 import json


 def create_social_media_analyst(llm, toolkit):
     def social_media_analyst_node(state):
         current_date = state["trade_date"]
         ticker = state["company_of_interest"]
         company_name = state["company_of_interest"]

         if toolkit.config["online_tools"]:
-            tools = [toolkit.get_stock_news_openai]
+            tools = [toolkit.get_stock_news]
         else:
             tools = [
                 toolkit.get_reddit_stock_info,
             ]

         system_message = (
-            "You are a social media and company specific news researcher/analyst tasked with analyzing social media posts, recent company news, and public sentiment for a specific company over the past week. You will be given a company's name your objective is to write a comprehensive long report detailing your analysis, insights, and implications for traders and investors on this company's current state after looking at social media and what people are saying about that company, analyzing sentiment data of what people feel each day about the company, and looking at recent company news. Try to look at all sources possible from social media to sentiment to news. Do not simply state the trends are mixed, provide detailed and finegrained analysis and insights that may help traders make decisions."
+            "**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)\n\nYou are a social media and company specific news researcher/analyst tasked with analyzing social media posts, recent company news, and public sentiment for a specific company over the past week. You will be given a company's name your objective is to write a comprehensive long report detailing your analysis, insights, and implications for traders and investors on this company's current state after looking at social media and what people are saying about that company, analyzing sentiment data of what people feel each day about the company, and looking at recent company news. Try to look at all sources possible from social media to sentiment to news. Do not simply state the trends are mixed, provide detailed and finegrained analysis and insights that may help traders make decisions."
             + """ Make sure to append a Makrdown table at the end of the report to organize key points in the report, organized and easy to read.""",
         )

         prompt = ChatPromptTemplate.from_messages(
             [
                 (
                     "system",
                     "You are a helpful AI assistant, collaborating with other assistants."
                     " Use the provided tools to progress towards answering the question."
                     " If you are unable to fully answer, that's OK; another assistant with different tools"
                     " will help where you left off. Execute what you can to make progress."
                     " If you or any other assistant has the FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL** or deliverable,"
                     " prefix your response with FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL** so the team knows to stop."
                     " You have access to the following tools: {tool_names}.\n{system_message}"
                     "For your reference, the current date is {current_date}. The current company we want to analyze is {ticker}",
                 ),
                 MessagesPlaceholder(variable_name="messages"),
             ]
         )

         prompt = prompt.partial(system_message=system_message)
         prompt = prompt.partial(tool_names=", ".join([tool.name for tool in tools]))
         prompt = prompt.partial(current_date=current_date)
         prompt = prompt.partial(ticker=ticker)

         chain = prompt | llm.bind_tools(tools)
         result = chain.invoke(state["messages"])
+        report = ""
+
+        if len(result.tool_calls) == 0:
+            report = result.content
+
         return {
             "messages": [result],
-            "sentiment_report": result.content,
+            "sentiment_report": report,
         }

     return social_media_analyst_node
View File

@@ -1,57 +1,57 @@
 import time
 import json


 def create_research_manager(llm, memory):
     def research_manager_node(state) -> dict:
         history = state["investment_debate_state"].get("history", "")
         market_research_report = state["market_report"]
         sentiment_report = state["sentiment_report"]
         news_report = state["news_report"]
         fundamentals_report = state["fundamentals_report"]
         investment_debate_state = state["investment_debate_state"]

         curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
         past_memories = memory.get_memories(curr_situation, n_matches=2)

         past_memory_str = ""
         for i, rec in enumerate(past_memories, 1):
             past_memory_str += rec["recommendation"] + "\n\n"

-        prompt = f"""As the portfolio manager and debate facilitator, your role is to critically evaluate this round of debate and make a definitive decision: align with the bear analyst, the bull analyst, or choose Hold only if it is strongly justified based on the arguments presented.
+        prompt = f"""**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)
+As the portfolio manager and debate facilitator, your role is to critically evaluate this round of debate and make a definitive decision: align with the bear analyst, the bull analyst, or choose Hold only if it is strongly justified based on the arguments presented.

 Summarize the key points from both sides concisely, focusing on the most compelling evidence or reasoning. Your recommendation—Buy, Sell, or Hold—must be clear and actionable. Avoid defaulting to Hold simply because both sides have valid points; commit to a stance grounded in the debate's strongest arguments.

 Additionally, develop a detailed investment plan for the trader. This should include:

 Your Recommendation: A decisive stance supported by the most convincing arguments.
 Rationale: An explanation of why these arguments lead to your conclusion.
 Strategic Actions: Concrete steps for implementing the recommendation.
 Take into account your past mistakes on similar situations. Use these insights to refine your decision-making and ensure you are learning and improving. Present your analysis conversationally, as if speaking naturally, without special formatting.

 Here are your past reflections on mistakes:
 \"{past_memory_str}\"

 Here is the debate:
 Debate History:
-{history}
-
-Please write all responses in Korean."""
+{history}"""
         response = llm.invoke(prompt)

         new_investment_debate_state = {
             "judge_decision": response.content,
             "history": investment_debate_state.get("history", ""),
             "bear_history": investment_debate_state.get("bear_history", ""),
             "bull_history": investment_debate_state.get("bull_history", ""),
             "current_response": response.content,
             "count": investment_debate_state["count"],
         }

         return {
             "investment_debate_state": new_investment_debate_state,
             "investment_plan": response.content,
         }

     return research_manager_node
View File

@@ -1,66 +1,68 @@
 import time
 import json


 def create_risk_manager(llm, memory):
     def risk_manager_node(state) -> dict:
         company_name = state["company_of_interest"]
         history = state["risk_debate_state"]["history"]
         risk_debate_state = state["risk_debate_state"]
         market_research_report = state["market_report"]
         news_report = state["news_report"]
         fundamentals_report = state["news_report"]
         sentiment_report = state["sentiment_report"]
         trader_plan = state["investment_plan"]

         curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
         past_memories = memory.get_memories(curr_situation, n_matches=2)

         past_memory_str = ""
         for i, rec in enumerate(past_memories, 1):
             past_memory_str += rec["recommendation"] + "\n\n"

-        prompt = f"""As the Risk Management Judge and Debate Facilitator, your goal is to evaluate the debate between three risk analysts—Risky, Neutral, and Safe/Conservative—and determine the best course of action for the trader. Your decision must result in a clear recommendation: Buy, Sell, or Hold. Choose Hold only if strongly justified by specific arguments, not as a fallback when all sides seem valid. Strive for clarity and decisiveness.
+        prompt = f"""**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)
+As the Risk Management Judge and Debate Facilitator, your goal is to evaluate the debate between three risk analysts—Risky, Neutral, and Safe/Conservative—and determine the best course of action for the trader. Your decision must result in a clear recommendation: Buy, Sell, or Hold. Choose Hold only if strongly justified by specific arguments, not as a fallback when all sides seem valid. Strive for clarity and decisiveness.

 Guidelines for Decision-Making:
 1. **Summarize Key Arguments**: Extract the strongest points from each analyst, focusing on relevance to the context.
 2. **Provide Rationale**: Support your recommendation with direct quotes and counterarguments from the debate.
 3. **Refine the Trader's Plan**: Start with the trader's original plan, **{trader_plan}**, and adjust it based on the analysts' insights.
 4. **Learn from Past Mistakes**: Use lessons from **{past_memory_str}** to address prior misjudgments and improve the decision you are making now to make sure you don't make a wrong BUY/SELL/HOLD call that loses money.

 Deliverables:
 - A clear and actionable recommendation: Buy, Sell, or Hold.
 - Detailed reasoning anchored in the debate and past reflections.

 ---

 **Analysts Debate History:**
 {history}

 ---
-Focus on actionable insights and continuous improvement. Build on past lessons, critically evaluate all perspectives, and ensure each decision advances better outcomes. Please write all responses in Korean."""
+Focus on actionable insights and continuous improvement. Build on past lessons, critically evaluate all perspectives, and ensure each decision advances better outcomes."""
         response = llm.invoke(prompt)

         new_risk_debate_state = {
             "judge_decision": response.content,
             "history": risk_debate_state["history"],
             "risky_history": risk_debate_state["risky_history"],
             "safe_history": risk_debate_state["safe_history"],
             "neutral_history": risk_debate_state["neutral_history"],
             "latest_speaker": "Judge",
             "current_risky_response": risk_debate_state["current_risky_response"],
             "current_safe_response": risk_debate_state["current_safe_response"],
             "current_neutral_response": risk_debate_state["current_neutral_response"],
             "count": risk_debate_state["count"],
         }

         return {
             "risk_debate_state": new_risk_debate_state,
             "final_trade_decision": response.content,
         }

     return risk_manager_node
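The judge node above rebuilds `risk_debate_state` field by field: every prior field is carried over unchanged, and only the verdict-related keys change. The same update can be expressed more compactly with a dict merge. A small illustrative sketch (the key names follow the state dict in the diff; the helper function and sample values are hypothetical):

```python
def judge_update(risk_debate_state: dict, decision: str) -> dict:
    # Carry every existing field over unchanged, then layer the judge's
    # verdict on top -- downstream nodes still see the full debate history.
    return {
        **risk_debate_state,
        "judge_decision": decision,
        "latest_speaker": "Judge",
    }


state = {
    "history": "Risky: buy\nSafe: hold",
    "risky_history": "Risky: buy",
    "safe_history": "Safe: hold",
    "neutral_history": "",
    "current_risky_response": "Risky: buy",
    "current_safe_response": "Safe: hold",
    "current_neutral_response": "",
    "latest_speaker": "Safe",
    "count": 3,
}
new_state = judge_update(state, "HOLD")
print(new_state["judge_decision"])  # HOLD
print(state["latest_speaker"])      # Safe -- the input dict is left untouched
```

Because the merge builds a new dict rather than mutating the input, earlier checkpoints of the graph state stay intact.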
View File

@@ -1,61 +1,63 @@
 from langchain_core.messages import AIMessage
 import time
 import json


 def create_bear_researcher(llm, memory):
     def bear_node(state) -> dict:
         investment_debate_state = state["investment_debate_state"]
         history = investment_debate_state.get("history", "")
         bear_history = investment_debate_state.get("bear_history", "")
         current_response = investment_debate_state.get("current_response", "")
         market_research_report = state["market_report"]
         sentiment_report = state["sentiment_report"]
         news_report = state["news_report"]
         fundamentals_report = state["fundamentals_report"]

         curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
         past_memories = memory.get_memories(curr_situation, n_matches=2)

         past_memory_str = ""
         for i, rec in enumerate(past_memories, 1):
             past_memory_str += rec["recommendation"] + "\n\n"

-        prompt = f"""You are a Bear Analyst making the case against investing in the stock. Your goal is to present a well-reasoned argument emphasizing risks, challenges, and negative indicators. Leverage the provided research and data to highlight potential downsides and counter bullish arguments effectively.
+        prompt = f"""**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)
+You are a Bear Analyst making the case against investing in the stock. Your goal is to present a well-reasoned argument emphasizing risks, challenges, and negative indicators. Leverage the provided research and data to highlight potential downsides and counter bullish arguments effectively.

 Key points to focus on:

 - Risks and Challenges: Highlight factors like market saturation, financial instability, or macroeconomic threats that could hinder the stock's performance.
 - Competitive Weaknesses: Emphasize vulnerabilities such as weaker market positioning, declining innovation, or threats from competitors.
 - Negative Indicators: Use evidence from financial data, market trends, or recent adverse news to support your position.
 - Bull Counterpoints: Critically analyze the bull argument with specific data and sound reasoning, exposing weaknesses or over-optimistic assumptions.
 - Engagement: Present your argument in a conversational style, directly engaging with the bull analyst's points and debating effectively rather than simply listing facts.

 Resources available:

 Market research report: {market_research_report}
 Social media sentiment report: {sentiment_report}
 Latest world affairs news: {news_report}
 Company fundamentals report: {fundamentals_report}
 Conversation history of the debate: {history}
 Last bull argument: {current_response}
 Reflections from similar situations and lessons learned: {past_memory_str}
-Use this information to deliver a compelling bear argument, refute the bull's claims, and engage in a dynamic debate that demonstrates the risks and weaknesses of investing in the stock. You must also address reflections and learn from lessons and mistakes you made in the past. Please write all responses in Korean.
+Use this information to deliver a compelling bear argument, refute the bull's claims, and engage in a dynamic debate that demonstrates the risks and weaknesses of investing in the stock. You must also address reflections and learn from lessons and mistakes you made in the past.
 """
         response = llm.invoke(prompt)
         argument = f"Bear Analyst: {response.content}"

         new_investment_debate_state = {
             "history": history + "\n" + argument,
             "bear_history": bear_history + "\n" + argument,
             "bull_history": investment_debate_state.get("bull_history", ""),
             "current_response": argument,
             "count": investment_debate_state["count"] + 1,
         }

         return {"investment_debate_state": new_investment_debate_state}

     return bear_node
View File

@@ -1,59 +1,61 @@
from langchain_core.messages import AIMessage
import time
import json


def create_bull_researcher(llm, memory):
    def bull_node(state) -> dict:
        investment_debate_state = state["investment_debate_state"]
        history = investment_debate_state.get("history", "")
        bull_history = investment_debate_state.get("bull_history", "")

        current_response = investment_debate_state.get("current_response", "")
        market_research_report = state["market_report"]
        sentiment_report = state["sentiment_report"]
        news_report = state["news_report"]
        fundamentals_report = state["fundamentals_report"]

        curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
        past_memories = memory.get_memories(curr_situation, n_matches=2)

        past_memory_str = ""
        for i, rec in enumerate(past_memories, 1):
            past_memory_str += rec["recommendation"] + "\n\n"

        prompt = f"""**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)

You are a Bull Analyst advocating for investing in the stock. Your task is to build a strong, evidence-based case emphasizing growth potential, competitive advantages, and positive market indicators. Leverage the provided research and data to address concerns and counter bearish arguments effectively.

Key points to focus on:
- Growth Potential: Highlight the company's market opportunities, revenue projections, and scalability.
- Competitive Advantages: Emphasize factors like unique products, strong branding, or dominant market positioning.
- Positive Indicators: Use financial health, industry trends, and recent positive news as evidence.
- Bear Counterpoints: Critically analyze the bear argument with specific data and sound reasoning, addressing concerns thoroughly and showing why the bull perspective holds stronger merit.
- Engagement: Present your argument in a conversational style, engaging directly with the bear analyst's points and debating effectively rather than just listing data.

Resources available:
Market research report: {market_research_report}
Social media sentiment report: {sentiment_report}
Latest world affairs news: {news_report}
Company fundamentals report: {fundamentals_report}
Conversation history of the debate: {history}
Last bear argument: {current_response}
Reflections from similar situations and lessons learned: {past_memory_str}
Use this information to deliver a compelling bull argument, refute the bear's concerns, and engage in a dynamic debate that demonstrates the strengths of the bull position. You must also address reflections and learn from lessons and mistakes you made in the past.
"""

        response = llm.invoke(prompt)
        argument = f"Bull Analyst: {response.content}"

        new_investment_debate_state = {
            "history": history + "\n" + argument,
            "bull_history": bull_history + "\n" + argument,
            "bear_history": investment_debate_state.get("bear_history", ""),
            "current_response": argument,
            "count": investment_debate_state["count"] + 1,
        }

        return {"investment_debate_state": new_investment_debate_state}

    return bull_node
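The memory lookup shared by both researcher nodes reduces to a simple concatenation of matched recommendations. A stdlib-only sketch (`format_past_memories` is an illustrative name, not the project's API):

```python
def format_past_memories(past_memories):
    # Mirrors the loop in bull_node/bear_node: each matched memory
    # contributes its "recommendation" text, separated by blank lines,
    # producing one prompt-ready string.
    out = ""
    for rec in past_memories:
        out += rec["recommendation"] + "\n\n"
    return out


memories = [
    {"recommendation": "Trim positions into earnings."},
    {"recommendation": "Size entries against volatility."},
]
past_memory_str = format_past_memories(memories)
```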
View File
@ -1,55 +1,57 @@
import time
import json


def create_risky_debator(llm):
    def risky_node(state) -> dict:
        risk_debate_state = state["risk_debate_state"]
        history = risk_debate_state.get("history", "")
        risky_history = risk_debate_state.get("risky_history", "")

        current_safe_response = risk_debate_state.get("current_safe_response", "")
        current_neutral_response = risk_debate_state.get("current_neutral_response", "")

        market_research_report = state["market_report"]
        sentiment_report = state["sentiment_report"]
        news_report = state["news_report"]
        fundamentals_report = state["fundamentals_report"]

        trader_decision = state["trader_investment_plan"]

        prompt = f"""**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)

As the Risky Risk Analyst, your role is to actively champion high-reward, high-risk opportunities, emphasizing bold strategies and competitive advantages. When evaluating the trader's decision or plan, focus intently on the potential upside, growth potential, and innovative benefits—even when these come with elevated risk. Use the provided market data and sentiment analysis to strengthen your arguments and challenge the opposing views. Specifically, respond directly to each point made by the conservative and neutral analysts, countering with data-driven rebuttals and persuasive reasoning. Highlight where their caution might miss critical opportunities or where their assumptions may be overly conservative. Here is the trader's decision:

{trader_decision}

Your task is to create a compelling case for the trader's decision by questioning and critiquing the conservative and neutral stances to demonstrate why your high-reward perspective offers the best path forward. Incorporate insights from the following sources into your arguments:

Market Research Report: {market_research_report}
Social Media Sentiment Report: {sentiment_report}
Latest World Affairs Report: {news_report}
Company Fundamentals Report: {fundamentals_report}
Here is the current conversation history: {history} Here are the last arguments from the conservative analyst: {current_safe_response} Here are the last arguments from the neutral analyst: {current_neutral_response}. If there are no responses from the other viewpoints, do not hallucinate and just present your point.

Engage actively by addressing any specific concerns raised, refuting the weaknesses in their logic, and asserting the benefits of risk-taking to outpace market norms. Maintain a focus on debating and persuading, not just presenting data. Challenge each counterpoint to underscore why a high-risk approach is optimal. Output conversationally as if you are speaking without any special formatting."""

        response = llm.invoke(prompt)
        argument = f"Risky Analyst: {response.content}"

        new_risk_debate_state = {
            "history": history + "\n" + argument,
            "risky_history": risky_history + "\n" + argument,
            "safe_history": risk_debate_state.get("safe_history", ""),
            "neutral_history": risk_debate_state.get("neutral_history", ""),
            "latest_speaker": "Risky",
            "current_risky_response": argument,
            "current_safe_response": risk_debate_state.get("current_safe_response", ""),
            "current_neutral_response": risk_debate_state.get(
                "current_neutral_response", ""
            ),
            "count": risk_debate_state["count"] + 1,
        }

        return {"risk_debate_state": new_risk_debate_state}

    return risky_node
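Each debator node stamps `"latest_speaker"` into the state, which suggests a round-robin routing edge in the surrounding graph. The routing itself is not shown in this diff, so the helper below is a hypothetical sketch of how such an edge could rotate turns:

```python
def next_risk_speaker(risk_debate_state):
    # Hypothetical routing helper (not from the source): rotate
    # Risky -> Safe -> Neutral -> Risky based on who spoke last.
    # An empty or missing "latest_speaker" starts the debate with Risky.
    order = {"Risky": "Safe", "Safe": "Neutral", "Neutral": "Risky"}
    return order.get(risk_debate_state.get("latest_speaker", ""), "Risky")
```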
View File
@ -1,58 +1,60 @@
from langchain_core.messages import AIMessage
import time
import json


def create_safe_debator(llm):
    def safe_node(state) -> dict:
        risk_debate_state = state["risk_debate_state"]
        history = risk_debate_state.get("history", "")
        safe_history = risk_debate_state.get("safe_history", "")

        current_risky_response = risk_debate_state.get("current_risky_response", "")
        current_neutral_response = risk_debate_state.get("current_neutral_response", "")

        market_research_report = state["market_report"]
        sentiment_report = state["sentiment_report"]
        news_report = state["news_report"]
        fundamentals_report = state["fundamentals_report"]

        trader_decision = state["trader_investment_plan"]

        prompt = f"""**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)

As the Safe/Conservative Risk Analyst, your primary objective is to protect assets, minimize volatility, and ensure steady, reliable growth. You prioritize stability, security, and risk mitigation, carefully assessing potential losses, economic downturns, and market volatility. When evaluating the trader's decision or plan, critically examine high-risk elements, pointing out where the decision may expose the firm to undue risk and where more cautious alternatives could secure long-term gains. Here is the trader's decision:

{trader_decision}

Your task is to actively counter the arguments of the Risky and Neutral Analysts, highlighting where their views may overlook potential threats or fail to prioritize sustainability. Respond directly to their points, drawing from the following data sources to build a convincing case for a low-risk approach adjustment to the trader's decision:

Market Research Report: {market_research_report}
Social Media Sentiment Report: {sentiment_report}
Latest World Affairs Report: {news_report}
Company Fundamentals Report: {fundamentals_report}
Here is the current conversation history: {history} Here is the last response from the risky analyst: {current_risky_response} Here is the last response from the neutral analyst: {current_neutral_response}. If there are no responses from the other viewpoints, do not hallucinate and just present your point.

Engage by questioning their optimism and emphasizing the potential downsides they may have overlooked. Address each of their counterpoints to showcase why a conservative stance is ultimately the safest path for the firm's assets. Focus on debating and critiquing their arguments to demonstrate the strength of a low-risk strategy over their approaches. Output conversationally as if you are speaking without any special formatting."""

        response = llm.invoke(prompt)
        argument = f"Safe Analyst: {response.content}"

        new_risk_debate_state = {
            "history": history + "\n" + argument,
            "risky_history": risk_debate_state.get("risky_history", ""),
            "safe_history": safe_history + "\n" + argument,
            "neutral_history": risk_debate_state.get("neutral_history", ""),
            "latest_speaker": "Safe",
            "current_risky_response": risk_debate_state.get(
                "current_risky_response", ""
            ),
            "current_safe_response": argument,
            "current_neutral_response": risk_debate_state.get(
                "current_neutral_response", ""
            ),
            "count": risk_debate_state["count"] + 1,
        }

        return {"risk_debate_state": new_risk_debate_state}

    return safe_node
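The risky, safe, and neutral nodes repeat the same bookkeeping: append to the shared history and the speaker's own history, overwrite only the speaker's "current" slot, and preserve everyone else's. A hypothetical generalization (names are illustrative, not the project's API):

```python
def update_risk_state(state, speaker, argument):
    # Generic form of the dict each debator node builds: only the
    # speaker's own history and current-response slot change; the other
    # speakers' entries are carried over untouched via the copy.
    key = speaker.lower()
    new = dict(state)
    new["history"] = state.get("history", "") + "\n" + argument
    new[f"{key}_history"] = state.get(f"{key}_history", "") + "\n" + argument
    new["latest_speaker"] = speaker
    new[f"current_{key}_response"] = argument
    new["count"] = state["count"] + 1
    return new


out = update_risk_state({"count": 0}, "Safe", "Safe Analyst: hold")
```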
View File
@ -1,55 +1,57 @@
import time
import json


def create_neutral_debator(llm):
    def neutral_node(state) -> dict:
        risk_debate_state = state["risk_debate_state"]
        history = risk_debate_state.get("history", "")
        neutral_history = risk_debate_state.get("neutral_history", "")

        current_risky_response = risk_debate_state.get("current_risky_response", "")
        current_safe_response = risk_debate_state.get("current_safe_response", "")

        market_research_report = state["market_report"]
        sentiment_report = state["sentiment_report"]
        news_report = state["news_report"]
        fundamentals_report = state["fundamentals_report"]

        trader_decision = state["trader_investment_plan"]

        prompt = f"""**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)

As the Neutral Risk Analyst, your role is to provide a balanced perspective, weighing both the potential benefits and risks of the trader's decision or plan. You prioritize a well-rounded approach, evaluating the upsides and downsides while factoring in broader market trends, potential economic shifts, and diversification strategies. Here is the trader's decision:

{trader_decision}

Your task is to challenge both the Risky and Safe Analysts, pointing out where each perspective may be overly optimistic or overly cautious. Use insights from the following data sources to support a moderate, sustainable strategy to adjust the trader's decision:

Market Research Report: {market_research_report}
Social Media Sentiment Report: {sentiment_report}
Latest World Affairs Report: {news_report}
Company Fundamentals Report: {fundamentals_report}
Here is the current conversation history: {history} Here is the last response from the risky analyst: {current_risky_response} Here is the last response from the safe analyst: {current_safe_response}. If there are no responses from the other viewpoints, do not hallucinate and just present your point.

Engage actively by analyzing both sides critically, addressing weaknesses in the risky and conservative arguments to advocate for a more balanced approach. Challenge each of their points to illustrate why a moderate risk strategy might offer the best of both worlds, providing growth potential while safeguarding against extreme volatility. Focus on debating rather than simply presenting data, aiming to show that a balanced view can lead to the most reliable outcomes. Output conversationally as if you are speaking without any special formatting."""

        response = llm.invoke(prompt)
        argument = f"Neutral Analyst: {response.content}"

        new_risk_debate_state = {
            "history": history + "\n" + argument,
            "risky_history": risk_debate_state.get("risky_history", ""),
            "safe_history": risk_debate_state.get("safe_history", ""),
            "neutral_history": neutral_history + "\n" + argument,
            "latest_speaker": "Neutral",
            "current_risky_response": risk_debate_state.get(
                "current_risky_response", ""
            ),
            "current_safe_response": risk_debate_state.get("current_safe_response", ""),
            "current_neutral_response": argument,
            "count": risk_debate_state["count"] + 1,
        }

        return {"risk_debate_state": new_risk_debate_state}

    return neutral_node
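Every debator node increments the shared `"count"` once per turn, which is presumably what the graph uses to end the debate. The stopping rule itself is outside this diff, so the check below is an assumption: with three debators, one full round adds 3 to the counter.

```python
def risk_debate_done(risk_debate_state, max_rounds=1):
    # Assumed termination check (not from the source): stop once every
    # debator (risky, safe, neutral) has spoken max_rounds times.
    return risk_debate_state["count"] >= 3 * max_rounds
```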
View File
@ -1,43 +1,45 @@
import functools
import time
import json


def create_trader(llm, memory):
    def trader_node(state, name):
        company_name = state["company_of_interest"]
        investment_plan = state["investment_plan"]
        market_research_report = state["market_report"]
        sentiment_report = state["sentiment_report"]
        news_report = state["news_report"]
        fundamentals_report = state["fundamentals_report"]

        curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
        past_memories = memory.get_memories(curr_situation, n_matches=2)

        past_memory_str = ""
        for i, rec in enumerate(past_memories, 1):
            past_memory_str += rec["recommendation"] + "\n\n"

        context = {
            "role": "user",
            "content": f"Based on a comprehensive analysis by a team of analysts, here is an investment plan tailored for {company_name}. This plan incorporates insights from current technical market trends, macroeconomic indicators, and social media sentiment. Use this plan as a foundation for evaluating your next trading decision.\n\nProposed Investment Plan: {investment_plan}\n\nLeverage these insights to make an informed and strategic decision.",
        }

        messages = [
            {
                "role": "system",
                "content": f"""**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)

You are a trading agent analyzing market data to make investment decisions. Based on your analysis, provide a specific recommendation to buy, sell, or hold. End with a firm decision and always conclude your response with 'FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL**' to confirm your recommendation. Do not forget to utilize lessons from past decisions to learn from your mistakes. Here are some reflections from similar situations you traded in and the lessons learned: {past_memory_str}""",
            },
            context,
        ]

        result = llm.invoke(messages)

        return {
            "messages": [result],
            "trader_investment_plan": result.content,
            "sender": name,
        }

    return functools.partial(trader_node, name="Trader")
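The system prompt pins the trader's output to a fixed closing line, so downstream code can recover the action with a small regex. `extract_final_proposal` is an illustrative helper, not part of the source:

```python
import re


def extract_final_proposal(text):
    # The trader is instructed to end with, e.g.,
    # "FINAL TRANSACTION PROPOSAL: **BUY**"; pull out the action token.
    m = re.search(r"FINAL TRANSACTION PROPOSAL:\s*\*\*(BUY|HOLD|SELL)\*\*", text)
    return m.group(1) if m else None
```

Anchoring the parse to the literal marker keeps it robust even though the rest of the response is free-form Korean prose.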
View File
@ -1,411 +1,419 @@
from langchain_core.messages import BaseMessage, HumanMessage, ToolMessage, AIMessage
from typing import List
from typing import Annotated
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import RemoveMessage
from langchain_core.tools import tool
from datetime import date, timedelta, datetime
import functools
import pandas as pd
import os
from dateutil.relativedelta import relativedelta
from langchain_openai import ChatOpenAI
import tradingagents.dataflows.interface as interface
from tradingagents.default_config import DEFAULT_CONFIG


def create_msg_delete():
    def delete_messages(state):
        """Clear messages and add placeholder for Anthropic compatibility"""
        messages = state["messages"]

        # Remove all messages
        removal_operations = [RemoveMessage(id=m.id) for m in messages]

        # Add a minimal placeholder message
        placeholder = HumanMessage(content="Continue")

        return {"messages": removal_operations + [placeholder]}

    return delete_messages
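The effect of the delete-then-placeholder pattern can be simulated with plain dicts, stripped of the LangChain types. The rationale (an assumption stated in the docstring above) is that some providers, such as Anthropic, reject a request whose message list is empty or does not start with a user turn:

```python
def clear_with_placeholder(messages):
    # Plain-dict simulation of delete_messages: emit one removal
    # operation per existing message, then append a minimal user turn
    # so the history is never left empty.
    removals = [{"op": "remove", "id": m["id"]} for m in messages]
    placeholder = {"role": "user", "content": "Continue"}
    return removals + [placeholder]


ops = clear_with_placeholder([{"id": "m1"}, {"id": "m2"}])
```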
class Toolkit:
    _config = DEFAULT_CONFIG.copy()

    @classmethod
    def update_config(cls, config):
        """Update the class-level configuration."""
        cls._config.update(config)

    @property
    def config(self):
        """Access the configuration."""
        return self._config

    def __init__(self, config=None):
        if config:
            self.update_config(config)

    @staticmethod
    @tool
    def get_reddit_news(
        curr_date: Annotated[str, "Date you want to get news for in yyyy-mm-dd format"],
    ) -> str:
        """
        Retrieve global news from Reddit within a specified time frame.
        Args:
            curr_date (str): Date you want to get news for in yyyy-mm-dd format
        Returns:
            str: A formatted dataframe containing the latest global news from Reddit in the specified time frame.
        """

        global_news_result = interface.get_reddit_global_news(curr_date, 7, 5)

        return global_news_result

    @staticmethod
    @tool
    def get_finnhub_news(
        ticker: Annotated[
            str,
            "Search query of a company, e.g. AAPL, TSM, etc.",
        ],
        start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
        end_date: Annotated[str, "End date in yyyy-mm-dd format"],
    ):
        """
        Retrieve the latest news about a given stock from Finnhub within a date range
        Args:
            ticker (str): Ticker of a company. e.g. AAPL, TSM
            start_date (str): Start date in yyyy-mm-dd format
            end_date (str): End date in yyyy-mm-dd format
        Returns:
            str: A formatted dataframe containing news about the company within the date range from start_date to end_date
        """

        end_date_str = end_date

        end_date = datetime.strptime(end_date, "%Y-%m-%d")
        start_date = datetime.strptime(start_date, "%Y-%m-%d")
        look_back_days = (end_date - start_date).days

        finnhub_news_result = interface.get_finnhub_news(
            ticker, end_date_str, look_back_days
        )

        return finnhub_news_result
curr_date: Annotated[str, "Current date you want to get news for"],
) -> str: @staticmethod
""" @tool
Retrieve the latest news about a given stock from Reddit, given the current date. def get_reddit_stock_info(
Args: ticker: Annotated[
ticker (str): Ticker of a company. e.g. AAPL, TSM str,
curr_date (str): current date in yyyy-mm-dd format to get news for "Ticker of a company. e.g. AAPL, TSM",
Returns: ],
str: A formatted dataframe containing the latest news about the company on the given date curr_date: Annotated[str, "Current date you want to get news for"],
""" ) -> str:
"""
stock_news_results = interface.get_reddit_company_news(ticker, curr_date, 7, 5) Retrieve the latest news about a given stock from Reddit, given the current date.
Args:
return stock_news_results ticker (str): Ticker of a company. e.g. AAPL, TSM
curr_date (str): current date in yyyy-mm-dd format to get news for
@staticmethod Returns:
@tool str: A formatted dataframe containing the latest news about the company on the given date
def get_YFin_data( """
symbol: Annotated[str, "ticker symbol of the company"],
start_date: Annotated[str, "Start date in yyyy-mm-dd format"], stock_news_results = interface.get_reddit_company_news(ticker, curr_date, 7, 5)
end_date: Annotated[str, "Start date in yyyy-mm-dd format"],
) -> str: return stock_news_results
"""
Retrieve the stock price data for a given ticker symbol from Yahoo Finance. @staticmethod
Args: @tool
symbol (str): Ticker symbol of the company, e.g. AAPL, TSM def get_YFin_data(
start_date (str): Start date in yyyy-mm-dd format symbol: Annotated[str, "ticker symbol of the company"],
end_date (str): End date in yyyy-mm-dd format start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
Returns: end_date: Annotated[str, "End date in yyyy-mm-dd format"],
str: A formatted dataframe containing the stock price data for the specified ticker symbol in the specified date range. ) -> str:
""" """
Retrieve the stock price data for a given ticker symbol from Yahoo Finance.
result_data = interface.get_YFin_data(symbol, start_date, end_date) Args:
symbol (str): Ticker symbol of the company, e.g. AAPL, TSM
return result_data start_date (str): Start date in yyyy-mm-dd format
end_date (str): End date in yyyy-mm-dd format
@staticmethod Returns:
@tool str: A formatted dataframe containing the stock price data for the specified ticker symbol in the specified date range.
def get_YFin_data_online( """
symbol: Annotated[str, "ticker symbol of the company"],
start_date: Annotated[str, "Start date in yyyy-mm-dd format"], result_data = interface.get_YFin_data(symbol, start_date, end_date)
end_date: Annotated[str, "Start date in yyyy-mm-dd format"],
) -> str: return result_data
"""
Retrieve the stock price data for a given ticker symbol from Yahoo Finance. @staticmethod
Args: @tool
symbol (str): Ticker symbol of the company, e.g. AAPL, TSM def get_YFin_data_online(
start_date (str): Start date in yyyy-mm-dd format symbol: Annotated[str, "ticker symbol of the company"],
end_date (str): End date in yyyy-mm-dd format start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
Returns: end_date: Annotated[str, "End date in yyyy-mm-dd format"],
str: A formatted dataframe containing the stock price data for the specified ticker symbol in the specified date range. ) -> str:
""" """
Retrieve the stock price data for a given ticker symbol from Yahoo Finance.
result_data = interface.get_YFin_data_online(symbol, start_date, end_date) Args:
symbol (str): Ticker symbol of the company, e.g. AAPL, TSM
return result_data start_date (str): Start date in yyyy-mm-dd format
end_date (str): End date in yyyy-mm-dd format
@staticmethod Returns:
@tool str: A formatted dataframe containing the stock price data for the specified ticker symbol in the specified date range.
def get_stockstats_indicators_report( """
symbol: Annotated[str, "ticker symbol of the company"],
indicator: Annotated[ result_data = interface.get_YFin_data_online(symbol, start_date, end_date)
str, "technical indicator to get the analysis and report of"
], return result_data
curr_date: Annotated[
str, "The current trading date you are trading on, YYYY-mm-dd" @staticmethod
], @tool
look_back_days: Annotated[int, "how many days to look back"] = 30, def get_stockstats_indicators_report(
) -> str: symbol: Annotated[str, "ticker symbol of the company"],
""" indicator: Annotated[
Retrieve stock stats indicators for a given ticker symbol and indicator. str, "technical indicator to get the analysis and report of"
Args: ],
symbol (str): Ticker symbol of the company, e.g. AAPL, TSM curr_date: Annotated[
indicator (str): Technical indicator to get the analysis and report of str, "The current trading date you are trading on, YYYY-mm-dd"
curr_date (str): The current trading date you are trading on, YYYY-mm-dd ],
look_back_days (int): How many days to look back, default is 30 look_back_days: Annotated[int, "how many days to look back"] = 30,
Returns: ) -> str:
str: A formatted dataframe containing the stock stats indicators for the specified ticker symbol and indicator. """
""" Retrieve stock stats indicators for a given ticker symbol and indicator.
Args:
result_stockstats = interface.get_stock_stats_indicators_window( symbol (str): Ticker symbol of the company, e.g. AAPL, TSM
symbol, indicator, curr_date, look_back_days, False indicator (str): Technical indicator to get the analysis and report of
) curr_date (str): The current trading date you are trading on, YYYY-mm-dd
look_back_days (int): How many days to look back, default is 30
return result_stockstats Returns:
str: A formatted dataframe containing the stock stats indicators for the specified ticker symbol and indicator.
@staticmethod """
@tool
def get_stockstats_indicators_report_online( result_stockstats = interface.get_stock_stats_indicators_window(
symbol: Annotated[str, "ticker symbol of the company"], symbol, indicator, curr_date, look_back_days, False
indicator: Annotated[ )
str, "technical indicator to get the analysis and report of"
], return result_stockstats
curr_date: Annotated[
str, "The current trading date you are trading on, YYYY-mm-dd" @staticmethod
], @tool
look_back_days: Annotated[int, "how many days to look back"] = 30, def get_stockstats_indicators_report_online(
) -> str: symbol: Annotated[str, "ticker symbol of the company"],
""" indicator: Annotated[
Retrieve stock stats indicators for a given ticker symbol and indicator. str, "technical indicator to get the analysis and report of"
Args: ],
symbol (str): Ticker symbol of the company, e.g. AAPL, TSM curr_date: Annotated[
indicator (str): Technical indicator to get the analysis and report of str, "The current trading date you are trading on, YYYY-mm-dd"
curr_date (str): The current trading date you are trading on, YYYY-mm-dd ],
look_back_days (int): How many days to look back, default is 30 look_back_days: Annotated[int, "how many days to look back"] = 30,
Returns: ) -> str:
str: A formatted dataframe containing the stock stats indicators for the specified ticker symbol and indicator. """
""" Retrieve stock stats indicators for a given ticker symbol and indicator.
Args:
result_stockstats = interface.get_stock_stats_indicators_window( symbol (str): Ticker symbol of the company, e.g. AAPL, TSM
symbol, indicator, curr_date, look_back_days, True indicator (str): Technical indicator to get the analysis and report of
) curr_date (str): The current trading date you are trading on, YYYY-mm-dd
look_back_days (int): How many days to look back, default is 30
return result_stockstats Returns:
str: A formatted dataframe containing the stock stats indicators for the specified ticker symbol and indicator.
@staticmethod """
@tool
def get_finnhub_company_insider_sentiment( result_stockstats = interface.get_stock_stats_indicators_window(
ticker: Annotated[str, "ticker symbol for the company"], symbol, indicator, curr_date, look_back_days, True
curr_date: Annotated[ )
str,
"current date of you are trading at, yyyy-mm-dd", return result_stockstats
],
): @staticmethod
""" @tool
Retrieve insider sentiment information about a company (retrieved from public SEC information) for the past 30 days def get_finnhub_company_insider_sentiment(
Args: ticker: Annotated[str, "ticker symbol for the company"],
ticker (str): ticker symbol of the company curr_date: Annotated[
curr_date (str): current date you are trading at, yyyy-mm-dd str,
Returns: "current date of you are trading at, yyyy-mm-dd",
str: a report of the sentiment in the past 30 days starting at curr_date ],
""" ):
"""
data_sentiment = interface.get_finnhub_company_insider_sentiment( Retrieve insider sentiment information about a company (retrieved from public SEC information) for the past 30 days
ticker, curr_date, 30 Args:
) ticker (str): ticker symbol of the company
curr_date (str): current date you are trading at, yyyy-mm-dd
return data_sentiment Returns:
str: a report of the sentiment in the past 30 days starting at curr_date
@staticmethod """
@tool
def get_finnhub_company_insider_transactions( data_sentiment = interface.get_finnhub_company_insider_sentiment(
ticker: Annotated[str, "ticker symbol"], ticker, curr_date, 30
curr_date: Annotated[ )
str,
"current date you are trading at, yyyy-mm-dd", return data_sentiment
],
): @staticmethod
""" @tool
Retrieve insider transaction information about a company (retrieved from public SEC information) for the past 30 days def get_finnhub_company_insider_transactions(
Args: ticker: Annotated[str, "ticker symbol"],
ticker (str): ticker symbol of the company curr_date: Annotated[
curr_date (str): current date you are trading at, yyyy-mm-dd str,
Returns: "current date you are trading at, yyyy-mm-dd",
str: a report of the company's insider transactions/trading information in the past 30 days ],
""" ):
"""
data_trans = interface.get_finnhub_company_insider_transactions( Retrieve insider transaction information about a company (retrieved from public SEC information) for the past 30 days
ticker, curr_date, 30 Args:
) ticker (str): ticker symbol of the company
curr_date (str): current date you are trading at, yyyy-mm-dd
return data_trans Returns:
str: a report of the company's insider transactions/trading information in the past 30 days
@staticmethod """
@tool
def get_simfin_balance_sheet( data_trans = interface.get_finnhub_company_insider_transactions(
ticker: Annotated[str, "ticker symbol"], ticker, curr_date, 30
freq: Annotated[ )
str,
"reporting frequency of the company's financial history: annual/quarterly", return data_trans
],
curr_date: Annotated[str, "current date you are trading at, yyyy-mm-dd"], @staticmethod
): @tool
""" def get_simfin_balance_sheet(
Retrieve the most recent balance sheet of a company ticker: Annotated[str, "ticker symbol"],
Args: freq: Annotated[
ticker (str): ticker symbol of the company str,
freq (str): reporting frequency of the company's financial history: annual / quarterly "reporting frequency of the company's financial history: annual/quarterly",
curr_date (str): current date you are trading at, yyyy-mm-dd ],
Returns: curr_date: Annotated[str, "current date you are trading at, yyyy-mm-dd"],
str: a report of the company's most recent balance sheet ):
""" """
Retrieve the most recent balance sheet of a company
data_balance_sheet = interface.get_simfin_balance_sheet(ticker, freq, curr_date) Args:
ticker (str): ticker symbol of the company
return data_balance_sheet freq (str): reporting frequency of the company's financial history: annual / quarterly
curr_date (str): current date you are trading at, yyyy-mm-dd
@staticmethod Returns:
@tool str: a report of the company's most recent balance sheet
def get_simfin_cashflow( """
ticker: Annotated[str, "ticker symbol"],
freq: Annotated[ data_balance_sheet = interface.get_simfin_balance_sheet(ticker, freq, curr_date)
str,
"reporting frequency of the company's financial history: annual/quarterly", return data_balance_sheet
],
curr_date: Annotated[str, "current date you are trading at, yyyy-mm-dd"], @staticmethod
): @tool
""" def get_simfin_cashflow(
Retrieve the most recent cash flow statement of a company ticker: Annotated[str, "ticker symbol"],
Args: freq: Annotated[
ticker (str): ticker symbol of the company str,
freq (str): reporting frequency of the company's financial history: annual / quarterly "reporting frequency of the company's financial history: annual/quarterly",
curr_date (str): current date you are trading at, yyyy-mm-dd ],
Returns: curr_date: Annotated[str, "current date you are trading at, yyyy-mm-dd"],
str: a report of the company's most recent cash flow statement ):
""" """
Retrieve the most recent cash flow statement of a company
data_cashflow = interface.get_simfin_cashflow(ticker, freq, curr_date) Args:
ticker (str): ticker symbol of the company
return data_cashflow freq (str): reporting frequency of the company's financial history: annual / quarterly
curr_date (str): current date you are trading at, yyyy-mm-dd
@staticmethod Returns:
@tool str: a report of the company's most recent cash flow statement
def get_simfin_income_stmt( """
ticker: Annotated[str, "ticker symbol"],
freq: Annotated[ data_cashflow = interface.get_simfin_cashflow(ticker, freq, curr_date)
str,
"reporting frequency of the company's financial history: annual/quarterly", return data_cashflow
],
curr_date: Annotated[str, "current date you are trading at, yyyy-mm-dd"], @staticmethod
): @tool
""" def get_simfin_income_stmt(
Retrieve the most recent income statement of a company ticker: Annotated[str, "ticker symbol"],
Args: freq: Annotated[
ticker (str): ticker symbol of the company str,
freq (str): reporting frequency of the company's financial history: annual / quarterly "reporting frequency of the company's financial history: annual/quarterly",
curr_date (str): current date you are trading at, yyyy-mm-dd ],
Returns: curr_date: Annotated[str, "current date you are trading at, yyyy-mm-dd"],
str: a report of the company's most recent income statement ):
""" """
Retrieve the most recent income statement of a company
data_income_stmt = interface.get_simfin_income_statements( Args:
ticker, freq, curr_date ticker (str): ticker symbol of the company
) freq (str): reporting frequency of the company's financial history: annual / quarterly
curr_date (str): current date you are trading at, yyyy-mm-dd
return data_income_stmt Returns:
str: a report of the company's most recent income statement
@staticmethod """
@tool
def get_google_news( data_income_stmt = interface.get_simfin_income_statements(
query: Annotated[str, "Query to search with"], ticker, freq, curr_date
curr_date: Annotated[str, "Curr date in yyyy-mm-dd format"], )
):
""" return data_income_stmt
Retrieve the latest news from Google News based on a query and date range.
Args: @staticmethod
query (str): Query to search with @tool
curr_date (str): Current date in yyyy-mm-dd format def get_google_news(
look_back_days (int): How many days to look back query: Annotated[str, "Query to search with"],
Returns: curr_date: Annotated[str, "Curr date in yyyy-mm-dd format"],
str: A formatted string containing the latest news from Google News based on the query and date range. ):
""" """
Retrieve the latest news from Google News based on a query and date range.
google_news_results = interface.get_google_news(query, curr_date, 7) Args:
query (str): Query to search with
return google_news_results curr_date (str): Current date in yyyy-mm-dd format
look_back_days (int): How many days to look back
@staticmethod Returns:
@tool str: A formatted string containing the latest news from Google News based on the query and date range.
def get_stock_news_openai( """
ticker: Annotated[str, "the company's ticker"],
curr_date: Annotated[str, "Current date in yyyy-mm-dd format"], google_news_results = interface.get_google_news(query, curr_date, 7)
):
""" return google_news_results
Retrieve the latest news about a given stock by using OpenAI's news API.
Args: @staticmethod
ticker (str): Ticker of a company. e.g. AAPL, TSM @tool
curr_date (str): Current date in yyyy-mm-dd format def get_stock_news(
Returns: ticker: Annotated[str, "the company's ticker"],
str: A formatted string containing the latest news about the company on the given date. curr_date: Annotated[str, "Current date in yyyy-mm-dd format"],
""" ):
"""
openai_news_results = interface.get_stock_news_openai(ticker, curr_date) Retrieve the latest news about a given stock by using LLM's web search capabilities.
Args:
return openai_news_results ticker (str): Ticker of a company. e.g. AAPL, TSM
curr_date (str): Current date in yyyy-mm-dd format
@staticmethod Returns:
@tool str: A formatted string containing the latest news about the company on the given date.
def get_global_news_openai( """
curr_date: Annotated[str, "Current date in yyyy-mm-dd format"],
): results = interface.get_stock_news(ticker, curr_date)
"""
Retrieve the latest macroeconomics news on a given date using OpenAI's macroeconomics news API. return results
Args:
curr_date (str): Current date in yyyy-mm-dd format @staticmethod
Returns: @tool
str: A formatted string containing the latest macroeconomic news on the given date. def get_global_news(
""" curr_date: Annotated[str, "Current date in yyyy-mm-dd format"],
):
openai_news_results = interface.get_global_news_openai(curr_date) """
Retrieve the latest macroeconomics news on a given date using LLM's web search capabilities.
return openai_news_results Args:
curr_date (str): Current date in yyyy-mm-dd format
@staticmethod Returns:
@tool str: A formatted string containing the latest macroeconomic news on the given date.
def get_fundamentals_openai( """
ticker: Annotated[str, "the company's ticker"],
curr_date: Annotated[str, "Current date in yyyy-mm-dd format"], results = interface.get_global_news(curr_date)
):
""" return results
Retrieve the latest fundamental information about a given stock on a given date by using OpenAI's news API.
Args: @staticmethod
ticker (str): Ticker of a company. e.g. AAPL, TSM @tool
curr_date (str): Current date in yyyy-mm-dd format def get_fundamentals(
Returns: ticker: Annotated[str, "the company's ticker"],
str: A formatted string containing the latest fundamental information about the company on the given date. curr_date: Annotated[str, "Current date in yyyy-mm-dd format"],
""" ):
"""
openai_fundamentals_results = interface.get_fundamentals_openai( Retrieve the latest fundamental information about a given stock on a given date by using LLM's web search capabilities.
ticker, curr_date Args:
) ticker (str): Ticker of a company. e.g. AAPL, TSM
curr_date (str): Current date in yyyy-mm-dd format
return openai_fundamentals_results Returns:
str: A formatted string containing the latest fundamental information about the company on the given date.
"""
results = interface.get_fundamentals(
ticker, curr_date
)
return results
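A note on the pattern above: the Toolkit keeps one configuration dict as a class attribute, so `update_config` affects every instance. A minimal standalone sketch of that behavior (the DEFAULT_CONFIG keys here are illustrative stand-ins, not the repo's actual defaults):

```python
# Sketch of Toolkit's class-level config sharing: _config is a single
# dict owned by the class, so updating it through any path is visible
# to every instance. The keys below are illustrative only.
DEFAULT_CONFIG = {"backend_url": "https://api.openai.com/v1", "online_tools": False}


class Toolkit:
    _config = DEFAULT_CONFIG.copy()

    @classmethod
    def update_config(cls, config):
        """Merge new settings into the shared class-level config."""
        cls._config.update(config)

    @property
    def config(self):
        """Access the shared configuration."""
        return self._config


a = Toolkit()
Toolkit.update_config({"online_tools": True})
b = Toolkit()
print(a.config["online_tools"])  # True — both instances see the same dict
```

This is why the graph setup can call `Toolkit.update_config(...)` once at startup and have every `@tool` method pick up the change.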

View File

@@ -0,0 +1,20 @@
from .embedding_providers import (
    EmbeddingProvider,
    OpenAIEmbeddingProvider,
    GeminiEmbeddingProvider,
    OllamaEmbeddingProvider
)


class EmbeddingProviderFactory:
    @staticmethod
    def create_provider(config: dict[str, any]) -> EmbeddingProvider:
        backend_url = config["backend_url"]
        if "generativelanguage.googleapis.com" in backend_url:
            return GeminiEmbeddingProvider(backend_url)
        elif "localhost:11434" in backend_url:
            return OllamaEmbeddingProvider(backend_url)
        else:
            return OpenAIEmbeddingProvider(backend_url)
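The factory's routing is a plain substring match on the configured `backend_url`. A minimal standalone sketch of that rule (string names stand in for the real provider classes in `embedding_providers.py`):

```python
# Sketch of EmbeddingProviderFactory.create_provider's routing: the
# provider is chosen purely by substring-matching the backend URL,
# with OpenAI-compatible endpoints as the fallback.
def pick_provider(backend_url: str) -> str:
    if "generativelanguage.googleapis.com" in backend_url:
        return "GeminiEmbeddingProvider"
    elif "localhost:11434" in backend_url:
        return "OllamaEmbeddingProvider"
    else:
        return "OpenAIEmbeddingProvider"


print(pick_provider("http://localhost:11434/v1"))  # OllamaEmbeddingProvider
print(pick_provider("https://api.openai.com/v1"))  # OpenAIEmbeddingProvider
```

One consequence of this design: any unrecognized URL (a self-hosted vLLM server, for example) silently falls through to the OpenAI-compatible provider, which is usually the intended behavior for OpenAI-style endpoints.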

View File

@@ -0,0 +1,66 @@
from abc import ABC, abstractmethod
from openai import OpenAI
from google import genai


class EmbeddingProvider(ABC):
    @abstractmethod
    def get_embedding(self, text: str) -> list[float]:
        pass

    @property
    @abstractmethod
    def model_name(self) -> str:
        pass


class OpenAIEmbeddingProvider(EmbeddingProvider):
    def __init__(self, backend_url: str, embedding_model: str = "text-embedding-3-small"):
        self.client = OpenAI(base_url=backend_url)
        self._embedding_model = embedding_model

    def get_embedding(self, text: str) -> list[float]:
        response = self.client.embeddings.create(
            model=self._embedding_model,
            input=text
        )
        return response.data[0].embedding

    @property
    def model_name(self) -> str:
        return self._embedding_model


class GeminiEmbeddingProvider(EmbeddingProvider):
    def __init__(self, backend_url: str, embedding_model: str = "gemini-embedding-exp-03-07"):
        self.client = genai.Client()
        self._embedding_model = embedding_model

    def get_embedding(self, text: str) -> list[float]:
        response = self.client.models.embed_content(
            model=self._embedding_model,
            contents=text
        )
        return response.embeddings[0].values

    @property
    def model_name(self) -> str:
        return self._embedding_model


class OllamaEmbeddingProvider(EmbeddingProvider):
    def __init__(self, backend_url: str, embedding_model: str = "nomic-embed-text"):
        self.client = OpenAI(base_url=backend_url)
        self._embedding_model = embedding_model

    def get_embedding(self, text: str) -> list[float]:
        response = self.client.embeddings.create(
            model=self._embedding_model,
            input=text
        )
        return response.data[0].embedding

    @property
    def model_name(self) -> str:
        return self._embedding_model

View File

@@ -1,110 +1,112 @@
import chromadb
from chromadb.config import Settings
from openai import OpenAI
import os
from .embedding_provider_factory import EmbeddingProviderFactory
from google import genai


class FinancialSituationMemory:
    def __init__(self, name, config):
        self.config = config
        self.backend_url = config["backend_url"]

        self.embedding_provider = EmbeddingProviderFactory.create_provider(config)
        self.chroma_client = chromadb.Client(Settings(allow_reset=True))
        self.situation_collection = self.chroma_client.create_collection(name=name)

    def get_embedding(self, text):
        """Get embedding for a text using the appropriate API"""
        return self.embedding_provider.get_embedding(text)

    def add_situations(self, situations_and_advice):
        """Add financial situations and their corresponding advice. Parameter is a list of tuples (situation, rec)"""

        situations = []
        advice = []
        ids = []
        embeddings = []

        offset = self.situation_collection.count()

        for i, (situation, recommendation) in enumerate(situations_and_advice):
            situations.append(situation)
            advice.append(recommendation)
            ids.append(str(offset + i))
            embeddings.append(self.get_embedding(situation))

        self.situation_collection.add(
            documents=situations,
            metadatas=[{"recommendation": rec} for rec in advice],
            embeddings=embeddings,
            ids=ids,
        )

    def get_memories(self, current_situation, n_matches=1):
        """Find matching recommendations using embeddings"""
        query_embedding = self.get_embedding(current_situation)

        results = self.situation_collection.query(
            query_embeddings=[query_embedding],
            n_results=n_matches,
            include=["metadatas", "documents", "distances"],
        )

        matched_results = []
        for i in range(len(results["documents"][0])):
            matched_results.append(
                {
                    "matched_situation": results["documents"][0][i],
                    "recommendation": results["metadatas"][0][i]["recommendation"],
                    "similarity_score": 1 - results["distances"][0][i],
                }
            )

        return matched_results


if __name__ == "__main__":
    # Example usage
    matcher = FinancialSituationMemory()

    # Example data
    example_data = [
        (
            "High inflation rate with rising interest rates and declining consumer spending",
            "Consider defensive sectors like consumer staples and utilities. Review fixed-income portfolio duration.",
        ),
        (
            "Tech sector showing high volatility with increasing institutional selling pressure",
            "Reduce exposure to high-growth tech stocks. Look for value opportunities in established tech companies with strong cash flows.",
        ),
        (
            "Strong dollar affecting emerging markets with increasing forex volatility",
            "Hedge currency exposure in international positions. Consider reducing allocation to emerging market debt.",
        ),
        (
            "Market showing signs of sector rotation with rising yields",
            "Rebalance portfolio to maintain target allocations. Consider increasing exposure to sectors benefiting from higher rates.",
        ),
    ]

    # Add the example situations and recommendations
    matcher.add_situations(example_data)

    # Example query
    current_situation = """
    Market showing increased volatility in tech sector, with institutional investors
    reducing positions and rising interest rates affecting growth stock valuations
    """

    try:
        recommendations = matcher.get_memories(current_situation, n_matches=2)

        for i, rec in enumerate(recommendations, 1):
            print(f"\nMatch {i}:")
            print(f"Similarity Score: {rec['similarity_score']:.2f}")
            print(f"Matched Situation: {rec['matched_situation']}")
            print(f"Recommendation: {rec['recommendation']}")

    except Exception as e:
        print(f"Error during recommendation: {str(e)}")
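The `similarity_score` that `get_memories` reports is derived directly from Chroma's per-match distance. A minimal sketch of that conversion (assuming a cosine-style distance where 0 means identical embeddings):

```python
# Sketch of the similarity_score computation in get_memories: Chroma
# returns a distance for each match, and the memory reports 1 - distance,
# so a distance of 0 (identical embeddings) maps to a score of 1.0.
def similarity_from_distance(distance: float) -> float:
    return 1 - distance


print(similarity_from_distance(0.0))   # 1.0
print(similarity_from_distance(0.25))  # 0.75
```

Note this assumes the collection's distance metric keeps distances near [0, 1] for related texts; with other metrics the "score" can fall outside that range.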

View File

@@ -14,6 +14,7 @@ from tqdm import tqdm
 import yfinance as yf
 from openai import OpenAI
 from .config import get_config, set_config, DATA_DIR
+from .search_provider_factory import SearchProviderFactory


 def get_finnhub_news(
@@ -628,7 +629,7 @@ def get_YFin_data_window(
 def get_YFin_data_online(
     symbol: Annotated[str, "ticker symbol of the company"],
     start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
-    end_date: Annotated[str, "Start date in yyyy-mm-dd format"],
+    end_date: Annotated[str, "End date in yyyy-mm-dd format"],
 ):

     datetime.strptime(start_date, "%Y-%m-%d")
@@ -670,7 +671,7 @@ def get_YFin_data_online(
 def get_YFin_data(
     symbol: Annotated[str, "ticker symbol of the company"],
     start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
-    end_date: Annotated[str, "Start date in yyyy-mm-dd format"],
+    end_date: Annotated[str, "End date in yyyy-mm-dd format"],
 ) -> str:
     # read in data
     data = pd.read_csv(
@ -702,103 +703,25 @@ def get_YFin_data(
     return filtered_data


-def get_stock_news_openai(ticker, curr_date):
-    client = OpenAI()
-
-    response = client.responses.create(
-        model="gpt-4.1-mini",
-        input=[
-            {
-                "role": "system",
-                "content": [
-                    {
-                        "type": "input_text",
-                        "text": f"Can you search Social Media for {ticker} from 7 days before {curr_date} to {curr_date}? Make sure you only get the data posted during that period.",
-                    }
-                ],
-            }
-        ],
-        text={"format": {"type": "text"}},
-        reasoning={},
-        tools=[
-            {
-                "type": "web_search_preview",
-                "user_location": {"type": "approximate"},
-                "search_context_size": "low",
-            }
-        ],
-        temperature=1,
-        max_output_tokens=4096,
-        top_p=1,
-        store=True,
-    )
-    return response.output[1].content[0].text
-
-
-def get_global_news_openai(curr_date):
-    client = OpenAI()
-
-    response = client.responses.create(
-        model="gpt-4.1-mini",
-        input=[
-            {
-                "role": "system",
-                "content": [
-                    {
-                        "type": "input_text",
-                        "text": f"Can you search global or macroeconomics news from 7 days before {curr_date} to {curr_date} that would be informative for trading purposes? Make sure you only get the data posted during that period.",
-                    }
-                ],
-            }
-        ],
-        text={"format": {"type": "text"}},
-        reasoning={},
-        tools=[
-            {
-                "type": "web_search_preview",
-                "user_location": {"type": "approximate"},
-                "search_context_size": "low",
-            }
-        ],
-        temperature=1,
-        max_output_tokens=4096,
-        top_p=1,
-        store=True,
-    )
-    return response.output[1].content[0].text
-
-
-def get_fundamentals_openai(ticker, curr_date):
-    client = OpenAI()
-
-    response = client.responses.create(
-        model="gpt-4.1-mini",
-        input=[
-            {
-                "role": "system",
-                "content": [
-                    {
-                        "type": "input_text",
-                        "text": f"Can you search Fundamental for discussions on {ticker} during of the month before {curr_date} to the month of {curr_date}. Make sure you only get the data posted during that period. List as a table, with PE/PS/Cash flow/ etc",
-                    }
-                ],
-            }
-        ],
-        text={"format": {"type": "text"}},
-        reasoning={},
-        tools=[
-            {
-                "type": "web_search_preview",
-                "user_location": {"type": "approximate"},
-                "search_context_size": "low",
-            }
-        ],
-        temperature=1,
-        max_output_tokens=4096,
-        top_p=1,
-        store=True,
-    )
-    return response.output[1].content[0].text
+def get_stock_news(ticker, curr_date):
+    config = get_config()
+    search_provider = SearchProviderFactory.create_provider(config)
+    query = f"Can you search Social Media for {ticker} from 7 days before {curr_date} to {curr_date}? Make sure you only get the data posted during that period."
+    return search_provider.search(query)
+
+
+def get_global_news(curr_date):
+    config = get_config()
+    search_provider = SearchProviderFactory.create_provider(config)
+    query = f"Search for global macroeconomic news and financial market updates from 7 days before {curr_date} to {curr_date}. Focus on central bank decisions, economic indicators, geopolitical events, and market-moving news that would be important for trading decisions."
+    return search_provider.search(query)
+
+
+def get_fundamentals(ticker, curr_date):
+    config = get_config()
+    search_provider = SearchProviderFactory.create_provider(config)
+    query = f"Search for fundamental analysis data and financial metrics for {ticker} stock from the month before {curr_date} to the month of {curr_date}. Look for earnings reports, financial ratios like PE, PS, cash flow, revenue growth, analyst ratings, and any fundamental analysis discussions. Please present key metrics in a structured format."
+    return search_provider.search(query)
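The refactor above collapses three provider-specific helpers into thin wrappers that build a query string and delegate to a shared search provider. A minimal sketch of that shape, with a stub `EchoProvider` and `make_provider` standing in for the real `get_config`/`SearchProviderFactory` (both stand-ins are assumptions, not the project's API):

```python
# Stand-ins for the real factory and provider, to show the wrapper shape only.
class EchoProvider:
    def search(self, query: str) -> str:
        return f"results for: {query}"


def make_provider(config):
    # The real factory inspects config["backend_url"]; this stub ignores it.
    return EchoProvider()


def get_stock_news(ticker, curr_date, config=None):
    # Build the query once, then delegate; no per-backend branching here.
    provider = make_provider(config or {})
    query = (
        f"Can you search Social Media for {ticker} from 7 days before "
        f"{curr_date} to {curr_date}? Make sure you only get the data "
        f"posted during that period."
    )
    return provider.search(query)


print(get_stock_news("AAPL", "2025-07-01"))
```

The point of the shape is that adding a new backend touches only the factory, never the three wrappers.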


@@ -0,0 +1,76 @@
from google import genai
from google.genai.types import Tool, GenerateContentConfig, GoogleSearch
from openai import OpenAI
from abc import ABC, abstractmethod
class SearchProvider(ABC):
@abstractmethod
    def search(self, query: str) -> str:
pass
class GoogleSearchProvider(SearchProvider):
def __init__(self, model: str):
self.client = genai.Client()
self.model = model
def search(self, query: str) -> str:
google_search_tool = Tool(
google_search=GoogleSearch()
)
response = self.client.models.generate_content(
model=self.model,
contents=query,
config=GenerateContentConfig(
tools=[google_search_tool],
response_modalities=["TEXT"]
)
)
result_text = ""
for part in response.candidates[0].content.parts:
if hasattr(part, 'text'):
result_text += part.text
return result_text
class OpenAISearchProvider(SearchProvider):
def __init__(self, model: str, backend_url: str):
self.client = OpenAI(base_url=backend_url)
self.model = model
def search(self, query: str) -> str:
response = self.client.responses.create(
model=self.model,
input=[
{
"role": "system",
"content": [
{
"type": "input_text",
"text": query
}
],
}
],
text={"format": {"type": "text"}},
reasoning={},
tools=[
{
"type": "web_search_preview",
"user_location": {"type": "approximate"},
"search_context_size": "low",
}
],
temperature=1,
max_output_tokens=4096,
top_p=1,
store=True,
)
return response.output[1].content[0].text
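Because `SearchProvider` is an ABC with a single `search` method, adding another backend is one subclass. A hypothetical fixture provider for offline tests (not part of the diff above) might look like this:

```python
from abc import ABC, abstractmethod


class SearchProvider(ABC):
    @abstractmethod
    def search(self, query: str) -> str:
        ...


# Hypothetical test double: returns canned answers instead of calling an API.
class FixtureSearchProvider(SearchProvider):
    def __init__(self, canned: dict):
        self.canned = canned

    def search(self, query: str) -> str:
        # Return the first canned answer whose key appears in the query.
        for key, answer in self.canned.items():
            if key in query:
                return answer
        return "no results"


provider = FixtureSearchProvider({"TSLA": "TSLA: mixed sentiment this week"})
print(provider.search("Can you search Social Media for TSLA this week?"))
```

Anything downstream that only depends on the `search(query)` contract can run against this double unchanged.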


@@ -0,0 +1,47 @@
from .search_provider import (
SearchProvider,
GoogleSearchProvider,
OpenAISearchProvider
)
import hashlib
import json
from typing import Any


class SearchProviderFactory:
    _cache = {}  # class-level cache

    @staticmethod
    def create_provider(config: dict[str, Any]) -> SearchProvider:
"""
Create a SearchProvider with caching to avoid creating new instances.
Uses config hash as cache key for efficient reuse.
"""
# Create cache key from relevant config values
cache_key_data = {
"backend_url": config["backend_url"],
"model": config["quick_think_llm"]
}
cache_key = hashlib.md5(json.dumps(cache_key_data, sort_keys=True).encode()).hexdigest()
# Return cached instance if exists
if cache_key in SearchProviderFactory._cache:
return SearchProviderFactory._cache[cache_key]
# Create new instance
backend_url = config["backend_url"]
model = config["quick_think_llm"]
if "generativelanguage.googleapis.com" in backend_url:
provider = GoogleSearchProvider(model)
else:
provider = OpenAISearchProvider(model, backend_url)
# Cache and return
SearchProviderFactory._cache[cache_key] = provider
return provider
@staticmethod
def clear_cache():
"""Clear the provider cache (useful for testing or config changes)."""
SearchProviderFactory._cache.clear()
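The factory's class-level cache keys on an MD5 hash of just the config fields that matter, so two equal configs (even distinct dict objects) reuse one provider instance. The caching core can be reproduced in isolation with a stub provider class (a sketch; the real providers need API clients):

```python
import hashlib
import json


class StubProvider:
    def __init__(self, model: str, backend_url: str):
        self.model = model
        self.backend_url = backend_url


class ProviderCache:
    _cache = {}  # class-level cache, shared across calls

    @staticmethod
    def create(config: dict) -> StubProvider:
        # Hash only the fields that determine provider identity.
        key_data = {
            "backend_url": config["backend_url"],
            "model": config["quick_think_llm"],
        }
        key = hashlib.md5(
            json.dumps(key_data, sort_keys=True).encode()
        ).hexdigest()
        if key not in ProviderCache._cache:
            ProviderCache._cache[key] = StubProvider(
                key_data["model"], key_data["backend_url"]
            )
        return ProviderCache._cache[key]


cfg = {"backend_url": "https://api.openai.com/v1", "quick_think_llm": "gpt-4o-mini"}
a = ProviderCache.create(cfg)
b = ProviderCache.create(dict(cfg))  # equal content, distinct dict object
print(a is b)  # True: same cached instance
```

`sort_keys=True` is what makes the hash stable regardless of dict insertion order.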


@@ -1,19 +1,22 @@
 import os

 DEFAULT_CONFIG = {
     "project_dir": os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
+    "results_dir": os.getenv("TRADINGAGENTS_RESULTS_DIR", "./results"),
     "data_dir": "/Users/yluo/Documents/Code/ScAI/FR1-data",
     "data_cache_dir": os.path.join(
         os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
         "dataflows/data_cache",
     ),
     # LLM settings
+    "llm_provider": "openai",
     "deep_think_llm": "o4-mini",
     "quick_think_llm": "gpt-4o-mini",
+    "backend_url": "https://api.openai.com/v1",
     # Debate and discussion settings
     "max_debate_rounds": 1,
     "max_risk_discuss_rounds": 1,
     "max_recur_limit": 100,
     # Tool settings
     "online_tools": True,
 }
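Since `DEFAULT_CONFIG` is a plain module-level dict, per-run settings are typically made on a copy rather than by mutating the shared default. A sketch (the keys mirror the diff; the override values are arbitrary examples):

```python
import os

# Trimmed copy of the defaults from the diff above.
DEFAULT_CONFIG = {
    "results_dir": os.getenv("TRADINGAGENTS_RESULTS_DIR", "./results"),
    "llm_provider": "openai",
    "deep_think_llm": "o4-mini",
    "quick_think_llm": "gpt-4o-mini",
    "backend_url": "https://api.openai.com/v1",
    "online_tools": True,
}

# Copy, then override -- the module-level DEFAULT_CONFIG stays untouched.
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "google"          # arbitrary example override
config["quick_think_llm"] = "gemini-flash" # hypothetical model name

print(DEFAULT_CONFIG["llm_provider"], config["llm_provider"])
```

The `os.getenv` default means `results_dir` can also be redirected per-environment without touching code.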


@@ -2,8 +2,6 @@
 from typing import Dict, Any
 from langchain_openai import ChatOpenAI
-import json
-import re


 class Reflector:
@@ -17,6 +15,8 @@ class Reflector:
     def _get_reflection_prompt(self) -> str:
         """Get the system prompt for reflection."""
         return """
+**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)
 You are an expert financial analyst tasked with reviewing trading decisions/analysis and providing a comprehensive, step-by-step analysis.
 Your goal is to deliver detailed insights into investment decisions and highlight opportunities for improvement, adhering strictly to the following guidelines:
@@ -121,59 +121,3 @@ Adhere strictly to these instructions, and ensure your output is detailed, accur
             "RISK JUDGE", judge_decision, situation, returns_losses
         )
         risk_manager_memory.add_situations([(situation, result)])
-
-    @staticmethod
-    def generate_final_report(final_state: dict) -> str:
-        """
-        Generate a final, comprehensive report from the final state, ensuring
-        all parts are combined into a single, valid JSON object.
-        """
-        final_report_json = {
-            "company_info": {
-                "ticker": final_state.get('company_of_interest', 'N/A'),
-                "analysis_date": final_state.get('trade_date', 'N/A')
-            },
-            "reports": {},
-            "final_decision": {}
-        }
-
-        def extract_json(text: str) -> dict:
-            """Extracts a JSON object from a string, even if it's embedded in other text."""
-            if not isinstance(text, str):
-                return {}  # Return empty dict if not a string
-            # Find the start and end of the JSON object
-            match = re.search(r'\{.*\}', text, re.DOTALL)
-            if match:
-                json_str = match.group(0)
-                try:
-                    return json.loads(json_str)
-                except json.JSONDecodeError:
-                    return {"error": "Failed to decode JSON", "original_text": json_str}
-            return {"error": "No JSON object found", "original_text": text}
-
-        # Process each report
-        report_keys = ['market_report', 'sentiment_report', 'news_report', 'fundamentals_report']
-        for key in report_keys:
-            if final_state.get(key):
-                report_name = key.replace('_report', '')
-                final_report_json['reports'][report_name] = extract_json(final_state[key])
-
-        # Add investment debate summary
-        if final_state.get('investment_debate_state'):
-            final_report_json['reports']['investment_debate'] = {
-                "summary": final_state['investment_debate_state'].get('judge_decision', 'N/A')
-            }
-
-        # Add final plan and decision
-        if final_state.get('investment_plan'):
-            final_report_json['final_decision']['investment_plan'] = final_state['investment_plan']
-
-        if final_state.get('final_trade_decision'):
-            # Extract the final proposal (BUY/HOLD/SELL)
-            proposal_match = re.search(r'FINAL TRANSACTION PROPOSAL:\s*\*{2}(.*?)\*{2}', final_state['final_trade_decision'])
-            proposal = proposal_match.group(1) if proposal_match else 'N/A'
-            final_report_json['final_decision']['final_proposal'] = proposal
-            final_report_json['final_decision']['full_text'] = final_state['final_trade_decision']
-
-        return json.dumps(final_report_json, ensure_ascii=False, indent=4)
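The removed `generate_final_report` leaned on a small helper that pulls a JSON object out of surrounding LLM prose with a greedy `\{.*\}` match. That technique stands alone and is worth keeping in mind:

```python
import json
import re


def extract_json(text: str) -> dict:
    """Extract a JSON object from a string, even if embedded in other text."""
    if not isinstance(text, str):
        return {}
    # Greedy match: spans from the first '{' to the last '}' in the text.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        json_str = match.group(0)
        try:
            return json.loads(json_str)
        except json.JSONDecodeError:
            return {"error": "Failed to decode JSON", "original_text": json_str}
    return {"error": "No JSON object found", "original_text": text}


print(extract_json('Report follows: {"signal": "BUY", "confidence": 0.7} -- end'))
```

Note the known limitation of the greedy match: if the text contains two separate JSON objects, the span from the first `{` to the last `}` is not valid JSON, and the helper falls through to the decode-error branch.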


@@ -1,31 +1,31 @@
 # TradingAgents/graph/signal_processing.py

 from langchain_openai import ChatOpenAI


 class SignalProcessor:
     """Processes trading signals to extract actionable decisions."""

     def __init__(self, quick_thinking_llm: ChatOpenAI):
         """Initialize with an LLM for processing."""
         self.quick_thinking_llm = quick_thinking_llm

     def process_signal(self, full_signal: str) -> str:
         """
         Process a full trading signal to extract the core decision.

         Args:
             full_signal: Complete trading signal text

         Returns:
             Extracted decision (BUY, SELL, or HOLD)
         """
         messages = [
             (
                 "system",
-                "You are an efficient assistant designed to analyze paragraphs or financial reports provided by a group of analysts. Your task is to extract the investment decision: SELL, BUY, or HOLD. Provide only the extracted decision (SELL, BUY, or HOLD) as your output, without adding any additional text or information.",
+                "**IMPORTANT THING** Respond in Korean(한국어로 대답해주세요)\n\nYou are an efficient assistant designed to analyze paragraphs or financial reports provided by a group of analysts. Your task is to extract the investment decision: SELL, BUY, or HOLD. Provide only the extracted decision (SELL, BUY, or HOLD) as your output, without adding any additional text or information.",
             ),
             ("human", full_signal),
         ]

         return self.quick_thinking_llm.invoke(messages).content
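`process_signal` asks the quick-thinking LLM to return exactly BUY, SELL, or HOLD, but model output can still carry extra text (especially now that the system prompt also asks for Korean). A deterministic post-processing guard -- an addition sketched here, not part of the diff -- is cheap insurance:

```python
import re


def normalize_signal(raw: str) -> str:
    """Map arbitrary LLM output to BUY, SELL, or HOLD (default HOLD)."""
    # Case-insensitive whole-word match; first occurrence wins.
    match = re.search(r"\b(BUY|SELL|HOLD)\b", raw.upper())
    return match.group(1) if match else "HOLD"


print(normalize_signal("Final answer: **buy**"))  # BUY
```

Wrapping the LLM call as `normalize_signal(self.quick_thinking_llm.invoke(messages).content)` would keep downstream consumers insulated from formatting drift.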


@@ -5,9 +5,11 @@ from pathlib import Path
 import json
 from datetime import date
 from typing import Dict, Any, Tuple, List, Optional
-import asyncio

 from langchain_openai import ChatOpenAI
+from langchain_anthropic import ChatAnthropic
+from langchain_google_genai import ChatGoogleGenerativeAI
 from langgraph.prebuilt import ToolNode

 from tradingagents.agents import *
@@ -32,20 +34,19 @@ class TradingAgentsGraph:
     def __init__(
         self,
-        config: Dict[str, Any] = None,
-        progress_callback=None,
+        selected_analysts=["market", "social", "news", "fundamentals"],
         debug=False,
+        config: Dict[str, Any] = None,
     ):
         """Initialize the trading agents graph and components.

         Args:
-            config: Configuration dictionary. If None, uses default config
-            progress_callback: Async function to send progress updates
+            selected_analysts: List of analyst types to include
             debug: Whether to run in debug mode
+            config: Configuration dictionary. If None, uses default config
         """
         self.debug = debug
         self.config = config or DEFAULT_CONFIG
-        self.progress_callback = progress_callback

         # Update the interface's config
         set_config(self.config)
@@ -57,18 +58,26 @@ class TradingAgentsGraph:
         )

         # Initialize LLMs
-        self.deep_thinking_llm = ChatOpenAI(model=self.config["deep_think_llm"])
-        self.quick_thinking_llm = ChatOpenAI(
-            model=self.config["quick_think_llm"], temperature=0.1
-        )
+        if self.config["llm_provider"].lower() == "openai" or self.config["llm_provider"] == "ollama" or self.config["llm_provider"] == "openrouter":
+            self.deep_thinking_llm = ChatOpenAI(model=self.config["deep_think_llm"], base_url=self.config["backend_url"])
+            self.quick_thinking_llm = ChatOpenAI(model=self.config["quick_think_llm"], base_url=self.config["backend_url"])
+        elif self.config["llm_provider"].lower() == "anthropic":
+            self.deep_thinking_llm = ChatAnthropic(model=self.config["deep_think_llm"], base_url=self.config["backend_url"])
+            self.quick_thinking_llm = ChatAnthropic(model=self.config["quick_think_llm"], base_url=self.config["backend_url"])
+        elif self.config["llm_provider"].lower() == "google":
+            self.deep_thinking_llm = ChatGoogleGenerativeAI(model=self.config["deep_think_llm"])
+            self.quick_thinking_llm = ChatGoogleGenerativeAI(model=self.config["quick_think_llm"])
+        else:
+            raise ValueError(f"Unsupported LLM provider: {self.config['llm_provider']}")
         self.toolkit = Toolkit(config=self.config)

         # Initialize memories
-        self.bull_memory = FinancialSituationMemory("bull_memory")
-        self.bear_memory = FinancialSituationMemory("bear_memory")
-        self.trader_memory = FinancialSituationMemory("trader_memory")
-        self.invest_judge_memory = FinancialSituationMemory("invest_judge_memory")
-        self.risk_manager_memory = FinancialSituationMemory("risk_manager_memory")
+        self.bull_memory = FinancialSituationMemory("bull_memory", self.config)
+        self.bear_memory = FinancialSituationMemory("bear_memory", self.config)
+        self.trader_memory = FinancialSituationMemory("trader_memory", self.config)
+        self.invest_judge_memory = FinancialSituationMemory("invest_judge_memory", self.config)
+        self.risk_manager_memory = FinancialSituationMemory("risk_manager_memory", self.config)

         # Create tool nodes
         self.tool_nodes = self._create_tool_nodes()
@@ -97,9 +106,8 @@ class TradingAgentsGraph:
         self.ticker = None
         self.log_states_dict = {}  # date to full state dict

-        # Set up the graph with default analysts initially
-        default_analysts = ["market", "social", "news", "fundamentals"]
-        self.graph = self.graph_setup.setup_graph(default_analysts)
+        # Set up the graph
+        self.graph = self.graph_setup.setup_graph(selected_analysts)

     def _create_tool_nodes(self) -> Dict[str, ToolNode]:
         """Create tool nodes for different data sources."""
@@ -117,7 +125,7 @@ class TradingAgentsGraph:
             "social": ToolNode(
                 [
                     # online tools
-                    self.toolkit.get_stock_news_openai,
+                    self.toolkit.get_stock_news,
                     # offline tools
                     self.toolkit.get_reddit_stock_info,
                 ]
@@ -125,7 +133,7 @@ class TradingAgentsGraph:
             "news": ToolNode(
                 [
                     # online tools
-                    self.toolkit.get_global_news_openai,
+                    self.toolkit.get_global_news,
                     self.toolkit.get_google_news,
                     # offline tools
                     self.toolkit.get_finnhub_news,
@@ -135,7 +143,7 @@ class TradingAgentsGraph:
             "fundamentals": ToolNode(
                 [
                     # online tools
-                    self.toolkit.get_fundamentals_openai,
+                    self.toolkit.get_fundamentals,
                     # offline tools
                     self.toolkit.get_finnhub_company_insider_sentiment,
                     self.toolkit.get_finnhub_company_insider_transactions,
@@ -146,55 +154,8 @@ class TradingAgentsGraph:
             ),
         }

-    def invoke(self, input_data: Dict) -> Dict:
-        """Run the trading agents graph for a web-based request."""
-        self.ticker = input_data.get("ticker", "UNKNOWN")
-        trade_date = input_data.get("date", date.today().strftime("%Y-%m-%d"))
-        selected_analysts = input_data.get("selected_analysts", [])
-
-        self.graph = self.graph_setup.setup_graph(selected_analysts)
-
-        init_agent_state = self.propagator.create_initial_state(
-            self.ticker, trade_date
-        )
-        args = self.propagator.get_graph_args()
-
-        final_report = ""
-        final_state_result = None
-
-        # Variables for progress calculation
-        total_steps = len(self.graph.nodes)
-        step_count = 0
-
-        # Stream the graph execution to get real-time updates
-        for chunk in self.graph.stream(init_agent_state, **args):
-            # Count each chunk as one step
-            step_count += 1
-            for node_name, node_output in chunk.items():
-                if self.progress_callback:
-                    agent_name = node_name.replace("_node", "").replace("_", " ").title()
-                    message = f"Step {step_count}/{total_steps}: {agent_name} is working..."
-                    # Invoke the callback with the computed progress
-                    asyncio.run(self.progress_callback(
-                        "agent_update",
-                        message,
-                        agent_name,
-                        step=step_count,
-                        total=total_steps
-                    ))
-            final_state_result = chunk
-
-        if final_state_result:
-            final_report = self.reflector.generate_final_report(final_state_result)
-            self._log_state(trade_date, final_state_result)
-
-        return {"final_report": final_report}
-
     def propagate(self, company_name, trade_date):
-        """Run the trading agents graph for a company on a specific date (CLI)."""
+        """Run the trading agents graph for a company on a specific date."""

         self.ticker = company_name
@@ -209,10 +170,20 @@ class TradingAgentsGraph:
             trace = []

             for chunk in self.graph.stream(init_agent_state, **args):
                 if len(chunk["messages"]) == 0:
-                    pass
-                else:
-                    chunk["messages"][-1].pretty_print()
-                    trace.append(chunk)
+                    continue
+
+                message = chunk["messages"][-1]
+
+                if message.content and message.content.strip():
+                    if "FINAL TRANSACTION PROPOSAL:" in message.content:
+                        if not hasattr(self, '_final_printed'):
+                            message.pretty_print()
+                            self._final_printed = True
+                    else:
+                        message.pretty_print()
+
+                trace.append(chunk)

             final_state = trace[-1]
         else:
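The `__init__` changes above select chat-model classes with an if/elif chain on `llm_provider`. The same dispatch can be written as a lookup table, which makes adding a provider a one-line change. This is a sketch of the pattern only: the classes here are stand-ins, not the real LangChain imports, and the model names are arbitrary:

```python
# Stand-ins for ChatOpenAI / ChatAnthropic / ChatGoogleGenerativeAI.
class OpenAIChat:
    def __init__(self, model, base_url=None):
        self.model, self.base_url = model, base_url


class AnthropicChat(OpenAIChat):
    pass


class GoogleChat(OpenAIChat):
    pass


PROVIDERS = {
    "openai": OpenAIChat,
    "ollama": OpenAIChat,
    "openrouter": OpenAIChat,
    "anthropic": AnthropicChat,
    "google": GoogleChat,
}


def make_llm(config: dict, model_key: str):
    provider = config["llm_provider"].lower()
    try:
        cls = PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"Unsupported LLM provider: {provider}") from None
    if cls is GoogleChat:
        # Mirrors the diff: the Google branch passes no base_url.
        return cls(config[model_key])
    return cls(config[model_key], base_url=config["backend_url"])


llm = make_llm(
    {"llm_provider": "anthropic", "deep_think_llm": "claude-x", "backend_url": "u"},
    "deep_think_llm",
)
print(type(llm).__name__)  # AnthropicChat
```

The table also normalizes the inconsistency in the diff, where two of the three `llm_provider` comparisons skip `.lower()`.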

uv.lock (new file, 5405 lines): file diff suppressed because it is too large.