Merge 4f88c4c6c2 into fa4d01c23a
This commit is contained in:
commit cd63214f06

.env.example (11 lines changed)
@@ -1,7 +1,14 @@
# LLM Providers (set the one you use)
# MiniMax via Anthropic-compatible API
MINIMAX_API_KEY=
ANTHROPIC_API_KEY=
ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic
TRADINGAGENTS_LLM_PROVIDER=anthropic
TRADINGAGENTS_MODEL=MiniMax-M2.7-highspeed
TRADINGAGENTS_BACKEND_URL=https://api.minimaxi.com/anthropic

# Other providers (optional)
OPENAI_API_KEY=
GOOGLE_API_KEY=
ANTHROPIC_API_KEY=
XAI_API_KEY=
DEEPSEEK_API_KEY=
DASHSCOPE_API_KEY=
.github/workflows/dashboard-tests.yml (new file)

@@ -0,0 +1,64 @@
name: Dashboard Tests

on:
  push:
    branches: [main, feat/**, fix/**]
    paths:
      - 'orchestrator/**/*.py'
      - 'tradingagents/**/*.py'
      - 'orchestrator/tests/**/*.py'
      - 'web_dashboard/backend/**/*.py'
      - 'web_dashboard/frontend/**/*.js'
      - '.github/workflows/dashboard-tests.yml'
  pull_request:
    paths:
      - 'orchestrator/**/*.py'
      - 'tradingagents/**/*.py'
      - 'orchestrator/tests/**/*.py'
      - 'web_dashboard/backend/**/*.py'
      - 'web_dashboard/frontend/**/*.js'
      - '.github/workflows/dashboard-tests.yml'

jobs:
  test-backend:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest pytest-asyncio httpx
          pip install -e . 2>/dev/null || true

      - name: Run orchestrator tests
        run: |
          python -m pytest orchestrator/tests/ -v --tb=short

      - name: Run backend tests
        working-directory: web_dashboard/backend
        run: |
          python -m pytest tests/ -v --tb=short

  test-frontend:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        working-directory: web_dashboard/frontend
        run: npm ci

      - name: Lint
        working-directory: web_dashboard/frontend
        run: npm run lint 2>/dev/null || true
@@ -1,3 +1,7 @@
# Git worktrees
.worktrees/
orchestrator/profile_runs/

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[codz]

@@ -217,3 +221,6 @@ __marimo__/

# Cache
**/data_cache/

# Orchestrator cache
orchestrator/cache/
CLAUDE.md (new file)

@@ -0,0 +1,144 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Language rules
- **Answer the user's questions in Chinese**

## Project overview

TradingAgents is a LangGraph-based multi-agent LLM trading framework that mirrors how a real trading firm operates: specialized LLM agents (fundamentals analyst, sentiment analyst, technical analyst, trader, and a risk-management team) collaborate to assess market conditions and make trading decisions.

## Common commands

```bash
# Activate the environment
source env312/bin/activate

# Interactive CLI mode (recommended)
python -m cli.main

# Single-stock analysis (programmatic)
python -c "from tradingagents.graph.trading_graph import TradingAgentsGraph; ta = TradingAgentsGraph(debug=True); _, decision = ta.propagate('NVDA', '2026-01-15'); print(decision)"

# Run tests
python -m pytest orchestrator/tests/

# Orchestrator backtest mode
QUANT_BACKTEST_PATH=/path/to/quant_backtest python orchestrator/examples/run_backtest.py

# Orchestrator live mode
QUANT_BACKTEST_PATH=/path/to/quant_backtest python orchestrator/examples/run_live.py
```

## Core architecture

### Workflow
```
Analyst team → Researcher debate → Trader → Risk-management debate → Portfolio manager
```

### Key components

**tradingagents/** - core multi-agent framework
- `agents/` - LLM agent implementations (analysts, researchers, trader, risk management)
- `dataflows/` - data-source integrations, routed through `interface.py` to yfinance/alpha_vantage/china_data
- `graph/` - LangGraph workflow orchestration; `trading_graph.py` is the main coordinator
- `llm_clients/` - multi-provider LLM support (OpenAI, Anthropic, Google, xAI, OpenRouter, Ollama)
- `default_config.py` - default configuration (LLM provider, model selection, data-source routing, debate rounds)

**orchestrator/** - quant + LLM signal-fusion layer
- `orchestrator.py` - main coordinator; fuses quant and LLM signals
- `quant_runner.py` - fetches quant signals
- `llm_runner.py` - fetches LLM signals (invokes TradingAgentsGraph)
- `signals.py` - signal-merging logic
- `backtest_mode.py` / `live_mode.py` - backtest / live run modes
- `contracts/` - configuration and result contract definitions

**cli/** - interactive command-line interface
- `main.py` - Typer CLI entry point; shows agent status and reports in real time

## Configuration system

### TradingAgents configuration (`tradingagents/default_config.py`)

Key settings that can be overridden at runtime:
- `llm_provider`: "openai" | "google" | "anthropic" | "xai" | "openrouter" | "ollama"
- `deep_think_llm`: model for complex reasoning (local default `MiniMax-M2.7-highspeed`)
- `quick_think_llm`: model for fast tasks (local default `MiniMax-M2.7-highspeed`)
- `backend_url`: LLM API endpoint
- `data_vendors`: data sources per category (core_stock_apis, technical_indicators, fundamental_data, news_data)
- `tool_vendors`: per-tool data-source overrides (take precedence over `data_vendors`)
- `max_debate_rounds`: number of researcher debate rounds
- `max_risk_discuss_rounds`: number of risk-management debate rounds
- `output_language`: output language ("English" | "中文")
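A runtime override can be sketched as a copy-and-merge over the defaults. This is an illustrative snippet with a stand-in `DEFAULT_CONFIG`; the real dictionary and key set live in `tradingagents/default_config.py` and should be checked there.

```python
# Stand-in for tradingagents/default_config.py DEFAULT_CONFIG; key names
# follow the list above, the values here are only examples.
DEFAULT_CONFIG = {
    "llm_provider": "anthropic",
    "deep_think_llm": "MiniMax-M2.7-highspeed",
    "quick_think_llm": "MiniMax-M2.7-highspeed",
    "max_debate_rounds": 1,
    "output_language": "English",
}

def make_config(**overrides):
    """Return a copy of the defaults with selected keys overridden."""
    unknown = set(overrides) - set(DEFAULT_CONFIG)
    if unknown:
        # reject typos instead of silently ignoring them
        raise KeyError(f"unknown config keys: {sorted(unknown)}")
    return {**DEFAULT_CONFIG, **overrides}

config = make_config(max_debate_rounds=2, output_language="中文")
```

The merged `config` dict would then be passed to the graph constructor (the exact constructor signature should be verified against `trading_graph.py`).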
### Orchestrator configuration (`orchestrator/config.py`)

- `quant_backtest_path`: directory of quant backtest output (must be set to use quant signals)
- `trading_agents_config`: configuration passed through to TradingAgentsGraph
- `quant_weight_cap` / `llm_weight_cap`: caps on signal confidence
- `llm_batch_days`: interval in days between LLM runs
- `cache_dir`: cache directory for LLM signals
- `llm_solo_penalty` / `quant_solo_penalty`: confidence discount when only one track is running

### A-share specifics

- **Data source**: yfinance (the akshare fundamentals API is broken)
- **Ticker format**: `300750.SZ` (Shenzhen), `603259.SS` (Shanghai), `688256.SS` (STAR Market)
- **MiniMax API**: Anthropic-compatible; base URL: `https://api.minimaxi.com/anthropic`
- **Local default model**: `MiniMax-M2.7-highspeed`
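The ticker convention above can be captured in a small helper. This is an illustrative function, not part of the repo: Shanghai listings (codes starting with 6, including STAR-market 688xxx) take the `.SS` suffix, Shenzhen listings (0xxxxx/3xxxxx) take `.SZ`.

```python
def to_yfinance_symbol(code: str) -> str:
    """Map a 6-digit A-share code to the yfinance suffix convention.

    Illustrative helper (not in the repo): 6xxxxx -> .SS (Shanghai,
    incl. STAR 688xxx), otherwise -> .SZ (Shenzhen).
    """
    if len(code) != 6 or not code.isdigit():
        raise ValueError(f"expected a 6-digit code, got {code!r}")
    return f"{code}.SS" if code.startswith("6") else f"{code}.SZ"
```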
## Data flow

```
1. Tool call (agents/utils/*_tools.py)
   ↓
2. Routing layer (dataflows/interface.py)
   - routes based on config["data_vendors"] and config["tool_vendors"]
   ↓
3. Vendor implementations
   - yfinance: y_finance.py, yfinance_news.py
   - alpha_vantage: alpha_vantage*.py
   - china_data: china_data.py (requires akshare, currently unavailable)
   ↓
4. Data returned to the agents
```
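The routing step can be sketched as a two-level lookup: a per-tool override beats the per-category default. This is a hedged illustration of the behavior described above, not the code in `dataflows/interface.py`; the tool name `get_fundamentals` is hypothetical.

```python
def resolve_vendor(tool_name: str, category: str, config: dict) -> str:
    """Pick the data vendor for a tool: tool_vendors beats data_vendors."""
    override = config.get("tool_vendors", {}).get(tool_name)
    if override is not None:
        return override
    # fall back to the category default, then to yfinance
    return config.get("data_vendors", {}).get(category, "yfinance")

cfg = {
    "data_vendors": {"core_stock_apis": "yfinance", "news_data": "yfinance"},
    "tool_vendors": {"get_fundamentals": "alpha_vantage"},  # hypothetical tool name
}
```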
## Key implementation details

### LLM clients
- `llm_clients/base_client.py` - unified interface
- `llm_clients/model_catalog.py` - model catalog and validation
- Supports provider-specific thinking configuration (google_thinking_level, openai_reasoning_effort, anthropic_effort)

### Signal fusion (Orchestrator)
- Dual-track: quant signals + LLM signals
- Degradation strategy: if one track fails, use the other and apply the solo penalty
- Caching: LLM signals are cached in `cache_dir` to avoid repeated API calls
- Contracts: structured outputs defined under `contracts/`
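The cap-and-penalty behavior above can be sketched as follows. The function, default weights, and averaging rule are all illustrative assumptions; the actual fusion logic lives in `orchestrator/signals.py`.

```python
def fuse(quant_conf, llm_conf,
         quant_cap=0.6, llm_cap=0.6,
         quant_solo_penalty=0.2, llm_solo_penalty=0.2):
    """Combine quant and LLM confidences; None means that track failed.

    Illustrative sketch: cap each track's confidence, discount solo runs,
    average when both tracks are present.
    """
    if quant_conf is None and llm_conf is None:
        return 0.0                                      # both tracks failed
    if llm_conf is None:                                # quant-only run
        return min(quant_conf, quant_cap) * (1 - quant_solo_penalty)
    if quant_conf is None:                              # LLM-only run
        return min(llm_conf, llm_cap) * (1 - llm_solo_penalty)
    # both tracks present: cap each, then average
    return (min(quant_conf, quant_cap) + min(llm_conf, llm_cap)) / 2
```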
### Testing
- `orchestrator/tests/` - orchestrator unit tests
- `tests/` - TradingAgents core tests
- Run with pytest: `python -m pytest orchestrator/tests/`

## Skill routing

When the user's request matches an available skill, ALWAYS invoke it using the Skill tool as your FIRST action. Do NOT answer directly, and do NOT use other tools first. The skill has specialized workflows that produce better results than ad-hoc answers.

Key routing rules:
- Product ideas, "is this worth building", brainstorming → invoke office-hours
- Bugs, errors, "why is this broken", 500 errors → invoke investigate
- Ship, deploy, push, create PR → invoke ship
- QA, test the site, find bugs → invoke qa
- Code review, check my diff → invoke review
- Update docs after shipping → invoke document-release
- Weekly retro → invoke retro
- Design system, brand → invoke design-consultation
- Visual audit, design polish → invoke design-review
- Architecture review → invoke plan-eng-review
- Save progress, checkpoint, resume → invoke checkpoint
- Code quality, health check → invoke health
@@ -0,0 +1,313 @@
# Design System: Apple

## 1. Visual Theme & Atmosphere

Apple's website is a masterclass in controlled drama — vast expanses of pure black and near-white serve as cinematic backdrops for products that are photographed as if they were sculptures in a gallery. The design philosophy is reductive to its core: every pixel exists in service of the product, and the interface itself retreats until it becomes invisible. This is not minimalism as aesthetic preference; it is minimalism as reverence for the object.

The typography anchors everything. San Francisco (SF Pro Display for large sizes, SF Pro Text for body) is Apple's proprietary typeface, engineered with optical sizing that automatically adjusts letterforms depending on point size. At display sizes (56px), weight 600 with a tight line-height of 1.07 and subtle negative letter-spacing (-0.28px) creates headlines that feel machined rather than typeset — precise, confident, and unapologetically direct. At body sizes (17px), the tracking loosens slightly (-0.374px) and line-height opens to 1.47, creating a reading rhythm that is comfortable without ever feeling slack.

The color story is starkly binary. Product sections alternate between pure black (`#000000`) backgrounds with white text and light gray (`#f5f5f7`) backgrounds with near-black text (`#1d1d1f`). This creates a cinematic pacing — dark sections feel immersive and premium, light sections feel open and informational. The only chromatic accent is Apple Blue (`#0071e3`), reserved exclusively for interactive elements: links, buttons, and focus states. This singular accent color in a sea of neutrals gives every clickable element unmistakable visibility.

**Key Characteristics:**
- SF Pro Display/Text with optical sizing — letterforms adapt automatically to size context
- Binary light/dark section rhythm: black (`#000000`) alternating with light gray (`#f5f5f7`)
- Single accent color: Apple Blue (`#0071e3`) reserved exclusively for interactive elements
- Product-as-hero photography on solid color fields — no gradients, no textures, no distractions
- Extremely tight headline line-heights (1.07-1.14) creating compressed, billboard-like impact
- Full-width section layout with centered content — the viewport IS the canvas
- Pill-shaped CTAs (980px radius) creating soft, approachable action buttons
- Generous whitespace between sections allowing each product moment to breathe

## 2. Color Palette & Roles

### Primary
- **Pure Black** (`#000000`): Hero section backgrounds, immersive product showcases. The darkest canvas for the brightest products.
- **Light Gray** (`#f5f5f7`): Alternate section backgrounds, informational areas. Not white — the slight blue-gray tint prevents sterility.
- **Near Black** (`#1d1d1f`): Primary text on light backgrounds, dark button fills. Slightly warmer than pure black for comfortable reading.

### Interactive
- **Apple Blue** (`#0071e3`): `--sk-focus-color`, primary CTA backgrounds, focus rings. The ONLY chromatic color in the interface.
- **Link Blue** (`#0066cc`): `--sk-body-link-color`, inline text links. Slightly darker than Apple Blue for text-level readability.
- **Bright Blue** (`#2997ff`): Links on dark backgrounds. Higher luminance for contrast on black sections.

### Text
- **White** (`#ffffff`): Text on dark backgrounds, button text on blue/dark CTAs.
- **Near Black** (`#1d1d1f`): Primary body text on light backgrounds.
- **Black 80%** (`rgba(0, 0, 0, 0.8)`): Secondary text, nav items on light backgrounds. Slightly softened.
- **Black 48%** (`rgba(0, 0, 0, 0.48)`): Tertiary text, disabled states, carousel controls.

### Surface & Dark Variants
- **Dark Surface 1** (`#272729`): Card backgrounds in dark sections.
- **Dark Surface 2** (`#262628`): Subtle surface variation in dark contexts.
- **Dark Surface 3** (`#28282a`): Elevated cards on dark backgrounds.
- **Dark Surface 4** (`#2a2a2d`): Highest dark surface elevation.
- **Dark Surface 5** (`#242426`): Deepest dark surface tone.

### Button States
- **Button Active** (`#ededf2`): Active/pressed state for light buttons.
- **Button Default Light** (`#fafafc`): Search/filter button backgrounds.
- **Overlay** (`rgba(210, 210, 215, 0.64)`): Media control scrims, overlays.
- **White 32%** (`rgba(255, 255, 255, 0.32)`): Hover state on dark modal close buttons.

### Shadows
- **Card Shadow** (`rgba(0, 0, 0, 0.22) 3px 5px 30px 0px`): Soft, diffused elevation for product cards. Offset and wide blur create a natural, photographic shadow.

## 3. Typography Rules

### Font Family
- **Display**: `SF Pro Display`, with fallbacks: `SF Pro Icons, Helvetica Neue, Helvetica, Arial, sans-serif`
- **Body**: `SF Pro Text`, with fallbacks: `SF Pro Icons, Helvetica Neue, Helvetica, Arial, sans-serif`
- SF Pro Display is used at 20px and above; SF Pro Text is optimized for 19px and below.

### Hierarchy

| Role | Font | Size | Weight | Line Height | Letter Spacing | Notes |
|------|------|------|--------|-------------|----------------|-------|
| Display Hero | SF Pro Display | 56px (3.50rem) | 600 | 1.07 (tight) | -0.28px | Product launch headlines, maximum impact |
| Section Heading | SF Pro Display | 40px (2.50rem) | 600 | 1.10 (tight) | normal | Feature section titles |
| Tile Heading | SF Pro Display | 28px (1.75rem) | 400 | 1.14 (tight) | 0.196px | Product tile headlines |
| Card Title | SF Pro Display | 21px (1.31rem) | 700 | 1.19 (tight) | 0.231px | Bold card headings |
| Sub-heading | SF Pro Display | 21px (1.31rem) | 400 | 1.19 (tight) | 0.231px | Regular card headings |
| Nav Heading | SF Pro Text | 34px (2.13rem) | 600 | 1.47 | -0.374px | Large navigation headings |
| Sub-nav | SF Pro Text | 24px (1.50rem) | 300 | 1.50 | normal | Light sub-navigation text |
| Body | SF Pro Text | 17px (1.06rem) | 400 | 1.47 | -0.374px | Standard reading text |
| Body Emphasis | SF Pro Text | 17px (1.06rem) | 600 | 1.24 (tight) | -0.374px | Emphasized body text, labels |
| Button Large | SF Pro Text | 18px (1.13rem) | 300 | 1.00 (tight) | normal | Large button text, light weight |
| Button | SF Pro Text | 17px (1.06rem) | 400 | 2.41 (relaxed) | normal | Standard button text |
| Link | SF Pro Text | 14px (0.88rem) | 400 | 1.43 | -0.224px | Body links, "Learn more" |
| Caption | SF Pro Text | 14px (0.88rem) | 400 | 1.29 (tight) | -0.224px | Secondary text, descriptions |
| Caption Bold | SF Pro Text | 14px (0.88rem) | 600 | 1.29 (tight) | -0.224px | Emphasized captions |
| Micro | SF Pro Text | 12px (0.75rem) | 400 | 1.33 | -0.12px | Fine print, footnotes |
| Micro Bold | SF Pro Text | 12px (0.75rem) | 600 | 1.33 | -0.12px | Bold fine print |
| Nano | SF Pro Text | 10px (0.63rem) | 400 | 1.47 | -0.08px | Legal text, smallest size |

### Principles
- **Optical sizing as philosophy**: SF Pro automatically switches between Display and Text optical sizes. Display versions have wider letter spacing and thinner strokes optimized for large sizes; Text versions are tighter and sturdier for small sizes. This means the font literally changes its DNA based on context.
- **Weight restraint**: The scale spans 300 (light) to 700 (bold) but most text lives at 400 (regular) and 600 (semibold). Weight 300 appears only on large decorative text. Weight 700 is rare, used only for bold card titles.
- **Negative tracking at all sizes**: Unlike most systems that only track headlines, Apple applies subtle negative letter-spacing even at body sizes (-0.374px at 17px, -0.224px at 14px, -0.12px at 12px). This creates universally tight, efficient text.
- **Extreme line-height range**: Headlines compress to 1.07 while body text opens to 1.47, and some button contexts stretch to 2.41. This dramatic range creates clear visual hierarchy through rhythm alone.

## 4. Component Stylings

### Buttons

**Primary Blue (CTA)**
- Background: `#0071e3` (Apple Blue)
- Text: `#ffffff`
- Padding: 8px 15px
- Radius: 8px
- Border: 1px solid transparent
- Font: SF Pro Text, 17px, weight 400
- Hover: background brightens slightly
- Active: `#ededf2` background shift
- Focus: `2px solid var(--sk-focus-color, #0071E3)` outline
- Use: Primary call-to-action ("Buy", "Shop iPhone")

**Primary Dark**
- Background: `#1d1d1f`
- Text: `#ffffff`
- Padding: 8px 15px
- Radius: 8px
- Font: SF Pro Text, 17px, weight 400
- Use: Secondary CTA, dark variant

**Pill Link (Learn More / Shop)**
- Background: transparent
- Text: `#0066cc` (light bg) or `#2997ff` (dark bg)
- Radius: 980px (full pill)
- Border: 1px solid `#0066cc`
- Font: SF Pro Text, 14px-17px
- Hover: underline decoration
- Use: "Learn more" and "Shop" links — the signature Apple inline CTA

**Filter / Search Button**
- Background: `#fafafc`
- Text: `rgba(0, 0, 0, 0.8)`
- Padding: 0px 14px
- Radius: 11px
- Border: 3px solid `rgba(0, 0, 0, 0.04)`
- Focus: `2px solid var(--sk-focus-color, #0071E3)` outline
- Use: Search bars, filter controls

**Media Control**
- Background: `rgba(210, 210, 215, 0.64)`
- Text: `rgba(0, 0, 0, 0.48)`
- Radius: 50% (circular)
- Active: scale(0.9), background shifts
- Focus: `2px solid var(--sk-focus-color, #0071e3)` outline, white bg, black text
- Use: Play/pause, carousel arrows

### Cards & Containers
- Background: `#f5f5f7` (light) or `#272729`-`#2a2a2d` (dark)
- Border: none (borders are rare in Apple's system)
- Radius: 5px-8px
- Shadow: `rgba(0, 0, 0, 0.22) 3px 5px 30px 0px` for elevated product cards
- Content: centered, generous padding
- Hover: no standard hover state — cards are static, links within them are interactive

### Navigation
- Background: `rgba(0, 0, 0, 0.8)` (translucent dark) with `backdrop-filter: saturate(180%) blur(20px)`
- Height: 48px (compact)
- Text: `#ffffff` at 12px, weight 400
- Active: underline on hover
- Logo: Apple logomark (SVG) centered or left-aligned, 17x48px viewport
- Mobile: collapses to hamburger with full-screen overlay menu
- The nav floats above content, maintaining its dark translucent glass regardless of section background

### Image Treatment
- Products on solid-color fields (black or white) — no backgrounds, no context, just the object
- Full-bleed section images that span the entire viewport width
- Product photography at extremely high resolution with subtle shadows
- Lifestyle images confined to rounded-corner containers (12px+ radius)

### Distinctive Components

**Product Hero Module**
- Full-viewport-width section with solid background (black or `#f5f5f7`)
- Product name as the primary headline (SF Pro Display, 56px, weight 600)
- One-line descriptor below in lighter weight
- Two pill CTAs side by side: "Learn more" (outline) and "Buy" / "Shop" (filled)

**Product Grid Tile**
- Square or near-square card on contrasting background
- Product image dominating 60-70% of the tile
- Product name + one-line description below
- "Learn more" and "Shop" link pair at bottom

**Feature Comparison Strip**
- Horizontal scroll of product variants
- Each variant as a vertical card with image, name, and key specs
- Minimal chrome — the products speak for themselves

## 5. Layout Principles

### Spacing System
- Base unit: 8px
- Scale: 2px, 4px, 5px, 6px, 7px, 8px, 9px, 10px, 11px, 14px, 15px, 17px, 20px, 24px
- Notable characteristic: the scale is dense at small sizes (2-11px) with granular 1px increments, then jumps in larger steps. This allows precise micro-adjustments for typography and icon alignment.

### Grid & Container
- Max content width: approximately 980px (the recurring "980px radius" in pill buttons echoes this width)
- Hero: full-viewport-width sections with centered content block
- Product grids: 2-3 column layouts within centered container
- Single-column for hero moments — one product, one message, full attention
- No visible grid lines or gutters — spacing creates implied structure

### Whitespace Philosophy
- **Cinematic breathing room**: Each product section occupies a full viewport height (or close to it). The whitespace between products is not empty — it is the pause between scenes in a film.
- **Vertical rhythm through color blocks**: Rather than using spacing alone to separate sections, Apple uses alternating background colors (black, `#f5f5f7`, white). Each color change signals a new "scene."
- **Compression within, expansion between**: Text blocks are tightly set (negative letter-spacing, tight line-heights) while the space surrounding them is vast. This creates a tension between density and openness.

### Border Radius Scale
- Micro (5px): Small containers, link tags
- Standard (8px): Buttons, product cards, image containers
- Comfortable (11px): Search inputs, filter buttons
- Large (12px): Feature panels, lifestyle image containers
- Full Pill (980px): CTA links ("Learn more", "Shop"), navigation pills
- Circle (50%): Media controls (play/pause, arrows)

## 6. Depth & Elevation

| Level | Treatment | Use |
|-------|-----------|-----|
| Flat (Level 0) | No shadow, solid background | Standard content sections, text blocks |
| Navigation Glass | `backdrop-filter: saturate(180%) blur(20px)` on `rgba(0,0,0,0.8)` | Sticky navigation bar — the glass effect |
| Subtle Lift (Level 1) | `rgba(0, 0, 0, 0.22) 3px 5px 30px 0px` | Product cards, floating elements |
| Media Control | `rgba(210, 210, 215, 0.64)` background with scale transforms | Play/pause buttons, carousel controls |
| Focus (Accessibility) | `2px solid #0071e3` outline | Keyboard focus on all interactive elements |

**Shadow Philosophy**: Apple uses shadow extremely sparingly. The primary shadow (`3px 5px 30px` with 0.22 opacity) is soft, wide, and offset — mimicking a diffused studio light casting a natural shadow beneath a physical object. This reinforces the "product as physical sculpture" metaphor. Most elements have NO shadow at all; elevation comes from background color contrast (dark card on darker background, or light card on slightly different gray).

### Decorative Depth
- Navigation glass: the translucent, blurred navigation bar is the most recognizable depth element, creating a sense of floating UI above scrolling content
- Section color transitions: depth is implied by the alternation between black and light gray sections rather than by shadows
- Product photography shadows: the products themselves cast shadows in their photography, so the UI doesn't need to add synthetic ones

## 7. Do's and Don'ts

### Do
- Use SF Pro Display at 20px+ and SF Pro Text below 20px — respect the optical sizing boundary
- Apply negative letter-spacing at all text sizes (not just headlines) — Apple tracks tight universally
- Use Apple Blue (`#0071e3`) ONLY for interactive elements — it must be the singular accent
- Alternate between black and light gray (`#f5f5f7`) section backgrounds for cinematic rhythm
- Use 980px pill radius for CTA links — the signature Apple link shape
- Keep product imagery on solid-color fields with no competing visual elements
- Use the translucent dark glass (`rgba(0,0,0,0.8)` + blur) for sticky navigation
- Compress headline line-heights to 1.07-1.14 — Apple headlines are famously tight

### Don't
- Don't introduce additional accent colors — the entire chromatic budget is spent on blue
- Don't use heavy shadows or multiple shadow layers — Apple's shadow system is one soft diffused shadow or nothing
- Don't use borders on cards or containers — Apple almost never uses visible borders (except on specific buttons)
- Don't apply wide letter-spacing to SF Pro — it is designed to run tight at every size
- Don't use weight 800 or 900 — the maximum is 700 (bold), and even that is rare
- Don't add textures, patterns, or gradients to backgrounds — solid colors only
- Don't make the navigation opaque — the glass blur effect is essential to the Apple UI identity
- Don't center-align body text — Apple body copy is left-aligned; only headlines center
- Don't use rounded corners larger than 12px on rectangular elements (980px is for pills only)

## 8. Responsive Behavior

### Breakpoints
| Name | Width | Key Changes |
|------|-------|-------------|
| Small Mobile | <360px | Minimum supported, single column |
| Mobile | 360-480px | Standard mobile layout |
| Mobile Large | 480-640px | Wider single column, larger images |
| Tablet Small | 640-834px | 2-column product grids begin |
| Tablet | 834-1024px | Full tablet layout, expanded nav |
| Desktop Small | 1024-1070px | Standard desktop layout begins |
| Desktop | 1070-1440px | Full layout, max content width |
| Large Desktop | >1440px | Centered with generous margins |
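The breakpoint table above maps directly to a lookup: walk the upper bounds in order and return the first tier the width falls under. A minimal sketch (the function and constant names are hypothetical, only the px boundaries come from the table):

```python
# Upper bounds (exclusive) per tier, taken from the breakpoint table above.
BREAKPOINTS = [
    (360, "Small Mobile"),
    (480, "Mobile"),
    (640, "Mobile Large"),
    (834, "Tablet Small"),
    (1024, "Tablet"),
    (1070, "Desktop Small"),
    (1440, "Desktop"),
]

def breakpoint_name(width: int) -> str:
    """Return the layout tier for a viewport width in px."""
    for upper, name in BREAKPOINTS:
        if width < upper:
            return name
    return "Large Desktop"  # anything at or beyond 1440px
```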
### Touch Targets
- Primary CTAs: 8px 15px padding creating ~44px touch height
- Navigation links: 48px height with adequate spacing
- Media controls: 50% radius circular buttons, minimum 44x44px
- "Learn more" pills: generous padding for comfortable tapping

### Collapsing Strategy
- Hero headlines: 56px Display → 40px → 28px on mobile, maintaining tight line-height proportionally
- Product grids: 3-column → 2-column → single column stacked
- Navigation: full horizontal nav → compact mobile menu (hamburger)
- Product hero modules: full-bleed maintained at all sizes, text scales down
- Section backgrounds: maintain full-width color blocks at all breakpoints — the cinematic rhythm never breaks
- Image sizing: products scale proportionally, never crop — the product silhouette is sacred

### Image Behavior
- Product photography maintains aspect ratio at all breakpoints
- Hero product images scale down but stay centered
- Full-bleed section backgrounds persist at every size
- Lifestyle images may crop on mobile but maintain their rounded corners
- Lazy loading for below-fold product images

## 9. Agent Prompt Guide

### Quick Color Reference
- Primary CTA: Apple Blue (`#0071e3`)
- Page background (light): `#f5f5f7`
- Page background (dark): `#000000`
- Heading text (light): `#1d1d1f`
- Heading text (dark): `#ffffff`
- Body text: `rgba(0, 0, 0, 0.8)` on light, `#ffffff` on dark
- Link (light bg): `#0066cc`
- Link (dark bg): `#2997ff`
- Focus ring: `#0071e3`
- Card shadow: `rgba(0, 0, 0, 0.22) 3px 5px 30px 0px`

### Example Component Prompts
- "Create a hero section on black background. Headline at 56px SF Pro Display weight 600, line-height 1.07, letter-spacing -0.28px, color white. One-line subtitle at 21px SF Pro Display weight 400, line-height 1.19, color white. Two pill CTAs: 'Learn more' (transparent bg, white text, 1px solid white border, 980px radius) and 'Buy' (Apple Blue #0071e3 bg, white text, 8px radius, 8px 15px padding)."
- "Design a product card: #f5f5f7 background, 8px border-radius, no border, no shadow. Product image top 60% of card on solid background. Title at 28px SF Pro Display weight 400, letter-spacing 0.196px, line-height 1.14. Description at 14px SF Pro Text weight 400, color rgba(0,0,0,0.8). 'Learn more' and 'Shop' links in #0066cc at 14px."
- "Build the Apple navigation: sticky, 48px height, background rgba(0,0,0,0.8) with backdrop-filter: saturate(180%) blur(20px). Links at 12px SF Pro Text weight 400, white text. Apple logo left, links centered, search and bag icons right."
- "Create an alternating section layout: first section black bg with white text and centered product image, second section #f5f5f7 bg with #1d1d1f text. Each section near full-viewport height with 56px headline and two pill CTAs below."
- "Design a 'Learn more' link: text #0066cc on light bg or #2997ff on dark bg, 14px SF Pro Text, underline on hover. After the text, include a right-arrow chevron character (>). Wrap in a container with 980px border-radius for pill shape when used as a standalone CTA."

### Iteration Guide
1. Every interactive element gets Apple Blue (`#0071e3`) — no other accent colors
2. Section backgrounds alternate: black for immersive moments, `#f5f5f7` for informational moments
3. Typography optical sizing: SF Pro Display at 20px+, SF Pro Text below — never mix
4. Negative letter-spacing at all sizes: -0.28px at 56px, -0.374px at 17px, -0.224px at 14px, -0.12px at 12px
5. The navigation glass effect (translucent dark + blur) is non-negotiable — it defines the Apple web experience
6. Products always appear on solid color fields — never on gradients, textures, or lifestyle backgrounds in hero modules
7. Shadow is rare and always soft: `3px 5px 30px` at 0.22 opacity, or nothing at all
8. Pill CTAs use 980px radius — this creates the signature Apple rounded-rectangle-that-looks-like-a-capsule shape
@@ -0,0 +1,87 @@
# TradingAgents A-share Analysis Project - Handoff Notes

## Project location
```
/Users/chenshaojie/Downloads/autoresearch/TradingAgents/
```

## Environment

- **Python version**: 3.12 (not the system default)
- **Environment path**: `env312/`
- **Activation**: `source env312/bin/activate`

## How to run

### Option 1: full pipeline (SEPA screening + TradingAgents analysis)
```bash
cd /Users/chenshaojie/Downloads/autoresearch/TradingAgents
source env312/bin/activate
python sepa_v5.py
```

### Option 2: single-stock analysis
```bash
cd /Users/chenshaojie/Downloads/autoresearch/TradingAgents
source env312/bin/activate
python run_ningde.py  # CATL (宁德时代)
```

## Key files

| File | Description |
|------|------|
| `sepa_v5.py` | SEPA screening + TradingAgents workflow |
| `run_ningde.py` | Single-stock analysis for CATL (宁德时代) |
| `run_312.py` | Kweichow Moutai (贵州茅台) analysis (original demo script) |

## Current progress

- ✅ TradingAgents deployed
- ✅ Python 3.12 environment configured
- ✅ MiniMax API (Anthropic-compatible) configured
- ✅ SEPA screening pipeline done (yfinance data source)
- ⚠️ Only 1 stock analyzed so far (CATL)

## Findings so far

1. **SEPA screening results**: 5 stocks pass the fundamental criteria
   - CATL 宁德时代 (300750.SZ): ROE = 23.8%, revenue growth = 36.6%, profit growth = 50.1%
   - WuXi AppTec 药明康德 (603259.SS): ROE = 25.8%, revenue growth = 18.2%, profit growth = 128.7%
   - Luxshare Precision 立讯精密 (002475.SZ): ROE = 19.6%, revenue growth = 31.0%, profit growth = 29.1%
   - Cambricon 寒武纪 (688256.SS): ROE = 23.8%, revenue growth = 91.0%, profit growth = 61.7%
   - Montage Technology 澜起科技 (688008.SS): ROE = 17.6%, revenue growth = 31.0%, profit growth = 39.9%

2. **Caveat**: these stocks are currently below their moving averages (in a correction), so the SEPA technical criteria do not pass

3. **TradingAgents runs slowly**: analyze only 1-2 stocks per run

4. **The akshare fundamentals API is broken**: use yfinance instead

## CATL (宁德时代) analysis result

**Final trading recommendation**: HOLD / WAIT FOR PULLBACK

| Metric | Value | Signal |
|------|------|------|
| Current price | ¥397.00 | - |
| 50-day MA | ¥360.51 | 🟢 price above the MA |
| 200-day MA | ¥329.40 | 🟢 above the MA (strong) |
| RSI (14) | 70.14 | 🔴 overbought |
| MACD | bullish golden cross | 🟢 strong |
| ATR | 12.43 | 🟡 high volatility |

**Recommendation**: hold existing positions / new money should wait for a pullback to ¥360-365 before entering
|
||||
## 建议任务
|
||||
|
||||
1. 继续分析剩余4只股票
|
||||
2. 优化SEPA参数(中国市场更宽松的阈值)
|
||||
3. 添加ST股和次新股过滤
|
||||
4. 批量分析100+只股票
|
||||
|
||||
## API配置
|
||||
|
||||
- API Key: 从本地环境变量读取(不要提交到仓库)
|
||||
- Base URL: `https://api.minimaxi.com/anthropic`
|
||||
- Model: `MiniMax-M2.7-highspeed`
|
||||
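As a sketch, a script can fail fast when this configuration is missing. The `check_llm_env` helper below is hypothetical (the real scripts load `.env` through their own loader); it only mirrors the values listed above:

```python
import os

# Expected values for the MiniMax Anthropic-compatible lane (from the section above).
REQUIRED_ENV = {
    "ANTHROPIC_BASE_URL": "https://api.minimaxi.com/anthropic",
    "TRADINGAGENTS_LLM_PROVIDER": "anthropic",
    "TRADINGAGENTS_MODEL": "MiniMax-M2.7-highspeed",
}


def check_llm_env(env=os.environ):
    """Return a list of problems; an empty list means the lane looks configured."""
    problems = []
    if not env.get("MINIMAX_API_KEY"):
        problems.append("MINIMAX_API_KEY is not set (read it from your local env)")
    for key, expected in REQUIRED_ENV.items():
        if env.get(key) != expected:
            problems.append(f"{key} should be {expected!r}, got {env.get(key)!r}")
    return problems
```

Calling `check_llm_env()` before `sepa_v5.py` or `run_ningde.py` kick off avoids burning several minutes on a run that fails at the first API call.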
README.md
export OPENROUTER_API_KEY=...    # OpenRouter
export ALPHA_VANTAGE_API_KEY=... # Alpha Vantage
```

For this local repo, the default daily lane is MiniMax via the Anthropic-compatible API:

```bash
cp .env.example .env
# then fill:
# MINIMAX_API_KEY=...
# ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic
# TRADINGAGENTS_LLM_PROVIDER=anthropic
# TRADINGAGENTS_MODEL=MiniMax-M2.7-highspeed
```

For enterprise providers (e.g. Azure OpenAI, AWS Bedrock), copy `.env.enterprise.example` to `.env.enterprise` and fill in your credentials.

For local models, configure Ollama with `llm_provider: "ollama"` in your config.

### CLI Usage

Launch the interactive CLI:
To use TradingAgents inside your code, import the `tradingagents` module:

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import get_default_config, load_project_env

load_project_env(__file__)
ta = TradingAgentsGraph(debug=True, config=get_default_config())

# forward propagate
_, decision = ta.propagate("NVDA", "2026-01-15")
```
You can also adjust the default configuration to set your own choice of LLMs, debate rounds, and other settings:

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import get_default_config, load_project_env

load_project_env(__file__)
config = get_default_config()
# Local repo default is MiniMax Anthropic-compatible.
# Override only when you intentionally want a different provider/model.
config["max_debate_rounds"] = 2

ta = TradingAgentsGraph(debug=True, config=config)
```
cli/main.py
from rich.align import Align
from rich.rule import Rule

from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import get_default_config
from cli.models import AnalystType
from cli.utils import *
from cli.announcements import fetch_announcements, display_announcements
In `run_analysis()`:

    selections = get_user_selections()

    # Create config with selected research depth
    config = get_default_config()
    config["max_debate_rounds"] = selections["research_depth"]
    config["max_risk_discuss_rounds"] = selections["research_depth"]
    config["quick_think_llm"] = selections["shallow_thinker"]
Later in `run_analysis()`:

    # Update final report sections
    for section in message_buffer.report_sections.keys():
        if section == "final_trade_decision":
            report_value = final_state.get(
                "final_trade_decision_report",
                final_state.get("final_trade_decision"),
            )
            if report_value:
                message_buffer.update_report_section(section, report_value)
        elif section in final_state:
            message_buffer.update_report_section(section, final_state[section])

    update_display(layout, stats_handler=stats_handler, start_time=start_time)
# TradingAgents architecture convergence draft: application boundary

Status: draft
Audience: backend/dashboard/orchestrator maintainers
Scope: define the boundary between HTTP/WebSocket delivery, application service orchestration, and the quant+LLM merge kernel

## Current status snapshot (2026-04)

This document is still the **target boundary** document, but several convergence pieces are already landed on the mainline:

- `web_dashboard/backend/services/job_service.py` now owns public task/job projection logic;
- `web_dashboard/backend/services/result_store.py` persists result contracts under `results/<task_id>/result.v1alpha1.json`;
- `web_dashboard/backend/services/analysis_service.py` and `api/portfolio.py` already expose contract-first result payloads by default;
- task lifecycle query/command routing for `status/list/cancel` now sits behind backend task services instead of route-local orchestration in `main.py`;
- `/ws/analysis/{task_id}` and `/ws/orchestrator` already carry `contract_version = "v1alpha1"` and include result/degradation/data-quality metadata.

What is **not** fully finished yet:

- `web_dashboard/backend/main.py` still contains too much orchestration glue and transport-local logic outside the task lifecycle slice;
- route handlers are thinner than before, but the application layer has not fully absorbed reports/export and every remaining lifecycle branch;
- migration flags/modes still coexist with legacy compatibility paths.
## 1. Why this document exists

The current backend mixes three concerns inside `web_dashboard/backend/main.py`:

1. transport concerns: FastAPI routes, headers, WebSocket sessions, task persistence;
2. application orchestration: task lifecycle, stage progress, subprocess wiring, result projection;
3. domain execution: `TradingOrchestrator`, `LiveMode`, quant+LLM signal merge.

For architecture convergence, these concerns should be separated so that:

- the application service remains a no-strategy orchestration and contract layer;
- `orchestrator/` remains the quant+LLM merge kernel;
- transport adapters can migrate without re-embedding business rules.
## 2. Current evidence in repo

### 2.1 Merge kernel already exists

- `orchestrator/orchestrator.py` owns quant runner + LLM runner composition.
- `orchestrator/signals.py` owns `Signal`, `FinalSignal`, and merge math.
- `orchestrator/live_mode.py` owns batch live execution against the orchestrator.

This is the correct place for quant/LLM merge semantics.

### 2.2 Backend currently crosses the boundary

`web_dashboard/backend/main.py` currently also owns:

- analysis subprocess template creation;
- stage-to-progress mapping;
- conversion from `FinalSignal` to UI-oriented fields such as `decision`, `quant_signal`, `llm_signal`, `confidence`;
- report materialization into `results/<ticker>/<date>/complete_report.md`.

This makes the transport layer hard to replace and leaves result contracts implicit.

At the same time, the current mainline no longer matches the older "all logic sits in routes" description exactly. The codebase now sits in a **mid-migration** state:

- merge semantics remain in `orchestrator/`;
- public payload shaping has started moving into backend services;
- task lifecycle query/command paths now route through backend task services;
- legacy compatibility fields still exist for UI safety.
## 3. Target boundary

## 3.1 Layer model

### Transport adapters

Examples:

- FastAPI REST routes
- FastAPI WebSocket endpoints
- future CLI/Tauri/worker adapters

Responsibilities:

- request parsing and auth
- response serialization
- WebSocket connection management
- mapping application errors to HTTP/WebSocket status

Non-responsibilities:

- no strategy logic
- no quant/LLM weighting logic
- no task-stage business rules beyond rendering application events
### Application service

Suggested responsibility set:

- accept typed command/query inputs from transport
- orchestrate the analysis execution lifecycle
- map domain results into stable result contracts
- own task ids, progress events, persistence coordination, and rollback-safe migration switches
- decide which backend implementation to call during migration

Non-responsibilities:

- no rating-to-signal research logic
- no quant/LLM merge math
- no provider-specific data acquisition details
### Domain kernel

Examples:

- `TradingOrchestrator`
- `SignalMerger`
- `QuantRunner`
- `LLMRunner`
- `TradingAgentsGraph`

Responsibilities:

- produce the quant signal, LLM signal, and merged signal
- expose domain-native dataclasses and metadata
- degrade gracefully when one lane fails
## 3.2 Canonical dependency direction

```text
transport adapter -> application service -> domain kernel
transport adapter -> application service -> persistence adapter
application service -> result contract mapper
```

Forbidden direction:

```text
transport adapter -> domain kernel + ad hoc mapping + ad hoc persistence
```
## 4. Proposed application-service interface

The application service should expose typed use cases instead of letting routes assemble logic inline.

## 4.1 Commands / queries

Suggested surface:

- `start_analysis(request) -> AnalysisTaskAccepted`
- `get_analysis_status(task_id) -> AnalysisTaskStatus`
- `cancel_analysis(task_id) -> AnalysisTaskStatus`
- `run_live_signals(request) -> LiveSignalBatch`
- `list_analysis_tasks() -> AnalysisTaskList`
- `get_report(ticker, date) -> HistoricalReport`

## 4.2 Domain input boundary

Inputs from transport should already be normalized into application DTOs:

- ticker
- trade date
- auth context
- provider/config selection
- execution mode

The application service may choose the subprocess/backend/orchestrator execution strategy, but it must not redefine domain semantics.
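A minimal sketch of that command/query surface, with hypothetical DTO shapes (nothing here mirrors the repo's actual signatures — it only illustrates "typed use cases, no merge math"):

```python
from dataclasses import dataclass


@dataclass
class AnalysisTaskAccepted:
    task_id: str
    ticker: str
    trade_date: str


@dataclass
class AnalysisTaskStatus:
    task_id: str
    state: str  # e.g. "queued" | "running" | "done" | "cancelled"
    progress: float = 0.0


class AnalysisService:
    """Application layer sketch: owns task ids and lifecycle, no strategy logic."""

    def __init__(self):
        self._tasks = {}
        self._next_id = 0

    def start_analysis(self, ticker, trade_date):
        self._next_id += 1
        task_id = f"task-{self._next_id}"
        self._tasks[task_id] = AnalysisTaskStatus(task_id=task_id, state="queued")
        return AnalysisTaskAccepted(task_id=task_id, ticker=ticker, trade_date=trade_date)

    def get_analysis_status(self, task_id):
        return self._tasks[task_id]

    def cancel_analysis(self, task_id):
        status = self._tasks[task_id]
        status.state = "cancelled"
        return status
```

A transport adapter would hold an `AnalysisService` instance and do nothing beyond parsing input, calling one method, and serializing the returned DTO.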
## 5. Boundary rules for convergence work

### Rule A: result mapping happens once

Current code maps `FinalSignal` to dashboard fields inside the analysis subprocess template. That mapping should move behind a single application mapper so REST, WebSocket, export, and persisted task status share one contract.

### Rule B: stage model belongs to application layer

Stage names such as `analysts`, `research`, `trading`, `risk`, `portfolio` are delivery/progress concepts, not merge-kernel concepts. Keep them outside `orchestrator/`.

### Rule C: orchestrator stays contract-light

`orchestrator/` should continue returning `Signal` / `FinalSignal` and domain metadata. It should not learn about HTTP status, WebSocket payloads, pagination, or UI labels beyond the domain rating semantics already present.

### Rule D: transport only renders contracts

Routes should call the application service and return the already-shaped DTO/contract. They should not reconstruct `decision`, `quant_signal`, `llm_signal`, or progress math themselves.
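Rule A can be sketched with a stand-in `FinalSignal` (the field names below are illustrative, not the repo's actual dataclass): one mapper, called from every delivery path.

```python
from dataclasses import dataclass


@dataclass
class FinalSignal:  # stand-in for the domain dataclass, not the real one
    direction: int  # -1 sell, 0 hold, 1 buy
    confidence: float
    quant_direction: int
    llm_direction: int


_DECISION_LABELS = {-1: "SELL", 0: "HOLD", 1: "BUY"}


def to_result_contract(signal, contract_version="v1alpha1"):
    """Single mapping point: REST, WebSocket, export, and task status all call this."""
    return {
        "contract_version": contract_version,
        "decision": _DECISION_LABELS[signal.direction],
        "quant_signal": _DECISION_LABELS[signal.quant_direction],
        "llm_signal": _DECISION_LABELS[signal.llm_direction],
        "confidence": signal.confidence,
    }
```

Because every transport renders the same dict, adding or removing a field happens in exactly one place (together with the contract doc update required by §9).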
## 6. Suggested module split

One viable split:

```text
web_dashboard/backend/
  application/
    analysis_service.py
    live_signal_service.py
    report_service.py
    contracts.py
    mappers.py
  infra/
    task_store.py
    subprocess_runner.py
    report_store.py
  api/
    fastapi_routes remain thin
```

This keeps convergence local to backend/application without moving merge logic out of `orchestrator/`.
## 7. Non-goals

- Do not move signal merge math into the application service.
- Do not turn the application service into a strategy engine.
- Do not require frontend-specific field naming inside `orchestrator/`.
- Do not block migration on a full rewrite of existing routes.

## 8. Review checklist

A change respects this boundary if all of the following are true:

- route handlers mainly validate/auth/call the service/return the contract;
- the application service owns task lifecycle and contract mapping;
- `orchestrator/` remains the only owner of merge semantics;
- domain dataclasses can still be tested without FastAPI or WebSocket context.

## 9. Current maintainer guidance

When touching backend convergence code, treat these files as the current application-facing boundary:

- `web_dashboard/backend/services/job_service.py`
- `web_dashboard/backend/services/result_store.py`
- `web_dashboard/backend/services/analysis_service.py`
- `web_dashboard/backend/api/portfolio.py`

If a change adds or removes externally visible fields, update `docs/contracts/result-contract-v1alpha1.md` in the same change set.
# Orchestrator Configuration Validation

Status: implemented (2026-04-16)
Audience: orchestrator users, backend maintainers
Scope: LLMRunner configuration validation and error classification

## Change Log

**2026-04-16**: Refactored provider validation to centralize patterns in `factory.py`

- Moved `_PROVIDER_BASE_URL_PATTERNS` from `llm_runner.py` to `ProviderSpec.base_url_patterns` in `factory.py`
- Added a `validate_provider_base_url()` function with pattern caching for performance
- Added a `ProviderMismatch` TypedDict for type-safe validation results
- Split ollama and openrouter into separate `ProviderSpec` entries (previously they shared openai's spec)
- Reduced the validation logic in `llm_runner.py` from 45 lines to 13
- All 21 tests pass, including 6 provider mismatch tests
## Overview

`orchestrator/llm_runner.py` implements three layers of configuration validation to catch errors before expensive graph initialization or API calls:

1. **Provider × Base URL Matrix Validation** - detects provider/endpoint mismatches
2. **Timeout Configuration Validation** - warns when timeouts may be insufficient
3. **Runtime Error Classification** - categorizes failures into actionable reason codes
## 1. Provider × Base URL Matrix Validation

### Purpose

Prevent wasted initialization time and API calls when the provider and `base_url` are incompatible.

### Implementation

`LLMRunner._detect_provider_mismatch()` validates provider × base_url combinations using a pattern matrix (since the 2026-04-16 refactor, the patterns live in `tradingagents/llm_clients/factory.py` as `ProviderSpec.base_url_patterns`, and the runner delegates to `validate_provider_base_url()`):

```python
_PROVIDER_BASE_URL_PATTERNS = {
    "anthropic": [r"api\.anthropic\.com", r"api\.minimaxi\.com/anthropic"],
    "openai": [r"api\.openai\.com"],
    "google": [r"generativelanguage\.googleapis\.com"],
    "xai": [r"api\.x\.ai"],
    "ollama": [r"localhost:\d+", r"127\.0\.0\.1:\d+", r"ollama"],
    "openrouter": [r"openrouter\.ai"],
}
```

### Validation Logic

1. Extract `llm_provider` and `backend_url` from `trading_agents_config`
2. Look up the expected URL patterns for the provider
3. Check whether `backend_url` matches any expected pattern (regex)
4. If no match is found, return mismatch details before graph initialization
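The four steps above reduce to a regex scan over the matrix. A self-contained sketch (the function name and return shape are illustrative, not the exact factory API):

```python
import re

# Subset of the pattern matrix shown above.
_PROVIDER_BASE_URL_PATTERNS = {
    "anthropic": [r"api\.anthropic\.com", r"api\.minimaxi\.com/anthropic"],
    "openai": [r"api\.openai\.com"],
    "google": [r"generativelanguage\.googleapis\.com"],
}


def detect_provider_mismatch(provider, backend_url):
    """Return mismatch details, or None when the combination looks valid."""
    patterns = _PROVIDER_BASE_URL_PATTERNS.get(provider)
    if not patterns:  # unknown provider: nothing to validate against
        return None
    if any(re.search(p, backend_url) for p in patterns):
        return None
    return {
        "state": "provider_mismatch",
        "provider": provider,
        "backend_url": backend_url,
        "expected_patterns": patterns,
    }
```

Because this is pure string matching, it runs in microseconds, which is what makes the fail-fast placement before graph initialization cheap.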
### Error Response

When a mismatch is detected, `get_signal()` returns:

```python
Signal(
    degraded=True,
    reason_code="provider_mismatch",
    metadata={
        "data_quality": {
            "state": "provider_mismatch",
            "provider": "google",
            "backend_url": "https://api.openai.com/v1",
            "expected_patterns": [r"generativelanguage\.googleapis\.com"],
        }
    }
)
```
### Examples

**Valid configurations:**

- `anthropic` + `https://api.minimaxi.com/anthropic` ✓
- `openai` + `https://api.openai.com/v1` ✓
- `ollama` + `http://localhost:11434` ✓

**Invalid configurations (detected):**

- `google` + `https://api.openai.com/v1` → `provider_mismatch`
- `xai` + `https://api.minimaxi.com/anthropic` → `provider_mismatch`
- `ollama` + `https://api.openai.com/v1` → `provider_mismatch`

### Design Notes

- Uses the **original provider name** (not the canonical one) for validation
  - `ollama`, `openrouter`, and `openai` share the same canonical provider (`openai`) but have different URL patterns
  - Validation must distinguish between them
- Validation runs **before** `TradingAgentsGraph` initialization
  - Saves ~5-10s of initialization time on mismatch
  - Avoids confusing error messages from LangChain/provider SDKs
## 2. Timeout Configuration Validation

### Purpose

Warn users when timeout settings may be insufficient for their analyst profile, preventing unexpected research degradation.

### Implementation

`LLMRunner._validate_timeout_config()` checks timeout sufficiency based on analyst count:

```python
_RECOMMENDED_TIMEOUTS = {
    1: {"analyst": 75.0, "research": 30.0},   # single analyst
    2: {"analyst": 90.0, "research": 45.0},   # two analysts
    3: {"analyst": 105.0, "research": 60.0},  # three analysts
    4: {"analyst": 120.0, "research": 75.0},  # four analysts
}
```

### Validation Logic

1. Extract `selected_analysts` from `trading_agents_config` (default: 4 analysts)
2. Extract `analyst_node_timeout_secs` and `research_node_timeout_secs`
3. Compare against the recommended thresholds for the analyst count
4. Log a `WARNING` if a configured timeout is below the recommended threshold
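The check amounts to a table lookup plus two comparisons. A sketch with the same thresholds (the helper name and warning text are illustrative):

```python
import logging

logger = logging.getLogger("llm_runner_sketch")

_RECOMMENDED_TIMEOUTS = {
    1: {"analyst": 75.0, "research": 30.0},
    2: {"analyst": 90.0, "research": 45.0},
    3: {"analyst": 105.0, "research": 60.0},
    4: {"analyst": 120.0, "research": 75.0},
}


def timeout_warnings(analyst_count, analyst_timeout, research_timeout):
    """Return advisory warnings; callers log them but never abort on them."""
    recommended = _RECOMMENDED_TIMEOUTS.get(min(analyst_count, 4), _RECOMMENDED_TIMEOUTS[4])
    warnings = []
    if analyst_timeout < recommended["analyst"]:
        warnings.append(
            f"analyst_node_timeout_secs={analyst_timeout}s may be insufficient "
            f"for {analyst_count} analyst(s) (recommended: {recommended['analyst']}s)"
        )
    if research_timeout < recommended["research"]:
        warnings.append(
            f"research_node_timeout_secs={research_timeout}s may be insufficient "
            f"for {analyst_count} analyst(s) (recommended: {recommended['research']}s)"
        )
    for message in warnings:
        logger.warning(message)
    return warnings
```

Returning the warnings (instead of only logging) keeps the check unit-testable without capturing log output.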
### Warning Example

```
LLMRunner: analyst_node_timeout_secs=75.0s may be insufficient for 4 analyst(s) (recommended: 120.0s)
```

### Design Notes

- **Non-blocking validation** - logs a warning but does not prevent initialization
  - Different LLM providers have vastly different speeds (MiniMax vs OpenAI)
  - Users may have profiled their specific setup and chosen lower timeouts intentionally
- **Conservative recommendations** - thresholds assume slower providers
  - Based on real profiling data from the MiniMax Anthropic-compatible endpoint
  - Users with faster providers can safely ignore the warnings
- **Runs at `__init__` time** - warns early, before any API calls
### Timeout Calculation Rationale

Multi-analyst execution is **serial** for analysts and **parallel** for research:

```
Total time ≈ (analyst_count × analyst_timeout) + research_timeout + trading + risk + portfolio
```

For 4 analysts with a 75s timeout each:

- Analyst phase: ~300s (serial)
- Research phase: ~30s (parallel bull/bear)
- Trading phase: ~15s
- Risk phase: ~10s
- Portfolio phase: ~10s
- **Total: ~365s** (6+ minutes)

The recommended 120s per analyst assumes:

- Some analysts may time out and degrade
- The degraded path still completes within the timeout
- Total execution stays within reasonable bounds (~8-10 minutes)
## 3. Runtime Error Classification

### Purpose

Categorize runtime failures into actionable reason codes for debugging and monitoring.

### Error Taxonomy

Defined in `orchestrator/contracts/error_taxonomy.py`:

```python
class ReasonCode(str, Enum):
    CONFIG_INVALID = "config_invalid"
    PROVIDER_MISMATCH = "provider_mismatch"
    PROVIDER_AUTH_FAILED = "provider_auth_failed"
    LLM_INIT_FAILED = "llm_init_failed"
    LLM_SIGNAL_FAILED = "llm_signal_failed"
    LLM_UNKNOWN_RATING = "llm_unknown_rating"
    # ... (quant-related codes omitted)
```
### Classification Logic

`LLMRunner.get_signal()` catches exceptions from `propagate()` and classifies them:

1. **Provider mismatch** (pre-initialization)
   - Detected by `_detect_provider_mismatch()` before graph creation
   - Returns `provider_mismatch` immediately
2. **Provider auth failure** (runtime)
   - Detected by the `_looks_like_provider_auth_failure()` heuristic
   - Markers: `"authentication_error"`, `"login fail"`, `"invalid api key"`, `"unauthorized"`, `"error code: 401"`
   - Returns `provider_auth_failed`
3. **Generic LLM failure** (runtime)
   - Any other exception from `propagate()`
   - Returns `llm_signal_failed`
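The auth heuristic in step 2 can be sketched as a substring scan over the exception text (markers copied from the list above; the function names are illustrative stand-ins for the private helpers):

```python
# Case-insensitive substrings that indicate an auth problem rather than a
# transient LLM failure.
_AUTH_FAILURE_MARKERS = (
    "authentication_error",
    "login fail",
    "invalid api key",
    "unauthorized",
    "error code: 401",
)


def looks_like_provider_auth_failure(exc):
    """Classify an exception as an auth failure purely from its message text."""
    message = str(exc).lower()
    return any(marker in message for marker in _AUTH_FAILURE_MARKERS)


def classify(exc):
    """Map a propagate() exception to a reason code string."""
    if looks_like_provider_auth_failure(exc):
        return "provider_auth_failed"
    return "llm_signal_failed"
```

Being message-based, the heuristic costs nothing at runtime, at the price of missing providers whose auth errors use other wording.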
### Error Response Structure

All error signals include:

```python
Signal(
    degraded=True,
    reason_code="<reason_code>",
    direction=0,
    confidence=0.0,
    metadata={
        "error": "<exception message>",
        "data_quality": {
            "state": "<state>",
            # ... additional context
        }
    }
)
```

### Design Notes

- **Fail-fast on config errors** - mismatches are detected before expensive operations
- **Heuristic auth detection** - no API call overhead; relies on error message patterns
- **Structured metadata** - `data_quality.state` mirrors `reason_code` for consistency
## 4. Testing

### Test Coverage

`orchestrator/tests/test_llm_runner.py` includes:

**Provider matrix validation:**

- `test_detect_provider_mismatch_google_with_openai_url`
- `test_detect_provider_mismatch_xai_with_anthropic_url`
- `test_detect_provider_mismatch_ollama_with_openai_url`
- `test_detect_provider_mismatch_valid_anthropic_minimax`
- `test_detect_provider_mismatch_valid_openai`

**Timeout validation:**

- `test_timeout_validation_warns_for_multiple_analysts_low_timeout`
- `test_timeout_validation_no_warn_for_single_analyst`
- `test_timeout_validation_no_warn_for_sufficient_timeout`

**Error classification:**

- `test_get_signal_classifies_provider_auth_failure`
- `test_get_signal_returns_provider_mismatch_before_graph_init`
- `test_get_signal_returns_reason_code_on_propagate_failure`

### Running Tests

```bash
cd /path/to/TradingAgents
python -m pytest orchestrator/tests/test_llm_runner.py -v
```
## 5. Maintenance

### Adding New Providers

When adding a new provider to `tradingagents/llm_clients/factory.py`:

1. Add a new `ProviderSpec` entry to the `_PROVIDER_SPECS` tuple with `base_url_patterns`
2. Add test cases for valid and invalid configurations in `orchestrator/tests/test_llm_runner.py`
3. Update this documentation

**Example:**

```python
ProviderSpec(
    canonical_name="newprovider",
    aliases=("newprovider",),
    builder=lambda model, base_url=None, **kwargs: NewProviderClient(model, base_url, **kwargs),
    base_url_patterns=(r"api\.newprovider\.com",),
)
```

### Adjusting Timeout Recommendations

If profiling shows different timeout requirements:

1. Update `_RECOMMENDED_TIMEOUTS` in `llm_runner.py`
2. Document the rationale in this file
3. Update test expectations if needed

### Extending Error Classification

To add new reason codes:

1. Add the code to the `ReasonCode` enum in `contracts/error_taxonomy.py`
2. Add detection logic in `LLMRunner.get_signal()`
3. Add a test case in `test_llm_runner.py`
4. Update this documentation
## 6. Known Limitations

### API Key Validation

The current implementation does **not** validate API key validity before graph initialization:

- **Limitation**: expired/invalid keys are only detected during the first `propagate()` call
- **Impact**: ~5-10s wasted on graph initialization before the auth failure surfaces
- **Rationale**: lightweight key validation would require provider-specific API calls, adding latency and complexity
- **Mitigation**: auth failures are still classified correctly as `provider_auth_failed`

### Provider Pattern Maintenance

~~URL patterns must be manually kept in sync with provider changes.~~

**UPDATE (2026-04-16)**: Provider URL patterns have been moved to `tradingagents/llm_clients/factory.py` as part of `ProviderSpec`. This centralizes validation logic with the provider definitions.

**Current implementation:**

- Each `ProviderSpec` includes an optional `base_url_patterns` tuple
- The `validate_provider_base_url()` function provides the validation logic
- `LLMRunner._detect_provider_mismatch()` delegates to the factory validation
- Patterns are co-located with provider builders, reducing the maintenance burden

**Benefits:**

- Single source of truth for provider configuration
- Easier to keep patterns in sync when adding/updating providers
- The factory can be tested independently of the orchestrator
- Reduced code duplication

**Remaining considerations:**

- **Risk**: a provider changes its base URL structure (e.g. API versioning)
- **Mitigation**: validation is non-blocking; mismatches are logged but don't prevent operation

### Timeout Recommendations

Recommendations are based on MiniMax profiling and may not generalize:

- **Risk**: faster providers may trigger unnecessary warnings
- **Mitigation**: warnings are advisory only; users who have profiled their setup can ignore them
- **Future**: consider provider-specific timeout recommendations

## 7. Related Documentation

- `docs/contracts/result-contract-v1alpha1.md` - Signal contract structure
- `docs/architecture/research-provenance.md` - Research degradation semantics
- `docs/migration/rollback-notes.md` - Backend migration status
- `orchestrator/contracts/error_taxonomy.py` - Complete reason code list
# TradingAgents research provenance, node guards, and profiling harness

Status: draft
Audience: orchestrator, TradingAgents graph, verification
Scope: document the Phase 1-4 provenance fields, Bull/Bear/Manager guard behavior, the trace schema, and the smallest safe A/B workflow for verification

## Current implementation snapshot (2026-04)

Mainline now has four distinct but connected pieces in place:

1. research provenance fields are carried in `investment_debate_state`;
2. the same provenance is reused by:
   - `orchestrator/llm_runner.py`
   - `orchestrator/live_mode.py`
   - `tradingagents/graph/trading_graph.py` full-state logs;
3. `orchestrator/profile_stage_chain.py` emits node-level traces for offline analysis;
4. `orchestrator/profile_ab.py` compares two trace cohorts offline without changing the production execution path.

This document describes the **current mainline behavior**, not a future structured-memo design.
## 1. Why this document exists

Phase 1-4 convergence added three closely related behaviors:

1. research-stage provenance is carried inside `investment_debate_state` and surfaced into application-facing metadata;
2. the Bull Researcher, Bear Researcher, and Research Manager are guarded so that timeouts/exceptions degrade gracefully without changing the default full-debate path;
3. `orchestrator/profile_stage_chain.py` can be used as a minimal A/B harness to compare prompt/profile variants while preserving the production path.

The implementation is intentionally conservative:

- **no structured memo output** is introduced;
- **default behavior remains the full debate path** when no guard trips;
- **existing debate string fields stay authoritative** (`history`, `bull_history`, `bear_history`, `current_response`, `judge_decision`).
## 2. Provenance schema and ownership
|
||||
|
||||
### 2.1 Canonical provenance fields
|
||||
|
||||
The research provenance fields currently carried in `investment_debate_state` are:
|
||||
|
||||
| Field | Meaning | Primary source |
|
||||
| --- | --- | --- |
|
||||
| `research_status` | Research health/status. Current in-repo values are `full` and `degraded`; `failed` is tolerated in surfaced diagnostics. | `tradingagents/graph/propagation.py`, `tradingagents/graph/setup.py`, `tradingagents/agents/utils/agent_states.py` |
|
||||
| `research_mode` | Research execution mode. Normal path is `debate`; degraded path is `degraded_synthesis`. | same |
|
||||
| `timed_out_nodes` | Ordered list of guarded research nodes that hit timeout. | `tradingagents/graph/setup.py` |
|
||||
| `degraded_reason` | Machine-readable reason string such as `bull_researcher_timeout`. | `tradingagents/graph/setup.py` |
|
||||
| `covered_dimensions` | Which debate dimensions completed successfully so far (`bull`, `bear`, `manager`). | `tradingagents/graph/setup.py` |
|
||||
| `manager_confidence` | Optional confidence marker for the research-manager layer. `1.0` on clean manager success, `0.5` when manager succeeds after prior degradation, `0.0` on manager fallback. | `tradingagents/graph/setup.py` |
|
||||
|
||||
### 2.2 Initialization and propagation
|
||||
|
||||
- `tradingagents/graph/propagation.py` initializes the default path with:
|
||||
- `research_status = "full"`
|
||||
- `research_mode = "debate"`
|
||||
- `timed_out_nodes = []`
|
||||
- `degraded_reason = None`
|
||||
- `covered_dimensions = []`
|
||||
- `manager_confidence = None`
|
||||
- `tradingagents/graph/setup.py::_apply_research_success()` extends `covered_dimensions` and preserves the default debate mode while the research status remains `full`.
|
||||
- `tradingagents/graph/setup.py::_apply_research_fallback()` marks the state as degraded, records the reason, and updates only the existing debate fields instead of inventing a parallel memo structure.
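The default slice above can be sketched as a plain dict factory. This is an illustrative helper only; the real initialization lives in `tradingagents/graph/propagation.py`, and the function name here is hypothetical:

```python
def default_research_provenance() -> dict:
    """Provenance slice for a healthy run, mirroring the defaults listed above.

    Illustrative helper; the actual initialization is performed by
    tradingagents/graph/propagation.py, not this function.
    """
    return {
        "research_status": "full",
        "research_mode": "debate",
        "timed_out_nodes": [],
        "degraded_reason": None,
        "covered_dimensions": [],
        "manager_confidence": None,
    }
```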
## 3. Guard behavior by node

`GraphSetup._guard_research_node()` wraps each research node in a single-worker thread pool and enforces `research_node_timeout_secs`.
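The guard pattern can be sketched with `concurrent.futures`. This is a simplified stand-in for the actual `GraphSetup` implementation: the function names and the `fallback_fn` signature are assumptions, and the real guard additionally updates the provenance fields described in section 2:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout


def guard_research_node(node_fn, state, timeout_secs, fallback_fn):
    """Run one research node with a hard timeout (illustrative sketch).

    On timeout or exception the fallback produces the degraded result;
    the reason string is passed so the fallback can record provenance.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(node_fn, state)
    try:
        return future.result(timeout=timeout_secs)
    except FutureTimeout:
        return fallback_fn(state, reason="timeout")
    except Exception as exc:
        return fallback_fn(state, reason=type(exc).__name__.lower())
    finally:
        # Do not block on the (possibly still running) timed-out worker.
        pool.shutdown(wait=False)
```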
### 3.1 Bull / Bear researcher fallback

On timeout or exception for `Bull Researcher` or `Bear Researcher`:

- the corresponding node name is added to `timed_out_nodes` when the reason includes `timeout`;
- `research_status` becomes `degraded`;
- `research_mode` becomes `degraded_synthesis`;
- a plain-text degraded argument is appended to:
  - `history`
  - the node-specific history field (`bull_history` or `bear_history`)
  - `current_response`
- `count` is incremented so the debate routing still advances.

This keeps the **existing debate output shape** intact: downstream consumers continue reading the same string fields they already depend on.

### 3.2 Research Manager fallback

On timeout or exception for `Research Manager`:

- provenance is marked degraded using the same schema;
- `manager_confidence` is forced to `0.0`;
- `judge_decision`, `current_response`, and the returned `investment_plan` are set to a plain-text HOLD recommendation that explicitly calls out degraded research.

This is intentionally **string-first**, not schema-first, so the downstream plan/report path does not have to learn a new memo envelope.

## 4. Application-facing surfacing

### 4.1 LLM runner metadata

`orchestrator/llm_runner.py` extracts the provenance subset from `investment_debate_state` and stores it under:

- `metadata.research`
- `metadata.data_quality`
- `metadata.sample_quality`

The extraction path is now centralized through:

- `tradingagents/agents/utils/agent_states.py::extract_research_provenance()`

Current conventions:

- normal path: `data_quality.state = "ok"`, `sample_quality = "full_research"`;
- degraded path: `data_quality.state = "research_degraded"`, `sample_quality = "degraded_research"`.
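The conventions above can be sketched as a projection function. This mirrors the role of `extract_research_provenance()` but is not its actual implementation; the field names follow section 2.1 and the defaults follow the healthy path:

```python
def extract_provenance(debate_state: dict) -> dict:
    """Project the provenance subset out of investment_debate_state.

    Hedged sketch of the extraction described above; the real helper lives
    in tradingagents/agents/utils/agent_states.py.
    """
    research = {
        "research_status": debate_state.get("research_status", "full"),
        "research_mode": debate_state.get("research_mode", "debate"),
        "timed_out_nodes": list(debate_state.get("timed_out_nodes") or []),
        "degraded_reason": debate_state.get("degraded_reason"),
    }
    degraded = research["research_status"] != "full"
    return {
        "research": research,
        "data_quality": {"state": "research_degraded" if degraded else "ok"},
        "sample_quality": "degraded_research" if degraded else "full_research",
    }
```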
### 4.2 Live-mode contract projection

`orchestrator/live_mode.py` forwards provenance under top-level `research` in live-mode payloads for both:

- `completed` / `degraded_success` results; and
- structured failures that carry research diagnostics in `source_diagnostics`.

This means consumers can inspect research degradation without parsing raw debate text.

### 4.3 Full-state log projection

`tradingagents/graph/trading_graph.py::_log_state()` now also persists the same provenance subset into:

- `results/<ticker>/TradingAgentsStrategy_logs/full_states_log_<trade_date>.json`

This keeps the post-run JSON logs aligned with the runner/live metadata instead of silently dropping the structured fields.

## 5. Profiling trace schema

`orchestrator/profile_stage_chain.py` is the current timing/provenance trace generator.
`orchestrator/profile_trace_utils.py` holds the shared summary helper used by the offline A/B comparison path.

### 5.1 Top-level payload

Successful runs currently write a JSON payload with:

- `status`
- `ticker`
- `date`
- `selected_analysts`
- `analysis_prompt_style`
- `node_timings`
- `phase_totals_seconds`
- `dump_path`
- `raw_events` (normally empty unless explicitly requested on failure)

Error payloads add:

- `run_id`
- `error`
- `exception_type`

### 5.2 `node_timings[]` entry schema

Each `node_timings[]` entry currently contains:

| Field | Meaning |
| --- | --- |
| `run_id` | Correlates all rows from one profiling run |
| `nodes` | Node names emitted by the LangGraph update |
| `phases` | Normalized application phase names (`analyst`, `research`, `trading`, `risk`, `portfolio`) |
| `llm_kinds` | Normalized LLM bucket labels (`quick`, `deep`) |
| `start_at` / `end_at` | Relative offsets from run start, in seconds |
| `elapsed_ms` | Duration since the previous event |
| `selected_analysts` | Analyst slice used for the run |
| `analysis_prompt_style` | Prompt profile used for the run |
| `research_status` | Provenance snapshot extracted from `investment_debate_state` |
| `degraded_reason` | Provenance reason snapshot |
| `history_len` | Current debate history length |
| `response_len` | Current response length |

This schema is intentionally **trace-oriented**, not a replacement for the application result contract.
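For orientation, one `node_timings[]` entry shaped after the table above might look like this. All values are invented for the example; only the key set comes from the schema:

```python
# One illustrative node_timings[] entry (every value below is made up).
entry = {
    "run_id": "run-0001",
    "nodes": ["Bull Researcher"],
    "phases": ["research"],
    "llm_kinds": ["deep"],
    "start_at": 12.4,          # seconds since run start
    "end_at": 19.1,
    "elapsed_ms": 6700,        # duration since the previous event
    "selected_analysts": ["market"],
    "analysis_prompt_style": "compact",
    "research_status": "full",
    "degraded_reason": None,
    "history_len": 1842,
    "response_len": 911,
}
```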
## 6. Offline A/B comparison helper

`orchestrator/profile_ab.py` is the current offline comparison helper.

It consumes one or more trace JSON files from cohort `A` and cohort `B`, then reports:

- `median_total_elapsed_ms`
- `median_event_count`
- `median_phase_elapsed_ms`
- `degraded_run_count`
- `error_count`
- `trace_schema_versions`
- `source_files`
- recommendation tie-breaks across elapsed time, degradation count, and error count

This helper is intentionally offline-only: it does **not** re-run live providers or change the production runtime path.
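The cohort reduction can be sketched as follows. This is a simplified stand-in for `profile_ab.py`, covering only three of the reported fields; the real helper also handles per-phase medians, errors, and schema versions:

```python
from statistics import median


def summarize_cohort(traces: list[dict]) -> dict:
    """Reduce a cohort of trace payloads to a few of the medians above.

    Sketch only; profile_ab.py is the authoritative implementation.
    """
    totals = [sum(e["elapsed_ms"] for e in t["node_timings"]) for t in traces]
    degraded = sum(
        1 for t in traces
        if any(e.get("research_status") == "degraded" for e in t["node_timings"])
    )
    return {
        "median_total_elapsed_ms": median(totals),
        "median_event_count": median(len(t["node_timings"]) for t in traces),
        "degraded_run_count": degraded,
    }
```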
## 7. Minimal A/B harness guidance

Use `python -m orchestrator.profile_stage_chain` to generate traces, then `python -m orchestrator.profile_ab` to compare them.

### 7.1 Safe comparison knobs

Run the harness from the repo root as a module (`python -m orchestrator.profile_stage_chain`) so package imports resolve without extra path tweaking.

The smallest useful A/B comparisons are:

- `--analysis-prompt-style` (for example `compact` vs another supported style)
- `--selected-analysts` (for example a narrower analyst slice vs a broader slice)
- provider/model/timeout settings while keeping the graph semantics fixed

### 7.2 Recommended invariants

Keep these fixed when doing an A/B comparison:

- the same `--ticker`
- the same `--date`
- the same provider/model unless the provider/model itself is the experimental variable
- the same `--overall-timeout`
- `max_debate_rounds = 1` and `max_risk_discuss_rounds = 1` as currently baked into the harness

### 7.3 Example commands

```bash
python -m orchestrator.profile_stage_chain \
  --ticker AAPL \
  --date 2026-04-11 \
  --selected-analysts market \
  --analysis-prompt-style compact

python -m orchestrator.profile_stage_chain \
  --ticker AAPL \
  --date 2026-04-11 \
  --selected-analysts market \
  --analysis-prompt-style detailed

python -m orchestrator.profile_ab \
  --a orchestrator/profile_runs/compact \
  --b orchestrator/profile_runs/detailed \
  --label-a compact \
  --label-b detailed
```

Compare the generated JSON dumps by focusing on:

- `phase_totals_seconds`
- `node_timings[].elapsed_ms`
- provenance changes (`research_status`, `degraded_reason`)
- history/response growth (`history_len`, `response_len`)
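Pulling `phase_totals_seconds` out of a dump for side-by-side inspection takes only a few lines. The helper name and path handling here are illustrative; the dump format is the one described in section 5.1:

```python
import json
from pathlib import Path


def phase_totals(trace_path: str) -> dict:
    """Load one trace dump and return its phase_totals_seconds block.

    Sketch for manual comparison; not part of the harness itself.
    """
    payload = json.loads(Path(trace_path).read_text())
    return payload.get("phase_totals_seconds", {})
```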
## 8. Review guardrails

When modifying this area, keep these invariants intact unless a broader migration explicitly approves otherwise:

1. **Do not change the default path**: normal successful runs should still stay in `research_status = "full"` and `research_mode = "debate"`.
2. **Do not introduce structured memo output** for degraded research unless all downstream consumers are migrated together.
3. **Preserve debate output shape**: downstream readers still expect plain strings in `history`, `bull_history`, `bear_history`, `current_response`, `judge_decision`, and `investment_plan`.
4. **Keep provenance additive**: provenance fields should explain degraded behavior, not replace the existing textual debate artifacts.
@@ -0,0 +1,304 @@
# TradingAgents result contract v1alpha1 draft

Status: draft
Audience: backend, desktop, frontend, verification
Format: JSON-oriented contract notes with examples

## Current implementation snapshot (2026-04)

Mainline backend behavior now partially matches this draft already:

- `web_dashboard/backend/services/job_service.py` emits public task/job payloads with `contract_version = "v1alpha1"`;
- `web_dashboard/backend/services/result_store.py` persists result contracts under `results/<task_id>/result.v1alpha1.json`;
- `web_dashboard/backend/api/portfolio.py` and `/ws/orchestrator` already expose `v1alpha1` envelopes by default;
- live signal payloads currently carry `data_quality`, `degradation`, and `research` as top-level contract fields in addition to `result` / `error`.

This document is therefore a **working contract doc**, not a pure future sketch.

## 1. Goals

`result-contract-v1alpha1` defines the stable shapes exchanged across:

- analysis start/status APIs
- websocket progress events
- live orchestrator streaming
- persisted task state
- historical report projection

The contract should be application-facing, not raw domain dataclasses.

## 2. Design principles

- version every externally consumed payload
- keep transport-neutral field meanings
- allow partial/degraded results when the quant or LLM lane fails
- distinguish task lifecycle from signal outcome
- keep raw domain metadata nested, not smeared across top-level fields

## 3. Core enums

### 3.1 Task status

```json
["pending", "running", "completed", "failed", "cancelled"]
```

### 3.2 Stage name

```json
["analysts", "research", "trading", "risk", "portfolio"]
```

### 3.3 Decision rating

```json
["BUY", "OVERWEIGHT", "HOLD", "UNDERWEIGHT", "SELL"]
```

## 4. Canonical envelope

All application-facing payloads should include:

```json
{
  "contract_version": "v1alpha1"
}
```

Optional transport-specific wrapper fields such as WebSocket `type` may sit outside the contract body.
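A consumer-side guard for the envelope can be as small as the sketch below. This is illustrative, not a shipped validator; the function name is an assumption:

```python
def check_envelope(payload: dict) -> None:
    """Reject payloads that do not carry the canonical contract marker.

    Illustrative consumer-side guard for the envelope described above.
    """
    version = payload.get("contract_version")
    if version != "v1alpha1":
        raise ValueError(f"unsupported contract_version: {version!r}")
```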
## 5. Analysis task contract

### 5.1 Accepted response

```json
{
  "contract_version": "v1alpha1",
  "task_id": "600519.SS_20260413_120000_ab12cd",
  "ticker": "600519.SS",
  "date": "2026-04-13",
  "status": "running"
}
```

### 5.2 Status / progress document

```json
{
  "contract_version": "v1alpha1",
  "task_id": "600519.SS_20260413_120000_ab12cd",
  "ticker": "600519.SS",
  "date": "2026-04-13",
  "status": "running",
  "progress": 40,
  "current_stage": "research",
  "created_at": "2026-04-13T12:00:00Z",
  "elapsed_seconds": 18,
  "stages": [
    {"name": "analysts", "status": "completed", "completed_at": "12:00:05"},
    {"name": "research", "status": "running", "completed_at": null},
    {"name": "trading", "status": "pending", "completed_at": null},
    {"name": "risk", "status": "pending", "completed_at": null},
    {"name": "portfolio", "status": "pending", "completed_at": null}
  ],
  "result": null,
  "error": null,
  "evidence_summary": null,
  "tentative_classification": null,
  "budget_state": {}
}
```

Notes:

- `elapsed_seconds` is preferred over the current loosely typed `elapsed`.
- stage entries should carry an explicit `name`; the current positional arrays are fragile.
- `result` remains nullable until completion.
- `evidence_summary`, `tentative_classification`, and `budget_state` are additive helper fields for runtime recovery / attribution and may be absent in older payloads.

### 5.3 Completed result payload

```json
{
  "contract_version": "v1alpha1",
  "task_id": "600519.SS_20260413_120000_ab12cd",
  "ticker": "600519.SS",
  "date": "2026-04-13",
  "status": "completed",
  "progress": 100,
  "current_stage": "portfolio",
  "result": {
    "decision": "OVERWEIGHT",
    "confidence": 0.64,
    "signals": {
      "merged": {"direction": 1, "rating": "OVERWEIGHT"},
      "quant": {"direction": 1, "rating": "OVERWEIGHT", "available": true},
      "llm": {"direction": 1, "rating": "BUY", "available": true}
    },
    "degraded": false,
    "report": {
      "path": "results/600519.SS/2026-04-13/complete_report.md",
      "available": true
    }
  },
  "evidence": {
    "attempts": [
      {
        "status": "completed",
        "observation_code": "completed",
        "stage": "portfolio"
      }
    ],
    "last_observation": {
      "status": "completed",
      "observation_code": "completed",
      "stage": "portfolio"
    }
  },
  "tentative_classification": {
    "kind": "healthy",
    "summary": "baseline execution succeeded without fallback"
  },
  "budget_state": {
    "local_recovery_used": false,
    "provider_probe_used": false,
    "baseline_timeout_secs": 300.0
  },
  "error": null
}
```

### 5.4 Failed result payload

```json
{
  "contract_version": "v1alpha1",
  "task_id": "600519.SS_20260413_120000_ab12cd",
  "ticker": "600519.SS",
  "date": "2026-04-13",
  "status": "failed",
  "progress": 60,
  "current_stage": "trading",
  "result": null,
  "error": {
    "code": "analysis_failed",
    "message": "both quant and llm signals are None",
    "retryable": false
  }
}
```

## 6. Live signal batch contract

This covers `/ws/orchestrator` style responses currently produced by `LiveMode`.

```json
{
  "contract_version": "v1alpha1",
  "signals": [
    {
      "ticker": "600519.SS",
      "date": "2026-04-13",
      "status": "completed",
      "result": {
        "direction": 1,
        "confidence": 0.64,
        "quant_direction": 1,
        "llm_direction": 1,
        "timestamp": "2026-04-13T12:00:11Z"
      },
      "degradation": null,
      "data_quality": {"state": "ok"},
      "research": null,
      "error": null
    },
    {
      "ticker": "300750.SZ",
      "date": "2026-04-13",
      "status": "failed",
      "result": null,
      "degradation": {
        "degraded": true,
        "reason_code": "provider_mismatch"
      },
      "data_quality": {"state": "provider_mismatch", "source": "llm"},
      "research": {
        "research_status": "failed",
        "research_mode": "degraded_synthesis",
        "timed_out_nodes": ["Bull Researcher"],
        "degraded_reason": "bull_researcher_connectionerror",
        "covered_dimensions": ["market"],
        "manager_confidence": null
      },
      "error": {
        "code": "live_signal_failed",
        "message": "both quant and llm signals are None",
        "retryable": false
      }
    }
  ]
}
```

## 7. Historical report contract

```json
{
  "contract_version": "v1alpha1",
  "ticker": "600519.SS",
  "date": "2026-04-13",
  "decision": "OVERWEIGHT",
  "report": "# TradingAgents ...",
  "artifacts": {
    "complete_report": true,
    "stage_reports": {
      "analysts": true,
      "research": true,
      "trading": true,
      "risk": true,
      "portfolio": false
    }
  }
}
```

## 8. Mapping from current implementation

Current backend fields in `web_dashboard/backend/main.py` map roughly as follows:

- `decision` -> `result.decision`
- `quant_signal` -> `result.signals.quant.rating`
- `llm_signal` -> `result.signals.llm.rating`
- `confidence` -> `result.confidence`
- `result_ref` -> persisted result contract location under `results/<task_id>/result.v1alpha1.json`
- top-level `error` string -> structured `error`
- positional `stages[]` -> named `stages[]`
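The field mapping above can be expressed as a small projection. This sketch covers only the signal-related rows; the function name is an assumption and the legacy key names are the current ones from `main.py`:

```python
def to_v1alpha1_result(legacy: dict) -> dict:
    """Map legacy flat backend fields onto the nested v1alpha1 result shape.

    Partial, illustrative mapping of the table above; error and stage
    projection are omitted.
    """
    return {
        "decision": legacy.get("decision"),
        "confidence": legacy.get("confidence"),
        "signals": {
            "quant": {"rating": legacy.get("quant_signal")},
            "llm": {"rating": legacy.get("llm_signal")},
        },
    }
```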
## 9. Compatibility notes

### v1alpha1 tolerances

Consumers should tolerate:

- absent `result.signals.quant` when the quant path is unavailable
- absent `result.signals.llm` when the LLM path is unavailable
- `result.degraded = true` when only one lane produced a usable signal
- optional additive fields such as `evidence`, `tentative_classification`, `budget_state`, `evidence_summary`

### fields to avoid freezing yet

Do not freeze these until the config-schema work lands:

- provider-specific configuration echo fields
- raw metadata blobs from quant/LLM internals
- report summary extraction fields

Additional note:

- trace/profiling payloads are **not** part of `result-contract-v1alpha1`; they use separate offline trace/A-B helper files under `orchestrator/`.

## 10. Open review questions

- Should `rating` remain duplicated with `direction`, or should one be derived client-side?
- Should task progress timestamps standardize on RFC 3339 instead of mixed clock-only strings?
- Should historical report APIs return the extracted summary separately from the full markdown?
@@ -0,0 +1,233 @@
# TradingAgents backend migration and rollback notes draft

Status: draft
Audience: backend/application maintainers
Scope: migrate toward the application-service boundary and result-contract-v1alpha1 with rollback safety

## Current progress snapshot (2026-04)

Mainline has moved beyond pure planning, but it has not finished the full boundary migration:

- `Phase 0` is effectively done: contract and architecture drafts exist.
- `Phase 1-4` are **partially landed**:
  - backend services now project `v1alpha1`-style public payloads;
  - result contracts are persisted via `result_store.py`;
  - `/ws/analysis/{task_id}` and `/ws/orchestrator` already wrap payloads with `contract_version`;
  - recommendation and task-status reads already depend on application-layer shaping more than route-local reconstruction.
- `Phase 5` is **partially landed** via the task lifecycle boundary slice:
  - `status/list/cancel` now route through backend task services instead of route-local orchestration;
  - `web_dashboard/backend/main.py` is still too large outside that slice;
  - reports/export and other residual route-local orchestration are still pending;
  - compatibility fields still coexist with the newer contract-first path.

Also note that the research provenance / node guard / profiling work is now landed on the orchestrator side. That effort complements the backend migration but should not be confused with “application boundary fully complete.”

**Recent improvements (2026-04-16)**:

- Orchestrator error classification now includes comprehensive provider × base_url matrix validation
- Timeout configuration validation warns when analyst/research timeouts may be insufficient for multi-analyst profiles
- All provider mismatches (anthropic, openai, google, xai, ollama, openrouter) are now detected before graph initialization

## 1. Migration objective

Move backend delivery code from route-local orchestration to an application-service layer without changing the quant+LLM merge kernel behavior.

Target outcomes:

- stable result contract (`v1alpha1`)
- thin FastAPI transport
- application-owned task lifecycle and mapping
- rollback-safe migration using dual-read/dual-write where useful

## 2. Current coupling hotspots

Primary hotspot: `web_dashboard/backend/main.py`

It currently combines:

- route handlers
- task persistence
- subprocess creation and monitoring
- progress/stage state mutation
- result projection into API fields
- report export concerns

This file is the first migration target.

## 3. Recommended migration sequence

### Phase 0: contract freeze draft

Deliverables:

- agree on `docs/contracts/result-contract-v1alpha1.md`
- agree on the application boundary in `docs/architecture/application-boundary.md`

Rollback:

- none needed; documentation only

### Phase 1: introduce application service behind existing routes

Actions:

- add backend application modules for analysis status, live signals, and report reads
- keep existing route URLs unchanged
- move mapping logic out of route functions into services/mappers

Compatibility tactic:

- routes still return the current payload shape if the frontend depends on it
- the internal service also emits `v1alpha1` DTOs for verification comparison

Rollback:

- route handlers can call the old inline functions directly via a feature flag or import switch

Current status:

- partially complete on mainline via `analysis_service.py`, `job_service.py`, and `result_store.py`
- task lifecycle (`status/list/cancel`) is now service-routed
- not complete enough yet to claim `main.py` is only a thin adapter

### Phase 2: dual-read for task status

Why:

Task status currently lives in memory plus `data/task_status/*.json`. During migration, the new service storage and the old persisted shape may diverge.

Recommended strategy:

- read preference: new application store first
- fallback read: legacy JSON task status
- compare key fields during the shadow period: `status`, `progress`, `current_stage`, `decision`, `error`

Rollback:

- switch read preference back to legacy JSON only
- leave the new store populated for debugging, but non-authoritative
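The dual-read preference order can be sketched as below. The function signature and store shapes are illustrative assumptions; only the legacy path (`data/task_status/*.json`) comes from the notes above:

```python
import json
from pathlib import Path


def read_task_status(task_id: str, new_store: dict, legacy_dir: str = "data/task_status"):
    """Dual-read: prefer the new application store, fall back to legacy JSON.

    Sketch of the read-preference strategy above; not the production code.
    """
    if task_id in new_store:
        return new_store[task_id]
    legacy_path = Path(legacy_dir) / f"{task_id}.json"
    if legacy_path.exists():
        return json.loads(legacy_path.read_text())
    return None
```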
### Phase 3: dual-write for task results

Why:

To avoid breaking status pages and historical tooling during rollout.

Recommended strategy:

- authoritative write: new application store
- compatibility write: legacy `app.state.task_results` + `data/task_status/*.json`
- emit diff logs when new-vs-legacy projections disagree

Guardrails:

- dual-write only for application-layer payloads
- do not dual-write alternate domain semantics into `orchestrator/`

Rollback:

- disable new-store writes
- continue legacy writes only
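In-memory, the dual-write plus diff-log idea reduces to a few lines. Store shapes and the function name are illustrative; the point is that the new store is authoritative and disagreements are logged, not silently overwritten:

```python
import logging

logger = logging.getLogger(__name__)


def dual_write_result(task_id: str, payload: dict, new_store: dict, legacy_store: dict) -> None:
    """Dual-write sketch: new store is authoritative, legacy stays populated.

    Logs a warning when the legacy projection disagreed before the write.
    """
    new_store[task_id] = payload
    previous = legacy_store.get(task_id)
    if previous is not None and previous != payload:
        logger.warning("task %s: legacy/new projections disagree", task_id)
    legacy_store[task_id] = payload
```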
### Phase 4: websocket and live signal migration

Actions:

- make `/ws/analysis/{task_id}` and `/ws/orchestrator` render application contracts
- keep websocket wrapper fields stable while migrating the internal body shape

Suggested compatibility step:

- send the legacy event envelope with an embedded `contract_version`
- update frontend consumers before removing legacy-only fields

Rollback:

- restore the websocket serializer to the legacy shape
- keep the application service intact behind an adapter

Current status:

- partially complete on mainline
- `/ws/orchestrator` already emits `contract_version`, `data_quality`, `degradation`, and `research`
- `/ws/analysis/{task_id}` already reads application-shaped task state

### Phase 5: remove route-local orchestration

Actions:

- delete dead inline task mutation helpers from `main.py`
- keep routes as a thin adapter layer
- preserve report retrieval behavior

Rollback:

- only safe after shadow metrics show parity
- otherwise revert to Phase 3 dual-write mode, not direct deletion

## 4. Suggested feature flags

Environment-variable style examples:

- `TA_APP_SERVICE_ENABLED=1`
- `TA_RESULT_CONTRACT_VERSION=v1alpha1`
- `TA_TASKSTORE_DUAL_READ=1`
- `TA_TASKSTORE_DUAL_WRITE=1`
- `TA_WS_V1ALPHA1_ENABLED=0`

These names are placeholders; exact naming can be chosen during implementation.
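Since the flag names above are placeholders, the parsing helper below is equally illustrative; it only pins down a tolerant truthiness convention for whatever names are eventually chosen:

```python
import os


def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a migration feature flag from the environment.

    Illustrative helper; accepts the usual truthy spellings and falls back
    to the given default when the variable is unset.
    """
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes", "on"}
```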
## 5. Verification checkpoints per phase

For each migration phase, verify:

- the same task ids are returned for the same route behavior
- stage transitions remain monotonic
- completed tasks persist `decision`, `confidence`, and degraded-path outcomes
- the failure path still preserves actionable error text
- live websocket payloads preserve ticker/date ordering expectations

## 6. Rollback triggers

Rollback immediately if any of these happen:

- task status disappears after a backend restart
- WebSocket clients stop receiving progress updates
- a completed analysis loses its `decision` or `confidence` fields
- degraded single-lane signals are reclassified incorrectly
- report export or historical report retrieval cannot find prior artifacts

## 7. Explicit non-goals during migration

- do not rewrite the `orchestrator/signals.py` merge math as part of the boundary migration
- do not rework provider/model selection semantics in the same change set
- do not force a frontend redesign before contract shadowing proves parity
- do not implement a new strategy layer inside the application service

## 8. Minimal rollback playbook

If production or local verification fails after the migration cutover:

1. disable the application-service read path
2. disable dual-write to the new store if it corrupts parity checks
3. restore the legacy route-local serializers
4. keep generated comparison logs/artifacts for diff analysis
5. re-run backend tests and one end-to-end manual analysis flow

## 9. Review checklist

A migration plan is acceptable only if it:

- preserves orchestrator ownership of quant+LLM merge semantics
- introduces feature-flagged cutover points
- supports dual-read/dual-write only at the application/persistence boundary
- provides a one-step rollback path at each release phase

## 10. Maintainer note

When updating migration status, keep these three documents aligned:

- `docs/architecture/application-boundary.md`
- `docs/contracts/result-contract-v1alpha1.md`
- `docs/architecture/research-provenance.md`

The first two describe backend/application convergence; the third describes orchestrator-side research degradation and profiling semantics that now feed those contracts.
10 main.py

@@ -1,15 +1,11 @@
 from tradingagents.graph.trading_graph import TradingAgentsGraph
-from tradingagents.default_config import DEFAULT_CONFIG
-
-from dotenv import load_dotenv
+from tradingagents.default_config import get_default_config, load_project_env
 
 # Load environment variables from .env file
-load_dotenv()
+load_project_env(__file__)
 
 # Create a custom config
-config = DEFAULT_CONFIG.copy()
-config["deep_think_llm"] = "gpt-5.4-mini" # Use a different model
-config["quick_think_llm"] = "gpt-5.4-mini" # Use a different model
+config = get_default_config()
 config["max_debate_rounds"] = 1 # Increase debate rounds
 
 # Configure data vendors (default uses yfinance, no extra API keys needed)
@@ -0,0 +1,64 @@
import logging
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

from orchestrator.signals import FinalSignal

logger = logging.getLogger(__name__)


@dataclass
class BacktestResult:
    records: List[dict] = field(default_factory=list)
    summary: dict = field(default_factory=dict)


class BacktestMode:
    def __init__(self, orchestrator):
        self._orchestrator = orchestrator

    def run(self, tickers: List[str], start_date: str, end_date: str) -> BacktestResult:
        start = datetime.strptime(start_date, "%Y-%m-%d")
        end = datetime.strptime(end_date, "%Y-%m-%d")

        records = []
        current = start
        while current <= end:
            if current.weekday() < 5:  # skip weekends
                date_str = current.strftime("%Y-%m-%d")
                for ticker in tickers:
                    try:
                        sig = self._orchestrator.get_combined_signal(ticker, date_str)
                        records.append({
                            "ticker": ticker,
                            "date": date_str,
                            "direction": sig.direction,
                            "confidence": sig.confidence,
                            "quant_direction": sig.quant_signal.direction if sig.quant_signal else None,
                            "llm_direction": sig.llm_signal.direction if sig.llm_signal else None,
                        })
                    except Exception as e:
                        logger.error("BacktestMode: failed for %s %s: %s", ticker, date_str, e)
            current += timedelta(days=1)

        summary = self._compute_summary(records, tickers)
        return BacktestResult(records=records, summary=summary)

    def _compute_summary(self, records: List[dict], tickers: List[str]) -> dict:
        summary = {}
        for ticker in tickers:
            ticker_records = [r for r in records if r["ticker"] == ticker]
            if not ticker_records:
                summary[ticker] = {"total_days": 0}
                continue
            directions = [r["direction"] for r in ticker_records]
            confidences = [r["confidence"] for r in ticker_records]
            summary[ticker] = {
                "total_days": len(ticker_records),
                "buy_days": directions.count(1),
                "sell_days": directions.count(-1),
                "hold_days": directions.count(0),
                "avg_confidence": sum(confidences) / len(confidences),
            }
        return summary
@@ -0,0 +1,32 @@
from dataclasses import dataclass, field

from orchestrator.contracts.config_loader import normalize_orchestrator_fields


@dataclass
class OrchestratorConfig:
    # Must be set to the local quant backtest output directory before use
    quant_backtest_path: str = ""
    trading_agents_config: dict = field(default_factory=dict)
    quant_weight_cap: float = 0.8  # upper cap on quant confidence
    llm_weight_cap: float = 0.9  # upper cap on LLM confidence
    llm_batch_days: int = 7  # run the LLM lane once every N days (saves API calls)
    cache_dir: str = "orchestrator/cache"  # LLM signal cache directory
    llm_solo_penalty: float = 0.7  # confidence discount when only the LLM lane runs
    quant_solo_penalty: float = 0.8  # confidence discount when only the quant lane runs

    def __post_init__(self) -> None:
        normalized = normalize_orchestrator_fields(
            {
                "quant_backtest_path": self.quant_backtest_path,
                "trading_agents_config": self.trading_agents_config,
                "quant_weight_cap": self.quant_weight_cap,
                "llm_weight_cap": self.llm_weight_cap,
                "llm_batch_days": self.llm_batch_days,
                "cache_dir": self.cache_dir,
                "llm_solo_penalty": self.llm_solo_penalty,
                "quant_solo_penalty": self.quant_solo_penalty,
            }
        )
        for key, value in normalized.items():
            setattr(self, key, value)
@@ -0,0 +1,33 @@
from orchestrator.contracts.config_loader import (
    normalize_orchestrator_fields,
    normalize_trading_agents_config,
)
from orchestrator.contracts.config_schema import (
    CONTRACT_VERSION,
    OrchestratorConfigSchema,
    build_orchestrator_schema,
    build_trading_agents_config,
)
from orchestrator.contracts.error_taxonomy import ReasonCode
from orchestrator.contracts.result_contract import (
    CombinedSignalFailure,
    FinalSignal,
    Signal,
    build_error_signal,
    signal_reason_code,
)

__all__ = [
    "CONTRACT_VERSION",
    "CombinedSignalFailure",
    "FinalSignal",
    "OrchestratorConfigSchema",
    "ReasonCode",
    "Signal",
    "build_error_signal",
    "build_orchestrator_schema",
    "build_trading_agents_config",
    "normalize_orchestrator_fields",
    "normalize_trading_agents_config",
    "signal_reason_code",
]
@@ -0,0 +1,18 @@
from __future__ import annotations

from typing import Any, Mapping, Optional

from orchestrator.contracts.config_schema import (
    build_orchestrator_schema,
    build_trading_agents_config,
)


def normalize_trading_agents_config(
    config: Optional[Mapping[str, Any]],
) -> dict[str, Any]:
    return dict(build_trading_agents_config(config))


def normalize_orchestrator_fields(raw: Mapping[str, Any]) -> dict[str, Any]:
    return build_orchestrator_schema(raw).to_runtime_fields()
@@ -0,0 +1,174 @@
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any, Mapping, Optional, TypedDict, cast

from tradingagents.default_config import get_default_config


CONTRACT_VERSION = "v1alpha1"


class TradingAgentsConfigPayload(TypedDict, total=False):
    project_dir: str
    results_dir: str
    data_cache_dir: str
    llm_provider: str
    deep_think_llm: str
    quick_think_llm: str
    backend_url: str
    google_thinking_level: Optional[str]
    openai_reasoning_effort: Optional[str]
    anthropic_effort: Optional[str]
    output_language: str
    portfolio_context: str
    peer_context: str
    peer_context_mode: str
    max_debate_rounds: int
    max_risk_discuss_rounds: int
    max_recur_limit: int
    analyst_node_timeout_secs: float
    data_vendors: dict[str, str]
    tool_vendors: dict[str, str]
    selected_analysts: list[str]
    llm_timeout: float
    llm_max_retries: int
    minimax_retry_attempts: int
    minimax_retry_base_delay: float
    timeout: float
    max_retries: int
    use_responses_api: bool


REQUIRED_TRADING_CONFIG_KEYS = (
    "project_dir",
    "results_dir",
    "data_cache_dir",
    "llm_provider",
    "deep_think_llm",
    "quick_think_llm",
)


def _validate_probability(name: str, value: Any) -> float:
    if not isinstance(value, (int, float)):
        raise TypeError(f"{name} must be a number")
    if not 0.0 <= float(value) <= 1.0:
        raise ValueError(f"{name} must be between 0.0 and 1.0")
    return float(value)


def _validate_positive_int(name: str, value: Any) -> int:
    if not isinstance(value, int):
        raise TypeError(f"{name} must be an int")
    if value <= 0:
        raise ValueError(f"{name} must be > 0")
    return value


def _validate_string_map(name: str, value: Any) -> dict[str, str]:
    if not isinstance(value, Mapping):
        raise TypeError(f"{name} must be a mapping")
    normalized = {}
    for key, item in value.items():
        if not isinstance(key, str) or not isinstance(item, str):
            raise TypeError(f"{name} keys and values must be strings")
        normalized[key] = item
    return normalized


def build_trading_agents_config(
    overrides: Optional[Mapping[str, Any]],
) -> TradingAgentsConfigPayload:
    merged: dict[str, Any] = get_default_config()

    if overrides:
        if not isinstance(overrides, Mapping):
            raise TypeError("trading_agents_config must be a mapping")
        for key, value in overrides.items():
            if (
                key in ("data_vendors", "tool_vendors")
                and value is not None
            ):
                merged[key] = _validate_string_map(key, value)
            elif key == "selected_analysts" and value is not None:
                if not isinstance(value, list) or any(
                    not isinstance(item, str) for item in value
                ):
                    raise TypeError("selected_analysts must be a list of strings")
                merged[key] = list(value)
            else:
                merged[key] = value

    for key in REQUIRED_TRADING_CONFIG_KEYS:
        value = merged.get(key)
        if not isinstance(value, str) or not value.strip():
            raise ValueError(f"trading_agents_config.{key} must be a non-empty string")

    merged["data_vendors"] = _validate_string_map("data_vendors", merged["data_vendors"])
    merged["tool_vendors"] = _validate_string_map("tool_vendors", merged["tool_vendors"])

    return cast(TradingAgentsConfigPayload, merged)


@dataclass(frozen=True)
class OrchestratorConfigSchema:
    quant_backtest_path: str = ""
    trading_agents_config: TradingAgentsConfigPayload = field(
        default_factory=lambda: build_trading_agents_config(None)
    )
    quant_weight_cap: float = 0.8
    llm_weight_cap: float = 0.9
    llm_batch_days: int = 7
    cache_dir: str = "orchestrator/cache"
    llm_solo_penalty: float = 0.7
    quant_solo_penalty: float = 0.8
    contract_version: str = CONTRACT_VERSION

    def to_runtime_fields(self) -> dict[str, Any]:
        return {
            "quant_backtest_path": self.quant_backtest_path,
            "trading_agents_config": dict(self.trading_agents_config),
            "quant_weight_cap": self.quant_weight_cap,
            "llm_weight_cap": self.llm_weight_cap,
            "llm_batch_days": self.llm_batch_days,
            "cache_dir": self.cache_dir,
            "llm_solo_penalty": self.llm_solo_penalty,
            "quant_solo_penalty": self.quant_solo_penalty,
        }


def build_orchestrator_schema(raw: Mapping[str, Any]) -> OrchestratorConfigSchema:
    if not isinstance(raw, Mapping):
        raise TypeError("orchestrator config must be a mapping")

    quant_backtest_path = raw.get("quant_backtest_path", "")
    if not isinstance(quant_backtest_path, str):
        raise TypeError("quant_backtest_path must be a string")

    cache_dir = raw.get("cache_dir", "orchestrator/cache")
    if not isinstance(cache_dir, str) or not cache_dir.strip():
        raise ValueError("cache_dir must be a non-empty string")

    return OrchestratorConfigSchema(
        quant_backtest_path=quant_backtest_path,
        trading_agents_config=build_trading_agents_config(
            cast(Optional[Mapping[str, Any]], raw.get("trading_agents_config"))
        ),
        quant_weight_cap=_validate_probability(
            "quant_weight_cap", raw.get("quant_weight_cap", 0.8)
        ),
        llm_weight_cap=_validate_probability(
            "llm_weight_cap", raw.get("llm_weight_cap", 0.9)
        ),
        llm_batch_days=_validate_positive_int(
            "llm_batch_days", raw.get("llm_batch_days", 7)
        ),
        cache_dir=cache_dir,
        llm_solo_penalty=_validate_probability(
            "llm_solo_penalty", raw.get("llm_solo_penalty", 0.7)
        ),
        quant_solo_penalty=_validate_probability(
            "quant_solo_penalty", raw.get("quant_solo_penalty", 0.8)
        ),
    )
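The schema's validator helpers are small enough to demonstrate standalone. The sketch below re-implements `_validate_probability` outside the module so its behavior can be exercised without the orchestrator package; the public name `validate_probability` is chosen here for illustration only.

```python
from typing import Any


def validate_probability(name: str, value: Any) -> float:
    """Standalone copy of the schema's probability check: accept any number
    in [0.0, 1.0], always returning a float; reject everything else loudly."""
    if not isinstance(value, (int, float)):
        raise TypeError(f"{name} must be a number")
    if not 0.0 <= float(value) <= 1.0:
        raise ValueError(f"{name} must be between 0.0 and 1.0")
    return float(value)
```

Ints such as `1` pass and are coerced to `1.0`, which is why the caps and penalties in `build_orchestrator_schema` can be given as either int or float literals.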
@@ -0,0 +1,24 @@
from enum import Enum


class ReasonCode(str, Enum):
    CONFIG_INVALID = "config_invalid"
    QUANT_NOT_CONFIGURED = "quant_not_configured"
    QUANT_INIT_FAILED = "quant_init_failed"
    QUANT_SIGNAL_FAILED = "quant_signal_failed"
    QUANT_NO_DATA = "quant_no_data"
    NON_TRADING_DAY = "non_trading_day"
    PARTIAL_DATA = "partial_data"
    STALE_DATA = "stale_data"
    LLM_INIT_FAILED = "llm_init_failed"
    LLM_SIGNAL_FAILED = "llm_signal_failed"
    LLM_UNKNOWN_RATING = "llm_unknown_rating"
    PROVIDER_MISMATCH = "provider_mismatch"
    PROVIDER_AUTH_FAILED = "provider_auth_failed"
    BOTH_SIGNALS_UNAVAILABLE = "both_signals_unavailable"


def reason_code_value(value: "ReasonCode | str") -> str:
    if isinstance(value, ReasonCode):
        return value.value
    return value
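Because `ReasonCode` mixes in `str`, its members compare equal to their raw string values and serialize to JSON without a custom encoder, while `reason_code_value` lets callers pass either form. A minimal sketch with two of the members above:

```python
from enum import Enum


class ReasonCode(str, Enum):
    """Truncated copy of the taxonomy above, for demonstration."""
    QUANT_NO_DATA = "quant_no_data"
    NON_TRADING_DAY = "non_trading_day"


def reason_code_value(value: "ReasonCode | str") -> str:
    # Normalize enum members to their plain-string value; pass strings through.
    if isinstance(value, ReasonCode):
        return value.value
    return value
```

This is why downstream code (e.g. `Signal.__post_init__` in the result contract) can store `reason_code` as a plain string regardless of which form the caller supplied.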
@@ -0,0 +1,116 @@
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Optional

from orchestrator.contracts.config_schema import CONTRACT_VERSION
from orchestrator.contracts.error_taxonomy import reason_code_value


def _normalize_metadata(
    metadata: Optional[dict[str, Any]],
    *,
    reason_code: Optional[str] = None,
) -> dict[str, Any]:
    normalized = dict(metadata or {})
    normalized.setdefault("contract_version", CONTRACT_VERSION)
    if reason_code:
        normalized.setdefault("reason_code", reason_code)
    return normalized


@dataclass
class Signal:
    ticker: str
    direction: int
    confidence: float
    source: str
    timestamp: datetime
    metadata: dict[str, Any] = field(default_factory=dict)
    contract_version: str = CONTRACT_VERSION
    reason_code: Optional[str] = None

    def __post_init__(self) -> None:
        if self.reason_code is not None:
            self.reason_code = reason_code_value(self.reason_code)
        self.metadata = _normalize_metadata(self.metadata, reason_code=self.reason_code)
        self.reason_code = self.reason_code or self.metadata.get("reason_code")
        self.metadata.setdefault("source", self.source)

    @property
    def degraded(self) -> bool:
        return self.reason_code is not None or bool(self.metadata.get("error"))


@dataclass
class FinalSignal:
    ticker: str
    direction: int
    confidence: float
    quant_signal: Optional[Signal]
    llm_signal: Optional[Signal]
    timestamp: datetime
    degrade_reason_codes: tuple[str, ...] = ()
    metadata: dict[str, Any] = field(default_factory=dict)
    contract_version: str = CONTRACT_VERSION

    def __post_init__(self) -> None:
        self.degrade_reason_codes = tuple(
            dict.fromkeys(code for code in self.degrade_reason_codes if code)
        )
        self.metadata = _normalize_metadata(self.metadata)
        if self.degrade_reason_codes:
            self.metadata.setdefault(
                "degrade_reason_codes",
                list(self.degrade_reason_codes),
            )

    @property
    def degraded(self) -> bool:
        return bool(self.degrade_reason_codes)


def build_error_signal(
    *,
    ticker: str,
    source: str,
    reason_code: str,
    message: str,
    metadata: Optional[dict[str, Any]] = None,
    timestamp: Optional[datetime] = None,
) -> Signal:
    payload = dict(metadata or {})
    payload["error"] = message
    return Signal(
        ticker=ticker,
        direction=0,
        confidence=0.0,
        source=source,
        timestamp=timestamp or datetime.now(timezone.utc),
        metadata=payload,
        reason_code=reason_code,
    )


def signal_reason_code(signal: Optional[Signal]) -> Optional[str]:
    if signal is None:
        return None
    return signal.reason_code or signal.metadata.get("reason_code")


class CombinedSignalFailure(ValueError):
    """Structured failure for cases where no merged signal can be produced."""

    def __init__(
        self,
        message: str,
        *,
        reason_codes: tuple[str, ...] = (),
        source_diagnostics: Optional[dict[str, Any]] = None,
        data_quality: Optional[dict[str, Any]] = None,
    ) -> None:
        super().__init__(message)
        self.reason_codes = tuple(reason_codes)
        self.source_diagnostics = dict(source_diagnostics or {})
        self.data_quality = dict(data_quality) if data_quality is not None else None
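`FinalSignal.__post_init__` deduplicates `degrade_reason_codes` with `dict.fromkeys`, which since Python 3.7 preserves insertion order, so the first occurrence of each code wins and empty codes are dropped. A standalone sketch of just that expression (the function name is illustrative):

```python
def dedupe_reason_codes(codes: tuple) -> tuple:
    """Mirror FinalSignal.__post_init__: drop falsy entries and duplicates
    while keeping first-seen order (dict keys preserve insertion order)."""
    return tuple(dict.fromkeys(code for code in codes if code))
```

Compared with `tuple(set(...))`, this keeps the order in which degradations were recorded, which matters when the first reason code is treated as the primary cause.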
@@ -0,0 +1,101 @@
{
  "a_share": {
    "2024": [
      "2024-01-01",
      "2024-02-10",
      "2024-02-11",
      "2024-02-12",
      "2024-02-13",
      "2024-02-14",
      "2024-02-15",
      "2024-02-16",
      "2024-02-17",
      "2024-04-04",
      "2024-04-05",
      "2024-04-06",
      "2024-05-01",
      "2024-05-02",
      "2024-05-03",
      "2024-05-04",
      "2024-05-05",
      "2024-06-08",
      "2024-06-09",
      "2024-06-10",
      "2024-09-15",
      "2024-09-16",
      "2024-09-17",
      "2024-10-01",
      "2024-10-02",
      "2024-10-03",
      "2024-10-04",
      "2024-10-05",
      "2024-10-06",
      "2024-10-07"
    ],
    "2025": [
      "2025-01-01",
      "2025-01-28",
      "2025-01-29",
      "2025-01-30",
      "2025-01-31",
      "2025-02-01",
      "2025-02-02",
      "2025-02-03",
      "2025-02-04",
      "2025-04-04",
      "2025-04-05",
      "2025-04-06",
      "2025-05-01",
      "2025-05-02",
      "2025-05-03",
      "2025-05-04",
      "2025-05-05",
      "2025-05-31",
      "2025-06-01",
      "2025-06-02",
      "2025-10-01",
      "2025-10-02",
      "2025-10-03",
      "2025-10-04",
      "2025-10-05",
      "2025-10-06",
      "2025-10-07",
      "2025-10-08"
    ],
    "2026": [
      "2026-01-01",
      "2026-01-02",
      "2026-01-03",
      "2026-02-15",
      "2026-02-16",
      "2026-02-17",
      "2026-02-18",
      "2026-02-19",
      "2026-02-20",
      "2026-02-21",
      "2026-02-22",
      "2026-02-23",
      "2026-04-04",
      "2026-04-05",
      "2026-04-06",
      "2026-05-01",
      "2026-05-02",
      "2026-05-03",
      "2026-05-04",
      "2026-05-05",
      "2026-06-19",
      "2026-06-20",
      "2026-06-21",
      "2026-09-25",
      "2026-09-26",
      "2026-09-27",
      "2026-10-01",
      "2026-10-02",
      "2026-10-03",
      "2026-10-04",
      "2026-10-05",
      "2026-10-06",
      "2026-10-07"
    ]
  }
}
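A natural consumer of a calendar file shaped like the one above is a trading-day check that combines the weekend rule from the backtest loop with the per-year holiday lists. The sketch below inlines a truncated stand-in for the JSON; `is_trading_day` is a hypothetical helper, not a function defined in this diff.

```python
import json
from datetime import datetime

# Truncated inline stand-in for the holiday calendar file above.
HOLIDAYS_JSON = '{"a_share": {"2024": ["2024-01-01", "2024-10-01"]}}'


def is_trading_day(date_str: str, market: str = "a_share") -> bool:
    """A date trades if it is a weekday and not listed as a market holiday."""
    calendar = json.loads(HOLIDAYS_JSON)[market]
    year = date_str[:4]
    if date_str in calendar.get(year, []):
        return False  # market holiday
    return datetime.strptime(date_str, "%Y-%m-%d").weekday() < 5
```

Keying the lists by year, as the file does, keeps the membership check small even as more years are appended.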
@@ -0,0 +1,41 @@
"""
Example: Run orchestrator backtest for CATL (300750.SZ) over 2023.

Usage:
    cd /path/to/TradingAgents
    QUANT_BACKTEST_PATH=/path/to/quant_backtest python orchestrator/examples/run_backtest.py
"""
import json
import logging
import os
import sys

# Add repo root to path
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))

from orchestrator.config import OrchestratorConfig
from orchestrator.orchestrator import TradingOrchestrator
from orchestrator.backtest_mode import BacktestMode

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(name)s: %(message)s")

config = OrchestratorConfig(
    quant_backtest_path=os.environ.get("QUANT_BACKTEST_PATH", ""),
    cache_dir="orchestrator/cache",
)

orchestrator = TradingOrchestrator(config)
backtest = BacktestMode(orchestrator)

result = backtest.run(
    tickers=["300750.SZ"],
    start_date="2023-01-01",
    end_date="2023-12-31",
)

print("\n=== Backtest Summary ===")
print(json.dumps(result.summary, indent=2, ensure_ascii=False))
print(f"\nTotal records: {len(result.records)}")
if result.records:
    print(f"First record: {result.records[0]}")
    print(f"Last record: {result.records[-1]}")
@@ -0,0 +1,41 @@
"""
Example: Run orchestrator live mode for a list of tickers.

Usage:
    cd /path/to/TradingAgents
    QUANT_BACKTEST_PATH=/path/to/quant_backtest python orchestrator/examples/run_live.py
"""
import asyncio
import json
import logging
import os
import sys
from datetime import datetime, timezone

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))

from orchestrator.config import OrchestratorConfig
from orchestrator.orchestrator import TradingOrchestrator
from orchestrator.live_mode import LiveMode

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(name)s: %(message)s")

TICKERS = ["300750.SZ", "603259.SS"]

config = OrchestratorConfig(
    quant_backtest_path=os.environ.get("QUANT_BACKTEST_PATH", ""),
    cache_dir="orchestrator/cache",
)

orchestrator = TradingOrchestrator(config)
live = LiveMode(orchestrator)


async def main():
    today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    print(f"\n=== Live Signals for {today} ===")
    results = await live.run_once(TICKERS, date=today)
    print(json.dumps(results, indent=2, ensure_ascii=False))


asyncio.run(main())
@@ -0,0 +1,150 @@
#!/usr/bin/env python3
"""
Orchestrator configuration validation examples.

Demonstrates provider mismatch detection and timeout validation.
"""

import logging
import sys
from pathlib import Path

# Add parent directories to path
repo_root = Path(__file__).parent.parent.parent
sys.path.insert(0, str(repo_root))

from orchestrator.config import OrchestratorConfig
from orchestrator.llm_runner import LLMRunner

logging.basicConfig(level=logging.WARNING, format='%(levelname)s: %(message)s')


def example_1_provider_mismatch():
    """Example 1: Provider mismatch detection."""
    print("=" * 60)
    print("Example 1: Provider Mismatch Detection")
    print("=" * 60)

    # Invalid: Google provider with OpenAI URL
    cfg = OrchestratorConfig(
        cache_dir="/tmp/orchestrator_validation_example",
        trading_agents_config={
            "llm_provider": "google",
            "backend_url": "https://api.openai.com/v1",
        },
    )

    runner = LLMRunner(cfg)
    signal = runner.get_signal("AAPL", "2024-01-02")

    print("\nConfiguration:")
    print("  Provider: google")
    print("  Base URL: https://api.openai.com/v1")
    print("\nResult:")
    print(f"  Degraded: {signal.degraded}")
    print(f"  Reason: {signal.reason_code}")
    print(f"  Message: {signal.metadata.get('error', 'N/A')}")
    print(f"  Expected patterns: {signal.metadata.get('data_quality', {}).get('expected_patterns', [])}")
    print()


def example_2_valid_configuration():
    """Example 2: Valid configuration (no mismatch)."""
    print("=" * 60)
    print("Example 2: Valid Configuration")
    print("=" * 60)

    # Valid: Anthropic provider with MiniMax Anthropic-compatible URL
    cfg = OrchestratorConfig(
        cache_dir="/tmp/orchestrator_validation_example",
        trading_agents_config={
            "llm_provider": "anthropic",
            "backend_url": "https://api.minimaxi.com/anthropic",
            "selected_analysts": ["market"],
            "analyst_node_timeout_secs": 75.0,
        },
    )

    runner = LLMRunner(cfg)
    mismatch = runner._detect_provider_mismatch()

    print("\nConfiguration:")
    print("  Provider: anthropic")
    print("  Base URL: https://api.minimaxi.com/anthropic")
    print("  Selected analysts: ['market']")
    print("  Analyst timeout: 75.0s")
    print("\nResult:")
    print(f"  Mismatch detected: {mismatch is not None}")
    if mismatch:
        print(f"  Details: {mismatch}")
    else:
        print("  Status: Configuration is valid ✓")
    print()


def example_3_timeout_warning():
    """Example 3: Timeout configuration warning."""
    print("=" * 60)
    print("Example 3: Timeout Configuration Warning")
    print("=" * 60)

    # Warning: 4 analysts with insufficient timeout
    print("\nConfiguration:")
    print("  Provider: anthropic")
    print("  Base URL: https://api.minimaxi.com/anthropic")
    print("  Selected analysts: ['market', 'social', 'news', 'fundamentals']")
    print("  Analyst timeout: 75.0s (recommended: 120.0s)")
    print("\nExpected warning:")

    cfg = OrchestratorConfig(
        cache_dir="/tmp/orchestrator_validation_example",
        trading_agents_config={
            "llm_provider": "anthropic",
            "backend_url": "https://api.minimaxi.com/anthropic",
            "selected_analysts": ["market", "social", "news", "fundamentals"],
            "analyst_node_timeout_secs": 75.0,
        },
    )

    # Warning will be logged during initialization
    runner = LLMRunner(cfg)
    print()


def example_4_multiple_mismatches():
    """Example 4: Multiple provider mismatch scenarios."""
    print("=" * 60)
    print("Example 4: Multiple Provider Mismatch Scenarios")
    print("=" * 60)

    scenarios = [
        ("xai", "https://api.minimaxi.com/anthropic"),
        ("ollama", "https://api.openai.com/v1"),
        ("openrouter", "https://api.anthropic.com/v1"),
    ]

    for provider, url in scenarios:
        cfg = OrchestratorConfig(
            cache_dir="/tmp/orchestrator_validation_example",
            trading_agents_config={
                "llm_provider": provider,
                "backend_url": url,
            },
        )

        runner = LLMRunner(cfg)
        signal = runner.get_signal("AAPL", "2024-01-02")

        print(f"\n  {provider} + {url}")
        print(f"  → Degraded: {signal.degraded}, Reason: {signal.reason_code}")


if __name__ == "__main__":
    example_1_provider_mismatch()
    example_2_valid_configuration()
    example_3_timeout_warning()
    example_4_multiple_mismatches()

    print("=" * 60)
    print("All examples completed")
    print("=" * 60)
@@ -0,0 +1,112 @@
import asyncio
import logging
from datetime import datetime, timezone
from typing import List, Optional

from orchestrator.contracts.config_schema import CONTRACT_VERSION
from orchestrator.contracts.error_taxonomy import ReasonCode

logger = logging.getLogger(__name__)


class LiveMode:
    """
    Triggers signal computation for a list of tickers and broadcasts
    results via a callback (e.g., WebSocket send).
    """

    def __init__(self, orchestrator):
        self._orchestrator = orchestrator

    @staticmethod
    def _serialize_result(signal) -> dict:
        return {
            "direction": signal.direction,
            "confidence": signal.confidence,
            "quant_direction": signal.quant_signal.direction if signal.quant_signal else None,
            "llm_direction": signal.llm_signal.direction if signal.llm_signal else None,
            "timestamp": signal.timestamp.isoformat(),
        }

    @staticmethod
    def _serialize_degradation(signal, data_quality: Optional[dict]) -> dict:
        metadata = getattr(signal, "metadata", {}) or {}
        return {
            "degraded": bool(getattr(signal, "degrade_reason_codes", ())) or bool(data_quality),
            "reason_codes": list(getattr(signal, "degrade_reason_codes", ()) or ()),
            "source_diagnostics": metadata.get("source_diagnostics") or {},
        }

    @staticmethod
    def _contract_version(signal) -> str:
        metadata = getattr(signal, "metadata", {}) or {}
        return getattr(signal, "contract_version", None) or metadata.get("contract_version") or CONTRACT_VERSION

    def _serialize_signal(self, *, ticker: str, date: str, signal) -> dict:
        metadata = getattr(signal, "metadata", {}) or {}
        data_quality = metadata.get("data_quality")
        research = metadata.get("research")
        degradation = self._serialize_degradation(signal, data_quality)
        return {
            "contract_version": self._contract_version(signal),
            "ticker": ticker,
            "date": date,
            "status": "degraded_success" if degradation["degraded"] else "completed",
            "result": self._serialize_result(signal),
            "error": None,
            "degradation": degradation,
            "data_quality": data_quality,
            "research": research,
        }

    @staticmethod
    def _serialize_error(*, ticker: str, date: str, exc: Exception) -> dict:
        reason_codes = list(getattr(exc, "reason_codes", ()) or ())
        if not reason_codes and isinstance(exc, ValueError) and "both quant and llm signals are None" in str(exc):
            reason_codes.append(ReasonCode.BOTH_SIGNALS_UNAVAILABLE.value)
        source_diagnostics = dict(getattr(exc, "source_diagnostics", {}) or {})
        data_quality = getattr(exc, "data_quality", None)
        research = None
        for diagnostic in source_diagnostics.values():
            if isinstance(diagnostic, dict) and diagnostic.get("research") is not None:
                research = diagnostic["research"]
                break
        return {
            "contract_version": CONTRACT_VERSION,
            "ticker": ticker,
            "date": date,
            "status": "failed",
            "result": None,
            "error": {
                "code": "live_signal_failed",
                "message": str(exc),
                "retryable": False,
            },
            "degradation": {
                "degraded": bool(reason_codes),
                "reason_codes": reason_codes,
                "source_diagnostics": source_diagnostics,
            },
            "data_quality": data_quality,
            "research": research,
        }

    async def run_once(self, tickers: List[str], date: Optional[str] = None) -> List[dict]:
        """
        Compute combined signals for all tickers on the given date (default: today).
        Returns list of signal dicts.
        """
        if date is None:
            date = datetime.now(timezone.utc).strftime("%Y-%m-%d")

        results = []
        for ticker in tickers:
            try:
                sig = await asyncio.to_thread(
                    self._orchestrator.get_combined_signal, ticker, date
                )
                results.append(self._serialize_signal(ticker=ticker, date=date, signal=sig))
            except Exception as e:
                logger.error("LiveMode: failed for %s %s: %s", ticker, date, e)
                results.append(self._serialize_error(ticker=ticker, date=date, exc=e))
        return results
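The core concurrency pattern in `LiveMode.run_once` is `asyncio.to_thread`: each synchronous `get_combined_signal` call runs on a worker thread so the event loop (and any WebSocket broadcasting alongside it) stays responsive. A self-contained sketch with a stand-in for the blocking call:

```python
import asyncio
import time


def blocking_signal(ticker: str) -> dict:
    """Stand-in for the synchronous get_combined_signal call."""
    time.sleep(0.01)  # simulate blocking I/O (disk cache, HTTP, etc.)
    return {"ticker": ticker, "direction": 0}


async def run_once(tickers: list) -> list:
    # Same shape as LiveMode.run_once: offload each blocking call with
    # asyncio.to_thread, collecting results in order.
    results = []
    for ticker in tickers:
        results.append(await asyncio.to_thread(blocking_signal, ticker))
    return results
```

Awaiting each call sequentially, as the original does, keeps results in ticker order; `asyncio.gather` over the `to_thread` coroutines would be the change to make if per-ticker calls should overlap.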
@@ -0,0 +1,237 @@
import json
import logging
import os
from datetime import datetime, timezone

from orchestrator.config import OrchestratorConfig
from orchestrator.contracts.error_taxonomy import ReasonCode
from orchestrator.contracts.result_contract import Signal, build_error_signal
from tradingagents.agents.utils.agent_states import extract_research_provenance
from tradingagents.llm_clients.factory import validate_provider_base_url

logger = logging.getLogger(__name__)

# Recommended timeout thresholds by analyst count
_RECOMMENDED_TIMEOUTS = {
    1: {"analyst": 75.0, "research": 30.0},
    2: {"analyst": 90.0, "research": 45.0},
    3: {"analyst": 105.0, "research": 60.0},
    4: {"analyst": 120.0, "research": 75.0},
}


def _build_data_quality(state: str, **details):
    payload = {"state": state}
    payload.update({key: value for key, value in details.items() if value is not None})
    return payload


def _extract_research_metadata(final_state: dict | None) -> dict | None:
    if not isinstance(final_state, dict):
        return None
    debate_state = final_state.get("investment_debate_state") or {}
    return extract_research_provenance(debate_state)


def _looks_like_provider_auth_failure(exc: Exception) -> bool:
    text = str(exc).lower()
    markers = (
        "authentication_error",
        "login fail",
        "please carry the api secret key",
        "invalid api key",
        "unauthorized",
        "error code: 401",
    )
    return any(marker in text for marker in markers)


class LLMRunner:
    def __init__(self, config: OrchestratorConfig):
        self._config = config
        self._graph = None  # Lazy-initialized on first get_signal() call (requires API key)
        self.cache_dir = config.cache_dir
        os.makedirs(self.cache_dir, exist_ok=True)
        self._validate_timeout_config()

    def _validate_timeout_config(self):
        """Warn if timeout configuration may be insufficient for selected analysts."""
        trading_cfg = self._config.trading_agents_config or {}
        selected_analysts = trading_cfg.get("selected_analysts", ["market", "social", "news", "fundamentals"])
        analyst_count = len(selected_analysts) if selected_analysts else 4

        analyst_timeout = float(trading_cfg.get("analyst_node_timeout_secs", 75.0))
        research_timeout = float(trading_cfg.get("research_node_timeout_secs", 30.0))

        # Get recommended thresholds (use max if analyst_count > 4)
        recommended = _RECOMMENDED_TIMEOUTS.get(analyst_count, _RECOMMENDED_TIMEOUTS[4])

        warnings = []
        if analyst_timeout < recommended["analyst"]:
            warnings.append(
                f"analyst_node_timeout_secs={analyst_timeout:.1f}s may be insufficient "
                f"for {analyst_count} analyst(s) (recommended: {recommended['analyst']:.1f}s)"
            )
        if research_timeout < recommended["research"]:
            warnings.append(
                f"research_node_timeout_secs={research_timeout:.1f}s may be insufficient "
                f"for {analyst_count} analyst(s) (recommended: {recommended['research']:.1f}s)"
            )

        for warning in warnings:
            logger.warning("LLMRunner: %s", warning)

    def _get_graph(self):
        """Lazy-initialize TradingAgentsGraph (heavy, requires API key at init time)."""
        if self._graph is None:
            from tradingagents.graph.trading_graph import TradingAgentsGraph
            trading_cfg = self._config.trading_agents_config if self._config.trading_agents_config else None
            graph_kwargs = {"config": trading_cfg}
            if trading_cfg and "selected_analysts" in trading_cfg:
                graph_kwargs["selected_analysts"] = trading_cfg["selected_analysts"]
            self._graph = TradingAgentsGraph(**graph_kwargs)
        return self._graph

    def _detect_provider_mismatch(self):
        """Validate provider × base_url compatibility using factory's validation.

        Uses the original provider name (not canonical) for validation since
        ollama/openrouter have different URL patterns than openai.
        """
        trading_cfg = self._config.trading_agents_config or {}
        provider = trading_cfg.get("llm_provider", "")
        base_url = trading_cfg.get("backend_url", "")

        if not provider or not base_url:
            return None

        return validate_provider_base_url(provider, base_url)

    def get_signal(self, ticker: str, date: str) -> Signal:
        """Return the LLM signal for the given ticker on the given date, with caching."""
        # Validate configuration first (lightweight, prevents returning stale cache on config errors)
        mismatch = self._detect_provider_mismatch()
        if mismatch is not None:
            return build_error_signal(
                ticker=ticker,
                source="llm",
                reason_code=ReasonCode.PROVIDER_MISMATCH.value,
                message=(
                    f"provider '{mismatch['provider']}' does not match backend_url "
                    f"'{mismatch['backend_url']}'"
                ),
                metadata={
                    "data_quality": _build_data_quality("provider_mismatch", **mismatch),
                },
            )

        # Check cache after validation
        safe_ticker = ticker.replace("/", "_")
        cache_path = os.path.join(self.cache_dir, f"{safe_ticker}_{date}.json")

        try:
            with open(cache_path, "r", encoding="utf-8") as f:
                data = json.load(f)
            logger.info("LLMRunner: cache hit for %s %s", ticker, date)
            return Signal(
                ticker=ticker,
                direction=data["direction"],
                confidence=data["confidence"],
                source="llm",
                timestamp=datetime.fromisoformat(data["timestamp"]),
                metadata=data,
            )
        except FileNotFoundError:
            pass  # Continue to LLM call
|
||||
try:
|
||||
_final_state, processed_signal = self._get_graph().propagate(ticker, date)
|
||||
rating = processed_signal if isinstance(processed_signal, str) else str(processed_signal)
|
||||
direction, confidence = self._map_rating(rating)
|
||||
now = datetime.now(timezone.utc)
|
||||
research_metadata = _extract_research_metadata(_final_state)
|
||||
if research_metadata and research_metadata.get("research_status") != "full":
|
||||
data_quality = _build_data_quality(
|
||||
"research_degraded",
|
||||
research_status=research_metadata.get("research_status"),
|
||||
research_mode=research_metadata.get("research_mode"),
|
||||
degraded_reason=research_metadata.get("degraded_reason"),
|
||||
timed_out_nodes=research_metadata.get("timed_out_nodes"),
|
||||
)
|
||||
else:
|
||||
data_quality = _build_data_quality("ok")
|
||||
|
||||
cache_data = {
|
||||
"rating": rating,
|
||||
"direction": direction,
|
||||
"confidence": confidence,
|
||||
"timestamp": now.isoformat(),
|
||||
"ticker": ticker,
|
||||
"date": date,
|
||||
"decision_structured": (
|
||||
(_final_state or {}).get("final_trade_decision_structured")
|
||||
if isinstance(_final_state, dict)
|
||||
else None
|
||||
),
|
||||
"data_quality": data_quality,
|
||||
"research": research_metadata,
|
||||
"sample_quality": (
|
||||
"degraded_research"
|
||||
if research_metadata and research_metadata.get("research_status") != "full"
|
||||
else "full_research"
|
||||
),
|
||||
}
|
||||
with open(cache_path, "w", encoding="utf-8") as f:
|
||||
json.dump(cache_data, f, ensure_ascii=False, indent=2)
|
||||
|
||||
return Signal(
|
||||
ticker=ticker,
|
||||
direction=direction,
|
||||
confidence=confidence,
|
||||
source="llm",
|
||||
timestamp=now,
|
||||
metadata=cache_data,
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error("LLMRunner: propagate failed for %s %s: %s", ticker, date, e)
|
||||
reason_code = ReasonCode.LLM_SIGNAL_FAILED.value
|
||||
if "Unsupported LLM provider" in str(e):
|
||||
reason_code = ReasonCode.PROVIDER_MISMATCH.value
|
||||
elif _looks_like_provider_auth_failure(e):
|
||||
reason_code = ReasonCode.PROVIDER_AUTH_FAILED.value
|
||||
|
||||
# Map reason code to data quality state
|
||||
state_map = {
|
||||
ReasonCode.PROVIDER_MISMATCH.value: "provider_mismatch",
|
||||
ReasonCode.PROVIDER_AUTH_FAILED.value: "provider_auth_failed",
|
||||
}
|
||||
state = state_map.get(reason_code, "unknown")
|
||||
|
||||
return build_error_signal(
|
||||
ticker=ticker,
|
||||
source="llm",
|
||||
reason_code=reason_code,
|
||||
message=str(e),
|
||||
metadata={
|
||||
"data_quality": _build_data_quality(
|
||||
state,
|
||||
provider=(self._config.trading_agents_config or {}).get("llm_provider"),
|
||||
backend_url=(self._config.trading_agents_config or {}).get("backend_url"),
|
||||
),
|
||||
},
|
||||
)
|
||||
|
||||
def _map_rating(self, rating: str) -> tuple[int, float]:
|
||||
"""将 5 级评级映射为 (direction, confidence)。"""
|
||||
mapping = {
|
||||
"BUY": (1, 0.9),
|
||||
"OVERWEIGHT": (1, 0.6),
|
||||
"HOLD": (0, 0.5),
|
||||
"UNDERWEIGHT": (-1, 0.6),
|
||||
"SELL": (-1, 0.9),
|
||||
}
|
||||
result = mapping.get(rating.upper() if rating else "", None)
|
||||
if result is None:
|
||||
logger.warning("LLMRunner: unknown rating %r, falling back to HOLD", rating)
|
||||
return (0, 0.5)
|
||||
return result
|
||||
|
|
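The 5-level rating map above can be exercised in isolation. A standalone sketch (the `map_rating` helper is hypothetical; its table and HOLD fallback mirror `LLMRunner._map_rating`):

```python
def map_rating(rating: str) -> tuple[int, float]:
    # Standalone mirror of LLMRunner._map_rating: map a 5-level rating to
    # (direction, confidence); unknown or empty ratings fall back to HOLD.
    mapping = {
        "BUY": (1, 0.9),
        "OVERWEIGHT": (1, 0.6),
        "HOLD": (0, 0.5),
        "UNDERWEIGHT": (-1, 0.6),
        "SELL": (-1, 0.9),
    }
    return mapping.get(rating.upper() if rating else "", (0, 0.5))

print(map_rating("buy"))      # (1, 0.9)
print(map_rating("unknown"))  # (0, 0.5)
```

Lower-case and empty inputs are handled by the `upper()` normalization and the falsy guard, matching the runner's behavior.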
@ -0,0 +1,149 @@
from __future__ import annotations

import json
import os
from datetime import date, timedelta
from pathlib import Path

_A_SHARE_SUFFIXES = {"SH", "SS", "SZ"}
_DEFAULT_MARKET_HOLIDAYS_PATH = Path(__file__).with_name("data") / "market_holidays.json"


def is_non_trading_day(ticker: str, day: date, *, data_path: Path | None = None) -> bool:
    """Return whether the requested date is a known non-trading day for the ticker's market."""
    if day.weekday() >= 5:
        return True
    market = market_for_ticker(ticker)
    if market == "a_share":
        return day in get_market_holidays(market, day.year, data_path=data_path)
    if market == "nyse":
        return _is_nyse_holiday(day)
    return False


def market_for_ticker(ticker: str) -> str:
    suffix = ticker.rsplit(".", 1)[-1].upper() if "." in ticker else ""
    if suffix in _A_SHARE_SUFFIXES:
        return "a_share"
    return "nyse"


def get_market_holidays(market: str, year: int, *, data_path: Path | None = None) -> set[date]:
    holidays_by_market = load_market_holidays(data_path=data_path)
    market_data = holidays_by_market.get(market, {})
    values = market_data.get(str(year), [])
    return {date.fromisoformat(raw) for raw in values}


def load_market_holidays(*, data_path: Path | None = None) -> dict[str, dict[str, list[str]]]:
    path = _resolve_market_holidays_path(data_path)
    if not path.exists():
        return {}
    payload = json.loads(path.read_text())
    return {
        str(market): {str(year): list(days) for year, days in years.items()}
        for market, years in payload.items()
    }


def update_market_holidays(
    *,
    market: str,
    year: int,
    holiday_dates: list[date | str],
    data_path: Path | None = None,
) -> Path:
    path = _resolve_market_holidays_path(data_path)
    payload = load_market_holidays(data_path=path)
    payload.setdefault(market, {})
    normalized_days = sorted(
        {
            item.isoformat() if isinstance(item, date) else date.fromisoformat(item).isoformat()
            for item in holiday_dates
        }
    )
    payload[market][str(year)] = normalized_days
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(payload, ensure_ascii=False, indent=2, sort_keys=True))
    return path


def _resolve_market_holidays_path(data_path: Path | None = None) -> Path:
    if data_path is not None:
        return data_path
    env_path = os.environ.get("TRADINGAGENTS_MARKET_HOLIDAYS_PATH")
    if env_path:
        return Path(env_path)
    return _DEFAULT_MARKET_HOLIDAYS_PATH


def _is_nyse_holiday(day: date) -> bool:
    observed_new_year = _observed_fixed_holiday(day.year, 1, 1)
    observed_juneteenth = _observed_fixed_holiday(day.year, 6, 19)
    observed_independence_day = _observed_fixed_holiday(day.year, 7, 4)
    observed_christmas = _observed_fixed_holiday(day.year, 12, 25)

    holidays = {
        observed_new_year,
        _nth_weekday(day.year, 1, 0, 3),  # Martin Luther King, Jr. Day
        _nth_weekday(day.year, 2, 0, 3),  # Washington's Birthday
        _easter(day.year) - timedelta(days=2),  # Good Friday
        _last_weekday(day.year, 5, 0),  # Memorial Day
        observed_independence_day,
        _nth_weekday(day.year, 9, 0, 1),  # Labor Day
        _nth_weekday(day.year, 11, 3, 4),  # Thanksgiving Day
        observed_christmas,
    }
    if day.year >= 2022:
        holidays.add(observed_juneteenth)

    if day.month == 12 and day.day == 31:
        next_new_year = _observed_fixed_holiday(day.year + 1, 1, 1)
        if next_new_year.year == day.year:
            holidays.add(next_new_year)

    return day in holidays


def _observed_fixed_holiday(year: int, month: int, day: int) -> date:
    holiday = date(year, month, day)
    if holiday.weekday() == 5:
        return holiday - timedelta(days=1)
    if holiday.weekday() == 6:
        return holiday + timedelta(days=1)
    return holiday


def _nth_weekday(year: int, month: int, weekday: int, occurrence: int) -> date:
    first = date(year, month, 1)
    delta = (weekday - first.weekday()) % 7
    return first + timedelta(days=delta + 7 * (occurrence - 1))


def _last_weekday(year: int, month: int, weekday: int) -> date:
    if month == 12:
        cursor = date(year + 1, 1, 1) - timedelta(days=1)
    else:
        cursor = date(year, month + 1, 1) - timedelta(days=1)
    while cursor.weekday() != weekday:
        cursor -= timedelta(days=1)
    return cursor


def _easter(year: int) -> date:
    """Anonymous Gregorian algorithm."""
    a = year % 19
    b = year // 100
    c = year % 100
    d = b // 4
    e = b % 4
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i = c // 4
    k = c % 4
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month = (h + l - 7 * m + 114) // 31
    day = ((h + l - 7 * m + 114) % 31) + 1
    return date(year, month, day)
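The observed-holiday and nth-weekday rules in this calendar module are easy to spot-check against known NYSE dates. A self-contained sketch (the `observed` and `nth_weekday` names are shortened stand-ins; their logic mirrors `_observed_fixed_holiday` and `_nth_weekday`):

```python
from datetime import date, timedelta

def observed(year: int, month: int, day: int) -> date:
    # Mirrors _observed_fixed_holiday: Saturday holidays are observed the
    # Friday before, Sunday holidays the Monday after.
    holiday = date(year, month, day)
    if holiday.weekday() == 5:  # Saturday
        return holiday - timedelta(days=1)
    if holiday.weekday() == 6:  # Sunday
        return holiday + timedelta(days=1)
    return holiday

def nth_weekday(year: int, month: int, weekday: int, occurrence: int) -> date:
    # Mirrors _nth_weekday: weekday 0 = Monday, occurrence 1 = first.
    first = date(year, month, 1)
    delta = (weekday - first.weekday()) % 7
    return first + timedelta(days=delta + 7 * (occurrence - 1))

# Christmas 2022 fell on a Sunday, so it was observed Monday 2022-12-26:
print(observed(2022, 12, 25))  # 2022-12-26
# Thanksgiving is the 4th Thursday of November:
print(nth_weekday(2024, 11, 3, 4))  # 2024-11-28
```

These two building blocks, together with the last-weekday and Easter helpers, cover every rule in `_is_nyse_holiday`.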
@ -0,0 +1,162 @@
import logging
from typing import Optional

from orchestrator.config import OrchestratorConfig
from orchestrator.contracts.error_taxonomy import ReasonCode
from orchestrator.contracts.result_contract import CombinedSignalFailure, FinalSignal, Signal, signal_reason_code
from orchestrator.signals import SignalMerger
from orchestrator.quant_runner import QuantRunner
from orchestrator.llm_runner import LLMRunner

logger = logging.getLogger(__name__)


class TradingOrchestrator:
    def __init__(self, config: OrchestratorConfig):
        self._config = config
        self._merger = SignalMerger(config)
        self._quant: Optional[QuantRunner] = None
        self._llm: Optional[LLMRunner] = None
        self._quant_unavailable_reason: Optional[str] = None
        self._llm_unavailable_reason: Optional[str] = None

        # Initialize runners (quant requires quant_backtest_path)
        if config.quant_backtest_path:
            try:
                self._quant = QuantRunner(config)
            except Exception as e:
                logger.warning("TradingOrchestrator: QuantRunner init failed: %s", e)
                self._quant_unavailable_reason = ReasonCode.QUANT_INIT_FAILED.value
        else:
            self._quant_unavailable_reason = ReasonCode.QUANT_NOT_CONFIGURED.value

        try:
            self._llm = LLMRunner(config)
        except Exception as e:
            logger.warning("TradingOrchestrator: LLMRunner init failed: %s", e)
            self._llm_unavailable_reason = ReasonCode.LLM_INIT_FAILED.value

    def get_combined_signal(self, ticker: str, date: str) -> FinalSignal:
        """
        Get the merged signal for a ticker on a date.

        Degradation:
        - quant fails (error signal): use llm only, with llm_solo_penalty
        - llm fails (error signal): use quant only, with quant_solo_penalty
        - both fail: raises CombinedSignalFailure
        """
        quant_sig: Optional[Signal] = None
        llm_sig: Optional[Signal] = None
        degradation_reasons: list[str] = []
        source_diagnostics: dict[str, dict] = {}

        if self._quant is None and self._quant_unavailable_reason:
            degradation_reasons.append(self._quant_unavailable_reason)
            source_diagnostics["quant"] = {"reason_code": self._quant_unavailable_reason}
        if self._llm is None and self._llm_unavailable_reason:
            degradation_reasons.append(self._llm_unavailable_reason)
            source_diagnostics["llm"] = {"reason_code": self._llm_unavailable_reason}

        # Get the quant signal
        if self._quant is not None:
            try:
                quant_sig = self._quant.get_signal(ticker, date)
                if quant_sig.degraded:
                    reason_code = signal_reason_code(quant_sig) or ReasonCode.QUANT_SIGNAL_FAILED.value
                    degradation_reasons.append(reason_code)
                    source_diagnostics["quant"] = self._build_source_diagnostic(quant_sig, reason_code)
                    logger.warning("TradingOrchestrator: quant signal degraded for %s %s", ticker, date)
                    quant_sig = None
            except Exception as e:
                logger.error("TradingOrchestrator: quant get_signal failed: %s", e)
                degradation_reasons.append(ReasonCode.QUANT_SIGNAL_FAILED.value)
                source_diagnostics["quant"] = {"reason_code": ReasonCode.QUANT_SIGNAL_FAILED.value}
                quant_sig = None

        # Get the llm signal
        if self._llm is not None:
            try:
                llm_sig = self._llm.get_signal(ticker, date)
                if llm_sig.degraded:
                    reason_code = signal_reason_code(llm_sig) or ReasonCode.LLM_SIGNAL_FAILED.value
                    degradation_reasons.append(reason_code)
                    source_diagnostics["llm"] = self._build_source_diagnostic(llm_sig, reason_code)
                    logger.warning("TradingOrchestrator: llm signal degraded for %s %s", ticker, date)
                    llm_sig = None
            except Exception as e:
                logger.error("TradingOrchestrator: llm get_signal failed: %s", e)
                degradation_reasons.append(ReasonCode.LLM_SIGNAL_FAILED.value)
                source_diagnostics["llm"] = {"reason_code": ReasonCode.LLM_SIGNAL_FAILED.value}
                llm_sig = None

        # Preserve diagnostics even when both lanes degrade and no FinalSignal can be produced.
        if quant_sig is None and llm_sig is None:
            degradation_reasons.append(ReasonCode.BOTH_SIGNALS_UNAVAILABLE.value)
            raise CombinedSignalFailure(
                "both quant and llm signals are None",
                reason_codes=tuple(dict.fromkeys(degradation_reasons)),
                source_diagnostics=source_diagnostics,
                data_quality=self._summarize_data_quality(source_diagnostics),
            )

        final_signal = self._merger.merge(
            quant_sig,
            llm_sig,
            degradation_reasons=degradation_reasons,
        )
        data_quality = self._summarize_data_quality(source_diagnostics)
        metadata = dict(final_signal.metadata)
        if source_diagnostics:
            metadata["source_diagnostics"] = source_diagnostics
        if data_quality:
            metadata["data_quality"] = data_quality
        if llm_sig is not None and llm_sig.metadata.get("research") is not None:
            metadata["research"] = llm_sig.metadata.get("research")
        final_signal.metadata = metadata
        return final_signal

    @staticmethod
    def _build_source_diagnostic(signal: Signal, reason_code: str) -> dict:
        diagnostic = {"reason_code": reason_code}
        data_quality = signal.metadata.get("data_quality")
        if data_quality is not None:
            diagnostic["data_quality"] = data_quality
        error = signal.metadata.get("error")
        if error:
            diagnostic["error"] = error
        research = signal.metadata.get("research")
        if research is not None:
            diagnostic["research"] = research
        return diagnostic

    @staticmethod
    def _summarize_data_quality(source_diagnostics: dict[str, dict]) -> Optional[dict]:
        states: list[tuple[str, dict]] = []
        for source, diagnostic in source_diagnostics.items():
            data_quality = diagnostic.get("data_quality")
            if isinstance(data_quality, dict) and data_quality.get("state"):
                states.append((source, data_quality))

        if not states:
            return None

        priority = {
            "provider_mismatch": 0,
            "non_trading_day": 1,
            "stale_data": 2,
            "partial_data": 3,
        }
        source, selected = sorted(
            states,
            key=lambda item: priority.get(item[1].get("state"), 999),
        )[0]
        summary = dict(selected)
        summary["source"] = source
        summary["issues"] = [
            {"source": issue_source, **issue_quality}
            for issue_source, issue_quality in states
        ]
        return summary
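The priority table in `_summarize_data_quality` surfaces the most severe per-source state at the top level (lower number wins; unknown states sort last at 999). A minimal sketch with hypothetical diagnostics:

```python
# Priority table mirroring TradingOrchestrator._summarize_data_quality;
# the (source, state) pairs below are hypothetical example diagnostics.
priority = {"provider_mismatch": 0, "non_trading_day": 1, "stale_data": 2, "partial_data": 3}

states = [
    ("quant", {"state": "stale_data"}),
    ("llm", {"state": "provider_mismatch"}),
]
# Sort by severity and take the worst offender as the headline state:
source, selected = sorted(states, key=lambda item: priority.get(item[1]["state"], 999))[0]
print(source, selected["state"])  # llm provider_mismatch
```

The full method then copies the selected state into the summary and keeps every per-source state under an `issues` list so nothing is lost.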
@ -0,0 +1,164 @@
from __future__ import annotations

import argparse
import json
from collections import Counter
from pathlib import Path
from statistics import median

AB_SCHEMA_VERSION = "tradingagents.profile_ab.v1alpha1"


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Compare TradingAgents stage-profile traces for a minimal A/B workflow.",
    )
    parser.add_argument("--a", nargs="+", required=True, help="Trace file(s) or directories for cohort A")
    parser.add_argument("--b", nargs="+", required=True, help="Trace file(s) or directories for cohort B")
    parser.add_argument("--label-a", default="A")
    parser.add_argument("--label-b", default="B")
    parser.add_argument("--output", help="Optional path to write the comparison JSON")
    return parser


def _expand_inputs(items: list[str]) -> list[Path]:
    files: list[Path] = []
    for item in items:
        path = Path(item)
        if path.is_dir():
            files.extend(sorted(candidate for candidate in path.glob("*.json") if candidate.is_file()))
        elif path.is_file():
            files.append(path)
    return files


def _load_trace(path: Path) -> dict:
    data = json.loads(path.read_text())
    if not isinstance(data, dict):
        raise ValueError(f"trace at {path} must be a JSON object")
    payload = dict(data)
    payload.setdefault("_source_path", str(path))
    return payload


def _phase_totals_ms(trace: dict) -> dict[str, int]:
    summary = trace.get("summary") or {}
    phase_totals = summary.get("phase_totals_seconds") or trace.get("phase_totals_seconds") or {}
    return {str(key): int(round(float(value) * 1000)) for key, value in phase_totals.items()}


def summarize_traces(traces: list[dict], label: str) -> dict:
    run_count = len(traces)
    ok_runs = [trace for trace in traces if trace.get("status") == "ok"]
    degraded_runs = [
        trace for trace in traces
        if ((trace.get("summary") or {}).get("final_research_status") not in (None, "full"))
    ]
    total_elapsed = [int((trace.get("summary") or {}).get("total_elapsed_ms", 0)) for trace in ok_runs]
    event_counts = [int((trace.get("summary") or {}).get("event_count", 0)) for trace in ok_runs]
    status_counts = Counter(str(trace.get("status") or "unknown") for trace in traces)
    schema_versions = sorted({str(trace.get("trace_schema_version") or "unknown") for trace in traces})
    source_files = sorted(str(trace.get("_source_path")) for trace in traces if trace.get("_source_path"))

    phase_values: dict[str, list[int]] = {}
    for trace in ok_runs:
        for phase, elapsed_ms in _phase_totals_ms(trace).items():
            phase_values.setdefault(phase, []).append(elapsed_ms)

    phase_medians = {phase: int(median(values)) for phase, values in sorted(phase_values.items()) if values}
    variants = sorted({str(trace.get("variant_label") or label) for trace in traces})
    return {
        "label": label,
        "run_count": run_count,
        "ok_count": len(ok_runs),
        "error_count": run_count - len(ok_runs),
        "degraded_run_count": len(degraded_runs),
        "variants": variants,
        "status_counts": dict(sorted(status_counts.items())),
        "trace_schema_versions": schema_versions,
        "source_files": source_files,
        "median_total_elapsed_ms": int(median(total_elapsed)) if total_elapsed else None,
        "median_event_count": int(median(event_counts)) if event_counts else None,
        "median_phase_elapsed_ms": phase_medians,
    }


def compare_summaries(summary_a: dict, summary_b: dict) -> dict:
    total_a = summary_a.get("median_total_elapsed_ms")
    total_b = summary_b.get("median_total_elapsed_ms")
    degraded_a = summary_a.get("degraded_run_count", 0)
    degraded_b = summary_b.get("degraded_run_count", 0)
    error_a = summary_a.get("error_count", 0)
    error_b = summary_b.get("error_count", 0)

    faster = None
    if total_a is not None and total_b is not None:
        if total_a < total_b:
            faster = summary_a["label"]
        elif total_b < total_a:
            faster = summary_b["label"]

    lower_degradation = None
    if degraded_a < degraded_b:
        lower_degradation = summary_a["label"]
    elif degraded_b < degraded_a:
        lower_degradation = summary_b["label"]

    lower_error_rate = None
    if error_a < error_b:
        lower_error_rate = summary_a["label"]
    elif error_b < error_a:
        lower_error_rate = summary_b["label"]

    recommended = None
    if faster == summary_a["label"] and lower_degradation in (None, summary_a["label"]) and lower_error_rate in (None, summary_a["label"]):
        recommended = summary_a["label"]
    elif faster == summary_b["label"] and lower_degradation in (None, summary_b["label"]) and lower_error_rate in (None, summary_b["label"]):
        recommended = summary_b["label"]
    elif lower_degradation == summary_a["label"] and total_a == total_b and lower_error_rate in (None, summary_a["label"]):
        recommended = summary_a["label"]
    elif lower_degradation == summary_b["label"] and total_a == total_b and lower_error_rate in (None, summary_b["label"]):
        recommended = summary_b["label"]

    return {
        "faster_label": faster,
        "lower_degradation_label": lower_degradation,
        "lower_error_rate_label": lower_error_rate,
        "recommended_label": recommended,
    }


def build_comparison(traces_a: list[dict], traces_b: list[dict], *, label_a: str, label_b: str) -> dict:
    summary_a = summarize_traces(traces_a, label_a)
    summary_b = summarize_traces(traces_b, label_b)
    return {
        "schema_version": AB_SCHEMA_VERSION,
        "cohorts": {
            label_a: summary_a,
            label_b: summary_b,
        },
        "comparison": compare_summaries(summary_a, summary_b),
    }


def main() -> None:
    args = build_parser().parse_args()
    files_a = _expand_inputs(args.a)
    files_b = _expand_inputs(args.b)
    if not files_a:
        raise SystemExit("no trace files found for cohort A")
    if not files_b:
        raise SystemExit("no trace files found for cohort B")

    traces_a = [_load_trace(path) for path in files_a]
    traces_b = [_load_trace(path) for path in files_b]
    payload = build_comparison(traces_a, traces_b, label_a=args.label_a, label_b=args.label_b)

    rendered = json.dumps(payload, ensure_ascii=False, indent=2)
    if args.output:
        Path(args.output).write_text(rendered)
    print(rendered)


if __name__ == "__main__":
    main()
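The cohort summary reduces per-run phase timings to medians before any A/B comparison is made. A minimal sketch of that reduction, using hypothetical timings in place of real trace files:

```python
from statistics import median

# Hypothetical phase timings (ms) from three "ok" runs of one cohort,
# reduced the same way summarize_traces builds median_phase_elapsed_ms.
runs = [
    {"analyst": 4200, "research": 9100},
    {"analyst": 3900, "research": 8800},
    {"analyst": 4600, "research": 9500},
]

phase_values: dict[str, list[int]] = {}
for run in runs:
    for phase, elapsed_ms in run.items():
        phase_values.setdefault(phase, []).append(elapsed_ms)

phase_medians = {phase: int(median(values)) for phase, values in sorted(phase_values.items())}
print(phase_medians)  # {'analyst': 4200, 'research': 9100}
```

Medians rather than means keep a single slow or failed run from dominating the cohort's headline numbers.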
@ -0,0 +1,246 @@
|
|||
from __future__ import annotations
|
||||
|
||||
import _thread
|
||||
import argparse
|
||||
import json
|
||||
import threading
|
||||
import time
|
||||
from collections import defaultdict
|
||||
from contextlib import contextmanager
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
|
||||
from orchestrator.profile_trace_utils import TRACE_KIND, TRACE_SCHEMA_VERSION, build_trace_summary
|
||||
from tradingagents.graph.propagation import Propagator
|
||||
from tradingagents.graph.trading_graph import TradingAgentsGraph
|
||||
|
||||
_PHASE_MAP = {
|
||||
"Market Analyst": "analyst",
|
||||
"Bull Researcher": "research",
|
||||
"Bear Researcher": "research",
|
||||
"Research Manager": "research",
|
||||
"Trader": "trading",
|
||||
"Aggressive Analyst": "risk",
|
||||
"Conservative Analyst": "risk",
|
||||
"Neutral Analyst": "risk",
|
||||
"Portfolio Manager": "portfolio",
|
||||
}
|
||||
|
||||
_LLM_KIND_MAP = {
|
||||
"Market Analyst": "quick",
|
||||
"Bull Researcher": "quick",
|
||||
"Bear Researcher": "quick",
|
||||
"Research Manager": "deep",
|
||||
"Trader": "quick",
|
||||
"Aggressive Analyst": "quick",
|
||||
"Conservative Analyst": "quick",
|
||||
"Neutral Analyst": "quick",
|
||||
"Portfolio Manager": "deep",
|
||||
}
|
||||
|
||||
|
||||
def build_parser() -> argparse.ArgumentParser:
|
||||
parser = argparse.ArgumentParser(description="Profile TradingAgents graph stage timings.")
|
||||
parser.add_argument("--ticker", required=True)
|
||||
parser.add_argument("--date", required=True)
|
||||
parser.add_argument("--provider", default="anthropic")
|
||||
parser.add_argument("--model", default="MiniMax-M2.7-highspeed")
|
||||
parser.add_argument("--base-url", default="https://api.minimaxi.com/anthropic")
|
||||
parser.add_argument("--timeout", type=float, default=45.0)
|
||||
parser.add_argument("--max-retries", type=int, default=0)
|
||||
parser.add_argument("--analysis-prompt-style", default="compact")
|
||||
parser.add_argument("--selected-analysts", default="market")
|
||||
parser.add_argument("--overall-timeout", type=int, default=120)
|
||||
parser.add_argument("--dump-dir", default="orchestrator/profile_runs")
|
||||
parser.add_argument("--dump-raw-on-failure", action="store_true")
|
||||
return parser
|
||||
|
||||
|
||||
class _ProfileTimeout(Exception):
|
||||
pass
|
||||
|
||||
|
||||
@contextmanager
|
||||
def _overall_timeout_guard(seconds: int):
|
||||
timed_out = threading.Event()
|
||||
timer: threading.Timer | None = None
|
||||
|
||||
def interrupt_main() -> None:
|
||||
timed_out.set()
|
||||
_thread.interrupt_main()
|
||||
|
||||
if seconds > 0:
|
||||
timer = threading.Timer(seconds, interrupt_main)
|
||||
timer.daemon = True
|
||||
timer.start()
|
||||
|
||||
try:
|
||||
yield timed_out
|
||||
finally:
|
||||
if timer is not None:
|
||||
timer.cancel()
|
||||
|
||||
|
||||
def _jsonable(value):
|
||||
if isinstance(value, (str, int, float, bool)) or value is None:
|
||||
return value
|
||||
if isinstance(value, dict):
|
||||
return {str(k): _jsonable(v) for k, v in value.items()}
|
||||
if isinstance(value, (list, tuple)):
|
||||
return [_jsonable(item) for item in value]
|
||||
return repr(value)
|
||||
|
||||
|
||||
def _extract_research_state(event: dict) -> tuple[str | None, str | None, int | None, int | None]:
|
||||
node_payload = next(iter(event.values()), {})
|
||||
if not isinstance(node_payload, dict):
|
||||
return None, None, None, None
|
||||
debate_state = node_payload.get("investment_debate_state") or {}
|
||||
if not isinstance(debate_state, dict):
|
||||
return None, None, None, None
|
||||
history = debate_state.get("history") or ""
|
||||
current = debate_state.get("current_response") or ""
|
||||
return (
|
||||
debate_state.get("research_status"),
|
||||
debate_state.get("degraded_reason"),
|
||||
len(history),
|
||||
len(current),
|
||||
)
|
||||
|
||||
|
||||
def build_trace_payload(
|
||||
*,
|
||||
status: str,
|
||||
run_id: str,
|
||||
ticker: str,
|
||||
date: str,
|
||||
selected_analysts: list[str],
|
||||
analysis_prompt_style: str,
|
||||
variant_label: str,
|
||||
node_timings: list[dict],
|
||||
phase_totals: dict[str, float],
|
||||
dump_path: Path,
|
||||
raw_events: list[dict],
|
||||
error: str | None = None,
|
||||
exception_type: str | None = None,
|
||||
) -> dict:
|
||||
payload = {
|
||||
"trace_schema_version": TRACE_SCHEMA_VERSION,
|
||||
"trace_kind": TRACE_KIND,
|
||||
"run_id": run_id,
|
||||
"status": status,
|
||||
"ticker": ticker,
|
||||
"date": date,
|
||||
"variant_label": variant_label,
|
||||
"selected_analysts": selected_analysts,
|
||||
"analysis_prompt_style": analysis_prompt_style,
|
||||
"node_timings": node_timings,
|
||||
"summary": build_trace_summary(node_timings, phase_totals),
|
||||
"dump_path": str(dump_path),
|
||||
"raw_events": raw_events,
|
||||
}
|
||||
if error is not None:
|
||||
payload["error"] = error
|
||||
if exception_type is not None:
|
||||
payload["exception_type"] = exception_type
|
||||
return payload
|
||||
|
||||
|
||||
def main() -> None:
|
||||
args = build_parser().parse_args()
|
||||
selected_analysts = [item.strip() for item in args.selected_analysts.split(",") if item.strip()]
|
||||
config = {
|
||||
"llm_provider": args.provider,
|
||||
"deep_think_llm": args.model,
|
||||
"quick_think_llm": args.model,
|
||||
"backend_url": args.base_url,
|
||||
"selected_analysts": selected_analysts,
|
||||
"analysis_prompt_style": args.analysis_prompt_style,
|
||||
"llm_timeout": args.timeout,
|
||||
"llm_max_retries": args.max_retries,
|
||||
"max_debate_rounds": 1,
|
||||
"max_risk_discuss_rounds": 1,
|
||||
}
|
||||
|
||||
graph = TradingAgentsGraph(selected_analysts=selected_analysts, config=config)
|
||||
state = Propagator().create_initial_state(args.ticker, args.date)
|
||||
config_kwargs = {"recursion_limit": 100, "max_concurrency": 1}
|
||||
|
||||
node_timings = []
|
||||
phase_totals = defaultdict(float)
|
||||
raw_events = []
|
||||
started_at = time.monotonic()
|
||||
last_at = started_at
|
||||
run_id = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
|
||||
dump_dir = Path(args.dump_dir)
|
||||
dump_dir.mkdir(parents=True, exist_ok=True)
|
||||
dump_path = dump_dir / f"{args.ticker.replace('/', '_')}_{args.date}_{run_id}.json"
|
||||
|
||||
try:
|
||||
with _overall_timeout_guard(args.overall_timeout) as timed_out:
|
||||
try:
|
||||
for event in graph.graph.stream(state, stream_mode="updates", config=config_kwargs):
|
||||
now = time.monotonic()
|
||||
nodes = list(event.keys())
|
||||
phases = sorted({_PHASE_MAP.get(node, "unknown") for node in nodes})
|
||||
llm_kinds = sorted({_LLM_KIND_MAP.get(node, "unknown") for node in nodes})
|
||||
delta = round(now - last_at, 3)
|
||||
research_status, degraded_reason, history_len, response_len = _extract_research_state(event)
|
||||
entry = {
|
||||
"run_id": run_id,
|
||||
"nodes": nodes,
|
||||
"phases": phases,
|
||||
"llm_kinds": llm_kinds,
|
||||
"start_at": round(last_at - started_at, 3),
|
||||
"end_at": round(now - started_at, 3),
|
||||
"elapsed_ms": int(delta * 1000),
|
||||
"selected_analysts": selected_analysts,
|
||||
"analysis_prompt_style": args.analysis_prompt_style,
|
||||
"research_status": research_status,
|
||||
"degraded_reason": degraded_reason,
|
||||
"history_len": history_len,
|
||||
"response_len": response_len,
|
||||
}
|
||||
node_timings.append(entry)
|
||||
raw_events.append(_jsonable(event))
|
||||
for phase in phases:
|
||||
phase_totals[phase] += delta
|
||||
last_at = now
|
||||
except KeyboardInterrupt:
|
||||
if timed_out.is_set():
|
||||
raise _ProfileTimeout(f"profiling timeout after {args.overall_timeout}s") from None
|
||||
raise
|
||||
|
||||
payload = {
|
||||
"status": "ok",
|
||||
"ticker": args.ticker,
|
||||
"date": args.date,
|
||||
"selected_analysts": selected_analysts,
|
||||
"analysis_prompt_style": args.analysis_prompt_style,
|
||||
"node_timings": node_timings,
|
||||
"phase_totals_seconds": {key: round(value, 3) for key, value in phase_totals.items()},
|
||||
"dump_path": str(dump_path),
|
||||
"raw_events": raw_events if args.dump_raw_on_failure else [],
|
||||
}
|
||||
except Exception as exc:
|
||||
payload = {
|
||||
"run_id": run_id,
|
||||
"status": "error",
|
||||
"ticker": args.ticker,
|
||||
"date": args.date,
|
||||
"selected_analysts": selected_analysts,
|
||||
"analysis_prompt_style": args.analysis_prompt_style,
|
||||
"error": str(exc),
|
||||
"exception_type": type(exc).__name__,
|
||||
"node_timings": node_timings,
|
||||
"phase_totals_seconds": {key: round(value, 3) for key, value in phase_totals.items()},
|
||||
"dump_path": str(dump_path),
|
||||
"raw_events": raw_events,
|
||||
}
|
||||
|
||||
dump_path.write_text(json.dumps(payload, ensure_ascii=False, indent=2))
|
||||
print(json.dumps(payload, ensure_ascii=False, indent=2))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
|
|
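The per-event accounting in the stream loop above can be checked in isolation: each streamed event's wall-clock delta is attributed to every phase it touches. A minimal standalone sketch, with a hypothetical `_PHASE_MAP` and made-up deltas standing in for the real node-to-phase mapping:

```python
from collections import defaultdict

# Hypothetical phase map and pre-computed event deltas, mirroring the
# accounting loop in the profiler above.
_PHASE_MAP = {"Market Analyst": "analysis", "Bull Researcher": "research"}

events = [
    (["Market Analyst"], 1.5),                     # (nodes, elapsed seconds)
    (["Bull Researcher", "Market Analyst"], 2.0),  # one delta, two phases
]

phase_totals = defaultdict(float)
for nodes, delta in events:
    phases = sorted({_PHASE_MAP.get(node, "unknown") for node in nodes})
    for phase in phases:
        phase_totals[phase] += delta  # the full delta is charged to each phase

assert dict(phase_totals) == {"analysis": 3.5, "research": 2.0}
```

Note that when an event touches nodes from multiple phases, the same delta is charged to each phase, so phase totals can sum to more than the wall-clock total.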
@ -0,0 +1,23 @@
from __future__ import annotations

from collections import Counter

TRACE_SCHEMA_VERSION = "tradingagents.profile_trace.v1alpha1"
TRACE_KIND = "tradingagents_stage_profile"


def build_trace_summary(node_timings: list[dict], phase_totals: dict[str, float]) -> dict:
    phase_totals_seconds = {key: round(value, 3) for key, value in phase_totals.items()}
    degraded_events = [entry for entry in node_timings if entry.get("research_status") not in (None, "full")]
    node_counter = Counter(node for entry in node_timings for node in entry.get("nodes", []))
    total_elapsed_ms = sum(int(entry.get("elapsed_ms", 0)) for entry in node_timings)
    return {
        "event_count": len(node_timings),
        "total_elapsed_ms": total_elapsed_ms,
        "phase_totals_seconds": phase_totals_seconds,
        "degraded_event_count": len(degraded_events),
        "final_research_status": node_timings[-1].get("research_status") if node_timings else None,
        "final_degraded_reason": node_timings[-1].get("degraded_reason") if node_timings else None,
        "unique_nodes": sorted(node_counter.keys()),
        "node_hit_count": dict(sorted(node_counter.items())),
    }
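A quick sketch of what the summary computes from sample entries. Field names follow `build_trace_summary`; the record values are illustrative, not real profiler output:

```python
from collections import Counter

# Sample timing entries shaped like the profiler's node_timings records.
node_timings = [
    {"nodes": ["Market Analyst"], "elapsed_ms": 1500, "research_status": "full"},
    {"nodes": ["Bull Researcher"], "elapsed_ms": 2000, "research_status": "degraded"},
]

# Mirrors the summary's aggregation: per-node hit counts, total elapsed
# time, and the count of events whose research was not "full".
node_counter = Counter(node for entry in node_timings for node in entry.get("nodes", []))
degraded = [e for e in node_timings if e.get("research_status") not in (None, "full")]
total_elapsed_ms = sum(int(e.get("elapsed_ms", 0)) for e in node_timings)

assert total_elapsed_ms == 3500
assert dict(node_counter) == {"Market Analyst": 1, "Bull Researcher": 1}
assert len(degraded) == 1
assert node_timings[-1]["research_status"] == "degraded"
```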
@ -0,0 +1,267 @@
import json
import logging
import sqlite3
import sys
from datetime import datetime, timezone, timedelta
from typing import Any

import pandas as pd
import yfinance as yf

from orchestrator.config import OrchestratorConfig
from orchestrator.contracts.error_taxonomy import ReasonCode
from orchestrator.contracts.result_contract import Signal, build_error_signal
from orchestrator.market_calendar import is_non_trading_day
from tradingagents.dataflows.stockstats_utils import yf_retry

logger = logging.getLogger(__name__)


def _build_data_quality(state: str, **details: Any) -> dict[str, Any]:
    payload = {"state": state}
    payload.update({key: value for key, value in details.items() if value is not None})
    return payload
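Copied out standalone, the helper's None-filtering behaves like this (a direct copy of `_build_data_quality` with an illustrative call):

```python
from typing import Any


def _build_data_quality(state: str, **details: Any) -> dict[str, Any]:
    # Identical to the helper above: start from the state, then merge in
    # only the detail keys whose values are not None.
    payload = {"state": state}
    payload.update({key: value for key, value in details.items() if value is not None})
    return payload


quality = _build_data_quality(
    "non_trading_day",
    requested_date="2026-04-11",
    last_available_date=None,  # dropped: None-valued details never reach the payload
)
assert quality == {"state": "non_trading_day", "requested_date": "2026-04-11"}
```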

class QuantRunner:
    def __init__(self, config: OrchestratorConfig):
        if not config.quant_backtest_path:
            raise ValueError("OrchestratorConfig.quant_backtest_path must be set")
        self._config = config
        path = config.quant_backtest_path
        if path not in sys.path:
            sys.path.insert(0, path)
        self._db_path = f"{path}/research_results/runs.db"

    def get_signal(self, ticker: str, date: str) -> Signal:
        """
        Return the quant signal for `ticker` on `date`.

        `date` format: 'YYYY-MM-DD'. Returns a Signal with source="quant".
        """
        result = self._load_best_params()
        params: dict = result["params"]
        sharpe: float = result["sharpe_ratio"]

        # Fetch the 60 days of history preceding `date`.
        end_dt = datetime.strptime(date, "%Y-%m-%d")
        start_dt = end_dt - timedelta(days=60)
        start_str = start_dt.strftime("%Y-%m-%d")

        end_exclusive = (end_dt + timedelta(days=1)).strftime("%Y-%m-%d")
        df = yf_retry(
            lambda: yf.download(
                ticker,
                start=start_str,
                end=end_exclusive,
                progress=False,
                auto_adjust=True,
            )
        )
        if df.empty:
            logger.warning("No price data for %s between %s and %s", ticker, start_str, date)
            if is_non_trading_day(ticker, end_dt.date()):
                return build_error_signal(
                    ticker=ticker,
                    source="quant",
                    reason_code=ReasonCode.NON_TRADING_DAY.value,
                    message=f"{date} is not a trading day",
                    metadata={
                        "start_date": start_str,
                        "end_date": date,
                        "data_quality": _build_data_quality(
                            "non_trading_day",
                            requested_date=date,
                        ),
                    },
                )
            return build_error_signal(
                ticker=ticker,
                source="quant",
                reason_code=ReasonCode.QUANT_NO_DATA.value,
                message=f"no price data between {start_str} and {date}",
                metadata={
                    "start_date": start_str,
                    "end_date": date,
                },
            )

        # Normalize column names to lowercase.
        df.columns = [c[0].lower() if isinstance(c, tuple) else c.lower() for c in df.columns]

        required_columns = {"open", "high", "low", "close"}
        missing_columns = sorted(required_columns - set(df.columns))
        if missing_columns:
            return build_error_signal(
                ticker=ticker,
                source="quant",
                reason_code=ReasonCode.PARTIAL_DATA.value,
                message=f"missing price columns: {', '.join(missing_columns)}",
                metadata={
                    "start_date": start_str,
                    "end_date": date,
                    "data_quality": _build_data_quality(
                        "partial_data",
                        missing_fields=missing_columns,
                    ),
                },
            )

        df.index = pd.to_datetime(df.index)
        available_dates = df.index.normalize()
        requested_date = pd.Timestamp(end_dt.date())
        if requested_date not in available_dates:
            last_available_ts = df.index.max()
            last_available_date = (
                last_available_ts.strftime("%Y-%m-%d")
                if hasattr(last_available_ts, "strftime")
                else str(last_available_ts)
            )
            if is_non_trading_day(ticker, end_dt.date()):
                return build_error_signal(
                    ticker=ticker,
                    source="quant",
                    reason_code=ReasonCode.NON_TRADING_DAY.value,
                    message=f"{date} is not a trading day",
                    metadata={
                        "start_date": start_str,
                        "end_date": date,
                        "data_quality": _build_data_quality(
                            "non_trading_day",
                            requested_date=date,
                            last_available_date=last_available_date,
                        ),
                    },
                )
            return build_error_signal(
                ticker=ticker,
                source="quant",
                reason_code=ReasonCode.STALE_DATA.value,
                message=f"latest price data stops at {last_available_date}",
                metadata={
                    "start_date": start_str,
                    "end_date": date,
                    "data_quality": _build_data_quality(
                        "stale_data",
                        requested_date=date,
                        last_available_date=last_available_date,
                    ),
                },
            )

        # Instantiate BollingerStrategy with the best parameters.
        # Lazy import: requires quant_backtest_path to be in sys.path (set in __init__).
        from strategies.momentum import BollingerStrategy
        from core.data_models import Bar, OrderDirection

        strategy = BollingerStrategy(
            period=params.get("period", 20),
            num_std=params.get("num_std", 2.0),
            position_pct=params.get("position_pct", 0.20),
            stop_loss_pct=params.get("stop_loss_pct", 0.05),
            take_profit_pct=params.get("take_profit_pct", 0.15),
        )

        # Feed bars to the strategy one by one, replaying history.
        direction = 0
        orders: list = []
        context: dict[str, Any] = {"positions": {}}

        for ts, row in df.iterrows():
            bar = Bar(
                symbol=ticker,
                timestamp=ts.to_pydatetime() if hasattr(ts, "to_pydatetime") else ts,
                open=float(row["open"]),
                high=float(row["high"]),
                low=float(row["low"]),
                close=float(row["close"]),
                volume=float(row.get("volume", 0)),
            )
            orders = strategy.on_bar(bar, context)
            # Update the simulated positions.
            for order in orders:
                if order.direction == OrderDirection.BUY:
                    context["positions"][ticker] = order.volume
                elif order.direction == OrderDirection.SELL:
                    context["positions"][ticker] = 0

        # Derive the signal from the last bar's orders.
        last_orders = orders if df.shape[0] > 0 else []
        for order in last_orders:
            if order.direction == OrderDirection.BUY:
                direction = 1
                break
            elif order.direction == OrderDirection.SELL:
                direction = -1
                break

        # Compute max_sharpe (the global maximum from the DB).
        try:
            with sqlite3.connect(self._db_path) as conn:
                cur = conn.cursor()
                cur.execute("SELECT MAX(sharpe_ratio) FROM backtest_results")
                row = cur.fetchone()
                max_sharpe = float(row[0]) if row and row[0] is not None else sharpe
        except Exception:
            max_sharpe = sharpe

        confidence = self._calc_confidence(sharpe, max_sharpe)

        return Signal(
            ticker=ticker,
            direction=direction,
            confidence=confidence,
            source="quant",
            timestamp=datetime.now(timezone.utc),
            metadata={
                "params": params,
                "sharpe_ratio": sharpe,
                "max_sharpe": max_sharpe,
                "data_quality": _build_data_quality(
                    "ok",
                    requested_date=date,
                    last_available_date=date,
                ),
            },
        )

    def _load_best_params(self) -> dict:
        """
        Query SQLite directly for the best BollingerStrategy parameters.

        The parameters are globally optimal, not per ticker (the
        backtest_results table has no ticker column; optimization is global).
        strategy_type accepts both 'BollingerStrategy' and 'bollinger'.
        """
        with sqlite3.connect(self._db_path) as conn:
            cur = conn.cursor()
            # Match the canonical 'BollingerStrategy' spelling, with
            # 'bollinger' as a legacy fallback.
            cur.execute(
                """
                SELECT params, sharpe_ratio
                FROM backtest_results
                WHERE strategy_type IN ('BollingerStrategy', 'bollinger')
                ORDER BY sharpe_ratio DESC
                LIMIT 1
                """,
            )
            row = cur.fetchone()

        if row is None:
            raise ValueError(
                "No BollingerStrategy results found in ResultStore. "
                "Run optimization first: python quant_backtest/run_research.py"
            )

        params = json.loads(row[0]) if isinstance(row[0], str) else row[0]
        return {"params": params, "sharpe_ratio": float(row[1])}

    def _calc_confidence(self, sharpe: float, max_sharpe: float) -> float:
        """
        Normalize Sharpe into a confidence value.

        - max_sharpe == 0 returns 0.5 (neutral default, avoids division by zero)
        - sharpe / max_sharpe is capped at 1.0
        - and floored at 0.0 (a negative Sharpe never yields negative confidence)
        """
        if max_sharpe == 0:
            return 0.5
        ratio = sharpe / max_sharpe
        return max(0.0, min(1.0, ratio))
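The normalization in `_calc_confidence` can be exercised standalone; the function below is a direct copy of its clamping logic:

```python
def calc_confidence(sharpe: float, max_sharpe: float) -> float:
    # Mirrors QuantRunner._calc_confidence: the ratio is clamped into
    # [0.0, 1.0], with 0.5 as the neutral default when max_sharpe is zero.
    if max_sharpe == 0:
        return 0.5
    return max(0.0, min(1.0, sharpe / max_sharpe))


assert calc_confidence(1.2, 2.4) == 0.5    # half of the best Sharpe observed
assert calc_confidence(3.0, 2.0) == 1.0    # capped at 1.0
assert calc_confidence(-0.5, 2.0) == 0.0   # negative Sharpe floors at 0.0
assert calc_confidence(1.0, 0.0) == 0.5    # division-by-zero guard
```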
@ -0,0 +1,90 @@
import logging
from datetime import datetime, timezone
from typing import Optional

from orchestrator.config import OrchestratorConfig
from orchestrator.contracts.result_contract import FinalSignal, Signal

logger = logging.getLogger(__name__)


def _sign(x: float) -> int:
    """Return +1, -1, or 0."""
    if x > 0:
        return 1
    elif x < 0:
        return -1
    return 0


class SignalMerger:
    def __init__(self, config: OrchestratorConfig) -> None:
        self._config = config

    def merge(
        self,
        quant: Optional[Signal],
        llm: Optional[Signal],
        degradation_reasons: Optional[list[str]] = None,
    ) -> FinalSignal:
        now = datetime.now(timezone.utc)
        reasons = tuple(dict.fromkeys(code for code in (degradation_reasons or []) if code))

        # Both sources failed.
        if quant is None and llm is None:
            raise ValueError("both quant and llm signals are None")

        ticker = (quant or llm).ticker  # type: ignore[union-attr]

        # LLM only (quant failed).
        if quant is None:
            return FinalSignal(
                ticker=ticker,
                direction=llm.direction,
                confidence=min(llm.confidence * self._config.llm_solo_penalty,
                               self._config.llm_weight_cap),
                quant_signal=None,
                llm_signal=llm,
                timestamp=now,
                degrade_reason_codes=reasons,
            )

        # Quant only (LLM failed).
        if llm is None:
            return FinalSignal(
                ticker=ticker,
                direction=quant.direction,
                confidence=min(quant.confidence * self._config.quant_solo_penalty,
                               self._config.quant_weight_cap),
                quant_signal=quant,
                llm_signal=None,
                timestamp=now,
                degrade_reason_codes=reasons,
            )

        # Both available: weighted merge.
        # Cap each signal's contribution before merging.
        quant_conf = min(quant.confidence, self._config.quant_weight_cap)
        llm_conf = min(llm.confidence, self._config.llm_weight_cap)
        weighted_sum = (
            quant.direction * quant_conf
            + llm.direction * llm_conf
        )
        final_direction = _sign(weighted_sum)
        if final_direction == 0:
            logger.info(
                "SignalMerger: weighted_sum=0 for %s — signals cancel out, HOLD",
                ticker,
            )
        total_conf = quant_conf + llm_conf
        final_confidence = abs(weighted_sum) / total_conf if total_conf > 0 else 0.0

        return FinalSignal(
            ticker=ticker,
            direction=final_direction,
            confidence=final_confidence,
            quant_signal=quant,
            llm_signal=llm,
            timestamp=now,
            degrade_reason_codes=reasons,
        )
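The weighted-merge arithmetic in the both-available branch reduces to a few lines and can be checked in isolation. The cap values below are illustrative defaults, not the real `OrchestratorConfig` settings:

```python
def merge_directions(qd: int, qc: float, ld: int, lc: float,
                     quant_cap: float = 0.6, llm_cap: float = 0.4) -> tuple[int, float]:
    # Mirrors SignalMerger's both-available branch: cap each confidence,
    # sum the signed contributions, then renormalize by total confidence.
    qc, lc = min(qc, quant_cap), min(lc, llm_cap)
    weighted = qd * qc + ld * lc
    direction = (weighted > 0) - (weighted < 0)  # same result as _sign()
    total = qc + lc
    return direction, abs(weighted) / total if total > 0 else 0.0


# Agreeing signals reinforce; exactly opposing signals cancel to HOLD.
assert merge_directions(1, 0.6, 1, 0.4) == (1, 1.0)
assert merge_directions(1, 0.4, -1, 0.4) == (0, 0.0)
```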
@ -0,0 +1,170 @@
from datetime import datetime, timezone

import pytest

import orchestrator.orchestrator as orchestrator_module
from orchestrator.config import OrchestratorConfig
from orchestrator.contracts.error_taxonomy import ReasonCode
from orchestrator.contracts.result_contract import CombinedSignalFailure
from orchestrator.signals import Signal


def _signal(
    source: str,
    *,
    direction: int,
    confidence: float,
    metadata: dict | None = None,
    reason_code: str | None = None,
) -> Signal:
    return Signal(
        ticker="AAPL",
        direction=direction,
        confidence=confidence,
        source=source,
        timestamp=datetime.now(timezone.utc),
        metadata=metadata or {},
        reason_code=reason_code,
    )


def test_trading_orchestrator_degrades_to_llm_only_when_quant_has_error(monkeypatch):
    class FakeQuantRunner:
        def __init__(self, _config):
            pass

        def get_signal(self, _ticker, _date):
            return _signal("quant", direction=1, confidence=0.8, metadata={"error": "db unavailable"})

    class FakeLLMRunner:
        def __init__(self, _config):
            pass

        def get_signal(self, _ticker, _date):
            return _signal("llm", direction=-1, confidence=0.9)

    monkeypatch.setattr(orchestrator_module, "QuantRunner", FakeQuantRunner)
    monkeypatch.setattr(orchestrator_module, "LLMRunner", FakeLLMRunner)

    result = orchestrator_module.TradingOrchestrator(
        OrchestratorConfig(quant_backtest_path="/tmp/quant")
    ).get_combined_signal("AAPL", "2026-04-11")

    assert result.direction == -1
    assert result.quant_signal is None
    assert result.llm_signal is not None
    assert result.llm_signal.source == "llm"


def test_trading_orchestrator_degrades_to_quant_only_when_llm_has_error(monkeypatch):
    class FakeQuantRunner:
        def __init__(self, _config):
            pass

        def get_signal(self, _ticker, _date):
            return _signal("quant", direction=1, confidence=0.8)

    class FakeLLMRunner:
        def __init__(self, _config):
            pass

        def get_signal(self, _ticker, _date):
            return _signal("llm", direction=0, confidence=0.0, metadata={"error": "timeout"})

    monkeypatch.setattr(orchestrator_module, "QuantRunner", FakeQuantRunner)
    monkeypatch.setattr(orchestrator_module, "LLMRunner", FakeLLMRunner)

    result = orchestrator_module.TradingOrchestrator(
        OrchestratorConfig(quant_backtest_path="/tmp/quant")
    ).get_combined_signal("AAPL", "2026-04-11")

    assert result.direction == 1
    assert result.quant_signal is not None
    assert result.quant_signal.source == "quant"
    assert result.llm_signal is None


def test_trading_orchestrator_raises_when_both_sources_degrade(monkeypatch):
    class FakeQuantRunner:
        def __init__(self, _config):
            pass

        def get_signal(self, _ticker, _date):
            return _signal(
                "quant",
                direction=0,
                confidence=0.0,
                metadata={"error": "no data"},
                reason_code=ReasonCode.QUANT_NO_DATA.value,
            )

    class FakeLLMRunner:
        def __init__(self, _config):
            pass

        def get_signal(self, _ticker, _date):
            return _signal("llm", direction=0, confidence=0.0, metadata={"error": "timeout"})

    monkeypatch.setattr(orchestrator_module, "QuantRunner", FakeQuantRunner)
    monkeypatch.setattr(orchestrator_module, "LLMRunner", FakeLLMRunner)

    with pytest.raises(CombinedSignalFailure) as exc_info:
        orchestrator_module.TradingOrchestrator(
            OrchestratorConfig(quant_backtest_path="/tmp/quant")
        ).get_combined_signal("AAPL", "2026-04-11")

    assert str(exc_info.value) == "both quant and llm signals are None"
    assert exc_info.value.reason_codes[0] == ReasonCode.QUANT_NO_DATA.value
    assert exc_info.value.reason_codes[-1] == ReasonCode.BOTH_SIGNALS_UNAVAILABLE.value
    assert exc_info.value.source_diagnostics["quant"]["reason_code"] == ReasonCode.QUANT_NO_DATA.value


def test_trading_orchestrator_surfaces_provider_mismatch_summary_when_llm_degrades(monkeypatch):
    class FakeQuantRunner:
        def __init__(self, _config):
            pass

        def get_signal(self, _ticker, _date):
            return _signal("quant", direction=1, confidence=0.8)

    class FakeLLMRunner:
        def __init__(self, _config):
            pass

        def get_signal(self, _ticker, _date):
            return _signal(
                "llm",
                direction=0,
                confidence=0.0,
                metadata={
                    "error": "provider mismatch",
                    "data_quality": {
                        "state": "provider_mismatch",
                        "provider": "anthropic",
                        "backend_url": "https://api.openai.com/v1",
                    },
                },
                reason_code=ReasonCode.PROVIDER_MISMATCH.value,
            )

    monkeypatch.setattr(orchestrator_module, "QuantRunner", FakeQuantRunner)
    monkeypatch.setattr(orchestrator_module, "LLMRunner", FakeLLMRunner)

    result = orchestrator_module.TradingOrchestrator(
        OrchestratorConfig(quant_backtest_path="/tmp/quant")
    ).get_combined_signal("AAPL", "2026-04-11")

    assert result.direction == 1
    assert result.quant_signal is not None
    assert result.llm_signal is None
    assert result.degrade_reason_codes == (ReasonCode.PROVIDER_MISMATCH.value,)
    assert result.metadata["data_quality"]["state"] == "provider_mismatch"
    assert result.metadata["data_quality"]["source"] == "llm"
    assert result.metadata["data_quality"]["issues"] == [
        {
            "source": "llm",
            "state": "provider_mismatch",
            "provider": "anthropic",
            "backend_url": "https://api.openai.com/v1",
        }
    ]
@ -0,0 +1,52 @@
from datetime import datetime
from pathlib import Path

from orchestrator.config import OrchestratorConfig
from orchestrator.contracts.error_taxonomy import ReasonCode
from orchestrator.llm_runner import LLMRunner


class _SuccessfulGraph:
    def propagate(self, ticker: str, date: str):
        return {"ticker": ticker, "date": date}, "BUY"


class _FailingGraph:
    def propagate(self, _ticker: str, _date: str):
        raise RuntimeError("graph offline")


def test_llm_runner_persists_result_contract_v1alpha1(monkeypatch, tmp_path):
    runner = LLMRunner(OrchestratorConfig(cache_dir=str(tmp_path)))
    monkeypatch.setattr(runner, "_get_graph", lambda: _SuccessfulGraph())

    signal = runner.get_signal("BRK/B", "2026-04-11")

    assert signal.ticker == "BRK/B"
    assert signal.direction == 1
    assert signal.confidence == 0.9
    assert signal.source == "llm"
    assert signal.metadata["rating"] == "BUY"
    assert signal.metadata["ticker"] == "BRK/B"
    assert signal.metadata["date"] == "2026-04-11"
    assert datetime.fromisoformat(signal.metadata["timestamp"])

    cache_path = Path(tmp_path) / "BRK_B_2026-04-11.json"
    assert cache_path.exists()


def test_llm_runner_returns_error_contract_when_graph_fails(monkeypatch, tmp_path):
    runner = LLMRunner(OrchestratorConfig(cache_dir=str(tmp_path)))
    monkeypatch.setattr(runner, "_get_graph", lambda: _FailingGraph())

    signal = runner.get_signal("AAPL", "2026-04-11")

    assert signal.ticker == "AAPL"
    assert signal.direction == 0
    assert signal.confidence == 0.0
    assert signal.source == "llm"
    assert signal.metadata["error"] == "graph offline"
    assert signal.metadata["reason_code"] == ReasonCode.LLM_SIGNAL_FAILED.value
    assert signal.metadata["contract_version"]
    assert signal.metadata["source"] == "llm"
    assert not (Path(tmp_path) / "AAPL_2026-04-11.json").exists()
@ -0,0 +1,59 @@
from types import SimpleNamespace

import pytest

import tradingagents.agents.analysts.fundamentals_analyst as fundamentals_module


class _FakePrompt:
    def __init__(self):
        self.partials = {}

    def partial(self, **kwargs):
        self.partials.update(kwargs)
        return self

    def __or__(self, _other):
        return _FakeChain(self)


class _FakeChain:
    def __init__(self, prompt):
        self.prompt = prompt

    def invoke(self, _messages):
        return SimpleNamespace(tool_calls=[], content=self.prompt.partials["system_message"])


class _FakePromptTemplate:
    last_prompt = None

    @classmethod
    def from_messages(cls, _messages):
        cls.last_prompt = _FakePrompt()
        return cls.last_prompt


class _FakeLLM:
    def bind_tools(self, _tools):
        return self


@pytest.mark.parametrize("compact_mode", [True, False])
def test_fundamentals_system_message_is_string(monkeypatch, compact_mode):
    monkeypatch.setattr(fundamentals_module, "ChatPromptTemplate", _FakePromptTemplate)
    monkeypatch.setattr(fundamentals_module, "use_compact_analysis_prompt", lambda: compact_mode)
    monkeypatch.setattr(fundamentals_module, "get_language_instruction", lambda: "")

    node = fundamentals_module.create_fundamentals_analyst(_FakeLLM())
    result = node(
        {
            "trade_date": "2026-04-11",
            "company_of_interest": "600519.SS",
            "messages": [],
        }
    )

    system_message = _FakePromptTemplate.last_prompt.partials["system_message"]

    assert isinstance(system_message, str)
    assert result["fundamentals_report"] == system_message
@ -0,0 +1,168 @@
import asyncio
from datetime import datetime, timezone

from orchestrator.contracts.error_taxonomy import ReasonCode
from orchestrator.contracts.result_contract import CombinedSignalFailure, FinalSignal, Signal
from orchestrator.live_mode import LiveMode


def _signal(*, source: str, direction: int, confidence: float) -> Signal:
    return Signal(
        ticker="AAPL",
        direction=direction,
        confidence=confidence,
        source=source,
        timestamp=datetime(2026, 4, 11, 12, 0, tzinfo=timezone.utc),
    )


class _StubOrchestrator:
    def __init__(self, responses):
        self._responses = responses

    def get_combined_signal(self, ticker: str, date: str):
        response = self._responses[(ticker, date)]
        if isinstance(response, Exception):
            raise response
        return response


def test_live_mode_serializes_degraded_contract_shape():
    live_mode = LiveMode(
        _StubOrchestrator(
            {
                ("AAPL", "2026-04-11"): FinalSignal(
                    ticker="AAPL",
                    direction=-1,
                    confidence=0.42,
                    quant_signal=None,
                    llm_signal=_signal(source="llm", direction=-1, confidence=0.6),
                    timestamp=datetime(2026, 4, 11, 12, 1, tzinfo=timezone.utc),
                    degrade_reason_codes=(ReasonCode.QUANT_SIGNAL_FAILED.value,),
                    metadata={
                        "contract_version": "v1alpha1",
                        "data_quality": {"state": "stale_data", "source": "quant"},
                        "research": {
                            "research_status": "degraded",
                            "research_mode": "degraded_synthesis",
                            "timed_out_nodes": ["Bull Researcher"],
                            "degraded_reason": "bull_researcher_timeout",
                            "covered_dimensions": ["market"],
                            "manager_confidence": None,
                        },
                        "source_diagnostics": {
                            "quant": {"reason_code": ReasonCode.STALE_DATA.value}
                        },
                    },
                )
            }
        )
    )

    results = asyncio.run(live_mode.run_once(["AAPL"], "2026-04-11"))

    assert results == [
        {
            "contract_version": "v1alpha1",
            "ticker": "AAPL",
            "date": "2026-04-11",
            "status": "degraded_success",
            "result": {
                "direction": -1,
                "confidence": 0.42,
                "quant_direction": None,
                "llm_direction": -1,
                "timestamp": "2026-04-11T12:01:00+00:00",
            },
            "error": None,
            "degradation": {
                "degraded": True,
                "reason_codes": [ReasonCode.QUANT_SIGNAL_FAILED.value],
                "source_diagnostics": {
                    "quant": {"reason_code": ReasonCode.STALE_DATA.value}
                },
            },
            "data_quality": {"state": "stale_data", "source": "quant"},
            "research": {
                "research_status": "degraded",
                "research_mode": "degraded_synthesis",
                "timed_out_nodes": ["Bull Researcher"],
                "degraded_reason": "bull_researcher_timeout",
                "covered_dimensions": ["market"],
                "manager_confidence": None,
            },
        }
    ]


def test_live_mode_serializes_failure_contract_shape():
    live_mode = LiveMode(
        _StubOrchestrator(
            {
                ("AAPL", "2026-04-11"): CombinedSignalFailure(
                    "both quant and llm signals are None",
                    reason_codes=(ReasonCode.BOTH_SIGNALS_UNAVAILABLE.value, ReasonCode.PROVIDER_MISMATCH.value),
                    source_diagnostics={
                        "llm": {
                            "reason_code": ReasonCode.PROVIDER_MISMATCH.value,
                            "research": {
                                "research_status": "failed",
                                "research_mode": "degraded_synthesis",
                                "timed_out_nodes": ["Bull Researcher"],
                                "degraded_reason": "bull_researcher_connectionerror",
                                "covered_dimensions": ["market"],
                                "manager_confidence": None,
                            },
                        }
                    },
                    data_quality={"state": "provider_mismatch", "source": "llm"},
                )
            }
        )
    )

    results = asyncio.run(live_mode.run_once(["AAPL"], "2026-04-11"))

    assert results == [
        {
            "contract_version": "v1alpha1",
            "ticker": "AAPL",
            "date": "2026-04-11",
            "status": "failed",
            "result": None,
            "error": {
                "code": "live_signal_failed",
                "message": "both quant and llm signals are None",
                "retryable": False,
            },
            "degradation": {
                "degraded": True,
                "reason_codes": [
                    ReasonCode.BOTH_SIGNALS_UNAVAILABLE.value,
                    ReasonCode.PROVIDER_MISMATCH.value,
                ],
                "source_diagnostics": {
                    "llm": {
                        "reason_code": ReasonCode.PROVIDER_MISMATCH.value,
                        "research": {
                            "research_status": "failed",
                            "research_mode": "degraded_synthesis",
                            "timed_out_nodes": ["Bull Researcher"],
                            "degraded_reason": "bull_researcher_connectionerror",
                            "covered_dimensions": ["market"],
                            "manager_confidence": None,
                        },
                    },
                },
            },
            "data_quality": {"state": "provider_mismatch", "source": "llm"},
            "research": {
                "research_status": "failed",
                "research_mode": "degraded_synthesis",
                "timed_out_nodes": ["Bull Researcher"],
                "degraded_reason": "bull_researcher_connectionerror",
                "covered_dimensions": ["market"],
                "manager_confidence": None,
            },
        }
    ]
@ -0,0 +1,329 @@
"""Tests for LLMRunner."""
|
||||
import logging
|
||||
import sys
|
||||
from types import ModuleType
|
||||
|
||||
import pytest
|
||||
|
||||
from orchestrator.config import OrchestratorConfig
|
||||
from orchestrator.contracts.error_taxonomy import ReasonCode
|
||||
from orchestrator.llm_runner import LLMRunner
|
||||
|
||||
|
||||
def _clear_runtime_llm_env(monkeypatch):
|
||||
for env_name in (
|
||||
"TRADINGAGENTS_LLM_PROVIDER",
|
||||
"TRADINGAGENTS_BACKEND_URL",
|
||||
"TRADINGAGENTS_MODEL",
|
||||
"TRADINGAGENTS_DEEP_MODEL",
|
||||
"TRADINGAGENTS_QUICK_MODEL",
|
||||
"ANTHROPIC_BASE_URL",
|
||||
"OPENAI_BASE_URL",
|
||||
"ANTHROPIC_API_KEY",
|
||||
"MINIMAX_API_KEY",
|
||||
"OPENAI_API_KEY",
|
||||
):
|
||||
monkeypatch.delenv(env_name, raising=False)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def runner(tmp_path, monkeypatch):
|
||||
_clear_runtime_llm_env(monkeypatch)
|
||||
cfg = OrchestratorConfig(
|
||||
cache_dir=str(tmp_path),
|
||||
trading_agents_config={
|
||||
"llm_provider": "anthropic",
|
||||
"backend_url": "https://api.minimaxi.com/anthropic",
|
||||
"deep_think_llm": "MiniMax-M2.7-highspeed",
|
||||
"quick_think_llm": "MiniMax-M2.7-highspeed",
|
||||
},
|
||||
)
|
||||
return LLMRunner(cfg)
|
||||
|
||||
|
||||
# All 5 known ratings
|
||||
@pytest.mark.parametrize("rating,expected", [
|
||||
("BUY", (1, 0.9)),
|
||||
("OVERWEIGHT", (1, 0.6)),
|
||||
("HOLD", (0, 0.5)),
|
||||
("UNDERWEIGHT", (-1, 0.6)),
|
||||
("SELL", (-1, 0.9)),
|
||||
])
|
||||
def test_map_rating_known(runner, rating, expected):
|
||||
assert runner._map_rating(rating) == expected
|
||||
|
||||
|
||||
# Unknown rating → (0, 0.5)
|
||||
def test_map_rating_unknown(runner):
|
||||
assert runner._map_rating("STRONG_BUY") == (0, 0.5)
|
||||
|
||||
|
||||
# Case-insensitive
|
||||
def test_map_rating_lowercase(runner):
|
||||
assert runner._map_rating("buy") == (1, 0.9)
|
||||
assert runner._map_rating("sell") == (-1, 0.9)
|
||||
assert runner._map_rating("hold") == (0, 0.5)
|
||||
|
||||
|
||||
# Empty string → (0, 0.5)
|
||||
def test_map_rating_empty_string(runner):
|
||||
assert runner._map_rating("") == (0, 0.5)
|
||||
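The mapping these tests pin down can be sketched as a plain lookup table. This is a hypothetical reimplementation inferred from the test cases, not the module's actual `_map_rating` code:

```python
# Hypothetical rating → (direction, confidence) table implied by the
# tests above; unknown or empty ratings fall back to neutral (0, 0.5).
RATING_MAP = {
    "BUY": (1, 0.9),
    "OVERWEIGHT": (1, 0.6),
    "HOLD": (0, 0.5),
    "UNDERWEIGHT": (-1, 0.6),
    "SELL": (-1, 0.9),
}


def map_rating(rating: str) -> tuple[int, float]:
    # Uppercasing makes the lookup case-insensitive, matching the tests.
    return RATING_MAP.get(rating.upper(), (0, 0.5))


assert map_rating("buy") == (1, 0.9)
assert map_rating("STRONG_BUY") == (0, 0.5)
assert map_rating("") == (0, 0.5)
```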


def test_get_graph_preserves_explicit_empty_selected_analysts(monkeypatch, tmp_path):
    captured_kwargs = {}

    class FakeTradingAgentsGraph:
        def __init__(self, **kwargs):
            captured_kwargs.update(kwargs)

    fake_module = ModuleType("tradingagents.graph.trading_graph")
    fake_module.TradingAgentsGraph = FakeTradingAgentsGraph
    monkeypatch.setitem(sys.modules, "tradingagents.graph.trading_graph", fake_module)

    cfg = OrchestratorConfig(
        cache_dir=str(tmp_path),
        trading_agents_config={"selected_analysts": [], "llm_provider": "anthropic"},
    )

    runner = LLMRunner(cfg)
    graph = runner._get_graph()

    assert isinstance(graph, FakeTradingAgentsGraph)
    assert captured_kwargs["config"] == cfg.trading_agents_config
    assert captured_kwargs["selected_analysts"] == []


def test_get_signal_returns_reason_code_on_propagate_failure(monkeypatch, tmp_path):
    _clear_runtime_llm_env(monkeypatch)

    class BrokenGraph:
        def propagate(self, ticker, date):
            raise RuntimeError("graph unavailable")

    cfg = OrchestratorConfig(
        cache_dir=str(tmp_path),
        trading_agents_config={
            "llm_provider": "anthropic",
            "backend_url": "https://api.minimaxi.com/anthropic",
            "deep_think_llm": "MiniMax-M2.7-highspeed",
            "quick_think_llm": "MiniMax-M2.7-highspeed",
        },
    )
    runner = LLMRunner(cfg)
    monkeypatch.setattr(runner, "_get_graph", lambda: BrokenGraph())

    signal = runner.get_signal("AAPL", "2024-01-02")

    assert signal.degraded is True
    assert signal.reason_code == ReasonCode.LLM_SIGNAL_FAILED.value
    assert signal.metadata["error"] == "graph unavailable"


def test_get_signal_classifies_provider_auth_failure(monkeypatch, tmp_path):
    _clear_runtime_llm_env(monkeypatch)

    class BrokenGraph:
        def propagate(self, ticker, date):
            raise RuntimeError(
                "Error code: 401 - {'type': 'error', 'error': {'type': 'authentication_error', 'message': \"login fail: Please carry the API secret key in the Authorization field\"}}"
|
||||
)
|
||||
|
||||
cfg = OrchestratorConfig(
|
||||
cache_dir=str(tmp_path),
|
||||
trading_agents_config={
|
||||
"llm_provider": "anthropic",
|
||||
"backend_url": "https://api.minimaxi.com/anthropic",
|
||||
"deep_think_llm": "MiniMax-M2.7-highspeed",
|
||||
"quick_think_llm": "MiniMax-M2.7-highspeed",
|
||||
},
|
||||
)
|
||||
runner = LLMRunner(cfg)
|
||||
monkeypatch.setattr(runner, "_get_graph", lambda: BrokenGraph())
|
||||
|
||||
signal = runner.get_signal("AAPL", "2024-01-02")
|
||||
|
||||
assert signal.degraded is True
|
||||
assert signal.reason_code == ReasonCode.PROVIDER_AUTH_FAILED.value
|
||||
assert signal.metadata["data_quality"]["state"] == "provider_auth_failed"
|
||||
|
||||
|
||||
def test_get_signal_returns_provider_mismatch_before_graph_init(tmp_path):
|
||||
cfg = OrchestratorConfig(
|
||||
cache_dir=str(tmp_path),
|
||||
trading_agents_config={
|
||||
"llm_provider": "anthropic",
|
||||
"backend_url": "https://api.openai.com/v1",
|
||||
},
|
||||
)
|
||||
runner = LLMRunner(cfg)
|
||||
|
||||
signal = runner.get_signal("AAPL", "2024-01-02")
|
||||
|
||||
assert signal.degraded is True
|
||||
assert signal.reason_code == ReasonCode.PROVIDER_MISMATCH.value
|
||||
assert signal.metadata["data_quality"]["state"] == "provider_mismatch"
|
||||
|
||||
|
||||
def test_get_signal_persists_research_provenance_on_success(monkeypatch, tmp_path):
|
||||
_clear_runtime_llm_env(monkeypatch)
|
||||
class SuccessfulGraph:
|
||||
def propagate(self, ticker, date):
|
||||
return {
|
||||
"investment_debate_state": {
|
||||
"research_status": "degraded",
|
||||
"research_mode": "degraded_synthesis",
|
||||
"timed_out_nodes": ["Bull Researcher"],
|
||||
"degraded_reason": "bull_researcher_timeout",
|
||||
"covered_dimensions": ["market"],
|
||||
"manager_confidence": None,
|
||||
}
|
||||
,
|
||||
"final_trade_decision_structured": {
|
||||
"rating": "BUY",
|
||||
"hold_subtype": "N/A",
|
||||
},
|
||||
}, "BUY"
|
||||
|
||||
cfg = OrchestratorConfig(
|
||||
cache_dir=str(tmp_path),
|
||||
trading_agents_config={
|
||||
"llm_provider": "anthropic",
|
||||
"backend_url": "https://api.minimaxi.com/anthropic",
|
||||
"deep_think_llm": "MiniMax-M2.7-highspeed",
|
||||
"quick_think_llm": "MiniMax-M2.7-highspeed",
|
||||
},
|
||||
)
|
||||
runner = LLMRunner(cfg)
|
||||
monkeypatch.setattr(runner, "_get_graph", lambda: SuccessfulGraph())
|
||||
|
||||
signal = runner.get_signal("AAPL", "2024-01-02")
|
||||
|
||||
assert signal.degraded is False
|
||||
assert signal.metadata["research"]["research_status"] == "degraded"
|
||||
assert signal.metadata["sample_quality"] == "degraded_research"
|
||||
assert signal.metadata["data_quality"]["state"] == "research_degraded"
|
||||
assert signal.metadata["decision_structured"]["rating"] == "BUY"
|
||||
|
||||
|
||||
# Phase 2: Provider matrix validation tests
|
||||
def test_detect_provider_mismatch_google_with_openai_url(tmp_path):
|
||||
cfg = OrchestratorConfig(
|
||||
cache_dir=str(tmp_path),
|
||||
trading_agents_config={
|
||||
"llm_provider": "google",
|
||||
"backend_url": "https://api.openai.com/v1",
|
||||
},
|
||||
)
|
||||
runner = LLMRunner(cfg)
|
||||
signal = runner.get_signal("AAPL", "2024-01-02")
|
||||
|
||||
assert signal.degraded is True
|
||||
assert signal.reason_code == ReasonCode.PROVIDER_MISMATCH.value
|
||||
|
||||
|
||||
def test_detect_provider_mismatch_xai_with_anthropic_url(tmp_path):
|
||||
cfg = OrchestratorConfig(
|
||||
cache_dir=str(tmp_path),
|
||||
trading_agents_config={
|
||||
"llm_provider": "xai",
|
||||
"backend_url": "https://api.minimaxi.com/anthropic",
|
||||
},
|
||||
)
|
||||
runner = LLMRunner(cfg)
|
||||
signal = runner.get_signal("AAPL", "2024-01-02")
|
||||
|
||||
assert signal.degraded is True
|
||||
assert signal.reason_code == ReasonCode.PROVIDER_MISMATCH.value
|
||||
|
||||
|
||||
def test_detect_provider_mismatch_ollama_with_openai_url(tmp_path):
|
||||
cfg = OrchestratorConfig(
|
||||
cache_dir=str(tmp_path),
|
||||
trading_agents_config={
|
||||
"llm_provider": "ollama",
|
||||
"backend_url": "https://api.openai.com/v1",
|
||||
},
|
||||
)
|
||||
runner = LLMRunner(cfg)
|
||||
signal = runner.get_signal("AAPL", "2024-01-02")
|
||||
|
||||
assert signal.degraded is True
|
||||
assert signal.reason_code == ReasonCode.PROVIDER_MISMATCH.value
|
||||
|
||||
|
||||
def test_detect_provider_mismatch_valid_anthropic_minimax(tmp_path):
|
||||
cfg = OrchestratorConfig(
|
||||
cache_dir=str(tmp_path),
|
||||
trading_agents_config={
|
||||
"llm_provider": "anthropic",
|
||||
"backend_url": "https://api.minimaxi.com/anthropic",
|
||||
},
|
||||
)
|
||||
runner = LLMRunner(cfg)
|
||||
mismatch = runner._detect_provider_mismatch()
|
||||
|
||||
assert mismatch is None
|
||||
|
||||
|
||||
def test_detect_provider_mismatch_valid_openai(tmp_path):
|
||||
cfg = OrchestratorConfig(
|
||||
cache_dir=str(tmp_path),
|
||||
trading_agents_config={
|
||||
"llm_provider": "openai",
|
||||
"backend_url": "https://api.openai.com/v1",
|
||||
},
|
||||
)
|
||||
runner = LLMRunner(cfg)
|
||||
mismatch = runner._detect_provider_mismatch()
|
||||
|
||||
assert mismatch is None
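The Phase 2 cases exercise a provider/backend-URL compatibility matrix. A minimal sketch of such a check, reconstructed only from the mismatch cases above — the table contents (especially the google entry) and the function name are assumptions, not the real `_detect_provider_mismatch`:

```python
# Hypothetical provider → accepted-URL-fragment table; entries beyond the
# anthropic/openai pairs exercised by the tests are guesses.
ALLOWED_URL_FRAGMENTS = {
    "openai": ("api.openai.com",),
    "anthropic": ("api.anthropic.com", "api.minimaxi.com/anthropic"),
}


def detect_provider_mismatch(provider: str, backend_url: str):
    """Return a human-readable mismatch description, or None if compatible."""
    fragments = ALLOWED_URL_FRAGMENTS.get(provider)
    if fragments is None:
        # Providers outside the table (xai, ollama, google, ...) would need
        # their own rows; treat them as mismatched here.
        return f"no backend matrix entry for provider: {provider}"
    if not any(fragment in backend_url for fragment in fragments):
        return f"provider {provider} cannot use backend_url {backend_url}"
    return None
```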


# Phase 3: Timeout configuration validation tests
def test_timeout_validation_warns_for_multiple_analysts_low_timeout(tmp_path, caplog):
    cfg = OrchestratorConfig(
        cache_dir=str(tmp_path),
        trading_agents_config={
            "llm_provider": "anthropic",
            "backend_url": "https://api.minimaxi.com/anthropic",
            "selected_analysts": ["market", "social", "news", "fundamentals"],
            "analyst_node_timeout_secs": 75.0,
        },
    )
    with caplog.at_level(logging.WARNING):
        runner = LLMRunner(cfg)

    assert any("analyst_node_timeout_secs=75.0s may be insufficient" in record.message for record in caplog.records)


def test_timeout_validation_no_warn_for_single_analyst(tmp_path, caplog):
    cfg = OrchestratorConfig(
        cache_dir=str(tmp_path),
        trading_agents_config={
            "llm_provider": "anthropic",
            "backend_url": "https://api.minimaxi.com/anthropic",
            "selected_analysts": ["market"],
            "analyst_node_timeout_secs": 75.0,
        },
    )
    with caplog.at_level(logging.WARNING):
        runner = LLMRunner(cfg)

    assert not any("may be insufficient" in record.message for record in caplog.records)


def test_timeout_validation_no_warn_for_sufficient_timeout(tmp_path, caplog):
    cfg = OrchestratorConfig(
        cache_dir=str(tmp_path),
        trading_agents_config={
            "llm_provider": "anthropic",
            "backend_url": "https://api.minimaxi.com/anthropic",
            "selected_analysts": ["market", "social", "news", "fundamentals"],
            "analyst_node_timeout_secs": 120.0,
            "research_node_timeout_secs": 75.0,
        },
    )
    with caplog.at_level(logging.WARNING):
        runner = LLMRunner(cfg)

    assert not any("may be insufficient" in record.message for record in caplog.records)
@@ -0,0 +1,35 @@
import json
from datetime import date

from orchestrator.market_calendar import get_market_holidays, is_non_trading_day, update_market_holidays


def test_is_non_trading_day_marks_a_share_holiday():
    assert is_non_trading_day('600519.SS', date(2024, 10, 2)) is True


def test_is_non_trading_day_marks_nyse_holiday():
    assert is_non_trading_day('AAPL', date(2024, 3, 29)) is True


def test_is_non_trading_day_leaves_regular_weekday_open():
    assert is_non_trading_day('AAPL', date(2024, 3, 28)) is False


def test_update_market_holidays_creates_maintainable_future_year_entry(tmp_path):
    data_path = tmp_path / "market_holidays.json"
    data_path.write_text(json.dumps({"a_share": {}}))

    update_market_holidays(
        market="a_share",
        year=2027,
        holiday_dates=["2027-02-10", "2027-02-11"],
        data_path=data_path,
    )

    assert get_market_holidays("a_share", 2027, data_path=data_path) == {
        date(2027, 2, 10),
        date(2027, 2, 11),
    }
    assert is_non_trading_day("600519.SS", date(2027, 2, 10)) is False
    assert is_non_trading_day("600519.SS", date(2027, 2, 10), data_path=data_path) is True
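The round-trip test above implies an on-disk layout: holidays keyed by market, then by year, as ISO date strings. A minimal sketch of that layout — this is a hypothetical re-implementation for illustration, not the real `orchestrator.market_calendar` module:

```python
import json
from datetime import date
from pathlib import Path


def update_market_holidays(market, year, holiday_dates, data_path):
    # Assumed layout: {"a_share": {"2027": ["2027-02-10", ...]}, ...}
    data = json.loads(Path(data_path).read_text())
    data.setdefault(market, {})[str(year)] = sorted(holiday_dates)
    Path(data_path).write_text(json.dumps(data, indent=2))


def get_market_holidays(market, year, data_path):
    data = json.loads(Path(data_path).read_text())
    return {date.fromisoformat(d) for d in data.get(market, {}).get(str(year), [])}
```

Passing an explicit `data_path` (as the test does) keeps the default bundled calendar untouched, which is why the same date reads as trading-open without it.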

@@ -0,0 +1,58 @@
from orchestrator.profile_ab import build_comparison
from orchestrator.profile_trace_utils import build_trace_summary


def test_build_trace_summary_counts_degraded_events():
    summary = build_trace_summary(
        [
            {"nodes": ["Market Analyst"], "elapsed_ms": 110, "research_status": None, "degraded_reason": None},
            {"nodes": ["Bull Researcher"], "elapsed_ms": 220, "research_status": "degraded", "degraded_reason": "bull_timeout"},
        ],
        {"analyst": 0.11, "research": 0.22},
    )

    assert summary["event_count"] == 2
    assert summary["total_elapsed_ms"] == 330
    assert summary["degraded_event_count"] == 1
    assert summary["final_research_status"] == "degraded"
    assert summary["node_hit_count"]["Bull Researcher"] == 1


def test_build_comparison_prefers_faster_less_degraded_cohort():
    traces_a = [
        {
            "status": "ok",
            "trace_schema_version": "tradingagents.profile_trace.v1alpha1",
            "_source_path": "/tmp/a.json",
            "variant_label": "compact",
            "summary": {
                "total_elapsed_ms": 450,
                "event_count": 4,
                "final_research_status": "full",
                "phase_totals_seconds": {"research": 0.22, "risk": 0.10},
            },
        }
    ]
    traces_b = [
        {
            "status": "ok",
            "trace_schema_version": "tradingagents.profile_trace.v1alpha1",
            "_source_path": "/tmp/b.json",
            "variant_label": "verbose",
            "summary": {
                "total_elapsed_ms": 700,
                "event_count": 5,
                "final_research_status": "degraded",
                "phase_totals_seconds": {"research": 0.45, "risk": 0.15},
            },
        }
    ]

    payload = build_comparison(traces_a, traces_b, label_a="A", label_b="B")

    assert payload["cohorts"]["A"]["median_total_elapsed_ms"] == 450
    assert payload["cohorts"]["A"]["trace_schema_versions"] == ["tradingagents.profile_trace.v1alpha1"]
    assert payload["cohorts"]["B"]["degraded_run_count"] == 1
    assert payload["comparison"]["faster_label"] == "A"
    assert payload["comparison"]["lower_error_rate_label"] is None
    assert payload["comparison"]["recommended_label"] == "A"
@@ -0,0 +1,216 @@
import json
from contextlib import contextmanager
from datetime import datetime as real_datetime, timezone
from pathlib import Path

import pytest

import orchestrator.profile_stage_chain as profile_stage_chain


class _FakeGraphStream:
    def __init__(self, events):
        self._events = events

    def stream(self, state, stream_mode, config):
        assert state["company_of_interest"] == "AAPL"
        assert state["trade_date"] == "2026-04-11"
        assert stream_mode == "updates"
        assert config == {"recursion_limit": 100, "max_concurrency": 1}
        for event in self._events:
            yield event


class _FakeTradingAgentsGraph:
    def __init__(self, *, selected_analysts, config):
        assert selected_analysts == ["market", "social"]
        assert config["selected_analysts"] == ["market", "social"]
        assert config["analysis_prompt_style"] == "balanced"
        self.graph = _FakeGraphStream(
            [
                {
                    "Bull Researcher": {
                        "investment_debate_state": {
                            "research_status": "degraded",
                            "degraded_reason": "bull_researcher_timeout",
                            "history": "Bull Analyst: case",
                            "current_response": "Bull Analyst: case",
                        }
                    }
                },
                {
                    "Research Manager": {
                        "investment_debate_state": {
                            "research_status": "degraded",
                            "degraded_reason": "research_manager_timeout",
                            "history": "Bull Analyst: case\nRecommendation: HOLD",
                            "current_response": "Recommendation: HOLD",
                        }
                    }
                },
            ]
        )


class _FakePropagator:
    def create_initial_state(self, ticker, date):
        return {
            "company_of_interest": ticker,
            "trade_date": date,
            "investment_debate_state": {},
        }


class _FixedDateTime:
    @staticmethod
    def now(tz=None):
        return real_datetime(2026, 4, 14, 0, 0, tzinfo=timezone.utc)


@pytest.mark.parametrize(
    ("event", "expected"),
    [
        ({}, (None, None, 0, 0)),
        (
            {
                "Bull Researcher": {
                    "investment_debate_state": {
                        "research_status": "degraded",
                        "degraded_reason": "bull_researcher_timeout",
                        "history": "abc",
                        "current_response": "xy",
                    }
                }
            },
            ("degraded", "bull_researcher_timeout", 3, 2),
        ),
    ],
)
def test_extract_research_state_captures_trace_fields(event, expected):
    assert profile_stage_chain._extract_research_state(event) == expected


def test_main_writes_trace_payload_with_research_provenance(monkeypatch, tmp_path, capsys):
    monotonic_points = iter([100.0, 100.4, 101.0])

    monkeypatch.setattr(profile_stage_chain, "TradingAgentsGraph", _FakeTradingAgentsGraph)
    monkeypatch.setattr(profile_stage_chain, "Propagator", _FakePropagator)
    monkeypatch.setattr(profile_stage_chain.time, "monotonic", lambda: next(monotonic_points))
    monkeypatch.setattr(profile_stage_chain, "datetime", _FixedDateTime)

    @contextmanager
    def fake_guard(_seconds):
        yield profile_stage_chain.threading.Event()

    monkeypatch.setattr(profile_stage_chain, "_overall_timeout_guard", fake_guard)
    monkeypatch.setattr(
        "sys.argv",
        [
            "profile_stage_chain.py",
            "--ticker",
            "AAPL",
            "--date",
            "2026-04-11",
            "--selected-analysts",
            "market,social",
            "--analysis-prompt-style",
            "balanced",
            "--dump-dir",
            str(tmp_path),
        ],
    )

    profile_stage_chain.main()

    output = json.loads(capsys.readouterr().out)
    assert output["status"] == "ok"
    assert output["ticker"] == "AAPL"
    assert output["date"] == "2026-04-11"
    assert output["selected_analysts"] == ["market", "social"]
    assert output["analysis_prompt_style"] == "balanced"
    assert output["phase_totals_seconds"] == {"research": 1.0}
    assert output["raw_events"] == []
    assert output["node_timings"] == [
        {
            "run_id": "20260414T000000Z",
            "nodes": ["Bull Researcher"],
            "phases": ["research"],
            "llm_kinds": ["quick"],
            "start_at": 0.0,
            "end_at": 0.4,
            "elapsed_ms": 400,
            "selected_analysts": ["market", "social"],
            "analysis_prompt_style": "balanced",
            "research_status": "degraded",
            "degraded_reason": "bull_researcher_timeout",
            "history_len": len("Bull Analyst: case"),
            "response_len": len("Bull Analyst: case"),
        },
        {
            "run_id": "20260414T000000Z",
            "nodes": ["Research Manager"],
            "phases": ["research"],
            "llm_kinds": ["deep"],
            "start_at": 0.4,
            "end_at": 1.0,
            "elapsed_ms": 600,
            "selected_analysts": ["market", "social"],
            "analysis_prompt_style": "balanced",
            "research_status": "degraded",
            "degraded_reason": "research_manager_timeout",
            "history_len": len("Bull Analyst: case\nRecommendation: HOLD"),
            "response_len": len("Recommendation: HOLD"),
        },
    ]

    dump_path = Path(output["dump_path"])
    assert dump_path.exists()
    assert json.loads(dump_path.read_text()) == output


class _KeyboardInterruptGraph:
    def __init__(self, *, selected_analysts, config):
        self.graph = self

    def stream(self, state, stream_mode, config):
        raise KeyboardInterrupt
        yield  # unreachable, but makes stream() a generator like the real graph


def test_main_reports_cross_platform_timeout(monkeypatch, tmp_path, capsys):
    monkeypatch.setattr(profile_stage_chain, "TradingAgentsGraph", _KeyboardInterruptGraph)
    monkeypatch.setattr(profile_stage_chain, "Propagator", _FakePropagator)
    monkeypatch.setattr(profile_stage_chain, "datetime", _FixedDateTime)

    @contextmanager
    def timed_out_guard(_seconds):
        event = profile_stage_chain.threading.Event()
        event.set()
        yield event

    monkeypatch.setattr(profile_stage_chain, "_overall_timeout_guard", timed_out_guard)
    monkeypatch.setattr(
        "sys.argv",
        [
            "profile_stage_chain.py",
            "--ticker",
            "AAPL",
            "--date",
            "2026-04-11",
            "--selected-analysts",
            "market,social",
            "--analysis-prompt-style",
            "balanced",
            "--overall-timeout",
            "1",
            "--dump-dir",
            str(tmp_path),
        ],
    )

    profile_stage_chain.main()

    output = json.loads(capsys.readouterr().out)
    assert output["status"] == "error"
    assert output["exception_type"] == "_ProfileTimeout"
    assert output["error"] == "profiling timeout after 1s"
@@ -0,0 +1,95 @@
import importlib.util
import sys
from pathlib import Path
from types import ModuleType

import pytest


def _load_factory_module(monkeypatch):
    package_name = "_lane4_factory_testpkg"
    package = ModuleType(package_name)
    package.__path__ = []
    monkeypatch.setitem(sys.modules, package_name, package)

    base_module = ModuleType(f"{package_name}.base_client")

    class BaseLLMClient:
        pass

    base_module.BaseLLMClient = BaseLLMClient
    monkeypatch.setitem(sys.modules, f"{package_name}.base_client", base_module)

    calls = []

    def _register_client(module_suffix: str, class_name: str):
        module = ModuleType(f"{package_name}.{module_suffix}")

        class Client:
            def __init__(self, *args, **kwargs):
                self.args = args
                self.kwargs = kwargs
                calls.append((class_name, args, kwargs))

        setattr(module, class_name, Client)
        monkeypatch.setitem(sys.modules, module.__name__, module)

    _register_client("openai_client", "OpenAIClient")
    _register_client("anthropic_client", "AnthropicClient")
    _register_client("google_client", "GoogleClient")

    factory_path = (
        Path(__file__).resolve().parents[2]
        / "tradingagents"
        / "llm_clients"
        / "factory.py"
    )
    spec = importlib.util.spec_from_file_location(f"{package_name}.factory", factory_path)
    module = importlib.util.module_from_spec(spec)
    monkeypatch.setitem(sys.modules, spec.name, module)
    assert spec.loader is not None
    spec.loader.exec_module(module)
    return module, calls


@pytest.mark.parametrize(
    ("provider", "expected_class", "expected_provider"),
    [
        ("openai", "OpenAIClient", "openai"),
        ("OpenRouter", "OpenAIClient", "openrouter"),
        ("ollama", "OpenAIClient", "ollama"),
        ("xai", "OpenAIClient", "xai"),
        ("anthropic", "AnthropicClient", None),
        ("google", "GoogleClient", None),
    ],
)
def test_create_llm_client_routes_provider_to_expected_adapter(
    monkeypatch,
    provider,
    expected_class,
    expected_provider,
):
    factory_module, calls = _load_factory_module(monkeypatch)

    client = factory_module.create_llm_client(
        provider=provider,
        model="demo-model",
        base_url="https://example.test",
        timeout=30,
    )

    assert client is not None
    assert calls[-1][0] == expected_class
    assert calls[-1][1] == ("demo-model", "https://example.test")
    if expected_provider is None:
        assert "provider" not in calls[-1][2]
    else:
        assert calls[-1][2]["provider"] == expected_provider
    assert calls[-1][2]["timeout"] == 30


def test_create_llm_client_rejects_unsupported_provider(monkeypatch):
    factory_module, _calls = _load_factory_module(monkeypatch)

    with pytest.raises(ValueError, match="Unsupported LLM provider"):
        factory_module.create_llm_client("unknown", "demo-model")
@@ -0,0 +1,201 @@
"""Tests for QuantRunner._calc_confidence()."""
import json
import sqlite3

import pandas as pd
import pytest

from orchestrator.config import OrchestratorConfig
from orchestrator.contracts.error_taxonomy import ReasonCode
from orchestrator.quant_runner import QuantRunner


def _make_runner(tmp_path):
    """Create a QuantRunner with a minimal SQLite DB so __init__ succeeds."""
    db_dir = tmp_path / "research_results"
    db_dir.mkdir(parents=True)
    db_path = db_dir / "runs.db"

    with sqlite3.connect(str(db_path)) as conn:
        conn.execute(
            """CREATE TABLE backtest_results (
                id INTEGER PRIMARY KEY,
                strategy_type TEXT,
                params TEXT,
                sharpe_ratio REAL
            )"""
        )
        conn.execute(
            "INSERT INTO backtest_results (strategy_type, params, sharpe_ratio) VALUES (?, ?, ?)",
            (
                "BollingerStrategy",
                json.dumps({
                    "period": 20,
                    "num_std": 2.0,
                    "position_pct": 0.2,
                    "stop_loss_pct": 0.05,
                    "take_profit_pct": 0.15,
                }),
                1.5,
            ),
        )

    cfg = OrchestratorConfig(quant_backtest_path=str(tmp_path))
    return QuantRunner(cfg)


@pytest.fixture
def runner(tmp_path):
    return _make_runner(tmp_path)


def test_calc_confidence_max_sharpe_zero(runner):
    assert runner._calc_confidence(1.0, 0) == 0.5


def test_calc_confidence_half(runner):
    result = runner._calc_confidence(1.0, 2.0)
    assert result == pytest.approx(0.5)


def test_calc_confidence_full(runner):
    result = runner._calc_confidence(2.0, 2.0)
    assert result == pytest.approx(1.0)


def test_calc_confidence_clamped_above(runner):
    result = runner._calc_confidence(3.0, 2.0)
    assert result == pytest.approx(1.0)


def test_calc_confidence_clamped_below(runner):
    result = runner._calc_confidence(-1.0, 2.0)
    assert result == pytest.approx(0.0)
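The five cases above are consistent with a simple clamped ratio. A minimal sketch of that relationship — the function name is hypothetical and the real `QuantRunner._calc_confidence` may differ:

```python
def calc_confidence(sharpe: float, max_sharpe: float) -> float:
    """Map a strategy Sharpe ratio to a [0, 1] confidence score.

    Implied by the tests: neutral 0.5 when there is no historical best,
    otherwise the ratio sharpe / max_sharpe clamped into [0, 1].
    """
    if max_sharpe == 0:
        return 0.5  # no baseline to compare against
    return max(0.0, min(1.0, sharpe / max_sharpe))
```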


def test_get_signal_returns_reason_code_when_no_data(runner, monkeypatch):
    monkeypatch.setattr(
        "orchestrator.quant_runner.yf.download",
        lambda *args, **kwargs: type("EmptyFrame", (), {"empty": True})(),
    )

    signal = runner.get_signal("AAPL", "2024-01-02")

    assert signal.degraded is True
    assert signal.reason_code == ReasonCode.QUANT_NO_DATA.value


def test_get_signal_marks_non_trading_day_on_a_share_holiday(runner, monkeypatch):
    monkeypatch.setattr(
        "orchestrator.quant_runner.yf.download",
        lambda *args, **kwargs: pd.DataFrame(),
    )

    signal = runner.get_signal("600519.SS", "2024-10-02")

    assert signal.degraded is True
    assert signal.reason_code == ReasonCode.NON_TRADING_DAY.value
    assert signal.metadata["data_quality"]["state"] == "non_trading_day"


def test_get_signal_marks_non_trading_day_on_market_holiday(runner, monkeypatch):
    monkeypatch.setattr(
        "orchestrator.quant_runner.yf.download",
        lambda *args, **kwargs: pd.DataFrame(),
    )

    signal = runner.get_signal("AAPL", "2024-03-29")

    assert signal.degraded is True
    assert signal.reason_code == ReasonCode.NON_TRADING_DAY.value
    assert signal.metadata["data_quality"]["state"] == "non_trading_day"


def test_get_signal_marks_non_trading_day_on_weekend(runner, monkeypatch):
    monkeypatch.setattr(
        "orchestrator.quant_runner.yf.download",
        lambda *args, **kwargs: pd.DataFrame(),
    )

    signal = runner.get_signal("AAPL", "2024-01-06")

    assert signal.degraded is True
    assert signal.reason_code == ReasonCode.NON_TRADING_DAY.value
    assert signal.metadata["data_quality"]["state"] == "non_trading_day"


def test_get_signal_reports_last_available_date_on_market_holiday(runner, monkeypatch):
    holiday_frame = pd.DataFrame(
        {
            "Open": [10.0],
            "High": [11.0],
            "Low": [9.0],
            "Close": [10.5],
            "Volume": [1000],
        },
        index=pd.to_datetime(["2024-07-03"]),
    )
    monkeypatch.setattr(
        "orchestrator.quant_runner.yf.download",
        lambda *args, **kwargs: holiday_frame,
    )

    signal = runner.get_signal("AAPL", "2024-07-04")

    assert signal.degraded is True
    assert signal.reason_code == ReasonCode.NON_TRADING_DAY.value
    assert signal.metadata["data_quality"]["state"] == "non_trading_day"
    assert signal.metadata["data_quality"]["last_available_date"] == "2024-07-03"


def test_get_signal_marks_stale_data_when_requested_day_missing(runner, monkeypatch):
    stale_frame = pd.DataFrame(
        {
            "Open": [10.0],
            "High": [11.0],
            "Low": [9.0],
            "Close": [10.5],
            "Volume": [1000],
        },
        index=pd.to_datetime(["2024-01-01"]),
    )
    monkeypatch.setattr(
        "orchestrator.quant_runner.yf.download",
        lambda *args, **kwargs: stale_frame,
    )

    signal = runner.get_signal("AAPL", "2024-01-02")

    assert signal.degraded is True
    assert signal.reason_code == ReasonCode.STALE_DATA.value
    assert signal.metadata["data_quality"]["state"] == "stale_data"


def test_get_signal_marks_partial_data_when_required_columns_missing(runner, monkeypatch):
    partial_frame = pd.DataFrame(
        {
            "Open": [10.0],
            "Low": [9.0],
            "Close": [10.5],
            "Volume": [1000],
        },
        index=pd.to_datetime(["2024-01-02"]),
    )
    monkeypatch.setattr(
        "orchestrator.quant_runner.yf.download",
        lambda *args, **kwargs: partial_frame,
    )

    signal = runner.get_signal("AAPL", "2024-01-02")

    assert signal.degraded is True
    assert signal.reason_code == ReasonCode.PARTIAL_DATA.value
    assert signal.metadata["data_quality"]["state"] == "partial_data"


def test_get_signal_uses_yf_retry_wrapper(runner, monkeypatch):
    calls = []

    def fake_retry(func, max_retries=3, base_delay=2.0):
        calls.append((max_retries, base_delay))
        return pd.DataFrame()

    monkeypatch.setattr("orchestrator.quant_runner.yf_retry", fake_retry)
    monkeypatch.setattr("orchestrator.quant_runner.is_non_trading_day", lambda *_args, **_kwargs: False)

    signal = runner.get_signal("AAPL", "2024-01-02")

    assert calls == [(3, 2.0)]
    assert signal.reason_code == ReasonCode.QUANT_NO_DATA.value
@@ -0,0 +1,121 @@
"""Tests for SignalMerger in orchestrator/signals.py."""
import math
from datetime import datetime, timezone

import pytest

from orchestrator.config import OrchestratorConfig
from orchestrator.signals import Signal, SignalMerger


def _make_signal(ticker="AAPL", direction=1, confidence=0.8, source="quant"):
    return Signal(
        ticker=ticker,
        direction=direction,
        confidence=confidence,
        source=source,
        timestamp=datetime.now(timezone.utc),
    )


@pytest.fixture
def merger():
    return SignalMerger(OrchestratorConfig())


# Branch 1: both None → ValueError
def test_merge_both_none_raises(merger):
    with pytest.raises(ValueError):
        merger.merge(None, None)


# Branch 2: quant only
def test_merge_quant_only(merger):
    cfg = OrchestratorConfig()
    q = _make_signal(direction=1, confidence=0.8, source="quant")
    result = merger.merge(q, None)
    assert result.direction == 1
    expected_conf = min(0.8 * cfg.quant_solo_penalty, cfg.quant_weight_cap)
    assert math.isclose(result.confidence, expected_conf)
    assert result.quant_signal is q
    assert result.llm_signal is None


def test_merge_quant_only_capped(merger):
    cfg = OrchestratorConfig()
    # confidence=1.0 * quant_solo_penalty=0.8 → 0.8 == quant_weight_cap=0.8, so no clamp needed
    q = _make_signal(direction=-1, confidence=1.0, source="quant")
    result = merger.merge(q, None)
    expected_conf = min(1.0 * cfg.quant_solo_penalty, cfg.quant_weight_cap)
    assert math.isclose(result.confidence, expected_conf)
    assert result.direction == -1


# Branch 3: llm only
def test_merge_llm_only(merger):
    cfg = OrchestratorConfig()
    llm_sig = _make_signal(direction=-1, confidence=0.9, source="llm")
    result = merger.merge(None, llm_sig, degradation_reasons=["quant_signal_failed"])
    assert result.direction == -1
    expected_conf = min(0.9 * cfg.llm_solo_penalty, cfg.llm_weight_cap)
    assert math.isclose(result.confidence, expected_conf)
    assert result.llm_signal is llm_sig
    assert result.quant_signal is None
    assert result.degrade_reason_codes == ("quant_signal_failed",)


def test_merge_llm_only_capped(merger):
    cfg = OrchestratorConfig()
    # confidence=1.0 * llm_solo_penalty=0.7 → 0.7 < llm_weight_cap=0.9, so the cap never binds
    llm_sig = _make_signal(direction=1, confidence=1.0, source="llm")
    result = merger.merge(None, llm_sig)
    expected_conf = min(1.0 * cfg.llm_solo_penalty, cfg.llm_weight_cap)
    assert math.isclose(result.confidence, expected_conf)


# Branch 4: both present, same direction
def test_merge_both_same_direction(merger):
    cfg = OrchestratorConfig()
    q = _make_signal(direction=1, confidence=0.6, source="quant")
    llm_sig = _make_signal(direction=1, confidence=0.8, source="llm")
    result = merger.merge(q, llm_sig)
    assert result.direction == 1
    # caps applied per-signal before merging
    quant_conf = min(0.6, cfg.quant_weight_cap)  # 0.6
    llm_conf = min(0.8, cfg.llm_weight_cap)  # 0.8
    weighted_sum = 1 * quant_conf + 1 * llm_conf  # 1.4
    total_conf = quant_conf + llm_conf  # 1.4
    expected_conf = abs(weighted_sum) / total_conf  # 1.0
    assert math.isclose(result.confidence, expected_conf)


# Branch 5: both present, opposite direction
def test_merge_both_opposite_direction_quant_wins(merger):
    cfg = OrchestratorConfig()
    # quant stronger: direction should be quant's
    q = _make_signal(direction=1, confidence=0.9, source="quant")
    llm_sig = _make_signal(direction=-1, confidence=0.3, source="llm")
    result = merger.merge(q, llm_sig)
    assert result.direction == 1
    # caps applied per-signal before merging
    quant_conf = min(0.9, cfg.quant_weight_cap)  # 0.8
    llm_conf = min(0.3, cfg.llm_weight_cap)  # 0.3
    weighted_sum = 1 * quant_conf + (-1) * llm_conf  # 0.5
    total_conf = quant_conf + llm_conf  # 1.1
    expected_conf = abs(weighted_sum) / total_conf
    assert math.isclose(result.confidence, expected_conf)
|
||||
|
||||
|
||||
def test_merge_both_opposite_direction_llm_wins(merger):
|
||||
q = _make_signal(direction=1, confidence=0.2, source="quant")
|
||||
l = _make_signal(direction=-1, confidence=0.8, source="llm")
|
||||
result = merger.merge(q, l)
|
||||
assert result.direction == -1
|
||||
|
||||
|
||||
# weighted_sum=0 → direction=HOLD
|
||||
def test_merge_weighted_sum_zero(merger):
|
||||
q = _make_signal(direction=1, confidence=0.5, source="quant")
|
||||
l = _make_signal(direction=-1, confidence=0.5, source="llm")
|
||||
result = merger.merge(q, l)
|
||||
assert result.direction == 0
|
||||
assert math.isclose(result.confidence, 0.0)
|
||||
|
|
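Read together, these branch tests pin down a cap-then-weight rule. The following is a minimal sketch of that math reconstructed only from the assertions above — `SignalMerger`'s real implementation may differ, and the cap defaults of 0.8/0.9 are assumptions taken from the inline comments:

```python
def merge_confidence(q_dir, q_conf, l_dir, l_conf, quant_cap=0.8, llm_cap=0.9):
    """Cap each signal's confidence, then direction-weight the capped values.

    Returns (direction, confidence); a balanced conflict yields HOLD (0, 0.0).
    """
    q = min(q_conf, quant_cap)
    l = min(l_conf, llm_cap)
    weighted_sum = q_dir * q + l_dir * l
    total = q + l
    if weighted_sum == 0:
        return 0, 0.0  # equal and opposite → HOLD
    direction = 1 if weighted_sum > 0 else -1
    return direction, abs(weighted_sum) / total
```

With agreeing signals the ratio collapses to 1.0; with conflicting signals it shrinks toward zero, which is exactly what the Branch 4 and Branch 5 tests assert.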
@@ -0,0 +1,162 @@
import json
import os
from pathlib import Path

from tradingagents.default_config import DEFAULT_CONFIG, get_default_config, load_project_env, normalize_runtime_llm_config
from tradingagents.graph.trading_graph import TradingAgentsGraph, _merge_with_default_config


def test_merge_with_default_config_keeps_required_defaults():
    merged = _merge_with_default_config({
        "llm_provider": "anthropic",
        "backend_url": "https://example.com/api",
    })

    assert merged["llm_provider"] == "anthropic"
    assert merged["backend_url"] == "https://example.com/api"
    assert merged["project_dir"] == DEFAULT_CONFIG["project_dir"]
    assert merged["results_dir"] == DEFAULT_CONFIG["results_dir"]


def test_merge_with_default_config_merges_nested_vendor_settings():
    merged = _merge_with_default_config({
        "data_vendors": {
            "news_data": "alpha_vantage",
        },
        "tool_vendors": {
            "get_stock_data": "alpha_vantage",
        },
    })

    assert merged["data_vendors"]["news_data"] == "alpha_vantage"
    assert merged["data_vendors"]["core_stock_apis"] == DEFAULT_CONFIG["data_vendors"]["core_stock_apis"]
    assert merged["tool_vendors"]["get_stock_data"] == "alpha_vantage"


def test_get_default_config_prefers_runtime_minimax_env(monkeypatch):
    monkeypatch.setenv("ANTHROPIC_BASE_URL", "https://api.minimaxi.com/anthropic")
    monkeypatch.setenv("TRADINGAGENTS_MODEL", "MiniMax-M2.7-highspeed")
    monkeypatch.setenv("MINIMAX_API_KEY", "test-minimax-key")
    monkeypatch.delenv("TRADINGAGENTS_LLM_PROVIDER", raising=False)
    monkeypatch.delenv("TRADINGAGENTS_BACKEND_URL", raising=False)

    config = get_default_config()

    assert config["llm_provider"] == "anthropic"
    assert config["backend_url"] == "https://api.minimaxi.com/anthropic"
    assert config["deep_think_llm"] == "MiniMax-M2.7-highspeed"
    assert config["quick_think_llm"] == "MiniMax-M2.7-highspeed"
    assert config["api_key"] == "test-minimax-key"
    assert config["llm_timeout"] == 60.0
    assert config["llm_max_retries"] == 1
    assert config["minimax_retry_attempts"] == 2


def test_load_project_env_overrides_stale_shell_vars(monkeypatch, tmp_path):
    monkeypatch.setenv("ANTHROPIC_BASE_URL", "https://stale.example.com/api")
    env_file = tmp_path / ".env"
    env_file.write_text("ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic\n", encoding="utf-8")

    load_project_env(env_file)

    assert Path(env_file).exists()
    assert Path(env_file).read_text(encoding="utf-8")
    assert Path(env_file).name == ".env"
    assert os.environ["ANTHROPIC_BASE_URL"] == "https://api.minimaxi.com/anthropic"


def test_normalize_runtime_llm_config_keeps_model_and_canonicalizes_minimax_url():
    normalized = normalize_runtime_llm_config(
        {
            "llm_provider": "anthropic",
            "backend_url": "https://api.minimaxi.com/anthropic/",
            "deep_think_llm": "MiniMax-M2.7-highspeed",
            "quick_think_llm": "MiniMax-M2.7-highspeed",
        }
    )

    assert normalized["backend_url"] == "https://api.minimaxi.com/anthropic"
    assert normalized["deep_think_llm"] == "MiniMax-M2.7-highspeed"
    assert normalized["quick_think_llm"] == "MiniMax-M2.7-highspeed"
    assert normalized["llm_timeout"] == 60.0
    assert normalized["llm_max_retries"] == 1
    assert normalized["minimax_retry_attempts"] == 2


def test_log_state_persists_research_provenance(tmp_path):
    graph = TradingAgentsGraph.__new__(TradingAgentsGraph)
    graph.config = {"results_dir": str(tmp_path)}
    graph.ticker = "AAPL"
    graph.log_states_dict = {}

    final_state = {
        "company_of_interest": "AAPL",
        "trade_date": "2026-04-11",
        "market_report": "",
        "sentiment_report": "",
        "news_report": "",
        "fundamentals_report": "",
        "investment_debate_state": {
            "bull_history": "Bull Analyst: case",
            "bear_history": "Bear Analyst: case",
            "history": "Bull Analyst: case\nBear Analyst: case",
            "current_response": "Recommendation: HOLD",
            "judge_decision": "Recommendation: HOLD",
            "research_status": "degraded",
            "research_mode": "degraded_synthesis",
            "timed_out_nodes": ["Bull Researcher"],
            "degraded_reason": "bull_researcher_timeout",
            "covered_dimensions": ["market"],
            "manager_confidence": 0.0,
        },
        "trader_investment_plan": "",
        "risk_debate_state": {
            "aggressive_history": "",
            "conservative_history": "",
            "neutral_history": "",
            "history": "",
            "judge_decision": "",
        },
        "investment_plan": "Recommendation: HOLD",
        "final_trade_decision": "HOLD",
    }

    TradingAgentsGraph._log_state(graph, "2026-04-11", final_state)

    log_path = tmp_path / "AAPL" / "TradingAgentsStrategy_logs" / "full_states_log_2026-04-11.json"
    payload = json.loads(log_path.read_text(encoding="utf-8"))
    assert payload["investment_debate_state"]["research_status"] == "degraded"
    assert payload["investment_debate_state"]["research_mode"] == "degraded_synthesis"
    assert payload["investment_debate_state"]["timed_out_nodes"] == ["Bull Researcher"]
    assert payload["investment_debate_state"]["manager_confidence"] == 0.0


def test_normalize_decision_outputs_repairs_invalid_final_report():
    graph = TradingAgentsGraph.__new__(TradingAgentsGraph)
    final_state = {
        "portfolio_context": "Current account is crowded in growth beta.",
        "peer_context": "Within the same theme, this name ranks near the top on quality.",
        "investment_plan": "RECOMMENDATION: BUY\nSimple execution plan: build on weakness.",
        "trader_investment_plan": "TRADER_RATING: BUY\nFINAL TRANSACTION PROPOSAL: **BUY**",
        "risk_debate_state": {
            "judge_decision": "",
            "history": "",
            "aggressive_history": "",
            "conservative_history": "",
            "neutral_history": "",
            "latest_speaker": "Judge",
            "current_aggressive_response": "",
            "current_conservative_response": "",
            "current_neutral_response": "",
            "count": 3,
        },
        "final_trade_decision": 'I will gather more market data. <tool_call>name="stock_data"</tool_call>',
    }

    normalized = TradingAgentsGraph._normalize_decision_outputs(graph, final_state)

    assert normalized["final_trade_decision"] == "BUY"
    assert normalized["final_trade_decision_structured"]["rating_source"] == "trader_plan"
    assert normalized["final_trade_decision_structured"]["portfolio_context_used"] is True
    assert normalized["final_trade_decision_structured"]["peer_context_used"] is True
    assert normalized["final_trade_decision_report"].startswith("## Normalized Portfolio Decision")
    assert normalized["risk_debate_state"]["judge_decision"] == normalized["final_trade_decision_report"]
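The normalization behavior asserted above (trailing-slash canonicalization plus conservative retry/timeout defaults) can be sketched as follows. This is a guess at the shape implied by the test, not the project's actual `normalize_runtime_llm_config`; the field names are copied from the assertions:

```python
def normalize_llm_config(cfg: dict) -> dict:
    """Canonicalize the backend URL and fill retry/timeout defaults.

    Sketch reconstructed from the test assertions; the real implementation
    may apply additional provider-specific rules.
    """
    out = dict(cfg)
    # "https://api.minimaxi.com/anthropic/" and the slash-free form should
    # compare equal downstream, so strip trailing slashes.
    out["backend_url"] = (out.get("backend_url") or "").rstrip("/")
    out.setdefault("llm_timeout", 60.0)
    out.setdefault("llm_max_retries", 1)
    out.setdefault("minimax_retry_attempts", 2)
    return out
```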
@@ -50,3 +50,8 @@ class ModelValidationTests(unittest.TestCase):
        client.get_llm()

        self.assertEqual(caught, [])

    def test_minimax_anthropic_compatible_models_are_known(self):
        for model in ("MiniMax-M2.7-highspeed", "MiniMax-M2.7"):
            with self.subTest(model=model):
                self.assertTrue(validate_model("anthropic", model))
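The subtest above implies `validate_model` does a provider-scoped known-model lookup. A hypothetical sketch of that shape (the registry name and structure are invented for illustration and are not the project's actual code):

```python
# Hypothetical allow-list keyed by provider; the real registry lives elsewhere.
KNOWN_MODELS = {
    "anthropic": {"MiniMax-M2.7-highspeed", "MiniMax-M2.7"},
}


def validate_model_sketch(provider: str, model: str) -> bool:
    # True when the provider's allow-list contains the model name.
    return model in KNOWN_MODELS.get(provider, set())
```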
@@ -5,10 +5,9 @@ from tradingagents.agents.utils.agent_utils import (
    get_cashflow,
    get_fundamentals,
    get_income_statement,
    get_insider_transactions,
    get_language_instruction,
    use_compact_analysis_prompt,
)
from tradingagents.dataflows.config import get_config


def create_fundamentals_analyst(llm):

@@ -23,12 +22,18 @@ def create_fundamentals_analyst(llm):
        get_income_statement,
    ]

    system_message = (
        "You are a researcher tasked with analyzing fundamental information over the past week about a company. Please write a comprehensive report of the company's fundamental information such as financial documents, company profile, basic company financials, and company financial history to gain a full view of the company's fundamental information to inform traders. Make sure to include as much detail as possible. Provide specific, actionable insights with supporting evidence to help traders make informed decisions."
        + " Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read."
        + " Use the available tools: `get_fundamentals` for comprehensive company analysis, `get_balance_sheet`, `get_cashflow`, and `get_income_statement` for specific financial statements."
        + get_language_instruction(),
    )
    if use_compact_analysis_prompt():
        system_message = (
            "You are a fundamentals analyst. Make at most one `get_fundamentals` call first, then only call statement tools if a specific gap remains. Avoid iterative follow-up tool calls. Summarize the company in under 220 words with: business quality, growth/profitability, balance-sheet risk, cash-flow quality, and a trading implication. End with a Markdown table."
            + get_language_instruction()
        )
    else:
        system_message = (
            "You are a researcher tasked with analyzing fundamental information over the past week about a company. Please write a comprehensive report of the company's fundamental information such as financial documents, company profile, basic company financials, and company financial history to gain a full view of the company's fundamental information to inform traders. Make sure to include as much detail as possible. Provide specific, actionable insights with supporting evidence to help traders make informed decisions."
            + " Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read."
            + " Use the available tools: `get_fundamentals` for comprehensive company analysis, `get_balance_sheet`, `get_cashflow`, and `get_income_statement` for specific financial statements."
            + get_language_instruction()
        )

    prompt = ChatPromptTemplate.from_messages(
        [
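The hunk above introduces the compact/full prompt split that every analyst file in this commit repeats. A minimal sketch of the pattern, assuming for illustration that the toggle reads an environment flag — the real `use_compact_analysis_prompt()` presumably consults project config instead:

```python
import os


def use_compact_prompts() -> bool:
    """Hypothetical stand-in for use_compact_analysis_prompt()."""
    return os.getenv("TRADINGAGENTS_COMPACT_PROMPTS", "").strip().lower() in {"1", "true", "yes"}


def pick_system_message(compact: bool, full_text: str, compact_text: str) -> str:
    # Same shape as the if/else in the hunk: short, tool-budgeted prompt in
    # compact mode, the original detailed prompt otherwise.
    return compact_text if compact else full_text
```

The compact branches trade detail for bounded tool use and word limits, which is what lets the degraded-timeout paths elsewhere in this commit stay within budget.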
@@ -4,6 +4,7 @@ from tradingagents.agents.utils.agent_utils import (
    get_indicators,
    get_language_instruction,
    get_stock_data,
    use_compact_analysis_prompt,
)
from tradingagents.dataflows.config import get_config


@@ -19,8 +20,25 @@ def create_market_analyst(llm):
        get_indicators,
    ]

    system_message = (
        """You are a trading assistant tasked with analyzing financial markets. Your role is to select the **most relevant indicators** for a given market condition or trading strategy from the following list. The goal is to choose up to **8 indicators** that provide complementary insights without redundancy. Categories and each category's indicators are:
    if use_compact_analysis_prompt():
        system_message = (
            """You are a market analyst. Make at most two tool calls total:
1. Call `get_stock_data` once.
2. Call `get_indicators` once with 4 to 6 complementary indicators passed as a single comma-separated string chosen from: `close_10_ema`, `close_50_sma`, `close_200_sma`, `macd`, `macds`, `macdh`, `rsi`, `boll`, `boll_ub`, `boll_lb`, `atr`, `vwma`.

Pick indicators that cover trend, momentum, volatility, and volume without redundancy. Then produce a concise report with:
- market regime
- momentum signal
- support/resistance or volatility levels
- trade implications
- risk warnings

Do not make repeated follow-up tool calls after the indicator batch returns. Keep the report under 250 words and end with a Markdown table of the key signals."""
            + get_language_instruction()
        )
    else:
        system_message = (
            """You are a trading assistant tasked with analyzing financial markets. Your role is to select the **most relevant indicators** for a given market condition or trading strategy from the following list. The goal is to choose up to **8 indicators** that provide complementary insights without redundancy. Categories and each category's indicators are:

Moving Averages:
- close_50_sma: 50 SMA: A medium-term trend indicator. Usage: Identify trend direction and serve as dynamic support/resistance. Tips: It lags price; combine with faster indicators for timely signals.

@@ -45,9 +63,9 @@ Volume-Based Indicators:
- vwma: VWMA: A moving average weighted by volume. Usage: Confirm trends by integrating price action with volume data. Tips: Watch for skewed results from volume spikes; use in combination with other volume analyses.

- Select indicators that provide diverse and complementary information. Avoid redundancy (e.g., do not select both rsi and stochrsi). Also briefly explain why they are suitable for the given market context. When you tool call, please use the exact name of the indicators provided above as they are defined parameters, otherwise your call will fail. Please make sure to call get_stock_data first to retrieve the CSV that is needed to generate indicators. Then use get_indicators with the specific indicator names. Write a very detailed and nuanced report of the trends you observe. Provide specific, actionable insights with supporting evidence to help traders make informed decisions."""
        + """ Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read."""
        + get_language_instruction()
    )
            + """ Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read."""
            + get_language_instruction()
        )

    prompt = ChatPromptTemplate.from_messages(
        [
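The compact market prompt asks the model to pass 4 to 6 indicators as a single comma-separated string. A small sketch of how a caller could validate and assemble that argument — the allow-list is copied from the prompt text above, while the helper itself is illustrative, not part of the codebase:

```python
# Exact names from the compact prompt; get_indicators rejects anything else.
ALLOWED_INDICATORS = [
    "close_10_ema", "close_50_sma", "close_200_sma", "macd", "macds",
    "macdh", "rsi", "boll", "boll_ub", "boll_lb", "atr", "vwma",
]


def indicator_batch(selection):
    """Join a 4-6 indicator selection into the single comma-separated
    string the compact prompt specifies for the one get_indicators call."""
    unknown = [name for name in selection if name not in ALLOWED_INDICATORS]
    if unknown:
        raise ValueError(f"unknown indicators: {unknown}")
    if not 4 <= len(selection) <= 6:
        raise ValueError("pick 4 to 6 complementary indicators")
    return ",".join(selection)
```

Batching the indicators into one call is what enforces the "at most two tool calls total" budget.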
@@ -4,6 +4,7 @@ from tradingagents.agents.utils.agent_utils import (
    get_global_news,
    get_language_instruction,
    get_news,
    use_compact_analysis_prompt,
)
from tradingagents.dataflows.config import get_config


@@ -18,11 +19,17 @@ def create_news_analyst(llm):
        get_global_news,
    ]

    system_message = (
        "You are a news researcher tasked with analyzing recent news and trends over the past week. Please write a comprehensive report of the current state of the world that is relevant for trading and macroeconomics. Use the available tools: get_news(query, start_date, end_date) for company-specific or targeted news searches, and get_global_news(curr_date, look_back_days, limit) for broader macroeconomic news. Provide specific, actionable insights with supporting evidence to help traders make informed decisions."
        + """ Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read."""
        + get_language_instruction()
    )
    if use_compact_analysis_prompt():
        system_message = (
            "You are a news analyst. Make at most one `get_news` call and one `get_global_news` call, then gather only the most relevant recent company and macro news. Summarize in under 180 words with: bullish catalysts, bearish catalysts, macro context, and likely near-term market impact. End with a Markdown table."
            + get_language_instruction()
        )
    else:
        system_message = (
            "You are a news researcher tasked with analyzing recent news and trends over the past week. Please write a comprehensive report of the current state of the world that is relevant for trading and macroeconomics. Use the available tools: get_news(query, start_date, end_date) for company-specific or targeted news searches, and get_global_news(curr_date, look_back_days, limit) for broader macroeconomic news. Provide specific, actionable insights with supporting evidence to help traders make informed decisions."
            + """ Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read."""
            + get_language_instruction()
        )

    prompt = ChatPromptTemplate.from_messages(
        [
@@ -1,5 +1,10 @@
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from tradingagents.agents.utils.agent_utils import build_instrument_context, get_language_instruction, get_news
from tradingagents.agents.utils.agent_utils import (
    build_instrument_context,
    get_language_instruction,
    get_news,
    use_compact_analysis_prompt,
)
from tradingagents.dataflows.config import get_config


@@ -12,11 +17,17 @@ def create_social_media_analyst(llm):
        get_news,
    ]

    system_message = (
        "You are a social media and company specific news researcher/analyst tasked with analyzing social media posts, recent company news, and public sentiment for a specific company over the past week. You will be given a company's name your objective is to write a comprehensive long report detailing your analysis, insights, and implications for traders and investors on this company's current state after looking at social media and what people are saying about that company, analyzing sentiment data of what people feel each day about the company, and looking at recent company news. Use the get_news(query, start_date, end_date) tool to search for company-specific news and social media discussions. Try to look at all sources possible from social media to sentiment to news. Provide specific, actionable insights with supporting evidence to help traders make informed decisions."
        + """ Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read."""
        + get_language_instruction()
    )
    if use_compact_analysis_prompt():
        system_message = (
            "You are a sentiment analyst. Make at most one `get_news` call, then infer recent company sentiment from news and public discussion. Summarize in under 180 words with: sentiment direction, what is driving it, whether sentiment confirms or contradicts price action, and the trading implication. End with a Markdown table."
            + get_language_instruction()
        )
    else:
        system_message = (
            "You are a social media and company specific news researcher/analyst tasked with analyzing social media posts, recent company news, and public sentiment for a specific company over the past week. You will be given a company's name your objective is to write a comprehensive long report detailing your analysis, insights, and implications for traders and investors on this company's current state after looking at social media and what people are saying about that company, analyzing sentiment data of what people feel each day about the company, and looking at recent company news. Use the get_news(query, start_date, end_date) tool to search for company-specific news and social media discussions. Try to look at all sources possible from social media to sentiment to news. Provide specific, actionable insights with supporting evidence to help traders make informed decisions."
            + """ Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read."""
            + get_language_instruction()
        )

    prompt = ChatPromptTemplate.from_messages(
        [
@@ -1,4 +1,12 @@
from tradingagents.agents.utils.agent_utils import build_instrument_context, get_language_instruction
from tradingagents.agents.utils.agent_utils import (
    build_instrument_context,
    build_optional_decision_context,
    get_language_instruction,
    summarize_structured_signal,
    truncate_prompt_text,
    use_compact_analysis_prompt,
)
from tradingagents.agents.utils.decision_utils import build_structured_decision


def create_portfolio_manager(llm, memory):

@@ -14,6 +22,16 @@ def create_portfolio_manager(llm, memory):
        sentiment_report = state["sentiment_report"]
        research_plan = state["investment_plan"]
        trader_plan = state["trader_investment_plan"]
        research_structured = state.get("investment_plan_structured") or {}
        trader_structured = state.get("trader_investment_plan_structured") or {}
        portfolio_context = state.get("portfolio_context", "")
        peer_context = state.get("peer_context", "")
        decision_context = build_optional_decision_context(
            portfolio_context,
            peer_context,
            peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
            max_chars=550,
        )

        curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
        past_memories = memory.get_memories(curr_situation, n_matches=2)

@@ -22,7 +40,34 @@ def create_portfolio_manager(llm, memory):
        for i, rec in enumerate(past_memories, 1):
            past_memory_str += rec["recommendation"] + "\n\n"

        prompt = f"""As the Portfolio Manager, synthesize the risk analysts' debate and deliver the final trading decision.
        if use_compact_analysis_prompt():
            prompt = f"""As the Portfolio Manager, synthesize the risk debate and deliver the final rating.

{instrument_context}

Use exactly one rating: Buy / Overweight / Hold / Underweight / Sell.
You already have enough evidence. Do not ask for more data and do not emit tool calls.

Return with this exact header first:
RATING: BUY|OVERWEIGHT|HOLD|UNDERWEIGHT|SELL
HOLD_SUBTYPE: DEFENSIVE_HOLD|STAGED_BUY_HOLD|STANDARD_HOLD|N/A
ENTRY_STYLE: IMMEDIATE|STAGED|WAIT_PULLBACK|EXISTING_ONLY|REDUCE|EXIT|UNKNOWN
SAME_THEME_RANK: LEADER|UPPER|MIDDLE|LOWER|LAGGARD|UNKNOWN
ACCOUNT_FIT: FAVORABLE|NEUTRAL|CROWDED_GROWTH|DEFENSIVE_REBALANCE|UNKNOWN

Then return only:
1. Executive summary
2. Key risks

Research plan: {truncate_prompt_text(research_plan, 500)}
Research signal summary: {summarize_structured_signal(research_structured)}
Trader plan: {truncate_prompt_text(trader_plan, 500)}
Trader signal summary: {summarize_structured_signal(trader_structured)}
Past lessons: {truncate_prompt_text(past_memory_str, 400)}
{decision_context}
Risk debate: {truncate_prompt_text(history, 1400)}{get_language_instruction()}"""
        else:
            prompt = f"""As the Portfolio Manager, synthesize the risk analysts' debate and deliver the final trading decision.

{instrument_context}


@@ -37,11 +82,19 @@ def create_portfolio_manager(llm, memory):

**Context:**
- Research Manager's investment plan: **{research_plan}**
- Research Manager structured signal: **{summarize_structured_signal(research_structured)}**
- Trader's transaction proposal: **{trader_plan}**
- Trader structured signal: **{summarize_structured_signal(trader_structured)}**
- Lessons from past decisions: **{past_memory_str}**
{decision_context}

**Required Output Structure:**
1. **Rating**: State one of Buy / Overweight / Hold / Underweight / Sell.
1. Start with these exact header lines:
   - `RATING: BUY|OVERWEIGHT|HOLD|UNDERWEIGHT|SELL`
   - `HOLD_SUBTYPE: DEFENSIVE_HOLD|STAGED_BUY_HOLD|STANDARD_HOLD|N/A`
   - `ENTRY_STYLE: IMMEDIATE|STAGED|WAIT_PULLBACK|EXISTING_ONLY|REDUCE|EXIT|UNKNOWN`
   - `SAME_THEME_RANK: LEADER|UPPER|MIDDLE|LOWER|LAGGARD|UNKNOWN`
   - `ACCOUNT_FIT: FAVORABLE|NEUTRAL|CROWDED_GROWTH|DEFENSIVE_REBALANCE|UNKNOWN`
2. **Executive Summary**: A concise action plan covering entry strategy, position sizing, key risk levels, and time horizon.
3. **Investment Thesis**: Detailed reasoning anchored in the analysts' debate and past reflections.


@@ -52,12 +105,26 @@ def create_portfolio_manager(llm, memory):

---

Be decisive and ground every conclusion in specific evidence from the analysts.{get_language_instruction()}"""
Be decisive and ground every conclusion in specific evidence from the analysts.
Do not ask for more data and do not emit tool calls.{get_language_instruction()}"""

        response = llm.invoke(prompt)
        structured_decision = build_structured_decision(
            response.content,
            fallback_candidates=(
                ("trader_plan", trader_plan),
                ("investment_plan", research_plan),
            ),
            default_rating="HOLD",
            peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
            context_usage={
                "portfolio_context": bool(str(portfolio_context).strip()),
                "peer_context": bool(str(peer_context).strip()),
            },
        )

        new_risk_debate_state = {
            "judge_decision": response.content,
            "judge_decision": structured_decision["report_text"],
            "history": risk_debate_state["history"],
            "aggressive_history": risk_debate_state["aggressive_history"],
            "conservative_history": risk_debate_state["conservative_history"],

@@ -71,7 +138,9 @@ Be decisive and ground every conclusion in specific evidence from the analysts.{

        return {
            "risk_debate_state": new_risk_debate_state,
            "final_trade_decision": response.content,
            "final_trade_decision": structured_decision["rating"],
            "final_trade_decision_report": structured_decision["report_text"],
            "final_trade_decision_structured": structured_decision,
        }

    return portfolio_manager_node
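The `RATING:` header contract plus the `fallback_candidates` argument suggest a parsing chain: read the header from the response, fall back to the trader plan, then the research plan, then a default. A hypothetical sketch of that chain (regexes and names are illustrative; `build_structured_decision`'s real logic may differ):

```python
import re

RATING_RE = re.compile(r"^RATING:\s*(BUY|OVERWEIGHT|HOLD|UNDERWEIGHT|SELL)\b", re.M)
FALLBACK_RE = re.compile(r"(?:TRADER_RATING|RECOMMENDATION):\s*(BUY|HOLD|SELL)\b")


def parse_rating(response_text, fallback_candidates=(), default_rating="HOLD"):
    """Return (rating, rating_source) using a header-first fallback chain."""
    m = RATING_RE.search(response_text or "")
    if m:
        return m.group(1), "response"
    for source_name, text in fallback_candidates:
        m = FALLBACK_RE.search(text or "")
        if m:
            return m.group(1), source_name
    return default_rating, "default"
```

This is the behavior the earlier `test_normalize_decision_outputs_repairs_invalid_final_report` exercises: a response full of stray tool-call text yields no header, so the trader plan's `TRADER_RATING: BUY` wins.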
@ -1,5 +1,10 @@
|
|||
|
||||
from tradingagents.agents.utils.agent_utils import build_instrument_context
|
||||
from tradingagents.agents.utils.agent_utils import (
|
||||
build_instrument_context,
|
||||
build_optional_decision_context,
|
||||
truncate_prompt_text,
|
||||
use_compact_analysis_prompt,
|
||||
)
|
||||
from tradingagents.agents.utils.decision_utils import build_structured_decision
|
||||
|
||||
|
||||
def create_research_manager(llm, memory):
|
||||
|
|
@ -12,15 +17,47 @@ def create_research_manager(llm, memory):
|
|||
fundamentals_report = state["fundamentals_report"]
|
||||
|
||||
investment_debate_state = state["investment_debate_state"]
|
||||
portfolio_context = state.get("portfolio_context", "")
|
||||
peer_context = state.get("peer_context", "")
|
||||
decision_context = build_optional_decision_context(
|
||||
portfolio_context,
|
||||
peer_context,
|
||||
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
|
||||
max_chars=500,
|
||||
)
|
||||
|
||||
curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
past_memories = memory.get_memories(curr_situation, n_matches=2)
past_memories = memory.get_memories(
curr_situation,
n_matches=1 if use_compact_analysis_prompt() else 2,
)

past_memory_str = ""
for i, rec in enumerate(past_memories, 1):
past_memory_str += rec["recommendation"] + "\n\n"

prompt = f"""As the portfolio manager and debate facilitator, your role is to critically evaluate this round of debate and make a definitive decision: align with the bear analyst, the bull analyst, or choose Hold only if it is strongly justified based on the arguments presented.
if use_compact_analysis_prompt():
prompt = f"""You are the research manager. Decide Buy, Sell, or Hold based on the debate.

Return a concise response with:
1. Recommendation line formatted exactly as `RECOMMENDATION: BUY|HOLD|SELL`
2. Top reasons
3. Simple execution plan

Past lessons:
{truncate_prompt_text(past_memory_str, 180)}

{instrument_context}

{decision_context}

Debate history:
{truncate_prompt_text(history, 700)}

You already have enough evidence. Do not ask for more data and do not emit tool calls.
Keep the full answer under 180 words."""
else:
prompt = f"""As the portfolio manager and debate facilitator, your role is to critically evaluate this round of debate and make a definitive decision: align with the bear analyst, the bull analyst, or choose Hold only if it is strongly justified based on the arguments presented.

Summarize the key points from both sides concisely, focusing on the most compelling evidence or reasoning. Your recommendation—Buy, Sell, or Hold—must be clear and actionable. Avoid defaulting to Hold simply because both sides have valid points; commit to a stance grounded in the debate's strongest arguments.
@ -36,10 +73,24 @@ Here are your past reflections on mistakes:
{instrument_context}

{decision_context}

Here is the debate:
Debate History:
{history}"""
{history}

Start the answer with `RECOMMENDATION: BUY|HOLD|SELL`.
You already have enough evidence. Do not ask for more data and do not emit tool calls."""
response = llm.invoke(prompt)
structured_plan = build_structured_decision(
response.content,
default_rating="HOLD",
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
context_usage={
"portfolio_context": bool(str(portfolio_context).strip()),
"peer_context": bool(str(peer_context).strip()),
},
)

new_investment_debate_state = {
"judge_decision": response.content,

@ -53,6 +104,7 @@ Debate History:
return {
"investment_debate_state": new_investment_debate_state,
"investment_plan": response.content,
"investment_plan_structured": structured_plan,
}

return research_manager_node
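Both prompt branches now pin the model to an exact `RECOMMENDATION: BUY|HOLD|SELL` line, which is presumably what lets `build_structured_decision` recover a machine-readable rating (with `default_rating="HOLD"` as the fallback). The helper's real implementation lives in `decision_utils` and is not part of this diff; a minimal regex-based sketch of that extraction step, under those assumptions:

```python
import re


def extract_recommendation(text, default_rating="HOLD"):
    # Find the first `RECOMMENDATION: BUY|HOLD|SELL` line; fall back to the default
    match = re.search(r"RECOMMENDATION:\s*(BUY|HOLD|SELL)", text, re.IGNORECASE)
    return match.group(1).upper() if match else default_rating
```

Pinning an exact output line and parsing it leniently (case-insensitive, anywhere in the response) is what makes the downstream pipeline robust to minor formatting drift in the LLM's answer.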
@ -1,6 +1,23 @@
from tradingagents.agents.utils.agent_utils import (
truncate_prompt_text,
use_compact_analysis_prompt,
)
from tradingagents.agents.utils.subagent_runner import (
run_parallel_subagents,
synthesize_subagent_results,
)


def create_bear_researcher(llm, memory):
"""
Create a Bear Researcher node that uses parallel subagents for each dimension.

Instead of a single large LLM call that times out, this implementation:
1. Spawns parallel subagents for market, sentiment, news, fundamentals
2. Each subagent has its own timeout (15s default)
3. Synthesizes results into a unified bear argument
4. If some subagents fail, still produces output with available results
"""
def bear_node(state) -> dict:
investment_debate_state = state["investment_debate_state"]
history = investment_debate_state.get("history", "")
@ -13,43 +30,177 @@ def create_bear_researcher(llm, memory):
fundamentals_report = state["fundamentals_report"]

curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
past_memories = memory.get_memories(curr_situation, n_matches=2)
past_memories = memory.get_memories(
curr_situation,
n_matches=1 if use_compact_analysis_prompt() else 2,
)

past_memory_str = ""
for i, rec in enumerate(past_memories, 1):
past_memory_str += rec["recommendation"] + "\n\n"

prompt = f"""You are a Bear Analyst making the case against investing in the stock. Your goal is to present a well-reasoned argument emphasizing risks, challenges, and negative indicators. Leverage the provided research and data to highlight potential downsides and counter bullish arguments effectively.
# Build dimension-specific prompts for parallel execution
dimension_configs = []

Key points to focus on:
# Market analysis subagent
market_prompt = f"""You are a Bear Analyst focusing on MARKET data.

- Risks and Challenges: Highlight factors like market saturation, financial instability, or macroeconomic threats that could hinder the stock's performance.
- Competitive Weaknesses: Emphasize vulnerabilities such as weaker market positioning, declining innovation, or threats from competitors.
- Negative Indicators: Use evidence from financial data, market trends, or recent adverse news to support your position.
- Bull Counterpoints: Critically analyze the bull argument with specific data and sound reasoning, exposing weaknesses or over-optimistic assumptions.
- Engagement: Present your argument in a conversational style, directly engaging with the bull analyst's points and debating effectively rather than simply listing facts.
Based ONLY on the market report below, make a concise bear case (under 80 words).
Focus on: price weakness, resistance rejection, moving average bearish alignment, overbought conditions.
Address the latest bull argument directly if provided.

Resources available:
Market Report:
{truncate_prompt_text(market_research_report, 500)}

Market research report: {market_research_report}
Social media sentiment report: {sentiment_report}
Latest world affairs news: {news_report}
Company fundamentals report: {fundamentals_report}
Conversation history of the debate: {history}
Last bull argument: {current_response}
Reflections from similar situations and lessons learned: {past_memory_str}
Use this information to deliver a compelling bear argument, refute the bull's claims, and engage in a dynamic debate that demonstrates the risks and weaknesses of investing in the stock. You must also address reflections and learn from lessons and mistakes you made in the past.
Debate History (for context):
{truncate_prompt_text(history, 200)}

Last Bull Argument:
{truncate_prompt_text(current_response, 150)}

Return your analysis in this format:
BEAR CASE: [your concise bear argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "market",
"prompt": market_prompt,
})

# Sentiment analysis subagent
sentiment_prompt = f"""You are a Bear Analyst focusing on SENTIMENT data.

Based ONLY on the sentiment report below, make a concise bear case (under 80 words).
Focus on: negative sentiment trends, social media bearishness, analyst downgrades.
Address the latest bull argument directly if provided.

Sentiment Report:
{truncate_prompt_text(sentiment_report, 300)}

Debate History (for context):
{truncate_prompt_text(history, 200)}

Last Bull Argument:
{truncate_prompt_text(current_response, 150)}

Return your analysis in this format:
BEAR CASE: [your concise bear argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "sentiment",
"prompt": sentiment_prompt,
})

# News analysis subagent
news_prompt = f"""You are a Bear Analyst focusing on NEWS data.

Based ONLY on the news report below, make a concise bear case (under 80 words).
Focus on: negative news, regulatory risks, competitive threats, strategic setbacks.
Address the latest bull argument directly if provided.

News Report:
{truncate_prompt_text(news_report, 300)}

Debate History (for context):
{truncate_prompt_text(history, 200)}

Last Bull Argument:
{truncate_prompt_text(current_response, 150)}

Return your analysis in this format:
BEAR CASE: [your concise bear argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "news",
"prompt": news_prompt,
})

# Fundamentals analysis subagent
fundamentals_prompt = f"""You are a Bear Analyst focusing on FUNDAMENTALS data.

Based ONLY on the fundamentals report below, make a concise bear case (under 80 words).
Focus on: declining revenues, margin compression, high debt, deteriorating cash flow, overvaluation.
Address the latest bull argument directly if provided.

Fundamentals Report:
{truncate_prompt_text(fundamentals_report, 400)}

Debate History (for context):
{truncate_prompt_text(history, 200)}

Last Bull Argument:
{truncate_prompt_text(current_response, 150)}

Past Lessons:
{truncate_prompt_text(past_memory_str, 150)}

Return your analysis in this format:
BEAR CASE: [your concise bear argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "fundamentals",
"prompt": fundamentals_prompt,
})

# Run all subagents in parallel with 25s timeout each (LLM can be slow)
subagent_results = run_parallel_subagents(
llm=llm,
dimension_configs=dimension_configs,
timeout_per_subagent=25.0,
max_workers=4,
)

# Synthesize results into a unified bear argument
synthesized_dimensions, synthesis_metadata = synthesize_subagent_results(
subagent_results,
max_chars_per_result=200,
)

# Generate the final bear argument using synthesis
synthesis_prompt = f"""You are a Bear Analyst. Based on the following dimension analyses from your team,
synthesize a compelling bear argument (under 200 words) for this stock.

=== TEAM ANALYSIS RESULTS ===
{synthesized_dimensions}

=== SYNTHESIS INSTRUCTIONS ===
1. Combine the strongest bear points from each dimension
2. Address the latest bull argument directly
3. End with a clear stance: SELL, HOLD (with reasons), or BUY (if overwhelming bull case)

Be decisive. Do not hedge. Present the bear case forcefully.
"""
try:
synthesis_response = llm.invoke(synthesis_prompt)
final_argument = synthesis_response.content if hasattr(synthesis_response, 'content') else str(synthesis_response)
except Exception as e:
# Fallback: just use synthesized dimensions directly
final_argument = f"""BEAR SYNTHESIS FAILED: {str(e)}

=== AVAILABLE ANALYSES ===
{synthesized_dimensions}

FALLBACK CONCLUSION: Based on available data, the bear case is MIXED.
Further analysis needed before making a definitive recommendation.
"""

response = llm.invoke(prompt)
argument = f"Bear Analyst: {final_argument}"

argument = f"Bear Analyst: {response.content}"
# Add subagent metadata to the argument for transparency
timing_info = ", ".join([
f"{dim}={timing}s"
for dim, timing in synthesis_metadata["subagent_timings"].items()
])
metadata_note = f"\n\n[Subagent timing: {timing_info}]"

new_investment_debate_state = {
"history": history + "\n" + argument,
"bear_history": bear_history + "\n" + argument,
"history": history + "\n" + argument + metadata_note,
"bear_history": bear_history + "\n" + argument + metadata_note,
"bull_history": investment_debate_state.get("bull_history", ""),
"current_response": argument,
"current_response": argument + metadata_note,
"count": investment_debate_state["count"] + 1,
}
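The fan-out above depends on `run_parallel_subagents` from `subagent_runner`, which is not shown in this diff. A plausible sketch of what such a runner could look like, using a thread pool with a per-future timeout; the result shape (`ok`/`content`/`error` keys) and the lambda-capture pattern are assumptions, not the project's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor


def run_parallel_subagents(llm, dimension_configs, timeout_per_subagent=25.0, max_workers=4):
    # One LLM call per dimension config, submitted concurrently
    pool = ThreadPoolExecutor(max_workers=max_workers)
    futures = {
        cfg["dimension"]: pool.submit(lambda c=cfg: llm.invoke(c["prompt"]).content)
        for cfg in dimension_configs
    }
    results = {}
    for dim, future in futures.items():
        try:
            # Bounded wait per subagent; a straggler becomes an error entry
            results[dim] = {"ok": True, "content": future.result(timeout=timeout_per_subagent)}
        except Exception as exc:  # TimeoutError or provider error
            results[dim] = {"ok": False, "error": str(exc)}
    pool.shutdown(wait=False)  # do not block on calls that already missed their deadline
    return results
```

Note that `Future.result(timeout=...)` bounds the wait but cannot cancel a thread that is already running; `shutdown(wait=False)` just stops the caller from blocking on it.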
@ -1,6 +1,23 @@
from tradingagents.agents.utils.agent_utils import (
truncate_prompt_text,
use_compact_analysis_prompt,
)
from tradingagents.agents.utils.subagent_runner import (
run_parallel_subagents,
synthesize_subagent_results,
)


def create_bull_researcher(llm, memory):
"""
Create a Bull Researcher node that uses parallel subagents for each dimension.

Instead of a single large LLM call that times out, this implementation:
1. Spawns parallel subagents for market, sentiment, news, fundamentals
2. Each subagent has its own timeout (15s default)
3. Synthesizes results into a unified bull argument
4. If some subagents fail, still produces output with available results
"""
def bull_node(state) -> dict:
investment_debate_state = state["investment_debate_state"]
history = investment_debate_state.get("history", "")
@ -13,41 +30,177 @@ def create_bull_researcher(llm, memory):
fundamentals_report = state["fundamentals_report"]

curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
past_memories = memory.get_memories(curr_situation, n_matches=2)
past_memories = memory.get_memories(
curr_situation,
n_matches=1 if use_compact_analysis_prompt() else 2,
)

past_memory_str = ""
for i, rec in enumerate(past_memories, 1):
past_memory_str += rec["recommendation"] + "\n\n"

prompt = f"""You are a Bull Analyst advocating for investing in the stock. Your task is to build a strong, evidence-based case emphasizing growth potential, competitive advantages, and positive market indicators. Leverage the provided research and data to address concerns and counter bearish arguments effectively.
# Build dimension-specific prompts for parallel execution
dimension_configs = []

Key points to focus on:
- Growth Potential: Highlight the company's market opportunities, revenue projections, and scalability.
- Competitive Advantages: Emphasize factors like unique products, strong branding, or dominant market positioning.
- Positive Indicators: Use financial health, industry trends, and recent positive news as evidence.
- Bear Counterpoints: Critically analyze the bear argument with specific data and sound reasoning, addressing concerns thoroughly and showing why the bull perspective holds stronger merit.
- Engagement: Present your argument in a conversational style, engaging directly with the bear analyst's points and debating effectively rather than just listing data.
# Market analysis subagent
market_prompt = f"""You are a Bull Analyst focusing on MARKET data.

Resources available:
Market research report: {market_research_report}
Social media sentiment report: {sentiment_report}
Latest world affairs news: {news_report}
Company fundamentals report: {fundamentals_report}
Conversation history of the debate: {history}
Last bear argument: {current_response}
Reflections from similar situations and lessons learned: {past_memory_str}
Use this information to deliver a compelling bull argument, refute the bear's concerns, and engage in a dynamic debate that demonstrates the strengths of the bull position. You must also address reflections and learn from lessons and mistakes you made in the past.
Based ONLY on the market report below, make a concise bull case (under 80 words).
Focus on: price trends, support/resistance, moving averages, technical indicators.
Address the latest bear argument directly if provided.

Market Report:
{truncate_prompt_text(market_research_report, 500)}

Debate History (for context):
{truncate_prompt_text(history, 200)}

Last Bear Argument:
{truncate_prompt_text(current_response, 150)}

Return your analysis in this format:
BULL CASE: [your concise bull argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "market",
"prompt": market_prompt,
})

# Sentiment analysis subagent
sentiment_prompt = f"""You are a Bull Analyst focusing on SENTIMENT data.

Based ONLY on the sentiment report below, make a concise bull case (under 80 words).
Focus on: positive sentiment trends, social media bullishness, analyst upgrades.
Address the latest bear argument directly if provided.

Sentiment Report:
{truncate_prompt_text(sentiment_report, 300)}

Debate History (for context):
{truncate_prompt_text(history, 200)}

Last Bear Argument:
{truncate_prompt_text(current_response, 150)}

Return your analysis in this format:
BULL CASE: [your concise bull argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "sentiment",
"prompt": sentiment_prompt,
})

# News analysis subagent
news_prompt = f"""You are a Bull Analyst focusing on NEWS data.

Based ONLY on the news report below, make a concise bull case (under 80 words).
Focus on: positive news, catalysts, strategic developments, partnerships.
Address the latest bear argument directly if provided.

News Report:
{truncate_prompt_text(news_report, 300)}

Debate History (for context):
{truncate_prompt_text(history, 200)}

Last Bear Argument:
{truncate_prompt_text(current_response, 150)}

Return your analysis in this format:
BULL CASE: [your concise bull argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "news",
"prompt": news_prompt,
})

# Fundamentals analysis subagent
fundamentals_prompt = f"""You are a Bull Analyst focusing on FUNDAMENTALS data.

Based ONLY on the fundamentals report below, make a concise bull case (under 80 words).
Focus on: revenue growth, profit margins, cash flow, valuation metrics.
Address the latest bear argument directly if provided.

Fundamentals Report:
{truncate_prompt_text(fundamentals_report, 400)}

Debate History (for context):
{truncate_prompt_text(history, 200)}

Last Bear Argument:
{truncate_prompt_text(current_response, 150)}

Past Lessons:
{truncate_prompt_text(past_memory_str, 150)}

Return your analysis in this format:
BULL CASE: [your concise bull argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "fundamentals",
"prompt": fundamentals_prompt,
})

# Run all subagents in parallel with 25s timeout each (LLM can be slow)
subagent_results = run_parallel_subagents(
llm=llm,
dimension_configs=dimension_configs,
timeout_per_subagent=25.0,
max_workers=4,
)

# Synthesize results into a unified bull argument
synthesized_dimensions, synthesis_metadata = synthesize_subagent_results(
subagent_results,
max_chars_per_result=200,
)

# Generate the final bull argument using synthesis
synthesis_prompt = f"""You are a Bull Analyst. Based on the following dimension analyses from your team,
synthesize a compelling bull argument (under 200 words) for this stock.

=== TEAM ANALYSIS RESULTS ===
{synthesized_dimensions}

=== SYNTHESIS INSTRUCTIONS ===
1. Combine the strongest bull points from each dimension
2. Address the latest bear argument directly
3. End with a clear stance: BUY, HOLD (with reasons), or SELL (if overwhelming bear case)

Be decisive. Do not hedge. Present the bull case forcefully.
"""
try:
synthesis_response = llm.invoke(synthesis_prompt)
final_argument = synthesis_response.content if hasattr(synthesis_response, 'content') else str(synthesis_response)
except Exception as e:
# Fallback: just use synthesized dimensions directly
final_argument = f"""BULL SYNTHESIS FAILED: {str(e)}

=== AVAILABLE ANALYSES ===
{synthesized_dimensions}

FALLBACK CONCLUSION: Based on available data, the bull case is MIXED.
Further analysis needed before making a definitive recommendation.
"""

response = llm.invoke(prompt)
argument = f"Bull Analyst: {final_argument}"

argument = f"Bull Analyst: {response.content}"
# Add subagent metadata to the argument for transparency
timing_info = ", ".join([
f"{dim}={timing}s"
for dim, timing in synthesis_metadata["subagent_timings"].items()
])
metadata_note = f"\n\n[Subagent timing: {timing_info}]"

new_investment_debate_state = {
"history": history + "\n" + argument,
"bull_history": bull_history + "\n" + argument,
"history": history + "\n" + argument + metadata_note,
"bull_history": bull_history + "\n" + argument + metadata_note,
"bear_history": investment_debate_state.get("bear_history", ""),
"current_response": argument,
"current_response": argument + metadata_note,
"count": investment_debate_state["count"] + 1,
}
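The researcher nodes also call `synthesize_subagent_results`, whose implementation is likewise outside this diff. The call sites tell us its contract: it takes the per-dimension result map, clips each result to `max_chars_per_result`, and returns a prompt-ready text block plus metadata that includes a `subagent_timings` dict. A hedged sketch under the assumption that each result is a dict with `ok`/`content`/`error` and an optional `elapsed` field:

```python
def synthesize_subagent_results(subagent_results, max_chars_per_result=200):
    # Clip each successful result, note failures, and collect timing metadata
    sections = []
    metadata = {"succeeded": [], "failed": [], "subagent_timings": {}}
    for dim, result in subagent_results.items():
        metadata["subagent_timings"][dim] = result.get("elapsed", "n/a")
        if result.get("ok"):
            sections.append(f"[{dim.upper()}]\n{result['content'][:max_chars_per_result]}")
            metadata["succeeded"].append(dim)
        else:
            sections.append(f"[{dim.upper()}] unavailable: {result.get('error', 'unknown')}")
            metadata["failed"].append(dim)
    return "\n\n".join(sections), metadata
```

Emitting an explicit "unavailable" section for failed dimensions is what lets the node still produce a usable argument when some subagents time out, as the docstrings promise.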
@ -1,4 +1,11 @@
from tradingagents.agents.utils.agent_utils import (
build_optional_decision_context,
summarize_structured_signal,
truncate_prompt_text,
use_compact_analysis_prompt,
)


def create_aggressive_debator(llm):
def aggressive_node(state) -> dict:
@ -15,11 +22,40 @@ def create_aggressive_debator(llm):
fundamentals_report = state["fundamentals_report"]

trader_decision = state["trader_investment_plan"]
trader_structured = state.get("trader_investment_plan_structured") or {}
research_structured = state.get("investment_plan_structured") or {}
decision_context = build_optional_decision_context(
state.get("portfolio_context", ""),
state.get("peer_context", ""),
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
max_chars=400,
)

prompt = f"""As the Aggressive Risk Analyst, your role is to actively champion high-reward, high-risk opportunities, emphasizing bold strategies and competitive advantages. When evaluating the trader's decision or plan, focus intently on the potential upside, growth potential, and innovative benefits—even when these come with elevated risk. Use the provided market data and sentiment analysis to strengthen your arguments and challenge the opposing views. Specifically, respond directly to each point made by the conservative and neutral analysts, countering with data-driven rebuttals and persuasive reasoning. Highlight where their caution might miss critical opportunities or where their assumptions may be overly conservative. Here is the trader's decision:
if use_compact_analysis_prompt():
prompt = f"""You are the Aggressive Risk Analyst. Defend upside and attack excessive caution.

Research signal: {summarize_structured_signal(research_structured)}
Trader signal: {summarize_structured_signal(trader_structured)}
Trader decision: {truncate_prompt_text(trader_decision, 500)}
{decision_context}
Market report: {truncate_prompt_text(market_research_report, 500)}
Sentiment report: {truncate_prompt_text(sentiment_report, 350)}
News report: {truncate_prompt_text(news_report, 350)}
Fundamentals report: {truncate_prompt_text(fundamentals_report, 450)}
Debate history: {truncate_prompt_text(history, 500)}
Last conservative: {truncate_prompt_text(current_conservative_response, 300)}
Last neutral: {truncate_prompt_text(current_neutral_response, 300)}

Keep it under 180 words and focus on 2-3 high-upside arguments."""
else:
prompt = f"""As the Aggressive Risk Analyst, your role is to actively champion high-reward, high-risk opportunities, emphasizing bold strategies and competitive advantages. When evaluating the trader's decision or plan, focus intently on the potential upside, growth potential, and innovative benefits—even when these come with elevated risk. Use the provided market data and sentiment analysis to strengthen your arguments and challenge the opposing views. Specifically, respond directly to each point made by the conservative and neutral analysts, countering with data-driven rebuttals and persuasive reasoning. Highlight where their caution might miss critical opportunities or where their assumptions may be overly conservative. Here is the trader's decision:

{trader_decision}

Structured research signal: {summarize_structured_signal(research_structured)}
Structured trader signal: {summarize_structured_signal(trader_structured)}
{decision_context}

Your task is to create a compelling case for the trader's decision by questioning and critiquing the conservative and neutral stances to demonstrate why your high-reward perspective offers the best path forward. Incorporate insights from the following sources into your arguments:

Market Research Report: {market_research_report}
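All three risk debators lean on the same two `agent_utils` helpers: `use_compact_analysis_prompt()` to pick the short prompt branch and `truncate_prompt_text(text, n)` to enforce per-section character budgets. Neither implementation appears in this diff, so the following is a sketch under assumptions; in particular, the environment-variable name is invented:

```python
import os


def use_compact_analysis_prompt():
    # Flag name is an assumption; any truthy value enables compact prompts
    return os.getenv("TRADINGAGENTS_COMPACT_PROMPTS", "").strip().lower() in {"1", "true", "yes"}


def truncate_prompt_text(text, max_chars):
    # Clip to a character budget and mark the cut so the model knows the report is partial
    text = str(text or "")
    if len(text) <= max_chars:
        return text
    return text[:max_chars].rstrip() + " ...[truncated]"
```

Marking the truncation point explicitly is a small but useful detail: without it, a clipped report can read as a complete (and misleading) one.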
@ -1,4 +1,11 @@
from tradingagents.agents.utils.agent_utils import (
build_optional_decision_context,
summarize_structured_signal,
truncate_prompt_text,
use_compact_analysis_prompt,
)


def create_conservative_debator(llm):
def conservative_node(state) -> dict:
@ -15,11 +22,40 @@ def create_conservative_debator(llm):
fundamentals_report = state["fundamentals_report"]

trader_decision = state["trader_investment_plan"]
trader_structured = state.get("trader_investment_plan_structured") or {}
research_structured = state.get("investment_plan_structured") or {}
decision_context = build_optional_decision_context(
state.get("portfolio_context", ""),
state.get("peer_context", ""),
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
max_chars=400,
)

prompt = f"""As the Conservative Risk Analyst, your primary objective is to protect assets, minimize volatility, and ensure steady, reliable growth. You prioritize stability, security, and risk mitigation, carefully assessing potential losses, economic downturns, and market volatility. When evaluating the trader's decision or plan, critically examine high-risk elements, pointing out where the decision may expose the firm to undue risk and where more cautious alternatives could secure long-term gains. Here is the trader's decision:
if use_compact_analysis_prompt():
prompt = f"""You are the Conservative Risk Analyst. Focus on downside protection and capital preservation.

Research signal: {summarize_structured_signal(research_structured)}
Trader signal: {summarize_structured_signal(trader_structured)}
Trader decision: {truncate_prompt_text(trader_decision, 500)}
{decision_context}
Market report: {truncate_prompt_text(market_research_report, 500)}
Sentiment report: {truncate_prompt_text(sentiment_report, 350)}
News report: {truncate_prompt_text(news_report, 350)}
Fundamentals report: {truncate_prompt_text(fundamentals_report, 450)}
Debate history: {truncate_prompt_text(history, 500)}
Last aggressive: {truncate_prompt_text(current_aggressive_response, 300)}
Last neutral: {truncate_prompt_text(current_neutral_response, 300)}

Keep it under 180 words and focus on 2-3 main risks."""
else:
prompt = f"""As the Conservative Risk Analyst, your primary objective is to protect assets, minimize volatility, and ensure steady, reliable growth. You prioritize stability, security, and risk mitigation, carefully assessing potential losses, economic downturns, and market volatility. When evaluating the trader's decision or plan, critically examine high-risk elements, pointing out where the decision may expose the firm to undue risk and where more cautious alternatives could secure long-term gains. Here is the trader's decision:

{trader_decision}

Structured research signal: {summarize_structured_signal(research_structured)}
Structured trader signal: {summarize_structured_signal(trader_structured)}
{decision_context}

Your task is to actively counter the arguments of the Aggressive and Neutral Analysts, highlighting where their views may overlook potential threats or fail to prioritize sustainability. Respond directly to their points, drawing from the following data sources to build a convincing case for a low-risk approach adjustment to the trader's decision:

Market Research Report: {market_research_report}
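Every risk debator builds its `decision_context` through the same `build_optional_decision_context(...)` call, passing possibly-empty portfolio and peer strings plus a `peer_context_mode` and a `max_chars` budget. Its implementation is not in this diff; a minimal sketch of what such a helper could look like (the labels and clipping behavior are assumptions):

```python
def build_optional_decision_context(portfolio_context, peer_context,
                                    peer_context_mode="UNSPECIFIED", max_chars=400):
    # Emit labelled blocks only for the contexts that are actually present,
    # so an empty context contributes nothing to the prompt
    parts = []
    if str(portfolio_context).strip():
        parts.append(f"Portfolio context: {str(portfolio_context).strip()[:max_chars]}")
    if str(peer_context).strip():
        parts.append(f"Peer context ({peer_context_mode}): {str(peer_context).strip()[:max_chars]}")
    return "\n".join(parts)
```

Returning an empty string when both inputs are blank matters here, since `{decision_context}` is interpolated unconditionally into every prompt variant.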
@ -1,4 +1,11 @@
from tradingagents.agents.utils.agent_utils import (
build_optional_decision_context,
summarize_structured_signal,
truncate_prompt_text,
use_compact_analysis_prompt,
)


def create_neutral_debator(llm):
def neutral_node(state) -> dict:
@ -15,11 +22,40 @@ def create_neutral_debator(llm):
fundamentals_report = state["fundamentals_report"]

trader_decision = state["trader_investment_plan"]
trader_structured = state.get("trader_investment_plan_structured") or {}
research_structured = state.get("investment_plan_structured") or {}
decision_context = build_optional_decision_context(
state.get("portfolio_context", ""),
state.get("peer_context", ""),
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
max_chars=400,
)

prompt = f"""As the Neutral Risk Analyst, your role is to provide a balanced perspective, weighing both the potential benefits and risks of the trader's decision or plan. You prioritize a well-rounded approach, evaluating the upsides and downsides while factoring in broader market trends, potential economic shifts, and diversification strategies. Here is the trader's decision:
if use_compact_analysis_prompt():
prompt = f"""You are the Neutral Risk Analyst. Balance upside and downside and prefer robust execution.

Research signal: {summarize_structured_signal(research_structured)}
Trader signal: {summarize_structured_signal(trader_structured)}
Trader decision: {truncate_prompt_text(trader_decision, 500)}
{decision_context}
Market report: {truncate_prompt_text(market_research_report, 500)}
Sentiment report: {truncate_prompt_text(sentiment_report, 350)}
News report: {truncate_prompt_text(news_report, 350)}
Fundamentals report: {truncate_prompt_text(fundamentals_report, 450)}
Debate history: {truncate_prompt_text(history, 500)}
Last aggressive: {truncate_prompt_text(current_aggressive_response, 300)}
Last conservative: {truncate_prompt_text(current_conservative_response, 300)}

Keep it under 180 words and argue for the most balanced path."""
else:
prompt = f"""As the Neutral Risk Analyst, your role is to provide a balanced perspective, weighing both the potential benefits and risks of the trader's decision or plan. You prioritize a well-rounded approach, evaluating the upsides and downsides while factoring in broader market trends, potential economic shifts, and diversification strategies. Here is the trader's decision:

{trader_decision}

Structured research signal: {summarize_structured_signal(research_structured)}
Structured trader signal: {summarize_structured_signal(trader_structured)}
{decision_context}

Your task is to challenge both the Aggressive and Conservative Analysts, pointing out where each perspective may be overly optimistic or overly cautious. Use insights from the following data sources to support a moderate, sustainable strategy to adjust the trader's decision:

Market Research Report: {market_research_report}
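The `summarize_structured_signal(...)` calls above compress the structured plan dicts (`investment_plan_structured`, `trader_investment_plan_structured`) into a single prompt line, and they must also cope with the `or {}` empty-dict fallback. A sketch assuming the dicts carry `rating` and `confidence` keys (key names are assumptions; the real schema lives in `agent_utils`):

```python
def summarize_structured_signal(signal):
    # Collapse a structured plan dict into one prompt-friendly line;
    # an empty dict (the `or {}` fallback upstream) reads as "none"
    if not signal:
        return "none"
    rating = signal.get("rating", "UNRATED")
    confidence = signal.get("confidence", "n/a")
    return f"{rating} (confidence: {confidence})"
```

Keeping the summary to one short line is the point: it lets the compact prompts carry the upstream verdicts without re-pasting whole plans.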
@ -1,6 +1,11 @@
import functools

from tradingagents.agents.utils.agent_utils import build_instrument_context
from tradingagents.agents.utils.agent_utils import (
build_instrument_context,
build_optional_decision_context,
summarize_structured_signal,
)
from tradingagents.agents.utils.decision_utils import build_structured_decision


def create_trader(llm, memory):
@@ -12,6 +17,9 @@ def create_trader(llm, memory):
        sentiment_report = state["sentiment_report"]
        news_report = state["news_report"]
        fundamentals_report = state["fundamentals_report"]
        portfolio_context = state.get("portfolio_context", "")
        peer_context = state.get("peer_context", "")
        research_plan_structured = state.get("investment_plan_structured") or {}

        curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
        past_memories = memory.get_memories(curr_situation, n_matches=2)
@@ -23,24 +31,55 @@ def create_trader(llm, memory):
        else:
            past_memory_str = "No past memories found."

        decision_context = build_optional_decision_context(
            portfolio_context,
            peer_context,
            peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
            max_chars=500,
        )
        context = {
            "role": "user",
            "content": f"Based on a comprehensive analysis by a team of analysts, here is an investment plan tailored for {company_name}. {instrument_context} This plan incorporates insights from current technical market trends, macroeconomic indicators, and social media sentiment. Use this plan as a foundation for evaluating your next trading decision.\n\nProposed Investment Plan: {investment_plan}\n\nLeverage these insights to make an informed and strategic decision.",
            "content": (
                f"Based on a comprehensive analysis by a team of analysts, here is an investment plan tailored for {company_name}. "
                f"{instrument_context} This plan incorporates insights from current technical market trends, macroeconomic indicators, and social media sentiment. "
                "Use this plan as a foundation for evaluating your next trading decision.\n\n"
                f"Research signal summary: {summarize_structured_signal(research_plan_structured)}\n"
                f"{decision_context}\n\n"
                f"Proposed Investment Plan: {investment_plan}\n\n"
                "Leverage these insights to make an informed and strategic decision."
            ),
        }

        messages = [
            {
                "role": "system",
                "content": f"""You are a trading agent analyzing market data to make investment decisions. Based on your analysis, provide a specific recommendation to buy, sell, or hold. End with a firm decision and always conclude your response with 'FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL**' to confirm your recommendation. Apply lessons from past decisions to strengthen your analysis. Here are reflections from similar situations you traded in and the lessons learned: {past_memory_str}""",
                "content": (
                    "You are a trading agent analyzing market data to make investment decisions. "
                    "Based on your analysis, provide a specific recommendation to buy, sell, or hold. "
                    "Include a machine-readable line formatted exactly as `TRADER_RATING: BUY|HOLD|SELL` and "
                    "always conclude your response with `FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL**`. "
                    "Do not emit tool calls or ask for more data. "
                    f"Apply lessons from past decisions to strengthen your analysis. Here are reflections from similar situations you traded in and the lessons learned: {past_memory_str}"
                ),
            },
            context,
        ]

        result = llm.invoke(messages)
        structured_plan = build_structured_decision(
            result.content,
            default_rating="HOLD",
            peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
            context_usage={
                "portfolio_context": bool(str(portfolio_context).strip()),
                "peer_context": bool(str(peer_context).strip()),
            },
        )

        return {
            "messages": [result],
            "trader_investment_plan": result.content,
            "trader_investment_plan_structured": structured_plan,
            "sender": name,
        }
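The system prompt above asks the model for two machine-readable markers. A minimal sketch of how a downstream consumer could pull them back out; `parse_trader_markers` is a hypothetical helper for illustration, not the repository's actual parser (which is `build_structured_decision`):

```python
import re

def parse_trader_markers(response_text: str) -> dict:
    """Extract the TRADER_RATING and FINAL TRANSACTION PROPOSAL markers from a reply."""
    rating = re.search(r"TRADER_RATING:\s*(BUY|HOLD|SELL)", response_text)
    proposal = re.search(r"FINAL TRANSACTION PROPOSAL:\s*\*\*(BUY|HOLD|SELL)\*\*", response_text)
    return {
        "trader_rating": rating.group(1) if rating else None,
        "final_proposal": proposal.group(1) if proposal else None,
    }

reply = (
    "Momentum is strong and fundamentals support an entry.\n"
    "TRADER_RATING: BUY\n"
    "FINAL TRANSACTION PROPOSAL: **BUY**"
)
print(parse_trader_markers(reply))  # both markers resolve to BUY
```

Requiring an exact format makes the extraction a fixed regex rather than another LLM call, which is why the prompt spells out the marker syntax verbatim.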
@@ -1,10 +1,33 @@
from typing import Annotated
from typing_extensions import TypedDict
from typing import Annotated, Any, Mapping, Optional
from typing_extensions import NotRequired, TypedDict
from langgraph.graph import MessagesState


RESEARCH_PROVENANCE_FIELDS = (
    "research_status",
    "research_mode",
    "timed_out_nodes",
    "degraded_reason",
    "covered_dimensions",
    "manager_confidence",
)


def extract_research_provenance(
    debate_state: Mapping[str, Any] | None,
) -> dict[str, Any] | None:
    if not isinstance(debate_state, Mapping):
        return None
    metadata = {
        key: debate_state.get(key)
        for key in RESEARCH_PROVENANCE_FIELDS
        if key in debate_state
    }
    return metadata or None


# Researcher team state
class InvestDebateState(TypedDict):
class InvestDebateState(TypedDict, total=False):
    bull_history: Annotated[
        str, "Bullish Conversation history"
    ]  # Bullish Conversation history
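How `extract_research_provenance` behaves can be seen with a standalone copy of the helper (reproduced here so the sketch runs on its own):

```python
from typing import Mapping

RESEARCH_PROVENANCE_FIELDS = (
    "research_status", "research_mode", "timed_out_nodes",
    "degraded_reason", "covered_dimensions", "manager_confidence",
)

def extract_research_provenance(debate_state):
    # Non-mapping inputs (None, strings) yield no provenance.
    if not isinstance(debate_state, Mapping):
        return None
    metadata = {k: debate_state.get(k) for k in RESEARCH_PROVENANCE_FIELDS if k in debate_state}
    return metadata or None

# Only provenance keys actually present in the debate state are copied out.
state = {"bull_history": "...", "research_status": "degraded", "timed_out_nodes": ["social"]}
print(extract_research_provenance(state))
# States with no provenance keys collapse to None rather than an empty dict.
print(extract_research_provenance({"bull_history": "..."}))
```

Returning `None` instead of `{}` lets callers use a simple truthiness check to decide whether to attach provenance metadata at all.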
@@ -15,6 +38,12 @@ class InvestDebateState(TypedDict):
    current_response: Annotated[str, "Latest response"]  # Last response
    judge_decision: Annotated[str, "Final judge decision"]  # Final judge decision
    count: Annotated[int, "Length of the current conversation"]  # Conversation length
    research_status: NotRequired[Annotated[str, "Research stage status: full/degraded/failed"]]
    research_mode: NotRequired[Annotated[str, "Research mode: debate/degraded_synthesis"]]
    timed_out_nodes: NotRequired[Annotated[list[str], "Research nodes that timed out"]]
    degraded_reason: NotRequired[Annotated[Optional[str], "Research degradation reason"]]
    covered_dimensions: NotRequired[Annotated[list[str], "Research dimensions covered so far"]]
    manager_confidence: NotRequired[Annotated[Optional[float], "Research manager confidence"]]


# Risk management team state
@@ -46,6 +75,9 @@ class RiskDebateState(TypedDict):
class AgentState(MessagesState):
    company_of_interest: Annotated[str, "Company that we are interested in trading"]
    trade_date: Annotated[str, "What date we are trading at"]
    portfolio_context: Annotated[str, "Optional portfolio/account context for this analysis"]
    peer_context: Annotated[str, "Optional same-theme or peer ranking context for this analysis"]
    peer_context_mode: Annotated[str, "Mode describing whether peer_context is same-theme normalized or only a book snapshot"]

    sender: Annotated[str, "Agent that sent this message"]
@@ -62,11 +94,21 @@ class AgentState(MessagesState):
        InvestDebateState, "Current state of the debate on if to invest or not"
    ]
    investment_plan: Annotated[str, "Plan generated by the Analyst"]
    investment_plan_structured: Annotated[
        Mapping[str, Any], "Structured metadata extracted from the research-manager decision"
    ]

    trader_investment_plan: Annotated[str, "Plan generated by the Trader"]
    trader_investment_plan_structured: Annotated[
        Mapping[str, Any], "Structured metadata extracted from the trader decision"
    ]

    # risk management team discussion step
    risk_debate_state: Annotated[
        RiskDebateState, "Current state of the debate on evaluating risk"
    ]
    final_trade_decision: Annotated[str, "Final decision made by the Risk Analysts"]
    final_trade_decision_report: Annotated[str, "Human-readable final decision report"]
    final_trade_decision_structured: Annotated[
        Mapping[str, Any], "Structured metadata extracted from the portfolio-manager decision"
    ]
@@ -1,3 +1,5 @@
from typing import Any, Mapping

from langchain_core.messages import HumanMessage, RemoveMessage

# Import tools from separate utility files
@@ -34,6 +36,80 @@ def get_language_instruction() -> str:
    return f" Write your entire response in {lang}."


def use_compact_analysis_prompt() -> bool:
    """Return whether analysts should use shorter prompts/reports.

    This is helpful for OpenAI-compatible or Anthropic-compatible backends
    that support the API surface but struggle with the repository's original,
    very verbose analyst instructions.
    """
    from tradingagents.dataflows.config import get_config

    mode = str(get_config().get("analysis_prompt_style", "standard")).strip().lower()
    return mode in {"compact", "fast", "minimax"}


def truncate_prompt_text(text: str, max_chars: int = 1200) -> str:
    """Trim long reports/history before feeding them into compact prompts."""
    text = (text or "").strip()
    if len(text) <= max_chars:
        return text
    return text[:max_chars].rstrip() + "\n...[truncated]..."


def build_optional_decision_context(
    portfolio_context: str | None,
    peer_context: str | None,
    *,
    peer_context_mode: str = "UNSPECIFIED",
    max_chars: int = 700,
) -> str:
    sections: list[str] = []
    if str(portfolio_context or "").strip():
        sections.append(
            f"Portfolio context: {truncate_prompt_text(str(portfolio_context), max_chars)}"
        )
    if str(peer_context or "").strip():
        mode = str(peer_context_mode or "UNSPECIFIED").strip().upper()
        if mode == "SAME_THEME_NORMALIZED":
            sections.append(
                "Peer context mode: SAME_THEME_NORMALIZED. "
                "You may use this context when deciding SAME_THEME_RANK if the evidence is explicit."
            )
            sections.append(
                f"Peer / same-theme context: {truncate_prompt_text(str(peer_context), max_chars)}"
            )
        else:
            sections.append(
                f"Peer context mode: {mode}. This context is not same-theme normalized. "
                "Treat SAME_THEME_RANK as UNKNOWN unless the context itself contains explicit same-theme evidence."
            )
            sections.append(
                f"Peer universe context: {truncate_prompt_text(str(peer_context), max_chars)}"
            )
    return "\n".join(sections)


def summarize_structured_signal(payload: Mapping[str, Any] | None) -> str:
    if not payload:
        return "rating=UNKNOWN"

    parts = [f"rating={payload.get('rating', 'UNKNOWN')}"]
    hold_subtype = payload.get("hold_subtype")
    if hold_subtype and hold_subtype != "N/A":
        parts.append(f"hold_subtype={hold_subtype}")
    entry_style = payload.get("entry_style")
    if entry_style and entry_style != "UNKNOWN":
        parts.append(f"entry_style={entry_style}")
    same_theme_rank = payload.get("same_theme_rank")
    if same_theme_rank and same_theme_rank != "UNKNOWN":
        parts.append(f"same_theme_rank={same_theme_rank}")
    account_fit = payload.get("account_fit")
    if account_fit and account_fit != "UNKNOWN":
        parts.append(f"account_fit={account_fit}")
    return ", ".join(parts)


def build_instrument_context(ticker: str) -> str:
    """Describe the exact instrument so agents preserve exchange-qualified tickers."""
    return (
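A quick sketch of the two compact-prompt helpers in action, using trimmed standalone copies (the full versions above also report `entry_style`, `same_theme_rank`, and `account_fit`):

```python
def truncate_prompt_text(text, max_chars=1200):
    """Trim long reports/history before feeding them into compact prompts."""
    text = (text or "").strip()
    if len(text) <= max_chars:
        return text
    return text[:max_chars].rstrip() + "\n...[truncated]..."

def summarize_structured_signal(payload):
    """Render a structured signal dict as a one-line summary for prompts."""
    if not payload:
        return "rating=UNKNOWN"
    parts = [f"rating={payload.get('rating', 'UNKNOWN')}"]
    hold_subtype = payload.get("hold_subtype")
    if hold_subtype and hold_subtype != "N/A":
        parts.append(f"hold_subtype={hold_subtype}")
    return ", ".join(parts)

# Missing or empty signals degrade to an explicit UNKNOWN marker.
print(summarize_structured_signal(None))
print(summarize_structured_signal({"rating": "HOLD", "hold_subtype": "WAIT_FOR_PULLBACK"}))
# Over-long reports are clipped with a visible truncation marker.
print(truncate_prompt_text("x" * 50, max_chars=10))
```

The truncation marker is deliberately loud so the model knows the report was clipped rather than silently short.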
@@ -6,7 +6,7 @@ from tradingagents.dataflows.interface import route_to_vendor
@tool
def get_stock_data(
    symbol: Annotated[str, "ticker symbol of the company"],
    start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
    start_date: Annotated[str, "Start date in yyyy-mm-dd format. Prefer recent windows unless a longer history is strictly necessary."],
    end_date: Annotated[str, "End date in yyyy-mm-dd format"],
) -> str:
    """
@@ -14,7 +14,7 @@ def get_stock_data(
    Uses the configured core_stock_apis vendor.
    Args:
        symbol (str): Ticker symbol of the company, e.g. AAPL, TSM
        start_date (str): Start date in yyyy-mm-dd format
        start_date (str): Start date in yyyy-mm-dd format. Prefer recent windows unless a longer history is strictly necessary.
        end_date (str): End date in yyyy-mm-dd format
    Returns:
        str: A formatted dataframe containing the stock price data for the specified ticker symbol in the specified date range.
@@ -0,0 +1,69 @@
from __future__ import annotations

import re
from typing import Any, Iterable

CANONICAL_RATINGS = ("BUY", "OVERWEIGHT", "HOLD", "UNDERWEIGHT", "SELL")
_RATING_PATTERN = re.compile(
    r"\b(BUY|OVERWEIGHT|HOLD|UNDERWEIGHT|SELL)\b",
    re.IGNORECASE,
)


def extract_rating(text: str) -> str | None:
    match = _RATING_PATTERN.search(str(text or ""))
    if not match:
        return None
    return match.group(1).upper()


def _normalize_report_text(rating: str, rating_source: str, report_text: str) -> str:
    body = str(report_text or "").strip() or "No narrative provided."
    return (
        "## Normalized Portfolio Decision\n"
        f"- Rating: {rating}\n"
        f"- Rating Source: {rating_source}\n\n"
        f"{body}"
    )


def build_structured_decision(
    text: str,
    *,
    fallback_candidates: Iterable[tuple[str, str]] = (),
    default_rating: str = "HOLD",
    peer_context_mode: str = "UNSPECIFIED",
    context_usage: dict[str, Any] | None = None,
) -> dict[str, Any]:
    warnings: list[str] = []
    rating_source = "direct"
    rating = extract_rating(text)
    source_text = str(text or "")

    if rating is None:
        for candidate_name, candidate_text in fallback_candidates:
            rating = extract_rating(candidate_text)
            if rating is not None:
                rating_source = candidate_name
                source_text = str(candidate_text or "")
                warnings.append(f"rating_inferred_from:{candidate_name}")
                break

    if rating is None:
        rating = str(default_rating or "HOLD").upper()
        rating_source = "default"
        warnings.append("rating_defaulted")

    usage = context_usage or {}
    hold_subtype = "UNSPECIFIED" if rating == "HOLD" else "N/A"

    return {
        "rating": rating,
        "hold_subtype": hold_subtype,
        "rating_source": rating_source,
        "report_text": _normalize_report_text(rating, rating_source, source_text),
        "warnings": warnings,
        "portfolio_context_used": bool(usage.get("portfolio_context")),
        "peer_context_used": bool(usage.get("peer_context")),
        "peer_context_mode": str(peer_context_mode or "UNSPECIFIED"),
    }
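The rating extraction that drives the three-stage fallback (direct, candidate texts, default) can be exercised on its own; this is a verbatim standalone copy of the regex and `extract_rating` from the new module:

```python
import re

_RATING_PATTERN = re.compile(r"\b(BUY|OVERWEIGHT|HOLD|UNDERWEIGHT|SELL)\b", re.IGNORECASE)

def extract_rating(text):
    """Return the first canonical rating keyword found in the text, uppercased."""
    match = _RATING_PATTERN.search(str(text or ""))
    return match.group(1).upper() if match else None

# Direct hit: the first canonical keyword wins, case-insensitively.
print(extract_rating("I lean overweight given the setup."))
# No keyword at all -> None; build_structured_decision then scans the
# fallback candidates and finally falls back to the default rating,
# appending a "rating_defaulted" warning.
print(extract_rating("No clear signal here."))
```

The `\b` word boundaries matter: they stop the pattern from firing on substrings like "sellers" or "buyback", so only standalone rating words are treated as decisions.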
@@ -0,0 +1,99 @@
from __future__ import annotations

import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError
from typing import Any


def _invoke_dimension(llm, dimension: str, prompt: str) -> dict[str, Any]:
    started_at = time.monotonic()
    try:
        response = llm.invoke(prompt)
        content = response.content if hasattr(response, "content") else str(response)
        return {
            "dimension": dimension,
            "content": str(content).strip(),
            "ok": True,
            "error": None,
            "elapsed_s": round(time.monotonic() - started_at, 3),
        }
    except Exception as exc:
        return {
            "dimension": dimension,
            "content": "",
            "ok": False,
            "error": str(exc),
            "elapsed_s": round(time.monotonic() - started_at, 3),
        }


def run_parallel_subagents(
    *,
    llm,
    dimension_configs: list[dict[str, Any]],
    timeout_per_subagent: float = 25.0,
    max_workers: int = 4,
) -> list[dict[str, Any]]:
    if not dimension_configs:
        return []

    executor = ThreadPoolExecutor(max_workers=max_workers)
    futures = {
        executor.submit(
            _invoke_dimension,
            llm,
            config["dimension"],
            config["prompt"],
        ): config["dimension"]
        for config in dimension_configs
    }

    results: list[dict[str, Any]] = []
    try:
        for future, dimension in futures.items():
            try:
                results.append(future.result(timeout=timeout_per_subagent))
            except TimeoutError:
                results.append(
                    {
                        "dimension": dimension,
                        "content": "",
                        "ok": False,
                        "error": "timeout",
                        "elapsed_s": round(timeout_per_subagent, 3),
                    }
                )
    finally:
        executor.shutdown(wait=False, cancel_futures=True)

    return results


def synthesize_subagent_results(
    subagent_results: list[dict[str, Any]],
    *,
    max_chars_per_result: int = 200,
) -> tuple[str, dict[str, Any]]:
    lines: list[str] = []
    timings: dict[str, float] = {}
    failures: dict[str, str] = {}

    for result in subagent_results:
        dimension = str(result.get("dimension") or "unknown")
        timings[dimension] = float(result.get("elapsed_s") or 0.0)

        content = str(result.get("content") or "").strip()
        if not result.get("ok"):
            failure_reason = str(result.get("error") or "unknown error")
            failures[dimension] = failure_reason
            content = f"[UNAVAILABLE: {failure_reason}]"

        if len(content) > max_chars_per_result:
            content = f"{content[:max_chars_per_result - 3]}..."

        lines.append(f"[{dimension.upper()}]\n{content or '[NO OUTPUT]'}")

    return "\n\n".join(lines), {
        "subagent_timings": timings,
        "failures": failures,
    }
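The fan-out-with-per-future-timeout pattern used by `run_parallel_subagents` can be sketched with a plain function standing in for `llm.invoke` (the dimension names and delays here are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def invoke_dimension(dimension, delay):
    # Stand-in for one subagent LLM call on a research dimension.
    time.sleep(delay)
    return {"dimension": dimension, "ok": True}

configs = [("fundamentals", 0.0), ("macro", 0.8)]  # macro is deliberately slow
executor = ThreadPoolExecutor(max_workers=2)
futures = {executor.submit(invoke_dimension, d, s): d for d, s in configs}

results = []
for future, dimension in futures.items():  # dict preserves submission order
    try:
        results.append(future.result(timeout=0.2))
    except TimeoutError:
        # A slow dimension degrades to a marked failure instead of blocking the run.
        results.append({"dimension": dimension, "ok": False, "error": "timeout"})
executor.shutdown(wait=False, cancel_futures=True)
print(results)
```

Note the trade-off the real module also makes: `cancel_futures=True` only cancels queued work, so an already-running slow call keeps its thread until it finishes; the orchestrator simply stops waiting for it.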
@@ -5,20 +5,20 @@ from tradingagents.dataflows.interface import route_to_vendor
@tool
def get_indicators(
    symbol: Annotated[str, "ticker symbol of the company"],
    indicator: Annotated[str, "technical indicator to get the analysis and report of"],
    indicator: Annotated[str, "technical indicator name or a comma-separated list of indicator names for batch retrieval"],
    curr_date: Annotated[str, "The current trading date you are trading on, YYYY-mm-dd"],
    look_back_days: Annotated[int, "how many days to look back"] = 30,
) -> str:
    """
    Retrieve a single technical indicator for a given ticker symbol.
    Retrieve one or more technical indicators for a given ticker symbol.
    Uses the configured technical_indicators vendor.
    Args:
        symbol (str): Ticker symbol of the company, e.g. AAPL, TSM
        indicator (str): A single technical indicator name, e.g. 'rsi', 'macd'. Call this tool once per indicator.
        indicator (str): One technical indicator name, e.g. 'rsi', 'macd', or a comma-separated batch such as 'macd,rsi,atr,close_50_sma'.
        curr_date (str): The current trading date you are trading on, YYYY-mm-dd
        look_back_days (int): How many days to look back, default is 30
    Returns:
        str: A formatted dataframe containing the technical indicators for the specified ticker symbol and indicator.
        str: A formatted dataframe containing the requested technical indicator output(s). Batch requests are recommended to reduce repeated tool calls.
    """
    # LLMs sometimes pass multiple indicators as a comma-separated string;
    # split and process each individually.
@@ -29,4 +29,4 @@ def get_indicators(
            results.append(route_to_vendor("get_indicators", symbol, ind, curr_date, look_back_days))
        except ValueError as e:
            results.append(str(e))
    return "\n\n".join(results)
    return "\n\n".join(results)
@@ -0,0 +1,8 @@
from .interface import DEFAULT_DATAFLOW_ADAPTER, DataflowAdapter, VendorSelection, route_to_vendor

__all__ = [
    "DEFAULT_DATAFLOW_ADAPTER",
    "DataflowAdapter",
    "VendorSelection",
    "route_to_vendor",
]
@@ -0,0 +1,16 @@
"""
china_data vendor for TradingAgents dataflows.

NOTE: This stub exists because the actual china_data implementation (akshare-based)
lives in web_dashboard/backend/china_data.py, not here. The tradingagents package
does not currently ship with a china_data vendor implementation.

To use china_data functionality, run analysis through the web dashboard where
akshare is available as a data source.
"""
from typing import Optional


def __getattr__(name: str):
    # Return None for all china_data imports so interface.py can handle them gracefully
    return None
@@ -9,15 +9,29 @@ def initialize_config():
    """Initialize the configuration with default values."""
    global _config
    if _config is None:
        _config = default_config.DEFAULT_CONFIG.copy()
        _config = default_config.get_default_config()


def _merge_config(base: Dict, overrides: Dict) -> Dict:
    merged = dict(base)
    for key, value in overrides.items():
        if (
            key in ("data_vendors", "tool_vendors")
            and isinstance(value, dict)
            and isinstance(merged.get(key), dict)
        ):
            merged[key] = {**merged[key], **value}
        else:
            merged[key] = value
    return merged


def set_config(config: Dict):
    """Update the configuration with custom values."""
    global _config
    if _config is None:
        _config = default_config.DEFAULT_CONFIG.copy()
    _config.update(config)
        _config = default_config.get_default_config()
    _config = _merge_config(_config, config)


def get_config() -> Dict:
@@ -1,4 +1,5 @@
from typing import Annotated
from dataclasses import dataclass
from typing import Annotated, Any

# Import from vendor-specific modules
from .y_finance import (
@@ -24,6 +25,42 @@ from .alpha_vantage import (
)
from .alpha_vantage_common import AlphaVantageRateLimitError

# Lazy china_data import: only fails at runtime if akshare is missing and the china_data vendor is selected
try:
    from .china_data import (
        get_china_data_online,
        get_indicators_china,
        get_china_stock_info,
        get_china_financials,
        get_china_news,
        get_china_market_news,
        # Wrappers matching caller signatures:
        get_china_fundamentals,
        get_china_balance_sheet,
        get_china_cashflow,
        get_china_income_statement,
        get_china_news_wrapper,
        get_china_global_news_wrapper,
        get_china_insider_transactions,
    )
    _china_data_available = True
except (ImportError, AttributeError):
    _china_data_available = False
    get_china_data_online = None
    get_indicators_china = None
    get_china_stock_info = None
    get_china_financials = None
    get_china_news = None
    get_china_market_news = None
    get_china_fundamentals = None
    get_china_balance_sheet = None
    get_china_cashflow = None
    get_china_income_statement = None
    get_china_news_wrapper = None
    get_china_global_news_wrapper = None
    get_china_insider_transactions = None


# Configuration and routing logic
from .config import get_config
@@ -31,15 +68,11 @@ from .config import get_config
TOOLS_CATEGORIES = {
    "core_stock_apis": {
        "description": "OHLCV stock price data",
        "tools": [
            "get_stock_data"
        ]
        "tools": ["get_stock_data"],
    },
    "technical_indicators": {
        "description": "Technical analysis indicators",
        "tools": [
            "get_indicators"
        ]
        "tools": ["get_indicators"],
    },
    "fundamental_data": {
        "description": "Company fundamentals",
@@ -47,8 +80,8 @@ TOOLS_CATEGORIES = {
            "get_fundamentals",
            "get_balance_sheet",
            "get_cashflow",
            "get_income_statement"
        ]
            "get_income_statement",
        ],
    },
    "news_data": {
        "description": "News and insider data",
@@ -56,17 +89,19 @@ TOOLS_CATEGORIES = {
            "get_news",
            "get_global_news",
            "get_insider_transactions",
        ]
    }
        ],
    },
}

VENDOR_LIST = [
    "yfinance",
    "alpha_vantage",
    *(["china_data"] if _china_data_available else []),
]

# Mapping of methods to their vendor-specific implementations
VENDOR_METHODS = {
# china_data entries are only present if akshare is installed (_china_data_available)
_base_vendor_methods = {
    # core_stock_apis
    "get_stock_data": {
        "alpha_vantage": get_alpha_vantage_stock,
@@ -109,6 +144,22 @@ VENDOR_METHODS = {
    },
}

# Conditionally add china_data vendor only if akshare is available
if _china_data_available:
    _base_vendor_methods["get_stock_data"]["china_data"] = get_china_data_online
    _base_vendor_methods["get_indicators"]["china_data"] = get_indicators_china
    _base_vendor_methods["get_fundamentals"]["china_data"] = get_china_fundamentals
    _base_vendor_methods["get_balance_sheet"]["china_data"] = get_china_balance_sheet
    _base_vendor_methods["get_cashflow"]["china_data"] = get_china_cashflow
    _base_vendor_methods["get_income_statement"]["china_data"] = get_china_income_statement
    _base_vendor_methods["get_news"]["china_data"] = get_china_news_wrapper
    _base_vendor_methods["get_global_news"]["china_data"] = get_china_global_news_wrapper
    _base_vendor_methods["get_insider_transactions"]["china_data"] = get_china_insider_transactions

VENDOR_METHODS = _base_vendor_methods
del _base_vendor_methods


def get_category_for_method(method: str) -> str:
    """Get the category that contains the specified method."""
    for category, info in TOOLS_CATEGORIES.items():
@@ -116,6 +167,7 @@ def get_category_for_method(method: str) -> str:
        return category
    raise ValueError(f"Method '{method}' not found in any category")


def get_vendor(category: str, method: str = None) -> str:
    """Get the configured vendor for a data category or specific tool method.
    Tool-level configuration takes precedence over category-level.
@@ -131,32 +183,63 @@ def get_vendor(category: str, method: str = None) -> str:
    # Fall back to category-level configuration
    return config.get("data_vendors", {}).get(category, "default")


@dataclass(frozen=True)
class VendorSelection:
    """Resolved vendor routing metadata for one dataflow method call."""

    method: str
    category: str
    configured_vendors: tuple[str, ...]
    fallback_chain: tuple[str, ...]


class DataflowAdapter:
    """Thin adapter boundary over legacy vendor routing logic."""

    def resolve(self, method: str) -> VendorSelection:
        category = get_category_for_method(method)
        vendor_config = get_vendor(category, method)
        configured_vendors = tuple(v.strip() for v in vendor_config.split(",") if v.strip())

        if method not in VENDOR_METHODS:
            raise ValueError(f"Method '{method}' not supported")

        all_available_vendors = list(VENDOR_METHODS[method].keys())
        fallback_chain = list(configured_vendors)
        for vendor in all_available_vendors:
            if vendor not in fallback_chain:
                fallback_chain.append(vendor)

        return VendorSelection(
            method=method,
            category=category,
            configured_vendors=configured_vendors,
            fallback_chain=tuple(fallback_chain),
        )

    def execute(self, method: str, *args: Any, **kwargs: Any):
        """Route the call through the configured vendor chain with legacy fallback behavior."""
        selection = self.resolve(method)

        for vendor in selection.fallback_chain:
            if vendor not in VENDOR_METHODS[method]:
                continue

            vendor_impl = VENDOR_METHODS[method][vendor]
            impl_func = vendor_impl[0] if isinstance(vendor_impl, list) else vendor_impl

            try:
                return impl_func(*args, **kwargs)
            except AlphaVantageRateLimitError:
                continue  # Only rate limits trigger fallback

        raise RuntimeError(f"No available vendor for '{method}'")


DEFAULT_DATAFLOW_ADAPTER = DataflowAdapter()


def route_to_vendor(method: str, *args, **kwargs):
    """Route method calls to appropriate vendor implementation with fallback support."""
    category = get_category_for_method(method)
    vendor_config = get_vendor(category, method)
    primary_vendors = [v.strip() for v in vendor_config.split(',')]

    if method not in VENDOR_METHODS:
        raise ValueError(f"Method '{method}' not supported")

    # Build fallback chain: primary vendors first, then remaining available vendors
    all_available_vendors = list(VENDOR_METHODS[method].keys())
    fallback_vendors = primary_vendors.copy()
    for vendor in all_available_vendors:
        if vendor not in fallback_vendors:
            fallback_vendors.append(vendor)

    for vendor in fallback_vendors:
        if vendor not in VENDOR_METHODS[method]:
            continue

        vendor_impl = VENDOR_METHODS[method][vendor]
        impl_func = vendor_impl[0] if isinstance(vendor_impl, list) else vendor_impl

        try:
            return impl_func(*args, **kwargs)
        except AlphaVantageRateLimitError:
            continue  # Only rate limits trigger fallback

    raise RuntimeError(f"No available vendor for '{method}'")
    return DEFAULT_DATAFLOW_ADAPTER.execute(method, *args, **kwargs)
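The fallback-chain construction shared by `DataflowAdapter.resolve` and the legacy `route_to_vendor` can be isolated in a few lines; `build_fallback_chain` is an illustrative name for the inlined logic, not a function the module exports:

```python
def build_fallback_chain(configured, available):
    """Configured vendors first (in config order), then any remaining available ones."""
    chain = [v.strip() for v in configured.split(",") if v.strip()]
    for vendor in available:
        if vendor not in chain:
            chain.append(vendor)
    return chain

available = ["yfinance", "alpha_vantage", "china_data"]
# A single configured vendor still gets every other vendor appended as backup.
print(build_fallback_chain("alpha_vantage", available))
# A comma-separated config puts both primaries ahead of the rest, in order.
print(build_fallback_chain("china_data, yfinance", available))
```

Because every available vendor ends up in the chain, `RuntimeError("No available vendor …")` is only reached when each implementation in turn raises `AlphaVantageRateLimitError`; any other exception propagates immediately from the first vendor that raises it.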
@@ -1,8 +1,10 @@
import time
import logging
import threading

import pandas as pd
import yfinance as yf
import requests
from yfinance.exceptions import YFRateLimitError
from stockstats import wrap
from typing import Annotated
@@ -10,6 +12,125 @@ import os
from .config import get_config

logger = logging.getLogger(__name__)
_fallback_session_local = threading.local()


def _get_fallback_session() -> requests.Session:
    session = getattr(_fallback_session_local, "session", None)
    if session is None:
        session = requests.Session()
        session.trust_env = False
        _fallback_session_local.session = session
    return session


def _symbol_to_tencent_code(symbol: str) -> str:
    code, exchange = symbol.upper().split(".")
    if exchange == "SS":
        return f"sh{code}"
    if exchange == "SZ":
        return f"sz{code}"
    raise ValueError(f"Unsupported A-share symbol for Tencent fallback: {symbol}")


def _fetch_tencent_ohlcv(symbol: str, start_date: str, end_date: str) -> pd.DataFrame:
    """Fallback daily OHLCV fetch for A-shares via Tencent."""
    session = _get_fallback_session()
    response = session.get(
        "https://web.ifzq.gtimg.cn/appstock/app/fqkline/get",
        params={
            "param": f"{_symbol_to_tencent_code(symbol)},day,{start_date},{end_date},320,qfq"
        },
        headers={
            "User-Agent": "Mozilla/5.0",
            "Referer": "https://gu.qq.com/",
        },
        timeout=20,
    )
    response.raise_for_status()
    payload = response.json()
    data = ((payload or {}).get("data") or {}).get(_symbol_to_tencent_code(symbol)) or {}
    rows = data.get("qfqday") or data.get("day") or []
    if not rows:
        raise ValueError(f"No Tencent OHLCV data returned for {symbol}")

    parsed = []
    for line in rows:
        # [date, open, close, high, low, volume]
        date_str, open_p, close_p, high_p, low_p, volume = line[:6]
        parsed.append(
            {
                "Date": date_str,
                "Open": float(open_p),
                "High": float(high_p),
                "Low": float(low_p),
                "Close": float(close_p),
                "Volume": float(volume),
            }
        )
    return pd.DataFrame(parsed)


def _symbol_to_eastmoney_secid(symbol: str) -> str:
    code, exchange = symbol.upper().split(".")
    if exchange == "SS":
        return f"1.{code}"
    if exchange in {"SZ", "BJ"}:
        return f"0.{code}"
    raise ValueError(f"Unsupported A-share symbol for Eastmoney fallback: {symbol}")


def _fetch_eastmoney_ohlcv(symbol: str, start_date: str, end_date: str) -> pd.DataFrame:
    """Fallback daily OHLCV fetch for A-shares via Eastmoney."""
    session = _get_fallback_session()
    url = "https://push2his.eastmoney.com/api/qt/stock/kline/get"
    response = session.get(
        url,
        params={
            "secid": _symbol_to_eastmoney_secid(symbol),
            "fields1": "f1,f2,f3,f4,f5,f6",
            "fields2": "f51,f52,f53,f54,f55,f56,f57,f58,f59,f60,f61",
            "klt": "101",
            "fqt": "1",
            "beg": start_date.replace("-", ""),
            "end": end_date.replace("-", ""),
            "ut": "fa5fd1943c7b386f172d6893dbfba10b",
        },
        headers={
            "User-Agent": "Mozilla/5.0",
            "Referer": "https://quote.eastmoney.com/",
        },
        timeout=20,
    )
    response.raise_for_status()
    payload = response.json()
    klines = ((payload or {}).get("data") or {}).get("klines") or []
    if not klines:
        raise ValueError(f"No Eastmoney OHLCV data returned for {symbol}")

    rows = []
    for line in klines:
        date_str, open_p, close_p, high_p, low_p, volume, amount, *_rest = line.split(",")
        rows.append(
            {
                "Date": date_str,
                "Open": float(open_p),
                "High": float(high_p),
                "Low": float(low_p),
                "Close": float(close_p),
                "Volume": float(volume),
                "Amount": float(amount),
            }
        )
    return pd.DataFrame(rows)


def _is_transient_yfinance_error(exc: Exception) -> bool:
    """Heuristic for flaky yfinance transport/parser failures."""
    if isinstance(exc, YFRateLimitError):
        return True
    message = str(exc)
    return isinstance(exc, TypeError) and "'NoneType' object is not subscriptable" in message


def yf_retry(func, max_retries=3, base_delay=2.0):
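The `_get_fallback_session` helper above caches one `requests.Session` per thread via `threading.local()`, so concurrent analysts never share a session. The same pattern can be sketched standalone (a minimal sketch; `factory` stands in for `requests.Session` and is not part of the diff):

```python
import threading

_local = threading.local()

def get_session(factory=object):
    """Return this thread's cached resource, creating it lazily on first use."""
    session = getattr(_local, "session", None)
    if session is None:
        session = factory()  # in the diff: requests.Session() with trust_env=False
        _local.session = session
    return session

# The same thread always gets the same object back...
a = get_session()
b = get_session()
assert a is b

# ...while a different thread gets its own instance.
seen = []
worker = threading.Thread(target=lambda: seen.append(get_session()))
worker.start()
worker.join()
assert seen[0] is not a
```

Per-thread caching sidesteps the fact that `requests.Session` is not guaranteed thread-safe, without paying a new TCP/TLS handshake on every call.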
@@ -17,15 +138,24 @@ def yf_retry(func, max_retries=3, base_delay=2.0):

    yfinance raises YFRateLimitError on HTTP 429 responses but does not
    retry them internally. This wrapper adds retry logic specifically
    for rate limits. Other exceptions propagate immediately.
    for rate limits and observed transient parser failures. Other
    exceptions propagate immediately.
    """
    for attempt in range(max_retries + 1):
        try:
            return func()
        except YFRateLimitError:
        except Exception as exc:
            if not _is_transient_yfinance_error(exc):
                raise
            if attempt < max_retries:
                delay = base_delay * (2 ** attempt)
                logger.warning(f"Yahoo Finance rate limited, retrying in {delay:.0f}s (attempt {attempt + 1}/{max_retries})")
                logger.warning(
                    "Yahoo Finance transient failure (%s), retrying in %.0fs (attempt %s/%s)",
                    exc,
                    delay,
                    attempt + 1,
                    max_retries,
                )
                time.sleep(delay)
            else:
                raise
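The `yf_retry` wrapper above retries transient failures with exponential backoff: the wait grows as `base_delay * 2**attempt`, and the last attempt re-raises. The loop can be sketched in isolation (with the sleep injectable so the sketch runs instantly; the names here are illustrative, not from the diff):

```python
import time

def retry(func, *, max_retries=3, base_delay=2.0,
          is_transient=lambda exc: True, sleep=time.sleep):
    """Call func(); on a transient error, wait base_delay * 2**attempt and retry."""
    for attempt in range(max_retries + 1):
        try:
            return func()
        except Exception as exc:
            if not is_transient(exc) or attempt == max_retries:
                raise
            sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
delays = []

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

# Two failures, then success: backoff delays of 2.0s and 4.0s are requested.
result = retry(flaky, sleep=delays.append)
```

Capping attempts at `max_retries + 1` total calls and re-raising non-transient errors immediately keeps genuine bugs (bad symbols, auth failures) from being masked by the retry loop.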
@@ -53,6 +183,7 @@ def load_ohlcv(symbol: str, curr_date: str) -> pd.DataFrame:
    """
    config = get_config()
    curr_date_dt = pd.to_datetime(curr_date)
    min_acceptable_date = curr_date_dt - pd.Timedelta(days=1)

    # Cache uses a fixed window (15y to today) so one file per symbol
    today_date = pd.Timestamp.today()
@@ -66,18 +197,47 @@ def load_ohlcv(symbol: str, curr_date: str) -> pd.DataFrame:
        f"{symbol}-YFin-data-{start_str}-{end_str}.csv",
    )

    need_refresh = True
    data = None
    if os.path.exists(data_file):
        data = pd.read_csv(data_file, on_bad_lines="skip")
    else:
        data = yf_retry(lambda: yf.download(
            symbol,
            start=start_str,
            end=end_str,
            multi_level_index=False,
            progress=False,
            auto_adjust=True,
        ))
        data = data.reset_index()
        cached = pd.read_csv(data_file, on_bad_lines="skip")
        if "Date" in cached.columns:
            parsed_dates = pd.to_datetime(cached["Date"], errors="coerce")
            latest_cached = parsed_dates.dropna().max()
            if (
                latest_cached is not pd.NaT
                and latest_cached is not None
                and latest_cached >= min_acceptable_date
            ):
                data = cached
                need_refresh = False

    if need_refresh:
        try:
            data = yf_retry(lambda: yf.download(
                symbol,
                start=start_str,
                end=end_str,
                multi_level_index=False,
                progress=False,
                auto_adjust=True,
            ))
            data = data.reset_index()
            latest_downloaded = pd.to_datetime(data.get("Date"), errors="coerce").dropna().max()
            if latest_downloaded is pd.NaT or latest_downloaded is None or latest_downloaded < min_acceptable_date:
                raise ValueError(
                    f"yfinance returned stale data for {symbol}: latest={latest_downloaded}"
                )
        except Exception as exc:
            logger.warning(
                "yfinance download failed for %s, falling back to Tencent/Eastmoney OHLCV: %s",
                symbol,
                exc,
            )
            try:
                data = _fetch_tencent_ohlcv(symbol, start_str, end_str)
            except Exception:
                data = _fetch_eastmoney_ohlcv(symbol, start_str, end_str)
        data.to_csv(data_file, index=False)

    data = _clean_dataframe(data)

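The refreshed `load_ohlcv` above chains data sources in priority order: fresh cache, then yfinance, then Tencent, then Eastmoney, treating stale data as a failure. The fall-through skeleton looks like this (a sketch under illustrative source names; the dummy fetchers are not from the diff):

```python
def fetch_with_fallbacks(fetchers):
    """Try each (name, fn) in order; return the first success, raise only if all fail."""
    errors = []
    for name, fn in fetchers:
        try:
            return name, fn()
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all sources failed: " + "; ".join(errors))

def fail(exc):
    """Build a fetcher that always raises, to simulate an unavailable source."""
    def _f():
        raise exc
    return _f

# The first healthy source wins; earlier failures are collected, not fatal.
source, rows = fetch_with_fallbacks([
    ("yfinance", fail(ValueError("stale data"))),
    ("tencent", fail(ConnectionError("blocked"))),
    ("eastmoney", lambda: [{"Date": "2024-01-02", "Close": 10.5}]),
])
```

Collecting every per-source error into the final exception message preserves the diagnosis trail that a bare `except Exception: continue` chain would discard.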
@@ -4,7 +4,21 @@ from dateutil.relativedelta import relativedelta
import pandas as pd
import yfinance as yf
import os
from .stockstats_utils import StockstatsUtils, _clean_dataframe, yf_retry, load_ohlcv, filter_financials_by_date
from .stockstats_utils import (
    StockstatsUtils,
    _clean_dataframe,
    _fetch_eastmoney_ohlcv,
    _fetch_tencent_ohlcv,
    yf_retry,
    load_ohlcv,
    filter_financials_by_date,
)
from .config import get_config


def _use_compact_data_output() -> bool:
    mode = str(get_config().get("analysis_prompt_style", "standard")).strip().lower()
    return mode in {"compact", "fast", "minimax"}


def get_YFin_data_online(
    symbol: Annotated[str, "ticker symbol of the company"],
@@ -19,16 +33,31 @@
    ticker = yf.Ticker(symbol.upper())

    # Fetch historical data for the specified date range
    data = yf_retry(lambda: ticker.history(start=start_date, end=end_date))
    try:
        data = yf_retry(lambda: ticker.history(start=start_date, end=end_date))
    except Exception:
        try:
            data = _fetch_tencent_ohlcv(symbol.upper(), start_date, end_date)
        except Exception:
            data = _fetch_eastmoney_ohlcv(symbol.upper(), start_date, end_date)

    # Check if data is empty
    if data.empty:
        return (
            f"No data found for symbol '{symbol}' between {start_date} and {end_date}"
        )
        try:
            data = _fetch_tencent_ohlcv(symbol.upper(), start_date, end_date)
        except Exception:
            try:
                data = _fetch_eastmoney_ohlcv(symbol.upper(), start_date, end_date)
            except Exception:
                return (
                    f"No data found for symbol '{symbol}' between {start_date} and {end_date}"
                )

    if "Date" not in data.columns and data.index.name is not None:
        data = data.reset_index()

    # Remove timezone info from index for cleaner output
    if data.index.tz is not None:
    if getattr(data.index, "tz", None) is not None:
        data.index = data.index.tz_localize(None)

    # Round numerical values to 2 decimal places for cleaner display
@@ -37,12 +66,20 @@
        if col in data.columns:
            data[col] = data[col].round(2)

    compact_mode = _use_compact_data_output()
    original_len = len(data)
    if compact_mode and original_len > 20:
        data = data.tail(20)

    # Convert DataFrame to CSV string
    csv_string = data.to_csv()

    # Add header information
    header = f"# Stock data for {symbol.upper()} from {start_date} to {end_date}\n"
    header += f"# Total records: {len(data)}\n"
    if compact_mode and original_len > len(data):
        header += f"# Showing last {len(data)} of {original_len} records (compact mode)\n"
    else:
        header += f"# Total records: {len(data)}\n"
    header += f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"

    return header + csv_string
@@ -134,6 +171,10 @@ def get_stock_stats_indicators_window(
            f"Indicator {indicator} is not supported. Please choose from: {list(best_ind_params.keys())}"
        )

    compact_mode = _use_compact_data_output()
    if compact_mode:
        look_back_days = min(look_back_days, 14)

    end_date = curr_date
    curr_date_dt = datetime.strptime(curr_date, "%Y-%m-%d")
    before = curr_date_dt - relativedelta(days=look_back_days)
@@ -158,6 +199,13 @@ def get_stock_stats_indicators_window(
            date_values.append((date_str, indicator_value))
            current_dt = current_dt - relativedelta(days=1)

        if compact_mode:
            date_values = [
                (date_str, value)
                for date_str, value in date_values
                if not str(value).startswith("N/A: Not a trading day")
            ][:10]

        # Build the result string
        ind_string = ""
        for date_str, value in date_values:
@@ -168,11 +216,16 @@ def get_stock_stats_indicators_window(
        # Fallback to original implementation if bulk method fails
        ind_string = ""
        curr_date_dt = datetime.strptime(curr_date, "%Y-%m-%d")
        emitted = 0
        while curr_date_dt >= before:
            indicator_value = get_stockstats_indicator(
                symbol, indicator, curr_date_dt.strftime("%Y-%m-%d")
            )
            ind_string += f"{curr_date_dt.strftime('%Y-%m-%d')}: {indicator_value}\n"
            if not compact_mode or not str(indicator_value).startswith("N/A: Not a trading day"):
                ind_string += f"{curr_date_dt.strftime('%Y-%m-%d')}: {indicator_value}\n"
                emitted += 1
            if compact_mode and emitted >= 10:
                break
            curr_date_dt = curr_date_dt - relativedelta(days=1)

    result_str = (
@@ -419,4 +472,4 @@ def get_insider_transactions(
        return header + csv_string

    except Exception as e:
        return f"Error retrieving insider transactions for {ticker}: {str(e)}"
        return f"Error retrieving insider transactions for {ticker}: {str(e)}"

@@ -1,4 +1,13 @@
import copy
import os
from pathlib import Path

_MINIMAX_ANTHROPIC_BASE_URL = "https://api.minimaxi.com/anthropic"
_MINIMAX_DEFAULT_TIMEOUT_SECS = 60.0
_MINIMAX_DEFAULT_MAX_RETRIES = 1
_MINIMAX_DEFAULT_EXTRA_RETRY_ATTEMPTS = 2
_MINIMAX_DEFAULT_RETRY_BASE_DELAY_SECS = 1.5
_MINIMAX_DEFAULT_ANALYST_NODE_TIMEOUT_SECS = 75.0

_TRADINGAGENTS_HOME = os.path.join(os.path.expanduser("~"), ".tradingagents")
@@ -18,10 +27,15 @@ DEFAULT_CONFIG = {
    # Output language for analyst reports and final decision
    # Internal agent debate stays in English for reasoning quality
    "output_language": "English",
    # Optional runtime context for account-aware and peer-aware decisions
    "portfolio_context": "",
    "peer_context": "",
    "peer_context_mode": "UNSPECIFIED",
    # Debate and discussion settings
    "max_debate_rounds": 1,
    "max_risk_discuss_rounds": 1,
    "max_recur_limit": 100,
    "research_node_timeout_secs": 90.0, # Increased for parallel subagent architecture with slow LLM
    # Data vendor configuration
    # Category-level configuration (default for all tools in category)
    "data_vendors": {
@@ -35,3 +49,107 @@ DEFAULT_CONFIG = {
    # Example: "get_stock_data": "alpha_vantage", # Override category default
    },
}


def _looks_like_minimax_anthropic(provider: str | None, backend_url: str | None) -> bool:
    return (
        str(provider or "").lower() == "anthropic"
        and _MINIMAX_ANTHROPIC_BASE_URL in str(backend_url or "").lower()
    )


def normalize_runtime_llm_config(config: dict) -> dict:
    """Normalize runtime LLM settings for known provider/backend quirks."""
    normalized = copy.deepcopy(config)
    provider = normalized.get("llm_provider")
    backend_url = normalized.get("backend_url")

    if _looks_like_minimax_anthropic(provider, backend_url):
        normalized["backend_url"] = _MINIMAX_ANTHROPIC_BASE_URL
        if not normalized.get("llm_timeout"):
            normalized["llm_timeout"] = _MINIMAX_DEFAULT_TIMEOUT_SECS
        if normalized.get("llm_max_retries") in (None, 0):
            normalized["llm_max_retries"] = _MINIMAX_DEFAULT_MAX_RETRIES
        if not normalized.get("minimax_retry_attempts"):
            normalized["minimax_retry_attempts"] = _MINIMAX_DEFAULT_EXTRA_RETRY_ATTEMPTS
        if not normalized.get("minimax_retry_base_delay"):
            normalized["minimax_retry_base_delay"] = _MINIMAX_DEFAULT_RETRY_BASE_DELAY_SECS
        if not normalized.get("analyst_node_timeout_secs"):
            normalized["analyst_node_timeout_secs"] = _MINIMAX_DEFAULT_ANALYST_NODE_TIMEOUT_SECS

    return normalized


def _resolve_runtime_llm_overrides() -> dict:
    """Resolve provider/model/base URL overrides from the current environment."""
    overrides: dict[str, object] = {}

    provider = os.getenv("TRADINGAGENTS_LLM_PROVIDER")
    if not provider:
        if os.getenv("ANTHROPIC_BASE_URL"):
            provider = "anthropic"
        elif os.getenv("OPENAI_BASE_URL"):
            provider = "openai"
    if provider:
        overrides["llm_provider"] = provider

    backend_url = (
        os.getenv("TRADINGAGENTS_BACKEND_URL")
        or os.getenv("ANTHROPIC_BASE_URL")
        or os.getenv("OPENAI_BASE_URL")
    )
    if backend_url:
        overrides["backend_url"] = backend_url

    shared_model = os.getenv("TRADINGAGENTS_MODEL")
    deep_model = os.getenv("TRADINGAGENTS_DEEP_MODEL") or shared_model
    quick_model = os.getenv("TRADINGAGENTS_QUICK_MODEL") or shared_model
    if deep_model:
        overrides["deep_think_llm"] = deep_model
    if quick_model:
        overrides["quick_think_llm"] = quick_model

    anthropic_api_key = os.getenv("ANTHROPIC_API_KEY") or os.getenv("MINIMAX_API_KEY")
    if anthropic_api_key:
        overrides["api_key"] = anthropic_api_key

    portfolio_context = os.getenv("TRADINGAGENTS_PORTFOLIO_CONTEXT")
    if portfolio_context is not None:
        overrides["portfolio_context"] = portfolio_context

    peer_context = os.getenv("TRADINGAGENTS_PEER_CONTEXT")
    if peer_context is not None:
        overrides["peer_context"] = peer_context

    peer_context_mode = os.getenv("TRADINGAGENTS_PEER_CONTEXT_MODE")
    if peer_context_mode is not None:
        overrides["peer_context_mode"] = peer_context_mode

    return overrides


def load_project_env(start_path):
    """Load the nearest .env from the given path upward."""
    try:
        from dotenv import load_dotenv
    except ImportError:
        return None

    current = Path(start_path).resolve()
    if current.is_file():
        current = current.parent

    for directory in (current, *current.parents):
        env_path = directory / ".env"
        if env_path.exists():
            # Project entrypoints should use the repo-local runtime settings even
            # when the user's shell exports unrelated Anthropic/OpenAI variables.
            load_dotenv(env_path, override=True)
            return env_path
    return None


def get_default_config():
    config = copy.deepcopy(DEFAULT_CONFIG)
    config.update(_resolve_runtime_llm_overrides())
    return normalize_runtime_llm_config(config)

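`_resolve_runtime_llm_overrides` above applies a precedence rule: a `TRADINGAGENTS_*` variable wins, otherwise the provider-specific base URL (`ANTHROPIC_BASE_URL`, then `OPENAI_BASE_URL`) is used, and per-role models fall back to the shared `TRADINGAGENTS_MODEL`. That precedence can be exercised in isolation over a plain dict instead of `os.environ` (a sketch covering just the URL/model subset of the function):

```python
def resolve_overrides(env):
    """Resolve backend URL and models, TRADINGAGENTS_* taking precedence."""
    overrides = {}
    backend_url = (
        env.get("TRADINGAGENTS_BACKEND_URL")
        or env.get("ANTHROPIC_BASE_URL")
        or env.get("OPENAI_BASE_URL")
    )
    if backend_url:
        overrides["backend_url"] = backend_url

    shared = env.get("TRADINGAGENTS_MODEL")
    deep = env.get("TRADINGAGENTS_DEEP_MODEL") or shared
    quick = env.get("TRADINGAGENTS_QUICK_MODEL") or shared
    if deep:
        overrides["deep_think_llm"] = deep
    if quick:
        overrides["quick_think_llm"] = quick
    return overrides

resolved = resolve_overrides({
    "ANTHROPIC_BASE_URL": "https://api.minimaxi.com/anthropic",
    "TRADINGAGENTS_MODEL": "MiniMax-M2",
    "TRADINGAGENTS_QUICK_MODEL": "MiniMax-M2-quick",
})
# backend_url falls back to ANTHROPIC_BASE_URL; the quick model overrides the shared one
```

Keeping the resolver a pure function of an `env` mapping makes the precedence rules trivially testable without mutating the process environment.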
@@ -16,13 +16,22 @@ class Propagator:
        self.max_recur_limit = max_recur_limit

    def create_initial_state(
        self, company_name: str, trade_date: str
        self,
        company_name: str,
        trade_date: str,
        *,
        portfolio_context: str = "",
        peer_context: str = "",
        peer_context_mode: str = "UNSPECIFIED",
    ) -> Dict[str, Any]:
        """Create the initial state for the agent graph."""
        return {
            "messages": [("human", company_name)],
            "company_of_interest": company_name,
            "trade_date": str(trade_date),
            "portfolio_context": portfolio_context,
            "peer_context": peer_context,
            "peer_context_mode": peer_context_mode,
            "investment_debate_state": InvestDebateState(
                {
                    "bull_history": "",
@@ -31,6 +40,12 @@ class Propagator:
                    "current_response": "",
                    "judge_decision": "",
                    "count": 0,
                    "research_status": "full",
                    "research_mode": "debate",
                    "timed_out_nodes": [],
                    "degraded_reason": None,
                    "covered_dimensions": [],
                    "manager_confidence": None,
                }
            ),
            "risk_debate_state": RiskDebateState(
@@ -51,6 +66,13 @@ class Propagator:
            "fundamentals_report": "",
            "sentiment_report": "",
            "news_report": "",
            "investment_plan": "",
            "investment_plan_structured": {},
            "trader_investment_plan": "",
            "trader_investment_plan_structured": {},
            "final_trade_decision": "",
            "final_trade_decision_report": "",
            "final_trade_decision_structured": {},
        }

    def get_graph_args(self, callbacks: Optional[List] = None) -> Dict[str, Any]:
@@ -60,7 +82,7 @@ class Propagator:
            callbacks: Optional list of callback handlers for tool execution tracking.
                Note: LLM callbacks are handled separately via LLM constructor.
        """
        config = {"recursion_limit": self.max_recur_limit}
        config = {"recursion_limit": self.max_recur_limit, "max_concurrency": 1}
        if callbacks:
            config["callbacks"] = callbacks
        return {

@@ -1,10 +1,14 @@
# TradingAgents/graph/setup.py

import concurrent.futures
import time
from typing import Any, Dict
from langgraph.graph import END, START, StateGraph
from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode

from tradingagents.agents import *
from tradingagents.agents.utils.decision_utils import build_structured_decision
from tradingagents.agents.utils.agent_states import AgentState

from .conditional_logic import ConditionalLogic
@@ -24,6 +28,8 @@ class GraphSetup:
        invest_judge_memory,
        portfolio_manager_memory,
        conditional_logic: ConditionalLogic,
        analyst_node_timeout_secs: float = 75.0,
        research_node_timeout_secs: float = 30.0,
    ):
        """Initialize with required components."""
        self.quick_thinking_llm = quick_thinking_llm
@@ -35,6 +41,8 @@ class GraphSetup:
        self.invest_judge_memory = invest_judge_memory
        self.portfolio_manager_memory = portfolio_manager_memory
        self.conditional_logic = conditional_logic
        self.analyst_node_timeout_secs = analyst_node_timeout_secs
        self.research_node_timeout_secs = research_node_timeout_secs

    def setup_graph(
        self, selected_analysts=["market", "social", "news", "fundamentals"]
@@ -57,41 +65,52 @@ class GraphSetup:
        tool_nodes = {}

        if "market" in selected_analysts:
            analyst_nodes["market"] = create_market_analyst(
                self.quick_thinking_llm
            analyst_nodes["market"] = self._guard_analyst_node(
                "Market Analyst",
                create_market_analyst(self.quick_thinking_llm),
                report_field="market_report",
            )
            delete_nodes["market"] = create_msg_delete()
            tool_nodes["market"] = self.tool_nodes["market"]

        if "social" in selected_analysts:
            analyst_nodes["social"] = create_social_media_analyst(
                self.quick_thinking_llm
            analyst_nodes["social"] = self._guard_analyst_node(
                "Social Analyst",
                create_social_media_analyst(self.quick_thinking_llm),
                report_field="sentiment_report",
            )
            delete_nodes["social"] = create_msg_delete()
            tool_nodes["social"] = self.tool_nodes["social"]

        if "news" in selected_analysts:
            analyst_nodes["news"] = create_news_analyst(
                self.quick_thinking_llm
            analyst_nodes["news"] = self._guard_analyst_node(
                "News Analyst",
                create_news_analyst(self.quick_thinking_llm),
                report_field="news_report",
            )
            delete_nodes["news"] = create_msg_delete()
            tool_nodes["news"] = self.tool_nodes["news"]

        if "fundamentals" in selected_analysts:
            analyst_nodes["fundamentals"] = create_fundamentals_analyst(
                self.quick_thinking_llm
            analyst_nodes["fundamentals"] = self._guard_analyst_node(
                "Fundamentals Analyst",
                create_fundamentals_analyst(self.quick_thinking_llm),
                report_field="fundamentals_report",
            )
            delete_nodes["fundamentals"] = create_msg_delete()
            tool_nodes["fundamentals"] = self.tool_nodes["fundamentals"]

        # Create researcher and manager nodes
        bull_researcher_node = create_bull_researcher(
        bull_researcher_node = self._guard_research_node(
            "Bull Researcher",
            self.quick_thinking_llm, self.bull_memory
        )
        bear_researcher_node = create_bear_researcher(
        bear_researcher_node = self._guard_research_node(
            "Bear Researcher",
            self.quick_thinking_llm, self.bear_memory
        )
        research_manager_node = create_research_manager(
        research_manager_node = self._guard_research_node(
            "Research Manager",
            self.deep_thinking_llm, self.invest_judge_memory
        )
        trader_node = create_trader(self.quick_thinking_llm, self.trader_memory)
@@ -199,3 +218,155 @@ class GraphSetup:

        # Compile and return
        return workflow.compile()

    def _guard_research_node(self, node_name: str, llm: Any, memory):
        if node_name == "Bull Researcher":
            node = create_bull_researcher(llm, memory)
            dimension = "bull"
        elif node_name == "Bear Researcher":
            node = create_bear_researcher(llm, memory)
            dimension = "bear"
        else:
            node = create_research_manager(llm, memory)
            dimension = "manager"

        def wrapped(state):
            started_at = time.time()
            executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
            future = executor.submit(node, state)
            try:
                result = future.result(timeout=self.research_node_timeout_secs)
                return self._apply_research_success(state, result, dimension)
            except concurrent.futures.TimeoutError:
                future.cancel()
                executor.shutdown(wait=False, cancel_futures=True)
                return self._apply_research_fallback(
                    state,
                    node_name=node_name,
                    dimension=dimension,
                    reason=f"{node_name.lower().replace(' ', '_')}_timeout",
                    started_at=started_at,
                )
            except Exception as exc:
                executor.shutdown(wait=False, cancel_futures=True)
                return self._apply_research_fallback(
                    state,
                    node_name=node_name,
                    dimension=dimension,
                    reason=f"{node_name.lower().replace(' ', '_')}_{type(exc).__name__.lower()}",
                    started_at=started_at,
                )
            finally:
                executor.shutdown(wait=False, cancel_futures=True)

        return wrapped

    def _guard_analyst_node(self, node_name: str, node, *, report_field: str):
        def wrapped(state):
            started_at = time.time()
            executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
            future = executor.submit(node, state)
            try:
                return future.result(timeout=self.analyst_node_timeout_secs)
            except concurrent.futures.TimeoutError:
                future.cancel()
                executor.shutdown(wait=False, cancel_futures=True)
                return self._apply_analyst_fallback(
                    node_name=node_name,
                    report_field=report_field,
                    reason=f"{node_name.lower().replace(' ', '_')}_timeout",
                    started_at=started_at,
                )
            except Exception as exc:
                executor.shutdown(wait=False, cancel_futures=True)
                return self._apply_analyst_fallback(
                    node_name=node_name,
                    report_field=report_field,
                    reason=f"{node_name.lower().replace(' ', '_')}_{type(exc).__name__.lower()}",
                    started_at=started_at,
                )
            finally:
                executor.shutdown(wait=False, cancel_futures=True)

        return wrapped

    @staticmethod
    def _provenance(state) -> dict:
        debate_state = dict(state["investment_debate_state"])
        return {
            "research_status": debate_state.get("research_status", "full"),
            "research_mode": debate_state.get("research_mode", "debate"),
            "timed_out_nodes": list(debate_state.get("timed_out_nodes", [])),
            "degraded_reason": debate_state.get("degraded_reason"),
            "covered_dimensions": list(debate_state.get("covered_dimensions", [])),
            "manager_confidence": debate_state.get("manager_confidence"),
        }

    def _apply_research_success(self, state, result: dict, dimension: str):
        debate_state = dict(result.get("investment_debate_state") or state["investment_debate_state"])
        provenance = self._provenance(state)
        if dimension not in provenance["covered_dimensions"]:
            provenance["covered_dimensions"].append(dimension)
        if provenance["research_status"] == "full":
            provenance["research_mode"] = "debate"
        if dimension == "manager" and provenance["manager_confidence"] is None:
            provenance["manager_confidence"] = 1.0 if provenance["research_status"] == "full" else 0.5
        debate_state.update(provenance)
        updated = dict(result)
        updated["investment_debate_state"] = debate_state
        return updated

    def _apply_research_fallback(self, state, *, node_name: str, dimension: str, reason: str, started_at: float):
        debate_state = dict(state["investment_debate_state"])
        provenance = self._provenance(state)
        provenance["research_status"] = "degraded"
        provenance["research_mode"] = "degraded_synthesis"
        provenance["degraded_reason"] = reason
        if "timeout" in reason and node_name not in provenance["timed_out_nodes"]:
            provenance["timed_out_nodes"].append(node_name)

        elapsed_seconds = round(time.time() - started_at, 3)
        if dimension == "manager":
            provenance["manager_confidence"] = 0.0
            fallback = (
                "Recommendation: HOLD\n"
                f"Top reasons: research degraded at {node_name} ({reason}); use partial research context cautiously.\n"
                f"Simple execution plan: keep sizing conservative and wait for confirmation. Guard elapsed={elapsed_seconds}s."
            )
            debate_state["judge_decision"] = fallback
            debate_state["current_response"] = fallback
            debate_state.update(provenance)
            return {
                "investment_debate_state": debate_state,
                "investment_plan": fallback,
                "investment_plan_structured": build_structured_decision(
                    fallback,
                    default_rating="HOLD",
                    peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
                ),
            }

        prefix = "Bull Analyst" if dimension == "bull" else "Bear Analyst"
        history_field = "bull_history" if dimension == "bull" else "bear_history"
        degraded_argument = (
            f"{prefix}: [DEGRADED] {node_name} unavailable ({reason}). "
            f"Proceeding with partial research context. Guard elapsed={elapsed_seconds}s."
        )
        debate_state["history"] = debate_state.get("history", "") + "\n" + degraded_argument
        debate_state[history_field] = debate_state.get(history_field, "") + "\n" + degraded_argument
        debate_state["current_response"] = degraded_argument
        debate_state["count"] = debate_state.get("count", 0) + 1
        debate_state.update(provenance)
        return {"investment_debate_state": debate_state}

    @staticmethod
    def _apply_analyst_fallback(*, node_name: str, report_field: str, reason: str, started_at: float):
        elapsed_seconds = round(time.time() - started_at, 3)
        fallback = (
            f"[DEGRADED] {node_name} unavailable ({reason}). "
            f"Proceed with partial research context. Guard elapsed={elapsed_seconds}s."
        )
        return {
            "messages": [AIMessage(content=fallback)],
            report_field: fallback,
        }

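The `_guard_analyst_node` / `_guard_research_node` wrappers above run each graph node in a single-worker `ThreadPoolExecutor` and convert a timeout or exception into a degraded fallback result instead of letting it crash the graph. The guard pattern in isolation (a sketch with illustrative names; the real wrappers also record provenance and elapsed time):

```python
import concurrent.futures
import time

def guard(fn, timeout, fallback):
    """Run fn under a wall-clock timeout; on timeout/error return fallback(reason)."""
    def wrapped(*args):
        executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = executor.submit(fn, *args)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            return fallback("timeout")
        except Exception as exc:
            return fallback(type(exc).__name__.lower())
        finally:
            # Do not wait for the stuck worker; just stop accepting new work.
            executor.shutdown(wait=False, cancel_futures=True)
    return wrapped

slow_result = guard(lambda: time.sleep(0.5) or "done",
                    timeout=0.05, fallback=lambda r: f"[DEGRADED] {r}")()
fast_result = guard(lambda: "done",
                    timeout=1.0, fallback=lambda r: f"[DEGRADED] {r}")()
```

Note the caveat this shares with the diff: `future.result(timeout=...)` abandons the worker thread but cannot kill it, so the underlying LLM call may still be running in the background after the fallback is returned.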
@@ -2,6 +2,8 @@

from typing import Any

from tradingagents.agents.utils.decision_utils import CANONICAL_RATINGS, extract_rating


class SignalProcessor:
    """Processes trading signals to extract actionable decisions."""
@@ -20,6 +22,10 @@ class SignalProcessor:
        Returns:
            Extracted rating (BUY, OVERWEIGHT, HOLD, UNDERWEIGHT, or SELL)
        """
        parsed = extract_rating(full_signal)
        if parsed in CANONICAL_RATINGS:
            return parsed

        messages = [
            (
                "system",

@@ -1,5 +1,6 @@
# TradingAgents/graph/trading_graph.py

import copy
import os
from pathlib import Path
import json
@@ -11,13 +12,15 @@ from langgraph.prebuilt import ToolNode
from tradingagents.llm_clients import create_llm_client

from tradingagents.agents import *
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.default_config import get_default_config
from tradingagents.agents.utils.memory import FinancialSituationMemory
from tradingagents.agents.utils.agent_states import (
    AgentState,
    InvestDebateState,
    RiskDebateState,
    extract_research_provenance,
)
from tradingagents.agents.utils.decision_utils import build_structured_decision
from tradingagents.dataflows.config import set_config

# Import the new abstract tool methods from agent_utils
@ -40,6 +43,30 @@ from .reflection import Reflector
|
|||
from .signal_processing import SignalProcessor
|
||||
|
||||
|
||||
def _merge_with_default_config(config: Optional[Dict[str, Any]]) -> Dict[str, Any]:
|
||||
"""Merge a partial user config onto the runtime default config.
|
||||
|
||||
Orchestrator callers often override only a few LLM/vendor fields. Without a
|
||||
merge step, required defaults such as ``project_dir`` disappear and the
|
||||
graph fails during initialization.
|
||||
"""
|
||||
merged = get_default_config()
|
||||
if not config:
|
||||
return merged
|
||||
|
||||
for key, value in config.items():
|
||||
if (
|
||||
key in ("data_vendors", "tool_vendors")
|
||||
and isinstance(value, dict)
|
||||
and isinstance(merged.get(key), dict)
|
||||
):
|
||||
merged[key].update(value)
|
||||
else:
|
||||
merged[key] = value
|
||||
|
||||
return merged
|
||||
|
||||
|
||||
class TradingAgentsGraph:
|
||||
"""Main class that orchestrates the trading agents framework."""
|
||||
|
||||
|
|
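`_merge_with_default_config` deep-merges only the two vendor maps and shallow-overrides everything else, so a partial orchestrator config cannot wipe out required defaults. The behavior can be reproduced standalone (a sketch, with `get_default_config` replaced by a hypothetical inline default):

```python
from typing import Any, Dict, Optional

def merge_with_defaults(config: Optional[Dict[str, Any]]) -> Dict[str, Any]:
    # Hypothetical stand-in for get_default_config().
    merged: Dict[str, Any] = {
        "project_dir": "/tmp/tradingagents",
        "data_vendors": {"news": "finnhub", "prices": "yfinance"},
    }
    if not config:
        return merged
    for key, value in config.items():
        if (
            key in ("data_vendors", "tool_vendors")
            and isinstance(value, dict)
            and isinstance(merged.get(key), dict)
        ):
            merged[key].update(value)  # merge the nested vendor map, keep other defaults
        else:
            merged[key] = value  # scalar or unknown key: plain override
    return merged
```

Note that overriding `data_vendors` for one feed leaves the other feeds' defaults intact, while scalar keys are replaced wholesale.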
@@ -59,7 +86,7 @@ class TradingAgentsGraph:
            callbacks: Optional list of callback handlers (e.g., for tracking LLM/tool stats)
        """
        self.debug = debug
        self.config = config or DEFAULT_CONFIG
        self.config = _merge_with_default_config(config)
        self.callbacks = callbacks or []

        # Update the interface's config

@@ -117,6 +144,8 @@ class TradingAgentsGraph:
            self.invest_judge_memory,
            self.portfolio_manager_memory,
            self.conditional_logic,
            analyst_node_timeout_secs=float(self.config.get("analyst_node_timeout_secs", 75.0)),
            research_node_timeout_secs=float(self.config.get("research_node_timeout_secs", 30.0)),
        )

        self.propagator = Propagator()

@@ -136,6 +165,17 @@ class TradingAgentsGraph:
        kwargs = {}
        provider = self.config.get("llm_provider", "").lower()

        common_passthrough = {
            "timeout": ("llm_timeout", "timeout"),
            "max_retries": ("llm_max_retries", "max_retries"),
        }
        for out_key, config_keys in common_passthrough.items():
            for config_key in config_keys:
                value = self.config.get(config_key)
                if value is not None:
                    kwargs[out_key] = value
                    break

        if provider == "google":
            thinking_level = self.config.get("google_thinking_level")
            if thinking_level:
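The `common_passthrough` table maps each client kwarg to an ordered tuple of config aliases; the first alias with a non-None value wins and the inner loop breaks. The resolution logic in isolation (a self-contained sketch of the loop above):

```python
def resolve_kwargs(config: dict) -> dict:
    # Same alias table as in the diff: output key -> ordered config-key candidates.
    common_passthrough = {
        "timeout": ("llm_timeout", "timeout"),
        "max_retries": ("llm_max_retries", "max_retries"),
    }
    kwargs = {}
    for out_key, config_keys in common_passthrough.items():
        for config_key in config_keys:
            value = config.get(config_key)
            if value is not None:
                kwargs[out_key] = value
                break  # first matching alias wins
    return kwargs
```

Because the check is `is not None` rather than truthiness, an explicit `0` timeout or `0` retries still passes through.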
@@ -145,12 +185,20 @@ class TradingAgentsGraph:
            reasoning_effort = self.config.get("openai_reasoning_effort")
            if reasoning_effort:
                kwargs["reasoning_effort"] = reasoning_effort
            # Allow disabling Responses API for third-party OpenAI-compatible providers
            if "use_responses_api" in self.config:
                kwargs["use_responses_api"] = self.config["use_responses_api"]

        elif provider == "anthropic":
            effort = self.config.get("anthropic_effort")
            if effort:
                kwargs["effort"] = effort

            # Pass api_key if present in config (for MiniMax and other third-party Anthropic-compatible APIs)
            api_key = self.config.get("api_key")
            if api_key:
                kwargs["api_key"] = api_key

        return kwargs

    def _create_tool_nodes(self) -> Dict[str, ToolNode]:

@@ -196,7 +244,11 @@ class TradingAgentsGraph:

        # Initialize state
        init_agent_state = self.propagator.create_initial_state(
            company_name, trade_date
            company_name,
            trade_date,
            portfolio_context=str(self.config.get("portfolio_context", "") or ""),
            peer_context=str(self.config.get("peer_context", "") or ""),
            peer_context_mode=str(self.config.get("peer_context_mode", "UNSPECIFIED") or "UNSPECIFIED"),
        )
        args = self.propagator.get_graph_args()

@@ -215,6 +267,8 @@ class TradingAgentsGraph:
            # Standard mode without tracing
            final_state = self.graph.invoke(init_agent_state, **args)

        final_state = self._normalize_decision_outputs(final_state)

        # Store current state for reflection
        self.curr_state = final_state

@@ -224,6 +278,65 @@ class TradingAgentsGraph:
        # Return decision and processed signal
        return final_state, self.process_signal(final_state["final_trade_decision"])

    def _normalize_decision_outputs(self, final_state: Dict[str, Any]) -> Dict[str, Any]:
        normalized = copy.deepcopy(final_state)
        portfolio_context = bool(str(normalized.get("portfolio_context", "") or "").strip())
        peer_context = bool(str(normalized.get("peer_context", "") or "").strip())
        context_usage = {
            "portfolio_context": portfolio_context,
            "peer_context": peer_context,
        }

        investment_plan = str(normalized.get("investment_plan", "") or "")
        trader_plan = str(normalized.get("trader_investment_plan", "") or "")
        final_rating = str(normalized.get("final_trade_decision", "") or "")
        final_report = str(
            normalized.get("final_trade_decision_report")
            or normalized.get("risk_debate_state", {}).get("judge_decision", "")
            or final_rating
        )

        investment_structured = normalized.get("investment_plan_structured") or build_structured_decision(
            investment_plan,
            default_rating="HOLD",
            peer_context_mode=normalized.get("peer_context_mode", "UNSPECIFIED"),
            context_usage=context_usage,
        )
        trader_structured = normalized.get("trader_investment_plan_structured") or build_structured_decision(
            trader_plan,
            fallback_candidates=(("investment_plan", investment_plan),),
            default_rating="HOLD",
            peer_context_mode=normalized.get("peer_context_mode", "UNSPECIFIED"),
            context_usage=context_usage,
        )
        final_structured = normalized.get("final_trade_decision_structured") or build_structured_decision(
            final_report,
            fallback_candidates=(
                ("trader_plan", trader_plan),
                ("investment_plan", investment_plan),
            ),
            default_rating="HOLD",
            peer_context_mode=normalized.get("peer_context_mode", "UNSPECIFIED"),
            context_usage=context_usage,
        )

        if final_rating and final_rating != final_structured["rating"]:
            warnings = list(final_structured.get("warnings") or [])
            warnings.append(f"final_trade_decision_overridden:{final_rating}->{final_structured['rating']}")
            final_structured["warnings"] = warnings

        normalized["investment_plan_structured"] = investment_structured
        normalized["trader_investment_plan_structured"] = trader_structured
        normalized["final_trade_decision"] = final_structured["rating"]
        normalized["final_trade_decision_report"] = final_structured["report_text"]
        normalized["final_trade_decision_structured"] = final_structured

        risk_state = dict(normalized.get("risk_debate_state") or {})
        risk_state["judge_decision"] = final_structured["report_text"]
        normalized["risk_debate_state"] = risk_state

        return normalized

    def _log_state(self, trade_date, final_state):
        """Log the final state to a JSON file."""
        self.log_states_dict[str(trade_date)] = {

@@ -243,8 +356,15 @@ class TradingAgentsGraph:
            "judge_decision": final_state["investment_debate_state"][
                "judge_decision"
            ],
            **(
                extract_research_provenance(
                    final_state.get("investment_debate_state")
                )
                or {}
            ),
        },
        "trader_investment_decision": final_state["trader_investment_plan"],
        "trader_investment_plan_structured": final_state.get("trader_investment_plan_structured", {}),
        "risk_debate_state": {
            "aggressive_history": final_state["risk_debate_state"]["aggressive_history"],
            "conservative_history": final_state["risk_debate_state"]["conservative_history"],

@@ -253,7 +373,10 @@ class TradingAgentsGraph:
            "judge_decision": final_state["risk_debate_state"]["judge_decision"],
        },
        "investment_plan": final_state["investment_plan"],
        "investment_plan_structured": final_state.get("investment_plan_structured", {}),
        "final_trade_decision": final_state["final_trade_decision"],
        "final_trade_decision_report": final_state.get("final_trade_decision_report", ""),
        "final_trade_decision_structured": final_state.get("final_trade_decision_structured", {}),
    }

    # Save to file
@@ -1,4 +1,10 @@
from .base_client import BaseLLMClient
from .factory import create_llm_client
from .factory import ProviderSpec, create_llm_client, get_provider_spec, get_supported_providers

__all__ = ["BaseLLMClient", "create_llm_client"]
__all__ = [
    "BaseLLMClient",
    "ProviderSpec",
    "create_llm_client",
    "get_provider_spec",
    "get_supported_providers",
]
@@ -1,3 +1,5 @@
import logging
import time
from typing import Any, Optional

from langchain_anthropic import ChatAnthropic

@@ -5,12 +7,34 @@ from langchain_anthropic import ChatAnthropic
from .base_client import BaseLLMClient, normalize_content
from .validators import validate_model

logger = logging.getLogger(__name__)

_PASSTHROUGH_KWARGS = (
    "timeout", "max_retries", "api_key", "max_tokens",
    "callbacks", "http_client", "http_async_client", "effort",
)


def _is_minimax_anthropic_base_url(base_url: Optional[str]) -> bool:
    return "api.minimaxi.com/anthropic" in str(base_url or "").lower()


def _is_retryable_minimax_error(exc: Exception) -> bool:
    text = f"{type(exc).__name__}: {exc}".lower()
    retry_markers = (
        "overloaded_error",
        "http_code': '529'",
        'http_code": "529"',
        " 529 ",
        "429",
        "timeout",
        "timed out",
        "connection reset",
        "temporarily unavailable",
    )
    return any(marker in text for marker in retry_markers)


class NormalizedChatAnthropic(ChatAnthropic):
    """ChatAnthropic with normalized content output.

@@ -20,7 +44,25 @@ class NormalizedChatAnthropic(ChatAnthropic):
    """

    def invoke(self, input, config=None, **kwargs):
        return normalize_content(super().invoke(input, config, **kwargs))
        extra_attempts = max(0, int(getattr(self, "_minimax_retry_attempts", 0)))
        base_delay = max(0.0, float(getattr(self, "_minimax_retry_base_delay", 0.0)))

        for attempt in range(extra_attempts + 1):
            try:
                return normalize_content(super().invoke(input, config, **kwargs))
            except Exception as exc:
                if attempt >= extra_attempts or not _is_retryable_minimax_error(exc):
                    raise

                delay = base_delay * (2 ** attempt)
                logger.warning(
                    "MiniMax Anthropic invoke failed (%s); retrying in %.1fs (%s/%s)",
                    exc,
                    delay,
                    attempt + 1,
                    extra_attempts,
                )
                time.sleep(delay)


class AnthropicClient(BaseLLMClient):

@@ -41,7 +83,19 @@ class AnthropicClient(BaseLLMClient):
            if key in self.kwargs:
                llm_kwargs[key] = self.kwargs[key]

        return NormalizedChatAnthropic(**llm_kwargs)
        llm = NormalizedChatAnthropic(**llm_kwargs)
        if _is_minimax_anthropic_base_url(self.base_url):
            object.__setattr__(
                llm,
                "_minimax_retry_attempts",
                int(self.kwargs.get("minimax_retry_attempts", 0)),
            )
            object.__setattr__(
                llm,
                "_minimax_retry_base_delay",
                float(self.kwargs.get("minimax_retry_base_delay", 0.0)),
            )
        return llm

    def validate_model(self) -> bool:
        """Validate model for Anthropic."""
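The `invoke` override above retries only errors that `_is_retryable_minimax_error` classifies as transient (overload/529, 429, timeouts, resets), with exponential backoff of `base_delay * 2**attempt`, and re-raises everything else immediately. The same pattern in isolation (a sketch with a trimmed marker list mirroring the one above):

```python
import time

RETRY_MARKERS = ("overloaded_error", "429", "timeout", "connection reset")

def is_retryable(exc: Exception) -> bool:
    text = f"{type(exc).__name__}: {exc}".lower()
    return any(marker in text for marker in RETRY_MARKERS)

def invoke_with_retry(call, extra_attempts: int = 2, base_delay: float = 0.01):
    """Run `call()`; retry transient failures with exponential backoff."""
    for attempt in range(extra_attempts + 1):
        try:
            return call()
        except Exception as exc:
            if attempt >= extra_attempts or not is_retryable(exc):
                raise  # out of retry budget, or a permanent error
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...
```

With `extra_attempts=0` (the default wired up in `AnthropicClient` unless `minimax_retry_attempts` is set), the loop runs exactly once and behaves like the original single call.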
@@ -1,4 +1,6 @@
from typing import Optional
from dataclasses import dataclass
from typing import Callable, Optional, TypedDict
import re

from .base_client import BaseLLMClient
from .openai_client import OpenAIClient

@@ -11,6 +13,107 @@ _OPENAI_COMPATIBLE = (
    "openai", "xai", "deepseek", "qwen", "glm", "ollama", "openrouter",
)

# Compiled pattern cache for validation performance
_COMPILED_PATTERNS: dict[str, list[re.Pattern]] = {}


class ProviderMismatch(TypedDict):
    """Provider validation mismatch details."""
    provider: str
    backend_url: str
    expected_patterns: tuple[str, ...]


@dataclass(frozen=True)
class ProviderSpec:
    """Provider registry entry for LLM client creation.

    Attributes:
        canonical_name: Primary provider identifier
        aliases: Alternative names that resolve to this provider
        builder: Factory function to create the client instance
        base_url_patterns: Regex patterns for valid base URLs (None = no validation)
    """

    canonical_name: str
    aliases: tuple[str, ...]
    builder: Callable[..., BaseLLMClient]
    base_url_patterns: Optional[tuple[str, ...]] = None


_PROVIDER_SPECS: tuple[ProviderSpec, ...] = (
    ProviderSpec(
        canonical_name="openai",
        aliases=("openai",),
        builder=lambda model, base_url=None, **kwargs: OpenAIClient(
            model,
            base_url,
            provider="openai",
            **kwargs,
        ),
        base_url_patterns=(r"api\.openai\.com",),
    ),
    ProviderSpec(
        canonical_name="ollama",
        aliases=("ollama",),
        builder=lambda model, base_url=None, **kwargs: OpenAIClient(
            model,
            base_url,
            provider="ollama",
            **kwargs,
        ),
        base_url_patterns=(r"localhost:\d+", r"127\.0\.0\.1:\d+", r"ollama"),
    ),
    ProviderSpec(
        canonical_name="openrouter",
        aliases=("openrouter",),
        builder=lambda model, base_url=None, **kwargs: OpenAIClient(
            model,
            base_url,
            provider="openrouter",
            **kwargs,
        ),
        base_url_patterns=(r"openrouter\.ai",),
    ),
    ProviderSpec(
        canonical_name="xai",
        aliases=("xai",),
        builder=lambda model, base_url=None, **kwargs: OpenAIClient(
            model,
            base_url,
            provider="xai",
            **kwargs,
        ),
        base_url_patterns=(r"api\.x\.ai",),
    ),
    ProviderSpec(
        canonical_name="anthropic",
        aliases=("anthropic",),
        builder=lambda model, base_url=None, **kwargs: AnthropicClient(model, base_url, **kwargs),
        base_url_patterns=(r"api\.anthropic\.com", r"api\.minimaxi\.com/anthropic"),
    ),
    ProviderSpec(
        canonical_name="google",
        aliases=("google",),
        builder=lambda model, base_url=None, **kwargs: GoogleClient(model, base_url, **kwargs),
        base_url_patterns=(r"generativelanguage\.googleapis\.com",),
    ),
)


def get_provider_spec(provider: str) -> ProviderSpec:
    """Resolve a provider or alias to its canonical registry entry."""
    provider_lower = provider.lower()
    for spec in _PROVIDER_SPECS:
        if provider_lower in spec.aliases:
            return spec
    raise ValueError(f"Unsupported LLM provider: {provider}")


def get_supported_providers() -> tuple[str, ...]:
    """Return canonical provider names exposed by the registry."""
    return tuple(spec.canonical_name for spec in _PROVIDER_SPECS)


def create_llm_client(
    provider: str,

@@ -33,17 +136,48 @@
        ValueError: If provider is not supported
    """
    provider_lower = provider.lower()
    if provider_lower in _OPENAI_COMPATIBLE:
        return OpenAIClient(model, base_url, provider=provider_lower, **kwargs)

    if provider_lower == "anthropic":
        return AnthropicClient(model, base_url, **kwargs)

    if provider_lower == "google":
        return GoogleClient(model, base_url, **kwargs)

    if provider_lower == "azure":
        return AzureOpenAIClient(model, base_url, **kwargs)

    raise ValueError(f"Unsupported LLM provider: {provider}")
    provider_spec = get_provider_spec(provider_lower)
    return provider_spec.builder(model, base_url, **kwargs)


def validate_provider_base_url(provider: str, base_url: str) -> Optional[ProviderMismatch]:
    """Validate provider × base_url compatibility.

    Args:
        provider: LLM provider name (original, not canonical)
        base_url: API endpoint URL

    Returns:
        None if valid, or ProviderMismatch dict if invalid
    """
    if not provider or not base_url:
        return None

    provider_lower = provider.lower()
    base_url_lower = base_url.lower()

    try:
        spec = get_provider_spec(provider_lower)
    except ValueError:
        # Unknown provider - no validation rules
        return None

    if spec.base_url_patterns is None:
        # No validation rules defined for this provider
        return None

    # Use cached compiled patterns for performance
    cache_key = spec.canonical_name
    if cache_key not in _COMPILED_PATTERNS:
        _COMPILED_PATTERNS[cache_key] = [re.compile(p) for p in spec.base_url_patterns]

    for pattern in _COMPILED_PATTERNS[cache_key]:
        if pattern.search(base_url_lower):
            return None  # Match found

    # No pattern matched - return mismatch details
    return {
        "provider": provider_lower,
        "backend_url": base_url,
        "expected_patterns": spec.base_url_patterns,
    }
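Driven by the registry above, validation reduces to a regex `search` over the lowercased URL: any matching pattern means the pairing is valid, and unknown providers are deliberately left unvalidated. A standalone sketch with a trimmed two-entry pattern table (patterns copied from the specs above):

```python
import re
from typing import Optional

# Trimmed copy of the registry's base_url_patterns for two providers.
PATTERNS = {
    "anthropic": (r"api\.anthropic\.com", r"api\.minimaxi\.com/anthropic"),
    "openai": (r"api\.openai\.com",),
}

def validate(provider: str, base_url: str) -> Optional[dict]:
    patterns = PATTERNS.get(provider.lower())
    if patterns is None:
        return None  # unknown provider: no rules, treated as valid
    url = base_url.lower()
    if any(re.search(p, url) for p in patterns):
        return None  # a pattern matched: valid pairing
    return {
        "provider": provider.lower(),
        "backend_url": base_url,
        "expected_patterns": patterns,
    }
```

This is why pointing `provider="anthropic"` at `https://api.minimaxi.com/anthropic` passes, while pointing `provider="openai"` at the same URL yields a mismatch dict instead of an exception.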
@@ -25,11 +25,15 @@ MODEL_OPTIONS: ProviderModeOptions = {
    },
    "anthropic": {
        "quick": [
            ("MiniMax M2.7 Highspeed - Repo local default via Anthropic-compatible API", "MiniMax-M2.7-highspeed"),
            ("MiniMax M2.7 - Anthropic-compatible legacy fallback", "MiniMax-M2.7"),
            ("Claude Sonnet 4.6 - Best speed and intelligence balance", "claude-sonnet-4-6"),
            ("Claude Haiku 4.5 - Fast, near-instant responses", "claude-haiku-4-5"),
            ("Claude Sonnet 4.5 - Agents and coding", "claude-sonnet-4-5"),
        ],
        "deep": [
            ("MiniMax M2.7 Highspeed - Repo local default via Anthropic-compatible API", "MiniMax-M2.7-highspeed"),
            ("MiniMax M2.7 - Anthropic-compatible legacy fallback", "MiniMax-M2.7"),
            ("Claude Opus 4.6 - Most intelligent, agents and coding", "claude-opus-4-6"),
            ("Claude Opus 4.5 - Premium, max intelligence", "claude-opus-4-5"),
            ("Claude Sonnet 4.6 - Best speed and intelligence balance", "claude-sonnet-4-6"),
@@ -79,8 +79,10 @@ class OpenAIClient(BaseLLMClient):

        # Native OpenAI: use Responses API for consistent behavior across
        # all model families. Third-party providers use Chat Completions.
        # Allow override via kwargs (e.g. use_responses_api=False for MiniMax)
        if self.provider == "openai":
            llm_kwargs["use_responses_api"] = True
            use_resp = self.kwargs.get("use_responses_api", True)
            llm_kwargs["use_responses_api"] = use_resp

        return NormalizedChatOpenAI(**llm_kwargs)
@@ -0,0 +1,261 @@
import time

from tradingagents.agents.utils.agent_states import extract_research_provenance
import tradingagents.graph.setup as graph_setup_module
from tradingagents.graph.setup import GraphSetup


def _setup() -> GraphSetup:
    return GraphSetup(
        quick_thinking_llm=None,
        deep_thinking_llm=None,
        tool_nodes={},
        bull_memory=None,
        bear_memory=None,
        trader_memory=None,
        invest_judge_memory=None,
        portfolio_manager_memory=None,
        conditional_logic=None,
        analyst_node_timeout_secs=0.01,
        research_node_timeout_secs=0.01,
    )


def test_manager_guard_fallback_marks_degraded_synthesis():
    setup = _setup()
    state = {
        "investment_debate_state": {
            "history": "Bull Analyst: case",
            "bull_history": "Bull Analyst: case",
            "bear_history": "",
            "current_response": "Bull Analyst: case",
            "judge_decision": "",
            "count": 1,
            "research_status": "full",
            "research_mode": "debate",
            "timed_out_nodes": [],
            "degraded_reason": None,
            "covered_dimensions": ["bull"],
            "manager_confidence": None,
        }
    }

    result = setup._apply_research_fallback(
        state,
        node_name="Research Manager",
        dimension="manager",
        reason="research_manager_timeout",
        started_at=0.0,
    )

    debate = result["investment_debate_state"]
    assert debate["research_status"] == "degraded"
    assert debate["research_mode"] == "degraded_synthesis"
    assert debate["timed_out_nodes"] == ["Research Manager"]
    assert result["investment_plan"].startswith("Recommendation: HOLD")


def test_bull_guard_success_records_coverage():
    setup = _setup()
    state = {
        "investment_debate_state": {
            "history": "",
            "bull_history": "",
            "bear_history": "",
            "current_response": "",
            "judge_decision": "",
            "count": 0,
            "research_status": "full",
            "research_mode": "debate",
            "timed_out_nodes": [],
            "degraded_reason": None,
            "covered_dimensions": [],
            "manager_confidence": None,
        }
    }
    result = {
        "investment_debate_state": {
            "history": "Bull Analyst: ok",
            "bull_history": "Bull Analyst: ok",
            "bear_history": "",
            "current_response": "Bull Analyst: ok",
            "judge_decision": "",
            "count": 1,
        }
    }

    updated = setup._apply_research_success(state, result, dimension="bull")
    debate = updated["investment_debate_state"]
    assert debate["research_status"] == "full"
    assert debate["research_mode"] == "debate"
    assert debate["covered_dimensions"] == ["bull"]


def test_manager_success_sets_confidence_without_changing_shape():
    setup = _setup()
    state = {
        "investment_debate_state": {
            "history": "Bull Analyst: case\nBear Analyst: counter",
            "bull_history": "Bull Analyst: case",
            "bear_history": "Bear Analyst: counter",
            "current_response": "Bear Analyst: counter",
            "judge_decision": "",
            "count": 2,
            "research_status": "full",
            "research_mode": "debate",
            "timed_out_nodes": [],
            "degraded_reason": None,
            "covered_dimensions": ["bull", "bear"],
            "manager_confidence": None,
        }
    }
    result = {
        "investment_debate_state": {
            "history": "Bull Analyst: case\nBear Analyst: counter",
            "bull_history": "Bull Analyst: case",
            "bear_history": "Bear Analyst: counter",
            "current_response": "Recommendation: BUY",
            "judge_decision": "Recommendation: BUY",
            "count": 2,
        },
        "investment_plan": "Recommendation: BUY",
    }

    updated = setup._apply_research_success(state, result, dimension="manager")
    debate = updated["investment_debate_state"]
    assert updated["investment_plan"] == "Recommendation: BUY"
    assert debate["judge_decision"] == "Recommendation: BUY"
    assert debate["research_status"] == "full"
    assert debate["research_mode"] == "debate"
    assert debate["covered_dimensions"] == ["bull", "bear", "manager"]
    assert debate["manager_confidence"] == 1.0


def test_bear_guard_exception_returns_degraded_argument(monkeypatch):
    def broken_bear(_llm, _memory):
        def node(_state):
            raise ConnectionError("downstream unavailable")

        return node

    monkeypatch.setattr(graph_setup_module, "create_bear_researcher", broken_bear)
    setup = _setup()
    wrapped = setup._guard_research_node("Bear Researcher", None, None)
    state = {
        "investment_debate_state": {
            "history": "Bull Analyst: case",
            "bull_history": "Bull Analyst: case",
            "bear_history": "",
            "current_response": "Bull Analyst: case",
            "judge_decision": "",
            "count": 1,
            "research_status": "full",
            "research_mode": "debate",
            "timed_out_nodes": [],
            "degraded_reason": None,
            "covered_dimensions": ["bull"],
            "manager_confidence": None,
        }
    }

    result = wrapped(state)

    debate = result["investment_debate_state"]
    assert debate["research_status"] == "degraded"
    assert debate["research_mode"] == "degraded_synthesis"
    assert debate["degraded_reason"] == "bear_researcher_connectionerror"
    assert debate["timed_out_nodes"] == []
    assert debate["count"] == 2
    assert debate["current_response"].startswith(
        "Bear Analyst: [DEGRADED] Bear Researcher unavailable (bear_researcher_connectionerror)."
    )
    assert debate["history"].startswith("Bull Analyst: case\nBear Analyst: [DEGRADED]")
    assert debate["bear_history"].startswith("\nBear Analyst: [DEGRADED]")


def test_guard_timeout_returns_without_waiting_for_node_completion(monkeypatch):
    def slow_bull(_llm, _memory):
        def node(_state):
            time.sleep(0.2)
            return {"investment_debate_state": {"history": "", "bull_history": "", "bear_history": "", "current_response": "", "judge_decision": "", "count": 1}}

        return node

    monkeypatch.setattr(graph_setup_module, "create_bull_researcher", slow_bull)
    setup = _setup()
    wrapped = setup._guard_research_node("Bull Researcher", None, None)
    state = {
        "investment_debate_state": {
            "history": "",
            "bull_history": "",
            "bear_history": "",
            "current_response": "",
            "judge_decision": "",
            "count": 0,
            "research_status": "full",
            "research_mode": "debate",
            "timed_out_nodes": [],
            "degraded_reason": None,
            "covered_dimensions": [],
            "manager_confidence": None,
        }
    }

    started = time.monotonic()
    result = wrapped(state)
    elapsed = time.monotonic() - started

    assert elapsed < 0.1
    debate = result["investment_debate_state"]
    assert debate["research_status"] == "degraded"
    assert debate["research_mode"] == "degraded_synthesis"
    assert debate["timed_out_nodes"] == ["Bull Researcher"]


def test_analyst_guard_timeout_returns_degraded_report_quickly():
    setup = _setup()

    def slow_node(_state):
        time.sleep(0.2)
        return {"messages": [], "market_report": "ok"}

    wrapped = setup._guard_analyst_node(
        "Market Analyst",
        slow_node,
        report_field="market_report",
    )

    started = time.monotonic()
    result = wrapped({"messages": []})
    elapsed = time.monotonic() - started

    assert elapsed < 0.1
    assert result["market_report"].startswith("[DEGRADED] Market Analyst unavailable")
    assert result["messages"][0].content.startswith("[DEGRADED] Market Analyst unavailable")


def test_extract_research_provenance_returns_subset():
    payload = extract_research_provenance(
        {
            "research_status": "degraded",
            "research_mode": "degraded_synthesis",
            "timed_out_nodes": ["Bull Researcher"],
            "degraded_reason": "bull_researcher_timeout",
            "covered_dimensions": ["market", "bull"],
            "manager_confidence": 0.0,
            "history": "ignored",
        }
    )

    assert payload == {
        "research_status": "degraded",
        "research_mode": "degraded_synthesis",
        "timed_out_nodes": ["Bull Researcher"],
        "degraded_reason": "bull_researcher_timeout",
        "covered_dimensions": ["market", "bull"],
        "manager_confidence": 0.0,
    }


def test_extract_research_provenance_ignores_non_mapping():
    assert extract_research_provenance(None) is None
    assert extract_research_provenance("bad") is None
@@ -0,0 +1,22 @@
import threading

from tradingagents.dataflows import stockstats_utils


def test_get_fallback_session_reuses_session_in_same_thread(monkeypatch):
    created = []

    class FakeSession:
        def __init__(self):
            self.trust_env = True
            created.append(self)

    monkeypatch.setattr(stockstats_utils, "_fallback_session_local", threading.local())
    monkeypatch.setattr(stockstats_utils.requests, "Session", FakeSession)

    first = stockstats_utils._get_fallback_session()
    second = stockstats_utils._get_fallback_session()

    assert first is second
    assert len(created) == 1
    assert first.trust_env is False
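The test above pins down the contract of `_get_fallback_session`: one session per thread, reused on later calls, with `trust_env` flipped to False so ambient proxy settings are ignored on the fallback path. A minimal sketch of a helper satisfying that contract (names and the `Session` stand-in are assumptions; the real helper in `stockstats_utils` wraps `requests.Session`):

```python
import threading

_local = threading.local()

class Session:
    """Stand-in for requests.Session so the sketch stays dependency-free."""
    def __init__(self):
        self.trust_env = True

def get_fallback_session() -> Session:
    session = getattr(_local, "session", None)
    if session is None:
        session = Session()
        session.trust_env = False  # ignore HTTP(S)_PROXY etc. for the fallback path
        _local.session = session  # cache per thread
    return session
```

Using `threading.local()` rather than a module-level singleton keeps concurrent workers from sharing (and mutating) one session object.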
@@ -0,0 +1,487 @@
"""
Portfolio API: watchlist, positions, daily recommendations
"""
import asyncio
import json
import os
import uuid
from datetime import datetime
from pathlib import Path
from typing import Optional

import yfinance

try:
    import fcntl
except ImportError:  # pragma: no cover - exercised on Windows
    import msvcrt

    class _FcntlCompat:
        LOCK_SH = 1
        LOCK_EX = 2
        LOCK_UN = 8

        @staticmethod
        def flock(fd: int, operation: int) -> None:
            os.lseek(fd, 0, os.SEEK_SET)
            if operation == _FcntlCompat.LOCK_UN:
                try:
                    msvcrt.locking(fd, msvcrt.LK_UNLCK, 1)
                except OSError:
                    return
                return

            if os.fstat(fd).st_size == 0:
                os.write(fd, b"\0")
                os.lseek(fd, 0, os.SEEK_SET)

            msvcrt.locking(fd, msvcrt.LK_LOCK, 1)

    fcntl = _FcntlCompat()
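With the shim in place, the rest of the module can call `fcntl.flock` identically on POSIX and Windows. A minimal locked-JSON write in the same style as the functions below (a sketch using a temp directory rather than the module's DATA_DIR, and skipping the lock when `fcntl` is unavailable):

```python
import json
import tempfile
from pathlib import Path

try:
    import fcntl
except ImportError:  # on Windows the _FcntlCompat shim above plays this role
    fcntl = None

def locked_write_json(path: Path, lock_path: Path, payload: dict) -> None:
    with open(lock_path, "w") as lf:
        if fcntl is not None:
            fcntl.flock(lf.fileno(), fcntl.LOCK_EX)  # exclusive lock for writers
        try:
            path.write_text(json.dumps(payload, ensure_ascii=False, indent=2))
        finally:
            if fcntl is not None:
                fcntl.flock(lf.fileno(), fcntl.LOCK_UN)

tmp = Path(tempfile.mkdtemp())
locked_write_json(tmp / "watchlist.json", tmp / "watchlist.lock", {"watchlist": []})
```

Locking a separate `.lock` file instead of the data file itself lets readers and writers coordinate without ever truncating the JSON payload while it is held open.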
# Data directory
|
||||
DATA_DIR = Path(__file__).parent.parent.parent / "data"
|
||||
DATA_DIR.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
WATCHLIST_FILE = DATA_DIR / "watchlist.json"
|
||||
POSITIONS_FILE = DATA_DIR / "positions.json"
|
||||
RECOMMENDATIONS_DIR = DATA_DIR / "recommendations"
|
||||
WATCHLIST_LOCK = DATA_DIR / "watchlist.lock"
|
||||
POSITIONS_LOCK = DATA_DIR / "positions.lock"
|
||||
|
||||
|
||||
# ============== Watchlist ==============
|
||||
|
||||
def get_watchlist() -> list:
|
||||
if not WATCHLIST_FILE.exists():
|
||||
return []
|
||||
try:
|
||||
with open(WATCHLIST_LOCK, "w") as lf:
|
||||
fcntl.flock(lf.fileno(), fcntl.LOCK_SH)
|
||||
try:
|
||||
return json.loads(WATCHLIST_FILE.read_text()).get("watchlist", [])
|
||||
finally:
|
||||
fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
|
||||
except Exception:
|
||||
return []
|
||||
|
||||
|
||||
def _save_watchlist(watchlist: list):
|
||||
with open(WATCHLIST_LOCK, "w") as lf:
|
||||
fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
|
||||
try:
|
||||
WATCHLIST_FILE.write_text(json.dumps({"watchlist": watchlist}, ensure_ascii=False, indent=2))
|
||||
finally:
|
||||
fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
|
||||
|
||||
|
||||
def add_to_watchlist(ticker: str, name: str) -> dict:
|
||||
with open(WATCHLIST_LOCK, "w") as lf:
|
||||
fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
|
||||
try:
|
||||
watchlist = json.loads(WATCHLIST_FILE.read_text()).get("watchlist", []) if WATCHLIST_FILE.exists() else []
|
||||
if any(s["ticker"] == ticker for s in watchlist):
|
||||
raise ValueError(f"{ticker} 已在自选股中")
|
||||
entry = {
|
||||
"ticker": ticker,
|
||||
"name": name,
|
||||
"added_at": datetime.now().strftime("%Y-%m-%d"),
|
||||
}
|
||||
watchlist.append(entry)
|
||||
WATCHLIST_FILE.write_text(json.dumps({"watchlist": watchlist}, ensure_ascii=False, indent=2))
|
||||
return entry
|
||||
finally:
|
||||
fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
|
||||
|
||||
|
||||
def remove_from_watchlist(ticker: str) -> bool:
    with open(WATCHLIST_LOCK, "w") as lf:
        fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
        try:
            watchlist = json.loads(WATCHLIST_FILE.read_text()).get("watchlist", []) if WATCHLIST_FILE.exists() else []
            new_list = [s for s in watchlist if s["ticker"] != ticker]
            if len(new_list) == len(watchlist):
                return False
            WATCHLIST_FILE.write_text(json.dumps({"watchlist": new_list}, ensure_ascii=False, indent=2))
            return True
        finally:
            fcntl.flock(lf.fileno(), fcntl.LOCK_UN)


# ============== Accounts ==============

def get_accounts() -> dict:
    if not POSITIONS_FILE.exists():
        return {"accounts": {}}
    try:
        with open(POSITIONS_LOCK, "w") as lf:
            fcntl.flock(lf.fileno(), fcntl.LOCK_SH)
            try:
                return json.loads(POSITIONS_FILE.read_text())
            finally:
                fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
    except Exception:
        return {"accounts": {}}


def _save_accounts(data: dict):
    with open(POSITIONS_LOCK, "w") as lf:
        fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
        try:
            POSITIONS_FILE.write_text(json.dumps(data, ensure_ascii=False, indent=2))
        finally:
            fcntl.flock(lf.fileno(), fcntl.LOCK_UN)

def create_account(account_name: str) -> dict:
    with open(POSITIONS_LOCK, "w") as lf:
        fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
        try:
            accounts = json.loads(POSITIONS_FILE.read_text()) if POSITIONS_FILE.exists() else {"accounts": {}}
            if account_name in accounts.get("accounts", {}):
                raise ValueError(f"账户 {account_name} 已存在")
            accounts["accounts"][account_name] = {"positions": {}}
            POSITIONS_FILE.write_text(json.dumps(accounts, ensure_ascii=False, indent=2))
            return {"account_name": account_name}
        finally:
            fcntl.flock(lf.fileno(), fcntl.LOCK_UN)


def delete_account(account_name: str) -> bool:
    with open(POSITIONS_LOCK, "w") as lf:
        fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
        try:
            accounts = json.loads(POSITIONS_FILE.read_text()) if POSITIONS_FILE.exists() else {"accounts": {}}
            if account_name not in accounts.get("accounts", {}):
                return False
            del accounts["accounts"][account_name]
            POSITIONS_FILE.write_text(json.dumps(accounts, ensure_ascii=False, indent=2))
            return True
        finally:
            fcntl.flock(lf.fileno(), fcntl.LOCK_UN)

# ============== Positions ==============

# Semaphore to limit concurrent yfinance requests (avoid rate limiting)
MAX_CONCURRENT_YFINANCE_REQUESTS = 5
_yfinance_semaphore: asyncio.Semaphore = asyncio.Semaphore(MAX_CONCURRENT_YFINANCE_REQUESTS)


def _fetch_price(ticker: str) -> float | None:
    """Fetch current price synchronously (called in thread executor)."""
    try:
        stock = yfinance.Ticker(ticker)
        info = stock.info or {}
        return info.get("currentPrice") or info.get("regularMarketPrice")
    except Exception:
        return None


async def _fetch_price_throttled(ticker: str) -> float | None:
    """Fetch price with semaphore throttling."""
    async with _yfinance_semaphore:
        return await asyncio.to_thread(_fetch_price, ticker)

async def get_positions(account: Optional[str] = None) -> list:
    """
    Returns positions with live price from yfinance and computed P&L.

    Uses asyncio executor with concurrency limit (max 5 simultaneous requests).
    """
    accounts = get_accounts()

    if account:
        acc = accounts.get("accounts", {}).get(account)
        if not acc:
            return []
        positions = [(_ticker, _pos) for _ticker, _positions in acc.get("positions", {}).items()
                     for _pos in _positions]
    else:
        positions = [
            (_ticker, _pos)
            for _acc_data in accounts.get("accounts", {}).values()
            for _ticker, _positions in _acc_data.get("positions", {}).items()
            for _pos in _positions
        ]

    if not positions:
        return []

    tickers = [t for t, _ in positions]
    prices = await asyncio.gather(*[_fetch_price_throttled(t) for t in tickers])

    result = []
    for (ticker, pos), current_price in zip(positions, prices):
        shares = pos.get("shares", 0)
        cost_price = pos.get("cost_price", 0)
        unrealized_pnl = None
        unrealized_pnl_pct = None
        if current_price is not None and cost_price:
            unrealized_pnl = (current_price - cost_price) * shares
            unrealized_pnl_pct = (current_price / cost_price - 1) * 100

        result.append({
            "ticker": ticker,
            "name": pos.get("name", ticker),
            "account": pos.get("account", "默认账户"),
            "shares": shares,
            "cost_price": cost_price,
            "current_price": current_price,
            "unrealized_pnl": unrealized_pnl,
            "unrealized_pnl_pct": unrealized_pnl_pct,
            "purchase_date": pos.get("purchase_date"),
            "notes": pos.get("notes", ""),
            "position_id": pos.get("position_id"),
        })
    return result

def add_position(ticker: str, shares: float, cost_price: float,
                 purchase_date: Optional[str], notes: str, account: str) -> dict:
    with open(POSITIONS_LOCK, "w") as lf:
        fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
        try:
            accounts = json.loads(POSITIONS_FILE.read_text()) if POSITIONS_FILE.exists() else {"accounts": {}}
            acc = accounts.get("accounts", {}).get(account)
            if not acc:
                if "默认账户" not in accounts.get("accounts", {}):
                    accounts["accounts"]["默认账户"] = {"positions": {}}
                acc = accounts["accounts"]["默认账户"]

            position_id = f"pos_{uuid.uuid4().hex[:6]}"
            position = {
                "position_id": position_id,
                "shares": shares,
                "cost_price": cost_price,
                "purchase_date": purchase_date,
                "notes": notes,
                "account": account,
                "name": ticker,
            }

            if ticker not in acc["positions"]:
                acc["positions"][ticker] = []
            acc["positions"][ticker].append(position)
            POSITIONS_FILE.write_text(json.dumps(accounts, ensure_ascii=False, indent=2))
            return position
        finally:
            fcntl.flock(lf.fileno(), fcntl.LOCK_UN)

def remove_position(ticker: str, position_id: str, account: Optional[str]) -> bool:
    if not position_id:
        return False  # Require explicit position_id to prevent mass deletion
    with open(POSITIONS_LOCK, "w") as lf:
        fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
        try:
            accounts = json.loads(POSITIONS_FILE.read_text()) if POSITIONS_FILE.exists() else {"accounts": {}}
            if account:
                acc = accounts.get("accounts", {}).get(account)
                if acc and ticker in acc.get("positions", {}):
                    acc["positions"][ticker] = [
                        p for p in acc["positions"][ticker]
                        if p.get("position_id") != position_id
                    ]
                    if not acc["positions"][ticker]:
                        del acc["positions"][ticker]
                    POSITIONS_FILE.write_text(json.dumps(accounts, ensure_ascii=False, indent=2))
                    return True
            else:
                for acc_data in accounts.get("accounts", {}).values():
                    if ticker in acc_data.get("positions", {}):
                        original_len = len(acc_data["positions"][ticker])
                        acc_data["positions"][ticker] = [
                            p for p in acc_data["positions"][ticker]
                            if p.get("position_id") != position_id
                        ]
                        if len(acc_data["positions"][ticker]) < original_len:
                            if not acc_data["positions"][ticker]:
                                del acc_data["positions"][ticker]
                            POSITIONS_FILE.write_text(json.dumps(accounts, ensure_ascii=False, indent=2))
                            return True
            return False
        finally:
            fcntl.flock(lf.fileno(), fcntl.LOCK_UN)


# ============== Recommendations ==============

# Pagination defaults (must match main.py constants)
DEFAULT_PAGE_SIZE = 50
MAX_PAGE_SIZE = 500

def _rating_to_direction(rating: Optional[str]) -> int:
    if rating in {"BUY", "OVERWEIGHT"}:
        return 1
    if rating in {"SELL", "UNDERWEIGHT"}:
        return -1
    return 0

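The mapping above collapses the five ratings into a signed direction (+1 bullish, -1 bearish, 0 for HOLD or missing). A standalone copy for illustration, mirroring the function as written:

```python
from typing import Optional


def rating_to_direction(rating: Optional[str]) -> int:
    # BUY/OVERWEIGHT -> +1, SELL/UNDERWEIGHT -> -1, anything else (HOLD, None) -> 0
    if rating in {"BUY", "OVERWEIGHT"}:
        return 1
    if rating in {"SELL", "UNDERWEIGHT"}:
        return -1
    return 0


print([rating_to_direction(r) for r in ("BUY", "HOLD", "SELL", None)])  # [1, 0, -1, 0]
```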
def _normalize_recommendation_record(record: dict, *, date: Optional[str] = None, ticker: Optional[str] = None) -> dict:
    normalized = dict(record)
    if "result" in normalized and "contract_version" in normalized:
        normalized.setdefault("ticker", ticker or normalized.get("ticker"))
        normalized.setdefault("date", date or normalized.get("date") or normalized.get("analysis_date"))
        return normalized

    decision = normalized.get("decision", "HOLD")
    quant_signal = normalized.get("quant_signal")
    llm_signal = normalized.get("llm_signal")
    confidence = normalized.get("confidence")
    date_value = date or normalized.get("date") or normalized.get("analysis_date")
    ticker_value = ticker or normalized.get("ticker")
    return {
        "contract_version": "v1alpha1",
        "ticker": ticker_value,
        "name": normalized.get("name", ticker_value),
        "date": date_value,
        "status": normalized.get("status", "completed"),
        "created_at": normalized.get("created_at"),
        "result": {
            "decision": decision,
            "confidence": confidence,
            "signals": {
                "merged": {
                    "direction": _rating_to_direction(decision),
                    "rating": decision,
                },
                "quant": {
                    "direction": _rating_to_direction(quant_signal),
                    "rating": quant_signal,
                    "available": quant_signal is not None,
                },
                "llm": {
                    "direction": _rating_to_direction(llm_signal),
                    "rating": llm_signal,
                    "available": llm_signal is not None,
                },
            },
            "degraded": quant_signal is None or llm_signal is None,
        },
        "degradation": normalized.get("degradation") or {
            "degraded": quant_signal is None or llm_signal is None,
            "reason_codes": [],
        },
        "data_quality": normalized.get("data_quality"),
        "compat": {
            "analysis_date": date_value,
            "decision": decision,
            "quant_signal": quant_signal,
            "llm_signal": llm_signal,
            "confidence": confidence,
        },
    }

def get_recommendations(date: Optional[str] = None, limit: int = DEFAULT_PAGE_SIZE, offset: int = 0) -> dict:
    """List recommendations, optionally filtered by date. Returns paginated results."""
    RECOMMENDATIONS_DIR.mkdir(parents=True, exist_ok=True)
    all_recs = []

    if date:
        date_dir = RECOMMENDATIONS_DIR / date
        if date_dir.exists():
            all_recs = [
                _normalize_recommendation_record(json.loads(f.read_text()), date=date_dir.name)
                for f in sorted(date_dir.glob("*.json"), reverse=True)
                if f.suffix == ".json"
            ]
    else:
        for date_dir in sorted(RECOMMENDATIONS_DIR.iterdir(), reverse=True):
            if date_dir.is_dir() and date_dir.name.startswith("20"):
                for f in sorted(date_dir.glob("*.json"), reverse=True):
                    if f.suffix == ".json":
                        all_recs.append(
                            _normalize_recommendation_record(
                                json.loads(f.read_text()),
                                date=date_dir.name,
                            )
                        )

    total = len(all_recs)
    return {
        "contract_version": "v1alpha1",
        "recommendations": all_recs[offset : offset + limit],
        "total": total,
        "limit": limit,
        "offset": offset,
    }

def get_recommendation(date: str, ticker: str) -> Optional[dict]:
    # Validate inputs to prevent path traversal
    if ".." in ticker or "/" in ticker or "\\" in ticker:
        return None
    if ".." in date or "/" in date or "\\" in date:
        return None
    path = RECOMMENDATIONS_DIR / date / f"{ticker}.json"
    if not path.exists():
        return None
    # Ensure resolved path is within RECOMMENDATIONS_DIR (strict traversal check)
    try:
        path.resolve().relative_to(RECOMMENDATIONS_DIR.resolve())
    except ValueError:
        return None
    return _normalize_recommendation_record(json.loads(path.read_text()), date=date, ticker=ticker)

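`get_recommendation` rejects traversal twice: cheap substring checks on the raw inputs, then a strict containment test built on `resolve()`/`relative_to()`. The containment idiom in isolation (the directory names below are illustrative):

```python
from pathlib import Path


def is_within(base: Path, candidate: Path) -> bool:
    # relative_to() raises ValueError when the resolved candidate
    # does not live under the resolved base directory.
    try:
        candidate.resolve().relative_to(base.resolve())
        return True
    except ValueError:
        return False


base = Path("/tmp/recs")
print(is_within(base, base / "2024-01-01" / "AAPL.json"))  # inside base
print(is_within(base, base / ".." / "etc" / "passwd"))     # escapes base
```

Resolving both sides first is what defeats `..` segments and symlink tricks that plain string prefix checks miss.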
def save_recommendation(date: str, ticker: str, data: dict):
    date_dir = RECOMMENDATIONS_DIR / date
    date_dir.mkdir(parents=True, exist_ok=True)
    (date_dir / f"{ticker}.json").write_text(json.dumps(data, ensure_ascii=False, indent=2))

class LegacyPortfolioGateway:
    """Compatibility gateway that exposes the current portfolio API as a service boundary."""

    def get_watchlist(self) -> list:
        return get_watchlist()

    def add_to_watchlist(self, ticker: str, name: str) -> dict:
        return add_to_watchlist(ticker, name)

    def remove_from_watchlist(self, ticker: str) -> bool:
        return remove_from_watchlist(ticker)

    def get_accounts(self) -> dict:
        return get_accounts()

    def create_account(self, account_name: str) -> dict:
        return create_account(account_name)

    def delete_account(self, account_name: str) -> bool:
        return delete_account(account_name)

    async def get_positions(self, account: Optional[str] = None) -> list:
        return await get_positions(account)

    def add_position(
        self,
        ticker: str,
        shares: float,
        cost_price: float,
        purchase_date: Optional[str],
        notes: str,
        account: str,
    ) -> dict:
        return add_position(ticker, shares, cost_price, purchase_date, notes, account)

    def remove_position(self, ticker: str, position_id: str, account: Optional[str]) -> bool:
        return remove_position(ticker, position_id, account)

    def get_recommendations(self, date: Optional[str] = None, limit: int = DEFAULT_PAGE_SIZE, offset: int = 0) -> dict:
        return get_recommendations(date, limit, offset)

    def get_recommendation(self, date: str, ticker: str) -> Optional[dict]:
        return get_recommendation(date, ticker)

    def save_recommendation(self, date: str, ticker: str, data: dict):
        save_recommendation(date, ticker, data)


def create_legacy_portfolio_gateway() -> LegacyPortfolioGateway:
    """Create a gateway instance for service-layer migration."""
    return LegacyPortfolioGateway()
File diff suppressed because it is too large
@@ -0,0 +1,20 @@
from .analysis_service import AnalysisService
from .job_service import JobService
from .migration_flags import MigrationFlags, load_migration_flags
from .request_context import RequestContext, build_request_context, clone_request_context
from .result_store import ResultStore
from .task_command_service import TaskCommandService
from .task_query_service import TaskQueryService

__all__ = [
    "AnalysisService",
    "JobService",
    "MigrationFlags",
    "RequestContext",
    "ResultStore",
    "TaskCommandService",
    "TaskQueryService",
    "build_request_context",
    "clone_request_context",
    "load_migration_flags",
]
File diff suppressed because it is too large
@@ -0,0 +1,889 @@
from __future__ import annotations

import asyncio
import json
import os
import tempfile
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Awaitable, Callable, Optional, Protocol

from .request_context import (
    CONTRACT_VERSION,
    DEFAULT_EXECUTOR_TYPE,
    RequestContext,
)

StageCallback = Callable[[str], Awaitable[None]]
ProcessRegistry = Callable[[str, asyncio.subprocess.Process | None], None]

LEGACY_ANALYSIS_SCRIPT_TEMPLATE = """
import json
import os
import sys
import threading
import time
from pathlib import Path

ticker = sys.argv[1]
date = sys.argv[2]
repo_root = sys.argv[3]

sys.path.insert(0, repo_root)

import py_mini_racer
sys.modules["mini_racer"] = py_mini_racer

from orchestrator.config import OrchestratorConfig
from orchestrator.orchestrator import TradingOrchestrator
from tradingagents.default_config import get_default_config, normalize_runtime_llm_config


def _provider_api_key(provider: str):
    provider = str(provider or "").lower()
    if os.environ.get("TRADINGAGENTS_PROVIDER_API_KEY"):
        return os.environ["TRADINGAGENTS_PROVIDER_API_KEY"]

    env_names = {
        "anthropic": ("ANTHROPIC_API_KEY", "MINIMAX_API_KEY"),
        "openai": ("OPENAI_API_KEY",),
        "openrouter": ("OPENROUTER_API_KEY",),
        "xai": ("XAI_API_KEY",),
        "google": ("GOOGLE_API_KEY",),
    }.get(provider, tuple())

    for env_name in env_names:
        value = os.environ.get(env_name)
        if value:
            return value
    return None


trading_config = get_default_config()
trading_config["project_dir"] = os.path.join(repo_root, "tradingagents")
trading_config["results_dir"] = os.path.join(repo_root, "results")
trading_config["max_debate_rounds"] = 1
trading_config["max_risk_discuss_rounds"] = 1
if os.environ.get("TRADINGAGENTS_LLM_PROVIDER"):
    trading_config["llm_provider"] = os.environ["TRADINGAGENTS_LLM_PROVIDER"]
elif os.environ.get("ANTHROPIC_BASE_URL"):
    trading_config["llm_provider"] = "anthropic"
elif os.environ.get("OPENAI_BASE_URL"):
    trading_config["llm_provider"] = "openai"
if os.environ.get("TRADINGAGENTS_BACKEND_URL"):
    trading_config["backend_url"] = os.environ["TRADINGAGENTS_BACKEND_URL"]
elif os.environ.get("ANTHROPIC_BASE_URL"):
    trading_config["backend_url"] = os.environ["ANTHROPIC_BASE_URL"]
elif os.environ.get("OPENAI_BASE_URL"):
    trading_config["backend_url"] = os.environ["OPENAI_BASE_URL"]
if os.environ.get("TRADINGAGENTS_MODEL"):
    trading_config["deep_think_llm"] = os.environ["TRADINGAGENTS_MODEL"]
    trading_config["quick_think_llm"] = os.environ["TRADINGAGENTS_MODEL"]
if os.environ.get("TRADINGAGENTS_DEEP_MODEL"):
    trading_config["deep_think_llm"] = os.environ["TRADINGAGENTS_DEEP_MODEL"]
if os.environ.get("TRADINGAGENTS_QUICK_MODEL"):
    trading_config["quick_think_llm"] = os.environ["TRADINGAGENTS_QUICK_MODEL"]
if os.environ.get("TRADINGAGENTS_SELECTED_ANALYSTS"):
    trading_config["selected_analysts"] = [
        item.strip() for item in os.environ["TRADINGAGENTS_SELECTED_ANALYSTS"].split(",") if item.strip()
    ]
if os.environ.get("TRADINGAGENTS_ANALYSIS_PROMPT_STYLE"):
    trading_config["analysis_prompt_style"] = os.environ["TRADINGAGENTS_ANALYSIS_PROMPT_STYLE"]
if os.environ.get("TRADINGAGENTS_LLM_TIMEOUT"):
    trading_config["llm_timeout"] = float(os.environ["TRADINGAGENTS_LLM_TIMEOUT"])
if os.environ.get("TRADINGAGENTS_LLM_MAX_RETRIES"):
    trading_config["llm_max_retries"] = int(os.environ["TRADINGAGENTS_LLM_MAX_RETRIES"])
if os.environ.get("TRADINGAGENTS_PORTFOLIO_CONTEXT") is not None:
    trading_config["portfolio_context"] = os.environ["TRADINGAGENTS_PORTFOLIO_CONTEXT"]
if os.environ.get("TRADINGAGENTS_PEER_CONTEXT") is not None:
    trading_config["peer_context"] = os.environ["TRADINGAGENTS_PEER_CONTEXT"]
if os.environ.get("TRADINGAGENTS_PEER_CONTEXT_MODE") is not None:
    trading_config["peer_context_mode"] = os.environ["TRADINGAGENTS_PEER_CONTEXT_MODE"]
provider_api_key = _provider_api_key(trading_config.get("llm_provider", "anthropic"))
if provider_api_key:
    trading_config["api_key"] = provider_api_key
trading_config = normalize_runtime_llm_config(trading_config)
print(
    "CHECKPOINT:AUTH:" + json.dumps(
        {
            "provider": trading_config.get("llm_provider"),
            "backend_url": trading_config.get("backend_url"),
            "api_key_present": bool(provider_api_key),
        }
    ),
    flush=True,
)
if trading_config.get("llm_provider") != "ollama" and not provider_api_key:
    result_meta = {
        "degrade_reason_codes": ["provider_api_key_missing"],
        "data_quality": {
            "state": "provider_api_key_missing",
            "provider": trading_config.get("llm_provider"),
        },
        "source_diagnostics": {
            "llm": {
                "reason_code": "provider_api_key_missing",
            }
        },
    }
    print("RESULT_META:" + json.dumps(result_meta), file=sys.stderr, flush=True)
    print("ANALYSIS_ERROR:provider API key missing inside analysis subprocess", file=sys.stderr, flush=True)
    sys.exit(1)
print("STAGE:analysts", flush=True)
print("STAGE:research", flush=True)

config = OrchestratorConfig(
    quant_backtest_path=os.environ.get("QUANT_BACKTEST_PATH", ""),
    trading_agents_config=trading_config,
)

orchestrator = TradingOrchestrator(config)

print("STAGE:trading", flush=True)

heartbeat_interval = float(os.environ.get("TRADINGAGENTS_HEARTBEAT_SECS", "10"))
heartbeat_stop = threading.Event()
heartbeat_started_at = time.monotonic()


def _heartbeat():
    while not heartbeat_stop.wait(heartbeat_interval):
        print(
            "HEARTBEAT:" + json.dumps(
                {
                    "ticker": ticker,
                    "elapsed_seconds": round(time.monotonic() - heartbeat_started_at, 1),
                    "phase": "trading",
                }
            ),
            flush=True,
        )


heartbeat_thread = threading.Thread(target=_heartbeat, name="analysis-heartbeat", daemon=True)
heartbeat_thread.start()

try:
    result = orchestrator.get_combined_signal(ticker, date)
except Exception as exc:
    heartbeat_stop.set()
    result_meta = {
        "degrade_reason_codes": list(getattr(exc, "reason_codes", ()) or ()),
        "data_quality": getattr(exc, "data_quality", None),
        "source_diagnostics": getattr(exc, "source_diagnostics", None),
    }
    print("RESULT_META:" + json.dumps(result_meta), file=sys.stderr, flush=True)
    print("ANALYSIS_ERROR:" + str(exc), file=sys.stderr, flush=True)
    sys.exit(1)
finally:
    heartbeat_stop.set()

print("STAGE:risk", flush=True)

direction = result.direction
confidence = result.confidence
llm_sig_obj = result.llm_signal
quant_sig_obj = result.quant_signal
llm_signal = llm_sig_obj.metadata.get("rating", "HOLD") if llm_sig_obj else "HOLD"
llm_decision_structured = llm_sig_obj.metadata.get("decision_structured") if llm_sig_obj else None
if quant_sig_obj is None:
    quant_signal = "HOLD"
elif quant_sig_obj.direction == 1:
    quant_signal = "BUY" if quant_sig_obj.confidence >= 0.7 else "OVERWEIGHT"
elif quant_sig_obj.direction == -1:
    quant_signal = "SELL" if quant_sig_obj.confidence >= 0.7 else "UNDERWEIGHT"
else:
    quant_signal = "HOLD"

if direction == 1:
    signal = "BUY" if confidence >= 0.7 else "OVERWEIGHT"
elif direction == -1:
    signal = "SELL" if confidence >= 0.7 else "UNDERWEIGHT"
else:
    signal = "HOLD"

results_dir = Path(repo_root) / "results" / ticker / date
results_dir.mkdir(parents=True, exist_ok=True)

report_content = (
    "# TradingAgents 分析报告\\n\\n"
    "**股票**: " + ticker + "\\n"
    "**日期**: " + date + "\\n\\n"
    "## 最终决策\\n\\n"
    "**" + signal + "**\\n\\n"
    "## 信号详情\\n\\n"
    "- LLM 信号: " + llm_signal + "\\n"
    "- Quant 信号: " + quant_signal + "\\n"
    "- 置信度: " + f"{confidence:.1%}" + "\\n\\n"
    "## 分析摘要\\n\\n"
    "N/A\\n"
)

report_path = results_dir / "complete_report.md"
report_path.write_text(report_content)

print("STAGE:portfolio", flush=True)
signal_detail = json.dumps({
    "llm_signal": llm_signal,
    "quant_signal": quant_signal,
    "confidence": confidence,
    "llm_decision_structured": llm_decision_structured,
})
result_meta = json.dumps({
    "degrade_reason_codes": list(getattr(result, "degrade_reason_codes", ())),
    "data_quality": (result.metadata or {}).get("data_quality"),
    "source_diagnostics": (result.metadata or {}).get("source_diagnostics"),
})
print("SIGNAL_DETAIL:" + signal_detail, flush=True)
print("RESULT_META:" + result_meta, flush=True)
print("ANALYSIS_COMPLETE:" + signal, flush=True)
"""

def _rating_to_direction(rating: Optional[str]) -> int:
    if rating in {"BUY", "OVERWEIGHT"}:
        return 1
    if rating in {"SELL", "UNDERWEIGHT"}:
        return -1
    return 0

@dataclass(frozen=True)
class AnalysisExecutionOutput:
    decision: str
    quant_signal: Optional[str]
    llm_signal: Optional[str]
    confidence: Optional[float]
    report_path: Optional[str] = None
    llm_decision_structured: Optional[dict[str, Any]] = None
    degrade_reason_codes: tuple[str, ...] = ()
    data_quality: Optional[dict] = None
    source_diagnostics: Optional[dict] = None
    observation: Optional[dict[str, Any]] = None
    contract_version: str = CONTRACT_VERSION
    executor_type: str = DEFAULT_EXECUTOR_TYPE

    def to_result_contract(
        self,
        *,
        task_id: str,
        ticker: str,
        date: str,
        created_at: str,
        elapsed_seconds: int,
        current_stage: str = "portfolio",
    ) -> dict:
        degraded = (
            bool(self.degrade_reason_codes)
            or bool(self.data_quality)
            or self.quant_signal is None
            or self.llm_signal is None
        )
        return {
            "contract_version": self.contract_version,
            "task_id": task_id,
            "ticker": ticker,
            "date": date,
            "status": "degraded_success" if degraded else "completed",
            "progress": 100,
            "current_stage": current_stage,
            "created_at": created_at,
            "elapsed_seconds": elapsed_seconds,
            "elapsed": elapsed_seconds,
            "degradation": {
                "degraded": degraded,
                "reason_codes": list(self.degrade_reason_codes),
                "source_diagnostics": self.source_diagnostics or {},
            },
            "data_quality": self.data_quality,
            "result": {
                "decision": self.decision,
                "confidence": self.confidence,
                "signals": {
                    "merged": {
                        "direction": _rating_to_direction(self.decision),
                        "rating": self.decision,
                    },
                    "quant": {
                        "direction": _rating_to_direction(self.quant_signal),
                        "rating": self.quant_signal,
                        "available": self.quant_signal is not None,
                    },
                    "llm": {
                        "direction": _rating_to_direction(self.llm_signal),
                        "rating": self.llm_signal,
                        "available": self.llm_signal is not None,
                        "structured": self.llm_decision_structured,
                    },
                },
                "degraded": degraded,
                "report": {
                    "path": self.report_path,
                    "available": bool(self.report_path),
                },
            },
            "error": None,
        }

class AnalysisExecutorError(RuntimeError):
    def __init__(
        self,
        message: str,
        *,
        code: str = "analysis_failed",
        retryable: bool = False,
        degrade_reason_codes: tuple[str, ...] = (),
        data_quality: Optional[dict] = None,
        source_diagnostics: Optional[dict] = None,
        observation: Optional[dict[str, Any]] = None,
    ):
        super().__init__(message)
        self.code = code
        self.retryable = retryable
        self.degrade_reason_codes = degrade_reason_codes
        self.data_quality = data_quality
        self.source_diagnostics = source_diagnostics
        self.observation = observation

class AnalysisExecutor(Protocol):
    async def execute(
        self,
        *,
        task_id: str,
        ticker: str,
        date: str,
        request_context: RequestContext,
        on_stage: Optional[StageCallback] = None,
    ) -> AnalysisExecutionOutput: ...

class LegacySubprocessAnalysisExecutor:
    """Run the legacy dashboard analysis script behind a stable executor contract."""

    def __init__(
        self,
        *,
        analysis_python: Path,
        repo_root: Path,
        api_key_resolver: Callable[..., Optional[str]],
        process_registry: Optional[ProcessRegistry] = None,
        script_template: str = LEGACY_ANALYSIS_SCRIPT_TEMPLATE,
        stdout_timeout_secs: float = 300.0,
    ):
        self.analysis_python = analysis_python
        self.repo_root = repo_root
        self.api_key_resolver = api_key_resolver
        self.process_registry = process_registry
        self.script_template = script_template
        self.stdout_timeout_secs = stdout_timeout_secs
        self.default_total_timeout_secs = max(stdout_timeout_secs * 6.0, 900.0)

    async def execute(
        self,
        *,
        task_id: str,
        ticker: str,
        date: str,
        request_context: RequestContext,
        on_stage: Optional[StageCallback] = None,
    ) -> AnalysisExecutionOutput:
        llm_provider = (request_context.llm_provider or "anthropic").lower()
        analysis_api_key = request_context.provider_api_key or self._resolve_provider_api_key(llm_provider)
        if llm_provider != "ollama" and not analysis_api_key:
            raise AnalysisExecutorError(
                f"{llm_provider} provider API key not configured",
                code="analysis_failed",
                observation=self._build_observation(
                    request_context=request_context,
                    ticker=ticker,
                    date=date,
                    status="failed",
                    observation_code="provider_api_key_missing",
                    stage=None,
                    stdout_timeout_secs=float((request_context.metadata or {}).get("stdout_timeout_secs", self.stdout_timeout_secs)),
                    returncode=None,
                    markers={},
                    message=f"{llm_provider} provider API key not configured",
                ),
            )
        runtime_metadata = dict(request_context.metadata or {})
        stdout_timeout_secs = float(runtime_metadata.get("stdout_timeout_secs", self.stdout_timeout_secs))
        total_timeout_secs = float(
            runtime_metadata.get("total_timeout_secs", self.default_total_timeout_secs)
        )

        script_path: Optional[Path] = None
        proc: asyncio.subprocess.Process | None = None
        last_stage: Optional[str] = None
        try:
            fd, script_path_str = tempfile.mkstemp(suffix=".py", prefix=f"analysis_{task_id}_")
            script_path = Path(script_path_str)
            os.chmod(script_path, 0o600)
            with os.fdopen(fd, "w", encoding="utf-8") as handle:
                handle.write(self.script_template)

            clean_env = {
                key: value
                for key, value in os.environ.items()
                if not key.startswith(("PYTHON", "CONDA", "VIRTUAL"))
            }
            for env_name in (
                "ANTHROPIC_API_KEY",
                "MINIMAX_API_KEY",
                "OPENAI_API_KEY",
                "OPENROUTER_API_KEY",
                "XAI_API_KEY",
                "GOOGLE_API_KEY",
            ):
                clean_env.pop(env_name, None)
            clean_env["TRADINGAGENTS_LLM_PROVIDER"] = llm_provider
            if request_context.backend_url:
                clean_env["TRADINGAGENTS_BACKEND_URL"] = request_context.backend_url
            if request_context.deep_think_llm:
                clean_env["TRADINGAGENTS_DEEP_MODEL"] = request_context.deep_think_llm
            if request_context.quick_think_llm:
                clean_env["TRADINGAGENTS_QUICK_MODEL"] = request_context.quick_think_llm
            if request_context.selected_analysts:
                clean_env["TRADINGAGENTS_SELECTED_ANALYSTS"] = ",".join(request_context.selected_analysts)
            if request_context.analysis_prompt_style:
                clean_env["TRADINGAGENTS_ANALYSIS_PROMPT_STYLE"] = request_context.analysis_prompt_style
            if request_context.llm_timeout is not None:
                clean_env["TRADINGAGENTS_LLM_TIMEOUT"] = str(request_context.llm_timeout)
            if request_context.llm_max_retries is not None:
                clean_env["TRADINGAGENTS_LLM_MAX_RETRIES"] = str(request_context.llm_max_retries)
            if runtime_metadata.get("portfolio_context") is not None:
                clean_env["TRADINGAGENTS_PORTFOLIO_CONTEXT"] = str(
                    runtime_metadata.get("portfolio_context") or ""
                )
            if runtime_metadata.get("peer_context") is not None:
                clean_env["TRADINGAGENTS_PEER_CONTEXT"] = str(
                    runtime_metadata.get("peer_context") or ""
                )
            if runtime_metadata.get("peer_context_mode") is not None:
                clean_env["TRADINGAGENTS_PEER_CONTEXT_MODE"] = str(
                    runtime_metadata.get("peer_context_mode") or "UNSPECIFIED"
                )
            clean_env["TRADINGAGENTS_PROVIDER_API_KEY"] = analysis_api_key or ""
            clean_env["TRADINGAGENTS_HEARTBEAT_SECS"] = str(
                float(runtime_metadata.get("heartbeat_interval_secs", 10.0))
            )
            for env_name in self._provider_api_env_names(llm_provider):
                if analysis_api_key:
                    clean_env[env_name] = analysis_api_key

            proc = await asyncio.create_subprocess_exec(
                str(self.analysis_python),
                "-u",
                str(script_path),
                ticker,
                date,
                str(self.repo_root),
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE,
                env=clean_env,
            )
            if self.process_registry is not None:
                self.process_registry(task_id, proc)

            stdout_lines: list[str] = []
            started_at = asyncio.get_running_loop().time()
            assert proc.stdout is not None
            while True:
                elapsed = asyncio.get_running_loop().time() - started_at
                remaining_total = total_timeout_secs - elapsed
                if remaining_total <= 0:
                    await self._terminate_process(proc)
                    observation = self._build_observation(
                        request_context=request_context,
|
||||
ticker=ticker,
|
||||
date=date,
|
||||
status="failed",
|
||||
observation_code="subprocess_total_timeout",
|
||||
stage=last_stage,
|
||||
stdout_timeout_secs=stdout_timeout_secs,
|
||||
total_timeout_secs=total_timeout_secs,
|
||||
returncode=getattr(proc, "returncode", None),
|
||||
markers=self._collect_markers(stdout_lines),
|
||||
message=f"analysis subprocess exceeded total timeout of {total_timeout_secs:g}s",
|
||||
stdout_excerpt=stdout_lines[-8:],
|
||||
)
|
||||
raise AnalysisExecutorError(
|
||||
f"analysis subprocess exceeded total timeout of {total_timeout_secs:g}s",
|
||||
retryable=True,
|
||||
observation=observation,
|
||||
)
|
||||
try:
|
||||
line_bytes = await asyncio.wait_for(
|
||||
proc.stdout.readline(),
|
||||
timeout=min(stdout_timeout_secs, remaining_total),
|
||||
)
|
||||
except asyncio.TimeoutError as exc:
|
||||
await self._terminate_process(proc)
|
||||
timed_out_total = (
|
||||
asyncio.get_running_loop().time() - started_at
|
||||
) >= total_timeout_secs
|
||||
observation_code = (
|
||||
"subprocess_total_timeout"
|
||||
if timed_out_total
|
||||
else "subprocess_stdout_timeout"
|
||||
)
|
||||
message = (
|
||||
f"analysis subprocess exceeded total timeout of {total_timeout_secs:g}s"
|
||||
if timed_out_total
|
||||
else f"analysis subprocess timed out after {stdout_timeout_secs:g}s"
|
||||
)
|
||||
observation = self._build_observation(
|
||||
request_context=request_context,
|
||||
ticker=ticker,
|
||||
date=date,
|
||||
status="failed",
|
||||
observation_code=observation_code,
|
||||
stage=last_stage,
|
||||
stdout_timeout_secs=stdout_timeout_secs,
|
||||
total_timeout_secs=total_timeout_secs,
|
||||
returncode=getattr(proc, "returncode", None),
|
||||
markers=self._collect_markers(stdout_lines),
|
||||
message=message,
|
||||
stdout_excerpt=stdout_lines[-8:],
|
||||
)
|
||||
raise AnalysisExecutorError(
|
||||
message,
|
||||
retryable=True,
|
||||
observation=observation,
|
||||
) from exc
|
||||
if not line_bytes:
|
||||
break
|
||||
line = line_bytes.decode(errors="replace").rstrip()
|
||||
stdout_lines.append(line)
|
||||
if on_stage is not None and line.startswith("STAGE:"):
|
||||
last_stage = line.split(":", 1)[1].strip()
|
||||
await on_stage(last_stage)
|
||||
|
||||
await proc.wait()
|
||||
stderr_bytes = await proc.stderr.read() if proc.stderr is not None else b""
|
||||
stderr_lines = stderr_bytes.decode(errors="replace").splitlines() if stderr_bytes else []
|
||||
if proc.returncode != 0:
|
||||
failure_meta = self._parse_failure_metadata(stdout_lines, stderr_lines)
|
||||
message = self._extract_error_message(stderr_lines) or (stderr_bytes.decode(errors="replace")[-1000:] if stderr_bytes else f"exit {proc.returncode}")
|
||||
observation = self._build_observation(
|
||||
request_context=request_context,
|
||||
ticker=ticker,
|
||||
date=date,
|
||||
status="failed",
|
||||
observation_code="analysis_protocol_failed" if failure_meta is None else "analysis_failed",
|
||||
stage=last_stage,
|
||||
stdout_timeout_secs=stdout_timeout_secs,
|
||||
total_timeout_secs=total_timeout_secs,
|
||||
returncode=proc.returncode,
|
||||
markers=self._collect_markers(stdout_lines),
|
||||
message=message,
|
||||
data_quality=(failure_meta or {}).get("data_quality"),
|
||||
source_diagnostics=(failure_meta or {}).get("source_diagnostics"),
|
||||
stdout_excerpt=stdout_lines[-8:],
|
||||
stderr_excerpt=stderr_lines[-8:],
|
||||
)
|
||||
if failure_meta is None:
|
||||
raise AnalysisExecutorError(
|
||||
"analysis subprocess failed without required markers: RESULT_META",
|
||||
code="analysis_protocol_failed",
|
||||
observation=observation,
|
||||
)
|
||||
raise AnalysisExecutorError(
|
||||
message,
|
||||
code="analysis_failed",
|
||||
degrade_reason_codes=failure_meta["degrade_reason_codes"],
|
||||
data_quality=failure_meta["data_quality"],
|
||||
source_diagnostics=failure_meta["source_diagnostics"],
|
||||
observation=observation,
|
||||
)
|
||||
|
||||
return self._parse_output(
|
||||
stdout_lines=stdout_lines,
|
||||
stderr_lines=stderr_lines,
|
||||
ticker=ticker,
|
||||
date=date,
|
||||
request_context=request_context,
|
||||
contract_version=request_context.contract_version,
|
||||
executor_type=request_context.executor_type,
|
||||
stdout_timeout_secs=stdout_timeout_secs,
|
||||
total_timeout_secs=total_timeout_secs,
|
||||
last_stage=last_stage,
|
||||
)
|
||||
finally:
|
||||
if self.process_registry is not None:
|
||||
self.process_registry(task_id, None)
|
||||
if script_path is not None:
|
||||
try:
|
||||
script_path.unlink()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
@staticmethod
|
||||
async def _terminate_process(proc: asyncio.subprocess.Process) -> None:
|
||||
if proc.returncode is not None:
|
||||
return
|
||||
try:
|
||||
proc.kill()
|
||||
except ProcessLookupError:
|
||||
return
|
||||
await proc.wait()
|
||||
|
||||
def _resolve_provider_api_key(self, provider: str) -> Optional[str]:
|
||||
try:
|
||||
return self.api_key_resolver(provider) # type: ignore[misc]
|
||||
except TypeError:
|
||||
return self.api_key_resolver()
|
||||
|
||||
@staticmethod
|
||||
def _provider_api_env_names(provider: str) -> tuple[str, ...]:
|
||||
return {
|
||||
"anthropic": ("ANTHROPIC_API_KEY", "MINIMAX_API_KEY"),
|
||||
"openai": ("OPENAI_API_KEY",),
|
||||
"openrouter": ("OPENROUTER_API_KEY",),
|
||||
"xai": ("XAI_API_KEY",),
|
||||
"google": ("GOOGLE_API_KEY",),
|
||||
"ollama": tuple(),
|
||||
}.get(provider, tuple())
|
||||
|
||||
@staticmethod
|
||||
def _parse_failure_metadata(stdout_lines: list[str], stderr_lines: list[str]) -> Optional[dict]:
|
||||
for line in [*stdout_lines, *stderr_lines]:
|
||||
if line.startswith("RESULT_META:"):
|
||||
try:
|
||||
detail = json.loads(line.split(":", 1)[1].strip())
|
||||
except Exception as exc:
|
||||
raise AnalysisExecutorError(
|
||||
"failed to parse RESULT_META payload",
|
||||
code="analysis_protocol_failed",
|
||||
) from exc
|
||||
return {
|
||||
"degrade_reason_codes": tuple(detail.get("degrade_reason_codes") or ()),
|
||||
"data_quality": detail.get("data_quality"),
|
||||
"source_diagnostics": detail.get("source_diagnostics"),
|
||||
}
|
||||
return None
|
||||
|
||||
@staticmethod
|
||||
def _extract_error_message(stderr_lines: list[str]) -> Optional[str]:
|
||||
for line in stderr_lines:
|
||||
if line.startswith("ANALYSIS_ERROR:"):
|
||||
return line.split(":", 1)[1].strip()
|
||||
return None
|
||||
|
||||
@staticmethod
|
||||
def _parse_output(
|
||||
*,
|
||||
stdout_lines: list[str],
|
||||
stderr_lines: list[str],
|
||||
ticker: str,
|
||||
date: str,
|
||||
request_context: RequestContext,
|
||||
contract_version: str,
|
||||
executor_type: str,
|
||||
stdout_timeout_secs: float,
|
||||
total_timeout_secs: float,
|
||||
last_stage: Optional[str],
|
||||
) -> AnalysisExecutionOutput:
|
||||
decision: Optional[str] = None
|
||||
quant_signal = None
|
||||
llm_signal = None
|
||||
confidence = None
|
||||
llm_decision_structured = None
|
||||
degrade_reason_codes: tuple[str, ...] = ()
|
||||
data_quality = None
|
||||
source_diagnostics = None
|
||||
seen_signal_detail = False
|
||||
seen_result_meta = False
|
||||
seen_complete = False
|
||||
|
||||
for line in stdout_lines:
|
||||
if line.startswith("SIGNAL_DETAIL:"):
|
||||
seen_signal_detail = True
|
||||
try:
|
||||
detail = json.loads(line.split(":", 1)[1].strip())
|
||||
except Exception as exc:
|
||||
raise AnalysisExecutorError(
|
||||
"failed to parse SIGNAL_DETAIL payload",
|
||||
observation=LegacySubprocessAnalysisExecutor._build_observation(
|
||||
request_context=request_context,
|
||||
ticker=ticker,
|
||||
date=date,
|
||||
status="failed",
|
||||
observation_code="signal_detail_parse_failed",
|
||||
stage=last_stage,
|
||||
stdout_timeout_secs=stdout_timeout_secs,
|
||||
total_timeout_secs=total_timeout_secs,
|
||||
returncode=0,
|
||||
markers=LegacySubprocessAnalysisExecutor._collect_markers(stdout_lines),
|
||||
message="failed to parse SIGNAL_DETAIL payload",
|
||||
stdout_excerpt=stdout_lines[-8:],
|
||||
stderr_excerpt=stderr_lines[-8:],
|
||||
),
|
||||
) from exc
|
||||
quant_signal = detail.get("quant_signal")
|
||||
llm_signal = detail.get("llm_signal")
|
||||
confidence = detail.get("confidence")
|
||||
llm_decision_structured = detail.get("llm_decision_structured")
|
||||
elif line.startswith("RESULT_META:"):
|
||||
seen_result_meta = True
|
||||
try:
|
||||
detail = json.loads(line.split(":", 1)[1].strip())
|
||||
except Exception as exc:
|
||||
raise AnalysisExecutorError(
|
||||
"failed to parse RESULT_META payload",
|
||||
observation=LegacySubprocessAnalysisExecutor._build_observation(
|
||||
request_context=request_context,
|
||||
ticker=ticker,
|
||||
date=date,
|
||||
status="failed",
|
||||
observation_code="result_meta_parse_failed",
|
||||
stage=last_stage,
|
||||
stdout_timeout_secs=stdout_timeout_secs,
|
||||
total_timeout_secs=total_timeout_secs,
|
||||
returncode=0,
|
||||
markers=LegacySubprocessAnalysisExecutor._collect_markers(stdout_lines),
|
||||
message="failed to parse RESULT_META payload",
|
||||
stdout_excerpt=stdout_lines[-8:],
|
||||
stderr_excerpt=stderr_lines[-8:],
|
||||
),
|
||||
) from exc
|
||||
degrade_reason_codes = tuple(detail.get("degrade_reason_codes") or ())
|
||||
data_quality = detail.get("data_quality")
|
||||
source_diagnostics = detail.get("source_diagnostics")
|
||||
elif line.startswith("ANALYSIS_COMPLETE:"):
|
||||
seen_complete = True
|
||||
decision = line.split(":", 1)[1].strip()
|
||||
|
||||
missing_markers = []
|
||||
if not seen_signal_detail:
|
||||
missing_markers.append("SIGNAL_DETAIL")
|
||||
if not seen_result_meta:
|
||||
missing_markers.append("RESULT_META")
|
||||
if not seen_complete:
|
||||
missing_markers.append("ANALYSIS_COMPLETE")
|
||||
if missing_markers:
|
||||
observation = LegacySubprocessAnalysisExecutor._build_observation(
|
||||
request_context=request_context,
|
||||
ticker=ticker,
|
||||
date=date,
|
||||
status="failed",
|
||||
observation_code="analysis_protocol_failed",
|
||||
stage=last_stage,
|
||||
stdout_timeout_secs=stdout_timeout_secs,
|
||||
total_timeout_secs=total_timeout_secs,
|
||||
returncode=0,
|
||||
markers={
|
||||
"signal_detail": seen_signal_detail,
|
||||
"result_meta": seen_result_meta,
|
||||
"analysis_complete": seen_complete,
|
||||
},
|
||||
message="analysis subprocess completed without required markers: " + ", ".join(missing_markers),
|
||||
data_quality=data_quality,
|
||||
source_diagnostics=source_diagnostics,
|
||||
stdout_excerpt=stdout_lines[-8:],
|
||||
stderr_excerpt=stderr_lines[-8:],
|
||||
)
|
||||
raise AnalysisExecutorError(
|
||||
"analysis subprocess completed without required markers: "
|
||||
+ ", ".join(missing_markers),
|
||||
observation=observation,
|
||||
)
|
||||
|
||||
report_path = str(Path("results") / ticker / date / "complete_report.md")
|
||||
return AnalysisExecutionOutput(
|
||||
decision=decision or "HOLD",
|
||||
quant_signal=quant_signal,
|
||||
llm_signal=llm_signal,
|
||||
confidence=confidence,
|
||||
report_path=report_path,
|
||||
llm_decision_structured=llm_decision_structured,
|
||||
degrade_reason_codes=degrade_reason_codes,
|
||||
data_quality=data_quality,
|
||||
source_diagnostics=source_diagnostics,
|
||||
observation=LegacySubprocessAnalysisExecutor._build_observation(
|
||||
request_context=request_context,
|
||||
ticker=ticker,
|
||||
date=date,
|
||||
status="completed",
|
||||
observation_code="completed",
|
||||
stage=last_stage,
|
||||
stdout_timeout_secs=stdout_timeout_secs,
|
||||
total_timeout_secs=total_timeout_secs,
|
||||
returncode=0,
|
||||
markers=LegacySubprocessAnalysisExecutor._collect_markers(stdout_lines),
|
||||
data_quality=data_quality,
|
||||
source_diagnostics=source_diagnostics,
|
||||
stdout_excerpt=stdout_lines[-8:],
|
||||
stderr_excerpt=stderr_lines[-8:],
|
||||
),
|
||||
contract_version=contract_version,
|
||||
executor_type=executor_type,
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def _collect_markers(stdout_lines: list[str]) -> dict[str, bool]:
|
||||
return {
|
||||
"signal_detail": any(line.startswith("SIGNAL_DETAIL:") for line in stdout_lines),
|
||||
"result_meta": any(line.startswith("RESULT_META:") for line in stdout_lines),
|
||||
"analysis_complete": any(line.startswith("ANALYSIS_COMPLETE:") for line in stdout_lines),
|
||||
"heartbeat": any(line.startswith("HEARTBEAT:") for line in stdout_lines),
|
||||
"auth_checkpoint": any(line.startswith("CHECKPOINT:AUTH:") for line in stdout_lines),
|
||||
}
|
||||
|
||||
@staticmethod
|
||||
def _build_observation(
|
||||
*,
|
||||
request_context: RequestContext,
|
||||
ticker: str,
|
||||
date: str,
|
||||
status: str,
|
||||
observation_code: str,
|
||||
stage: Optional[str],
|
||||
stdout_timeout_secs: float,
|
||||
total_timeout_secs: Optional[float],
|
||||
returncode: Optional[int],
|
||||
markers: dict[str, bool],
|
||||
message: Optional[str] = None,
|
||||
data_quality: Optional[dict] = None,
|
||||
source_diagnostics: Optional[dict] = None,
|
||||
stdout_excerpt: Optional[list[str]] = None,
|
||||
stderr_excerpt: Optional[list[str]] = None,
|
||||
) -> dict[str, Any]:
|
||||
metadata = dict(request_context.metadata or {})
|
||||
return {
|
||||
"status": status,
|
||||
"observation_code": observation_code,
|
||||
"request_id": request_context.request_id,
|
||||
"ticker": ticker,
|
||||
"date": date,
|
||||
"provider": request_context.llm_provider,
|
||||
"backend_url": request_context.backend_url,
|
||||
"model": request_context.deep_think_llm,
|
||||
"selected_analysts": list(request_context.selected_analysts),
|
||||
"analysis_prompt_style": request_context.analysis_prompt_style,
|
||||
"attempt_index": metadata.get("attempt_index", 0),
|
||||
"attempt_mode": metadata.get("attempt_mode", "baseline"),
|
||||
"probe_mode": metadata.get("probe_mode", "none"),
|
||||
"stdout_timeout_secs": stdout_timeout_secs,
|
||||
"total_timeout_secs": total_timeout_secs,
|
||||
"cost_cap": metadata.get("cost_cap"),
|
||||
"stage": stage,
|
||||
"returncode": returncode,
|
||||
"markers": markers,
|
||||
"message": message,
|
||||
"data_quality": data_quality,
|
||||
"source_diagnostics": source_diagnostics,
|
||||
"stdout_excerpt": list(stdout_excerpt or []),
|
||||
"stderr_excerpt": list(stderr_excerpt or []),
|
||||
"evidence_id": metadata.get("evidence_id"),
|
||||
}
|
||||
|
||||
|
||||
class DirectAnalysisExecutor:
|
||||
"""Placeholder for a future in-process executor implementation."""
|
||||
|
||||
async def execute(
|
||||
self,
|
||||
*,
|
||||
task_id: str,
|
||||
ticker: str,
|
||||
date: str,
|
||||
request_context: RequestContext,
|
||||
on_stage: Optional[StageCallback] = None,
|
||||
) -> AnalysisExecutionOutput:
|
||||
del task_id, ticker, date, request_context, on_stage
|
||||
raise NotImplementedError("DirectAnalysisExecutor is not implemented in phase 1")
|
||||
|
|
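The executor above communicates with the analysis subprocess through a line-oriented stdout protocol (`STAGE:`, `SIGNAL_DETAIL:`, `RESULT_META:`, `ANALYSIS_COMPLETE:`, `HEARTBEAT:`, `CHECKPOINT:AUTH:` prefixes). A minimal standalone sketch of that marker scan, using a hypothetical transcript (payload values are illustrative, not from the source):

```python
import json

# Marker prefixes mirrored from LegacySubprocessAnalysisExecutor._collect_markers.
MARKERS = {
    "signal_detail": "SIGNAL_DETAIL:",
    "result_meta": "RESULT_META:",
    "analysis_complete": "ANALYSIS_COMPLETE:",
    "heartbeat": "HEARTBEAT:",
    "auth_checkpoint": "CHECKPOINT:AUTH:",
}


def collect_markers(stdout_lines: list[str]) -> dict[str, bool]:
    # A marker is "seen" if any stdout line starts with its prefix.
    return {
        name: any(line.startswith(prefix) for line in stdout_lines)
        for name, prefix in MARKERS.items()
    }


# Hypothetical subprocess transcript.
lines = [
    "STAGE: analysts",
    'SIGNAL_DETAIL: {"quant_signal": "BUY", "llm_signal": "HOLD", "confidence": 0.6}',
    'RESULT_META: {"degrade_reason_codes": [], "data_quality": null}',
    "ANALYSIS_COMPLETE: HOLD",
]
markers = collect_markers(lines)
# split(":", 1) keeps colons inside the JSON payload intact.
decision = next(
    line.split(":", 1)[1].strip()
    for line in lines
    if line.startswith("ANALYSIS_COMPLETE:")
)
detail = json.loads(
    next(l for l in lines if l.startswith("SIGNAL_DETAIL:")).split(":", 1)[1]
)
print(markers["analysis_complete"], decision, detail["confidence"])
```

Note how `_parse_output` treats any of the three required markers being absent as a protocol failure rather than silently defaulting.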
@@ -0,0 +1,366 @@
from __future__ import annotations

import asyncio
from datetime import datetime
from typing import Any, Callable


CONTRACT_VERSION = "v1alpha1"
DEFAULT_EXECUTOR_TYPE = "legacy_subprocess"


class JobService:
    """Application-layer job state orchestrator with contract-first public projections."""

    def __init__(
        self,
        *,
        task_results: dict[str, dict],
        analysis_tasks: dict[str, asyncio.Task],
        processes: dict[str, Any],
        persist_task: Callable[[str, dict], None],
        delete_task: Callable[[str], None],
    ):
        self.task_results = task_results
        self.analysis_tasks = analysis_tasks
        self.processes = processes
        self.persist_task = persist_task
        self.delete_task = delete_task

    def restore_task_results(self, restored: dict[str, dict]) -> None:
        self.task_results.update(
            {
                task_id: self._normalize_task_state(task_id, state)
                for task_id, state in restored.items()
            }
        )

    def create_analysis_job(
        self,
        *,
        task_id: str,
        ticker: str,
        date: str,
        request_id: str | None = None,
        executor_type: str = DEFAULT_EXECUTOR_TYPE,
        contract_version: str = CONTRACT_VERSION,
        result_ref: str | None = None,
    ) -> dict:
        state = self._normalize_task_state(task_id, {
            "task_id": task_id,
            "ticker": ticker,
            "date": date,
            "status": "running",
            "progress": 0,
            "current_stage": "analysts",
            "created_at": datetime.now().isoformat(),
            "elapsed_seconds": 0,
            "elapsed": 0,
            "stages": [
                {
                    "name": stage_name,
                    "status": "running" if index == 0 else "pending",
                    "completed_at": None,
                }
                for index, stage_name in enumerate(
                    ["analysts", "research", "trading", "risk", "portfolio"]
                )
            ],
            "logs": [],
            "result": None,
            "error": None,
            "request_id": request_id,
            "executor_type": executor_type,
            "contract_version": contract_version,
            "result_ref": result_ref,
            "degradation_summary": None,
            "data_quality_summary": None,
            "evidence_summary": None,
            "tentative_classification": None,
            "budget_state": {},
            "compat": {},
        })
        self.task_results[task_id] = state
        self.processes.setdefault(task_id, None)
        return state

    def create_portfolio_job(
        self,
        *,
        task_id: str,
        total: int,
        request_id: str | None = None,
        executor_type: str = DEFAULT_EXECUTOR_TYPE,
        contract_version: str = CONTRACT_VERSION,
        result_ref: str | None = None,
    ) -> dict:
        state = self._normalize_task_state(task_id, {
            "task_id": task_id,
            "type": "portfolio",
            "status": "running",
            "total": total,
            "completed": 0,
            "failed": 0,
            "current_ticker": None,
            "results": [],
            "error": None,
            "created_at": datetime.now().isoformat(),
            "request_id": request_id,
            "executor_type": executor_type,
            "contract_version": contract_version,
            "result_ref": result_ref,
            "degradation_summary": None,
            "data_quality_summary": None,
            "evidence_summary": None,
            "tentative_classification": None,
            "budget_state": {},
            "compat": {},
        })
        self.task_results[task_id] = state
        self.processes.setdefault(task_id, None)
        return state

    def attach_result_contract(
        self,
        task_id: str,
        *,
        result_ref: str,
        contract_version: str = CONTRACT_VERSION,
        executor_type: str | None = None,
    ) -> dict:
        state = self.task_results[task_id]
        state["result_ref"] = result_ref
        state["contract_version"] = contract_version or state.get("contract_version") or CONTRACT_VERSION
        if executor_type:
            state["executor_type"] = executor_type
        return state

    def complete_analysis_job(
        self,
        task_id: str,
        *,
        contract: dict,
        result_ref: str,
        executor_type: str | None = None,
    ) -> dict:
        state = self.task_results[task_id]
        result = dict(contract.get("result") or {})
        signals = result.get("signals") or {}
        quant = signals.get("quant") or {}
        llm = signals.get("llm") or {}

        state["status"] = contract.get("status", "completed")
        state["progress"] = contract.get("progress", 100)
        state["current_stage"] = contract.get("current_stage", state.get("current_stage"))
        state["elapsed_seconds"] = contract.get("elapsed_seconds", state.get("elapsed_seconds", 0))
        state["elapsed"] = contract.get("elapsed", state["elapsed_seconds"])
        state["result"] = result
        state["error"] = contract.get("error")
        state["contract_version"] = contract.get("contract_version", state.get("contract_version"))
        state["degradation_summary"] = contract.get("degradation") or self._build_degradation_summary(result)
        state["data_quality_summary"] = contract.get("data_quality")
        state["evidence_summary"] = contract.get("evidence")
        state["tentative_classification"] = contract.get("tentative_classification")
        state["budget_state"] = contract.get("budget_state") or state.get("budget_state") or {}
        state["compat"] = {
            "decision": result.get("decision"),
            "quant_signal": quant.get("rating"),
            "llm_signal": llm.get("rating"),
            "confidence": result.get("confidence"),
        }
        self.attach_result_contract(
            task_id,
            result_ref=result_ref,
            contract_version=state["contract_version"],
            executor_type=executor_type,
        )
        self.persist_task(task_id, state)
        return state

    def update_portfolio_progress(self, task_id: str, *, ticker: str, completed: int) -> dict:
        state = self.task_results[task_id]
        state["current_ticker"] = ticker
        state["status"] = "running"
        state["completed"] = completed
        return state

    def append_portfolio_result(self, task_id: str, rec: dict) -> dict:
        state = self.task_results[task_id]
        state["completed"] += 1
        state["results"].append(rec)
        return state

    def mark_portfolio_failure(self, task_id: str) -> dict:
        state = self.task_results[task_id]
        state["failed"] += 1
        return state

    def complete_job(self, task_id: str) -> dict:
        state = self.task_results[task_id]
        state["status"] = "completed"
        state["current_ticker"] = None
        self.persist_task(task_id, state)
        return state

    def fail_job(self, task_id: str, error: str) -> dict:
        state = self.task_results[task_id]
        state["status"] = "failed"
        state["error"] = error
        self.persist_task(task_id, state)
        return state

    def to_public_task_payload(self, task_id: str, *, contract: dict | None = None) -> dict:
        state = self.task_results[task_id]
        payload = {
            "contract_version": state.get("contract_version", CONTRACT_VERSION),
            "task_id": task_id,
            "request_id": state.get("request_id"),
            "executor_type": state.get("executor_type", DEFAULT_EXECUTOR_TYPE),
            "result_ref": state.get("result_ref"),
            "status": self._public_status(state.get("status")),
            "created_at": state.get("created_at"),
            "degradation_summary": state.get("degradation_summary"),
            "data_quality_summary": state.get("data_quality_summary"),
            "evidence": state.get("evidence_summary"),
            "tentative_classification": state.get("tentative_classification"),
            "budget_state": state.get("budget_state") or {},
            "error": self._public_error(contract, state),
        }
        if state.get("type") == "portfolio":
            payload.update({
                "type": "portfolio",
                "total": state.get("total", 0),
                "completed": state.get("completed", 0),
                "failed": state.get("failed", 0),
                "current_ticker": state.get("current_ticker"),
                "results": state.get("results", []),
            })
        else:
            payload.update({
                "ticker": state.get("ticker"),
                "date": state.get("date"),
                "progress": state.get("progress", 0),
                "current_stage": state.get("current_stage"),
                "elapsed_seconds": state.get("elapsed_seconds", 0),
                "stages": state.get("stages", []),
                "result": self._public_result(contract, state),
            })

        compat = {
            key: value
            for key, value in (state.get("compat") or {}).items()
            if value is not None
        }
        if compat:
            payload["compat"] = compat
        return payload

    def to_task_summary(self, task_id: str, *, contract: dict | None = None) -> dict:
        state = self.task_results[task_id]
        payload = self.to_public_task_payload(task_id, contract=contract)
        summary = {
            "task_id": payload["task_id"],
            "contract_version": payload["contract_version"],
            "request_id": payload.get("request_id"),
            "executor_type": payload.get("executor_type"),
            "result_ref": payload.get("result_ref"),
            "status": payload["status"],
            "created_at": payload.get("created_at"),
            "error": payload.get("error"),
            "data_quality_summary": payload.get("data_quality_summary"),
            "degradation_summary": payload.get("degradation_summary"),
            "tentative_classification": payload.get("tentative_classification"),
            "budget_state": payload.get("budget_state") or {},
        }
        if state.get("type") == "portfolio":
            summary.update({
                "type": "portfolio",
                "total": payload.get("total", 0),
                "completed": payload.get("completed", 0),
                "failed": payload.get("failed", 0),
                "current_ticker": payload.get("current_ticker"),
            })
            return summary

        result = payload.get("result") or {}
        summary.update({
            "ticker": payload.get("ticker"),
            "date": payload.get("date"),
            "progress": payload.get("progress", 0),
            "current_stage": payload.get("current_stage"),
            "summary": {
                "decision": result.get("decision"),
                "confidence": result.get("confidence"),
                "degraded": result.get("degraded", False),
            },
        })
        compat = payload.get("compat")
        if compat:
            summary["compat"] = compat
        return summary

    def register_background_task(self, task_id: str, task: asyncio.Task) -> None:
        self.analysis_tasks[task_id] = task

    def register_process(self, task_id: str, process: Any) -> None:
        self.processes[task_id] = process

    def cancel_job(self, task_id: str, error: str = "cancelled by user") -> dict | None:
        state = self.task_results.get(task_id)
        if not state:
            return None
        state["status"] = "failed"
        state["error"] = error
        return state

    @staticmethod
    def _normalize_task_state(task_id: str, state: dict) -> dict:
        normalized = dict(state)
        normalized.setdefault("request_id", task_id)
        normalized.setdefault("executor_type", DEFAULT_EXECUTOR_TYPE)
        normalized.setdefault("contract_version", CONTRACT_VERSION)
        normalized.setdefault("result_ref", None)
        normalized.setdefault("degradation_summary", None)
        normalized.setdefault("data_quality_summary", None)
        normalized.setdefault("evidence_summary", None)
        normalized.setdefault("tentative_classification", None)
        normalized.setdefault("budget_state", {})
        if "data_quality" in normalized and normalized.get("data_quality_summary") is None:
            normalized["data_quality_summary"] = normalized.get("data_quality")
        compat = normalized.get("compat")
        if not isinstance(compat, dict):
            compat = {}
        for key in ("decision", "quant_signal", "llm_signal", "confidence"):
            if key in normalized and key not in compat:
                compat[key] = normalized.get(key)
        normalized["compat"] = compat
        return normalized

    @staticmethod
    def _build_degradation_summary(result: dict) -> dict | None:
        if not result:
            return None
        degraded = bool(result.get("degraded"))
        report = result.get("report") or {}
        return {
            "degraded": degraded,
            "report_available": bool(report.get("available")),
        }

    @staticmethod
    def _public_result(contract: dict | None, state: dict) -> dict | None:
        if contract is not None:
            return contract.get("result")
        return state.get("result")

    @staticmethod
    def _public_error(contract: dict | None, state: dict) -> dict | str | None:
        if contract is not None and "error" in contract:
            return contract.get("error")
        return state.get("error")

    @staticmethod
    def _public_status(status: str | None) -> str | None:
        if status in {"collecting_evidence", "auto_recovering", "classification_pending", "probing_provider"}:
            return "running"
        return status
@@ -0,0 +1,58 @@
from __future__ import annotations

import os
from dataclasses import dataclass


def _env_flag(name: str, default: bool = False) -> bool:
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes", "on"}


@dataclass(frozen=True)
class MigrationFlags:
    """Migration modes for contract-first backend rollout."""

    executor_mode: str = "legacy"
    response_mode: str = "contract_first"
    write_mode: str = "dual_write"
    read_mode: str = "dual_read"
    request_context_enabled: bool = True

    @property
    def use_application_services(self) -> bool:
        return self.executor_mode in {"legacy", "direct", "auto"}

    @property
    def use_result_store(self) -> bool:
        return self.read_mode in {"dual_read", "contract_only"}

    @property
    def use_request_context(self) -> bool:
        return self.request_context_enabled


def load_migration_flags() -> MigrationFlags:
    """Load service migration modes from the environment with boolean compatibility."""
    executor_mode = os.environ.get("TRADINGAGENTS_EXECUTOR_MODE")
    if executor_mode is None:
        executor_mode = "legacy" if _env_flag("TRADINGAGENTS_USE_APPLICATION_SERVICES", default=False) else "legacy"

    response_mode = os.environ.get("TRADINGAGENTS_RESPONSE_MODE", "contract_first")
    write_mode = os.environ.get("TRADINGAGENTS_WRITE_MODE")
    if write_mode is None:
        write_mode = "dual_write" if _env_flag("TRADINGAGENTS_USE_RESULT_STORE", default=False) else "dual_write"

    read_mode = os.environ.get("TRADINGAGENTS_READ_MODE")
    if read_mode is None:
        read_mode = "dual_read" if _env_flag("TRADINGAGENTS_USE_RESULT_STORE", default=False) else "legacy_only"

    return MigrationFlags(
        executor_mode=executor_mode,
        response_mode=response_mode,
        write_mode=write_mode,
        read_mode=read_mode,
        request_context_enabled=_env_flag("TRADINGAGENTS_USE_REQUEST_CONTEXT", default=True),
    )
@@ -0,0 +1,85 @@
from __future__ import annotations

from dataclasses import dataclass, field, replace
from typing import Any, Optional
from uuid import uuid4

from fastapi import Request


CONTRACT_VERSION = "v1alpha1"
DEFAULT_EXECUTOR_TYPE = "legacy_subprocess"


@dataclass(frozen=True)
class RequestContext:
    """Minimal request-scoped metadata passed into application services."""

    request_id: str
    contract_version: str = CONTRACT_VERSION
    executor_type: str = DEFAULT_EXECUTOR_TYPE
    auth_key: Optional[str] = None
    provider_api_key: Optional[str] = None
    llm_provider: Optional[str] = None
    backend_url: Optional[str] = None
    deep_think_llm: Optional[str] = None
    quick_think_llm: Optional[str] = None
    selected_analysts: tuple[str, ...] = ()
    analysis_prompt_style: Optional[str] = None
    llm_timeout: Optional[float] = None
    llm_max_retries: Optional[int] = None
    client_host: Optional[str] = None
    is_local: bool = False
    metadata: dict[str, Any] = field(default_factory=dict)


def build_request_context(
    request: Optional[Request] = None,
    *,
    auth_key: Optional[str] = None,
    provider_api_key: Optional[str] = None,
    llm_provider: Optional[str] = None,
    backend_url: Optional[str] = None,
    deep_think_llm: Optional[str] = None,
    quick_think_llm: Optional[str] = None,
    selected_analysts: Optional[list[str] | tuple[str, ...]] = None,
    analysis_prompt_style: Optional[str] = None,
    llm_timeout: Optional[float] = None,
    llm_max_retries: Optional[int] = None,
    request_id: Optional[str] = None,
    contract_version: str = CONTRACT_VERSION,
    executor_type: str = DEFAULT_EXECUTOR_TYPE,
    metadata: Optional[dict[str, Any]] = None,
) -> RequestContext:
    """Create a stable request context without leaking FastAPI internals into services."""
    client_host = request.client.host if request and request.client else None
    is_local = client_host in {"127.0.0.1", "::1", "localhost", "testclient"}
    return RequestContext(
        request_id=request_id or uuid4().hex,
        contract_version=contract_version,
        executor_type=executor_type,
        auth_key=auth_key,
        provider_api_key=provider_api_key,
        llm_provider=llm_provider,
        backend_url=backend_url,
        deep_think_llm=deep_think_llm,
        quick_think_llm=quick_think_llm,
        selected_analysts=tuple(selected_analysts or ()),
        analysis_prompt_style=analysis_prompt_style,
        llm_timeout=llm_timeout,
        llm_max_retries=llm_max_retries,
        client_host=client_host,
        is_local=is_local,
        metadata=dict(metadata or {}),
    )


def clone_request_context(
    context: RequestContext,
    *,
    metadata_updates: Optional[dict[str, Any]] = None,
    **overrides: Any,
) -> RequestContext:
    metadata = dict(context.metadata)
    metadata.update(metadata_updates or {})
    return replace(context, metadata=metadata, **overrides)
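The frozen-dataclass-plus-`replace` pattern behind `clone_request_context` can be shown without the FastAPI dependency; this is a sketch with a trimmed hypothetical `Ctx` class and `clone` helper, not the real `RequestContext`:

```python
from dataclasses import dataclass, field, replace
from typing import Any, Optional
from uuid import uuid4


@dataclass(frozen=True)
class Ctx:
    # Trimmed stand-in for RequestContext: same immutability and metadata shape.
    request_id: str
    llm_provider: Optional[str] = None
    metadata: dict[str, Any] = field(default_factory=dict)


def clone(ctx, *, metadata_updates=None, **overrides):
    # dataclasses.replace() returns a new frozen instance; merging into a copy
    # of the metadata dict keeps the parent context untouched.
    merged = dict(ctx.metadata)
    merged.update(metadata_updates or {})
    return replace(ctx, metadata=merged, **overrides)


base = Ctx(request_id=uuid4().hex, llm_provider="anthropic")
child = clone(base, metadata_updates={"stage": "analysts"}, llm_provider="openai")
print(child.llm_provider, child.metadata["stage"])  # openai analysts
print(base.metadata)  # {}
```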
@@ -0,0 +1,87 @@
from __future__ import annotations

import json
from pathlib import Path
from typing import Optional


CONTRACT_VERSION = "v1alpha1"


class ResultStore:
    """Storage boundary for persisted task state and portfolio results."""

    def __init__(self, task_status_dir: Path, portfolio_gateway):
        self.task_status_dir = task_status_dir
        self.result_contract_dir = self.task_status_dir / "results"
        self.legacy_result_contract_dir = self.task_status_dir / "result_contracts"
        self.portfolio_gateway = portfolio_gateway

    def restore_task_results(self) -> dict[str, dict]:
        restored: dict[str, dict] = {}
        self.task_status_dir.mkdir(parents=True, exist_ok=True)
        for file_path in self.task_status_dir.glob("*.json"):
            try:
                data = json.loads(file_path.read_text())
            except Exception:
                continue
            task_id = data.get("task_id")
            if task_id:
                restored[task_id] = data
        return restored

    def save_task_status(self, task_id: str, data: dict) -> None:
        self.task_status_dir.mkdir(parents=True, exist_ok=True)
        (self.task_status_dir / f"{task_id}.json").write_text(json.dumps(data, ensure_ascii=False))

    def save_result_contract(self, task_id: str, contract: dict) -> str:
        target_dir = self.result_contract_dir / task_id
        target_dir.mkdir(parents=True, exist_ok=True)
        payload = dict(contract)
        payload.setdefault("task_id", task_id)
        payload.setdefault("contract_version", CONTRACT_VERSION)
        file_path = target_dir / "result.v1alpha1.json"
        file_path.write_text(json.dumps(payload, ensure_ascii=False))
        return file_path.relative_to(self.task_status_dir).as_posix()

    def load_result_contract(
        self,
        *,
        result_ref: str | None = None,
        task_id: str | None = None,
    ) -> dict | None:
        candidates: list[Path] = []
        if result_ref:
            candidates.append(self.task_status_dir / result_ref)
        if task_id:
            candidates.append(self.result_contract_dir / task_id / "result.v1alpha1.json")
            candidates.append(self.legacy_result_contract_dir / f"{task_id}.json")
        for path in candidates:
            if not path.exists():
                continue
            try:
                return json.loads(path.read_text())
            except Exception:
                continue
        return None

    def delete_task_status(self, task_id: str) -> None:
        (self.task_status_dir / f"{task_id}.json").unlink(missing_ok=True)

    def get_watchlist(self) -> list:
        return self.portfolio_gateway.get_watchlist()

    def get_accounts(self) -> dict:
        return self.portfolio_gateway.get_accounts()

    async def get_positions(self, account: Optional[str] = None) -> list:
        return await self.portfolio_gateway.get_positions(account)

    def get_recommendations(self, date: Optional[str] = None, limit: int = 50, offset: int = 0) -> dict:
        return self.portfolio_gateway.get_recommendations(date, limit, offset)

    def get_recommendation(self, date: str, ticker: str) -> Optional[dict]:
        return self.portfolio_gateway.get_recommendation(date, ticker)

    def save_recommendation(self, date: str, ticker: str, data: dict) -> None:
        self.portfolio_gateway.save_recommendation(date, ticker, data)
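The read path of `load_result_contract` resolves candidates in priority order: an explicit `result_ref`, then the per-task contract file, then the legacy flat layout. A minimal sketch of that lookup order, using a throwaway temp directory and a hypothetical `load` helper:

```python
import json
import tempfile
from pathlib import Path

# Throwaway root standing in for task_status_dir.
root = Path(tempfile.mkdtemp())
contract_dir = root / "results" / "task-1"
contract_dir.mkdir(parents=True)
(contract_dir / "result.v1alpha1.json").write_text(
    json.dumps({"task_id": "task-1", "contract_version": "v1alpha1"})
)


def load(result_ref=None, task_id=None):
    # Mirrors load_result_contract's candidate order:
    # 1) explicit result_ref, 2) results/<task_id>/result.v1alpha1.json,
    # 3) legacy result_contracts/<task_id>.json.
    candidates = []
    if result_ref:
        candidates.append(root / result_ref)
    if task_id:
        candidates.append(root / "results" / task_id / "result.v1alpha1.json")
        candidates.append(root / "result_contracts" / f"{task_id}.json")
    for path in candidates:
        if path.exists():
            return json.loads(path.read_text())
    return None


print(load(task_id="task-1")["contract_version"])  # v1alpha1
print(load(task_id="missing"))  # None
```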
@@ -0,0 +1,484 @@
import importlib
import json
import sys
from pathlib import Path

import pytest
from fastapi.testclient import TestClient
from starlette.websockets import WebSocketDisconnect


def _load_main_module(monkeypatch, *, env_file=""):
    backend_dir = Path(__file__).resolve().parents[1]
    monkeypatch.setenv("TRADINGAGENTS_ENV_FILE", env_file)
    monkeypatch.syspath_prepend(str(backend_dir))
    sys.modules.pop("main", None)
    return importlib.import_module("main")


def test_config_check_smoke(monkeypatch):
    monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
    monkeypatch.delenv("MINIMAX_API_KEY", raising=False)

    main = _load_main_module(monkeypatch)

    with TestClient(main.app) as client:
        response = client.get("/api/config/check")

    assert response.status_code == 200
    assert response.json() == {"configured": False}


def test_repo_env_overrides_stale_shell_provider_env(monkeypatch, tmp_path):
    env_file = tmp_path / ".env"
    env_file.write_text(
        "\n".join(
            [
                "TRADINGAGENTS_LLM_PROVIDER=anthropic",
                "TRADINGAGENTS_BACKEND_URL=https://api.minimaxi.com/anthropic",
                "TRADINGAGENTS_MODEL=MiniMax-M2.7-highspeed",
            ]
        ),
        encoding="utf-8",
    )
    monkeypatch.setenv("TRADINGAGENTS_LLM_PROVIDER", "openai")
    monkeypatch.setenv("TRADINGAGENTS_BACKEND_URL", "https://api.openai.com/v1")
    monkeypatch.setenv("TRADINGAGENTS_MODEL", "gpt-5.4")

    main = _load_main_module(monkeypatch, env_file=str(env_file))

    settings = main._resolve_analysis_runtime_settings()

    assert settings["llm_provider"] == "anthropic"
    assert settings["backend_url"] == "https://api.minimaxi.com/anthropic"
    assert settings["deep_think_llm"] == "MiniMax-M2.7-highspeed"
    assert settings["quick_think_llm"] == "MiniMax-M2.7-highspeed"


def test_saved_api_key_is_provider_scoped(monkeypatch, tmp_path):
    monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
    monkeypatch.delenv("MINIMAX_API_KEY", raising=False)
    monkeypatch.delenv("OPENAI_API_KEY", raising=False)

    main = _load_main_module(monkeypatch)
    config_path = tmp_path / "config.json"
    monkeypatch.setattr(main, "CONFIG_PATH", config_path)

    main._persist_analysis_api_key("anth-key", provider="anthropic")

    saved = json.loads(config_path.read_text())
    assert saved["api_keys"]["anthropic"] == "anth-key"
    assert main._get_analysis_provider_api_key("anthropic", saved) == "anth-key"
    assert main._get_analysis_provider_api_key("openai", saved) is None


def test_analysis_task_routes_smoke(monkeypatch):
    monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
    monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)

    main = _load_main_module(monkeypatch)

    seeded_task = {
        "contract_version": "v1alpha1",
        "task_id": "task-smoke",
        "request_id": "req-task-smoke",
        "executor_type": "legacy_subprocess",
        "result_ref": None,
        "ticker": "AAPL",
        "date": "2026-04-11",
        "status": "running",
        "progress": 10,
        "current_stage": "analysts",
        "created_at": "2026-04-11T10:00:00",
        "elapsed_seconds": 1,
        "stages": [],
        "result": None,
        "error": None,
        "degradation_summary": None,
        "data_quality_summary": None,
        "compat": {},
    }

    with TestClient(main.app) as client:
        main.app.state.task_results["task-smoke"] = seeded_task

        health_response = client.get("/health")
        tasks_response = client.get("/api/analysis/tasks")
        status_response = client.get("/api/analysis/status/task-smoke")

    assert health_response.status_code == 200
    assert health_response.json() == {"status": "ok"}
    assert tasks_response.status_code == 200
    assert tasks_response.json()["total"] >= 1
    assert any(task["task_id"] == "task-smoke" for task in tasks_response.json()["tasks"])
    assert status_response.status_code == 200
    assert status_response.json()["task_id"] == "task-smoke"
    assert status_response.json()["contract_version"] == "v1alpha1"
    assert status_response.json()["request_id"] == "req-task-smoke"
    assert status_response.json()["result"] is None


def test_analysis_status_route_uses_task_query_service(monkeypatch):
    monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
    monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)

    main = _load_main_module(monkeypatch)
    expected = {
        "contract_version": "v1alpha1",
        "task_id": "task-query",
        "status": "running",
        "via": "task-query-service",
    }

    def _fake_public_task_payload(task_id, *, state_override=None):
        assert task_id == "task-query"
        assert state_override is None
        return expected

    with TestClient(main.app) as client:
        main.app.state.task_results["task-query"] = {
            "contract_version": "v1alpha1",
            "task_id": "task-query",
            "request_id": "req-task-query",
            "executor_type": "legacy_subprocess",
            "result_ref": None,
            "ticker": "AAPL",
            "date": "2026-04-11",
            "status": "running",
            "progress": 10,
            "current_stage": "analysts",
            "created_at": "2026-04-11T10:00:00",
            "elapsed_seconds": 1,
            "stages": [],
            "result": None,
            "error": None,
            "degradation_summary": None,
            "data_quality_summary": None,
            "compat": {},
        }
        monkeypatch.setattr(main.app.state.task_query_service, "public_task_payload", _fake_public_task_payload)
        response = client.get("/api/analysis/status/task-query")

    assert response.status_code == 200
    assert response.json() == expected


def test_analysis_tasks_route_uses_task_query_service(monkeypatch):
    monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
    monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)

    main = _load_main_module(monkeypatch)
    expected = {
        "contract_version": "v1alpha1",
        "tasks": [{"task_id": "task-query"}],
        "total": 1,
    }

    def _fake_list_task_summaries():
        return expected

    with TestClient(main.app) as client:
        monkeypatch.setattr(main.app.state.task_query_service, "list_task_summaries", _fake_list_task_summaries)
        response = client.get("/api/analysis/tasks")

    assert response.status_code == 200
    assert response.json() == expected


def test_analysis_start_route_uses_analysis_service(monkeypatch):
    monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
    monkeypatch.setenv("ANTHROPIC_API_KEY", "test-key")

    main = _load_main_module(monkeypatch)
    created: dict[str, object] = {}

    class DummyTask:
        def cancel(self):
            return None

    def fake_create_task(coro):
        created["scheduled_coro"] = coro.cr_code.co_name
        coro.close()
        task = DummyTask()
        created["task"] = task
        return task

    monkeypatch.setattr(main.asyncio, "create_task", fake_create_task)

    with TestClient(main.app) as client:
        response = client.post(
            "/api/analysis/start",
            json={"ticker": "AAPL", "date": "2026-04-11"},
            headers={"api-key": "test-key"},
        )

    payload = response.json()
    task_id = payload["task_id"]

    assert response.status_code == 200
    assert payload["ticker"] == "AAPL"
    assert payload["date"] == "2026-04-11"
    assert payload["status"] == "running"
    assert created["scheduled_coro"] == "_run_analysis"
    assert main.app.state.analysis_tasks[task_id] is created["task"]
    assert main.app.state.task_results[task_id]["current_stage"] == "analysts"
    assert main.app.state.task_results[task_id]["status"] == "running"
    assert main.app.state.task_results[task_id]["request_id"]
    assert main.app.state.task_results[task_id]["executor_type"] == "legacy_subprocess"
    assert main.app.state.task_results[task_id]["result_ref"] is None
    assert main.app.state.task_results[task_id]["compat"] == {}


def test_portfolio_analyze_route_uses_analysis_service_smoke(monkeypatch):
    monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
    monkeypatch.setenv("TRADINGAGENTS_USE_APPLICATION_SERVICES", "1")
    monkeypatch.setenv("ANTHROPIC_API_KEY", "service-key")

    main = _load_main_module(monkeypatch)
    captured: dict[str, object] = {}

    async def fake_start_portfolio_analysis(*, task_id, date, request_context, broadcast_progress):
        captured["task_id"] = task_id
        captured["date"] = date
        captured["request_context"] = request_context
        captured["broadcast_progress"] = broadcast_progress
        return {"task_id": task_id, "status": "running", "total": 3}

    with TestClient(main.app) as client:
        monkeypatch.setattr(main.app.state.analysis_service, "start_portfolio_analysis", fake_start_portfolio_analysis)
        response = client.post("/api/portfolio/analyze", headers={"api-key": "service-key"})

    assert response.status_code == 200
    assert response.json()["status"] == "running"
    assert str(captured["task_id"]).startswith("port_")
    assert isinstance(captured["date"], str)
    assert captured["request_context"].auth_key == "service-key"
    assert callable(captured["broadcast_progress"])


def test_analysis_websocket_progress_is_contract_first(monkeypatch):
    monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
    monkeypatch.setenv("ANTHROPIC_API_KEY", "test-key")

    main = _load_main_module(monkeypatch)

    with TestClient(main.app) as client:
        main.app.state.task_results["task-ws"] = {
            "contract_version": "v1alpha1",
            "task_id": "task-ws",
            "request_id": "req-task-ws",
            "executor_type": "legacy_subprocess",
            "result_ref": None,
            "ticker": "AAPL",
            "date": "2026-04-11",
            "status": "running",
            "progress": 50,
            "current_stage": "research",
            "created_at": "2026-04-11T10:00:00",
            "elapsed_seconds": 3,
            "stages": [],
            "result": None,
            "error": None,
            "degradation_summary": None,
            "data_quality_summary": None,
            "compat": {"decision": "HOLD"},
        }
        with client.websocket_connect("/ws/analysis/task-ws?api_key=test-key") as websocket:
            message = websocket.receive_json()

    assert message["type"] == "progress"
    assert message["contract_version"] == "v1alpha1"
    assert message["task_id"] == "task-ws"
    assert message["request_id"] == "req-task-ws"
    assert message["compat"]["decision"] == "HOLD"
    assert "decision" not in message


def test_analysis_websocket_maps_internal_runtime_status_to_running(monkeypatch):
    monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
    monkeypatch.setenv("ANTHROPIC_API_KEY", "test-key")

    main = _load_main_module(monkeypatch)

    with TestClient(main.app) as client:
        main.app.state.task_results["task-ws-runtime"] = {
            "contract_version": "v1alpha1",
            "task_id": "task-ws-runtime",
            "request_id": "req-task-ws-runtime",
            "executor_type": "legacy_subprocess",
            "result_ref": None,
            "ticker": "AAPL",
            "date": "2026-04-11",
            "status": "auto_recovering",
            "progress": 50,
            "current_stage": "research",
            "created_at": "2026-04-11T10:00:00",
            "elapsed_seconds": 3,
            "stages": [],
            "result": None,
            "error": None,
            "degradation_summary": None,
            "data_quality_summary": None,
            "evidence_summary": {"attempts": []},
            "tentative_classification": None,
            "budget_state": {},
            "compat": {},
        }
        with client.websocket_connect("/ws/analysis/task-ws-runtime?api_key=test-key") as websocket:
            message = websocket.receive_json()

    assert message["status"] == "running"


def test_analysis_cancel_route_preserves_response_shape_and_broadcasts_cancelled_state(monkeypatch):
    monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
    monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)

    main = _load_main_module(monkeypatch)

    class _DummyTask:
        def cancel(self):
            return None

    class _DummyProcess:
        returncode = None

        def kill(self):
            return None

    captured: dict[str, dict] = {}

    def _save_sync(task_id, data):
        captured["saved_state"] = json.loads(json.dumps(data))

    def _delete_sync(task_id):
        captured["deleted_task_id"] = task_id

    async def _fake_broadcast(task_id, progress):
        captured["broadcast_payload"] = main.app.state.task_query_service.public_task_payload(
            task_id,
            state_override=progress,
        )

    with TestClient(main.app) as client:
        main.app.state.task_results["task-cancel"] = {
            "contract_version": "v1alpha1",
            "task_id": "task-cancel",
            "request_id": "req-task-cancel",
            "executor_type": "legacy_subprocess",
            "result_ref": None,
            "ticker": "AAPL",
            "date": "2026-04-11",
            "status": "running",
            "progress": 25,
            "current_stage": "research",
            "created_at": "2026-04-11T10:00:00",
            "elapsed_seconds": 4,
            "stages": [],
            "result": None,
            "error": None,
            "degradation_summary": None,
            "data_quality_summary": None,
            "compat": {},
        }
        main.app.state.analysis_tasks["task-cancel"] = _DummyTask()
        main.app.state.processes["task-cancel"] = _DummyProcess()
        monkeypatch.setattr(main.app.state.result_store, "save_task_status", _save_sync)
        monkeypatch.setattr(main.app.state.result_store, "delete_task_status", _delete_sync)
        monkeypatch.setattr(main, "broadcast_progress", _fake_broadcast)
        response = client.delete("/api/analysis/cancel/task-cancel")

    assert response.status_code == 200
    assert response.json() == {
        "contract_version": "v1alpha1",
        "task_id": "task-cancel",
        "status": "cancelled",
    }
    assert "error" not in response.json()
    assert captured["saved_state"]["status"] == "cancelled"
    assert captured["broadcast_payload"]["status"] == "cancelled"
    assert captured["broadcast_payload"]["error"] == {
        "code": "cancelled",
        "message": "用户取消",  # "cancelled by user"
        "retryable": False,
    }
    assert captured["deleted_task_id"] == "task-cancel"


def test_orchestrator_websocket_smoke_is_contract_first(monkeypatch):
    monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
    monkeypatch.setenv("ANTHROPIC_API_KEY", "test-key")

    main = _load_main_module(monkeypatch)

    import orchestrator.config as config_module
    import orchestrator.live_mode as live_mode_module
    import orchestrator.orchestrator as orchestrator_module

    class DummyConfig:
        def __init__(self, *args, **kwargs):
            self.args = args
            self.kwargs = kwargs

    class DummyOrchestrator:
        def __init__(self, config):
            self.config = config

    class DummyLiveMode:
        def __init__(self, orchestrator):
            self.orchestrator = orchestrator

        async def run_once(self, tickers, date=None):
            assert tickers == ["AAPL"]
            assert date == "2026-04-11"
            return [
                {
                    "contract_version": "v1alpha1",
                    "ticker": "AAPL",
                    "date": "2026-04-11",
                    "status": "degraded_success",
                    "result": {
                        "direction": 1,
                        "confidence": 0.55,
                        "quant_direction": None,
                        "llm_direction": 1,
                        "timestamp": "2026-04-11T12:00:00+00:00",
                    },
                    "error": None,
                    "degradation": {
                        "degraded": True,
                        "reason_codes": ["quant_signal_failed"],
                        "source_diagnostics": {"quant": {"reason_code": "quant_signal_failed"}},
                    },
                    "data_quality": {"state": "partial_data", "source": "quant"},
                }
            ]

    monkeypatch.setattr(config_module, "OrchestratorConfig", DummyConfig)
    monkeypatch.setattr(orchestrator_module, "TradingOrchestrator", DummyOrchestrator)
    monkeypatch.setattr(live_mode_module, "LiveMode", DummyLiveMode)

    with TestClient(main.app) as client:
        with client.websocket_connect("/ws/orchestrator?api_key=test-key") as websocket:
            websocket.send_json({"tickers": ["AAPL"], "date": "2026-04-11"})
            message = websocket.receive_json()

    assert message["contract_version"] == "v1alpha1"
    assert message["signals"][0]["contract_version"] == "v1alpha1"
    assert message["signals"][0]["status"] == "degraded_success"
    assert message["signals"][0]["degradation"]["reason_codes"] == ["quant_signal_failed"]
    assert message["signals"][0]["data_quality"]["state"] == "partial_data"


def test_orchestrator_websocket_rejects_unauthorized(monkeypatch):
    monkeypatch.setenv("DASHBOARD_API_KEY", "dashboard-secret")
    monkeypatch.setenv("ANTHROPIC_API_KEY", "test-key")

    main = _load_main_module(monkeypatch)

    with TestClient(main.app) as client:
        with pytest.raises(WebSocketDisconnect) as exc_info:
            with client.websocket_connect("/ws/orchestrator"):
                pass

    assert exc_info.value.code == 4401
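The `_load_main_module` helper above relies on evicting the cached module so the next import re-reads the current environment. A minimal sketch of that reload pattern, with a hypothetical `reload_fresh` helper (demonstrated here on a stdlib module rather than `main`):

```python
import importlib
import sys


def reload_fresh(name):
    # Drop any cached instance so import_module re-executes the module body,
    # which is what lets each test see its own monkeypatched environment.
    sys.modules.pop(name, None)
    return importlib.import_module(name)


mod = reload_fresh("string")  # any importable module works for the demo
print(mod.__name__)  # string
```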
@ -0,0 +1,457 @@
|
|||
import asyncio
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
from services.executor import AnalysisExecutorError, LegacySubprocessAnalysisExecutor
|
||||
from services.request_context import build_request_context
|
||||
|
||||
|
||||
class _FakeStdout:
|
||||
def __init__(self, lines, *, stall: bool = False, delay: float = 0.0):
|
||||
self._lines = list(lines)
|
||||
self._stall = stall
|
||||
self._delay = delay
|
||||
|
||||
async def readline(self):
|
||||
if self._stall:
|
||||
await asyncio.sleep(3600)
|
||||
if self._delay:
|
||||
await asyncio.sleep(self._delay)
|
||||
if self._lines:
|
||||
return self._lines.pop(0)
|
||||
return b""
|
||||
|
||||
|
||||
class _FakeStderr:
|
||||
def __init__(self, payload: bytes = b""):
|
||||
self._payload = payload
|
||||
|
||||
async def read(self):
|
||||
return self._payload
|
||||
|
||||
|
||||
class _FakeProcess:
|
||||
def __init__(self, stdout, *, stderr: bytes = b"", returncode=None):
|
||||
self.stdout = stdout
|
||||
self.stderr = _FakeStderr(stderr)
|
||||
self.returncode = returncode
|
||||
self.kill_called = False
|
||||
self.wait_called = False
|
||||
|
||||
async def wait(self):
|
||||
self.wait_called = True
|
||||
if self.returncode is None:
|
||||
self.returncode = -9 if self.kill_called else 0
|
||||
return self.returncode
|
||||
|
||||
def kill(self):
|
||||
self.kill_called = True
|
||||
self.returncode = -9
|
||||
|
||||
|
||||
def test_executor_raises_when_required_markers_missing(monkeypatch):
|
||||
process = _FakeProcess(
|
||||
_FakeStdout(
|
||||
[
|
||||
b"STAGE:analysts\n",
|
||||
b"STAGE:portfolio\n",
|
||||
b"SIGNAL_DETAIL:{\"quant_signal\":\"BUY\",\"llm_signal\":\"BUY\",\"confidence\":0.8}\n",
|
||||
],
|
||||
),
|
||||
returncode=0,
|
||||
)
|
||||
|
||||
async def fake_create_subprocess_exec(*args, **kwargs):
|
||||
return process
|
||||
|
||||
monkeypatch.setattr(asyncio, "create_subprocess_exec", fake_create_subprocess_exec)
|
||||
|
||||
executor = LegacySubprocessAnalysisExecutor(
|
||||
analysis_python=Path("/usr/bin/python3"),
|
||||
repo_root=Path("."),
|
||||
api_key_resolver=lambda: "env-key",
|
||||
)
|
||||
|
||||
async def scenario():
|
||||
with pytest.raises(AnalysisExecutorError, match="required markers: RESULT_META, ANALYSIS_COMPLETE"):
|
||||
await executor.execute(
|
||||
task_id="task-1",
|
||||
ticker="AAPL",
|
||||
date="2026-04-13",
|
||||
request_context=build_request_context(
|
||||
provider_api_key="ctx-key",
|
||||
llm_provider="anthropic",
|
||||
backend_url="https://api.minimaxi.com/anthropic",
|
||||
),
|
||||
)
|
||||
|
||||
asyncio.run(scenario())
|
||||
|
||||
|
||||
def test_executor_kills_subprocess_on_timeout(monkeypatch):
|
||||
process = _FakeProcess(_FakeStdout([], stall=True))
|
||||
|
||||
async def fake_create_subprocess_exec(*args, **kwargs):
|
||||
return process
|
||||
|
||||
monkeypatch.setattr(asyncio, "create_subprocess_exec", fake_create_subprocess_exec)
|
||||
|
||||
executor = LegacySubprocessAnalysisExecutor(
|
||||
analysis_python=Path("/usr/bin/python3"),
|
||||
repo_root=Path("."),
|
||||
api_key_resolver=lambda: "env-key",
|
||||
stdout_timeout_secs=0.01,
|
||||
)
|
||||
|
||||
async def scenario():
|
||||
with pytest.raises(AnalysisExecutorError, match="timed out"):
|
||||
await executor.execute(
|
||||
task_id="task-2",
|
||||
ticker="AAPL",
|
||||
date="2026-04-13",
|
||||
request_context=build_request_context(
|
||||
provider_api_key="ctx-key",
|
||||
llm_provider="anthropic",
|
||||
backend_url="https://api.minimaxi.com/anthropic",
|
||||
),
|
||||
)
|
||||
|
||||
asyncio.run(scenario())
|
||||
|
||||
assert process.kill_called is True
|
||||
assert process.wait_called is True
|
||||
|
||||
|
||||
def test_executor_marks_degraded_success_when_result_meta_reports_data_quality():
|
||||
output = LegacySubprocessAnalysisExecutor._parse_output(
|
||||
stdout_lines=[
|
||||
'SIGNAL_DETAIL:{"quant_signal":"HOLD","llm_signal":"BUY","confidence":0.6}',
|
||||
'RESULT_META:{"degrade_reason_codes":["non_trading_day"],"data_quality":{"state":"non_trading_day","requested_date":"2026-04-12"}}',
|
||||
"ANALYSIS_COMPLETE:OVERWEIGHT",
|
||||
],
|
||||
stderr_lines=[],
|
||||
ticker="AAPL",
|
||||
date="2026-04-12",
|
||||
request_context=build_request_context(
|
||||
provider_api_key="ctx-key",
|
||||
llm_provider="anthropic",
|
||||
backend_url="https://api.minimaxi.com/anthropic",
|
||||
),
|
||||
contract_version="v1alpha1",
|
||||
executor_type="legacy_subprocess",
|
||||
stdout_timeout_secs=300.0,
|
||||
total_timeout_secs=300.0,
|
||||
last_stage="portfolio",
|
||||
)
|
||||
|
||||
contract = output.to_result_contract(
|
||||
task_id="task-3",
|
||||
ticker="AAPL",
|
||||
date="2026-04-12",
|
||||
created_at="2026-04-12T10:00:00",
|
||||
elapsed_seconds=3,
|
||||
)
|
||||
|
||||
assert contract["status"] == "degraded_success"
|
||||
assert contract["data_quality"]["state"] == "non_trading_day"
|
||||
assert contract["degradation"]["reason_codes"] == ["non_trading_day"]
|
||||
assert output.observation["status"] == "completed"
|
||||
assert output.observation["stage"] == "portfolio"
|
||||
|
||||
|
||||
def test_executor_parses_llm_decision_structured_from_signal_detail():
|
||||
output = LegacySubprocessAnalysisExecutor._parse_output(
|
||||
stdout_lines=[
|
||||
'SIGNAL_DETAIL:{"quant_signal":"HOLD","llm_signal":"BUY","confidence":0.6,"llm_decision_structured":{"rating":"BUY","entry_style":"IMMEDIATE"}}',
|
||||
'RESULT_META:{"degrade_reason_codes":[],"data_quality":{"state":"ok"}}',
|
||||
"ANALYSIS_COMPLETE:BUY",
|
||||
],
|
||||
stderr_lines=[],
|
||||
ticker="AAPL",
|
||||
date="2026-04-12",
|
||||
request_context=build_request_context(
|
||||
provider_api_key="ctx-key",
|
||||
llm_provider="anthropic",
|
||||
backend_url="https://api.minimaxi.com/anthropic",
|
||||
),
|
||||
contract_version="v1alpha1",
|
||||
executor_type="legacy_subprocess",
|
||||
stdout_timeout_secs=300.0,
|
||||
total_timeout_secs=300.0,
|
||||
last_stage="portfolio",
|
||||
)
|
||||
|
||||
assert output.llm_decision_structured == {"rating": "BUY", "entry_style": "IMMEDIATE"}
|
||||
|
||||
|
||||
def test_executor_requires_result_meta_on_success():
    with pytest.raises(AnalysisExecutorError, match="required markers: RESULT_META"):
        LegacySubprocessAnalysisExecutor._parse_output(
            stdout_lines=[
                'SIGNAL_DETAIL:{"quant_signal":"HOLD","llm_signal":"BUY","confidence":0.6}',
                "ANALYSIS_COMPLETE:OVERWEIGHT",
            ],
            stderr_lines=[],
            ticker="AAPL",
            date="2026-04-12",
            request_context=build_request_context(
                provider_api_key="ctx-key",
                llm_provider="anthropic",
                backend_url="https://api.minimaxi.com/anthropic",
            ),
            contract_version="v1alpha1",
            executor_type="legacy_subprocess",
            stdout_timeout_secs=300.0,
            total_timeout_secs=300.0,
            last_stage="portfolio",
        )


def test_executor_injects_provider_specific_env(monkeypatch):
    captured = {}
    process = _FakeProcess(
        _FakeStdout(
            [
                b'SIGNAL_DETAIL:{"quant_signal":"BUY","llm_signal":"BUY","confidence":0.8}\n',
                b'RESULT_META:{"degrade_reason_codes":[],"data_quality":{"state":"ok"}}\n',
                b"ANALYSIS_COMPLETE:BUY\n",
            ]
        ),
        returncode=0,
    )

    async def fake_create_subprocess_exec(*args, **kwargs):
        captured["env"] = kwargs["env"]
        return process

    monkeypatch.setattr(asyncio, "create_subprocess_exec", fake_create_subprocess_exec)

    executor = LegacySubprocessAnalysisExecutor(
        analysis_python=Path("/usr/bin/python3"),
        repo_root=Path("."),
        api_key_resolver=lambda provider="openai": "fallback-key",
    )

    async def scenario():
        await executor.execute(
            task_id="task-provider",
            ticker="AAPL",
            date="2026-04-13",
            request_context=build_request_context(
                auth_key="dashboard-key",
                provider_api_key="provider-key",
                llm_provider="openai",
                backend_url="https://api.openai.com/v1",
                deep_think_llm="gpt-5.4",
                quick_think_llm="gpt-5.4-mini",
                selected_analysts=["market"],
                analysis_prompt_style="compact",
                llm_timeout=45,
                llm_max_retries=0,
                metadata={
                    "portfolio_context": "Growth exposure already elevated.",
                    "peer_context": "Same-theme rank: leader.",
                    "peer_context_mode": "SAME_THEME_NORMALIZED",
                },
            ),
        )

    asyncio.run(scenario())

    assert captured["env"]["TRADINGAGENTS_LLM_PROVIDER"] == "openai"
    assert captured["env"]["TRADINGAGENTS_BACKEND_URL"] == "https://api.openai.com/v1"
    assert captured["env"]["OPENAI_API_KEY"] == "provider-key"
    assert captured["env"]["TRADINGAGENTS_SELECTED_ANALYSTS"] == "market"
    assert captured["env"]["TRADINGAGENTS_ANALYSIS_PROMPT_STYLE"] == "compact"
    assert captured["env"]["TRADINGAGENTS_LLM_TIMEOUT"] == "45"
    assert captured["env"]["TRADINGAGENTS_LLM_MAX_RETRIES"] == "0"
    assert captured["env"]["TRADINGAGENTS_PORTFOLIO_CONTEXT"] == "Growth exposure already elevated."
    assert captured["env"]["TRADINGAGENTS_PEER_CONTEXT"] == "Same-theme rank: leader."
    assert captured["env"]["TRADINGAGENTS_PEER_CONTEXT_MODE"] == "SAME_THEME_NORMALIZED"
    assert captured["env"]["TRADINGAGENTS_PROVIDER_API_KEY"] == "provider-key"
    assert captured["env"]["TRADINGAGENTS_HEARTBEAT_SECS"] == "10.0"
    assert "ANTHROPIC_API_KEY" not in captured["env"]


def test_executor_requires_result_meta_on_failure(monkeypatch):
    process = _FakeProcess(
        _FakeStdout([]),
        stderr=b"ANALYSIS_ERROR:boom\n",
        returncode=1,
    )

    async def fake_create_subprocess_exec(*args, **kwargs):
        return process

    monkeypatch.setattr(asyncio, "create_subprocess_exec", fake_create_subprocess_exec)

    executor = LegacySubprocessAnalysisExecutor(
        analysis_python=Path("/usr/bin/python3"),
        repo_root=Path("."),
        api_key_resolver=lambda: "env-key",
    )

    async def scenario():
        with pytest.raises(AnalysisExecutorError, match="required markers: RESULT_META"):
            await executor.execute(
                task_id="task-failure",
                ticker="AAPL",
                date="2026-04-13",
                request_context=build_request_context(
                    provider_api_key="ctx-key",
                    llm_provider="anthropic",
                    backend_url="https://api.minimaxi.com/anthropic",
                ),
            )

    asyncio.run(scenario())


def test_executor_includes_observation_on_timeout(monkeypatch):
    process = _FakeProcess(_FakeStdout([], stall=True))

    async def fake_create_subprocess_exec(*args, **kwargs):
        return process

    monkeypatch.setattr(asyncio, "create_subprocess_exec", fake_create_subprocess_exec)

    executor = LegacySubprocessAnalysisExecutor(
        analysis_python=Path("/usr/bin/python3"),
        repo_root=Path("."),
        api_key_resolver=lambda: "env-key",
        stdout_timeout_secs=0.01,
    )

    async def scenario():
        with pytest.raises(AnalysisExecutorError) as exc_info:
            await executor.execute(
                task_id="task-timeout-observation",
                ticker="AAPL",
                date="2026-04-13",
                request_context=build_request_context(
                    provider_api_key="ctx-key",
                    llm_provider="anthropic",
                    backend_url="https://api.minimaxi.com/anthropic",
                    metadata={"attempt_index": 0, "attempt_mode": "baseline", "probe_mode": "none"},
                ),
            )
        return exc_info.value

    exc = asyncio.run(scenario())
    assert exc.observation["observation_code"] == "subprocess_stdout_timeout"
    assert exc.observation["attempt_mode"] == "baseline"
    assert exc.observation["provider"] == "anthropic"


def test_executor_collect_markers_tracks_heartbeat_and_auth_checkpoint():
    markers = LegacySubprocessAnalysisExecutor._collect_markers(
        [
            'CHECKPOINT:AUTH:{"provider":"anthropic","api_key_present":true}',
            'HEARTBEAT:{"elapsed_seconds":10.0}',
            "STAGE:trading",
            "RESULT_META:{}",
        ]
    )

    assert markers["auth_checkpoint"] is True
    assert markers["heartbeat"] is True
    assert markers["result_meta"] is True


def test_executor_uses_total_timeout_separately_from_stdout_timeout(monkeypatch):
    process = _FakeProcess(
        _FakeStdout(
            [b'CHECKPOINT:AUTH:{"provider":"anthropic","api_key_present":true}\n'] * 10,
            delay=0.02,
        )
    )

    async def fake_create_subprocess_exec(*args, **kwargs):
        return process

    monkeypatch.setattr(asyncio, "create_subprocess_exec", fake_create_subprocess_exec)

    executor = LegacySubprocessAnalysisExecutor(
        analysis_python=Path("/usr/bin/python3"),
        repo_root=Path("."),
        api_key_resolver=lambda: "env-key",
        stdout_timeout_secs=1.0,
    )

    async def scenario():
        with pytest.raises(AnalysisExecutorError, match="total timeout"):
            await executor.execute(
                task_id="task-total-timeout",
                ticker="AAPL",
                date="2026-04-13",
                request_context=build_request_context(
                    provider_api_key="ctx-key",
                    llm_provider="anthropic",
                    backend_url="https://api.minimaxi.com/anthropic",
                    metadata={"stdout_timeout_secs": 1.0, "total_timeout_secs": 0.05},
                ),
            )

    asyncio.run(scenario())

    assert process.kill_called is True


def test_executor_real_subprocess_heartbeat_survives_blocking_sleep(tmp_path):
    script_template = """
import json
import threading
import time

print('CHECKPOINT:AUTH:' + json.dumps({'provider':'anthropic','api_key_present': True}), flush=True)
print('STAGE:analysts', flush=True)
print('STAGE:research', flush=True)
print('STAGE:trading', flush=True)

stop = threading.Event()


def heartbeat():
    while not stop.wait(0.01):
        print('HEARTBEAT:' + json.dumps({'alive': True}), flush=True)


threading.Thread(target=heartbeat, daemon=True).start()
time.sleep(0.12)
stop.set()

print('STAGE:risk', flush=True)
print('STAGE:portfolio', flush=True)
print('SIGNAL_DETAIL:' + json.dumps({'quant_signal':'HOLD','llm_signal':'BUY','confidence':0.8}), flush=True)
print('RESULT_META:' + json.dumps({'degrade_reason_codes': [], 'data_quality': {'state': 'ok'}}), flush=True)
print('ANALYSIS_COMPLETE:BUY', flush=True)
"""

    executor = LegacySubprocessAnalysisExecutor(
        analysis_python=Path(sys.executable),
        repo_root=tmp_path,
        api_key_resolver=lambda: "env-key",
        script_template=script_template,
        stdout_timeout_secs=0.03,
    )

    async def scenario():
        return await executor.execute(
            task_id="task-heartbeat-real",
            ticker="AAPL",
            date="2026-04-13",
            request_context=build_request_context(
                provider_api_key="ctx-key",
                llm_provider="anthropic",
                backend_url="https://api.minimaxi.com/anthropic",
                metadata={
                    "stdout_timeout_secs": 0.03,
                    "total_timeout_secs": 1.0,
                    "heartbeat_interval_secs": 0.01,
                },
            ),
        )

    output = asyncio.run(scenario())
    assert output.decision == "BUY"
    assert output.observation["markers"]["heartbeat"] is True

@ -0,0 +1,34 @@
from pathlib import Path
import re


FRONTEND_SRC = Path(__file__).resolve().parents[2] / "frontend" / "src"
CONTRACT_VIEW = FRONTEND_SRC / "utils" / "contractView.js"
LEGACY_TOP_LEVEL_FIELDS = ("decision", "confidence", "quant_signal", "llm_signal")
DIRECT_FIELD_ACCESS = re.compile(r"(?:\?|)\.\s*(decision|confidence|quant_signal|llm_signal)\b")


def test_contract_view_reads_contract_result_before_compat_fields():
    source = CONTRACT_VIEW.read_text()

    assert "getResult(payload).decision ?? getCompat(payload).decision" in source
    assert "getResult(payload).confidence ?? getCompat(payload).confidence" in source
    assert "getResult(payload).signals?.quant?.rating ?? getCompat(payload).quant_signal" in source
    assert "getResult(payload).signals?.llm?.rating ?? getCompat(payload).llm_signal" in source


def test_frontend_consumers_use_contract_view_helpers_for_signal_fields():
    offenders: list[str] = []

    for path in sorted(FRONTEND_SRC.rglob("*.js")) + sorted(FRONTEND_SRC.rglob("*.jsx")):
        if path == CONTRACT_VIEW:
            continue
        matches = {
            match.group(1)
            for match in DIRECT_FIELD_ACCESS.finditer(path.read_text())
            if match.group(1) in LEGACY_TOP_LEVEL_FIELDS
        }
        if matches:
            offenders.append(f"{path.relative_to(FRONTEND_SRC)} -> {sorted(matches)}")

    assert offenders == []

Some files were not shown because too many files have changed in this diff.