wip: stage uncommitted changes before merge

陈少杰 2026-04-16 17:01:04 +08:00
parent eda9980729
commit 579c787027
45 changed files with 3828 additions and 336 deletions

View File

@@ -1,6 +1,13 @@
# LLM Providers (set the one you use)
# MiniMax via Anthropic-compatible API
MINIMAX_API_KEY=
ANTHROPIC_API_KEY=
ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic
TRADINGAGENTS_LLM_PROVIDER=anthropic
TRADINGAGENTS_MODEL=MiniMax-M2.7-highspeed
TRADINGAGENTS_BACKEND_URL=https://api.minimaxi.com/anthropic
# Other providers (optional)
OPENAI_API_KEY=
GOOGLE_API_KEY=
ANTHROPIC_API_KEY=
XAI_API_KEY=
OPENROUTER_API_KEY=

CLAUDE.md (146 changed lines)
View File

@@ -15,84 +15,110 @@ TradingAgents is a multi-agent LLM financial trading framework built on LangGraph
# Activate the environment
source env312/bin/activate
# SEPA screening + full TradingAgents pipeline
python sepa_v5.py
# Single-stock analysis
python run_ningde.py # CATL (300750.SZ)
python run_312.py # Kweichow Moutai
# CLI interactive mode
# CLI interactive mode (recommended)
python -m cli.main
# Single-stock analysis (programmatic)
python -c "from tradingagents.graph.trading_graph import TradingAgentsGraph; ta = TradingAgentsGraph(debug=True); _, decision = ta.propagate('NVDA', '2026-01-15'); print(decision)"
# Run tests
python -m pytest orchestrator/tests/
# Orchestrator backtest mode
QUANT_BACKTEST_PATH=/path/to/quant_backtest python orchestrator/examples/run_backtest.py
# Orchestrator live mode
QUANT_BACKTEST_PATH=/path/to/quant_backtest python orchestrator/examples/run_live.py
```
## Core Architecture
### Workflow
```
SEPA screening (quantitative) → Analyst team → Researcher debate → Trader → Risk-management debate → Portfolio manager
Analyst team → Researcher debate → Trader → Risk-management debate → Portfolio manager
```
### Key Components (`tradingagents/`)
### Key Components
| Directory | Responsibility |
|------|------|
| `agents/` | LLM agent implementations (analysts, researchers, trader, risk control) |
| `dataflows/` | Data-source integrations (yfinance, alpha_vantage, china_data) |
| `graph/` | LangGraph workflow orchestration |
| `llm_clients/` | Multi-provider LLM support (OpenAI, Anthropic, Google) |
**tradingagents/** - the core multi-agent framework
- `agents/` - LLM agent implementations (analysts, researchers, trader, risk control)
- `dataflows/` - data-source integrations, routed through `interface.py` to yfinance/alpha_vantage/china_data
- `graph/` - LangGraph workflow orchestration; `trading_graph.py` is the main coordinator
- `llm_clients/` - multi-provider LLM support (OpenAI, Anthropic, Google, xAI, OpenRouter, Ollama)
- `default_config.py` - default configuration (LLM provider, model selection, data-source routing, debate rounds)
### Data Flow
```
Data sources → dataflows/interface.py (routing) → agent tool calls
```
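The routing step can be sketched as follows. Only the `data_vendors` / `tool_vendors` precedence comes from the config description; the function name and the `yfinance` fallback are assumptions of this sketch, not the actual `interface.py` API.

```python
def pick_vendor(tool_name: str, category: str, config: dict) -> str:
    """Resolve which data vendor should serve a tool call.

    Per-tool overrides in `tool_vendors` win over the per-category
    defaults in `data_vendors`; the final fallback value is an
    assumption of this sketch.
    """
    tool_vendors = config.get("tool_vendors", {})
    if tool_name in tool_vendors:
        return tool_vendors[tool_name]
    return config.get("data_vendors", {}).get(category, "yfinance")
```

For example, `pick_vendor("get_stock_data", "core_stock_apis", config)` returns the per-tool override when one is configured, otherwise the category default.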
**orchestrator/** - the quant + LLM signal fusion layer
- `orchestrator.py` - main coordinator that fuses quant and LLM signals
- `quant_runner.py` - fetches quant signals
- `llm_runner.py` - fetches LLM signals (calls TradingAgentsGraph)
- `signals.py` - signal-merge logic
- `backtest_mode.py` / `live_mode.py` - backtest / live run modes
- `contracts/` - config and result contract definitions
## A-Share-Specific Configuration
**cli/** - interactive command-line interface
- `main.py` - Typer CLI entry point that shows agent status and reports in real time
## Configuration System
### TradingAgents Configuration (`tradingagents/default_config.py`)
Key settings that can be overridden at runtime:
- `llm_provider`: "openai" | "google" | "anthropic" | "xai" | "openrouter" | "ollama"
- `deep_think_llm`: model for complex reasoning (local default `MiniMax-M2.7-highspeed`)
- `quick_think_llm`: model for quick tasks (local default `MiniMax-M2.7-highspeed`)
- `backend_url`: LLM API endpoint
- `data_vendors`: data sources configured per category (core_stock_apis, technical_indicators, fundamental_data, news_data)
- `tool_vendors`: per-tool data-source overrides (take precedence over `data_vendors`)
- `max_debate_rounds`: number of researcher debate rounds
- `max_risk_discuss_rounds`: number of risk-management debate rounds
- `output_language`: output language ("English" | "中文")
### Orchestrator Configuration (`orchestrator/config.py`)
- `quant_backtest_path`: quant backtest output directory (must be set to use quant signals)
- `trading_agents_config`: config passed through to TradingAgentsGraph
- `quant_weight_cap` / `llm_weight_cap`: caps on signal confidence
- `llm_batch_days`: interval in days between LLM runs
- `cache_dir`: cache directory for LLM signals
- `llm_solo_penalty` / `quant_solo_penalty`: confidence discount when only one lane runs
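How the caps and solo penalties might combine can be sketched as below. Only the cap and solo-penalty concepts come from the config description; the equal-weight average, the function name, and all default values are assumptions of this sketch.

```python
def fuse_confidence(quant_conf, llm_conf, *,
                    quant_weight_cap=0.6, llm_weight_cap=0.6,
                    quant_solo_penalty=0.2, llm_solo_penalty=0.2):
    """Blend per-lane confidences under caps, discounting solo lanes."""
    quant = min(quant_conf, quant_weight_cap) if quant_conf is not None else None
    llm = min(llm_conf, llm_weight_cap) if llm_conf is not None else None
    if quant is not None and llm is not None:
        return (quant + llm) / 2                      # both lanes available
    if quant is not None:
        return max(quant - quant_solo_penalty, 0.0)   # quant running solo
    if llm is not None:
        return max(llm - llm_solo_penalty, 0.0)       # LLM running solo
    return 0.0                                        # neither lane produced a signal
```

An overconfident single lane is first capped, then discounted, so a solo signal can never outrank a corroborated one under these defaults.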
### A-Share-Specific Configuration
- **Data source**: yfinance (the akshare financial API is broken)
- **Ticker format**: `300750.SZ` (Shenzhen), `603259.SS` (Shanghai), `688256.SS` (STAR Market)
- **API**: MiniMax (Anthropic-compatible), base URL: `https://api.minimaxi.com/anthropic`
- **MiniMax API**: Anthropic-compatible; base URL: `https://api.minimaxi.com/anthropic`
- **Local default model**: `MiniMax-M2.7-highspeed`
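The suffix convention can be sketched as a small helper. The prefix rules are standard exchange conventions rather than code from this repo, and the function name is illustrative.

```python
def a_share_suffix(code: str) -> str:
    """Map a 6-digit A-share code to its Yahoo-style exchange suffix.

    Illustrative mapping based on standard exchange prefixes, not the
    repo's own implementation.
    """
    if code.startswith(("600", "601", "603", "605", "688")):
        return f"{code}.SS"  # Shanghai, incl. STAR Market (688)
    if code.startswith(("000", "001", "002", "003", "300", "301")):
        return f"{code}.SZ"  # Shenzhen, incl. ChiNext (300)
    raise ValueError(f"unrecognized A-share code: {code}")
```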
## Key Files
## Data Flow
| File | Purpose |
|------|------|
| `tradingagents/graph/trading_graph.py` | Main coordinator TradingAgentsGraph |
| `tradingagents/graph/setup.py` | LangGraph node/edge configuration |
| `dataflows/interface.py` | Data-vendor routing |
| `sepa_v5.py` | SEPA screening pipeline |
| `default_config.py` | Default configuration |
```
1. Tool calls (agents/utils/*_tools.py)
2. Routing layer (dataflows/interface.py)
   - routes based on config["data_vendors"] and config["tool_vendors"]
3. Data-vendor implementations
   - yfinance: y_finance.py, yfinance_news.py
   - alpha_vantage: alpha_vantage*.py
   - china_data: china_data.py (requires akshare; currently unavailable)
4. Return data to the agents
```
## Configuration
## Important Implementation Details
Defaults live in `tradingagents/default_config.py` and can be overridden at runtime:
- `llm_provider`: LLM provider
- `deep_think_llm` / `quick_think_llm`: model selection
- `data_vendors`: data-source routing
- `max_debate_rounds`: number of debate rounds
### LLM Clients
- `llm_clients/base_client.py` - unified interface
- `llm_clients/model_catalog.py` - model catalog and validation
- supports provider-specific thinking configuration (google_thinking_level, openai_reasoning_effort, anthropic_effort)
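Dispatching the provider-specific thinking knob might look like the sketch below. The three key names come from the bullet list above; the dispatch function itself is illustrative, not the actual `base_client.py` logic.

```python
def thinking_kwargs(provider: str, config: dict) -> dict:
    """Select the one thinking option relevant to the active provider."""
    mapping = {
        "google": "google_thinking_level",
        "openai": "openai_reasoning_effort",
        "anthropic": "anthropic_effort",
    }
    key = mapping.get(provider)
    if key and config.get(key) is not None:
        return {key: config[key]}
    return {}  # providers without a thinking knob get no extra kwargs
```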
## Design Context (Web Dashboard)
### Signal Fusion (Orchestrator)
- Dual-track: quant signals + LLM signals
- Degradation strategy: if one lane fails, use the other and apply its solo penalty
- Caching: LLM signals are cached in `cache_dir` to avoid repeated API calls
- Contract-based: structured output defined in `contracts/`
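The caching behavior can be sketched as below. The `<ticker>_<date>.json` file layout and the helper name are assumptions of this sketch; the repo only documents that LLM signals are cached under `cache_dir`.

```python
import json
from pathlib import Path

def cached_llm_signal(cache_dir: str, ticker: str, date: str, compute):
    """Return a cached LLM signal, computing and storing it on a miss."""
    path = Path(cache_dir) / f"{ticker}_{date}.json"
    if path.exists():
        return json.loads(path.read_text(encoding="utf-8"))  # cache hit: no API call
    signal = compute(ticker, date)                            # expensive LLM call
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(signal), encoding="utf-8")
    return signal
```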
### Core Features
- **Stock screening panel**: enter ticker symbols, run SEPA screening, and show the results in a table
- **Analysis console**: display TradingAgents multi-agent progress in real time (analysts → researchers → trader → risk control)
- **Historical reports**: browse past analysis reports with search, filtering, and export
- **Batch management**: submit analysis tasks in batches and view queue status
### UI Style
- **Style**: data visualization first - chart-driven, real-time updates
- **References**: Grafana dashboards, Bloomberg Terminal, Binance trading UI
- **Theme**: primarily dark, with charts carrying most of the data
### Design Principles
1. **Real-time first** - every state change is reflected immediately; chart data refreshes automatically
2. **Data visualization** - numeric metrics are shown as charts rather than walls of text
3. **Clear state hierarchy** - current task > queued tasks > history
4. **Batch efficiency** - submit and manage multiple tasks at once
5. **Professional financial feel** - dark theme, candlestick/line charts, data tables
## Design System
Always read `DESIGN.md` before making any visual or UI decisions.
All font choices, colors, spacing, and aesthetic direction are defined there.
Do not deviate without explicit user approval.
### Testing
- `orchestrator/tests/` - Orchestrator unit tests
- `tests/` - TradingAgents core tests
- run with pytest: `python -m pytest orchestrator/tests/`

View File

@@ -144,13 +144,19 @@ export OPENROUTER_API_KEY=... # OpenRouter
export ALPHA_VANTAGE_API_KEY=... # Alpha Vantage
```
For local models, configure Ollama with `llm_provider: "ollama"` in your config.
For this local repo, the default daily lane is MiniMax via the Anthropic-compatible API:
Alternatively, copy `.env.example` to `.env` and fill in your keys:
```bash
cp .env.example .env
# then fill:
# MINIMAX_API_KEY=...
# ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic
# TRADINGAGENTS_LLM_PROVIDER=anthropic
# TRADINGAGENTS_MODEL=MiniMax-M2.7-highspeed
```
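How these variables might be consumed at runtime can be sketched as follows. The resolution order (prefer `MINIMAX_API_KEY`, fall back to `ANTHROPIC_API_KEY`) and the helper name are assumptions for illustration, not the repo's confirmed logic; the defaults mirror the values shown above.

```python
import os

def resolve_llm_env() -> dict:
    """Collect the MiniMax-related variables from the environment.

    Assumption: MINIMAX_API_KEY takes precedence over ANTHROPIC_API_KEY.
    """
    return {
        "api_key": os.environ.get("MINIMAX_API_KEY") or os.environ.get("ANTHROPIC_API_KEY", ""),
        "base_url": os.environ.get("ANTHROPIC_BASE_URL", "https://api.minimaxi.com/anthropic"),
        "provider": os.environ.get("TRADINGAGENTS_LLM_PROVIDER", "anthropic"),
        "model": os.environ.get("TRADINGAGENTS_MODEL", "MiniMax-M2.7-highspeed"),
    }
```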
For local models, configure Ollama with `llm_provider: "ollama"` in your config.
### CLI Usage
Launch the interactive CLI:
@@ -186,9 +192,10 @@ To use TradingAgents inside your code, you can import the `tradingagents` module
```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.default_config import get_default_config, load_project_env
ta = TradingAgentsGraph(debug=True, config=DEFAULT_CONFIG.copy())
load_project_env(__file__)
ta = TradingAgentsGraph(debug=True, config=get_default_config())
# forward propagate
_, decision = ta.propagate("NVDA", "2026-01-15")
@@ -199,12 +206,12 @@ You can also adjust the default configuration to set your own choice of LLMs, de
```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.default_config import get_default_config, load_project_env
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "openai" # openai, google, anthropic, xai, openrouter, ollama
config["deep_think_llm"] = "gpt-5.4" # Model for complex reasoning
config["quick_think_llm"] = "gpt-5.4-mini" # Model for quick tasks
load_project_env(__file__)
config = get_default_config()
# The local repo default is MiniMax via the Anthropic-compatible API.
# Override only when you intentionally want a different provider/model.
config["max_debate_rounds"] = 2
ta = TradingAgentsGraph(debug=True, config=config)

View File

@@ -24,7 +24,7 @@ from rich.align import Align
from rich.rule import Rule
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.default_config import get_default_config
from cli.models import AnalystType
from cli.utils import *
from cli.announcements import fetch_announcements, display_announcements
@@ -930,7 +930,7 @@ def run_analysis():
selections = get_user_selections()
# Create config with selected research depth
config = DEFAULT_CONFIG.copy()
config = get_default_config()
config["max_debate_rounds"] = selections["research_depth"]
config["max_risk_discuss_rounds"] = selections["research_depth"]
config["quick_think_llm"] = selections["shallow_thinker"]
@@ -1168,7 +1168,14 @@ def run_analysis():
# Update final report sections
for section in message_buffer.report_sections.keys():
if section in final_state:
if section == "final_trade_decision":
report_value = final_state.get(
"final_trade_decision_report",
final_state.get("final_trade_decision"),
)
if report_value:
message_buffer.update_report_section(section, report_value)
elif section in final_state:
message_buffer.update_report_section(section, final_state[section])
update_display(layout, stats_handler=stats_handler, start_time=start_time)

View File

@@ -11,12 +11,13 @@ This document is still the **target boundary** document, but several convergence
- `web_dashboard/backend/services/job_service.py` now owns public task/job projection logic;
- `web_dashboard/backend/services/result_store.py` persists result contracts under `results/<task_id>/result.v1alpha1.json`;
- `web_dashboard/backend/services/analysis_service.py` and `api/portfolio.py` already expose contract-first result payloads by default;
- task lifecycle query/command routing for `status/list/cancel` now sits behind backend task services instead of route-local orchestration in `main.py`;
- `/ws/analysis/{task_id}` and `/ws/orchestrator` already carry `contract_version = "v1alpha1"` and include result/degradation/data-quality metadata.
What is **not** fully finished yet:
- `web_dashboard/backend/main.py` still contains too much orchestration glue and transport-local logic;
- route handlers are thinner than before, but the application layer has not fully absorbed every lifecycle branch;
- `web_dashboard/backend/main.py` still contains too much orchestration glue and transport-local logic outside the task lifecycle slice;
- route handlers are thinner than before, but the application layer has not fully absorbed reports/export and every remaining lifecycle branch;
- migration flags/modes still coexist with legacy compatibility paths.
## 1. Why this document exists
@@ -49,7 +50,6 @@ This is the correct place for quant/LLM merge semantics.
- analysis subprocess template creation;
- stage-to-progress mapping;
- task state persistence in `app.state.task_results` and `data/task_status/*.json`;
- conversion from `FinalSignal` to UI-oriented fields such as `decision`, `quant_signal`, `llm_signal`, `confidence`;
- report materialization into `results/<ticker>/<date>/complete_report.md`.
@@ -59,6 +59,7 @@ At the same time, current mainline no longer matches the oldest “all logic sit
- merge semantics remain in `orchestrator/`;
- public payload shaping has started moving into backend services;
- task lifecycle query/command paths now route through backend task services;
- legacy compatibility fields still exist for UI safety.
## 3. Target boundary

View File

@@ -102,7 +102,10 @@ Optional transport-specific wrapper fields such as WebSocket `type` may sit outs
{"name": "portfolio", "status": "pending", "completed_at": null}
],
"result": null,
"error": null
"error": null,
"evidence_summary": null,
"tentative_classification": null,
"budget_state": {}
}
```
@@ -111,6 +114,7 @@ Notes:
- `elapsed_seconds` is preferred over the current loosely typed `elapsed`.
- stage entries should carry explicit `name`; current positional arrays are fragile.
- `result` remains nullable until completion.
- `evidence_summary`, `tentative_classification`, and `budget_state` are additive helper fields for runtime recovery / attribution and may be absent in older payloads.
## 5.3 Completed result payload
@@ -137,6 +141,29 @@ Notes:
"available": true
}
},
"evidence": {
"attempts": [
{
"status": "completed",
"observation_code": "completed",
"stage": "portfolio"
}
],
"last_observation": {
"status": "completed",
"observation_code": "completed",
"stage": "portfolio"
}
},
"tentative_classification": {
"kind": "healthy",
"summary": "baseline execution succeeded without fallback"
},
"budget_state": {
"local_recovery_used": false,
"provider_probe_used": false,
"baseline_timeout_secs": 300.0
},
"error": null
}
```
@@ -256,6 +283,7 @@ Consumers should tolerate:
- absent `result.signals.quant` when quant path is unavailable
- absent `result.signals.llm` when LLM path is unavailable
- `result.degraded = true` when only one lane produced a usable signal
- optional additive fields such as `evidence`, `tentative_classification`, `budget_state`, `evidence_summary`
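A tolerant consumer of this payload can be sketched as below. The field names mirror the contract text above; the normalized output shape and the function name are this sketch's own choices.

```python
def read_result_payload(payload: dict) -> dict:
    """Read a task payload while tolerating absent lanes and additive fields."""
    result = payload.get("result") or {}
    signals = result.get("signals") or {}
    return {
        "quant": signals.get("quant"),            # may be absent when quant is unavailable
        "llm": signals.get("llm"),                # may be absent when LLM is unavailable
        "degraded": bool(result.get("degraded", False)),
        "evidence": payload.get("evidence"),      # additive field, optional
        "budget_state": payload.get("budget_state", {}),
    }
```

Because every optional field is read with `.get`, older payloads that predate the additive fields parse without errors.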
### fields to avoid freezing yet

main.py (10 changed lines)
View File

@@ -1,15 +1,11 @@
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
from dotenv import load_dotenv
from tradingagents.default_config import get_default_config, load_project_env
# Load environment variables from .env file
load_dotenv()
load_project_env(__file__)
# Create a custom config
config = DEFAULT_CONFIG.copy()
config["deep_think_llm"] = "gpt-5.4-mini" # Use a different model
config["quick_think_llm"] = "gpt-5.4-mini" # Use a different model
config = get_default_config()
config["max_debate_rounds"] = 1 # Adjust debate rounds
# Configure data vendors (default uses yfinance, no extra API keys needed)

View File

@@ -21,14 +21,20 @@ class TradingAgentsConfigPayload(TypedDict, total=False):
openai_reasoning_effort: Optional[str]
anthropic_effort: Optional[str]
output_language: str
portfolio_context: str
peer_context: str
peer_context_mode: str
max_debate_rounds: int
max_risk_discuss_rounds: int
max_recur_limit: int
analyst_node_timeout_secs: float
data_vendors: dict[str, str]
tool_vendors: dict[str, str]
selected_analysts: list[str]
llm_timeout: float
llm_max_retries: int
minimax_retry_attempts: int
minimax_retry_base_delay: float
timeout: float
max_retries: int
use_responses_api: bool
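Because the payload class is declared with `total=False`, every field is optional and callers may pass any subset. A minimal stand-in illustrating that behavior (the class below is a reduced sketch, not the real `TradingAgentsConfigPayload`):

```python
from typing import TypedDict

class ConfigPayloadSketch(TypedDict, total=False):
    # Reduced stand-in: same total=False semantics, fewer fields.
    output_language: str
    max_debate_rounds: int
    llm_timeout: float

# Partial payloads type-check; omitted keys are simply absent at runtime.
payload: ConfigPayloadSketch = {
    "output_language": "English",
    "max_debate_rounds": 1,
}
```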

View File

@@ -1,6 +1,7 @@
import json
from pathlib import Path
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.default_config import DEFAULT_CONFIG, get_default_config, load_project_env, normalize_runtime_llm_config
from tradingagents.graph.trading_graph import TradingAgentsGraph, _merge_with_default_config
@@ -31,6 +32,56 @@ def test_merge_with_default_config_merges_nested_vendor_settings():
assert merged["tool_vendors"]["get_stock_data"] == "alpha_vantage"
def test_get_default_config_prefers_runtime_minimax_env(monkeypatch):
monkeypatch.setenv("ANTHROPIC_BASE_URL", "https://api.minimaxi.com/anthropic")
monkeypatch.setenv("TRADINGAGENTS_MODEL", "MiniMax-M2.7-highspeed")
monkeypatch.setenv("MINIMAX_API_KEY", "test-minimax-key")
monkeypatch.delenv("TRADINGAGENTS_LLM_PROVIDER", raising=False)
monkeypatch.delenv("TRADINGAGENTS_BACKEND_URL", raising=False)
config = get_default_config()
assert config["llm_provider"] == "anthropic"
assert config["backend_url"] == "https://api.minimaxi.com/anthropic"
assert config["deep_think_llm"] == "MiniMax-M2.7-highspeed"
assert config["quick_think_llm"] == "MiniMax-M2.7-highspeed"
assert config["api_key"] == "test-minimax-key"
assert config["llm_timeout"] == 60.0
assert config["llm_max_retries"] == 1
assert config["minimax_retry_attempts"] == 2
def test_load_project_env_overrides_stale_shell_vars(monkeypatch, tmp_path):
monkeypatch.setenv("ANTHROPIC_BASE_URL", "https://stale.example.com/api")
env_file = tmp_path / ".env"
env_file.write_text("ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic\n", encoding="utf-8")
load_project_env(env_file)
assert Path(env_file).exists()
assert Path(env_file).read_text(encoding="utf-8")
assert Path(env_file).name == ".env"
assert __import__("os").environ["ANTHROPIC_BASE_URL"] == "https://api.minimaxi.com/anthropic"
def test_normalize_runtime_llm_config_keeps_model_and_canonicalizes_minimax_url():
normalized = normalize_runtime_llm_config(
{
"llm_provider": "anthropic",
"backend_url": "https://api.minimaxi.com/anthropic/",
"deep_think_llm": "MiniMax-M2.7-highspeed",
"quick_think_llm": "MiniMax-M2.7-highspeed",
}
)
assert normalized["backend_url"] == "https://api.minimaxi.com/anthropic"
assert normalized["deep_think_llm"] == "MiniMax-M2.7-highspeed"
assert normalized["quick_think_llm"] == "MiniMax-M2.7-highspeed"
assert normalized["llm_timeout"] == 60.0
assert normalized["llm_max_retries"] == 1
assert normalized["minimax_retry_attempts"] == 2
def test_log_state_persists_research_provenance(tmp_path):
graph = TradingAgentsGraph.__new__(TradingAgentsGraph)
graph.config = {"results_dir": str(tmp_path)}
@@ -77,3 +128,35 @@ def test_log_state_persists_research_provenance(tmp_path):
assert payload["investment_debate_state"]["research_mode"] == "degraded_synthesis"
assert payload["investment_debate_state"]["timed_out_nodes"] == ["Bull Researcher"]
assert payload["investment_debate_state"]["manager_confidence"] == 0.0
def test_normalize_decision_outputs_repairs_invalid_final_report():
graph = TradingAgentsGraph.__new__(TradingAgentsGraph)
final_state = {
"portfolio_context": "Current account is crowded in growth beta.",
"peer_context": "Within the same theme, this name ranks near the top on quality.",
"investment_plan": "RECOMMENDATION: BUY\nSimple execution plan: build on weakness.",
"trader_investment_plan": "TRADER_RATING: BUY\nFINAL TRANSACTION PROPOSAL: **BUY**",
"risk_debate_state": {
"judge_decision": "",
"history": "",
"aggressive_history": "",
"conservative_history": "",
"neutral_history": "",
"latest_speaker": "Judge",
"current_aggressive_response": "",
"current_conservative_response": "",
"current_neutral_response": "",
"count": 3,
},
"final_trade_decision": 'I will gather more market data. <tool_call>name="stock_data"</tool_call>',
}
normalized = TradingAgentsGraph._normalize_decision_outputs(graph, final_state)
assert normalized["final_trade_decision"] == "BUY"
assert normalized["final_trade_decision_structured"]["rating_source"] == "trader_plan"
assert normalized["final_trade_decision_structured"]["portfolio_context_used"] is True
assert normalized["final_trade_decision_structured"]["peer_context_used"] is True
assert normalized["final_trade_decision_report"].startswith("## Normalized Portfolio Decision")
assert normalized["risk_debate_state"]["judge_decision"] == normalized["final_trade_decision_report"]

View File

@@ -50,3 +50,8 @@ class ModelValidationTests(unittest.TestCase):
client.get_llm()
self.assertEqual(caught, [])
def test_minimax_anthropic_compatible_models_are_known(self):
for model in ("MiniMax-M2.7-highspeed", "MiniMax-M2.7"):
with self.subTest(model=model):
self.assertTrue(validate_model("anthropic", model))

View File

@@ -24,7 +24,7 @@ def create_fundamentals_analyst(llm):
if use_compact_analysis_prompt():
system_message = (
"You are a fundamentals analyst. Use `get_fundamentals` first, then only call statement tools if needed. Summarize the company in under 220 words with: business quality, growth/profitability, balance-sheet risk, cash-flow quality, and a trading implication. End with a Markdown table."
"You are a fundamentals analyst. Make at most one `get_fundamentals` call first, then only call statement tools if a specific gap remains. Avoid iterative follow-up tool calls. Summarize the company in under 220 words with: business quality, growth/profitability, balance-sheet risk, cash-flow quality, and a trading implication. End with a Markdown table."
+ get_language_instruction()
)
else:

View File

@@ -22,7 +22,9 @@ def create_market_analyst(llm):
if use_compact_analysis_prompt():
system_message = (
"""You are a market analyst. First call `get_stock_data`, then call `get_indicators` with 4 to 6 complementary indicators chosen from: `close_10_ema`, `close_50_sma`, `close_200_sma`, `macd`, `macds`, `macdh`, `rsi`, `boll`, `boll_ub`, `boll_lb`, `atr`, `vwma`.
"""You are a market analyst. Make at most two tool calls total:
1. Call `get_stock_data` once.
2. Call `get_indicators` once with 4 to 6 complementary indicators passed as a single comma-separated string chosen from: `close_10_ema`, `close_50_sma`, `close_200_sma`, `macd`, `macds`, `macdh`, `rsi`, `boll`, `boll_ub`, `boll_lb`, `atr`, `vwma`.
Pick indicators that cover trend, momentum, volatility, and volume without redundancy. Then produce a concise report with:
- market regime
@@ -31,7 +33,7 @@
- trade implications
- risk warnings
Keep the report under 250 words and end with a Markdown table of the key signals."""
Do not make repeated follow-up tool calls after the indicator batch returns. Keep the report under 250 words and end with a Markdown table of the key signals."""
+ get_language_instruction()
)
else:

View File

@@ -21,7 +21,7 @@ def create_news_analyst(llm):
if use_compact_analysis_prompt():
system_message = (
"You are a news analyst. Gather only the most relevant recent company and macro news. Summarize in under 180 words with: bullish catalysts, bearish catalysts, macro context, and likely near-term market impact. End with a Markdown table."
"You are a news analyst. Make at most one `get_news` call and one `get_global_news` call, then gather only the most relevant recent company and macro news. Summarize in under 180 words with: bullish catalysts, bearish catalysts, macro context, and likely near-term market impact. End with a Markdown table."
+ get_language_instruction()
)
else:

View File

@@ -19,7 +19,7 @@ def create_social_media_analyst(llm):
if use_compact_analysis_prompt():
system_message = (
"You are a sentiment analyst. Use `get_news` to infer recent company sentiment from news and public discussion. Summarize in under 180 words with: sentiment direction, what is driving it, whether sentiment confirms or contradicts price action, and the trading implication. End with a Markdown table."
"You are a sentiment analyst. Make at most one `get_news` call, then infer recent company sentiment from news and public discussion. Summarize in under 180 words with: sentiment direction, what is driving it, whether sentiment confirms or contradicts price action, and the trading implication. End with a Markdown table."
+ get_language_instruction()
)
else:

View File

@@ -1,9 +1,12 @@
from tradingagents.agents.utils.agent_utils import (
build_instrument_context,
build_optional_decision_context,
get_language_instruction,
summarize_structured_signal,
truncate_prompt_text,
use_compact_analysis_prompt,
)
from tradingagents.agents.utils.decision_utils import build_structured_decision
def create_portfolio_manager(llm, memory):
@@ -19,6 +22,16 @@ def create_portfolio_manager(llm, memory):
sentiment_report = state["sentiment_report"]
research_plan = state["investment_plan"]
trader_plan = state["trader_investment_plan"]
research_structured = state.get("investment_plan_structured") or {}
trader_structured = state.get("trader_investment_plan_structured") or {}
portfolio_context = state.get("portfolio_context", "")
peer_context = state.get("peer_context", "")
decision_context = build_optional_decision_context(
portfolio_context,
peer_context,
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
max_chars=550,
)
curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
past_memories = memory.get_memories(curr_situation, n_matches=2)
@@ -33,15 +46,25 @@
{instrument_context}
Use exactly one rating: Buy / Overweight / Hold / Underweight / Sell.
You already have enough evidence. Do not ask for more data and do not emit tool calls.
Return only:
1. Rating
2. Executive summary
3. Key risks
Return with this exact header first:
RATING: BUY|OVERWEIGHT|HOLD|UNDERWEIGHT|SELL
HOLD_SUBTYPE: DEFENSIVE_HOLD|STAGED_BUY_HOLD|STANDARD_HOLD|N/A
ENTRY_STYLE: IMMEDIATE|STAGED|WAIT_PULLBACK|EXISTING_ONLY|REDUCE|EXIT|UNKNOWN
SAME_THEME_RANK: LEADER|UPPER|MIDDLE|LOWER|LAGGARD|UNKNOWN
ACCOUNT_FIT: FAVORABLE|NEUTRAL|CROWDED_GROWTH|DEFENSIVE_REBALANCE|UNKNOWN
Then return only:
1. Executive summary
2. Key risks
Research plan: {truncate_prompt_text(research_plan, 500)}
Research signal summary: {summarize_structured_signal(research_structured)}
Trader plan: {truncate_prompt_text(trader_plan, 500)}
Trader signal summary: {summarize_structured_signal(trader_structured)}
Past lessons: {truncate_prompt_text(past_memory_str, 400)}
{decision_context}
Risk debate: {truncate_prompt_text(history, 1400)}{get_language_instruction()}"""
else:
prompt = f"""As the Portfolio Manager, synthesize the risk analysts' debate and deliver the final trading decision.
@@ -59,11 +82,19 @@ Risk debate: {truncate_prompt_text(history, 1400)}{get_language_instruction()}"""
**Context:**
- Research Manager's investment plan: **{research_plan}**
- Research Manager structured signal: **{summarize_structured_signal(research_structured)}**
- Trader's transaction proposal: **{trader_plan}**
- Trader structured signal: **{summarize_structured_signal(trader_structured)}**
- Lessons from past decisions: **{past_memory_str}**
{decision_context}
**Required Output Structure:**
1. **Rating**: State one of Buy / Overweight / Hold / Underweight / Sell.
1. Start with these exact header lines:
- `RATING: BUY|OVERWEIGHT|HOLD|UNDERWEIGHT|SELL`
- `HOLD_SUBTYPE: DEFENSIVE_HOLD|STAGED_BUY_HOLD|STANDARD_HOLD|N/A`
- `ENTRY_STYLE: IMMEDIATE|STAGED|WAIT_PULLBACK|EXISTING_ONLY|REDUCE|EXIT|UNKNOWN`
- `SAME_THEME_RANK: LEADER|UPPER|MIDDLE|LOWER|LAGGARD|UNKNOWN`
- `ACCOUNT_FIT: FAVORABLE|NEUTRAL|CROWDED_GROWTH|DEFENSIVE_REBALANCE|UNKNOWN`
2. **Executive Summary**: A concise action plan covering entry strategy, position sizing, key risk levels, and time horizon.
3. **Investment Thesis**: Detailed reasoning anchored in the analysts' debate and past reflections.
@@ -74,12 +105,26 @@ Risk debate: {truncate_prompt_text(history, 1400)}{get_language_instruction()}"""
---
Be decisive and ground every conclusion in specific evidence from the analysts.{get_language_instruction()}"""
Be decisive and ground every conclusion in specific evidence from the analysts.
Do not ask for more data and do not emit tool calls.{get_language_instruction()}"""
response = llm.invoke(prompt)
structured_decision = build_structured_decision(
response.content,
fallback_candidates=(
("trader_plan", trader_plan),
("investment_plan", research_plan),
),
default_rating="HOLD",
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
context_usage={
"portfolio_context": bool(str(portfolio_context).strip()),
"peer_context": bool(str(peer_context).strip()),
},
)
new_risk_debate_state = {
"judge_decision": response.content,
"judge_decision": structured_decision["report_text"],
"history": risk_debate_state["history"],
"aggressive_history": risk_debate_state["aggressive_history"],
"conservative_history": risk_debate_state["conservative_history"],
@@ -93,7 +138,9 @@ Be decisive and ground every conclusion in specific evidence from the analysts.{
return {
"risk_debate_state": new_risk_debate_state,
"final_trade_decision": response.content,
"final_trade_decision": structured_decision["rating"],
"final_trade_decision_report": structured_decision["report_text"],
"final_trade_decision_structured": structured_decision,
}
return portfolio_manager_node
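The exact header lines required by the prompts above lend themselves to a small parser. This is a sketch of the header-extraction step only; the real `build_structured_decision` also applies fallback candidates and context flags, and the function name here is illustrative.

```python
import re

def parse_decision_headers(text: str) -> dict:
    """Extract the RATING/HOLD_SUBTYPE/... header lines from a decision."""
    fields = ("RATING", "HOLD_SUBTYPE", "ENTRY_STYLE", "SAME_THEME_RANK", "ACCOUNT_FIT")
    out = {}
    for field in fields:
        # [A-Z_/]+ also matches the N/A subtype; missing headers yield None.
        match = re.search(rf"^{field}:\s*([A-Z_/]+)\s*$", text, re.MULTILINE)
        out[field.lower()] = match.group(1) if match else None
    return out
```

Anchoring each pattern to a full line keeps prose mentions of a rating (e.g. "a BUY here would be premature") from being misread as the header.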

View File

@@ -1,8 +1,10 @@
from tradingagents.agents.utils.agent_utils import (
build_instrument_context,
build_optional_decision_context,
truncate_prompt_text,
use_compact_analysis_prompt,
)
from tradingagents.agents.utils.decision_utils import build_structured_decision
def create_research_manager(llm, memory):
@@ -15,6 +17,14 @@ def create_research_manager(llm, memory):
fundamentals_report = state["fundamentals_report"]
investment_debate_state = state["investment_debate_state"]
portfolio_context = state.get("portfolio_context", "")
peer_context = state.get("peer_context", "")
decision_context = build_optional_decision_context(
portfolio_context,
peer_context,
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
max_chars=500,
)
curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
past_memories = memory.get_memories(
@@ -30,7 +40,7 @@
prompt = f"""You are the research manager. Decide Buy, Sell, or Hold based on the debate.
Return a concise response with:
1. Recommendation
1. Recommendation line formatted exactly as `RECOMMENDATION: BUY|HOLD|SELL`
2. Top reasons
3. Simple execution plan
@@ -39,9 +49,12 @@
Past lessons:
{instrument_context}
{decision_context}
Debate history:
{truncate_prompt_text(history, 700)}
You already have enough evidence. Do not ask for more data and do not emit tool calls.
Keep the full answer under 180 words."""
else:
prompt = f"""As the portfolio manager and debate facilitator, your role is to critically evaluate this round of debate and make a definitive decision: align with the bear analyst, the bull analyst, or choose Hold only if it is strongly justified based on the arguments presented.
@@ -60,10 +73,24 @@ Here are your past reflections on mistakes:
{instrument_context}
{decision_context}
Here is the debate:
Debate History:
{history}"""
{history}
Start the answer with `RECOMMENDATION: BUY|HOLD|SELL`.
You already have enough evidence. Do not ask for more data and do not emit tool calls."""
response = llm.invoke(prompt)
structured_plan = build_structured_decision(
response.content,
default_rating="HOLD",
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
context_usage={
"portfolio_context": bool(str(portfolio_context).strip()),
"peer_context": bool(str(peer_context).strip()),
},
)
new_investment_debate_state = {
"judge_decision": response.content,
@@ -77,6 +104,7 @@ Debate History:
return {
"investment_debate_state": new_investment_debate_state,
"investment_plan": response.content,
"investment_plan_structured": structured_plan,
}
return research_manager_node

View File

@@ -1,11 +1,23 @@
from tradingagents.agents.utils.agent_utils import (
truncate_prompt_text,
use_compact_analysis_prompt,
)
from tradingagents.agents.utils.subagent_runner import (
run_parallel_subagents,
synthesize_subagent_results,
)
def create_bear_researcher(llm, memory):
"""
Create a Bear Researcher node that uses parallel subagents for each dimension.
Instead of a single large LLM call that times out, this implementation:
1. Spawns parallel subagents for market, sentiment, news, fundamentals
2. Each subagent has its own timeout (15s default)
3. Synthesizes results into a unified bear argument
4. If some subagents fail, still produces output with available results
"""
def bear_node(state) -> dict:
investment_debate_state = state["investment_debate_state"]
history = investment_debate_state.get("history", "")
@@ -27,51 +39,168 @@
for i, rec in enumerate(past_memories, 1):
past_memory_str += rec["recommendation"] + "\n\n"
if use_compact_analysis_prompt():
prompt = f"""You are a Bear Analyst. Make the strongest concise short case against the stock.
# Build dimension-specific prompts for parallel execution
dimension_configs = []
Use only the highest-signal evidence from the reports below. Address the latest bull point directly. Keep the answer under 140 words and end with a clear stance.
# Market analysis subagent
market_prompt = f"""You are a Bear Analyst focusing on MARKET data.
Market: {truncate_prompt_text(market_research_report, 420)}
Sentiment: {truncate_prompt_text(sentiment_report, 220)}
News: {truncate_prompt_text(news_report, 220)}
Fundamentals: {truncate_prompt_text(fundamentals_report, 320)}
Debate history: {truncate_prompt_text(history, 260)}
Last bull argument: {truncate_prompt_text(current_response, 180)}
Past lessons: {truncate_prompt_text(past_memory_str, 180)}
Based ONLY on the market report below, make a concise bear case (under 80 words).
Focus on: price weakness, resistance rejection, moving average bearish alignment, overbought conditions.
Address the latest bull argument directly if provided.
Market Report:
{truncate_prompt_text(market_research_report, 500)}
Debate History (for context):
{truncate_prompt_text(history, 200)}
Last Bull Argument:
{truncate_prompt_text(current_response, 150)}
Return your analysis in this format:
BEAR CASE: [your concise bear argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
else:
prompt = f"""You are a Bear Analyst making the case against investing in the stock. Your goal is to present a well-reasoned argument emphasizing risks, challenges, and negative indicators. Leverage the provided research and data to highlight potential downsides and counter bullish arguments effectively.
dimension_configs.append({
"dimension": "market",
"prompt": market_prompt,
})
Key points to focus on:
# Sentiment analysis subagent
sentiment_prompt = f"""You are a Bear Analyst focusing on SENTIMENT data.
- Risks and Challenges: Highlight factors like market saturation, financial instability, or macroeconomic threats that could hinder the stock's performance.
- Competitive Weaknesses: Emphasize vulnerabilities such as weaker market positioning, declining innovation, or threats from competitors.
- Negative Indicators: Use evidence from financial data, market trends, or recent adverse news to support your position.
- Bull Counterpoints: Critically analyze the bull argument with specific data and sound reasoning, exposing weaknesses or over-optimistic assumptions.
- Engagement: Present your argument in a conversational style, directly engaging with the bull analyst's points and debating effectively rather than simply listing facts.
Based ONLY on the sentiment report below, make a concise bear case (under 80 words).
Focus on: negative sentiment trends, social media bearishness, analyst downgrades.
Address the latest bull argument directly if provided.
Resources available:
Sentiment Report:
{truncate_prompt_text(sentiment_report, 300)}
Market research report: {market_research_report}
Social media sentiment report: {sentiment_report}
Latest world affairs news: {news_report}
Company fundamentals report: {fundamentals_report}
Conversation history of the debate: {history}
Last bull argument: {current_response}
Reflections from similar situations and lessons learned: {past_memory_str}
Use this information to deliver a compelling bear argument, refute the bull's claims, and engage in a dynamic debate that demonstrates the risks and weaknesses of investing in the stock. You must also address reflections and learn from lessons and mistakes you made in the past.
Debate History (for context):
{truncate_prompt_text(history, 200)}
Last Bull Argument:
{truncate_prompt_text(current_response, 150)}
Return your analysis in this format:
BEAR CASE: [your concise bear argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "sentiment",
"prompt": sentiment_prompt,
})
# News analysis subagent
news_prompt = f"""You are a Bear Analyst focusing on NEWS data.
Based ONLY on the news report below, make a concise bear case (under 80 words).
Focus on: negative news, regulatory risks, competitive threats, strategic setbacks.
Address the latest bull argument directly if provided.
News Report:
{truncate_prompt_text(news_report, 300)}
Debate History (for context):
{truncate_prompt_text(history, 200)}
Last Bull Argument:
{truncate_prompt_text(current_response, 150)}
Return your analysis in this format:
BEAR CASE: [your concise bear argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "news",
"prompt": news_prompt,
})
# Fundamentals analysis subagent
fundamentals_prompt = f"""You are a Bear Analyst focusing on FUNDAMENTALS data.
Based ONLY on the fundamentals report below, make a concise bear case (under 80 words).
Focus on: declining revenues, margin compression, high debt, deteriorating cash flow, overvaluation.
Address the latest bull argument directly if provided.
Fundamentals Report:
{truncate_prompt_text(fundamentals_report, 400)}
Debate History (for context):
{truncate_prompt_text(history, 200)}
Last Bull Argument:
{truncate_prompt_text(current_response, 150)}
Past Lessons:
{truncate_prompt_text(past_memory_str, 150)}
Return your analysis in this format:
BEAR CASE: [your concise bear argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "fundamentals",
"prompt": fundamentals_prompt,
})
# Run all subagents in parallel with 25s timeout each (LLM can be slow)
subagent_results = run_parallel_subagents(
llm=llm,
dimension_configs=dimension_configs,
timeout_per_subagent=25.0,
max_workers=4,
)
# Synthesize results into a unified bear argument
synthesized_dimensions, synthesis_metadata = synthesize_subagent_results(
subagent_results,
max_chars_per_result=200,
)
# Generate the final bear argument using synthesis
synthesis_prompt = f"""You are a Bear Analyst. Based on the following dimension analyses from your team,
synthesize a compelling bear argument (under 200 words) for this stock.
=== TEAM ANALYSIS RESULTS ===
{synthesized_dimensions}
=== SYNTHESIS INSTRUCTIONS ===
1. Combine the strongest bear points from each dimension
2. Address the latest bull argument directly
3. End with a clear stance: SELL, HOLD (with reasons), or BUY (if the bull case is overwhelming)
Be decisive. Do not hedge. Present the bear case forcefully.
"""
try:
synthesis_response = llm.invoke(synthesis_prompt)
final_argument = synthesis_response.content if hasattr(synthesis_response, 'content') else str(synthesis_response)
except Exception as e:
# Fallback: just use synthesized dimensions directly
final_argument = f"""BEAR SYNTHESIS FAILED: {str(e)}
=== AVAILABLE ANALYSES ===
{synthesized_dimensions}
FALLBACK CONCLUSION: Based on available data, the bear case is MIXED.
Further analysis is needed before making a definitive recommendation.
"""
response = llm.invoke(prompt)
argument = f"Bear Analyst: {final_argument}"
argument = f"Bear Analyst: {response.content}"
# Add subagent metadata to the argument for transparency
timing_info = ", ".join([
f"{dim}={timing}s"
for dim, timing in synthesis_metadata["subagent_timings"].items()
])
metadata_note = f"\n\n[Subagent timing: {timing_info}]"
new_investment_debate_state = {
"history": history + "\n" + argument,
"bear_history": bear_history + "\n" + argument,
"history": history + "\n" + argument + metadata_note,
"bear_history": bear_history + "\n" + argument + metadata_note,
"bull_history": investment_debate_state.get("bull_history", ""),
"current_response": argument,
"current_response": argument + metadata_note,
"count": investment_debate_state["count"] + 1,
}
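The `subagent_runner` module itself is not shown in this commit; as a reference point, `run_parallel_subagents` could plausibly be a thin wrapper over a thread pool. The sketch below is an assumption about its shape (the function name and config keys come from the call sites above, but the return structure and error handling are guesses):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_parallel_subagents(llm, dimension_configs, timeout_per_subagent=25.0, max_workers=4):
    """Invoke one LLM call per dimension concurrently; a failed or slow
    subagent yields a failure record instead of aborting the whole batch."""
    results = {}

    def _invoke(config):
        start = time.monotonic()
        response = llm.invoke(config["prompt"])
        text = response.content if hasattr(response, "content") else str(response)
        return config["dimension"], text, round(time.monotonic() - start, 1)

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(_invoke, cfg): cfg["dimension"] for cfg in dimension_configs}
        for future, dimension in futures.items():
            try:
                # Note: result() timeouts are per-call, so worst-case total
                # wait can exceed a single timeout_per_subagent window.
                dim, text, elapsed = future.result(timeout=timeout_per_subagent)
                results[dim] = {"ok": True, "text": text, "elapsed": elapsed}
            except Exception as exc:  # timeout or LLM error: record and continue
                results[dimension] = {"ok": False, "text": f"[{dimension} failed: {exc}]", "elapsed": None}
    return results
```

With this shape, the bear/bull nodes can synthesize from whatever subset of dimensions succeeded, which is the degradation behavior the docstring promises.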

View File

@ -1,11 +1,23 @@
from tradingagents.agents.utils.agent_utils import (
truncate_prompt_text,
use_compact_analysis_prompt,
)
from tradingagents.agents.utils.subagent_runner import (
run_parallel_subagents,
synthesize_subagent_results,
)
def create_bull_researcher(llm, memory):
"""
Create a Bull Researcher node that uses parallel subagents for each dimension.
Instead of a single large LLM call that can time out, this implementation:
1. Spawns parallel subagents for market, sentiment, news, fundamentals
2. Gives each subagent its own timeout (25s default)
3. Synthesizes results into a unified bull argument
4. If some subagents fail, still produces output with available results
"""
def bull_node(state) -> dict:
investment_debate_state = state["investment_debate_state"]
history = investment_debate_state.get("history", "")
@ -27,49 +39,168 @@ def create_bull_researcher(llm, memory):
for i, rec in enumerate(past_memories, 1):
past_memory_str += rec["recommendation"] + "\n\n"
if use_compact_analysis_prompt():
prompt = f"""You are a Bull Analyst. Make the strongest concise long case for the stock.
# Build dimension-specific prompts for parallel execution
dimension_configs = []
Use only the highest-signal evidence from the reports below. Address the latest bear point directly. Keep the answer under 140 words and end with a clear stance.
# Market analysis subagent
market_prompt = f"""You are a Bull Analyst focusing on MARKET data.
Market: {truncate_prompt_text(market_research_report, 420)}
Sentiment: {truncate_prompt_text(sentiment_report, 220)}
News: {truncate_prompt_text(news_report, 220)}
Fundamentals: {truncate_prompt_text(fundamentals_report, 320)}
Debate history: {truncate_prompt_text(history, 260)}
Last bear argument: {truncate_prompt_text(current_response, 180)}
Past lessons: {truncate_prompt_text(past_memory_str, 180)}
Based ONLY on the market report below, make a concise bull case (under 80 words).
Focus on: price trends, support/resistance, moving averages, technical indicators.
Address the latest bear argument directly if provided.
Market Report:
{truncate_prompt_text(market_research_report, 500)}
Debate History (for context):
{truncate_prompt_text(history, 200)}
Last Bear Argument:
{truncate_prompt_text(current_response, 150)}
Return your analysis in this format:
BULL CASE: [your concise bull argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
else:
prompt = f"""You are a Bull Analyst advocating for investing in the stock. Your task is to build a strong, evidence-based case emphasizing growth potential, competitive advantages, and positive market indicators. Leverage the provided research and data to address concerns and counter bearish arguments effectively.
dimension_configs.append({
"dimension": "market",
"prompt": market_prompt,
})
Key points to focus on:
- Growth Potential: Highlight the company's market opportunities, revenue projections, and scalability.
- Competitive Advantages: Emphasize factors like unique products, strong branding, or dominant market positioning.
- Positive Indicators: Use financial health, industry trends, and recent positive news as evidence.
- Bear Counterpoints: Critically analyze the bear argument with specific data and sound reasoning, addressing concerns thoroughly and showing why the bull perspective holds stronger merit.
- Engagement: Present your argument in a conversational style, engaging directly with the bear analyst's points and debating effectively rather than just listing data.
# Sentiment analysis subagent
sentiment_prompt = f"""You are a Bull Analyst focusing on SENTIMENT data.
Resources available:
Market research report: {market_research_report}
Social media sentiment report: {sentiment_report}
Latest world affairs news: {news_report}
Company fundamentals report: {fundamentals_report}
Conversation history of the debate: {history}
Last bear argument: {current_response}
Reflections from similar situations and lessons learned: {past_memory_str}
Use this information to deliver a compelling bull argument, refute the bear's concerns, and engage in a dynamic debate that demonstrates the strengths of the bull position. You must also address reflections and learn from lessons and mistakes you made in the past.
Based ONLY on the sentiment report below, make a concise bull case (under 80 words).
Focus on: positive sentiment trends, social media bullishness, analyst upgrades.
Address the latest bear argument directly if provided.
Sentiment Report:
{truncate_prompt_text(sentiment_report, 300)}
Debate History (for context):
{truncate_prompt_text(history, 200)}
Last Bear Argument:
{truncate_prompt_text(current_response, 150)}
Return your analysis in this format:
BULL CASE: [your concise bull argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "sentiment",
"prompt": sentiment_prompt,
})
# News analysis subagent
news_prompt = f"""You are a Bull Analyst focusing on NEWS data.
Based ONLY on the news report below, make a concise bull case (under 80 words).
Focus on: positive news, catalysts, strategic developments, partnerships.
Address the latest bear argument directly if provided.
News Report:
{truncate_prompt_text(news_report, 300)}
Debate History (for context):
{truncate_prompt_text(history, 200)}
Last Bear Argument:
{truncate_prompt_text(current_response, 150)}
Return your analysis in this format:
BULL CASE: [your concise bull argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "news",
"prompt": news_prompt,
})
# Fundamentals analysis subagent
fundamentals_prompt = f"""You are a Bull Analyst focusing on FUNDAMENTALS data.
Based ONLY on the fundamentals report below, make a concise bull case (under 80 words).
Focus on: revenue growth, profit margins, cash flow, valuation metrics.
Address the latest bear argument directly if provided.
Fundamentals Report:
{truncate_prompt_text(fundamentals_report, 400)}
Debate History (for context):
{truncate_prompt_text(history, 200)}
Last Bear Argument:
{truncate_prompt_text(current_response, 150)}
Past Lessons:
{truncate_prompt_text(past_memory_str, 150)}
Return your analysis in this format:
BULL CASE: [your concise bull argument]
CONFIDENCE: [HIGH/MEDIUM/LOW]
"""
dimension_configs.append({
"dimension": "fundamentals",
"prompt": fundamentals_prompt,
})
# Run all subagents in parallel with 25s timeout each (LLM can be slow)
subagent_results = run_parallel_subagents(
llm=llm,
dimension_configs=dimension_configs,
timeout_per_subagent=25.0,
max_workers=4,
)
# Synthesize results into a unified bull argument
synthesized_dimensions, synthesis_metadata = synthesize_subagent_results(
subagent_results,
max_chars_per_result=200,
)
# Generate the final bull argument using synthesis
synthesis_prompt = f"""You are a Bull Analyst. Based on the following dimension analyses from your team,
synthesize a compelling bull argument (under 200 words) for this stock.
=== TEAM ANALYSIS RESULTS ===
{synthesized_dimensions}
=== SYNTHESIS INSTRUCTIONS ===
1. Combine the strongest bull points from each dimension
2. Address the latest bear argument directly
3. End with a clear stance: BUY, HOLD (with reasons), or SELL (if the bear case is overwhelming)
Be decisive. Do not hedge. Present the bull case forcefully.
"""
try:
synthesis_response = llm.invoke(synthesis_prompt)
final_argument = synthesis_response.content if hasattr(synthesis_response, 'content') else str(synthesis_response)
except Exception as e:
# Fallback: just use synthesized dimensions directly
final_argument = f"""BULL SYNTHESIS FAILED: {str(e)}
=== AVAILABLE ANALYSES ===
{synthesized_dimensions}
FALLBACK CONCLUSION: Based on available data, the bull case is MIXED.
Further analysis is needed before making a definitive recommendation.
"""
response = llm.invoke(prompt)
argument = f"Bull Analyst: {final_argument}"
argument = f"Bull Analyst: {response.content}"
# Add subagent metadata to the argument for transparency
timing_info = ", ".join([
f"{dim}={timing}s"
for dim, timing in synthesis_metadata["subagent_timings"].items()
])
metadata_note = f"\n\n[Subagent timing: {timing_info}]"
new_investment_debate_state = {
"history": history + "\n" + argument,
"bull_history": bull_history + "\n" + argument,
"history": history + "\n" + argument + metadata_note,
"bull_history": bull_history + "\n" + argument + metadata_note,
"bear_history": investment_debate_state.get("bear_history", ""),
"current_response": argument,
"current_response": argument + metadata_note,
"count": investment_debate_state["count"] + 1,
}
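`synthesize_subagent_results` is also not included in this diff. Judging from its call sites (it receives the subagent results plus `max_chars_per_result`, and returns a text block plus metadata carrying `subagent_timings`), a minimal sketch might look like this; everything beyond those observed constraints is an assumption:

```python
def synthesize_subagent_results(subagent_results, max_chars_per_result=200):
    """Flatten per-dimension subagent outputs into one prompt-ready block,
    truncating each result and collecting timing metadata."""
    sections = []
    timings = {}
    for dimension, result in subagent_results.items():
        text = str(result.get("text", "")).strip()
        if len(text) > max_chars_per_result:
            text = text[:max_chars_per_result].rstrip() + "..."
        status = "OK" if result.get("ok") else "FAILED"
        sections.append(f"[{dimension.upper()} ({status})]\n{text}")
        timings[dimension] = result.get("elapsed")
    metadata = {"subagent_timings": timings}
    return "\n\n".join(sections), metadata
```

The synthesis prompt then embeds the returned block verbatim, so per-dimension truncation here is what keeps the final call small enough to avoid the original timeout problem.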

View File

@ -1,5 +1,7 @@
from tradingagents.agents.utils.agent_utils import (
build_optional_decision_context,
summarize_structured_signal,
truncate_prompt_text,
use_compact_analysis_prompt,
)
@ -20,11 +22,22 @@ def create_aggressive_debator(llm):
fundamentals_report = state["fundamentals_report"]
trader_decision = state["trader_investment_plan"]
trader_structured = state.get("trader_investment_plan_structured") or {}
research_structured = state.get("investment_plan_structured") or {}
decision_context = build_optional_decision_context(
state.get("portfolio_context", ""),
state.get("peer_context", ""),
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
max_chars=400,
)
if use_compact_analysis_prompt():
prompt = f"""You are the Aggressive Risk Analyst. Defend upside and attack excessive caution.
Research signal: {summarize_structured_signal(research_structured)}
Trader signal: {summarize_structured_signal(trader_structured)}
Trader decision: {truncate_prompt_text(trader_decision, 500)}
{decision_context}
Market report: {truncate_prompt_text(market_research_report, 500)}
Sentiment report: {truncate_prompt_text(sentiment_report, 350)}
News report: {truncate_prompt_text(news_report, 350)}
@ -39,6 +52,10 @@ Keep it under 180 words and focus on 2-3 high-upside arguments."""
{trader_decision}
Structured research signal: {summarize_structured_signal(research_structured)}
Structured trader signal: {summarize_structured_signal(trader_structured)}
{decision_context}
Your task is to create a compelling case for the trader's decision by questioning and critiquing the conservative and neutral stances to demonstrate why your high-reward perspective offers the best path forward. Incorporate insights from the following sources into your arguments:
Market Research Report: {market_research_report}

View File

@ -1,5 +1,7 @@
from tradingagents.agents.utils.agent_utils import (
build_optional_decision_context,
summarize_structured_signal,
truncate_prompt_text,
use_compact_analysis_prompt,
)
@ -20,11 +22,22 @@ def create_conservative_debator(llm):
fundamentals_report = state["fundamentals_report"]
trader_decision = state["trader_investment_plan"]
trader_structured = state.get("trader_investment_plan_structured") or {}
research_structured = state.get("investment_plan_structured") or {}
decision_context = build_optional_decision_context(
state.get("portfolio_context", ""),
state.get("peer_context", ""),
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
max_chars=400,
)
if use_compact_analysis_prompt():
prompt = f"""You are the Conservative Risk Analyst. Focus on downside protection and capital preservation.
Research signal: {summarize_structured_signal(research_structured)}
Trader signal: {summarize_structured_signal(trader_structured)}
Trader decision: {truncate_prompt_text(trader_decision, 500)}
{decision_context}
Market report: {truncate_prompt_text(market_research_report, 500)}
Sentiment report: {truncate_prompt_text(sentiment_report, 350)}
News report: {truncate_prompt_text(news_report, 350)}
@ -39,6 +52,10 @@ Keep it under 180 words and focus on 2-3 main risks."""
{trader_decision}
Structured research signal: {summarize_structured_signal(research_structured)}
Structured trader signal: {summarize_structured_signal(trader_structured)}
{decision_context}
Your task is to actively counter the arguments of the Aggressive and Neutral Analysts, highlighting where their views may overlook potential threats or fail to prioritize sustainability. Respond directly to their points, drawing from the following data sources to build a convincing case for a low-risk approach adjustment to the trader's decision:
Market Research Report: {market_research_report}

View File

@ -1,5 +1,7 @@
from tradingagents.agents.utils.agent_utils import (
build_optional_decision_context,
summarize_structured_signal,
truncate_prompt_text,
use_compact_analysis_prompt,
)
@ -20,11 +22,22 @@ def create_neutral_debator(llm):
fundamentals_report = state["fundamentals_report"]
trader_decision = state["trader_investment_plan"]
trader_structured = state.get("trader_investment_plan_structured") or {}
research_structured = state.get("investment_plan_structured") or {}
decision_context = build_optional_decision_context(
state.get("portfolio_context", ""),
state.get("peer_context", ""),
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
max_chars=400,
)
if use_compact_analysis_prompt():
prompt = f"""You are the Neutral Risk Analyst. Balance upside and downside and prefer robust execution.
Research signal: {summarize_structured_signal(research_structured)}
Trader signal: {summarize_structured_signal(trader_structured)}
Trader decision: {truncate_prompt_text(trader_decision, 500)}
{decision_context}
Market report: {truncate_prompt_text(market_research_report, 500)}
Sentiment report: {truncate_prompt_text(sentiment_report, 350)}
News report: {truncate_prompt_text(news_report, 350)}
@ -39,6 +52,10 @@ Keep it under 180 words and argue for the most balanced path."""
{trader_decision}
Structured research signal: {summarize_structured_signal(research_structured)}
Structured trader signal: {summarize_structured_signal(trader_structured)}
{decision_context}
Your task is to challenge both the Aggressive and Conservative Analysts, pointing out where each perspective may be overly optimistic or overly cautious. Use insights from the following data sources to support a moderate, sustainable strategy to adjust the trader's decision:
Market Research Report: {market_research_report}

View File

@ -1,6 +1,11 @@
import functools
from tradingagents.agents.utils.agent_utils import build_instrument_context
from tradingagents.agents.utils.agent_utils import (
build_instrument_context,
build_optional_decision_context,
summarize_structured_signal,
)
from tradingagents.agents.utils.decision_utils import build_structured_decision
def create_trader(llm, memory):
@ -12,6 +17,9 @@ def create_trader(llm, memory):
sentiment_report = state["sentiment_report"]
news_report = state["news_report"]
fundamentals_report = state["fundamentals_report"]
portfolio_context = state.get("portfolio_context", "")
peer_context = state.get("peer_context", "")
research_plan_structured = state.get("investment_plan_structured") or {}
curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
past_memories = memory.get_memories(curr_situation, n_matches=2)
@ -23,24 +31,55 @@ def create_trader(llm, memory):
else:
past_memory_str = "No past memories found."
decision_context = build_optional_decision_context(
portfolio_context,
peer_context,
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
max_chars=500,
)
context = {
"role": "user",
"content": f"Based on a comprehensive analysis by a team of analysts, here is an investment plan tailored for {company_name}. {instrument_context} This plan incorporates insights from current technical market trends, macroeconomic indicators, and social media sentiment. Use this plan as a foundation for evaluating your next trading decision.\n\nProposed Investment Plan: {investment_plan}\n\nLeverage these insights to make an informed and strategic decision.",
"content": (
f"Based on a comprehensive analysis by a team of analysts, here is an investment plan tailored for {company_name}. "
f"{instrument_context} This plan incorporates insights from current technical market trends, macroeconomic indicators, and social media sentiment. "
"Use this plan as a foundation for evaluating your next trading decision.\n\n"
f"Research signal summary: {summarize_structured_signal(research_plan_structured)}\n"
f"{decision_context}\n\n"
f"Proposed Investment Plan: {investment_plan}\n\n"
"Leverage these insights to make an informed and strategic decision."
),
}
messages = [
{
"role": "system",
"content": f"""You are a trading agent analyzing market data to make investment decisions. Based on your analysis, provide a specific recommendation to buy, sell, or hold. End with a firm decision and always conclude your response with 'FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL**' to confirm your recommendation. Apply lessons from past decisions to strengthen your analysis. Here are reflections from similar situations you traded in and the lessons learned: {past_memory_str}""",
"content": (
"You are a trading agent analyzing market data to make investment decisions. "
"Based on your analysis, provide a specific recommendation to buy, sell, or hold. "
"Include a machine-readable line formatted exactly as `TRADER_RATING: BUY|HOLD|SELL` and "
"always conclude your response with `FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL**`. "
"Do not emit tool calls or ask for more data. "
f"Apply lessons from past decisions to strengthen your analysis. Here are reflections from similar situations you traded in and the lessons learned: {past_memory_str}"
),
},
context,
]
result = llm.invoke(messages)
structured_plan = build_structured_decision(
result.content,
default_rating="HOLD",
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
context_usage={
"portfolio_context": bool(str(portfolio_context).strip()),
"peer_context": bool(str(peer_context).strip()),
},
)
return {
"messages": [result],
"trader_investment_plan": result.content,
"trader_investment_plan_structured": structured_plan,
"sender": name,
}
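The trader's system prompt now demands a machine-readable `TRADER_RATING: BUY|HOLD|SELL` line in addition to the final proposal, and `build_structured_decision` (defined in `decision_utils`, not shown here) presumably parses it. A hypothetical extraction helper illustrating that contract (names and fallback order are assumptions, not the actual `decision_utils` implementation):

```python
import re

def extract_trader_rating(text: str, default_rating: str = "HOLD") -> str:
    """Parse the machine-readable rating the trader prompt requires."""
    match = re.search(r"TRADER_RATING:\s*(BUY|HOLD|SELL)", text)
    if match:
        return match.group(1)
    # Fall back to the human-facing proposal line if the strict line is missing.
    match = re.search(r"FINAL TRANSACTION PROPOSAL:\s*\*{0,2}(BUY|HOLD|SELL)", text)
    return match.group(1) if match else default_rating
```

Asking for both lines gives downstream parsing a strict primary channel plus a lenient fallback, which matches the `default_rating="HOLD"` argument passed to `build_structured_decision` above.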

View File

@ -75,6 +75,9 @@ class RiskDebateState(TypedDict):
class AgentState(MessagesState):
company_of_interest: Annotated[str, "Company that we are interested in trading"]
trade_date: Annotated[str, "What date we are trading at"]
portfolio_context: Annotated[str, "Optional portfolio/account context for this analysis"]
peer_context: Annotated[str, "Optional same-theme or peer ranking context for this analysis"]
peer_context_mode: Annotated[str, "Mode describing whether peer_context is same-theme normalized or only a book snapshot"]
sender: Annotated[str, "Agent that sent this message"]
@ -91,11 +94,21 @@ class AgentState(MessagesState):
InvestDebateState, "Current state of the debate on if to invest or not"
]
investment_plan: Annotated[str, "Plan generated by the Analyst"]
investment_plan_structured: Annotated[
Mapping[str, Any], "Structured metadata extracted from the research-manager decision"
]
trader_investment_plan: Annotated[str, "Plan generated by the Trader"]
trader_investment_plan_structured: Annotated[
Mapping[str, Any], "Structured metadata extracted from the trader decision"
]
# risk management team discussion step
risk_debate_state: Annotated[
RiskDebateState, "Current state of the debate on evaluating risk"
]
final_trade_decision: Annotated[str, "Final decision made by the Risk Analysts"]
final_trade_decision_report: Annotated[str, "Human-readable final decision report"]
final_trade_decision_structured: Annotated[
Mapping[str, Any], "Structured metadata extracted from the portfolio-manager decision"
]

View File

@ -1,3 +1,5 @@
from typing import Any, Mapping
from langchain_core.messages import HumanMessage, RemoveMessage
# Import tools from separate utility files
@ -55,6 +57,59 @@ def truncate_prompt_text(text: str, max_chars: int = 1200) -> str:
return text[:max_chars].rstrip() + "\n...[truncated]..."
def build_optional_decision_context(
portfolio_context: str | None,
peer_context: str | None,
*,
peer_context_mode: str = "UNSPECIFIED",
max_chars: int = 700,
) -> str:
sections: list[str] = []
if str(portfolio_context or "").strip():
sections.append(
f"Portfolio context: {truncate_prompt_text(str(portfolio_context), max_chars)}"
)
if str(peer_context or "").strip():
mode = str(peer_context_mode or "UNSPECIFIED").strip().upper()
if mode == "SAME_THEME_NORMALIZED":
sections.append(
"Peer context mode: SAME_THEME_NORMALIZED. "
"You may use this context when deciding SAME_THEME_RANK if the evidence is explicit."
)
sections.append(
f"Peer / same-theme context: {truncate_prompt_text(str(peer_context), max_chars)}"
)
else:
sections.append(
f"Peer context mode: {mode}. This context is not same-theme normalized. "
"Treat SAME_THEME_RANK as UNKNOWN unless the context itself contains explicit same-theme evidence."
)
sections.append(
f"Peer universe context: {truncate_prompt_text(str(peer_context), max_chars)}"
)
return "\n".join(sections)
def summarize_structured_signal(payload: Mapping[str, Any] | None) -> str:
if not payload:
return "rating=UNKNOWN"
parts = [f"rating={payload.get('rating', 'UNKNOWN')}"]
hold_subtype = payload.get("hold_subtype")
if hold_subtype and hold_subtype != "N/A":
parts.append(f"hold_subtype={hold_subtype}")
entry_style = payload.get("entry_style")
if entry_style and entry_style != "UNKNOWN":
parts.append(f"entry_style={entry_style}")
same_theme_rank = payload.get("same_theme_rank")
if same_theme_rank and same_theme_rank != "UNKNOWN":
parts.append(f"same_theme_rank={same_theme_rank}")
account_fit = payload.get("account_fit")
if account_fit and account_fit != "UNKNOWN":
parts.append(f"account_fit={account_fit}")
return ", ".join(parts)
def build_instrument_context(ticker: str) -> str:
"""Describe the exact instrument so agents preserve exchange-qualified tickers."""
return (

View File

@ -6,7 +6,7 @@ from tradingagents.dataflows.interface import route_to_vendor
@tool
def get_stock_data(
symbol: Annotated[str, "ticker symbol of the company"],
start_date: Annotated[str, "Start date in yyyy-mm-dd format"],
start_date: Annotated[str, "Start date in yyyy-mm-dd format. Prefer recent windows unless a longer history is strictly necessary."],
end_date: Annotated[str, "End date in yyyy-mm-dd format"],
) -> str:
"""
@ -14,7 +14,7 @@ def get_stock_data(
Uses the configured core_stock_apis vendor.
Args:
symbol (str): Ticker symbol of the company, e.g. AAPL, TSM
start_date (str): Start date in yyyy-mm-dd format
start_date (str): Start date in yyyy-mm-dd format. Prefer recent windows unless a longer history is strictly necessary.
end_date (str): End date in yyyy-mm-dd format
Returns:
str: A formatted dataframe containing the stock price data for the specified ticker symbol in the specified date range.

View File

@ -5,20 +5,20 @@ from tradingagents.dataflows.interface import route_to_vendor
@tool
def get_indicators(
symbol: Annotated[str, "ticker symbol of the company"],
indicator: Annotated[str, "technical indicator to get the analysis and report of"],
indicator: Annotated[str, "technical indicator name or a comma-separated list of indicator names for batch retrieval"],
curr_date: Annotated[str, "The current trading date you are trading on, YYYY-mm-dd"],
look_back_days: Annotated[int, "how many days to look back"] = 30,
) -> str:
"""
Retrieve a single technical indicator for a given ticker symbol.
Retrieve one or more technical indicators for a given ticker symbol.
Uses the configured technical_indicators vendor.
Args:
symbol (str): Ticker symbol of the company, e.g. AAPL, TSM
indicator (str): A single technical indicator name, e.g. 'rsi', 'macd'. Call this tool once per indicator.
indicator (str): One technical indicator name, e.g. 'rsi', 'macd', or a comma-separated batch such as 'macd,rsi,atr,close_50_sma'.
curr_date (str): The current trading date you are trading on, YYYY-mm-dd
look_back_days (int): How many days to look back, default is 30
Returns:
str: A formatted dataframe containing the technical indicators for the specified ticker symbol and indicator.
str: A formatted dataframe containing the requested technical indicator output(s). Batch requests are recommended to reduce repeated tool calls.
"""
# LLMs sometimes pass multiple indicators as a comma-separated string;
# split and process each individually.

View File

@ -3,6 +3,7 @@ import logging
import pandas as pd
import yfinance as yf
import requests
from yfinance.exceptions import YFRateLimitError
from stockstats import wrap
from typing import Annotated
@ -12,6 +13,109 @@ from .config import get_config
logger = logging.getLogger(__name__)
def _symbol_to_tencent_code(symbol: str) -> str:
code, exchange = symbol.upper().split(".")
if exchange == "SS":
return f"sh{code}"
if exchange == "SZ":
return f"sz{code}"
raise ValueError(f"Unsupported A-share symbol for Tencent fallback: {symbol}")
def _fetch_tencent_ohlcv(symbol: str, start_date: str, end_date: str) -> pd.DataFrame:
"""Fallback daily OHLCV fetch for A-shares via Tencent."""
session = requests.Session()
session.trust_env = False
response = session.get(
"https://web.ifzq.gtimg.cn/appstock/app/fqkline/get",
params={
"param": f"{_symbol_to_tencent_code(symbol)},day,{start_date},{end_date},320,qfq"
},
headers={
"User-Agent": "Mozilla/5.0",
"Referer": "https://gu.qq.com/",
},
timeout=20,
)
response.raise_for_status()
payload = response.json()
data = ((payload or {}).get("data") or {}).get(_symbol_to_tencent_code(symbol)) or {}
rows = data.get("qfqday") or data.get("day") or []
if not rows:
raise ValueError(f"No Tencent OHLCV data returned for {symbol}")
parsed = []
for line in rows:
# [date, open, close, high, low, volume]
date_str, open_p, close_p, high_p, low_p, volume = line[:6]
parsed.append(
{
"Date": date_str,
"Open": float(open_p),
"High": float(high_p),
"Low": float(low_p),
"Close": float(close_p),
"Volume": float(volume),
}
)
return pd.DataFrame(parsed)
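The row layout the loop above assumes can be checked offline with a synthetic row (the numbers below are made up for illustration):

```python
# Tencent kline rows are [date, open, close, high, low, volume] — note that
# close comes BEFORE high/low, which is why the unpack order matters.
row = ["2026-01-15", "10.00", "10.50", "10.60", "9.90", "123456"]
date_str, open_p, close_p, high_p, low_p, volume = row[:6]
bar = {
    "Date": date_str,
    "Open": float(open_p),
    "High": float(high_p),
    "Low": float(low_p),
    "Close": float(close_p),
    "Volume": float(volume),
}
print(bar["Close"])  # → 10.5
```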
def _symbol_to_eastmoney_secid(symbol: str) -> str:
code, exchange = symbol.upper().split(".")
if exchange == "SS":
return f"1.{code}"
if exchange in {"SZ", "BJ"}:
return f"0.{code}"
raise ValueError(f"Unsupported A-share symbol for Eastmoney fallback: {symbol}")
def _fetch_eastmoney_ohlcv(symbol: str, start_date: str, end_date: str) -> pd.DataFrame:
"""Fallback daily OHLCV fetch for A-shares via Eastmoney."""
session = requests.Session()
session.trust_env = False
url = "https://push2his.eastmoney.com/api/qt/stock/kline/get"
response = session.get(
url,
params={
"secid": _symbol_to_eastmoney_secid(symbol),
"fields1": "f1,f2,f3,f4,f5,f6",
"fields2": "f51,f52,f53,f54,f55,f56,f57,f58,f59,f60,f61",
"klt": "101",
"fqt": "1",
"beg": start_date.replace("-", ""),
"end": end_date.replace("-", ""),
"ut": "fa5fd1943c7b386f172d6893dbfba10b",
},
headers={
"User-Agent": "Mozilla/5.0",
"Referer": "https://quote.eastmoney.com/",
},
timeout=20,
)
response.raise_for_status()
payload = response.json()
klines = ((payload or {}).get("data") or {}).get("klines") or []
if not klines:
raise ValueError(f"No Eastmoney OHLCV data returned for {symbol}")
rows = []
for line in klines:
date_str, open_p, close_p, high_p, low_p, volume, amount, *_rest = line.split(",")
rows.append(
{
"Date": date_str,
"Open": float(open_p),
"High": float(high_p),
"Low": float(low_p),
"Close": float(close_p),
"Volume": float(volume),
"Amount": float(amount),
}
)
return pd.DataFrame(rows)
def _is_transient_yfinance_error(exc: Exception) -> bool:
"""Heuristic for flaky yfinance transport/parser failures."""
if isinstance(exc, YFRateLimitError):
@ -70,6 +174,7 @@ def load_ohlcv(symbol: str, curr_date: str) -> pd.DataFrame:
"""
config = get_config()
curr_date_dt = pd.to_datetime(curr_date)
min_acceptable_date = curr_date_dt - pd.Timedelta(days=1)
# Cache uses a fixed window (15y to today) so one file per symbol
today_date = pd.Timestamp.today()
@ -83,9 +188,23 @@ def load_ohlcv(symbol: str, curr_date: str) -> pd.DataFrame:
f"{symbol}-YFin-data-{start_str}-{end_str}.csv",
)
need_refresh = True
data = None
if os.path.exists(data_file):
data = pd.read_csv(data_file, on_bad_lines="skip")
else:
cached = pd.read_csv(data_file, on_bad_lines="skip")
if "Date" in cached.columns:
parsed_dates = pd.to_datetime(cached["Date"], errors="coerce")
latest_cached = parsed_dates.dropna().max()
if pd.notna(latest_cached) and latest_cached >= min_acceptable_date:
data = cached
need_refresh = False
if need_refresh:
try:
data = yf_retry(lambda: yf.download(
symbol,
start=start_str,
@ -95,6 +214,21 @@ def load_ohlcv(symbol: str, curr_date: str) -> pd.DataFrame:
auto_adjust=True,
))
data = data.reset_index()
latest_downloaded = pd.to_datetime(data.get("Date"), errors="coerce").dropna().max()
if pd.isna(latest_downloaded) or latest_downloaded < min_acceptable_date:
raise ValueError(
f"yfinance returned stale data for {symbol}: latest={latest_downloaded}"
)
except Exception as exc:
logger.warning(
"yfinance download failed for %s, falling back to Tencent/Eastmoney OHLCV: %s",
symbol,
exc,
)
try:
data = _fetch_tencent_ohlcv(symbol, start_str, end_str)
except Exception:
data = _fetch_eastmoney_ohlcv(symbol, start_str, end_str)
data.to_csv(data_file, index=False)
data = _clean_dataframe(data)

View File

@ -4,7 +4,21 @@ from dateutil.relativedelta import relativedelta
import pandas as pd
import yfinance as yf
import os
from .stockstats_utils import StockstatsUtils, _clean_dataframe, yf_retry, load_ohlcv, filter_financials_by_date
from .stockstats_utils import (
StockstatsUtils,
_clean_dataframe,
_fetch_eastmoney_ohlcv,
_fetch_tencent_ohlcv,
yf_retry,
load_ohlcv,
filter_financials_by_date,
)
from .config import get_config
def _use_compact_data_output() -> bool:
mode = str(get_config().get("analysis_prompt_style", "standard")).strip().lower()
return mode in {"compact", "fast", "minimax"}
def get_YFin_data_online(
symbol: Annotated[str, "ticker symbol of the company"],
@ -19,16 +33,31 @@ def get_YFin_data_online(
ticker = yf.Ticker(symbol.upper())
# Fetch historical data for the specified date range
try:
data = yf_retry(lambda: ticker.history(start=start_date, end=end_date))
except Exception:
try:
data = _fetch_tencent_ohlcv(symbol.upper(), start_date, end_date)
except Exception:
data = _fetch_eastmoney_ohlcv(symbol.upper(), start_date, end_date)
# Check if data is empty
if data.empty:
try:
data = _fetch_tencent_ohlcv(symbol.upper(), start_date, end_date)
except Exception:
try:
data = _fetch_eastmoney_ohlcv(symbol.upper(), start_date, end_date)
except Exception:
return (
f"No data found for symbol '{symbol}' between {start_date} and {end_date}"
)
if "Date" not in data.columns and data.index.name is not None:
data = data.reset_index()
# Remove timezone info from index for cleaner output
if data.index.tz is not None:
if getattr(data.index, "tz", None) is not None:
data.index = data.index.tz_localize(None)
# Round numerical values to 2 decimal places for cleaner display
@ -37,11 +66,19 @@ def get_YFin_data_online(
if col in data.columns:
data[col] = data[col].round(2)
compact_mode = _use_compact_data_output()
original_len = len(data)
if compact_mode and original_len > 20:
data = data.tail(20)
# Convert DataFrame to CSV string
csv_string = data.to_csv()
# Add header information
header = f"# Stock data for {symbol.upper()} from {start_date} to {end_date}\n"
if compact_mode and original_len > len(data):
header += f"# Showing last {len(data)} of {original_len} records (compact mode)\n"
else:
header += f"# Total records: {len(data)}\n"
header += f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
@ -134,6 +171,10 @@ def get_stock_stats_indicators_window(
f"Indicator {indicator} is not supported. Please choose from: {list(best_ind_params.keys())}"
)
compact_mode = _use_compact_data_output()
if compact_mode:
look_back_days = min(look_back_days, 14)
end_date = curr_date
curr_date_dt = datetime.strptime(curr_date, "%Y-%m-%d")
before = curr_date_dt - relativedelta(days=look_back_days)
@ -158,6 +199,13 @@ def get_stock_stats_indicators_window(
date_values.append((date_str, indicator_value))
current_dt = current_dt - relativedelta(days=1)
if compact_mode:
date_values = [
(date_str, value)
for date_str, value in date_values
if not str(value).startswith("N/A: Not a trading day")
][:10]
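The compact-mode filter above can be illustrated standalone with fabricated indicator values:

```python
# Compact mode drops non-trading-day placeholders and caps output at 10 rows.
date_values = [
    ("2026-01-15", "55.2"),
    ("2026-01-14", "N/A: Not a trading day (weekend or holiday)"),
    ("2026-01-13", "54.8"),
]
compact = [
    (date_str, value)
    for date_str, value in date_values
    if not str(value).startswith("N/A: Not a trading day")
][:10]
print(compact)  # → [('2026-01-15', '55.2'), ('2026-01-13', '54.8')]
```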
# Build the result string
ind_string = ""
for date_str, value in date_values:
@ -168,11 +216,16 @@ def get_stock_stats_indicators_window(
# Fallback to original implementation if bulk method fails
ind_string = ""
curr_date_dt = datetime.strptime(curr_date, "%Y-%m-%d")
emitted = 0
while curr_date_dt >= before:
indicator_value = get_stockstats_indicator(
symbol, indicator, curr_date_dt.strftime("%Y-%m-%d")
)
if not compact_mode or not str(indicator_value).startswith("N/A: Not a trading day"):
ind_string += f"{curr_date_dt.strftime('%Y-%m-%d')}: {indicator_value}\n"
emitted += 1
if compact_mode and emitted >= 10:
break
curr_date_dt = curr_date_dt - relativedelta(days=1)
result_str = (

View File

@ -1,5 +1,13 @@
import copy
import os
from pathlib import Path
_MINIMAX_ANTHROPIC_BASE_URL = "https://api.minimaxi.com/anthropic"
_MINIMAX_DEFAULT_TIMEOUT_SECS = 60.0
_MINIMAX_DEFAULT_MAX_RETRIES = 1
_MINIMAX_DEFAULT_EXTRA_RETRY_ATTEMPTS = 2
_MINIMAX_DEFAULT_RETRY_BASE_DELAY_SECS = 1.5
_MINIMAX_DEFAULT_ANALYST_NODE_TIMEOUT_SECS = 75.0
DEFAULT_CONFIG = {
"project_dir": os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
@ -20,11 +28,15 @@ DEFAULT_CONFIG = {
# Output language for analyst reports and final decision
# Internal agent debate stays in English for reasoning quality
"output_language": "English",
# Optional runtime context for account-aware and peer-aware decisions
"portfolio_context": "",
"peer_context": "",
"peer_context_mode": "UNSPECIFIED",
# Debate and discussion settings
"max_debate_rounds": 1,
"max_risk_discuss_rounds": 1,
"max_recur_limit": 100,
"research_node_timeout_secs": 30.0,
"research_node_timeout_secs": 90.0, # Increased for parallel subagent architecture with slow LLM
# Data vendor configuration
# Category-level configuration (default for all tools in category)
"data_vendors": {
@ -40,5 +52,105 @@ DEFAULT_CONFIG = {
}
def _looks_like_minimax_anthropic(provider: str | None, backend_url: str | None) -> bool:
return (
str(provider or "").lower() == "anthropic"
and _MINIMAX_ANTHROPIC_BASE_URL in str(backend_url or "").lower()
)
def normalize_runtime_llm_config(config: dict) -> dict:
"""Normalize runtime LLM settings for known provider/backend quirks."""
normalized = copy.deepcopy(config)
provider = normalized.get("llm_provider")
backend_url = normalized.get("backend_url")
if _looks_like_minimax_anthropic(provider, backend_url):
normalized["backend_url"] = _MINIMAX_ANTHROPIC_BASE_URL
if not normalized.get("llm_timeout"):
normalized["llm_timeout"] = _MINIMAX_DEFAULT_TIMEOUT_SECS
if normalized.get("llm_max_retries") in (None, 0):
normalized["llm_max_retries"] = _MINIMAX_DEFAULT_MAX_RETRIES
if not normalized.get("minimax_retry_attempts"):
normalized["minimax_retry_attempts"] = _MINIMAX_DEFAULT_EXTRA_RETRY_ATTEMPTS
if not normalized.get("minimax_retry_base_delay"):
normalized["minimax_retry_base_delay"] = _MINIMAX_DEFAULT_RETRY_BASE_DELAY_SECS
if not normalized.get("analyst_node_timeout_secs"):
normalized["analyst_node_timeout_secs"] = _MINIMAX_DEFAULT_ANALYST_NODE_TIMEOUT_SECS
return normalized
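The fill-if-unset pattern used above can be sketched in isolation (the helper name is illustrative; the real code inlines each check, and `llm_max_retries` specifically tests `in (None, 0)` rather than general falsiness):

```python
def fill_if_unset(config: dict, key: str, default):
    # Only backfill when the caller left the value falsy, so explicit
    # user-provided settings always win over provider defaults.
    if not config.get(key):
        config[key] = default
    return config

cfg = {"llm_timeout": None, "llm_max_retries": 3}
fill_if_unset(cfg, "llm_timeout", 60.0)   # None → backfilled
fill_if_unset(cfg, "llm_max_retries", 1)  # 3 is truthy → kept
print(cfg)  # → {'llm_timeout': 60.0, 'llm_max_retries': 3}
```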
def _resolve_runtime_llm_overrides() -> dict:
"""Resolve provider/model/base URL overrides from the current environment."""
overrides: dict[str, object] = {}
provider = os.getenv("TRADINGAGENTS_LLM_PROVIDER")
if not provider:
if os.getenv("ANTHROPIC_BASE_URL"):
provider = "anthropic"
elif os.getenv("OPENAI_BASE_URL"):
provider = "openai"
if provider:
overrides["llm_provider"] = provider
backend_url = (
os.getenv("TRADINGAGENTS_BACKEND_URL")
or os.getenv("ANTHROPIC_BASE_URL")
or os.getenv("OPENAI_BASE_URL")
)
if backend_url:
overrides["backend_url"] = backend_url
shared_model = os.getenv("TRADINGAGENTS_MODEL")
deep_model = os.getenv("TRADINGAGENTS_DEEP_MODEL") or shared_model
quick_model = os.getenv("TRADINGAGENTS_QUICK_MODEL") or shared_model
if deep_model:
overrides["deep_think_llm"] = deep_model
if quick_model:
overrides["quick_think_llm"] = quick_model
anthropic_api_key = os.getenv("ANTHROPIC_API_KEY") or os.getenv("MINIMAX_API_KEY")
if anthropic_api_key:
overrides["api_key"] = anthropic_api_key
portfolio_context = os.getenv("TRADINGAGENTS_PORTFOLIO_CONTEXT")
if portfolio_context is not None:
overrides["portfolio_context"] = portfolio_context
peer_context = os.getenv("TRADINGAGENTS_PEER_CONTEXT")
if peer_context is not None:
overrides["peer_context"] = peer_context
peer_context_mode = os.getenv("TRADINGAGENTS_PEER_CONTEXT_MODE")
if peer_context_mode is not None:
overrides["peer_context_mode"] = peer_context_mode
return overrides
def load_project_env(start_path):
"""Load the nearest .env from the given path upward."""
try:
from dotenv import load_dotenv
except ImportError:
return None
current = Path(start_path).resolve()
if current.is_file():
current = current.parent
for directory in (current, *current.parents):
env_path = directory / ".env"
if env_path.exists():
# Project entrypoints should use the repo-local runtime settings even
# when the user's shell exports unrelated Anthropic/OpenAI variables.
load_dotenv(env_path, override=True)
return env_path
return None
def get_default_config():
return copy.deepcopy(DEFAULT_CONFIG)
config = copy.deepcopy(DEFAULT_CONFIG)
config.update(_resolve_runtime_llm_overrides())
return normalize_runtime_llm_config(config)

View File

@ -16,13 +16,22 @@ class Propagator:
self.max_recur_limit = max_recur_limit
def create_initial_state(
self, company_name: str, trade_date: str
self,
company_name: str,
trade_date: str,
*,
portfolio_context: str = "",
peer_context: str = "",
peer_context_mode: str = "UNSPECIFIED",
) -> Dict[str, Any]:
"""Create the initial state for the agent graph."""
return {
"messages": [("human", company_name)],
"company_of_interest": company_name,
"trade_date": str(trade_date),
"portfolio_context": portfolio_context,
"peer_context": peer_context,
"peer_context_mode": peer_context_mode,
"investment_debate_state": InvestDebateState(
{
"bull_history": "",
@ -57,6 +66,13 @@ class Propagator:
"fundamentals_report": "",
"sentiment_report": "",
"news_report": "",
"investment_plan": "",
"investment_plan_structured": {},
"trader_investment_plan": "",
"trader_investment_plan_structured": {},
"final_trade_decision": "",
"final_trade_decision_report": "",
"final_trade_decision_structured": {},
}
def get_graph_args(self, callbacks: Optional[List] = None) -> Dict[str, Any]:

View File

@ -4,9 +4,11 @@ import concurrent.futures
import time
from typing import Any, Dict
from langgraph.graph import END, START, StateGraph
from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode
from tradingagents.agents import *
from tradingagents.agents.utils.decision_utils import build_structured_decision
from tradingagents.agents.utils.agent_states import AgentState
from .conditional_logic import ConditionalLogic
@ -26,6 +28,7 @@ class GraphSetup:
invest_judge_memory,
portfolio_manager_memory,
conditional_logic: ConditionalLogic,
analyst_node_timeout_secs: float = 75.0,
research_node_timeout_secs: float = 30.0,
):
"""Initialize with required components."""
@ -38,6 +41,7 @@ class GraphSetup:
self.invest_judge_memory = invest_judge_memory
self.portfolio_manager_memory = portfolio_manager_memory
self.conditional_logic = conditional_logic
self.analyst_node_timeout_secs = analyst_node_timeout_secs
self.research_node_timeout_secs = research_node_timeout_secs
def setup_graph(
@ -61,29 +65,37 @@ class GraphSetup:
tool_nodes = {}
if "market" in selected_analysts:
analyst_nodes["market"] = create_market_analyst(
self.quick_thinking_llm
analyst_nodes["market"] = self._guard_analyst_node(
"Market Analyst",
create_market_analyst(self.quick_thinking_llm),
report_field="market_report",
)
delete_nodes["market"] = create_msg_delete()
tool_nodes["market"] = self.tool_nodes["market"]
if "social" in selected_analysts:
analyst_nodes["social"] = create_social_media_analyst(
self.quick_thinking_llm
analyst_nodes["social"] = self._guard_analyst_node(
"Social Analyst",
create_social_media_analyst(self.quick_thinking_llm),
report_field="sentiment_report",
)
delete_nodes["social"] = create_msg_delete()
tool_nodes["social"] = self.tool_nodes["social"]
if "news" in selected_analysts:
analyst_nodes["news"] = create_news_analyst(
self.quick_thinking_llm
analyst_nodes["news"] = self._guard_analyst_node(
"News Analyst",
create_news_analyst(self.quick_thinking_llm),
report_field="news_report",
)
delete_nodes["news"] = create_msg_delete()
tool_nodes["news"] = self.tool_nodes["news"]
if "fundamentals" in selected_analysts:
analyst_nodes["fundamentals"] = create_fundamentals_analyst(
self.quick_thinking_llm
analyst_nodes["fundamentals"] = self._guard_analyst_node(
"Fundamentals Analyst",
create_fundamentals_analyst(self.quick_thinking_llm),
report_field="fundamentals_report",
)
delete_nodes["fundamentals"] = create_msg_delete()
tool_nodes["fundamentals"] = self.tool_nodes["fundamentals"]
@ -249,6 +261,35 @@ class GraphSetup:
return wrapped
def _guard_analyst_node(self, node_name: str, node, *, report_field: str):
def wrapped(state):
started_at = time.time()
executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = executor.submit(node, state)
try:
return future.result(timeout=self.analyst_node_timeout_secs)
except concurrent.futures.TimeoutError:
return self._apply_analyst_fallback(
node_name=node_name,
report_field=report_field,
reason=f"{node_name.lower().replace(' ', '_')}_timeout",
started_at=started_at,
)
except Exception as exc:
return self._apply_analyst_fallback(
node_name=node_name,
report_field=report_field,
reason=f"{node_name.lower().replace(' ', '_')}_{type(exc).__name__.lower()}",
started_at=started_at,
)
finally:
# shutdown() is idempotent; cancel_futures=True also cancels a timed-out node.
executor.shutdown(wait=False, cancel_futures=True)
return wrapped
@staticmethod
def _provenance(state) -> dict:
debate_state = dict(state["investment_debate_state"])
@ -298,6 +339,11 @@ class GraphSetup:
return {
"investment_debate_state": debate_state,
"investment_plan": fallback,
"investment_plan_structured": build_structured_decision(
fallback,
default_rating="HOLD",
peer_context_mode=state.get("peer_context_mode", "UNSPECIFIED"),
),
}
prefix = "Bull Analyst" if dimension == "bull" else "Bear Analyst"
@ -312,3 +358,15 @@ class GraphSetup:
debate_state["count"] = debate_state.get("count", 0) + 1
debate_state.update(provenance)
return {"investment_debate_state": debate_state}
@staticmethod
def _apply_analyst_fallback(*, node_name: str, report_field: str, reason: str, started_at: float):
elapsed_seconds = round(time.time() - started_at, 3)
fallback = (
f"[DEGRADED] {node_name} unavailable ({reason}). "
f"Proceed with partial research context. Guard elapsed={elapsed_seconds}s."
)
return {
"messages": [AIMessage(content=fallback)],
report_field: fallback,
}

View File

@ -2,6 +2,8 @@
from typing import Any
from tradingagents.agents.utils.decision_utils import CANONICAL_RATINGS, extract_rating
class SignalProcessor:
"""Processes trading signals to extract actionable decisions."""
@ -20,6 +22,10 @@ class SignalProcessor:
Returns:
Extracted rating (BUY, OVERWEIGHT, HOLD, UNDERWEIGHT, or SELL)
"""
parsed = extract_rating(full_signal)
if parsed in CANONICAL_RATINGS:
return parsed
messages = [
(
"system",

View File

@ -12,7 +12,7 @@ from langgraph.prebuilt import ToolNode
from tradingagents.llm_clients import create_llm_client
from tradingagents.agents import *
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.default_config import get_default_config
from tradingagents.agents.utils.memory import FinancialSituationMemory
from tradingagents.agents.utils.agent_states import (
AgentState,
@ -20,6 +20,7 @@ from tradingagents.agents.utils.agent_states import (
RiskDebateState,
extract_research_provenance,
)
from tradingagents.agents.utils.decision_utils import build_structured_decision
from tradingagents.dataflows.config import set_config
# Import the new abstract tool methods from agent_utils
@ -43,13 +44,13 @@ from .signal_processing import SignalProcessor
def _merge_with_default_config(config: Optional[Dict[str, Any]]) -> Dict[str, Any]:
"""Merge a partial user config onto DEFAULT_CONFIG.
"""Merge a partial user config onto the runtime default config.
Orchestrator callers often override only a few LLM/vendor fields. Without a
merge step, required defaults such as ``project_dir`` disappear and the
graph fails during initialization.
"""
merged = copy.deepcopy(DEFAULT_CONFIG)
merged = get_default_config()
if not config:
return merged
@ -145,6 +146,7 @@ class TradingAgentsGraph:
self.invest_judge_memory,
self.portfolio_manager_memory,
self.conditional_logic,
analyst_node_timeout_secs=float(self.config.get("analyst_node_timeout_secs", 75.0)),
research_node_timeout_secs=float(self.config.get("research_node_timeout_secs", 30.0)),
)
@ -194,6 +196,11 @@ class TradingAgentsGraph:
if effort:
kwargs["effort"] = effort
# Pass api_key if present in config (for MiniMax and other third-party Anthropic-compatible APIs)
api_key = self.config.get("api_key")
if api_key:
kwargs["api_key"] = api_key
return kwargs
def _create_tool_nodes(self) -> Dict[str, ToolNode]:
@ -239,7 +246,11 @@ class TradingAgentsGraph:
# Initialize state
init_agent_state = self.propagator.create_initial_state(
company_name, trade_date
company_name,
trade_date,
portfolio_context=str(self.config.get("portfolio_context", "") or ""),
peer_context=str(self.config.get("peer_context", "") or ""),
peer_context_mode=str(self.config.get("peer_context_mode", "UNSPECIFIED") or "UNSPECIFIED"),
)
args = self.propagator.get_graph_args()
@ -258,6 +269,8 @@ class TradingAgentsGraph:
# Standard mode without tracing
final_state = self.graph.invoke(init_agent_state, **args)
final_state = self._normalize_decision_outputs(final_state)
# Store current state for reflection
self.curr_state = final_state
@ -267,6 +280,65 @@ class TradingAgentsGraph:
# Return decision and processed signal
return final_state, self.process_signal(final_state["final_trade_decision"])
def _normalize_decision_outputs(self, final_state: Dict[str, Any]) -> Dict[str, Any]:
normalized = copy.deepcopy(final_state)
portfolio_context = bool(str(normalized.get("portfolio_context", "") or "").strip())
peer_context = bool(str(normalized.get("peer_context", "") or "").strip())
context_usage = {
"portfolio_context": portfolio_context,
"peer_context": peer_context,
}
investment_plan = str(normalized.get("investment_plan", "") or "")
trader_plan = str(normalized.get("trader_investment_plan", "") or "")
final_rating = str(normalized.get("final_trade_decision", "") or "")
final_report = str(
normalized.get("final_trade_decision_report")
or normalized.get("risk_debate_state", {}).get("judge_decision", "")
or final_rating
)
investment_structured = normalized.get("investment_plan_structured") or build_structured_decision(
investment_plan,
default_rating="HOLD",
peer_context_mode=normalized.get("peer_context_mode", "UNSPECIFIED"),
context_usage=context_usage,
)
trader_structured = normalized.get("trader_investment_plan_structured") or build_structured_decision(
trader_plan,
fallback_candidates=(("investment_plan", investment_plan),),
default_rating="HOLD",
peer_context_mode=normalized.get("peer_context_mode", "UNSPECIFIED"),
context_usage=context_usage,
)
final_structured = normalized.get("final_trade_decision_structured") or build_structured_decision(
final_report,
fallback_candidates=(
("trader_plan", trader_plan),
("investment_plan", investment_plan),
),
default_rating="HOLD",
peer_context_mode=normalized.get("peer_context_mode", "UNSPECIFIED"),
context_usage=context_usage,
)
if final_rating and final_rating != final_structured["rating"]:
warnings = list(final_structured.get("warnings") or [])
warnings.append(f"final_trade_decision_overridden:{final_rating}->{final_structured['rating']}")
final_structured["warnings"] = warnings
normalized["investment_plan_structured"] = investment_structured
normalized["trader_investment_plan_structured"] = trader_structured
normalized["final_trade_decision"] = final_structured["rating"]
normalized["final_trade_decision_report"] = final_structured["report_text"]
normalized["final_trade_decision_structured"] = final_structured
risk_state = dict(normalized.get("risk_debate_state") or {})
risk_state["judge_decision"] = final_structured["report_text"]
normalized["risk_debate_state"] = risk_state
return normalized
def _log_state(self, trade_date, final_state):
"""Log the final state to a JSON file."""
self.log_states_dict[str(trade_date)] = {
@ -294,6 +366,7 @@ class TradingAgentsGraph:
),
},
"trader_investment_decision": final_state["trader_investment_plan"],
"trader_investment_plan_structured": final_state.get("trader_investment_plan_structured", {}),
"risk_debate_state": {
"aggressive_history": final_state["risk_debate_state"]["aggressive_history"],
"conservative_history": final_state["risk_debate_state"]["conservative_history"],
@ -302,7 +375,10 @@ class TradingAgentsGraph:
"judge_decision": final_state["risk_debate_state"]["judge_decision"],
},
"investment_plan": final_state["investment_plan"],
"investment_plan_structured": final_state.get("investment_plan_structured", {}),
"final_trade_decision": final_state["final_trade_decision"],
"final_trade_decision_report": final_state.get("final_trade_decision_report", ""),
"final_trade_decision_structured": final_state.get("final_trade_decision_structured", {}),
}
# Save to file

View File

@ -1,3 +1,5 @@
import logging
import time
from typing import Any, Optional
from langchain_anthropic import ChatAnthropic
@ -5,12 +7,34 @@ from langchain_anthropic import ChatAnthropic
from .base_client import BaseLLMClient, normalize_content
from .validators import validate_model
logger = logging.getLogger(__name__)
_PASSTHROUGH_KWARGS = (
"timeout", "max_retries", "api_key", "max_tokens",
"callbacks", "http_client", "http_async_client", "effort",
)
def _is_minimax_anthropic_base_url(base_url: Optional[str]) -> bool:
return "api.minimaxi.com/anthropic" in str(base_url or "").lower()
def _is_retryable_minimax_error(exc: Exception) -> bool:
text = f"{type(exc).__name__}: {exc}".lower()
retry_markers = (
"overloaded_error",
"http_code': '529'",
'http_code": "529"',
" 529 ",
"429",
"timeout",
"timed out",
"connection reset",
"temporarily unavailable",
)
return any(marker in text for marker in retry_markers)
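The substring heuristic above is easy to exercise standalone (shown here with a reduced marker set for illustration):

```python
def is_retryable(message: str, markers=("overloaded_error", "429", "timeout")) -> bool:
    # Same case-insensitive substring matching as the real helper above.
    text = message.lower()
    return any(marker in text for marker in markers)

print(is_retryable("anthropic.APIStatusError: Overloaded_Error"))  # → True
print(is_retryable("invalid_request_error: bad model name"))       # → False
```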
class NormalizedChatAnthropic(ChatAnthropic):
"""ChatAnthropic with normalized content output.
@ -20,7 +44,25 @@ class NormalizedChatAnthropic(ChatAnthropic):
"""
def invoke(self, input, config=None, **kwargs):
extra_attempts = max(0, int(getattr(self, "_minimax_retry_attempts", 0)))
base_delay = max(0.0, float(getattr(self, "_minimax_retry_base_delay", 0.0)))
for attempt in range(extra_attempts + 1):
try:
return normalize_content(super().invoke(input, config, **kwargs))
except Exception as exc:
if attempt >= extra_attempts or not _is_retryable_minimax_error(exc):
raise
delay = base_delay * (2 ** attempt)
logger.warning(
"MiniMax Anthropic invoke failed (%s); retrying in %.1fs (%s/%s)",
exc,
delay,
attempt + 1,
extra_attempts,
)
time.sleep(delay)
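With the MiniMax defaults from the config (`base_delay=1.5`, `extra_attempts=2`), the backoff schedule produced by `base_delay * (2 ** attempt)` above works out to:

```python
# Exponential backoff delays for the retry loop: 1.5s after the first
# retryable failure, 3.0s after the second; the third failure re-raises.
base_delay = 1.5
extra_attempts = 2
delays = [base_delay * (2 ** attempt) for attempt in range(extra_attempts)]
print(delays)  # → [1.5, 3.0]
```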
class AnthropicClient(BaseLLMClient):
@ -41,7 +83,19 @@ class AnthropicClient(BaseLLMClient):
if key in self.kwargs:
llm_kwargs[key] = self.kwargs[key]
return NormalizedChatAnthropic(**llm_kwargs)
llm = NormalizedChatAnthropic(**llm_kwargs)
if _is_minimax_anthropic_base_url(self.base_url):
object.__setattr__(
llm,
"_minimax_retry_attempts",
int(self.kwargs.get("minimax_retry_attempts", 0)),
)
object.__setattr__(
llm,
"_minimax_retry_base_delay",
float(self.kwargs.get("minimax_retry_base_delay", 0.0)),
)
return llm
def validate_model(self) -> bool:
"""Validate model for Anthropic."""

View File

@ -25,11 +25,15 @@ MODEL_OPTIONS: ProviderModeOptions = {
},
"anthropic": {
"quick": [
("MiniMax M2.7 Highspeed - Repo local default via Anthropic-compatible API", "MiniMax-M2.7-highspeed"),
("MiniMax M2.7 - Anthropic-compatible legacy fallback", "MiniMax-M2.7"),
("Claude Sonnet 4.6 - Best speed and intelligence balance", "claude-sonnet-4-6"),
("Claude Haiku 4.5 - Fast, near-instant responses", "claude-haiku-4-5"),
("Claude Sonnet 4.5 - Agents and coding", "claude-sonnet-4-5"),
],
"deep": [
("MiniMax M2.7 Highspeed - Repo local default via Anthropic-compatible API", "MiniMax-M2.7-highspeed"),
("MiniMax M2.7 - Anthropic-compatible legacy fallback", "MiniMax-M2.7"),
("Claude Opus 4.6 - Most intelligent, agents and coding", "claude-opus-4-6"),
("Claude Opus 4.5 - Premium, max intelligence", "claude-opus-4-5"),
("Claude Sonnet 4.6 - Best speed and intelligence balance", "claude-sonnet-4-6"),

View File

@ -16,6 +16,7 @@ def _setup() -> GraphSetup:
invest_judge_memory=None,
portfolio_manager_memory=None,
conditional_logic=None,
analyst_node_timeout_secs=0.01,
research_node_timeout_secs=0.01,
)
@ -210,6 +211,28 @@ def test_guard_timeout_returns_without_waiting_for_node_completion(monkeypatch):
assert debate["timed_out_nodes"] == ["Bull Researcher"]
def test_analyst_guard_timeout_returns_degraded_report_quickly():
setup = _setup()
def slow_node(_state):
time.sleep(0.2)
return {"messages": [], "market_report": "ok"}
wrapped = setup._guard_analyst_node(
"Market Analyst",
slow_node,
report_field="market_report",
)
started = time.monotonic()
result = wrapped({"messages": []})
elapsed = time.monotonic() - started
assert elapsed < 0.1
assert result["market_report"].startswith("[DEGRADED] Market Analyst unavailable")
assert result["messages"][0].content.startswith("[DEGRADED] Market Analyst unavailable")
def test_extract_research_provenance_returns_subset():
payload = extract_research_provenance(
{

View File

@ -13,18 +13,31 @@ from pathlib import Path
from typing import Optional
from contextlib import asynccontextmanager
from dotenv import load_dotenv
from fastapi import FastAPI, HTTPException, Request, WebSocket, WebSocketDisconnect, Query, Header
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import Response, FileResponse
from fastapi.staticfiles import StaticFiles
from pydantic import BaseModel
from tradingagents.default_config import get_default_config
from tradingagents.default_config import get_default_config, normalize_runtime_llm_config
from services import AnalysisService, JobService, ResultStore, build_request_context, load_migration_flags
from services import (
AnalysisService,
JobService,
ResultStore,
TaskCommandService,
TaskQueryService,
build_request_context,
clone_request_context,
load_migration_flags,
)
from services.executor import LegacySubprocessAnalysisExecutor
# Path to TradingAgents repo root
REPO_ROOT = Path(__file__).parent.parent.parent
_env_file = os.environ.get("TRADINGAGENTS_ENV_FILE")
# An empty TRADINGAGENTS_ENV_FILE explicitly disables .env loading; when the
# variable is unset, fall back to the repo-local .env.
if _env_file != "":
load_dotenv(Path(_env_file) if _env_file else REPO_ROOT / ".env", override=True)
# Use the currently running Python interpreter
ANALYSIS_PYTHON = Path(sys.executable)
# Task state persistence directory
@ -64,6 +77,18 @@ async def lifespan(app: FastAPI):
retry_count=MAX_RETRY_COUNT,
retry_base_delay_secs=RETRY_BASE_DELAY_SECS,
)
app.state.task_query_service = TaskQueryService(
task_results=app.state.task_results,
result_store=app.state.result_store,
job_service=app.state.job_service,
)
app.state.task_command_service = TaskCommandService(
task_results=app.state.task_results,
analysis_tasks=app.state.analysis_tasks,
processes=app.state.processes,
result_store=app.state.result_store,
job_service=app.state.job_service,
)
# Restore persisted task states from disk
app.state.job_service.restore_task_results(app.state.result_store.restore_task_results())
@ -95,6 +120,9 @@ app.add_middleware(
class AnalysisRequest(BaseModel):
ticker: str
date: Optional[str] = None
portfolio_context: Optional[str] = None
peer_context: Optional[str] = None
peer_context_mode: Optional[str] = None
class ScreenRequest(BaseModel):
mode: str = "china_strict"
@ -126,7 +154,8 @@ async def save_apikey(request: Request, body: dict = None, api_key: Optional[str
raise HTTPException(status_code=400, detail="api_key cannot be empty")
try:
_persist_analysis_api_key(apikey)
runtime_provider = _resolve_analysis_runtime_settings().get("llm_provider", "anthropic")
_persist_analysis_api_key(apikey, provider=str(runtime_provider).lower())
return {"ok": True, "saved": True}
except Exception as e:
raise HTTPException(status_code=500, detail=f"Failed to save API key: {e}")
@ -175,15 +204,22 @@ def _load_saved_config() -> dict:
return {}
def _persist_analysis_api_key(api_key_value: str):
def _persist_analysis_api_key(api_key_value: str, *, provider: str):
global _api_key
existing = _load_saved_config()
api_keys = dict(existing.get("api_keys") or {})
api_keys[provider] = api_key_value
payload = dict(existing)
payload["api_keys"] = api_keys
payload["api_key_provider"] = provider
payload["api_key"] = api_key_value
CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
CONFIG_PATH.write_text(json.dumps({"api_key": api_key_value}, ensure_ascii=False))
CONFIG_PATH.write_text(json.dumps(payload, ensure_ascii=False))
os.chmod(CONFIG_PATH, 0o600)
_api_key = None
def _get_analysis_provider_api_key(provider: str, saved_api_key: Optional[str] = None) -> Optional[str]:
def _get_analysis_provider_api_key(provider: str, saved_config: Optional[dict] = None) -> Optional[str]:
env_names = {
"anthropic": ("ANTHROPIC_API_KEY", "MINIMAX_API_KEY"),
"openai": ("OPENAI_API_KEY",),
@@ -196,7 +232,17 @@ def _get_analysis_provider_api_key(provider: str, saved_api_key: Optional[str] =
value = os.environ.get(env_name)
if value:
return value
return saved_api_key
saved = dict(saved_config or {})
api_keys = saved.get("api_keys")
if isinstance(api_keys, dict):
value = api_keys.get(provider.lower())
if value:
return value
legacy_provider = str(saved.get("api_key_provider") or "").lower()
legacy_key = saved.get("api_key")
if legacy_provider == provider.lower() and legacy_key:
return legacy_key
return None
def _resolve_analysis_runtime_settings() -> dict:
@@ -231,9 +277,19 @@ def _resolve_analysis_runtime_settings() -> dict:
selected_analysts_raw = os.environ.get("TRADINGAGENTS_SELECTED_ANALYSTS", "market")
selected_analysts = [item.strip() for item in selected_analysts_raw.split(",") if item.strip()]
analysis_prompt_style = os.environ.get("TRADINGAGENTS_ANALYSIS_PROMPT_STYLE", "compact")
llm_timeout = float(os.environ.get("TRADINGAGENTS_LLM_TIMEOUT", "45"))
llm_max_retries = int(os.environ.get("TRADINGAGENTS_LLM_MAX_RETRIES", "0"))
return {
llm_timeout = float(
os.environ.get(
"TRADINGAGENTS_LLM_TIMEOUT",
str(defaults.get("llm_timeout", 45)),
)
)
llm_max_retries = int(
os.environ.get(
"TRADINGAGENTS_LLM_MAX_RETRIES",
str(defaults.get("llm_max_retries", 0)),
)
)
settings = {
"llm_provider": provider,
"backend_url": backend_url,
"deep_think_llm": deep_model,
@@ -242,8 +298,9 @@
"analysis_prompt_style": analysis_prompt_style,
"llm_timeout": llm_timeout,
"llm_max_retries": llm_max_retries,
"provider_api_key": _get_analysis_provider_api_key(provider, saved.get("api_key")),
"provider_api_key": _get_analysis_provider_api_key(provider, saved),
}
return normalize_runtime_llm_config(settings)
def _build_analysis_request_context(request: Request, auth_key: Optional[str]):
@@ -260,6 +317,15 @@ def _build_analysis_request_context(request: Request, auth_key: Optional[str]):
analysis_prompt_style=settings["analysis_prompt_style"],
llm_timeout=settings["llm_timeout"],
llm_max_retries=settings["llm_max_retries"],
metadata={
"stdout_timeout_secs": max(float(settings["llm_timeout"]) * 4.0, 120.0),
"total_timeout_secs": max(float(settings["llm_timeout"]) * 12.0, 900.0),
"heartbeat_interval_secs": 10.0,
"local_recovery_timeout_secs": max(float(settings["llm_timeout"]) * 2.5, 90.0),
"provider_probe_timeout_secs": max(float(settings["llm_timeout"]) * 1.5, 60.0),
"local_recovery_cost_cap": 1.0,
"provider_probe_cost_cap": 1.0,
},
)
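Every runtime budget in the metadata block above is derived from the single `llm_timeout` with a floor, so short LLM timeouts still leave the subprocess generous headroom. A sketch of that derivation (the helper name is illustrative):

```python
def derive_timeout_budgets(llm_timeout: float) -> dict[str, float]:
    """Derive the runtime timeout ladder from the base LLM timeout,
    applying the same multipliers and floors as the metadata block."""
    t = float(llm_timeout)
    return {
        "stdout_timeout_secs": max(t * 4.0, 120.0),
        "total_timeout_secs": max(t * 12.0, 900.0),
        "local_recovery_timeout_secs": max(t * 2.5, 90.0),
        "provider_probe_timeout_secs": max(t * 1.5, 60.0),
    }
```

With the default `llm_timeout` of 45s this yields a 180s stdout timeout, while the total budget stays pinned at its 900s floor (45 × 12 = 540 < 900).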
@@ -350,6 +416,15 @@ async def start_analysis(
task_id = f"{payload.ticker}_{datetime.now().strftime('%Y%m%d_%H%M%S')}_{uuid.uuid4().hex[:6]}"
date = payload.date or datetime.now().strftime("%Y-%m-%d")
request_context = _build_analysis_request_context(http_request, api_key)
if payload.portfolio_context is not None or payload.peer_context is not None:
request_context = clone_request_context(
request_context,
metadata_updates={
"portfolio_context": payload.portfolio_context,
"peer_context": payload.peer_context,
"peer_context_mode": payload.peer_context_mode or ("CALLER_PROVIDED" if payload.peer_context else None),
},
)
try:
return await app.state.analysis_service.start_analysis(
@@ -370,9 +445,10 @@ async def get_task_status(task_id: str, api_key: Optional[str] = Header(None)):
"""Get task status"""
if not _check_api_key(api_key):
_auth_error()
if task_id not in app.state.task_results:
payload = app.state.task_query_service.public_task_payload(task_id)
if payload is None:
raise HTTPException(status_code=404, detail="Task not found")
return _public_task_payload(task_id)
return payload
@app.get("/api/analysis/tasks")
@@ -380,10 +456,7 @@ async def list_tasks(api_key: Optional[str] = Header(None)):
"""List all tasks (active and recent)"""
if not _check_api_key(api_key):
_auth_error()
tasks = [_public_task_summary(task_id) for task_id in app.state.task_results]
# Sort by created_at descending (most recent first)
tasks.sort(key=lambda x: x.get("created_at") or "", reverse=True)
return {"contract_version": "v1alpha1", "tasks": tasks, "total": len(tasks)}
return app.state.task_query_service.list_task_summaries()
@app.delete("/api/analysis/cancel/{task_id}")
@@ -391,33 +464,13 @@ async def cancel_task(task_id: str, api_key: Optional[str] = Header(None)):
"""Cancel a running task."""
if not _check_api_key(api_key):
_auth_error()
if task_id not in app.state.task_results:
payload = await app.state.task_command_service.cancel_task(
task_id,
broadcast_progress=broadcast_progress,
)
if payload is None:
raise HTTPException(status_code=404, detail="Task not found")
proc = app.state.processes.get(task_id)
if proc and proc.returncode is None:
try:
proc.kill()
except Exception:
pass
task = app.state.analysis_tasks.get(task_id)
if task:
task.cancel()
state = app.state.job_service.cancel_job(task_id, error="用户取消")
if state is not None:
state["status"] = "cancelled"
state["error"] = {
"code": "cancelled",
"message": "用户取消",
"retryable": False,
}
app.state.result_store.save_task_status(task_id, state)
await broadcast_progress(task_id, state)
app.state.result_store.delete_task_status(task_id)
return {"contract_version": "v1alpha1", "task_id": task_id, "status": "cancelled"}
return payload
# ============== WebSocket ==============
@@ -474,25 +527,21 @@ async def broadcast_progress(task_id: str, progress: dict):
def _load_task_contract(task_id: str, state: Optional[dict] = None) -> Optional[dict]:
current_state = state or app.state.task_results.get(task_id)
if current_state is None:
return None
return app.state.result_store.load_result_contract(
result_ref=current_state.get("result_ref"),
task_id=task_id,
)
return app.state.task_query_service.load_task_contract(task_id, state_override=state)
def _public_task_payload(task_id: str, state_override: Optional[dict] = None) -> dict:
state = state_override or app.state.task_results[task_id]
contract = _load_task_contract(task_id, state)
return app.state.job_service.to_public_task_payload(task_id, contract=contract)
payload = app.state.task_query_service.public_task_payload(task_id, state_override=state_override)
if payload is None:
raise KeyError(task_id)
return payload
def _public_task_summary(task_id: str, state_override: Optional[dict] = None) -> dict:
state = state_override or app.state.task_results[task_id]
contract = _load_task_contract(task_id, state)
return app.state.job_service.to_task_summary(task_id, contract=contract)
summary = app.state.task_query_service.public_task_summary(task_id, state_override=state_override)
if summary is None:
raise KeyError(task_id)
return summary
# ============== Reports ==============

View File

@@ -1,8 +1,10 @@ from .analysis_service import AnalysisService
from .analysis_service import AnalysisService
from .job_service import JobService
from .migration_flags import MigrationFlags, load_migration_flags
from .request_context import RequestContext, build_request_context
from .request_context import RequestContext, build_request_context, clone_request_context
from .result_store import ResultStore
from .task_command_service import TaskCommandService
from .task_query_service import TaskQueryService
__all__ = [
"AnalysisService",
@@ -10,6 +12,9 @@ __all__ = [
"MigrationFlags",
"RequestContext",
"ResultStore",
"TaskCommandService",
"TaskQueryService",
"build_request_context",
"clone_request_context",
"load_migration_flags",
]

View File

@@ -3,8 +3,9 @@ from __future__ import annotations
import asyncio
import json
import time
from dataclasses import replace
from datetime import datetime
from typing import Awaitable, Callable, Optional
from typing import Any, Awaitable, Callable, Optional
from .executor import AnalysisExecutionOutput, AnalysisExecutor, AnalysisExecutorError
from .request_context import RequestContext
@@ -30,6 +31,8 @@ class AnalysisService:
self.job_service = job_service
self.retry_count = retry_count
self.retry_base_delay_secs = retry_base_delay_secs
self.local_recovery_limit = 1
self.provider_probe_limit = 1
async def start_analysis(
self,
@@ -56,7 +59,11 @@
task_id=task_id,
ticker=ticker,
date=date,
request_context=request_context,
request_context=await self._enrich_request_context(
request_context,
ticker=ticker,
date=date,
),
broadcast_progress=broadcast_progress,
)
)
@@ -95,7 +102,11 @@
task_id=task_id,
date=date,
watchlist=watchlist,
request_context=request_context,
request_context=self._freeze_batch_peer_snapshot(
request_context,
date=date,
watchlist=watchlist,
),
broadcast_progress=broadcast_progress,
)
)
@@ -117,20 +128,36 @@
broadcast_progress: BroadcastFn,
) -> None:
start_time = time.monotonic()
state = self.job_service.task_results[task_id]
evidence_attempts: list[dict[str, Any]] = []
budget_state = self._initial_budget_state(request_context)
try:
output = await self.executor.execute(
await self._set_analysis_runtime_state(
task_id=task_id,
status="collecting_evidence",
current_stage="analysts",
started_at=start_time,
broadcast_progress=broadcast_progress,
budget_state=budget_state,
)
baseline_context = self._with_attempt_metadata(
request_context,
attempt_index=0,
attempt_mode="baseline",
probe_mode="none",
stdout_timeout_secs=budget_state["baseline_timeout_secs"],
cost_cap=None,
)
output, evidence_attempts, tentative_classification = await self._execute_with_runtime_policy(
task_id=task_id,
ticker=ticker,
date=date,
request_context=request_context,
on_stage=lambda stage: self._handle_analysis_stage(
task_id=task_id,
stage_name=stage,
started_at=start_time,
request_context=baseline_context,
broadcast_progress=broadcast_progress,
),
started_at=start_time,
evidence_attempts=evidence_attempts,
budget_state=budget_state,
)
state = self.job_service.task_results[task_id]
elapsed_seconds = int(time.monotonic() - start_time)
contract = output.to_result_contract(
task_id=task_id,
@@ -140,6 +167,9 @@
elapsed_seconds=elapsed_seconds,
current_stage=ANALYSIS_STAGE_NAMES[-1],
)
contract["evidence"] = self._build_evidence_summary(evidence_attempts, fallback=output.observation)
contract["tentative_classification"] = tentative_classification
contract["budget_state"] = budget_state
result_ref = self.result_store.save_result_contract(task_id, contract)
self.job_service.complete_analysis_job(
task_id,
@@ -148,6 +178,10 @@
executor_type=request_context.executor_type,
)
except AnalysisExecutorError as exc:
observation = exc.observation or {}
if observation and self._should_append_observation(evidence_attempts, observation):
evidence_attempts.append(observation)
tentative_classification = self._classify_attempts(evidence_attempts) if evidence_attempts else None
self._fail_analysis_state(
task_id=task_id,
message=str(exc),
@@ -158,8 +192,13 @@
"degraded": bool(exc.degrade_reason_codes) or bool(exc.data_quality),
"reason_codes": list(exc.degrade_reason_codes),
"source_diagnostics": exc.source_diagnostics or {},
} if (exc.degrade_reason_codes or exc.data_quality or exc.source_diagnostics) else None,
}
if (exc.degrade_reason_codes or exc.data_quality or exc.source_diagnostics)
else None,
data_quality=exc.data_quality,
evidence_summary=self._build_evidence_summary(evidence_attempts, fallback=observation or None),
tentative_classification=tentative_classification,
budget_state=budget_state,
)
except Exception as exc:
self._fail_analysis_state(
@@ -170,6 +209,9 @@
retryable=False,
degradation=None,
data_quality=None,
evidence_summary=self._build_evidence_summary(evidence_attempts),
tentative_classification=self._classify_attempts(evidence_attempts) if evidence_attempts else None,
budget_state=budget_state,
)
await broadcast_progress(task_id, self.job_service.task_results[task_id])
@@ -228,11 +270,16 @@
ticker=ticker,
stock=stock,
date=date,
request_context=request_context,
request_context=await self._enrich_request_context(
request_context,
ticker=ticker,
date=date,
stock=stock,
),
)
if success and rec is not None:
if rec is not None:
self.job_service.append_portfolio_result(task_id, rec)
else:
if not success:
self.job_service.mark_portfolio_failure(task_id)
await broadcast_progress(task_id, self.job_service.task_results[task_id])
@@ -252,33 +299,644 @@
date: str,
request_context: RequestContext,
) -> tuple[bool, Optional[dict]]:
last_error: Optional[str] = None
for attempt in range(self.retry_count + 1):
child_task_id = f"{task_id}_{stock['_idx']}"
evidence_attempts: list[dict[str, Any]] = []
budget_state = self._initial_budget_state(request_context)
baseline_context = self._with_attempt_metadata(
request_context,
attempt_index=0,
attempt_mode="baseline",
probe_mode="none",
stdout_timeout_secs=budget_state["baseline_timeout_secs"],
cost_cap=None,
)
try:
output = await self.executor.execute(
task_id=f"{task_id}_{stock['_idx']}",
output = await self._execute_portfolio_with_runtime_policy(
task_id=child_task_id,
ticker=ticker,
date=date,
request_context=request_context,
request_context=baseline_context,
evidence_attempts=evidence_attempts,
budget_state=budget_state,
)
tentative_classification = self._classify_attempts(evidence_attempts)
rec = self._build_recommendation_record(
output=output,
ticker=ticker,
stock=stock,
date=date,
evidence_summary=self._build_evidence_summary(evidence_attempts, fallback=output.observation),
tentative_classification=tentative_classification,
budget_state=budget_state,
)
self.result_store.save_recommendation(date, ticker, rec)
return True, rec
except AnalysisExecutorError as exc:
if exc.observation and self._should_append_observation(evidence_attempts, exc.observation):
evidence_attempts.append(exc.observation)
if exc.observation:
self.job_service.task_results[task_id]["last_error"] = exc.observation.get("message") or str(exc)
else:
self.job_service.task_results[task_id]["last_error"] = str(exc)
rec = self._build_failed_recommendation_record(
ticker=ticker,
stock=stock,
date=date,
evidence_summary=self._build_evidence_summary(evidence_attempts),
tentative_classification=self._classify_attempts(evidence_attempts) if evidence_attempts else None,
budget_state=budget_state,
exc=exc,
)
self.result_store.save_recommendation(date, ticker, rec)
return False, rec
except Exception as exc:
last_error = str(exc)
if attempt < self.retry_count:
await asyncio.sleep(self.retry_base_delay_secs ** attempt)
if last_error:
self.job_service.task_results[task_id]["last_error"] = last_error
self.job_service.task_results[task_id]["last_error"] = str(exc)
return False, None
async def _execute_portfolio_with_runtime_policy(
self,
*,
task_id: str,
ticker: str,
date: str,
request_context: RequestContext,
evidence_attempts: list[dict[str, Any]],
budget_state: dict[str, Any],
) -> AnalysisExecutionOutput:
try:
output = await self.executor.execute(
task_id=task_id,
ticker=ticker,
date=date,
request_context=request_context,
)
if output.observation:
evidence_attempts.append(output.observation)
return output
except AnalysisExecutorError as baseline_exc:
if baseline_exc.observation:
evidence_attempts.append(baseline_exc.observation)
tentative_classification = self._classify_attempts(evidence_attempts)
if self._can_use_local_recovery(budget_state):
budget_state["local_recovery_used"] = True
budget_state["local_recovery_cost_used"] += 1.0
recovery_context = self._with_attempt_metadata(
request_context,
attempt_index=1,
attempt_mode="local_recovery",
probe_mode="none",
stdout_timeout_secs=budget_state["local_recovery_timeout_secs"],
cost_cap=budget_state["local_recovery_cost_cap"],
)
try:
output = await self.executor.execute(
task_id=task_id,
ticker=ticker,
date=date,
request_context=recovery_context,
)
if output.observation:
evidence_attempts.append(output.observation)
return output
except AnalysisExecutorError as recovery_exc:
if recovery_exc.observation:
evidence_attempts.append(recovery_exc.observation)
tentative_classification = self._classify_attempts(evidence_attempts)
if self._can_use_provider_probe(budget_state, tentative_classification):
budget_state["provider_probe_used"] = True
budget_state["provider_probe_cost_used"] += 1.0
probe_context = self._build_probe_context(request_context, budget_state)
try:
output = await self.executor.execute(
task_id=task_id,
ticker=ticker,
date=date,
request_context=probe_context,
)
if output.observation:
evidence_attempts.append(output.observation)
return output
except AnalysisExecutorError as probe_exc:
if probe_exc.observation:
evidence_attempts.append(probe_exc.observation)
raise probe_exc
raise recovery_exc
raise baseline_exc
async def _enrich_request_context(
self,
request_context: RequestContext,
*,
ticker: str,
date: str,
stock: Optional[dict[str, Any]] = None,
) -> RequestContext:
metadata = dict(request_context.metadata or {})
if not str(metadata.get("portfolio_context") or "").strip():
metadata["portfolio_context"] = await self._build_portfolio_context(
ticker=ticker,
stock=stock,
)
if not str(metadata.get("peer_context") or "").strip():
metadata["peer_context"] = self._build_peer_context(
ticker=ticker,
date=date,
peer_snapshot=metadata.get("peer_recommendation_snapshot"),
watchlist_snapshot=metadata.get("peer_context_batch_watchlist"),
)
metadata.setdefault("peer_context_mode", "PORTFOLIO_SNAPSHOT")
elif not str(metadata.get("peer_context_mode") or "").strip():
metadata["peer_context_mode"] = "CALLER_PROVIDED"
return replace(request_context, metadata=metadata)
def _freeze_batch_peer_snapshot(
self,
request_context: RequestContext,
*,
date: str,
watchlist: list[dict[str, Any]],
) -> RequestContext:
metadata = dict(request_context.metadata or {})
if metadata.get("peer_recommendation_snapshot") is not None:
return request_context
snapshot = (
self.result_store.get_recommendations(date=date, limit=200, offset=0).get("recommendations", [])
)
metadata["peer_recommendation_snapshot"] = snapshot
metadata.setdefault("peer_context_mode", "PORTFOLIO_SNAPSHOT")
metadata["peer_context_batch_watchlist"] = [
{"ticker": item.get("ticker"), "name": item.get("name")}
for item in watchlist
]
return replace(request_context, metadata=metadata)
async def _build_portfolio_context(
self,
*,
ticker: str,
stock: Optional[dict[str, Any]] = None,
) -> str:
try:
positions = await self.result_store.get_positions(None)
except Exception:
positions = []
if not positions:
watchlist = self.result_store.get_watchlist() or []
if watchlist:
return (
f"No recorded open positions. Watchlist size={len(watchlist)}. "
f"Current analysis target={ticker} ({(stock or {}).get('name', ticker)})."
)
return f"No recorded open positions for the current book. Current analysis target={ticker}."
def _position_value(pos: dict[str, Any]) -> float:
price = pos.get("current_price")
if price is None:
price = pos.get("cost_price") or 0.0
return float(price or 0.0) * float(pos.get("shares") or 0.0)
sorted_positions = sorted(positions, key=_position_value, reverse=True)
current_positions = [pos for pos in positions if pos.get("ticker") == ticker]
top_positions = sorted_positions[:4]
losing_positions = sorted(
[pos for pos in positions if pos.get("unrealized_pnl_pct") is not None],
key=lambda pos: float(pos.get("unrealized_pnl_pct") or 0.0),
)[:3]
lines = [f"Current portfolio has {len(positions)} open position(s)."]
if current_positions:
current = current_positions[0]
pnl_pct = current.get("unrealized_pnl_pct")
pnl_text = (
f", unrealized_pnl_pct={float(pnl_pct):.2f}%"
if pnl_pct is not None
else ""
)
lines.append(
"Existing position in target: "
f"{ticker}, shares={current.get('shares')}, cost={current.get('cost_price')}{pnl_text}."
)
else:
lines.append(f"No existing position in target ticker {ticker}.")
if top_positions:
top_text = ", ".join(
f"{pos.get('ticker')} value~{_position_value(pos):.0f}"
for pos in top_positions
)
lines.append(f"Largest current positions: {top_text}.")
if losing_positions:
losing_text = ", ".join(
f"{pos.get('ticker')} pnl={float(pos.get('unrealized_pnl_pct') or 0.0):.2f}%"
for pos in losing_positions
)
lines.append(f"Weakest current positions by unrealized P&L: {losing_text}.")
return " ".join(lines)
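The portfolio summary above ranks positions by approximate market value, falling back to cost price when no current price is recorded. A sketch of that ranking with made-up positions (the sample data is illustrative):

```python
def position_value(pos: dict) -> float:
    """Market value of a position; falls back to cost price when there is
    no current price (mirrors the nested helper in _build_portfolio_context)."""
    price = pos.get("current_price")
    if price is None:
        price = pos.get("cost_price") or 0.0
    return float(price or 0.0) * float(pos.get("shares") or 0.0)

# Fabricated sample book: BBB has no live quote, so its cost price is used.
positions = [
    {"ticker": "AAA", "current_price": 10.0, "shares": 100},
    {"ticker": "BBB", "current_price": None, "cost_price": 50.0, "shares": 10},
    {"ticker": "CCC", "current_price": 2.0, "shares": 1000},
]
top = sorted(positions, key=position_value, reverse=True)[:2]
```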
def _build_peer_context(
self,
*,
ticker: str,
date: str,
peer_snapshot: Optional[list[dict[str, Any]]] = None,
watchlist_snapshot: Optional[list[dict[str, Any]]] = None,
) -> str:
recommendations = peer_snapshot
if recommendations is None:
recommendations = (
self.result_store.get_recommendations(date=date, limit=20, offset=0).get("recommendations", [])
)
peers = [rec for rec in recommendations if rec.get("ticker") != ticker]
if not peers:
watchlist = watchlist_snapshot or self.result_store.get_watchlist() or []
if watchlist:
sample = ", ".join(item.get("ticker", "") for item in watchlist[:5] if item.get("ticker"))
return (
"No prior recommendation peers are available for this date yet. "
f"Current watchlist sample: {sample}."
)
return "No prior recommendation peers are available for this date yet."
def _decision_rank(rec: dict[str, Any]) -> tuple[int, float]:
rating = (((rec.get("result") or {}).get("decision")) or "").upper()
confidence = float(((rec.get("result") or {}).get("confidence")) or 0.0)
direction = 1 if rating in {"BUY", "OVERWEIGHT"} else -1 if rating in {"SELL", "UNDERWEIGHT"} else 0
return direction, confidence
bullish = sorted(
[rec for rec in peers if _decision_rank(rec)[0] > 0],
key=lambda rec: _decision_rank(rec)[1],
reverse=True,
)[:3]
bearish = sorted(
[rec for rec in peers if _decision_rank(rec)[0] < 0],
key=lambda rec: _decision_rank(rec)[1],
reverse=True,
)[:3]
neutral = sorted(
[rec for rec in peers if _decision_rank(rec)[0] == 0],
key=lambda rec: _decision_rank(rec)[1],
reverse=True,
)[:2]
lines = [
"Peer context is auto-derived from a portfolio/book snapshot and is not industry-normalized. "
"It should be used for broad book-relative comparison, not as evidence for SAME_THEME_RANK."
]
if bullish:
lines.append(
"Current strongest bullish peers: "
+ ", ".join(
f"{rec.get('ticker')}:{((rec.get('result') or {}).get('decision'))}"
for rec in bullish
)
+ "."
)
if bearish:
lines.append(
"Current strongest bearish peers: "
+ ", ".join(
f"{rec.get('ticker')}:{((rec.get('result') or {}).get('decision'))}"
for rec in bearish
)
+ "."
)
if neutral and not bullish and not bearish:
lines.append(
"Current neutral peers: "
+ ", ".join(
f"{rec.get('ticker')}:{((rec.get('result') or {}).get('decision'))}"
for rec in neutral
)
+ "."
)
return " ".join(lines)
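Peer grouping above hinges on `_decision_rank`, which maps each recommendation to a (direction, confidence) pair; the bullish and bearish buckets are then sorted by confidence. A standalone sketch with fabricated recommendations:

```python
def decision_rank(rec: dict) -> tuple[int, float]:
    """Map a recommendation to (direction, confidence): +1 for bullish
    ratings, -1 for bearish, 0 for everything else."""
    result = rec.get("result") or {}
    rating = str(result.get("decision") or "").upper()
    confidence = float(result.get("confidence") or 0.0)
    if rating in {"BUY", "OVERWEIGHT"}:
        direction = 1
    elif rating in {"SELL", "UNDERWEIGHT"}:
        direction = -1
    else:
        direction = 0
    return direction, confidence

# Fabricated peer recommendations for illustration only.
peers = [
    {"ticker": "X", "result": {"decision": "BUY", "confidence": 0.8}},
    {"ticker": "Y", "result": {"decision": "SELL", "confidence": 0.6}},
    {"ticker": "Z", "result": {"decision": "HOLD", "confidence": 0.9}},
]
bullish = sorted(
    (r for r in peers if decision_rank(r)[0] > 0),
    key=lambda r: decision_rank(r)[1],
    reverse=True,
)
```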
async def _execute_with_runtime_policy(
self,
*,
task_id: str,
ticker: str,
date: str,
request_context: RequestContext,
broadcast_progress: BroadcastFn,
started_at: float,
evidence_attempts: list[dict[str, Any]],
budget_state: dict[str, Any],
) -> tuple[AnalysisExecutionOutput, list[dict[str, Any]], dict[str, Any]]:
try:
output = await self._execute_once(
task_id=task_id,
ticker=ticker,
date=date,
request_context=request_context,
started_at=started_at,
broadcast_progress=broadcast_progress,
)
self._record_observation(evidence_attempts, output.observation)
return output, evidence_attempts, self._classify_attempts(evidence_attempts)
except AnalysisExecutorError as baseline_exc:
self._record_observation(evidence_attempts, baseline_exc.observation)
tentative_classification = self._classify_attempts(evidence_attempts)
if self._can_use_local_recovery(budget_state):
budget_state["local_recovery_used"] = True
budget_state["local_recovery_cost_used"] += 1.0
await self._set_analysis_runtime_state(
task_id=task_id,
status="auto_recovering",
current_stage=self.job_service.task_results[task_id].get("current_stage"),
started_at=started_at,
broadcast_progress=broadcast_progress,
evidence_summary=self._build_evidence_summary(evidence_attempts),
tentative_classification=tentative_classification,
budget_state=budget_state,
)
recovery_context = self._with_attempt_metadata(
request_context,
attempt_index=1,
attempt_mode="local_recovery",
probe_mode="none",
stdout_timeout_secs=budget_state["local_recovery_timeout_secs"],
cost_cap=budget_state["local_recovery_cost_cap"],
)
try:
output = await self._execute_once(
task_id=task_id,
ticker=ticker,
date=date,
request_context=recovery_context,
started_at=started_at,
broadcast_progress=broadcast_progress,
)
self._record_observation(evidence_attempts, output.observation)
return output, evidence_attempts, self._classify_attempts(evidence_attempts)
except AnalysisExecutorError as recovery_exc:
self._record_observation(evidence_attempts, recovery_exc.observation)
tentative_classification = self._classify_attempts(evidence_attempts)
if self._can_use_provider_probe(budget_state, tentative_classification):
budget_state["provider_probe_used"] = True
budget_state["provider_probe_cost_used"] += 1.0
await self._set_analysis_runtime_state(
task_id=task_id,
status="classification_pending",
current_stage=self.job_service.task_results[task_id].get("current_stage"),
started_at=started_at,
broadcast_progress=broadcast_progress,
evidence_summary=self._build_evidence_summary(evidence_attempts),
tentative_classification=tentative_classification,
budget_state=budget_state,
)
await self._set_analysis_runtime_state(
task_id=task_id,
status="probing_provider",
current_stage=self.job_service.task_results[task_id].get("current_stage"),
started_at=started_at,
broadcast_progress=broadcast_progress,
evidence_summary=self._build_evidence_summary(evidence_attempts),
tentative_classification=tentative_classification,
budget_state=budget_state,
)
probe_context = self._build_probe_context(request_context, budget_state)
try:
output = await self._execute_once(
task_id=task_id,
ticker=ticker,
date=date,
request_context=probe_context,
started_at=started_at,
broadcast_progress=broadcast_progress,
)
self._record_observation(evidence_attempts, output.observation)
return output, evidence_attempts, self._classify_attempts(evidence_attempts)
except AnalysisExecutorError as probe_exc:
self._record_observation(evidence_attempts, probe_exc.observation)
raise probe_exc
raise recovery_exc
raise baseline_exc
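The runtime policy above is a three-rung ladder: a baseline attempt, one local-recovery retry when the budget allows, then one provider probe when the tentative classification points at the provider; when a rung is not allowed, the deepest failure so far is re-raised. The control flow can be sketched generically (the names and the `may_escalate` predicate are illustrative, and the real service catches `AnalysisExecutorError` rather than bare `Exception`):

```python
from typing import Any, Callable

def run_with_escalation(
    attempts: list[tuple[str, Callable[[], Any]]],
    may_escalate: Callable[[str], bool],
) -> Any:
    """Try each (name, fn) rung in order; escalate to the next rung only
    when may_escalate(name) allows it, else re-raise the last failure."""
    last_exc: Exception | None = None
    for idx, (name, fn) in enumerate(attempts):
        if idx > 0 and not may_escalate(name):
            break
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
    raise last_exc
```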
async def _execute_once(
self,
*,
task_id: str,
ticker: str,
date: str,
request_context: RequestContext,
started_at: float,
broadcast_progress: BroadcastFn,
) -> AnalysisExecutionOutput:
return await self.executor.execute(
task_id=task_id,
ticker=ticker,
date=date,
request_context=request_context,
on_stage=lambda stage: self._handle_analysis_stage(
task_id=task_id,
stage_name=stage,
started_at=started_at,
broadcast_progress=broadcast_progress,
),
)
async def _set_analysis_runtime_state(
self,
*,
task_id: str,
status: str,
current_stage: Optional[str],
started_at: float,
broadcast_progress: BroadcastFn,
evidence_summary: Optional[dict] = None,
tentative_classification: Optional[dict] = None,
budget_state: Optional[dict] = None,
) -> None:
state = self.job_service.task_results[task_id]
state["status"] = status
if current_stage is not None:
state["current_stage"] = current_stage
state["elapsed_seconds"] = int(time.monotonic() - started_at)
state["elapsed"] = state["elapsed_seconds"]
if evidence_summary is not None:
state["evidence_summary"] = evidence_summary
if tentative_classification is not None:
state["tentative_classification"] = tentative_classification
if budget_state is not None:
state["budget_state"] = dict(budget_state)
self.result_store.save_task_status(task_id, state)
await broadcast_progress(task_id, state)
def _initial_budget_state(self, request_context: RequestContext) -> dict[str, Any]:
metadata = dict(request_context.metadata or {})
baseline_timeout = float(metadata.get("stdout_timeout_secs", 300.0))
local_recovery_timeout = float(metadata.get("local_recovery_timeout_secs", min(baseline_timeout, 180.0)))
provider_probe_timeout = float(metadata.get("provider_probe_timeout_secs", min(baseline_timeout, 90.0)))
return {
"local_recovery_used": False,
"provider_probe_used": False,
"local_recovery_limit": self.local_recovery_limit,
"provider_probe_limit": self.provider_probe_limit,
"local_recovery_cost_cap": float(metadata.get("local_recovery_cost_cap", 1.0)),
"provider_probe_cost_cap": float(metadata.get("provider_probe_cost_cap", 1.0)),
"local_recovery_cost_used": 0.0,
"provider_probe_cost_used": 0.0,
"baseline_timeout_secs": baseline_timeout,
"local_recovery_timeout_secs": local_recovery_timeout,
"provider_probe_timeout_secs": provider_probe_timeout,
}
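The budget state above seeds per-task escalation limits from request metadata, capping the recovery and probe timeouts at the baseline. A sketch of the initializer (it drops the class-level attempt limits for brevity):

```python
def initial_budget_state(metadata: dict) -> dict:
    """Seed the per-task escalation budget from request metadata, capping
    recovery/probe timeouts at the baseline stdout timeout."""
    baseline = float(metadata.get("stdout_timeout_secs", 300.0))
    return {
        "local_recovery_used": False,
        "provider_probe_used": False,
        "local_recovery_cost_cap": float(metadata.get("local_recovery_cost_cap", 1.0)),
        "provider_probe_cost_cap": float(metadata.get("provider_probe_cost_cap", 1.0)),
        "local_recovery_cost_used": 0.0,
        "provider_probe_cost_used": 0.0,
        "baseline_timeout_secs": baseline,
        "local_recovery_timeout_secs": float(
            metadata.get("local_recovery_timeout_secs", min(baseline, 180.0))
        ),
        "provider_probe_timeout_secs": float(
            metadata.get("provider_probe_timeout_secs", min(baseline, 90.0))
        ),
    }
```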
def _with_attempt_metadata(
self,
request_context: RequestContext,
*,
attempt_index: int,
attempt_mode: str,
probe_mode: str,
stdout_timeout_secs: float,
cost_cap: Optional[float],
) -> RequestContext:
metadata = dict(request_context.metadata or {})
metadata.update({
"attempt_index": attempt_index,
"attempt_mode": attempt_mode,
"probe_mode": probe_mode,
"stdout_timeout_secs": stdout_timeout_secs,
"cost_cap": cost_cap,
"evidence_id": f"{request_context.request_id}:{attempt_mode}:{attempt_index}",
})
return replace(request_context, metadata=metadata)
def _build_probe_context(self, request_context: RequestContext, budget_state: dict[str, Any]) -> RequestContext:
selected = tuple(request_context.selected_analysts or ("market",))
probe_selected = ("market",) if "market" in selected else (selected[0],)
return self._with_attempt_metadata(
replace(
request_context,
selected_analysts=probe_selected,
analysis_prompt_style=request_context.analysis_prompt_style or "compact",
),
attempt_index=2,
attempt_mode="provider_probe",
probe_mode="provider_boundary",
stdout_timeout_secs=budget_state["provider_probe_timeout_secs"],
cost_cap=budget_state["provider_probe_cost_cap"],
)
def _build_evidence_summary(
self,
observations: list[dict[str, Any]],
*,
fallback: Optional[dict[str, Any]] = None,
) -> dict[str, Any]:
last_observation = observations[-1] if observations else fallback
return {
"attempts": observations,
"last_observation": last_observation,
}
def _record_observation(
self,
observations: list[dict[str, Any]],
observation: Optional[dict[str, Any]],
) -> None:
if observation and self._should_append_observation(observations, observation):
observations.append(observation)
def _can_use_local_recovery(self, budget_state: dict[str, Any]) -> bool:
return (
not budget_state["local_recovery_used"]
and budget_state["local_recovery_cost_used"] < budget_state["local_recovery_cost_cap"]
)
def _can_use_provider_probe(
self,
budget_state: dict[str, Any],
tentative_classification: dict[str, Any],
) -> bool:
return (
not budget_state["provider_probe_used"]
and budget_state["provider_probe_cost_used"] < budget_state["provider_probe_cost_cap"]
and tentative_classification.get("kind") in {"interaction_effect", "provider_boundary"}
)
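Both escalation gates above combine a one-shot flag with a cost cap, and the provider probe additionally requires that the failure not look purely local. A sketch of the two predicates as free functions:

```python
def can_use_local_recovery(budget: dict) -> bool:
    """One local-recovery attempt, and only while under its cost cap."""
    return (
        not budget["local_recovery_used"]
        and budget["local_recovery_cost_used"] < budget["local_recovery_cost_cap"]
    )

def can_use_provider_probe(budget: dict, classification: dict) -> bool:
    """Probe the provider only when the failure looks like a provider or
    interaction issue, never for clearly local-runtime failures."""
    return (
        not budget["provider_probe_used"]
        and budget["provider_probe_cost_used"] < budget["provider_probe_cost_cap"]
        and classification.get("kind") in {"interaction_effect", "provider_boundary"}
    )
```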
def _classify_attempts(self, observations: list[dict[str, Any]]) -> dict[str, Any]:
if not observations:
return {
"kind": "interaction_effect",
"tentative": True,
"basis": ["no_observation"],
}
if any(
observation.get("attempt_mode") == "local_recovery" and observation.get("status") == "completed"
for observation in observations
):
return {
"kind": "local_runtime",
"tentative": True,
"basis": ["local_recovery_succeeded"],
"last_observation_code": observations[-1].get("observation_code"),
}
if any(
observation.get("attempt_mode") == "provider_probe" and observation.get("status") == "completed"
for observation in observations
):
return {
"kind": "interaction_effect",
"tentative": True,
"basis": ["provider_probe_succeeded_after_runtime_failures"],
"last_observation_code": observations[-1].get("observation_code"),
}
latest_kind = self._classify_observation(observations[-1])
return {
"kind": latest_kind,
"tentative": True,
"basis": [obs.get("observation_code") for obs in observations if obs.get("observation_code")],
"last_observation_code": observations[-1].get("observation_code"),
}
@staticmethod
def _should_append_observation(observations: list[dict[str, Any]], observation: dict[str, Any]) -> bool:
if not observations:
return True
last = observations[-1]
if last.get("evidence_id") and observation.get("evidence_id"):
return last.get("evidence_id") != observation.get("evidence_id")
return last != observation
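Consecutive observations are deduplicated by `evidence_id` when both sides carry one, falling back to whole-dict equality otherwise. As a standalone sketch:

```python
def should_append_observation(observations: list[dict], observation: dict) -> bool:
    """Deduplicate consecutive attempt observations: compare evidence_id
    when both sides have one, else compare the full dicts."""
    if not observations:
        return True
    last = observations[-1]
    if last.get("evidence_id") and observation.get("evidence_id"):
        return last.get("evidence_id") != observation.get("evidence_id")
    return last != observation
```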
def _classify_observation(self, observation: dict[str, Any]) -> str:
data_quality = observation.get("data_quality") or {}
status = str(observation.get("status") or "").lower()
state = str(data_quality.get("state") or "").lower()
message = str(observation.get("message") or "").lower()
code = str(observation.get("observation_code") or "").lower()
if status == "completed":
return "no_issue"
if state == "provider_mismatch" or "api key not configured" in message:
return "local_runtime"
if code == "analysis_protocol_failed" or "required markers" in message or "parse result_meta" in message:
return "local_runtime"
provider_markers = (
"429",
"529",
"overloaded",
"temporarily unavailable",
"connection reset",
"rate limit",
)
if any(marker in message for marker in provider_markers):
return "provider_boundary"
if "timed out" in message or code == "subprocess_stdout_timeout":
return "interaction_effect"
return "interaction_effect"
def _fail_analysis_state(
self,
*,
@ -289,6 +947,9 @@ class AnalysisService:
retryable: bool,
degradation: Optional[dict],
data_quality: Optional[dict],
evidence_summary: Optional[dict],
tentative_classification: Optional[dict],
budget_state: Optional[dict],
) -> None:
state = self.job_service.task_results[task_id]
state["status"] = "failed"
@ -297,6 +958,9 @@ class AnalysisService:
state["result"] = None
state["degradation_summary"] = degradation
state["data_quality_summary"] = data_quality
state["evidence_summary"] = evidence_summary
state["tentative_classification"] = tentative_classification
state["budget_state"] = budget_state or {}
state["error"] = {
"code": code,
"message": message,
@ -312,12 +976,17 @@ class AnalysisService:
date: str,
output: AnalysisExecutionOutput | None = None,
stdout: str | None = None,
evidence_summary: Optional[dict] = None,
tentative_classification: Optional[dict] = None,
budget_state: Optional[dict] = None,
error_message: Optional[str] = None,
) -> dict:
if output is not None:
decision = output.decision
quant_signal = output.quant_signal
llm_signal = output.llm_signal
confidence = output.confidence
llm_decision_structured = output.llm_decision_structured
data_quality = output.data_quality
degrade_reason_codes = list(output.degrade_reason_codes)
else:
@ -325,6 +994,7 @@ class AnalysisService:
quant_signal = None
llm_signal = None
confidence = None
llm_decision_structured = None
data_quality = None
degrade_reason_codes = []
for line in (stdout or "").splitlines():
@ -336,6 +1006,7 @@ class AnalysisService:
quant_signal = detail.get("quant_signal")
llm_signal = detail.get("llm_signal")
confidence = detail.get("confidence")
llm_decision_structured = detail.get("llm_decision_structured")
if line.startswith("ANALYSIS_COMPLETE:"):
decision = line.split(":", 1)[1].strip()
@ -363,6 +1034,7 @@ class AnalysisService:
"direction": 1 if llm_signal in {"BUY", "OVERWEIGHT"} else -1 if llm_signal in {"SELL", "UNDERWEIGHT"} else 0,
"rating": llm_signal,
"available": llm_signal is not None,
"structured": llm_decision_structured,
},
},
"degraded": quant_signal is None or llm_signal is None,
@ -372,11 +1044,78 @@ class AnalysisService:
"reason_codes": degrade_reason_codes,
},
"data_quality": data_quality,
"evidence": evidence_summary,
"tentative_classification": tentative_classification,
"budget_state": budget_state or {},
"error": error_message,
"compat": {
"analysis_date": date,
"decision": decision,
"quant_signal": quant_signal,
"llm_signal": llm_signal,
"confidence": confidence,
"llm_decision_structured": llm_decision_structured,
},
}
@staticmethod
def _build_failed_recommendation_record(
*,
ticker: str,
stock: dict,
date: str,
evidence_summary: Optional[dict],
tentative_classification: Optional[dict],
budget_state: Optional[dict],
exc: AnalysisExecutorError,
) -> dict:
return {
"contract_version": "v1alpha1",
"ticker": ticker,
"name": stock.get("name", ticker),
"date": date,
"status": "failed",
"created_at": datetime.now().isoformat(),
"result": {
"decision": None,
"confidence": None,
"signals": {
"merged": {
"direction": 0,
"rating": None,
},
"quant": {
"direction": 0,
"rating": None,
"available": False,
},
"llm": {
"direction": 0,
"rating": None,
"available": False,
},
},
"degraded": False,
},
"degradation": {
"degraded": bool(exc.degrade_reason_codes) or bool(exc.data_quality),
"reason_codes": list(exc.degrade_reason_codes),
"source_diagnostics": exc.source_diagnostics or {},
},
"data_quality": exc.data_quality,
"evidence": evidence_summary,
"tentative_classification": tentative_classification,
"budget_state": budget_state or {},
"error": {
"code": exc.code,
"message": str(exc),
"retryable": exc.retryable,
},
"compat": {
"analysis_date": date,
"decision": None,
"quant_signal": None,
"llm_signal": None,
"confidence": None,
},
}
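The recovery-budget gates above are pure functions of the budget dict. A minimal standalone sketch of that gating logic (the dict keys mirror the `budget_state` fields read by `_can_use_local_recovery` and `_can_use_provider_probe`; the sample cost values are made up):

```python
# Standalone sketch of the recovery-budget gating above.
# Keys mirror the budget_state fields the service reads; values are illustrative.

def can_use_local_recovery(budget_state: dict) -> bool:
    # At most one local recovery attempt, and only while under its cost cap.
    return (
        not budget_state["local_recovery_used"]
        and budget_state["local_recovery_cost_used"] < budget_state["local_recovery_cost_cap"]
    )

def can_use_provider_probe(budget_state: dict, tentative_kind: str) -> bool:
    # Probes are reserved for failures that look provider-related.
    return (
        not budget_state["provider_probe_used"]
        and budget_state["provider_probe_cost_used"] < budget_state["provider_probe_cost_cap"]
        and tentative_kind in {"interaction_effect", "provider_boundary"}
    )

budget = {
    "local_recovery_used": False,
    "local_recovery_cost_used": 0.0,
    "local_recovery_cost_cap": 1.0,
    "provider_probe_used": True,
    "provider_probe_cost_used": 0.5,
    "provider_probe_cost_cap": 1.0,
}
print(can_use_local_recovery(budget))                       # True: unused and under cap
print(can_use_provider_probe(budget, "provider_boundary"))  # False: probe already spent
```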

View File

@ -6,7 +6,7 @@ import os
import tempfile
from dataclasses import dataclass
from pathlib import Path
from typing import Awaitable, Callable, Optional, Protocol
from typing import Any, Awaitable, Callable, Optional, Protocol
from .request_context import (
CONTRACT_VERSION,
@ -21,6 +21,8 @@ LEGACY_ANALYSIS_SCRIPT_TEMPLATE = """
import json
import os
import sys
import threading
import time
from pathlib import Path
ticker = sys.argv[1]
@ -34,7 +36,27 @@ sys.modules["mini_racer"] = py_mini_racer
from orchestrator.config import OrchestratorConfig
from orchestrator.orchestrator import TradingOrchestrator
from tradingagents.default_config import get_default_config
from tradingagents.default_config import get_default_config, normalize_runtime_llm_config
def _provider_api_key(provider: str):
provider = str(provider or "").lower()
if os.environ.get("TRADINGAGENTS_PROVIDER_API_KEY"):
return os.environ["TRADINGAGENTS_PROVIDER_API_KEY"]
env_names = {
"anthropic": ("ANTHROPIC_API_KEY", "MINIMAX_API_KEY"),
"openai": ("OPENAI_API_KEY",),
"openrouter": ("OPENROUTER_API_KEY",),
"xai": ("XAI_API_KEY",),
"google": ("GOOGLE_API_KEY",),
}.get(provider, tuple())
for env_name in env_names:
value = os.environ.get(env_name)
if value:
return value
return None
trading_config = get_default_config()
trading_config["project_dir"] = os.path.join(repo_root, "tradingagents")
@ -70,6 +92,42 @@ if os.environ.get("TRADINGAGENTS_LLM_TIMEOUT"):
trading_config["llm_timeout"] = float(os.environ["TRADINGAGENTS_LLM_TIMEOUT"])
if os.environ.get("TRADINGAGENTS_LLM_MAX_RETRIES"):
trading_config["llm_max_retries"] = int(os.environ["TRADINGAGENTS_LLM_MAX_RETRIES"])
if os.environ.get("TRADINGAGENTS_PORTFOLIO_CONTEXT") is not None:
trading_config["portfolio_context"] = os.environ["TRADINGAGENTS_PORTFOLIO_CONTEXT"]
if os.environ.get("TRADINGAGENTS_PEER_CONTEXT") is not None:
trading_config["peer_context"] = os.environ["TRADINGAGENTS_PEER_CONTEXT"]
if os.environ.get("TRADINGAGENTS_PEER_CONTEXT_MODE") is not None:
trading_config["peer_context_mode"] = os.environ["TRADINGAGENTS_PEER_CONTEXT_MODE"]
provider_api_key = _provider_api_key(trading_config.get("llm_provider", "anthropic"))
if provider_api_key:
trading_config["api_key"] = provider_api_key
trading_config = normalize_runtime_llm_config(trading_config)
print(
"CHECKPOINT:AUTH:" + json.dumps(
{
"provider": trading_config.get("llm_provider"),
"backend_url": trading_config.get("backend_url"),
"api_key_present": bool(provider_api_key),
}
),
flush=True,
)
if trading_config.get("llm_provider") != "ollama" and not provider_api_key:
result_meta = {
"degrade_reason_codes": ["provider_api_key_missing"],
"data_quality": {
"state": "provider_api_key_missing",
"provider": trading_config.get("llm_provider"),
},
"source_diagnostics": {
"llm": {
"reason_code": "provider_api_key_missing",
}
},
}
print("RESULT_META:" + json.dumps(result_meta), file=sys.stderr, flush=True)
print("ANALYSIS_ERROR:provider API key missing inside analysis subprocess", file=sys.stderr, flush=True)
sys.exit(1)
print("STAGE:analysts", flush=True)
print("STAGE:research", flush=True)
@ -82,9 +140,30 @@ orchestrator = TradingOrchestrator(config)
print("STAGE:trading", flush=True)
heartbeat_interval = float(os.environ.get("TRADINGAGENTS_HEARTBEAT_SECS", "10"))
heartbeat_stop = threading.Event()
heartbeat_started_at = time.monotonic()
def _heartbeat():
while not heartbeat_stop.wait(heartbeat_interval):
print(
"HEARTBEAT:" + json.dumps(
{
"ticker": ticker,
"elapsed_seconds": round(time.monotonic() - heartbeat_started_at, 1),
"phase": "trading",
}
),
flush=True,
)
heartbeat_thread = threading.Thread(target=_heartbeat, name="analysis-heartbeat", daemon=True)
heartbeat_thread.start()
try:
result = orchestrator.get_combined_signal(ticker, date)
except Exception as exc:
heartbeat_stop.set()
result_meta = {
"degrade_reason_codes": list(getattr(exc, "reason_codes", ()) or ()),
"data_quality": getattr(exc, "data_quality", None),
@ -93,6 +172,8 @@ except Exception as exc:
print("RESULT_META:" + json.dumps(result_meta), file=sys.stderr, flush=True)
print("ANALYSIS_ERROR:" + str(exc), file=sys.stderr, flush=True)
sys.exit(1)
finally:
heartbeat_stop.set()
print("STAGE:risk", flush=True)
@ -101,6 +182,7 @@ confidence = result.confidence
llm_sig_obj = result.llm_signal
quant_sig_obj = result.quant_signal
llm_signal = llm_sig_obj.metadata.get("rating", "HOLD") if llm_sig_obj else "HOLD"
llm_decision_structured = llm_sig_obj.metadata.get("decision_structured") if llm_sig_obj else None
if quant_sig_obj is None:
quant_signal = "HOLD"
elif quant_sig_obj.direction == 1:
@ -138,7 +220,12 @@ report_path = results_dir / "complete_report.md"
report_path.write_text(report_content)
print("STAGE:portfolio", flush=True)
signal_detail = json.dumps({"llm_signal": llm_signal, "quant_signal": quant_signal, "confidence": confidence})
signal_detail = json.dumps({
"llm_signal": llm_signal,
"quant_signal": quant_signal,
"confidence": confidence,
"llm_decision_structured": llm_decision_structured,
})
result_meta = json.dumps({
"degrade_reason_codes": list(getattr(result, "degrade_reason_codes", ())),
"data_quality": (result.metadata or {}).get("data_quality"),
@ -165,9 +252,11 @@ class AnalysisExecutionOutput:
llm_signal: Optional[str]
confidence: Optional[float]
report_path: Optional[str] = None
llm_decision_structured: Optional[dict[str, Any]] = None
degrade_reason_codes: tuple[str, ...] = ()
data_quality: Optional[dict] = None
source_diagnostics: Optional[dict] = None
observation: Optional[dict[str, Any]] = None
contract_version: str = CONTRACT_VERSION
executor_type: str = DEFAULT_EXECUTOR_TYPE
@ -216,6 +305,7 @@ class AnalysisExecutionOutput:
"direction": _rating_to_direction(self.llm_signal),
"rating": self.llm_signal,
"available": self.llm_signal is not None,
"structured": self.llm_decision_structured,
},
},
"degraded": degraded,
@ -238,6 +328,7 @@ class AnalysisExecutorError(RuntimeError):
degrade_reason_codes: tuple[str, ...] = (),
data_quality: Optional[dict] = None,
source_diagnostics: Optional[dict] = None,
observation: Optional[dict[str, Any]] = None,
):
super().__init__(message)
self.code = code
@ -245,6 +336,7 @@ class AnalysisExecutorError(RuntimeError):
self.degrade_reason_codes = degrade_reason_codes
self.data_quality = data_quality
self.source_diagnostics = source_diagnostics
self.observation = observation
class AnalysisExecutor(Protocol):
@ -278,6 +370,7 @@ class LegacySubprocessAnalysisExecutor:
self.process_registry = process_registry
self.script_template = script_template
self.stdout_timeout_secs = stdout_timeout_secs
self.default_total_timeout_secs = max(stdout_timeout_secs * 6.0, 900.0)
async def execute(
self,
@ -291,10 +384,31 @@ class LegacySubprocessAnalysisExecutor:
llm_provider = (request_context.llm_provider or "anthropic").lower()
analysis_api_key = request_context.provider_api_key or self._resolve_provider_api_key(llm_provider)
if llm_provider != "ollama" and not analysis_api_key:
raise RuntimeError(f"{llm_provider} provider API key not configured")
raise AnalysisExecutorError(
f"{llm_provider} provider API key not configured",
code="analysis_failed",
observation=self._build_observation(
request_context=request_context,
ticker=ticker,
date=date,
status="failed",
observation_code="provider_api_key_missing",
stage=None,
stdout_timeout_secs=float((request_context.metadata or {}).get("stdout_timeout_secs", self.stdout_timeout_secs)),
total_timeout_secs=None,
returncode=None,
markers={},
message=f"{llm_provider} provider API key not configured",
),
)
runtime_metadata = dict(request_context.metadata or {})
stdout_timeout_secs = float(runtime_metadata.get("stdout_timeout_secs", self.stdout_timeout_secs))
total_timeout_secs = float(
runtime_metadata.get("total_timeout_secs", self.default_total_timeout_secs)
)
script_path: Optional[Path] = None
proc: asyncio.subprocess.Process | None = None
last_stage: Optional[str] = None
try:
fd, script_path_str = tempfile.mkstemp(suffix=".py", prefix=f"analysis_{task_id}_")
script_path = Path(script_path_str)
@ -307,6 +421,15 @@ class LegacySubprocessAnalysisExecutor:
for key, value in os.environ.items()
if not key.startswith(("PYTHON", "CONDA", "VIRTUAL"))
}
for env_name in (
"ANTHROPIC_API_KEY",
"MINIMAX_API_KEY",
"OPENAI_API_KEY",
"OPENROUTER_API_KEY",
"XAI_API_KEY",
"GOOGLE_API_KEY",
):
clean_env.pop(env_name, None)
clean_env["TRADINGAGENTS_LLM_PROVIDER"] = llm_provider
if request_context.backend_url:
clean_env["TRADINGAGENTS_BACKEND_URL"] = request_context.backend_url
@ -322,12 +445,29 @@ class LegacySubprocessAnalysisExecutor:
clean_env["TRADINGAGENTS_LLM_TIMEOUT"] = str(request_context.llm_timeout)
if request_context.llm_max_retries is not None:
clean_env["TRADINGAGENTS_LLM_MAX_RETRIES"] = str(request_context.llm_max_retries)
if runtime_metadata.get("portfolio_context") is not None:
clean_env["TRADINGAGENTS_PORTFOLIO_CONTEXT"] = str(
runtime_metadata.get("portfolio_context") or ""
)
if runtime_metadata.get("peer_context") is not None:
clean_env["TRADINGAGENTS_PEER_CONTEXT"] = str(
runtime_metadata.get("peer_context") or ""
)
if runtime_metadata.get("peer_context_mode") is not None:
clean_env["TRADINGAGENTS_PEER_CONTEXT_MODE"] = str(
runtime_metadata.get("peer_context_mode") or "UNSPECIFIED"
)
clean_env["TRADINGAGENTS_PROVIDER_API_KEY"] = analysis_api_key or ""
clean_env["TRADINGAGENTS_HEARTBEAT_SECS"] = str(
float(runtime_metadata.get("heartbeat_interval_secs", 10.0))
)
for env_name in self._provider_api_env_names(llm_provider):
if analysis_api_key:
clean_env[env_name] = analysis_api_key
proc = await asyncio.create_subprocess_exec(
str(self.analysis_python),
"-u",
str(script_path),
ticker,
date,
@ -340,25 +480,78 @@ class LegacySubprocessAnalysisExecutor:
self.process_registry(task_id, proc)
stdout_lines: list[str] = []
started_at = asyncio.get_running_loop().time()
assert proc.stdout is not None
while True:
elapsed = asyncio.get_running_loop().time() - started_at
remaining_total = total_timeout_secs - elapsed
if remaining_total <= 0:
await self._terminate_process(proc)
observation = self._build_observation(
request_context=request_context,
ticker=ticker,
date=date,
status="failed",
observation_code="subprocess_total_timeout",
stage=last_stage,
stdout_timeout_secs=stdout_timeout_secs,
total_timeout_secs=total_timeout_secs,
returncode=getattr(proc, "returncode", None),
markers=self._collect_markers(stdout_lines),
message=f"analysis subprocess exceeded total timeout of {total_timeout_secs:g}s",
stdout_excerpt=stdout_lines[-8:],
)
raise AnalysisExecutorError(
f"analysis subprocess exceeded total timeout of {total_timeout_secs:g}s",
retryable=True,
observation=observation,
)
try:
line_bytes = await asyncio.wait_for(
proc.stdout.readline(),
timeout=self.stdout_timeout_secs,
timeout=min(stdout_timeout_secs, remaining_total),
)
except asyncio.TimeoutError as exc:
await self._terminate_process(proc)
timed_out_total = (
asyncio.get_running_loop().time() - started_at
) >= total_timeout_secs
observation_code = (
"subprocess_total_timeout"
if timed_out_total
else "subprocess_stdout_timeout"
)
message = (
f"analysis subprocess exceeded total timeout of {total_timeout_secs:g}s"
if timed_out_total
else f"analysis subprocess timed out after {stdout_timeout_secs:g}s"
)
observation = self._build_observation(
request_context=request_context,
ticker=ticker,
date=date,
status="failed",
observation_code=observation_code,
stage=last_stage,
stdout_timeout_secs=stdout_timeout_secs,
total_timeout_secs=total_timeout_secs,
returncode=getattr(proc, "returncode", None),
markers=self._collect_markers(stdout_lines),
message=message,
stdout_excerpt=stdout_lines[-8:],
)
raise AnalysisExecutorError(
f"analysis subprocess timed out after {self.stdout_timeout_secs:g}s",
message,
retryable=True,
observation=observation,
) from exc
if not line_bytes:
break
line = line_bytes.decode(errors="replace").rstrip()
stdout_lines.append(line)
if line.startswith("STAGE:"):
last_stage = line.split(":", 1)[1].strip()
if on_stage is not None:
await on_stage(last_stage)
await proc.wait()
stderr_bytes = await proc.stderr.read() if proc.stderr is not None else b""
@ -366,10 +559,28 @@ class LegacySubprocessAnalysisExecutor:
if proc.returncode != 0:
failure_meta = self._parse_failure_metadata(stdout_lines, stderr_lines)
message = self._extract_error_message(stderr_lines) or (stderr_bytes.decode(errors="replace")[-1000:] if stderr_bytes else f"exit {proc.returncode}")
observation = self._build_observation(
request_context=request_context,
ticker=ticker,
date=date,
status="failed",
observation_code="analysis_protocol_failed" if failure_meta is None else "analysis_failed",
stage=last_stage,
stdout_timeout_secs=stdout_timeout_secs,
total_timeout_secs=total_timeout_secs,
returncode=proc.returncode,
markers=self._collect_markers(stdout_lines),
message=message,
data_quality=(failure_meta or {}).get("data_quality"),
source_diagnostics=(failure_meta or {}).get("source_diagnostics"),
stdout_excerpt=stdout_lines[-8:],
stderr_excerpt=stderr_lines[-8:],
)
if failure_meta is None:
raise AnalysisExecutorError(
"analysis subprocess failed without required markers: RESULT_META",
code="analysis_protocol_failed",
observation=observation,
)
raise AnalysisExecutorError(
message,
@ -377,14 +588,20 @@ class LegacySubprocessAnalysisExecutor:
degrade_reason_codes=failure_meta["degrade_reason_codes"],
data_quality=failure_meta["data_quality"],
source_diagnostics=failure_meta["source_diagnostics"],
observation=observation,
)
return self._parse_output(
stdout_lines=stdout_lines,
stderr_lines=stderr_lines,
ticker=ticker,
date=date,
request_context=request_context,
contract_version=request_context.contract_version,
executor_type=request_context.executor_type,
stdout_timeout_secs=stdout_timeout_secs,
total_timeout_secs=total_timeout_secs,
last_stage=last_stage,
)
finally:
if self.process_registry is not None:
@ -414,7 +631,7 @@ class LegacySubprocessAnalysisExecutor:
@staticmethod
def _provider_api_env_names(provider: str) -> tuple[str, ...]:
return {
"anthropic": ("ANTHROPIC_API_KEY",),
"anthropic": ("ANTHROPIC_API_KEY", "MINIMAX_API_KEY"),
"openai": ("OPENAI_API_KEY",),
"openrouter": ("OPENROUTER_API_KEY",),
"xai": ("XAI_API_KEY",),
@ -451,15 +668,21 @@ class LegacySubprocessAnalysisExecutor:
def _parse_output(
*,
stdout_lines: list[str],
stderr_lines: list[str],
ticker: str,
date: str,
request_context: RequestContext,
contract_version: str,
executor_type: str,
stdout_timeout_secs: float,
total_timeout_secs: float,
last_stage: Optional[str],
) -> AnalysisExecutionOutput:
decision: Optional[str] = None
quant_signal = None
llm_signal = None
confidence = None
llm_decision_structured = None
degrade_reason_codes: tuple[str, ...] = ()
data_quality = None
source_diagnostics = None
@ -473,16 +696,51 @@ class LegacySubprocessAnalysisExecutor:
try:
detail = json.loads(line.split(":", 1)[1].strip())
except Exception as exc:
raise AnalysisExecutorError("failed to parse SIGNAL_DETAIL payload") from exc
raise AnalysisExecutorError(
"failed to parse SIGNAL_DETAIL payload",
observation=LegacySubprocessAnalysisExecutor._build_observation(
request_context=request_context,
ticker=ticker,
date=date,
status="failed",
observation_code="signal_detail_parse_failed",
stage=last_stage,
stdout_timeout_secs=stdout_timeout_secs,
total_timeout_secs=total_timeout_secs,
returncode=0,
markers=LegacySubprocessAnalysisExecutor._collect_markers(stdout_lines),
message="failed to parse SIGNAL_DETAIL payload",
stdout_excerpt=stdout_lines[-8:],
stderr_excerpt=stderr_lines[-8:],
),
) from exc
quant_signal = detail.get("quant_signal")
llm_signal = detail.get("llm_signal")
confidence = detail.get("confidence")
llm_decision_structured = detail.get("llm_decision_structured")
elif line.startswith("RESULT_META:"):
seen_result_meta = True
try:
detail = json.loads(line.split(":", 1)[1].strip())
except Exception as exc:
raise AnalysisExecutorError("failed to parse RESULT_META payload") from exc
raise AnalysisExecutorError(
"failed to parse RESULT_META payload",
observation=LegacySubprocessAnalysisExecutor._build_observation(
request_context=request_context,
ticker=ticker,
date=date,
status="failed",
observation_code="result_meta_parse_failed",
stage=last_stage,
stdout_timeout_secs=stdout_timeout_secs,
total_timeout_secs=total_timeout_secs,
returncode=0,
markers=LegacySubprocessAnalysisExecutor._collect_markers(stdout_lines),
message="failed to parse RESULT_META payload",
stdout_excerpt=stdout_lines[-8:],
stderr_excerpt=stderr_lines[-8:],
),
) from exc
degrade_reason_codes = tuple(detail.get("degrade_reason_codes") or ())
data_quality = detail.get("data_quality")
source_diagnostics = detail.get("source_diagnostics")
@ -498,9 +756,31 @@ class LegacySubprocessAnalysisExecutor:
if not seen_complete:
missing_markers.append("ANALYSIS_COMPLETE")
if missing_markers:
observation = LegacySubprocessAnalysisExecutor._build_observation(
request_context=request_context,
ticker=ticker,
date=date,
status="failed",
observation_code="analysis_protocol_failed",
stage=last_stage,
stdout_timeout_secs=stdout_timeout_secs,
total_timeout_secs=total_timeout_secs,
returncode=0,
markers={
"signal_detail": seen_signal_detail,
"result_meta": seen_result_meta,
"analysis_complete": seen_complete,
},
message="analysis subprocess completed without required markers: " + ", ".join(missing_markers),
data_quality=data_quality,
source_diagnostics=source_diagnostics,
stdout_excerpt=stdout_lines[-8:],
stderr_excerpt=stderr_lines[-8:],
)
raise AnalysisExecutorError(
"analysis subprocess completed without required markers: "
+ ", ".join(missing_markers)
+ ", ".join(missing_markers),
observation=observation,
)
report_path = str(Path("results") / ticker / date / "complete_report.md")
@ -510,13 +790,88 @@ class LegacySubprocessAnalysisExecutor:
llm_signal=llm_signal,
confidence=confidence,
report_path=report_path,
llm_decision_structured=llm_decision_structured,
degrade_reason_codes=degrade_reason_codes,
data_quality=data_quality,
source_diagnostics=source_diagnostics,
observation=LegacySubprocessAnalysisExecutor._build_observation(
request_context=request_context,
ticker=ticker,
date=date,
status="completed",
observation_code="completed",
stage=last_stage,
stdout_timeout_secs=stdout_timeout_secs,
total_timeout_secs=total_timeout_secs,
returncode=0,
markers=LegacySubprocessAnalysisExecutor._collect_markers(stdout_lines),
data_quality=data_quality,
source_diagnostics=source_diagnostics,
stdout_excerpt=stdout_lines[-8:],
stderr_excerpt=stderr_lines[-8:],
),
contract_version=contract_version,
executor_type=executor_type,
)
@staticmethod
def _collect_markers(stdout_lines: list[str]) -> dict[str, bool]:
return {
"signal_detail": any(line.startswith("SIGNAL_DETAIL:") for line in stdout_lines),
"result_meta": any(line.startswith("RESULT_META:") for line in stdout_lines),
"analysis_complete": any(line.startswith("ANALYSIS_COMPLETE:") for line in stdout_lines),
"heartbeat": any(line.startswith("HEARTBEAT:") for line in stdout_lines),
"auth_checkpoint": any(line.startswith("CHECKPOINT:AUTH:") for line in stdout_lines),
}
@staticmethod
def _build_observation(
*,
request_context: RequestContext,
ticker: str,
date: str,
status: str,
observation_code: str,
stage: Optional[str],
stdout_timeout_secs: float,
total_timeout_secs: Optional[float],
returncode: Optional[int],
markers: dict[str, bool],
message: Optional[str] = None,
data_quality: Optional[dict] = None,
source_diagnostics: Optional[dict] = None,
stdout_excerpt: Optional[list[str]] = None,
stderr_excerpt: Optional[list[str]] = None,
) -> dict[str, Any]:
metadata = dict(request_context.metadata or {})
return {
"status": status,
"observation_code": observation_code,
"request_id": request_context.request_id,
"ticker": ticker,
"date": date,
"provider": request_context.llm_provider,
"backend_url": request_context.backend_url,
"model": request_context.deep_think_llm,
"selected_analysts": list(request_context.selected_analysts),
"analysis_prompt_style": request_context.analysis_prompt_style,
"attempt_index": metadata.get("attempt_index", 0),
"attempt_mode": metadata.get("attempt_mode", "baseline"),
"probe_mode": metadata.get("probe_mode", "none"),
"stdout_timeout_secs": stdout_timeout_secs,
"total_timeout_secs": total_timeout_secs,
"cost_cap": metadata.get("cost_cap"),
"stage": stage,
"returncode": returncode,
"markers": markers,
"message": message,
"data_quality": data_quality,
"source_diagnostics": source_diagnostics,
"stdout_excerpt": list(stdout_excerpt or []),
"stderr_excerpt": list(stderr_excerpt or []),
"evidence_id": metadata.get("evidence_id"),
}
class DirectAnalysisExecutor:
"""Placeholder for a future in-process executor implementation."""

View File

@ -75,6 +75,9 @@ class JobService:
"result_ref": result_ref,
"degradation_summary": None,
"data_quality_summary": None,
"evidence_summary": None,
"tentative_classification": None,
"budget_state": {},
"compat": {},
})
self.task_results[task_id] = state
@ -108,6 +111,9 @@ class JobService:
"result_ref": result_ref,
"degradation_summary": None,
"data_quality_summary": None,
"evidence_summary": None,
"tentative_classification": None,
"budget_state": {},
"compat": {},
})
self.task_results[task_id] = state
@ -153,6 +159,9 @@ class JobService:
state["contract_version"] = contract.get("contract_version", state.get("contract_version"))
state["degradation_summary"] = contract.get("degradation") or self._build_degradation_summary(result)
state["data_quality_summary"] = contract.get("data_quality")
state["evidence_summary"] = contract.get("evidence")
state["tentative_classification"] = contract.get("tentative_classification")
state["budget_state"] = contract.get("budget_state") or state.get("budget_state") or {}
state["compat"] = {
"decision": result.get("decision"),
"quant_signal": quant.get("rating"),
@ -208,10 +217,13 @@ class JobService:
"request_id": state.get("request_id"),
"executor_type": state.get("executor_type", DEFAULT_EXECUTOR_TYPE),
"result_ref": state.get("result_ref"),
"status": state.get("status"),
"status": self._public_status(state.get("status")),
"created_at": state.get("created_at"),
"degradation_summary": state.get("degradation_summary"),
"data_quality_summary": state.get("data_quality_summary"),
"evidence": state.get("evidence_summary"),
"tentative_classification": state.get("tentative_classification"),
"budget_state": state.get("budget_state") or {},
"error": self._public_error(contract, state),
}
if state.get("type") == "portfolio":
@ -257,6 +269,8 @@ class JobService:
"error": payload.get("error"),
"data_quality_summary": payload.get("data_quality_summary"),
"degradation_summary": payload.get("degradation_summary"),
"tentative_classification": payload.get("tentative_classification"),
"budget_state": payload.get("budget_state") or {},
}
if state.get("type") == "portfolio":
summary.update({
@ -292,15 +306,11 @@ class JobService:
self.processes[task_id] = process
def cancel_job(self, task_id: str, error: str = "用户取消") -> dict | None:
task = self.analysis_tasks.get(task_id)
if task:
task.cancel()
state = self.task_results.get(task_id)
if not state:
return None
state["status"] = "failed"
state["error"] = error
self.persist_task(task_id, state)
return state
@staticmethod
@ -312,6 +322,9 @@ class JobService:
normalized.setdefault("result_ref", None)
normalized.setdefault("degradation_summary", None)
normalized.setdefault("data_quality_summary", None)
normalized.setdefault("evidence_summary", None)
normalized.setdefault("tentative_classification", None)
normalized.setdefault("budget_state", {})
if "data_quality" in normalized and normalized.get("data_quality_summary") is None:
normalized["data_quality_summary"] = normalized.get("data_quality")
compat = normalized.get("compat")
@ -345,3 +358,9 @@ class JobService:
if contract is not None and "error" in contract:
return contract.get("error")
return state.get("error")
@staticmethod
def _public_status(status: str | None) -> str | None:
if status in {"collecting_evidence", "auto_recovering", "classification_pending", "probing_provider"}:
return "running"
return status

View File

@ -1,7 +1,7 @@
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional
from dataclasses import dataclass, field, replace
from typing import Any, Optional
from uuid import uuid4
from fastapi import Request
@ -30,7 +30,7 @@ class RequestContext:
llm_max_retries: Optional[int] = None
client_host: Optional[str] = None
is_local: bool = False
metadata: dict[str, str] = field(default_factory=dict)
metadata: dict[str, Any] = field(default_factory=dict)
def build_request_context(
@ -49,7 +49,7 @@ def build_request_context(
request_id: Optional[str] = None,
contract_version: str = CONTRACT_VERSION,
executor_type: str = DEFAULT_EXECUTOR_TYPE,
metadata: Optional[dict[str, str]] = None,
metadata: Optional[dict[str, Any]] = None,
) -> RequestContext:
"""Create a stable request context without leaking FastAPI internals into services."""
client_host = request.client.host if request and request.client else None
@ -72,3 +72,14 @@ def build_request_context(
is_local=is_local,
metadata=dict(metadata or {}),
)
def clone_request_context(
context: RequestContext,
*,
metadata_updates: Optional[dict[str, Any]] = None,
**overrides: Any,
) -> RequestContext:
metadata = dict(context.metadata)
metadata.update(metadata_updates or {})
return replace(context, metadata=metadata, **overrides)
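`clone_request_context` is a thin wrapper over `dataclasses.replace` that copies and merges `metadata` before replacing, so attempt-specific keys never leak back into the original context. A self-contained sketch with a reduced dataclass (`MiniContext` and its fields are a hypothetical subset of the real `RequestContext`):

```python
from dataclasses import dataclass, field, replace
from typing import Any, Optional

@dataclass
class MiniContext:
    request_id: str
    llm_provider: Optional[str] = None
    metadata: dict[str, Any] = field(default_factory=dict)

def clone(context: MiniContext, *, metadata_updates=None, **overrides) -> MiniContext:
    # Copy-then-update keeps the original context's metadata untouched.
    metadata = dict(context.metadata)
    metadata.update(metadata_updates or {})
    return replace(context, metadata=metadata, **overrides)

base = MiniContext(request_id="req-1", metadata={"attempt_index": 0})
probe = clone(base, metadata_updates={"attempt_index": 1, "attempt_mode": "provider_probe"})
print(base.metadata["attempt_index"])  # 0 (original unchanged)
print(probe.metadata["attempt_mode"])  # provider_probe
```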

View File

@ -1,4 +1,5 @@
import importlib
import json
import sys
from pathlib import Path
@ -7,8 +8,9 @@ from fastapi.testclient import TestClient
from starlette.websockets import WebSocketDisconnect
def _load_main_module(monkeypatch):
def _load_main_module(monkeypatch, *, env_file=""):
backend_dir = Path(__file__).resolve().parents[1]
monkeypatch.setenv("TRADINGAGENTS_ENV_FILE", env_file)
monkeypatch.syspath_prepend(str(backend_dir))
sys.modules.pop("main", None)
return importlib.import_module("main")
@ -27,6 +29,49 @@ def test_config_check_smoke(monkeypatch):
assert response.json() == {"configured": False}
def test_repo_env_overrides_stale_shell_provider_env(monkeypatch, tmp_path):
env_file = tmp_path / ".env"
env_file.write_text(
"\n".join(
[
"TRADINGAGENTS_LLM_PROVIDER=anthropic",
"TRADINGAGENTS_BACKEND_URL=https://api.minimaxi.com/anthropic",
"TRADINGAGENTS_MODEL=MiniMax-M2.7-highspeed",
]
),
encoding="utf-8",
)
monkeypatch.setenv("TRADINGAGENTS_LLM_PROVIDER", "openai")
monkeypatch.setenv("TRADINGAGENTS_BACKEND_URL", "https://api.openai.com/v1")
monkeypatch.setenv("TRADINGAGENTS_MODEL", "gpt-5.4")
main = _load_main_module(monkeypatch, env_file=str(env_file))
settings = main._resolve_analysis_runtime_settings()
assert settings["llm_provider"] == "anthropic"
assert settings["backend_url"] == "https://api.minimaxi.com/anthropic"
assert settings["deep_think_llm"] == "MiniMax-M2.7-highspeed"
assert settings["quick_think_llm"] == "MiniMax-M2.7-highspeed"
def test_saved_api_key_is_provider_scoped(monkeypatch, tmp_path):
monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
monkeypatch.delenv("MINIMAX_API_KEY", raising=False)
monkeypatch.delenv("OPENAI_API_KEY", raising=False)
main = _load_main_module(monkeypatch)
config_path = tmp_path / "config.json"
monkeypatch.setattr(main, "CONFIG_PATH", config_path)
main._persist_analysis_api_key("anth-key", provider="anthropic")
saved = json.loads(config_path.read_text())
assert saved["api_keys"]["anthropic"] == "anth-key"
assert main._get_analysis_provider_api_key("anthropic", saved) == "anth-key"
assert main._get_analysis_provider_api_key("openai", saved) is None
def test_analysis_task_routes_smoke(monkeypatch):
monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
@@ -73,6 +118,73 @@ def test_analysis_task_routes_smoke(monkeypatch):
assert status_response.json()["result"] is None
def test_analysis_status_route_uses_task_query_service(monkeypatch):
monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
main = _load_main_module(monkeypatch)
expected = {
"contract_version": "v1alpha1",
"task_id": "task-query",
"status": "running",
"via": "task-query-service",
}
def _fake_public_task_payload(task_id, *, state_override=None):
assert task_id == "task-query"
assert state_override is None
return expected
with TestClient(main.app) as client:
main.app.state.task_results["task-query"] = {
"contract_version": "v1alpha1",
"task_id": "task-query",
"request_id": "req-task-query",
"executor_type": "legacy_subprocess",
"result_ref": None,
"ticker": "AAPL",
"date": "2026-04-11",
"status": "running",
"progress": 10,
"current_stage": "analysts",
"created_at": "2026-04-11T10:00:00",
"elapsed_seconds": 1,
"stages": [],
"result": None,
"error": None,
"degradation_summary": None,
"data_quality_summary": None,
"compat": {},
}
monkeypatch.setattr(main.app.state.task_query_service, "public_task_payload", _fake_public_task_payload)
response = client.get("/api/analysis/status/task-query")
assert response.status_code == 200
assert response.json() == expected
def test_analysis_tasks_route_uses_task_query_service(monkeypatch):
monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
main = _load_main_module(monkeypatch)
expected = {
"contract_version": "v1alpha1",
"tasks": [{"task_id": "task-query"}],
"total": 1,
}
def _fake_list_task_summaries():
return expected
with TestClient(main.app) as client:
monkeypatch.setattr(main.app.state.task_query_service, "list_task_summaries", _fake_list_task_summaries)
response = client.get("/api/analysis/tasks")
assert response.status_code == 200
assert response.json() == expected
def test_analysis_start_route_uses_analysis_service(monkeypatch):
monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
monkeypatch.setenv("ANTHROPIC_API_KEY", "test-key")
@@ -182,6 +294,117 @@ def test_analysis_websocket_progress_is_contract_first(monkeypatch):
assert "decision" not in message
def test_analysis_websocket_maps_internal_runtime_status_to_running(monkeypatch):
monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
monkeypatch.setenv("ANTHROPIC_API_KEY", "test-key")
main = _load_main_module(monkeypatch)
with TestClient(main.app) as client:
main.app.state.task_results["task-ws-runtime"] = {
"contract_version": "v1alpha1",
"task_id": "task-ws-runtime",
"request_id": "req-task-ws-runtime",
"executor_type": "legacy_subprocess",
"result_ref": None,
"ticker": "AAPL",
"date": "2026-04-11",
"status": "auto_recovering",
"progress": 50,
"current_stage": "research",
"created_at": "2026-04-11T10:00:00",
"elapsed_seconds": 3,
"stages": [],
"result": None,
"error": None,
"degradation_summary": None,
"data_quality_summary": None,
"evidence_summary": {"attempts": []},
"tentative_classification": None,
"budget_state": {},
"compat": {},
}
with client.websocket_connect("/ws/analysis/task-ws-runtime?api_key=test-key") as websocket:
message = websocket.receive_json()
assert message["status"] == "running"
def test_analysis_cancel_route_preserves_response_shape_and_broadcasts_cancelled_state(monkeypatch):
monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
main = _load_main_module(monkeypatch)
class _DummyTask:
def cancel(self):
return None
class _DummyProcess:
returncode = None
def kill(self):
return None
captured: dict[str, dict] = {}
def _save_sync(task_id, data):
captured["saved_state"] = json.loads(json.dumps(data))
def _delete_sync(task_id):
captured["deleted_task_id"] = task_id
async def _fake_broadcast(task_id, progress):
captured["broadcast_payload"] = main.app.state.task_query_service.public_task_payload(
task_id,
state_override=progress,
)
with TestClient(main.app) as client:
main.app.state.task_results["task-cancel"] = {
"contract_version": "v1alpha1",
"task_id": "task-cancel",
"request_id": "req-task-cancel",
"executor_type": "legacy_subprocess",
"result_ref": None,
"ticker": "AAPL",
"date": "2026-04-11",
"status": "running",
"progress": 25,
"current_stage": "research",
"created_at": "2026-04-11T10:00:00",
"elapsed_seconds": 4,
"stages": [],
"result": None,
"error": None,
"degradation_summary": None,
"data_quality_summary": None,
"compat": {},
}
main.app.state.analysis_tasks["task-cancel"] = _DummyTask()
main.app.state.processes["task-cancel"] = _DummyProcess()
monkeypatch.setattr(main.app.state.result_store, "save_task_status", _save_sync)
monkeypatch.setattr(main.app.state.result_store, "delete_task_status", _delete_sync)
monkeypatch.setattr(main, "broadcast_progress", _fake_broadcast)
response = client.delete("/api/analysis/cancel/task-cancel")
assert response.status_code == 200
assert response.json() == {
"contract_version": "v1alpha1",
"task_id": "task-cancel",
"status": "cancelled",
}
assert "error" not in response.json()
assert captured["saved_state"]["status"] == "cancelled"
assert captured["broadcast_payload"]["status"] == "cancelled"
assert captured["broadcast_payload"]["error"] == {
"code": "cancelled",
"message": "用户取消",
"retryable": False,
}
assert captured["deleted_task_id"] == "task-cancel"
def test_orchestrator_websocket_smoke_is_contract_first(monkeypatch):
monkeypatch.delenv("DASHBOARD_API_KEY", raising=False)
monkeypatch.setenv("ANTHROPIC_API_KEY", "test-key")

View File

@@ -1,4 +1,5 @@
import asyncio
import sys
from pathlib import Path
import pytest
@@ -8,13 +9,16 @@ from services.request_context import build_request_context
class _FakeStdout:
def __init__(self, lines, *, stall: bool = False):
def __init__(self, lines, *, stall: bool = False, delay: float = 0.0):
self._lines = list(lines)
self._stall = stall
self._delay = delay
async def readline(self):
if self._stall:
await asyncio.sleep(3600)
if self._delay:
await asyncio.sleep(self._delay)
if self._lines:
return self._lines.pop(0)
return b""
@@ -127,10 +131,19 @@ def test_executor_marks_degraded_success_when_result_meta_reports_data_quality()
'RESULT_META:{"degrade_reason_codes":["non_trading_day"],"data_quality":{"state":"non_trading_day","requested_date":"2026-04-12"}}',
"ANALYSIS_COMPLETE:OVERWEIGHT",
],
stderr_lines=[],
ticker="AAPL",
date="2026-04-12",
request_context=build_request_context(
provider_api_key="ctx-key",
llm_provider="anthropic",
backend_url="https://api.minimaxi.com/anthropic",
),
contract_version="v1alpha1",
executor_type="legacy_subprocess",
stdout_timeout_secs=300.0,
total_timeout_secs=300.0,
last_stage="portfolio",
)
contract = output.to_result_contract(
@@ -144,6 +157,33 @@ def test_executor_marks_degraded_success_when_result_meta_reports_data_quality()
assert contract["status"] == "degraded_success"
assert contract["data_quality"]["state"] == "non_trading_day"
assert contract["degradation"]["reason_codes"] == ["non_trading_day"]
assert output.observation["status"] == "completed"
assert output.observation["stage"] == "portfolio"
def test_executor_parses_llm_decision_structured_from_signal_detail():
output = LegacySubprocessAnalysisExecutor._parse_output(
stdout_lines=[
'SIGNAL_DETAIL:{"quant_signal":"HOLD","llm_signal":"BUY","confidence":0.6,"llm_decision_structured":{"rating":"BUY","entry_style":"IMMEDIATE"}}',
'RESULT_META:{"degrade_reason_codes":[],"data_quality":{"state":"ok"}}',
"ANALYSIS_COMPLETE:BUY",
],
stderr_lines=[],
ticker="AAPL",
date="2026-04-12",
request_context=build_request_context(
provider_api_key="ctx-key",
llm_provider="anthropic",
backend_url="https://api.minimaxi.com/anthropic",
),
contract_version="v1alpha1",
executor_type="legacy_subprocess",
stdout_timeout_secs=300.0,
total_timeout_secs=300.0,
last_stage="portfolio",
)
assert output.llm_decision_structured == {"rating": "BUY", "entry_style": "IMMEDIATE"}
def test_executor_requires_result_meta_on_success():
@@ -153,10 +193,19 @@ def test_executor_requires_result_meta_on_success():
'SIGNAL_DETAIL:{"quant_signal":"HOLD","llm_signal":"BUY","confidence":0.6}',
"ANALYSIS_COMPLETE:OVERWEIGHT",
],
stderr_lines=[],
ticker="AAPL",
date="2026-04-12",
request_context=build_request_context(
provider_api_key="ctx-key",
llm_provider="anthropic",
backend_url="https://api.minimaxi.com/anthropic",
),
contract_version="v1alpha1",
executor_type="legacy_subprocess",
stdout_timeout_secs=300.0,
total_timeout_secs=300.0,
last_stage="portfolio",
)
@@ -201,6 +250,11 @@ def test_executor_injects_provider_specific_env(monkeypatch):
analysis_prompt_style="compact",
llm_timeout=45,
llm_max_retries=0,
metadata={
"portfolio_context": "Growth exposure already elevated.",
"peer_context": "Same-theme rank: leader.",
"peer_context_mode": "SAME_THEME_NORMALIZED",
},
),
)
@@ -213,6 +267,12 @@ def test_executor_injects_provider_specific_env(monkeypatch):
assert captured["env"]["TRADINGAGENTS_ANALYSIS_PROMPT_STYLE"] == "compact"
assert captured["env"]["TRADINGAGENTS_LLM_TIMEOUT"] == "45"
assert captured["env"]["TRADINGAGENTS_LLM_MAX_RETRIES"] == "0"
assert captured["env"]["TRADINGAGENTS_PORTFOLIO_CONTEXT"] == "Growth exposure already elevated."
assert captured["env"]["TRADINGAGENTS_PEER_CONTEXT"] == "Same-theme rank: leader."
assert captured["env"]["TRADINGAGENTS_PEER_CONTEXT_MODE"] == "SAME_THEME_NORMALIZED"
assert captured["env"]["TRADINGAGENTS_PROVIDER_API_KEY"] == "provider-key"
assert captured["env"]["TRADINGAGENTS_HEARTBEAT_SECS"] == "10.0"
assert captured["env"]["OPENAI_API_KEY"] == "provider-key"
assert "ANTHROPIC_API_KEY" not in captured["env"]
@@ -248,3 +308,150 @@ def test_executor_requires_result_meta_on_failure(monkeypatch):
)
asyncio.run(scenario())
def test_executor_includes_observation_on_timeout(monkeypatch):
process = _FakeProcess(_FakeStdout([], stall=True))
async def fake_create_subprocess_exec(*args, **kwargs):
return process
monkeypatch.setattr(asyncio, "create_subprocess_exec", fake_create_subprocess_exec)
executor = LegacySubprocessAnalysisExecutor(
analysis_python=Path("/usr/bin/python3"),
repo_root=Path("."),
api_key_resolver=lambda: "env-key",
stdout_timeout_secs=0.01,
)
async def scenario():
with pytest.raises(AnalysisExecutorError) as exc_info:
await executor.execute(
task_id="task-timeout-observation",
ticker="AAPL",
date="2026-04-13",
request_context=build_request_context(
provider_api_key="ctx-key",
llm_provider="anthropic",
backend_url="https://api.minimaxi.com/anthropic",
metadata={"attempt_index": 0, "attempt_mode": "baseline", "probe_mode": "none"},
),
)
return exc_info.value
exc = asyncio.run(scenario())
assert exc.observation["observation_code"] == "subprocess_stdout_timeout"
assert exc.observation["attempt_mode"] == "baseline"
assert exc.observation["provider"] == "anthropic"
def test_executor_collect_markers_tracks_heartbeat_and_auth_checkpoint():
markers = LegacySubprocessAnalysisExecutor._collect_markers(
[
'CHECKPOINT:AUTH:{"provider":"anthropic","api_key_present":true}',
'HEARTBEAT:{"elapsed_seconds":10.0}',
"STAGE:trading",
"RESULT_META:{}",
]
)
assert markers["auth_checkpoint"] is True
assert markers["heartbeat"] is True
assert markers["result_meta"] is True
def test_executor_uses_total_timeout_separately_from_stdout_timeout(monkeypatch):
process = _FakeProcess(
_FakeStdout(
[b'CHECKPOINT:AUTH:{"provider":"anthropic","api_key_present":true}\n'] * 10,
delay=0.02,
)
)
async def fake_create_subprocess_exec(*args, **kwargs):
return process
monkeypatch.setattr(asyncio, "create_subprocess_exec", fake_create_subprocess_exec)
executor = LegacySubprocessAnalysisExecutor(
analysis_python=Path("/usr/bin/python3"),
repo_root=Path("."),
api_key_resolver=lambda: "env-key",
stdout_timeout_secs=1.0,
)
async def scenario():
with pytest.raises(AnalysisExecutorError, match="total timeout"):
await executor.execute(
task_id="task-total-timeout",
ticker="AAPL",
date="2026-04-13",
request_context=build_request_context(
provider_api_key="ctx-key",
llm_provider="anthropic",
backend_url="https://api.minimaxi.com/anthropic",
metadata={"stdout_timeout_secs": 1.0, "total_timeout_secs": 0.05},
),
)
asyncio.run(scenario())
assert process.kill_called is True
def test_executor_real_subprocess_heartbeat_survives_blocking_sleep(tmp_path):
script_template = """
import json
import threading
import time
print('CHECKPOINT:AUTH:' + json.dumps({'provider':'anthropic','api_key_present': True}), flush=True)
print('STAGE:analysts', flush=True)
print('STAGE:research', flush=True)
print('STAGE:trading', flush=True)
stop = threading.Event()
def heartbeat():
while not stop.wait(0.01):
print('HEARTBEAT:' + json.dumps({'alive': True}), flush=True)
threading.Thread(target=heartbeat, daemon=True).start()
time.sleep(0.12)
stop.set()
print('STAGE:risk', flush=True)
print('STAGE:portfolio', flush=True)
print('SIGNAL_DETAIL:' + json.dumps({'quant_signal':'HOLD','llm_signal':'BUY','confidence':0.8}), flush=True)
print('RESULT_META:' + json.dumps({'degrade_reason_codes': [], 'data_quality': {'state': 'ok'}}), flush=True)
print('ANALYSIS_COMPLETE:BUY', flush=True)
"""
executor = LegacySubprocessAnalysisExecutor(
analysis_python=Path(sys.executable),
repo_root=tmp_path,
api_key_resolver=lambda: "env-key",
script_template=script_template,
stdout_timeout_secs=0.03,
)
async def scenario():
return await executor.execute(
task_id="task-heartbeat-real",
ticker="AAPL",
date="2026-04-13",
request_context=build_request_context(
provider_api_key="ctx-key",
llm_provider="anthropic",
backend_url="https://api.minimaxi.com/anthropic",
metadata={
"stdout_timeout_secs": 0.03,
"total_timeout_secs": 1.0,
"heartbeat_interval_secs": 0.01,
},
),
)
output = asyncio.run(scenario())
assert output.decision == "BUY"
assert output.observation["markers"]["heartbeat"] is True

View File

@@ -1,12 +1,15 @@
import json
import asyncio
from pathlib import Path
from services.analysis_service import AnalysisService
from services.executor import AnalysisExecutionOutput
from services.executor import AnalysisExecutionOutput, AnalysisExecutorError
from services.job_service import JobService
from services.migration_flags import load_migration_flags
from services.request_context import build_request_context
from services.result_store import ResultStore
from services.task_command_service import TaskCommandService
from services.task_query_service import TaskQueryService
class DummyPortfolioGateway:
@@ -167,7 +170,7 @@ def test_job_service_restores_legacy_tasks_with_contract_metadata():
def test_analysis_service_build_recommendation_record():
rec = AnalysisService._build_recommendation_record(
stdout='\n'.join([
'SIGNAL_DETAIL:{"quant_signal":"BUY","llm_signal":"HOLD","confidence":0.75}',
'SIGNAL_DETAIL:{"quant_signal":"BUY","llm_signal":"HOLD","confidence":0.75,"llm_decision_structured":{"rating":"HOLD","hold_subtype":"DEFENSIVE_HOLD"}}',
"ANALYSIS_COMPLETE:OVERWEIGHT",
]),
ticker="AAPL",
@@ -180,7 +183,177 @@ def test_analysis_service_build_recommendation_record():
assert rec["result"]["decision"] == "OVERWEIGHT"
assert rec["result"]["signals"]["quant"]["rating"] == "BUY"
assert rec["result"]["signals"]["llm"]["rating"] == "HOLD"
assert rec["result"]["signals"]["llm"]["structured"]["hold_subtype"] == "DEFENSIVE_HOLD"
assert rec["compat"]["confidence"] == 0.75
assert rec["compat"]["llm_decision_structured"]["rating"] == "HOLD"
class RichPortfolioGateway(DummyPortfolioGateway):
async def get_positions(self, account=None):
return [
{
"ticker": "AAPL",
"account": account or "默认账户",
"shares": 10,
"cost_price": 100.0,
"current_price": 110.0,
"unrealized_pnl_pct": 10.0,
},
{
"ticker": "TSLA",
"account": account or "默认账户",
"shares": 5,
"cost_price": 200.0,
"current_price": 170.0,
"unrealized_pnl_pct": -15.0,
},
]
def get_watchlist(self):
return [
{"ticker": "AAPL", "name": "Apple"},
{"ticker": "TSLA", "name": "Tesla"},
{"ticker": "MSFT", "name": "Microsoft"},
]
def get_recommendations(self, date=None, limit=50, offset=0):
return {
"recommendations": [
{
"ticker": "MSFT",
"result": {"decision": "BUY", "confidence": 0.8},
},
{
"ticker": "TSLA",
"result": {"decision": "SELL", "confidence": 0.9},
},
],
"total": 2,
"limit": limit,
"offset": offset,
}
def test_analysis_service_enriches_missing_decision_context(tmp_path):
gateway = RichPortfolioGateway()
store = ResultStore(tmp_path / "task_status", gateway)
service = AnalysisService(
executor=FakeExecutor(),
result_store=store,
job_service=JobService(
task_results={},
analysis_tasks={},
processes={},
persist_task=lambda task_id, data: None,
delete_task=lambda task_id: None,
),
)
context = build_request_context(metadata={})
enriched = asyncio.run(
service._enrich_request_context(
context,
ticker="AAPL",
date="2026-04-13",
)
)
assert "Current portfolio has 2 open position(s)." in enriched.metadata["portfolio_context"]
assert "Existing position in target: AAPL" in enriched.metadata["portfolio_context"]
assert "MSFT:BUY" in enriched.metadata["peer_context"]
assert "TSLA:SELL" in enriched.metadata["peer_context"]
assert enriched.metadata["peer_context_mode"] == "PORTFOLIO_SNAPSHOT"
def test_analysis_service_preserves_explicit_decision_context(tmp_path):
gateway = RichPortfolioGateway()
store = ResultStore(tmp_path / "task_status", gateway)
service = AnalysisService(
executor=FakeExecutor(),
result_store=store,
job_service=JobService(
task_results={},
analysis_tasks={},
processes={},
persist_task=lambda task_id, data: None,
delete_task=lambda task_id: None,
),
)
context = build_request_context(
metadata={
"portfolio_context": "manual portfolio context",
"peer_context": "manual peer context",
}
)
enriched = asyncio.run(
service._enrich_request_context(
context,
ticker="AAPL",
date="2026-04-13",
)
)
assert enriched.metadata["portfolio_context"] == "manual portfolio context"
assert enriched.metadata["peer_context"] == "manual peer context"
assert enriched.metadata["peer_context_mode"] == "CALLER_PROVIDED"
def test_freeze_batch_peer_snapshot_uses_stable_recommendation_source(tmp_path):
gateway = RichPortfolioGateway()
store = ResultStore(tmp_path / "task_status", gateway)
service = AnalysisService(
executor=FakeExecutor(),
result_store=store,
job_service=JobService(
task_results={},
analysis_tasks={},
processes={},
persist_task=lambda task_id, data: None,
delete_task=lambda task_id: None,
),
)
context = build_request_context(metadata={})
frozen = service._freeze_batch_peer_snapshot(
context,
date="2026-04-13",
watchlist=gateway.get_watchlist(),
)
assert len(frozen.metadata["peer_recommendation_snapshot"]) == 2
assert frozen.metadata["peer_context_mode"] == "PORTFOLIO_SNAPSHOT"
assert [item["ticker"] for item in frozen.metadata["peer_context_batch_watchlist"]] == ["AAPL", "TSLA", "MSFT"]
def test_build_peer_context_prefers_frozen_snapshot_over_live_store(tmp_path):
gateway = RichPortfolioGateway()
store = ResultStore(tmp_path / "task_status", gateway)
service = AnalysisService(
executor=FakeExecutor(),
result_store=store,
job_service=JobService(
task_results={},
analysis_tasks={},
processes={},
persist_task=lambda task_id, data: None,
delete_task=lambda task_id: None,
),
)
context = service._build_peer_context(
ticker="AAPL",
date="2026-04-13",
peer_snapshot=[
{"ticker": "AAA", "result": {"decision": "BUY", "confidence": 0.7}},
{"ticker": "BBB", "result": {"decision": "SELL", "confidence": 0.6}},
],
watchlist_snapshot=[{"ticker": "AAPL"}, {"ticker": "AAA"}, {"ticker": "BBB"}],
)
assert "AAA:BUY" in context
assert "BBB:SELL" in context
assert "industry-normalized" in context
class FakeExecutor:
@@ -197,6 +370,108 @@ class FakeExecutor:
llm_signal="BUY",
confidence=0.82,
report_path=f"results/{ticker}/{date}/complete_report.md",
observation={
"status": "completed",
"observation_code": "completed",
"attempt_mode": request_context.metadata.get("attempt_mode", "baseline"),
"evidence_id": "fake-success",
},
)
class RecoveryThenSuccessExecutor:
def __init__(self):
self.attempt_modes = []
async def execute(self, *, task_id, ticker, date, request_context, on_stage=None):
mode = request_context.metadata.get("attempt_mode", "baseline")
self.attempt_modes.append(mode)
if on_stage is not None:
await on_stage("analysts")
if mode == "baseline":
raise AnalysisExecutorError(
"analysis subprocess failed without required markers: RESULT_META",
code="analysis_protocol_failed",
observation={
"status": "failed",
"observation_code": "analysis_protocol_failed",
"attempt_mode": mode,
"evidence_id": "baseline",
"message": "analysis subprocess failed without required markers: RESULT_META",
},
)
return AnalysisExecutionOutput(
decision="BUY",
quant_signal="OVERWEIGHT",
llm_signal="BUY",
confidence=0.82,
report_path=f"results/{ticker}/{date}/complete_report.md",
observation={
"status": "completed",
"observation_code": "completed",
"attempt_mode": mode,
"evidence_id": f"{mode}-success",
},
)
class RecoveryThenProbeExecutor:
def __init__(self):
self.attempt_modes = []
self.selected_analysts = []
async def execute(self, *, task_id, ticker, date, request_context, on_stage=None):
mode = request_context.metadata.get("attempt_mode", "baseline")
self.attempt_modes.append(mode)
self.selected_analysts.append(tuple(request_context.selected_analysts))
if on_stage is not None:
await on_stage("analysts")
if mode == "provider_probe":
return AnalysisExecutionOutput(
decision="HOLD",
quant_signal="HOLD",
llm_signal="HOLD",
confidence=0.5,
report_path=f"results/{ticker}/{date}/complete_report.md",
observation={
"status": "completed",
"observation_code": "completed",
"attempt_mode": mode,
"evidence_id": "provider-probe-success",
},
)
raise AnalysisExecutorError(
"analysis subprocess timed out after 300s",
code="analysis_failed",
retryable=True,
observation={
"status": "failed",
"observation_code": "subprocess_stdout_timeout",
"attempt_mode": mode,
"evidence_id": f"{mode}-failure",
"message": "analysis subprocess timed out after 300s",
},
)
class AlwaysFailRuntimePolicyExecutor:
def __init__(self):
self.attempt_modes = []
async def execute(self, *, task_id, ticker, date, request_context, on_stage=None):
mode = request_context.metadata.get("attempt_mode", "baseline")
self.attempt_modes.append(mode)
raise AnalysisExecutorError(
f"{mode} failed",
code="analysis_failed",
retryable=(mode != "provider_probe"),
observation={
"status": "failed",
"observation_code": "subprocess_stdout_timeout",
"attempt_mode": mode,
"evidence_id": f"{mode}-failure",
"message": f"{mode} failed",
},
)
@@ -253,6 +528,7 @@ def test_analysis_service_start_analysis_uses_executor(tmp_path):
"status": "running",
}
assert task_results["task-1"]["status"] == "completed"
assert task_results["task-1"]["tentative_classification"]["kind"] == "no_issue"
assert task_results["task-1"]["compat"]["decision"] == "BUY"
assert task_results["task-1"]["result_ref"] == "results/task-1/result.v1alpha1.json"
assert task_results["task-1"]["result"]["signals"]["llm"]["rating"] == "BUY"
@@ -264,3 +540,414 @@ def test_analysis_service_start_analysis_uses_executor(tmp_path):
assert saved_contract["result"]["signals"]["merged"]["rating"] == "BUY"
assert broadcasts[0] == ("task-1", "running", "analysts")
assert broadcasts[-1][1] == "completed"
def test_classify_attempts_marks_baseline_success_as_no_issue():
analysis_service = AnalysisService(
executor=None,
result_store=None,
job_service=None,
)
classification = analysis_service._classify_attempts([
{
"status": "completed",
"observation_code": "completed",
"attempt_mode": "baseline",
}
])
assert classification["kind"] == "no_issue"
def test_analysis_service_promotes_local_recovery_before_success(tmp_path):
gateway = DummyPortfolioGateway()
store = ResultStore(tmp_path / "task_status", gateway)
task_results = {}
analysis_tasks = {}
processes = {}
service = JobService(
task_results=task_results,
analysis_tasks=analysis_tasks,
processes=processes,
persist_task=store.save_task_status,
delete_task=store.delete_task_status,
)
executor = RecoveryThenSuccessExecutor()
analysis_service = AnalysisService(
executor=executor,
result_store=store,
job_service=service,
)
broadcasts = []
async def _broadcast(task_id, payload):
broadcasts.append((task_id, payload["status"], payload.get("tentative_classification")))
async def scenario():
response = await analysis_service.start_analysis(
task_id="task-recovery",
ticker="AAPL",
date="2026-04-13",
request_context=build_request_context(
provider_api_key="provider-secret",
llm_provider="anthropic",
backend_url="https://api.minimaxi.com/anthropic",
selected_analysts=["market", "news"],
),
broadcast_progress=_broadcast,
)
await analysis_tasks["task-recovery"]
return response
response = asyncio.run(scenario())
assert response["status"] == "running"
assert executor.attempt_modes == ["baseline", "local_recovery"]
assert task_results["task-recovery"]["status"] == "completed"
assert task_results["task-recovery"]["tentative_classification"]["kind"] == "local_runtime"
assert task_results["task-recovery"]["budget_state"]["local_recovery_used"] is True
assert any(status == "auto_recovering" for _, status, _ in broadcasts)
def test_analysis_service_uses_single_provider_probe_after_recovery_failure(tmp_path):
gateway = DummyPortfolioGateway()
store = ResultStore(tmp_path / "task_status", gateway)
task_results = {}
analysis_tasks = {}
processes = {}
service = JobService(
task_results=task_results,
analysis_tasks=analysis_tasks,
processes=processes,
persist_task=store.save_task_status,
delete_task=store.delete_task_status,
)
executor = RecoveryThenProbeExecutor()
analysis_service = AnalysisService(
executor=executor,
result_store=store,
job_service=service,
)
broadcasts = []
async def _broadcast(task_id, payload):
broadcasts.append(payload["status"])
async def scenario():
response = await analysis_service.start_analysis(
task_id="task-probe",
ticker="AAPL",
date="2026-04-13",
request_context=build_request_context(
provider_api_key="provider-secret",
llm_provider="anthropic",
backend_url="https://api.minimaxi.com/anthropic",
selected_analysts=["news", "fundamentals"],
),
broadcast_progress=_broadcast,
)
await analysis_tasks["task-probe"]
return response
response = asyncio.run(scenario())
assert response["status"] == "running"
assert executor.attempt_modes == ["baseline", "local_recovery", "provider_probe"]
assert executor.selected_analysts[-1] == ("news",)
assert task_results["task-probe"]["status"] == "completed"
assert task_results["task-probe"]["budget_state"]["provider_probe_used"] is True
assert "probing_provider" in broadcasts
assert task_results["task-probe"]["tentative_classification"]["kind"] == "interaction_effect"
def test_portfolio_analysis_uses_runtime_policy_and_persists_failure_evidence(tmp_path):
gateway = DummyPortfolioGateway()
store = ResultStore(tmp_path / "task_status", gateway)
task_results = {}
analysis_tasks = {}
processes = {}
service = JobService(
task_results=task_results,
analysis_tasks=analysis_tasks,
processes=processes,
persist_task=store.save_task_status,
delete_task=store.delete_task_status,
)
executor = AlwaysFailRuntimePolicyExecutor()
analysis_service = AnalysisService(
executor=executor,
result_store=store,
job_service=service,
)
async def _broadcast(task_id, payload):
return None
async def scenario():
response = await analysis_service.start_portfolio_analysis(
task_id="portfolio-runtime-policy",
date="2026-04-13",
request_context=build_request_context(
provider_api_key="provider-secret",
llm_provider="anthropic",
backend_url="https://api.minimaxi.com/anthropic",
selected_analysts=["market", "social"],
),
broadcast_progress=_broadcast,
)
await analysis_tasks["portfolio-runtime-policy"]
return response
response = asyncio.run(scenario())
assert response["status"] == "running"
assert executor.attempt_modes == ["baseline", "local_recovery", "provider_probe"]
assert task_results["portfolio-runtime-policy"]["status"] == "completed"
assert task_results["portfolio-runtime-policy"]["failed"] == 1
assert len(task_results["portfolio-runtime-policy"]["results"]) == 1
rec = task_results["portfolio-runtime-policy"]["results"][0]
assert rec["status"] == "failed"
assert rec["error"]["code"] == "analysis_failed"
assert rec["tentative_classification"]["kind"] == "interaction_effect"
assert rec["budget_state"]["provider_probe_used"] is True
assert rec["evidence"]["attempts"][-1]["attempt_mode"] == "provider_probe"
def test_task_query_service_loads_contract_and_lists_sorted_summaries(tmp_path):
gateway = DummyPortfolioGateway()
store = ResultStore(tmp_path / "task_status", gateway)
task_results = {
"task-old": {
"contract_version": "v1alpha1",
"task_id": "task-old",
"request_id": "req-old",
"executor_type": "legacy_subprocess",
"result_ref": None,
"ticker": "AAPL",
"date": "2026-04-10",
"status": "running",
"progress": 10,
"current_stage": "analysts",
"created_at": "2026-04-10T10:00:00",
"elapsed_seconds": 1,
"stages": [],
"result": None,
"error": None,
"degradation_summary": None,
"data_quality_summary": None,
"compat": {},
},
"task-new": {
"contract_version": "v1alpha1",
"task_id": "task-new",
"request_id": "req-new",
"executor_type": "legacy_subprocess",
"result_ref": "results/task-new/result.v1alpha1.json",
"ticker": "MSFT",
"date": "2026-04-11",
"status": "completed",
"progress": 100,
"current_stage": "portfolio",
"created_at": "2026-04-11T10:00:00",
"elapsed_seconds": 3,
"stages": [],
"result": {"decision": "STALE"},
"error": None,
"degradation_summary": None,
"data_quality_summary": None,
"compat": {},
},
}
service = JobService(
task_results=task_results,
analysis_tasks={},
processes={},
persist_task=lambda task_id, data: None,
delete_task=lambda task_id: None,
)
store.save_result_contract(
"task-new",
{
"status": "completed",
"ticker": "MSFT",
"date": "2026-04-11",
"result": {
"decision": "BUY",
"confidence": 0.91,
"degraded": False,
"signals": {"merged": {"rating": "BUY"}},
},
"error": None,
},
)
query_service = TaskQueryService(
task_results=task_results,
result_store=store,
job_service=service,
)
payload = query_service.public_task_payload("task-new")
listing = query_service.list_task_summaries()
assert payload["result"]["decision"] == "BUY"
assert listing["contract_version"] == "v1alpha1"
assert listing["total"] == 2
assert [task["task_id"] for task in listing["tasks"]] == ["task-new", "task-old"]
def test_job_service_maps_internal_runtime_statuses_to_running_public_status():
service = JobService(
task_results={
"task-runtime": {
"contract_version": "v1alpha1",
"task_id": "task-runtime",
"request_id": "req-runtime",
"executor_type": "legacy_subprocess",
"result_ref": None,
"ticker": "AAPL",
"date": "2026-04-13",
"status": "auto_recovering",
"progress": 10,
"current_stage": "analysts",
"created_at": "2026-04-13T10:00:00",
"elapsed_seconds": 2,
"stages": [],
"result": None,
"error": None,
"degradation_summary": None,
"data_quality_summary": None,
"evidence_summary": {"attempts": []},
"tentative_classification": None,
"budget_state": {},
"compat": {},
}
},
analysis_tasks={},
processes={},
persist_task=lambda task_id, data: None,
delete_task=lambda task_id: None,
)
payload = service.to_public_task_payload("task-runtime")
summary = service.to_task_summary("task-runtime")
assert payload["status"] == "running"
assert summary["status"] == "running"
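The mapping asserted here can be sketched in a few lines. The status set is inferred from the internal states that appear in these tests (`auto_recovering`, `probing_provider`), so treat it as an assumption rather than the definitive list:

```python
# Sketch: internal runtime statuses are an implementation detail of the
# auto-recovery pipeline; public payloads and summaries collapse them to
# "running" so API clients only see the documented status vocabulary.
INTERNAL_RUNNING_STATUSES = frozenset({"auto_recovering", "probing_provider"})


def to_public_status(status: str) -> str:
    return "running" if status in INTERNAL_RUNNING_STATUSES else status
```

Keeping the mapping at the payload boundary means the stored task state can stay fine-grained for evidence and budget tracking without widening the public contract.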
class _DummyTask:
def __init__(self, events):
self.events = events
def cancel(self):
self.events.append("background task cancel")
class _DummyProcess:
def __init__(self, events):
self.events = events
self.returncode = None
def kill(self):
self.events.append("process kill")


class _RecordingTaskStatusStore:
    """Task-status store that persists to disk and records the order of calls."""

    def __init__(self, task_status_dir: Path, events: list[str]):
        self.task_status_dir = task_status_dir
        self.events = events

    def save_task_status(self, task_id: str, data: dict) -> None:
        self.events.append("save_task_status")
        self.task_status_dir.mkdir(parents=True, exist_ok=True)
        (self.task_status_dir / f"{task_id}.json").write_text(json.dumps(data, ensure_ascii=False))

    def delete_task_status(self, task_id: str) -> None:
        self.events.append("delete_task_status")
        (self.task_status_dir / f"{task_id}.json").unlink(missing_ok=True)

    def load_result_contract(self, *, result_ref=None, task_id=None):
        # No persisted result contract is needed for these tests.
        return None


def test_task_command_service_preserves_delete_on_cancel_sequence(tmp_path):
    """Cancel must kill the process, cancel the task, persist and broadcast, then delete the status file."""
    events: list[str] = []
task_status_dir = tmp_path / "task_status"
store = _RecordingTaskStatusStore(task_status_dir, events)
task_results = {
"task-cancel": {
"contract_version": "v1alpha1",
"task_id": "task-cancel",
"request_id": "req-cancel",
"executor_type": "legacy_subprocess",
"result_ref": None,
"ticker": "AAPL",
"date": "2026-04-11",
"status": "running",
"progress": 20,
"current_stage": "research",
"created_at": "2026-04-11T10:00:00",
"elapsed_seconds": 3,
"stages": [],
"result": None,
"error": None,
"degradation_summary": None,
"data_quality_summary": None,
"compat": {},
}
}
job_service = JobService(
task_results=task_results,
analysis_tasks={"task-cancel": _DummyTask(events)},
processes={"task-cancel": _DummyProcess(events)},
persist_task=lambda task_id, data: None,
delete_task=lambda task_id: None,
)
original_cancel_job = job_service.cancel_job
def _wrapped_cancel_job(task_id: str, error: str = "用户取消"):
events.append("job_service.cancel_job")
return original_cancel_job(task_id, error)
job_service.cancel_job = _wrapped_cancel_job
command_service = TaskCommandService(
task_results=task_results,
analysis_tasks=job_service.analysis_tasks,
processes=job_service.processes,
result_store=store,
job_service=job_service,
)
broadcasts: list[dict] = []
async def _broadcast(task_id: str, payload: dict):
events.append("broadcast_progress")
broadcasts.append(json.loads(json.dumps(payload)))
response = asyncio.run(
command_service.cancel_task("task-cancel", broadcast_progress=_broadcast)
)
assert response == {
"contract_version": "v1alpha1",
"task_id": "task-cancel",
"status": "cancelled",
}
    # Ordering matters: kill/cancel fire first, and delete_task_status must come last.
    assert events == [
"process kill",
"background task cancel",
"job_service.cancel_job",
"save_task_status",
"broadcast_progress",
"delete_task_status",
]
assert broadcasts[-1]["status"] == "cancelled"
assert broadcasts[-1]["error"] == {
"code": "cancelled",
"message": "用户取消",
"retryable": False,
}
assert task_results["task-cancel"]["status"] == "cancelled"
assert task_results["task-cancel"]["error"]["code"] == "cancelled"
assert not (task_status_dir / "task-cancel.json").exists()