Compare commits


52 Commits

Author SHA1 Message Date
Shaojie cd63214f06
Merge 4f88c4c6c2 into fa4d01c23a 2026-04-17 11:13:31 +08:00
陈少杰 4f88c4c6c2 Unblock PR review by removing portability and secret-handling regressions
The open review threads on this branch were all grounded in real issues:
a committed API key in handover docs, Unix-only locking and timeout
mechanisms, synchronous network I/O inside an async API path, and missing
retry/session reuse on market-data calls. This change removes the leaked
credential from the tracked docs, makes the portfolio and profiling paths
portable across platforms, moves live price fetches off the event loop,
and reuses the existing yfinance retry/session helpers where the review
called for them.

While verifying these fixes, the branch also failed to import parts of the
TradingAgents graph because two utility modules referenced by the new code
were absent. I restored those utilities with minimal implementations so the
relevant regression tests and import graph work again in this PR.

Constraint: No new dependencies; portability fixes had to stay in the standard library
Rejected: Add portalocker or filelock | unnecessary new dependency for a small compatibility gap
Rejected: Keep signal.alarm and fcntl as Unix-only behavior | leaves the reported review blockers unresolved
Confidence: medium
Scope-risk: moderate
Reversibility: clean
Directive: Keep shared runtime paths cross-platform and keep async handlers free of direct blocking network I/O
Tested: python -m pytest -q web_dashboard/backend/tests/test_portfolio_api.py orchestrator/tests/test_quant_runner.py orchestrator/tests/test_profile_stage_chain.py tradingagents/tests/test_stockstats_utils.py
Tested: python -m pytest -q orchestrator/tests/test_trading_graph_config.py tradingagents/tests/test_research_guard.py
Not-tested: Full repository test suite and GitHub-side post-push checks
2026-04-17 10:50:47 +08:00
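The commit above moves live price fetches off the event loop. A minimal sketch of that pattern with `asyncio.to_thread`; the function names are hypothetical stand-ins, not the repository's actual API:

```python
import asyncio

def fetch_live_price(ticker: str) -> float:
    # Hypothetical stand-in for a blocking yfinance-style network call.
    return 42.0

async def get_price(ticker: str) -> float:
    # asyncio.to_thread runs the synchronous call in a worker thread,
    # so the event loop keeps serving other requests in the meantime.
    return await asyncio.to_thread(fetch_live_price, ticker)

price = asyncio.run(get_price("600519.SS"))
```

Calling `fetch_live_price` directly inside an async handler would block every other coroutine for the duration of the network round trip.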
陈少杰 e581adbeca refactor(factory): add pattern caching and type safety to validation
Improvements:
- Add ProviderMismatch TypedDict for type-safe return values
- Cache compiled regex patterns for better performance
- Update documentation to reflect optimizations

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 20:28:14 +08:00
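The commit above pairs a `ProviderMismatch` TypedDict with cached regex compilation. A minimal sketch of both ideas together, assuming hypothetical field names and a simplified single-provider pattern table:

```python
import re
from typing import Optional, TypedDict

class ProviderMismatch(TypedDict):
    # Hypothetical shape of the type-safe return value.
    provider: str
    base_url: str
    reason: str

# Compiled once at import time instead of on every validation call.
_ANTHROPIC_URL = re.compile(r"https://api\.anthropic\.com(/|$)|anthropic")

def check(provider: str, base_url: str) -> Optional[ProviderMismatch]:
    if provider == "anthropic" and not _ANTHROPIC_URL.search(base_url):
        return {"provider": provider, "base_url": base_url,
                "reason": "base_url does not look like an anthropic endpoint"}
    return None
```

The TypedDict gives static checkers a concrete shape to verify at call sites, while module-level compilation avoids paying `re.compile` on every request.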
陈少杰 78312851f9 refactor(orchestrator): centralize provider validation in factory
Move provider × base_url validation patterns from llm_runner.py to
factory.py's ProviderSpec, implementing the architecture improvement
suggested in docs/architecture/orchestrator-validation.md.

Changes:
- Add base_url_patterns field to ProviderSpec dataclass
- Split ollama and openrouter into separate ProviderSpec entries
  (previously shared openai's spec with dynamic provider selection)
- Add validate_provider_base_url() function in factory for reusable validation
- Simplify LLMRunner._detect_provider_mismatch() to delegate to factory
- Update architecture doc with change log and implementation notes

Benefits:
- Single source of truth for provider configuration
- Easier maintenance when adding/updating providers
- Reduced code duplication (llm_runner.py: -39 lines, factory.py: +84 lines)
- Factory validation can be tested independently

All 28 orchestrator validation tests pass, including 6 provider mismatch tests.
2026-04-16 20:06:30 +08:00
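The commit above centralizes provider × base_url validation in the factory's `ProviderSpec`. A minimal sketch of that single-source-of-truth shape; the dataclass fields, pattern table, and function signature here are assumptions, not the repository's real definitions:

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderSpec:
    # Hypothetical reduced spec carrying its own URL patterns.
    name: str
    base_url_patterns: tuple = ()

_SPECS = {
    "ollama": ProviderSpec("ollama", (re.compile(r"localhost|127\.0\.0\.1"),)),
    "openrouter": ProviderSpec("openrouter", (re.compile(r"openrouter\.ai"),)),
}

def validate_provider_base_url(provider: str, base_url: str) -> bool:
    # The runner delegates here instead of keeping its own copy of the
    # provider -> URL-pattern table.
    spec = _SPECS.get(provider)
    if spec is None or not spec.base_url_patterns:
        return True  # no constraint registered: accept
    return any(p.search(base_url) for p in spec.base_url_patterns)
```

Splitting ollama and openrouter into separate entries, as the commit describes, lets each carry its own patterns instead of sharing one spec with dynamic selection.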
陈少杰 a5fd95af82 chore: add gstack skill routing rules to CLAUDE.md 2026-04-16 19:57:22 +08:00
陈少杰 9753635370 fix: resolve merge conflicts in README and factory.py 2026-04-16 17:01:57 +08:00
陈少杰 579c787027 wip: stage uncommitted changes before merge 2026-04-16 17:01:04 +08:00
陈少杰 eda9980729 feat(orchestrator): add comprehensive provider and timeout validation
Add three layers of configuration validation to LLMRunner:

1. Provider × base_url matrix validation
   - Validates all 6 providers (anthropic, openai, google, xai, ollama, openrouter)
   - Uses precompiled regex patterns for efficiency
   - Detects mismatches before expensive graph initialization

2. Timeout configuration validation
   - Warns when analyst/research timeouts may be insufficient
   - Provides recommendations based on analyst count (1-4)
   - Non-blocking warnings logged at init time

3. Enhanced error classification
   - Distinguishes provider_mismatch from provider_auth_failed
   - Uses heuristic detection for auth failures
   - Simplified nested ternary expressions for readability

Improvements:
- Validation runs before cache check (prevents stale cache on config errors)
- EAFP pattern for cache reading (more robust than TOCTOU)
- Precompiled regex patterns (avoid recompilation overhead)
- All 21 unit tests passing

Documentation:
- docs/architecture/orchestrator-validation.md - complete validation guide
- orchestrator/examples/validation_examples.py - runnable examples

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-16 11:43:19 +08:00
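The commit above prefers EAFP cache reading over a check-then-read sequence. A minimal sketch of the difference, with a hypothetical helper name:

```python
import json

def read_cache(path: str):
    # EAFP: attempt the read and handle failure, instead of calling
    # os.path.exists() first and then opening (TOCTOU: the file can
    # vanish between the check and the use).
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        return None  # treat missing or corrupt cache as a cache miss
```

Catching `json.JSONDecodeError` alongside `OSError` also means a half-written cache file degrades to a miss rather than crashing init.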
陈少杰 0ba4e40601 Keep maintainer docs aligned with the current contract-first and provenance reality
The repository state has moved well past the oldest migration drafts: backend public payloads are already contract-first in several paths, research provenance now spans runner/live/full-state logs, and the offline trace/A-B toolchain is part of the normal maintainer workflow. This doc update records what is already true on mainline versus what remains target-state, so future changes stop treating stale design notes as the current architecture.

Constraint: Reflect only behavior that is already present on mainline; avoid documenting unrecovered worker-only experiments as current reality
Rejected: Collapse everything into README | maintainer-facing migration/provenance details would become harder to keep precise and reviewable
Confidence: high
Scope-risk: narrow
Directive: When changing backend public fields or profiling semantics, update AGENTS.md and the linked docs in the same change set so maintainer guidance does not drift behind code again
Tested: git diff --check on updated documentation set
Not-tested: No runtime/code-path changes in this docs-only commit
2026-04-14 15:20:39 +08:00
陈少杰 64e3583f66 Unify research provenance extraction and persist it into state logs
The earlier Phase 1-4 recovery left one unique worker-1 slice unrecovered: provenance extraction logic was still duplicated in the runner and the full-state log path still dropped the structured research fields. This change centralizes provenance extraction in agent state helpers, reuses it from the LLM runner, and writes the same structured fields into TradingAgents full-state logs with focused regression tests.

Constraint: Preserve the existing debate-string output shape while making provenance reuse consistent across runner and state-log surfaces
Rejected: Cherry-pick worker-1 auto-checkpoint wholesale | it mixed duplicate A/B files and uv.lock churn with the useful provenance helper changes
Confidence: high
Scope-risk: narrow
Directive: Keep research provenance extraction centralized; new consumers should call the helper instead of re-listing field names by hand
Tested: python -m pytest -q tradingagents/tests/test_research_guard.py orchestrator/tests/test_trading_graph_config.py orchestrator/tests/test_llm_runner.py orchestrator/tests/test_profile_stage_chain.py orchestrator/tests/test_profile_ab.py orchestrator/tests/test_contract_v1alpha1.py orchestrator/tests/test_live_mode.py
Tested: python -m compileall tradingagents/agents/utils/agent_states.py tradingagents/graph/trading_graph.py orchestrator/llm_runner.py orchestrator/tests/test_trading_graph_config.py tradingagents/tests/test_research_guard.py
Not-tested: Live-provider end-to-end analysis run that emits a new full_states_log file
2026-04-14 13:34:25 +08:00
陈少杰 8c6da22f4f Finish the A/B harness recovery without leaving conflict markers behind
The worker-4 recovery brought in the trace-summary helper split and A/B harness updates, but the cherry-pick left conflict markers around build_trace_payload in profile_stage_chain.py. This follow-up keeps the merged import-based shape and records the cleanup as a standalone reversible step.

Constraint: Preserve the recovered trace payload shape while removing only the cherry-pick residue
Rejected: Re-run the cherry-pick from scratch | unnecessary after the resolved file already passed targeted verification
Confidence: high
Scope-risk: narrow
Directive: If profile_stage_chain.py is touched again, verify the file is marker-free before running compile/test to avoid silent recovery drift
Tested: python -m pytest -q orchestrator/tests/test_contract_v1alpha1.py tradingagents/tests/test_research_guard.py orchestrator/tests/test_llm_runner.py orchestrator/tests/test_live_mode.py orchestrator/tests/test_profile_stage_chain.py orchestrator/tests/test_profile_ab.py; python -m orchestrator.profile_stage_chain --help; python -m compileall orchestrator/profile_stage_chain.py orchestrator/profile_trace_utils.py orchestrator/profile_ab.py orchestrator/tests/test_profile_ab.py tradingagents/tests/test_research_guard.py
Not-tested: Live-provider end-to-end profile_ab comparison on real traces
2026-04-14 05:15:21 +08:00
陈少杰 d34ad8d3ef omx(team): auto-checkpoint worker-4 [unknown] 2026-04-14 05:14:01 +08:00
陈少杰 a81f825203 Make A/B trace comparisons easier to trust during profiling
The minimal offline harness now carries forward source-file and trace-schema
metadata, and it can break ties using error counts instead of only elapsed
runtime and degraded-research totals. This keeps Phase 1-4 profile comparisons
self-describing when multiple dumps are aggregated.

Constraint: Keep the harness offline and avoid changing the default runtime path
Rejected: Add a live dual-run executor | would couple profiling to external LLM calls and increase risk
Confidence: high
Scope-risk: narrow
Directive: Preserve the trace dump shape as the source of truth for future comparison tooling
Tested: uv run python inline assertions for orchestrator.tests.test_profile_ab
Tested: uv run python CLI smoke test for orchestrator.profile_ab with temp traces
Tested: uv run python -m compileall orchestrator/profile_stage_chain.py orchestrator/profile_trace_utils.py orchestrator/profile_ab.py orchestrator/tests/test_profile_ab.py
2026-04-14 05:12:13 +08:00
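The commit above lets the A/B harness break ties with error counts rather than elapsed runtime and degraded-research totals alone. A minimal sketch of such a tuple comparison key; the field names and the exact ordering of criteria are assumptions for illustration:

```python
def rank_key(trace: dict) -> tuple:
    # Hypothetical ordering: fewer errors wins first, then fewer
    # degraded-research samples, then lower elapsed runtime.
    return (trace.get("error_count", 0),
            trace.get("degraded_research", 0),
            trace.get("elapsed_s", 0.0))

traces = [
    {"name": "A", "elapsed_s": 10.0, "degraded_research": 1, "error_count": 2},
    {"name": "B", "elapsed_s": 12.0, "degraded_research": 1, "error_count": 0},
]
best = min(traces, key=rank_key)
```

Here B wins despite being slower, because Python compares tuples lexicographically and the error count differs before elapsed time is ever consulted.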
陈少杰 5aa0091773 Clarify the executable provenance profiling entrypoint
The provenance guide already documented the guard semantics and A/B harness, but its example command invoked the script by file path, which fails from the repo root because package imports do not resolve there. Document the module invocation instead so verification can reproduce the harness without ad hoc path fixes.

Constraint: Keep documentation aligned with the current harness without changing runtime behavior or the default debate path
Rejected: Add PYTHONPATH=. to the examples | less ergonomic and easier to drift from normal repo-root usage
Confidence: high
Scope-risk: narrow
Directive: Keep profiling examples runnable from the repo root; update the docs if the harness entrypoint changes again
Tested: python -m orchestrator.profile_stage_chain --help
Tested: python -m pytest tradingagents/tests/test_research_guard.py orchestrator/tests/test_llm_runner.py orchestrator/tests/test_live_mode.py orchestrator/tests/test_contract_v1alpha1.py orchestrator/tests/test_trading_graph_config.py
Tested: lsp_diagnostics_directory (0 errors, 0 warnings)
Not-tested: end-to-end profile run against a live LLM backend
2026-04-14 05:11:48 +08:00
陈少杰 909519ff17 omx(team): auto-checkpoint worker-2 [unknown] 2026-04-14 04:47:52 +08:00
陈少杰 addc4a1e9c Keep research degradation visible while bounding researcher nodes
Research provenance now rides with the debate state, cache metadata, live payloads, and trace dumps so degraded research no longer masquerades as a normal sample. Bull/Bear/Manager nodes also return explicit guarded fallbacks on timeout or exception, which gives the graph a real node budget boundary without rewriting the bull/bear output shape or removing debate.

Constraint: Must preserve bull/bear debate structure and output shape while adding provenance and node guards
Rejected: Skip bull/bear debate in compact mode | would trade away analysis quality before A/B evidence exists
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: Treat research_status and data_quality as rollout gates; do not collapse degraded research back into normal success samples
Tested: python -m pytest tradingagents/tests/test_research_guard.py orchestrator/tests/test_llm_runner.py orchestrator/tests/test_live_mode.py web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py -q; python -m compileall tradingagents/graph/setup.py tradingagents/agents/utils/agent_states.py tradingagents/graph/propagation.py orchestrator/llm_runner.py orchestrator/live_mode.py orchestrator/profile_stage_chain.py; python orchestrator/profile_stage_chain.py --ticker 600519.SS --date 2026-04-10 --provider anthropic --model MiniMax-M2.7-highspeed --base-url https://api.minimaxi.com/anthropic --selected-analysts market --analysis-prompt-style compact --timeout 45 --max-retries 0 --overall-timeout 120 --dump-raw-on-failure
Not-tested: Full successful live-provider completion through Portfolio Manager after the post-research connection failure
2026-04-14 03:49:33 +08:00
陈少杰 baf67dbd58 Trim the research phase before trusting profiling output
The legacy path was already narrowed to market-only compact execution, but the research stage remained the slowest leg and the profiler lacked persistent raw event artifacts for comparison. This change further compresses the compact prompts for Bull Researcher, Bear Researcher, and Research Manager, adds durable raw event dumps to the graph profiler, and keeps profiling evidence out of the runtime contract itself.

Constraint: No new dependencies and no runtime-contract pollution for profiling-only data
Rejected: Add synthetic timing fields back into the subprocess protocol | those timings are not real graph-stage boundaries and would mislead diagnosis
Rejected: Skip raw event dump persistence and rely on console output | makes multi-run comparison and regression tracking fragile
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep profiling as an external diagnostic surface; if stage timing ever enters contracts again, it must come from real graph boundaries
Tested: python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py -q
Tested: python -m compileall tradingagents/agents/researchers/bull_researcher.py tradingagents/agents/researchers/bear_researcher.py tradingagents/agents/managers/research_manager.py orchestrator/profile_stage_chain.py
Tested: real provider profiling via orchestrator/profile_stage_chain.py with market-only compact settings; dump persisted to orchestrator/profile_runs/600519.SS_2026-04-10_20260413T184742Z.json
Not-tested: browser/manual consumption of the persisted profiling dump
2026-04-14 02:51:07 +08:00
陈少杰 8a4f0ad540 Reduce the legacy execution path before profiling it for real
The provider itself was healthy, but the legacy dashboard path still ran the heaviest graph shape by default and had no trustworthy stage profiling story. This change narrows the default legacy execution settings to the market-only compact path with conservative timeout/retry values, injects those settings through the unified request/runtime surface, and adds a standalone graph-update profiler so stage timing comes from real node completions rather than synthetic script labels.

Constraint: Profiling evidence had to be grounded in the real provider path without adding new dependencies or polluting the runtime contract
Rejected: Keep synthetic STAGE_TIMING in the subprocess protocol | misattributes the heaviest work to the wrong phase and makes the profiling conclusion untrustworthy
Rejected: Broaden the default legacy path and rely on longer timeouts | raises cost and latency while obscuring the true bottleneck
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep operational profiling separate from runtime business contracts unless timings are sourced from real graph-stage boundaries
Tested: python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py -q
Tested: python -m compileall web_dashboard/backend orchestrator/profile_stage_chain.py
Tested: real provider direct invoke returned OK against MiniMax anthropic-compatible endpoint
Tested: real graph profiling via orchestrator/profile_stage_chain.py produced stage timings for 600519.SS on 2026-04-10 with selected_analysts=market and compact prompt
Not-tested: legacy subprocess full end-to-end success case on the same provider path (current run still exits via protocol failure after upstream connection error)
2026-04-14 02:42:53 +08:00
陈少杰 eb2ab0afcf Preserve diagnostics in live-mode failure payloads
The previous hardening pass still dropped source diagnostics and data-quality context once live-mode serialized a dual-lane failure. Keep those fields when a structured CombinedSignalFailure reaches the websocket layer so consumers can distinguish provider mismatch, stale data, and other degraded cases even when no final signal exists.

Constraint: Follow-on fix after 63858bf should stay minimal and not reopen unrelated executor/calendar work
Rejected: Fold this into a larger amend of the prior commit | history is already shared and the delta is a single behavioral correction
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When failure exceptions carry structured diagnostics, live serializers must preserve them instead of flattening to a generic message
Tested: python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py orchestrator/tests/test_market_calendar.py orchestrator/tests/test_live_mode.py orchestrator/tests/test_application_service.py orchestrator/tests/test_quant_runner.py orchestrator/tests/test_llm_runner.py -q
Tested: python -m compileall orchestrator web_dashboard/backend
Tested: npm run build (web_dashboard/frontend)
Not-tested: real websocket consumers against provider-backed failure paths
2026-04-14 02:10:31 +08:00
陈少杰 a4def7aff9 Harden executor configuration and failure contracts before further rollout
The rollout-ready branch still conflated dashboard auth with provider credentials, discarded diagnostics when both signal lanes degraded, and treated RESULT_META as optional even though downstream contracts now depend on it. This change separates provider runtime settings from request auth, preserves source diagnostics/data quality in full-failure contracts, requires RESULT_META in the subprocess protocol, and moves A-share holidays into an updateable calendar data source.

Constraint: No external market-calendar dependency is available in env312 and dependency policy forbids adding one casually
Rejected: Keep reading provider keys from request headers | couples dashboard auth to execution and breaks non-anthropic providers
Rejected: Leave both-signals-unavailable as a bare ValueError | loses diagnostics before live/backend contracts can serialize them
Rejected: Keep A-share holidays embedded in Python constants | requires code edits every year and preserves the stopgap design
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: Keep subprocess protocol fields explicit and fail closed when RESULT_META is missing; do not route provider credentials through dashboard auth again
Tested: python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py orchestrator/tests/test_market_calendar.py orchestrator/tests/test_live_mode.py orchestrator/tests/test_application_service.py orchestrator/tests/test_quant_runner.py orchestrator/tests/test_llm_runner.py -q
Tested: python -m compileall orchestrator web_dashboard/backend
Not-tested: real provider-backed execution across openai/google providers
Not-tested: browser/manual verification beyond existing frontend contract consumers
2026-04-14 01:54:44 +08:00
陈少杰 a245915f4e Recover the next verified Phase 4 improvements without waiting on team teardown
The team run reached a quiescent state with no in-progress work but still had pending bookkeeping tasks, so the next safe step was to pull only the newly verified commits into main. This batch adds a frontend contract-view audit guard and the reusable contract cue UI so degradation and data-quality states are visible where the contract-first payload already exposes them.

Constraint: The team snapshot still has pending bookkeeping tasks, so do not treat it as terminal cleanup-ready
Rejected: Wait for terminal team shutdown before any further recovery | delays low-risk verified changes even though no workers are actively modifying code
Rejected: Pull the entire worker-3 checkpoint verbatim | unnecessary risk of reintroducing snapshot-only churn when only the frontend files are needed
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep frontend contract cue rendering centralized; avoid reintroducing page-specific ad-hoc degradation badges
Tested: python -m pytest web_dashboard/backend/tests/test_frontend_contract_view_audit.py web_dashboard/backend/tests/test_api_smoke.py web_dashboard/backend/tests/test_services_migration.py -q
Tested: npm run build (web_dashboard/frontend)
Not-tested: manual browser interaction with the new ContractCues component
Not-tested: final OMX team terminal shutdown path
2026-04-14 01:19:01 +08:00
陈少杰 11cbb7ce85 Carry Phase 4 rollout-readiness work back into the mainline safely
Team execution produced recoverable commits for market-holiday handling, live websocket contracts, regression coverage, and the remaining frontend contract-view polish. Recover those changes into main without waiting for terminal team shutdown, preserving the verified payload semantics while avoiding the worker auto-checkpoint noise.

Constraint: Team workers were still in progress, so recovery had to avoid destructive shutdown and ignore the worker-3 uv.lock churn
Rejected: Wait for terminal shutdown before recovery | unnecessary delay once commits were already recoverable and verified
Rejected: Cherry-pick worker-3 checkpoint wholesale | would import unrelated uv.lock churn into main
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: Treat team INTEGRATED mailbox messages as hints only; always inspect snapshot refs/worktrees before claiming the leader actually merged code
Tested: python -m pytest orchestrator/tests/test_market_calendar.py orchestrator/tests/test_quant_runner.py orchestrator/tests/test_application_service.py orchestrator/tests/test_live_mode.py web_dashboard/backend/tests/test_api_smoke.py -q
Tested: python -m compileall orchestrator web_dashboard/backend
Tested: npm run build (web_dashboard/frontend)
Not-tested: final team terminal completion after recovery
Not-tested: real websocket clients or live provider-backed market holiday sessions
2026-04-14 01:15:18 +08:00
陈少杰 7cd9c4617a Expose data-quality semantics before rolling contract-first further
Phase 3 adds concrete data-quality states to the contract surface so weekend runs, stale market data, partial payloads, and provider/config mismatches stop collapsing into generic success or failure. The backend now carries those diagnostics from quant/llm runners through the legacy executor contract, while the frontend reads decision/confidence fields from result or compat instead of assuming legacy top-level payloads.

Constraint: existing recommendation/task files and current dashboard routes must remain readable during migration
Rejected: infer data quality only in the service layer | loses source-specific evidence and violates the executor/orchestrator boundary
Rejected: leave frontend on top-level decision fields | breaks as soon as contract-first payloads become the default
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: keep new data-quality states explicit in contract metadata and route all UI reads through result/compat helpers
Tested: python -m pytest orchestrator/tests/test_quant_runner.py orchestrator/tests/test_llm_runner.py orchestrator/tests/test_signals.py orchestrator/tests/test_application_service.py orchestrator/tests/test_trading_graph_config.py web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py web_dashboard/backend/tests/test_main_api.py web_dashboard/backend/tests/test_portfolio_api.py -q
Tested: python -m compileall orchestrator tradingagents web_dashboard/backend
Tested: npm run build (web_dashboard/frontend)
Not-tested: real exchange holiday calendars beyond weekend detection
Not-tested: real provider-backed end-to-end runs for provider_mismatch and stale-data scenarios
2026-04-14 00:37:35 +08:00
陈少杰 d86b805c12 Make backend task and recommendation APIs contract-first by default
Phase 2 moves the dashboard off raw task-state leakage and onto stable public projections. Task status, task listings, progress websocket events, and portfolio recommendation reads now load persisted contracts when available, expose a contract-first envelope, and keep legacy fields inside a compat block instead of smearing them across top-level payloads.

Constraint: existing task-status JSON and recommendation files must continue to read successfully during migration
Rejected: return raw task_results directly from API and websocket | keeps legacy fields as the public contract and blocks cutover
Rejected: rewrite stored recommendation files in-place | adds risky migration work before rollout gates exist
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: keep public payload shaping in job/result-store projections, not in ad-hoc route logic
Tested: python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py web_dashboard/backend/tests/test_main_api.py web_dashboard/backend/tests/test_portfolio_api.py -q
Tested: python -m pytest orchestrator/tests/test_application_service.py orchestrator/tests/test_trading_graph_config.py -q
Tested: python -m compileall orchestrator tradingagents web_dashboard/backend
Not-tested: legacy frontend rendering against new compat-wrapped task payloads
Not-tested: real websocket clients and provider-backed end-to-end analysis
2026-04-14 00:26:28 +08:00
陈少杰 a4fb0c4060 Prevent executor regressions from leaking through the dashboard
Phase 1 left the backend halfway between legacy task payloads and the new executor boundary. This commit finishes the review-fix pass so missing protocol markers fail closed, timed-out subprocesses are killed, and successful analysis runs persist a result contract before task state is marked complete.

Constraint: env312 lacks pytest-asyncio so async executor tests must run without extra plugins
Rejected: Keep missing marker fallback as HOLD | masks protocol regressions as neutral signals
Rejected: Leave service success assembly in AnalysisService | breaks contract-first persistence and result_ref wiring
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: Keep backend success state driven by persisted result contracts; do not reintroduce raw stdout parsing in services
Tested: python -m compileall orchestrator tradingagents web_dashboard/backend
Tested: python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py web_dashboard/backend/tests/test_main_api.py web_dashboard/backend/tests/test_portfolio_api.py -q
Tested: python -m pytest orchestrator/tests/test_application_service.py orchestrator/tests/test_trading_graph_config.py -q
Not-tested: real provider-backed MiniMax execution
Not-tested: full dashboard websocket/manual UI flow
2026-04-14 00:19:13 +08:00
陈少杰 b6e57d01e3 Stabilize TradingAgents contracts so orchestration and dashboard can converge
This change set introduces a versioned result contract, shared config schema/loading, provider/data adapter seams, and a no-strategy application-service skeleton so the current research graph, orchestrator layer, and dashboard backend stop drifting further apart. It also keeps the earlier MiniMax compatibility and compact-prompt work aligned with the new contract shape and extends regression coverage so degradation, fallback, and service migration remain testable during the next phases.

Constraint: Must preserve existing FastAPI entrypoints and fallback behavior while introducing an application-service seam
Constraint: Must not turn application service into a new strategy or learning layer
Rejected: Full backend rewrite to service-only execution now | too risky before contract and fallback paths stabilize
Rejected: Leave provider/data/config logic distributed across scripts and endpoints | continues boundary drift and weakens verification
Confidence: high
Scope-risk: broad
Directive: Keep future application-service changes orchestration-only; move any scoring, signal fusion, or learning logic to orchestrator or tradingagents instead
Tested: python -m compileall orchestrator tradingagents web_dashboard/backend
Tested: python -m pytest orchestrator/tests/test_signals.py orchestrator/tests/test_llm_runner.py orchestrator/tests/test_quant_runner.py orchestrator/tests/test_contract_v1alpha1.py orchestrator/tests/test_application_service.py orchestrator/tests/test_provider_adapter.py web_dashboard/backend/tests/test_main_api.py web_dashboard/backend/tests/test_portfolio_api.py web_dashboard/backend/tests/test_api_smoke.py web_dashboard/backend/tests/test_services_migration.py -q
Not-tested: live MiniMax/provider execution against external services
Not-tested: full dashboard/manual websocket flow against a running frontend
Not-tested: omx team runtime end-to-end in the primary workspace
2026-04-13 17:25:07 +08:00
陈少杰 5b2d631393 fix(backend): add MINIMAX_API_KEY fallback + project_dir in orchestrator config
- project_dir was missing from trading_agents_config, causing a KeyError in TradingAgentsGraph
- ANTHROPIC_API_KEY falls back to MINIMAX_API_KEY for users using MiniMax API
- Both /api/analysis/start and /api/portfolio/analyze updated

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-10 02:31:05 +08:00
陈少杰 d419d85494 fix(orchestrator): fix FinalSignal dataclass attribute access in script template
- result.get() raises AttributeError since FinalSignal is a dataclass, not a dict
- Access direction/confidence as result.direction, result.confidence
- LLM signal rating extracted from Signal.metadata["rating"]
- Quant signal rating derived from quant_sig_obj.direction + confidence
  (quant metadata has no "rating" field, only sharpe/params)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-10 02:16:40 +08:00
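The commit above fixes dict-style access on a dataclass. A minimal sketch of the failure mode and the corrected attribute access; the `FinalSignal` fields shown here are a reduced, hypothetical subset:

```python
from dataclasses import dataclass, field

@dataclass
class FinalSignal:
    # Hypothetical reduced shape of the orchestrator's result dataclass.
    direction: str
    confidence: float
    metadata: dict = field(default_factory=dict)

result = FinalSignal(direction="BUY", confidence=0.8, metadata={"rating": 4})

# result.get("direction") would raise AttributeError: dataclass instances
# are not dicts. Read fields as attributes instead.
direction = result.direction
rating = result.metadata["rating"]  # metadata itself is a plain dict
```

The asymmetry in the commit message follows the same logic: `metadata` is a real dict, so subscripting it is fine, while the dataclass fields are only reachable as attributes.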
陈少杰 0cd40a9bab feat: integrate TradingOrchestrator with 5-level signal dashboard
- Merge orchestrator module (Quant+LLM dual-track signal fusion)
- Replace ANALYSIS_SCRIPT_TEMPLATE to use TradingOrchestrator.get_combined_signal()
- Extend signal levels: BUY/OVERWEIGHT/HOLD/UNDERWEIGHT/SELL (direction × confidence≥0.7)
- Backend: parse SIGNAL_DETAIL: stdout line, populate quant_signal/llm_signal/confidence fields
- Backend: update _extract_decision() regex for 5-level signals
- Backend: add OVERWEIGHT/UNDERWEIGHT colors to PDF export
- Frontend: DecisionBadge classMap for all 5 signal levels
- Frontend: index.css color tokens --overweight/--underweight
- Frontend: AnalysisMonitor shows LLM signal, Quant signal, confidence% on completion
- Add orchestrator/cache/ to .gitignore
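One plausible reading of the direction × confidence rule, as a sketch (the 0.7 threshold is taken from the bullet above; the exact mapping in the code may differ):

```python
def to_signal_level(direction: str, confidence: float, threshold: float = 0.7) -> str:
    # Strong conviction maps to BUY/SELL, weaker conviction to
    # OVERWEIGHT/UNDERWEIGHT, and anything else to HOLD.
    if direction == "long":
        return "BUY" if confidence >= threshold else "OVERWEIGHT"
    if direction == "short":
        return "SELL" if confidence >= threshold else "UNDERWEIGHT"
    return "HOLD"

print(to_signal_level("long", 0.9))   # BUY
print(to_signal_level("short", 0.5))  # UNDERWEIGHT
print(to_signal_level("flat", 0.9))   # HOLD
```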

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-10 01:59:43 +08:00
陈少杰 8960fdf321 feat(orchestrator): merge orchestrator module into main 2026-04-10 01:52:00 +08:00
陈少杰 b50e5b4725 fix(review): hmac.compare_digest for API key, ws/orchestrator auth, SignalMerger per-signal cap logic 2026-04-09 23:00:20 +08:00
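The constant-time key check referenced in this commit boils down to the following (helper name is illustrative):

```python
import hmac

def api_key_valid(provided: str, expected: str) -> bool:
    # hmac.compare_digest avoids the early-exit timing leak of a plain ==,
    # which returns as soon as the first differing byte is found.
    return hmac.compare_digest(provided.encode(), expected.encode())

print(api_key_valid("secret-key", "secret-key"))  # True
print(api_key_valid("secret-kez", "secret-key"))  # False
```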
陈少杰 28a95f34a7 fix(review): api_key→anthropic_key bug, sync-in-async event loop block, orchestrator per-message re-init, dead code cleanup 2026-04-09 22:55:36 +08:00
陈少杰 ce2e6d32cc feat(orchestrator): example scripts for backtest and live mode 2026-04-09 22:12:02 +08:00
陈少杰 480f0299b0 feat(orchestrator): LiveMode + /ws/orchestrator WebSocket endpoint 2026-04-09 22:10:15 +08:00
陈少杰 724c447720 feat(orchestrator): BacktestMode for historical signal collection 2026-04-09 22:09:38 +08:00
陈少杰 928f069184 test(orchestrator): unit tests for SignalMerger, LLMRunner._map_rating, QuantRunner._calc_confidence 2026-04-09 22:07:21 +08:00
陈少杰 14191abc29 feat(orchestrator): TradingOrchestrator main class with get_combined_signal 2026-04-09 22:05:03 +08:00
陈少杰 ba3297a696 fix(llm_runner): use stored direction/confidence on cache hit, sanitize ticker path 2026-04-09 22:03:17 +08:00
陈少杰 852b6c98e3 feat(orchestrator): implement LLMRunner with lazy graph init and JSON cache 2026-04-09 21:58:38 +08:00
陈少杰 29aae4bb18 feat(orchestrator): implement LLMRunner with caching and rating mapping 2026-04-09 21:54:48 +08:00
陈少杰 30d8f90467 fix(quant_runner): fix 3 critical issues and 2 important improvements
- Critical 1: initialize orders=[] before loop to prevent NameError when df is empty
- Critical 2: replace bare sqlite3 conn with context manager (with statement) in get_signal
- Critical 3: remove ticker param from _load_best_params (table has no ticker col, params are global)
- Important: extract db_path as self._db_path attribute in __init__ (DRY)
- Important: add comment explaining lazy imports require sys.path set in __init__
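Critical 2 above can be illustrated with a minimal sketch (table name is a placeholder):

```python
import sqlite3

# Using the connection as a context manager commits the transaction on
# success and rolls back on error; note it does not close the handle,
# so close() still runs explicitly.
conn = sqlite3.connect(":memory:")
with conn:
    conn.execute("CREATE TABLE best_params (name TEXT, value REAL)")
    conn.execute("INSERT INTO best_params VALUES (?, ?)", ("window", 20.0))
rows = conn.execute("SELECT name, value FROM best_params").fetchall()
conn.close()
print(rows)  # [('window', 20.0)]
```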
2026-04-09 21:51:38 +08:00
陈少杰 7a03c29330 feat(orchestrator): implement QuantRunner with BollingerStrategy signal generation 2026-04-09 21:44:34 +08:00
陈少杰 dacb3316fa fix(orchestrator): code quality fixes in config and signals
- config: remove hardcoded absolute path for quant_backtest_path (now empty string)
- config: add llm_solo_penalty (0.7) and quant_solo_penalty (0.8) fields
- signals: SignalMerger now accepts OrchestratorConfig in __init__
- signals: use config.llm_solo_penalty / quant_solo_penalty instead of magic numbers
- signals: apply quant_weight_cap / llm_weight_cap as confidence upper bounds
- signals: both-None branch raises ValueError instead of returning ticker=""
- signals: replace assert with explicit ValueError for llm-None-when-quant-None
- signals: replace datetime.utcnow() with datetime.now(timezone.utc)
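The datetime change is easy to verify in isolation:

```python
from datetime import datetime, timezone

# utcnow() returns a naive datetime (tzinfo=None) and is deprecated in 3.12;
# now(timezone.utc) carries tzinfo, matching the signals change above.
aware = datetime.now(timezone.utc)
print(aware.tzinfo)  # UTC
```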
2026-04-09 21:39:23 +08:00
陈少杰 56dc76d44a feat(orchestrator): add signals.py and config.py
- Signal / FinalSignal dataclasses
- SignalMerger with weighted merge, single-track fallbacks, and cancel-out HOLD
- OrchestratorConfig with all required fields
2026-04-09 21:35:31 +08:00
陈少杰 73fa75d9fb chore: add .worktrees/ to .gitignore 2026-04-09 21:24:21 +08:00
陈少杰 dd9392c9fb refactor(dashboard): simplify components and fix efficiency issues
- Extract DecisionBadge and StatusIcon/StatusTag to shared components
  to eliminate duplication across BatchManager, AnalysisMonitor, PortfolioPanel
- Remove dead code: unused maxConcurrent state and formatTime function
- Add useMemo for columns (all pages) and derived stats (BatchManager, PortfolioPanel)
- Fix polling flash: BatchManager fetchTasks accepts showLoading param
- Fix RecommendationsTab: consolidate progress completion into connectWs handler,
  replace double-arrow cleanup with named cleanup function
- Extract DEFAULT_ACCOUNT constant to avoid magic strings
- Extract HEADER_LABEL_STYLE and HEADER_ICON_STYLE constants in ScreeningPanel
- Remove unused imports (CheckCircleOutlined, CloseCircleOutlined, etc.)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-07 20:27:49 +08:00
陈少杰 5c4d0a72fc feat(dashboard): dark terminal design system overhaul
Complete visual redesign replacing Apple glassmorphism with Bloomberg-style
dark trading terminal aesthetic:

- Dark palette: #0d0d0f base, cyan accent (#00d4ff), green/red/amber signals
- Font pair: DM Sans (UI) + JetBrains Mono (data/numbers)
- Solid sidebar (no backdrop-filter blur)
- Compact stat strip in BatchManager (replaces 4-card hero row)
- Color system: semantic buy/sell/hold/running with CSS variables
- All inline rgba(0,0,0,...) → dark theme tokens
- All var(--font-*) → font-ui / font-data
- Focus-visible outlines on all interactive elements
- prefers-reduced-motion support
- Emoji status indicators → CSS status-dot

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-07 20:05:16 +08:00
Shaojie d9db22b1af ci: add GitHub Actions workflow for dashboard tests (#5)
- Backend: pytest on web_dashboard/backend/tests/
- Frontend: npm ci + lint on push/PR to dashboard paths
- Triggers on main, feat/**, fix/** branches

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-07 19:12:39 +08:00
Shaojie 7d8f7b5ae0 fix: add security tests + fix Header import (#4)
* fix: add API key auth, pagination, and configurable CORS to dashboard API

Security hardening:
- API key authentication via X-API-Key header on all endpoints
  (opt-in: set DASHBOARD_API_KEY or ANTHROPIC_API_KEY env var to enable)
  If no key is set, endpoints remain open (backward-compatible)
- WebSocket auth via ?api_key= query parameter
- CORS now configurable via CORS_ORIGINS env var (default: allow all)

Pagination (all list endpoints):
- GET /api/reports/list — limit/offset with total count
- GET /api/portfolio/recommendations — limit/offset with total count
- DEFAULT_PAGE_SIZE=50, MAX_PAGE_SIZE=500
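The limit/offset contract can be sketched as follows (the clamping behavior is an assumption, not read from the endpoint code):

```python
DEFAULT_PAGE_SIZE = 50
MAX_PAGE_SIZE = 500

def paginate(items, limit=DEFAULT_PAGE_SIZE, offset=0):
    # Clamp limit into [1, MAX_PAGE_SIZE], clamp offset to >= 0, and
    # report the total count alongside the page.
    limit = max(1, min(int(limit), MAX_PAGE_SIZE))
    offset = max(0, int(offset))
    return {"total": len(items), "items": items[offset:offset + limit]}

page = paginate(list(range(120)), limit=50, offset=100)
print(page["total"], len(page["items"]))  # 120 20
```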

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* test: add tests for critical security fixes in dashboard API

- remove_position: empty position_id must be rejected (mass deletion fix)
- get_recommendation: path traversal blocked for ticker/date inputs
- get_recommendations: pagination limit/offset works correctly
- Named constants verified: semaphore, pagination, retry values
- API key auth: logic tested for both enabled/disabled states
- _auth_error helper exists for 401 responses

15 tests covering: mass deletion, path traversal (2 vectors),
pagination, auth logic, magic number constants

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-07 19:01:02 +08:00
Shaojie 1cee59dd9f fix: add API key auth, pagination, and configurable CORS to dashboard API (#3)
Security hardening:
- API key authentication via X-API-Key header on all endpoints
  (opt-in: set DASHBOARD_API_KEY or ANTHROPIC_API_KEY env var to enable)
  If no key is set, endpoints remain open (backward-compatible)
- WebSocket auth via ?api_key= query parameter
- CORS now configurable via CORS_ORIGINS env var (default: allow all)

Pagination (all list endpoints):
- GET /api/reports/list — limit/offset with total count
- GET /api/portfolio/recommendations — limit/offset with total count
- DEFAULT_PAGE_SIZE=50, MAX_PAGE_SIZE=500

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-07 18:57:51 +08:00
Shaojie f19c1c012e feat(dashboard): web dashboard phase 1 - screening, analysis, portfolio (#2)
* feat(dashboard): apply Apple design system to all 4 pages

- Font: replace SF Pro with DM Sans (web-available) throughout
- Typography: consistent DM Sans stack, monospace data display
- ScreeningPanel: add horizontal scroll for mobile, fix stat card hover
- AnalysisMonitor: Apple progress bar, stage pills, decision badge
- BatchManager: add copy-to-clipboard for task IDs, fix error tooltip truncation, add CTA to empty state
- ReportsViewer: Apple-styled modal, search bar consistency
- Keyboard: add Escape to close modals
- CSS: progress bar ease-out, sidebar collapse button icon-only mode

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(dashboard): secure API key handling and add stage progress streaming

- Pass ANTHROPIC_API_KEY via env dict instead of CLI args (P1 security fix)
- Add monitor_subprocess() coroutine with fcntl non-blocking reads
- Inject STAGE markers (analysts/research/trading/risk/portfolio) into script stdout
- Update task stage state and broadcast WebSocket progress at each stage boundary
- Add asyncio.Event for monitor cancellation on task completion/cancel

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(dashboard): persist task state to disk for restart recovery

- Add TASK_STATUS_DIR for task state JSON files
- Lifespan startup: restore task states from disk
- Task completion/failure: write state to disk
- Task cancellation: delete persisted state

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(dashboard): correct stage key mismatch, add created_at, persist cancelled tasks

- Fix ANALYSIS_STAGES key 'trader' → 'trading' to match backend STAGE markers
- Add created_at field to task state at creation, sort list_tasks by it
- Persist task state before broadcast in cancel path (closes restart race)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(dashboard): add portfolio panel - watchlist, positions, and recommendations

New backend:
- api/portfolio.py: watchlist CRUD, positions with live P&L, recommendations
- POST /api/portfolio/analyze: batch analysis of watchlist tickers
- GET /api/portfolio/positions: live price from yfinance + unrealized P&L

New frontend:
- PortfolioPanel.jsx with 3 tabs: 自选股 (watchlist) / 持仓 (positions) / 今日建议 (today's recommendations)
- portfolioApi.js service
- Route /portfolio (keyboard shortcut: 5)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(dashboard): add CSV and PDF report export

- GET /api/reports/export: CSV with ticker,date,decision,summary
- GET /api/reports/{ticker}/{date}/pdf: PDF via fpdf2 with DejaVu fonts
- ReportsViewer: CSV export button + PDF export in modal footer

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(dashboard): address 4 critical issues found in pre-landing review

1. main.py: move API key validation before task state creation —
   prevents phantom "running" tasks when ANTHROPIC_API_KEY is missing
2. portfolio.py: make get_positions() async and fetch yfinance prices
   concurrently via run_in_executor — no longer blocks event loop
3. portfolio.py: add fcntl.LOCK_EX around all JSON read-modify-write
   operations on watchlist.json and positions.json — eliminates TOCTOU
   lost-write races under concurrent requests
4. main.py: use tempfile.mkstemp with mode 0o600 instead of world-
   readable /tmp/analysis_{task_id}.py — script content no longer
   exposed to other users on shared hosts

Also: remove unused UploadFile/File imports, undefined _save_to_cache
function, dead code in _delete_task_status, and unused
get_or_create_default_account helper.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(dashboard): use secure temp file for batch analysis scripts

Batch portfolio analysis was writing scripts to /tmp with default
permissions (0o644), exposing the API key to other local users.
Switch to tempfile.mkstemp + chmod 0o600, matching the single-analysis
pattern. Also fix cancel_task cleanup to use glob patterns for
tempfile-generated paths.
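The mkstemp pattern described above, sketched standalone:

```python
import os
import stat
import tempfile

# mkstemp creates the file atomically with mode 0o600, so the generated
# script (and any API key inside it) is never world-readable.
fd, path = tempfile.mkstemp(prefix="analysis_", suffix=".py")
try:
    with os.fdopen(fd, "w") as handle:
        handle.write("print('hello')\n")
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o600
finally:
    os.remove(path)
```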

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(dashboard): remove fake fallback data from ReportsViewer

ReportsViewer showed fabricated Chinese text when a report failed to load,
making fake data appear indistinguishable from real analysis. Now shows
an error message instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(dashboard): reliability fixes - cross-platform PDF fonts, API timeouts, yfinance concurrency, retry logic

- PDF: try multiple DejaVu font paths (macOS + Linux) instead of hardcoded macOS
- Frontend: add 15s AbortController timeout to all API calls + proper error handling
- yfinance: cap concurrent price fetches at 5 via asyncio.Semaphore
- Batch analysis: retry failed stock analyses up to 2x with exponential backoff
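The retry policy, sketched with the constants named in a later commit (values here are illustrative):

```python
import time

MAX_RETRY_COUNT = 2           # retries after the first attempt
RETRY_BASE_DELAY_SECS = 0.01  # kept tiny for the sketch; real value is larger

def with_retries(func):
    # "Retry up to 2x with exponential backoff": delays grow as
    # base * 2**attempt between attempts; the final failure propagates.
    for attempt in range(MAX_RETRY_COUNT + 1):
        try:
            return func()
        except Exception:
            if attempt == MAX_RETRY_COUNT:
                raise
            time.sleep(RETRY_BASE_DELAY_SECS * (2 ** attempt))

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient")
    return "ok"

print(with_retries(flaky), len(attempts))  # ok 3
```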

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: resolve 4 critical security/correctness bugs in web dashboard

1. Mass position deletion (portfolio.py): remove_position now rejects
   empty position_id — previously position_id="" matched all positions
   and deleted every holding for a ticker across ALL accounts.

2. Path traversal in get_recommendation (portfolio.py): added ticker/date
   validation (no ".." or path separators) + resolved-path check against
   RECOMMENDATIONS_DIR to prevent ../../etc/passwd attacks.

3. Path traversal in get_report_content (main.py): same ticker/date
   validation + resolved-path check against get_results_dir().

4. china_data import stub (interface.py + new china_data.py): the actual
   akshare implementation lives in web_dashboard/backend/china_data.py
   (different package); tradingagents/dataflows/china_data.py was missing
   entirely, so _china_data_available was always False. Added stub file
   and AttributeError to the import exception handler so the module
   gracefully degrades instead of silently hiding the missing vendor.
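Items 2-3 describe a two-layer defense that can be sketched as follows (directory and helper names are stand-ins, not the actual module's identifiers):

```python
from pathlib import Path

# Layer 1: reject suspicious path components outright.
# Layer 2: confirm the resolved path stays inside the allowed root.
RECOMMENDATIONS_DIR = Path("/srv/recommendations")

def safe_report_path(ticker: str, date: str) -> Path:
    for part in (ticker, date):
        if ".." in part or "/" in part or "\\" in part:
            raise ValueError(f"invalid path component: {part!r}")
    candidate = (RECOMMENDATIONS_DIR / ticker / f"{date}.json").resolve()
    if not candidate.is_relative_to(RECOMMENDATIONS_DIR.resolve()):
        raise ValueError("path escapes recommendations directory")
    return candidate

print(safe_report_path("AAPL", "2026-04-07").name)  # 2026-04-07.json
```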

Magic numbers also extracted to named constants:
- MAX_RETRY_COUNT, RETRY_BASE_DELAY_SECS (main.py)
- MAX_CONCURRENT_YFINANCE_REQUESTS (portfolio.py)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-07 18:52:56 +08:00
Shaojie 09ec174049 feat(web-dashboard): connect frontend to real backend API (Phase 1) (#1)
* fix(qa): ISSUE-001 — misleading empty state message in ScreeningPanel

When API returns 0 results, show '未找到符合条件的股票' ("no stocks matched the criteria") instead of
'请先选择筛选模式并刷新' ("select a screening mode and refresh first"), which implied no filtering had been done.

Issue found by /qa on main branch

* feat(web-dashboard): connect frontend to real backend API

Phase 1: Stabilize dashboard by connecting mock data to real backend.

Backend:
- Add GET /api/analysis/tasks endpoint for BatchManager
- Fix subprocess cancellation (poll() → returncode)
- Use sys.executable instead of hardcoded env312 path
- Move API key validation before storing task state (no phantom tasks)

Frontend:
- ScreeningPanel: handleStartAnalysis calls POST /api/analysis/start
- AnalysisMonitor: real WebSocket connection via useSearchParams + useRef
- BatchManager: polls GET /api/analysis/tasks, fixed retry button
- All mock data removed

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-06 17:47:46 +08:00
15 changed files with 558 additions and 102 deletions

View File

@ -122,3 +122,23 @@ QUANT_BACKTEST_PATH=/path/to/quant_backtest python orchestrator/examples/run_liv
- `orchestrator/tests/` - Orchestrator unit tests
- `tests/` - TradingAgents core tests
- Run with pytest: `python -m pytest orchestrator/tests/`
## Skill routing
When the user's request matches an available skill, ALWAYS invoke it using the Skill
tool as your FIRST action. Do NOT answer directly, do NOT use other tools first.
The skill has specialized workflows that produce better results than ad-hoc answers.
Key routing rules:
- Product ideas, "is this worth building", brainstorming → invoke office-hours
- Bugs, errors, "why is this broken", 500 errors → invoke investigate
- Ship, deploy, push, create PR → invoke ship
- QA, test the site, find bugs → invoke qa
- Code review, check my diff → invoke review
- Update docs after shipping → invoke document-release
- Weekly retro → invoke retro
- Design system, brand → invoke design-consultation
- Visual audit, design polish → invoke design-review
- Architecture review → invoke plan-eng-review
- Save progress, checkpoint, resume → invoke checkpoint
- Code quality, health check → invoke health

View File

@ -82,6 +82,6 @@ python run_ningde.py # 宁德时代
## API配置
- API Key: `sk-cp-[REDACTED]`
- API Key: 从本地环境变量读取(不要提交到仓库)
- Base URL: `https://api.minimaxi.com/anthropic`
- Model: `MiniMax-M2.7-highspeed`
- Model: `MiniMax-M2.7-highspeed`

View File

@ -4,6 +4,16 @@ Status: implemented (2026-04-16)
Audience: orchestrator users, backend maintainers
Scope: LLMRunner configuration validation and error classification
## Change Log
**2026-04-16**: Refactored provider validation to centralize patterns in `factory.py`
- Moved `_PROVIDER_BASE_URL_PATTERNS` from `llm_runner.py` to `ProviderSpec.base_url_patterns` in `factory.py`
- Added `validate_provider_base_url()` function with pattern caching for performance
- Added `ProviderMismatch` TypedDict for type-safe validation results
- Split ollama and openrouter into separate `ProviderSpec` entries (previously shared openai's spec)
- Reduced `llm_runner.py` from 45 lines to 13 lines for validation logic
- All 21 tests pass, including 6 provider mismatch tests
## Overview
`orchestrator/llm_runner.py` implements three layers of configuration validation to catch errors before expensive graph initialization or API calls:
@ -243,10 +253,20 @@ python -m pytest orchestrator/tests/test_llm_runner.py -v
When adding a new provider to `tradingagents/llm_clients/factory.py`:
1. Add URL pattern to `_PROVIDER_BASE_URL_PATTERNS` in `llm_runner.py`
2. Add test cases for valid and invalid configurations
1. Add a new `ProviderSpec` entry to `_PROVIDER_SPECS` tuple with `base_url_patterns`
2. Add test cases for valid and invalid configurations in `orchestrator/tests/test_llm_runner.py`
3. Update this documentation
**Example:**
```python
ProviderSpec(
canonical_name="newprovider",
aliases=("newprovider",),
builder=lambda model, base_url=None, **kwargs: NewProviderClient(model, base_url, **kwargs),
base_url_patterns=(r"api\.newprovider\.com",),
)
```
### Adjusting Timeout Recommendations
If profiling shows different timeout requirements:
@ -277,11 +297,25 @@ Current implementation does **not** validate API key validity before graph initi
### Provider Pattern Maintenance
URL patterns must be manually kept in sync with provider changes:
~~URL patterns must be manually kept in sync with provider changes:~~
**UPDATE (2026-04-16)**: Provider URL patterns have been moved to `tradingagents/llm_clients/factory.py` as part of `ProviderSpec`. This centralizes validation logic with provider definitions.
**Current implementation:**
- Each `ProviderSpec` includes optional `base_url_patterns` tuple
- `validate_provider_base_url()` function provides validation logic
- `LLMRunner._detect_provider_mismatch()` delegates to factory validation
- Patterns are co-located with provider builders, reducing maintenance burden
**Benefits:**
- Single source of truth for provider configuration
- Easier to keep patterns in sync when adding/updating providers
- Factory can be tested independently of orchestrator
- Reduced code duplication
**Remaining considerations:**
- **Risk**: Provider changes base URL structure (e.g., API versioning)
- **Mitigation**: Validation is non-blocking; mismatches are logged but don't prevent operation
- ~~**Future**: Consider moving patterns to `tradingagents/llm_clients/factory.py` as part of `ProviderSpec`~~ (done in the 2026-04-16 refactor described above)
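For illustration, the pattern-matrix check behind this validation can be sketched self-contained (names and the exact return shape here are assumptions; the real implementation is `validate_provider_base_url()` in `factory.py`, returning a `ProviderMismatch` TypedDict, and the pattern table below is abbreviated):

```python
import re

PATTERNS = {
    "anthropic": (r"api\.anthropic\.com", r"api\.minimaxi\.com/anthropic"),
    "openai": (r"api\.openai\.com",),
}

def validate_base_url(provider: str, base_url: str):
    patterns = PATTERNS.get(provider.lower(), ())
    if not patterns:
        return None  # no rules for this provider: validation is non-blocking
    if any(re.search(p, base_url.lower()) for p in patterns):
        return None  # URL matches an expected pattern
    return {
        "provider": provider,
        "backend_url": base_url,
        "expected_patterns": list(patterns),
    }

print(validate_base_url("anthropic", "https://api.minimaxi.com/anthropic"))  # None
print(validate_base_url("openai", "https://api.anthropic.com")["provider"])  # openai
```

Mismatches are returned, not raised, so callers can log them without blocking operation.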
### Timeout Recommendations

View File

@ -1,33 +1,16 @@
import json
import logging
import os
import re
from datetime import datetime, timezone
from orchestrator.config import OrchestratorConfig
from orchestrator.contracts.error_taxonomy import ReasonCode
from orchestrator.contracts.result_contract import Signal, build_error_signal
from tradingagents.agents.utils.agent_states import extract_research_provenance
from tradingagents.llm_clients.factory import validate_provider_base_url
logger = logging.getLogger(__name__)
# Provider × base_url validation matrix
# Note: ollama/openrouter share openai's canonical provider but have different URL patterns
_PROVIDER_BASE_URL_PATTERNS = {
"anthropic": [r"api\.anthropic\.com", r"api\.minimaxi\.com/anthropic"],
"openai": [r"api\.openai\.com"],
"google": [r"generativelanguage\.googleapis\.com"],
"xai": [r"api\.x\.ai"],
"ollama": [r"localhost:\d+", r"127\.0\.0\.1:\d+", r"ollama"],
"openrouter": [r"openrouter\.ai"],
}
# Precompile regex patterns for efficiency
_COMPILED_PATTERNS = {
provider: [re.compile(pattern) for pattern in patterns]
for provider, patterns in _PROVIDER_BASE_URL_PATTERNS.items()
}
# Recommended timeout thresholds by analyst count
_RECOMMENDED_TIMEOUTS = {
1: {"analyst": 75.0, "research": 30.0},
@ -110,35 +93,19 @@ class LLMRunner:
return self._graph
def _detect_provider_mismatch(self):
"""Validate provider × base_url compatibility using pattern matrix.
"""Validate provider × base_url compatibility using factory's validation.
Uses the original provider name (not canonical) for validation since
ollama/openrouter share openai's canonical provider but have different URLs.
ollama/openrouter have different URL patterns than openai.
"""
trading_cfg = self._config.trading_agents_config or {}
provider = str(trading_cfg.get("llm_provider", "")).lower()
base_url = str(trading_cfg.get("backend_url", "") or "").lower()
provider = trading_cfg.get("llm_provider", "")
base_url = trading_cfg.get("backend_url", "")
if not provider or not base_url:
return None
# Use original provider name for pattern matching (not canonical)
# This handles ollama/openrouter which share openai's canonical provider
compiled_patterns = _COMPILED_PATTERNS.get(provider, [])
if not compiled_patterns:
# No validation rules defined for this provider
return None
for pattern in compiled_patterns:
if pattern.search(base_url):
return None # Match found, no mismatch
# No pattern matched - return raw patterns for error message
return {
"provider": provider,
"backend_url": trading_cfg.get("backend_url"),
"expected_patterns": _PROVIDER_BASE_URL_PATTERNS[provider],
}
return validate_provider_base_url(provider, base_url)
def get_signal(self, ticker: str, date: str) -> Signal:
"""获取指定股票在指定日期的 LLM 信号,带缓存。"""

View File

@ -1,10 +1,12 @@
from __future__ import annotations
import _thread
import argparse
import json
import signal
import threading
import time
from collections import defaultdict
from contextlib import contextmanager
from datetime import datetime, timezone
from pathlib import Path
@ -58,6 +60,27 @@ class _ProfileTimeout(Exception):
pass
@contextmanager
def _overall_timeout_guard(seconds: int):
    timed_out = threading.Event()
    timer: threading.Timer | None = None

    def interrupt_main() -> None:
        timed_out.set()
        _thread.interrupt_main()

    if seconds > 0:
        timer = threading.Timer(seconds, interrupt_main)
        timer.daemon = True
        timer.start()
    try:
        yield timed_out
    finally:
        if timer is not None:
            timer.cancel()
def _jsonable(value):
if isinstance(value, (str, int, float, bool)) or value is None:
return value
@ -121,6 +144,8 @@ def build_trace_payload(
if exception_type is not None:
payload["exception_type"] = exception_type
return payload
def main() -> None:
args = build_parser().parse_args()
selected_analysts = [item.strip() for item in args.selected_analysts.split(",") if item.strip()]
@ -151,40 +176,40 @@ def main() -> None:
dump_dir.mkdir(parents=True, exist_ok=True)
dump_path = dump_dir / f"{args.ticker.replace('/', '_')}_{args.date}_{run_id}.json"
def alarm_handler(signum, frame):
raise _ProfileTimeout(f"profiling timeout after {args.overall_timeout}s")
signal.signal(signal.SIGALRM, alarm_handler)
signal.alarm(args.overall_timeout)
try:
for event in graph.graph.stream(state, stream_mode="updates", config=config_kwargs):
now = time.monotonic()
nodes = list(event.keys())
phases = sorted({_PHASE_MAP.get(node, "unknown") for node in nodes})
llm_kinds = sorted({_LLM_KIND_MAP.get(node, "unknown") for node in nodes})
delta = round(now - last_at, 3)
research_status, degraded_reason, history_len, response_len = _extract_research_state(event)
entry = {
"run_id": run_id,
"nodes": nodes,
"phases": phases,
"llm_kinds": llm_kinds,
"start_at": round(last_at - started_at, 3),
"end_at": round(now - started_at, 3),
"elapsed_ms": int(delta * 1000),
"selected_analysts": selected_analysts,
"analysis_prompt_style": args.analysis_prompt_style,
"research_status": research_status,
"degraded_reason": degraded_reason,
"history_len": history_len,
"response_len": response_len,
}
node_timings.append(entry)
raw_events.append(_jsonable(event))
for phase in phases:
phase_totals[phase] += delta
last_at = now
with _overall_timeout_guard(args.overall_timeout) as timed_out:
try:
for event in graph.graph.stream(state, stream_mode="updates", config=config_kwargs):
now = time.monotonic()
nodes = list(event.keys())
phases = sorted({_PHASE_MAP.get(node, "unknown") for node in nodes})
llm_kinds = sorted({_LLM_KIND_MAP.get(node, "unknown") for node in nodes})
delta = round(now - last_at, 3)
research_status, degraded_reason, history_len, response_len = _extract_research_state(event)
entry = {
"run_id": run_id,
"nodes": nodes,
"phases": phases,
"llm_kinds": llm_kinds,
"start_at": round(last_at - started_at, 3),
"end_at": round(now - started_at, 3),
"elapsed_ms": int(delta * 1000),
"selected_analysts": selected_analysts,
"analysis_prompt_style": args.analysis_prompt_style,
"research_status": research_status,
"degraded_reason": degraded_reason,
"history_len": history_len,
"response_len": response_len,
}
node_timings.append(entry)
raw_events.append(_jsonable(event))
for phase in phases:
phase_totals[phase] += delta
last_at = now
except KeyboardInterrupt:
if timed_out.is_set():
raise _ProfileTimeout(f"profiling timeout after {args.overall_timeout}s") from None
raise
payload = {
"status": "ok",
@ -212,8 +237,6 @@ def main() -> None:
"dump_path": str(dump_path),
"raw_events": raw_events,
}
finally:
signal.alarm(0)
dump_path.write_text(json.dumps(payload, ensure_ascii=False, indent=2))
print(json.dumps(payload, ensure_ascii=False, indent=2))
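The timer-based guard introduced in this diff can be exercised standalone; a minimal stdlib-only sketch of the same pattern, which replaces Unix-only `signal.alarm` with a daemon `threading.Timer` that delivers `KeyboardInterrupt` via `_thread.interrupt_main()`:

```python
import _thread
import threading
import time
from contextlib import contextmanager

@contextmanager
def overall_timeout_guard(seconds: float):
    timed_out = threading.Event()

    def interrupt_main() -> None:
        timed_out.set()            # mark the interrupt as timer-driven
        _thread.interrupt_main()   # raise KeyboardInterrupt in the main thread

    timer = threading.Timer(seconds, interrupt_main)
    timer.daemon = True
    timer.start()
    try:
        yield timed_out
    finally:
        timer.cancel()             # no-op if the timer already fired

result = "unknown"
with overall_timeout_guard(0.05) as timed_out:
    try:
        for _ in range(100):       # stand-in for a long-running stream loop
            time.sleep(0.05)
        result = "finished"
    except KeyboardInterrupt:
        # Distinguish a timer-driven interrupt from a genuine Ctrl-C.
        result = "timeout" if timed_out.is_set() else "interrupted"
print(result)  # timeout
```

The event lets the caller tell a timeout apart from a real interrupt, which is exactly what the `except KeyboardInterrupt` branch in `main()` checks.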

View File

@ -12,6 +12,7 @@ from orchestrator.config import OrchestratorConfig
from orchestrator.contracts.error_taxonomy import ReasonCode
from orchestrator.contracts.result_contract import Signal, build_error_signal
from orchestrator.market_calendar import is_non_trading_day
from tradingagents.dataflows.stockstats_utils import yf_retry
logger = logging.getLogger(__name__)
@ -48,7 +49,15 @@ class QuantRunner:
start_str = start_dt.strftime("%Y-%m-%d")
end_exclusive = (end_dt + timedelta(days=1)).strftime("%Y-%m-%d")
df = yf.download(ticker, start=start_str, end=end_exclusive, progress=False, auto_adjust=True)
df = yf_retry(
lambda: yf.download(
ticker,
start=start_str,
end=end_exclusive,
progress=False,
auto_adjust=True,
)
)
if df.empty:
logger.warning("No price data for %s between %s and %s", ticker, start_str, date)
if is_non_trading_day(ticker, end_dt.date()):

View File

@ -1,4 +1,5 @@
import json
from contextlib import contextmanager
from datetime import datetime as real_datetime, timezone
from pathlib import Path
@ -95,9 +96,13 @@ def test_main_writes_trace_payload_with_research_provenance(monkeypatch, tmp_pat
monkeypatch.setattr(profile_stage_chain, "TradingAgentsGraph", _FakeTradingAgentsGraph)
monkeypatch.setattr(profile_stage_chain, "Propagator", _FakePropagator)
monkeypatch.setattr(profile_stage_chain.time, "monotonic", lambda: next(monotonic_points))
monkeypatch.setattr(profile_stage_chain.signal, "signal", lambda *args, **kwargs: None)
monkeypatch.setattr(profile_stage_chain.signal, "alarm", lambda *args, **kwargs: None)
monkeypatch.setattr(profile_stage_chain, "datetime", _FixedDateTime)
@contextmanager
def fake_guard(_seconds):
yield profile_stage_chain.threading.Event()
monkeypatch.setattr(profile_stage_chain, "_overall_timeout_guard", fake_guard)
monkeypatch.setattr(
"sys.argv",
[
@ -161,3 +166,51 @@ def test_main_writes_trace_payload_with_research_provenance(monkeypatch, tmp_pat
dump_path = Path(output["dump_path"])
assert dump_path.exists()
assert json.loads(dump_path.read_text()) == output
class _KeyboardInterruptGraph:
    def __init__(self, *, selected_analysts, config):
        self.graph = self

    def stream(self, state, stream_mode, config):
        raise KeyboardInterrupt
        yield  # unreachable, but makes stream a generator so the raise occurs on first iteration
def test_main_reports_cross_platform_timeout(monkeypatch, tmp_path, capsys):
    monkeypatch.setattr(profile_stage_chain, "TradingAgentsGraph", _KeyboardInterruptGraph)
    monkeypatch.setattr(profile_stage_chain, "Propagator", _FakePropagator)
    monkeypatch.setattr(profile_stage_chain, "datetime", _FixedDateTime)

    @contextmanager
    def timed_out_guard(seconds):
        event = profile_stage_chain.threading.Event()
        event.set()
        yield event

    monkeypatch.setattr(profile_stage_chain, "_overall_timeout_guard", timed_out_guard)
    monkeypatch.setattr(
        "sys.argv",
        [
            "profile_stage_chain.py",
            "--ticker",
            "AAPL",
            "--date",
            "2026-04-11",
            "--selected-analysts",
            "market,social",
            "--analysis-prompt-style",
            "balanced",
            "--overall-timeout",
            "1",
            "--dump-dir",
            str(tmp_path),
        ],
    )

    profile_stage_chain.main()

    output = json.loads(capsys.readouterr().out)
    assert output["status"] == "error"
    assert output["exception_type"] == "_ProfileTimeout"
    assert output["error"] == "profiling timeout after 1s"

View File

@ -183,3 +183,19 @@ def test_get_signal_marks_partial_data_when_required_columns_missing(runner, mon
assert signal.degraded is True
assert signal.reason_code == ReasonCode.PARTIAL_DATA.value
assert signal.metadata["data_quality"]["state"] == "partial_data"
def test_get_signal_uses_yf_retry_wrapper(runner, monkeypatch):
    calls = []

    def fake_retry(func, max_retries=3, base_delay=2.0):
        calls.append((max_retries, base_delay))
        return pd.DataFrame()

    monkeypatch.setattr("orchestrator.quant_runner.yf_retry", fake_retry)
    monkeypatch.setattr("orchestrator.quant_runner.is_non_trading_day", lambda *_args, **_kwargs: False)

    signal = runner.get_signal("AAPL", "2024-01-02")

    assert calls == [(3, 2.0)]
    assert signal.reason_code == ReasonCode.QUANT_NO_DATA.value

View File

@ -0,0 +1,69 @@
from __future__ import annotations

import re
from typing import Any, Iterable

CANONICAL_RATINGS = ("BUY", "OVERWEIGHT", "HOLD", "UNDERWEIGHT", "SELL")

_RATING_PATTERN = re.compile(
    r"\b(BUY|OVERWEIGHT|HOLD|UNDERWEIGHT|SELL)\b",
    re.IGNORECASE,
)


def extract_rating(text: str) -> str | None:
    match = _RATING_PATTERN.search(str(text or ""))
    if not match:
        return None
    return match.group(1).upper()


def _normalize_report_text(rating: str, rating_source: str, report_text: str) -> str:
    body = str(report_text or "").strip() or "No narrative provided."
    return (
        "## Normalized Portfolio Decision\n"
        f"- Rating: {rating}\n"
        f"- Rating Source: {rating_source}\n\n"
        f"{body}"
    )


def build_structured_decision(
    text: str,
    *,
    fallback_candidates: Iterable[tuple[str, str]] = (),
    default_rating: str = "HOLD",
    peer_context_mode: str = "UNSPECIFIED",
    context_usage: dict[str, Any] | None = None,
) -> dict[str, Any]:
    warnings: list[str] = []
    rating_source = "direct"
    rating = extract_rating(text)
    source_text = str(text or "")
    if rating is None:
        for candidate_name, candidate_text in fallback_candidates:
            rating = extract_rating(candidate_text)
            if rating is not None:
                rating_source = candidate_name
                source_text = str(candidate_text or "")
                warnings.append(f"rating_inferred_from:{candidate_name}")
                break
    if rating is None:
        rating = str(default_rating or "HOLD").upper()
        rating_source = "default"
        warnings.append("rating_defaulted")
    usage = context_usage or {}
    hold_subtype = "UNSPECIFIED" if rating == "HOLD" else "N/A"
    return {
        "rating": rating,
        "hold_subtype": hold_subtype,
        "rating_source": rating_source,
        "report_text": _normalize_report_text(rating, rating_source, source_text),
        "warnings": warnings,
        "portfolio_context_used": bool(usage.get("portfolio_context")),
        "peer_context_used": bool(usage.get("peer_context")),
        "peer_context_mode": str(peer_context_mode or "UNSPECIFIED"),
    }
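The `\b` anchors in the rating regex above are load-bearing: embedded tokens such as "holding" must not count as a HOLD rating. A quick self-contained check (helper name is illustrative):

```python
import re

# Same pattern as _RATING_PATTERN above; \b requires whole-word rating tokens.
RATING = re.compile(r"\b(BUY|OVERWEIGHT|HOLD|UNDERWEIGHT|SELL)\b", re.IGNORECASE)

def first_rating(text: str):
    match = RATING.search(text)
    return match.group(1).upper() if match else None

print(first_rating("Final call: overweight vs peers"))  # OVERWEIGHT
print(first_rating("We are holding off for now"))       # None
```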

View File

@ -0,0 +1,99 @@
from __future__ import annotations
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError
from typing import Any
def _invoke_dimension(llm, dimension: str, prompt: str) -> dict[str, Any]:
started_at = time.monotonic()
try:
response = llm.invoke(prompt)
content = response.content if hasattr(response, "content") else str(response)
return {
"dimension": dimension,
"content": str(content).strip(),
"ok": True,
"error": None,
"elapsed_s": round(time.monotonic() - started_at, 3),
}
except Exception as exc:
return {
"dimension": dimension,
"content": "",
"ok": False,
"error": str(exc),
"elapsed_s": round(time.monotonic() - started_at, 3),
}
def run_parallel_subagents(
*,
llm,
dimension_configs: list[dict[str, Any]],
timeout_per_subagent: float = 25.0,
max_workers: int = 4,
) -> list[dict[str, Any]]:
if not dimension_configs:
return []
executor = ThreadPoolExecutor(max_workers=max_workers)
futures = {
executor.submit(
_invoke_dimension,
llm,
config["dimension"],
config["prompt"],
): config["dimension"]
for config in dimension_configs
}
results: list[dict[str, Any]] = []
    try:
        # Results are collected in submission order; each future gets its own
        # timeout budget at result() time, so worst-case wall time is
        # len(futures) * timeout_per_subagent even though invocations overlap.
        for future, dimension in futures.items():
            try:
                results.append(future.result(timeout=timeout_per_subagent))
except TimeoutError:
results.append(
{
"dimension": dimension,
"content": "",
"ok": False,
"error": "timeout",
"elapsed_s": round(timeout_per_subagent, 3),
}
)
finally:
executor.shutdown(wait=False, cancel_futures=True)
return results
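One caveat in the collection loop above: each `future.result(timeout=timeout_per_subagent)` call starts a fresh budget, so the worst-case wall time is the number of subagents times the timeout. A minimal sketch of a shared-deadline alternative (illustrative only, not the module's code):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def collect_with_deadline(tasks, overall_timeout):
    """Wait on already-submitted futures against one shared deadline."""
    deadline = time.monotonic() + overall_timeout
    results = {}
    for name, future in tasks.items():
        # Remaining budget shrinks as earlier futures consume wall time.
        remaining = max(0.0, deadline - time.monotonic())
        try:
            results[name] = future.result(timeout=remaining)
        except TimeoutError:
            results[name] = None
    return results

with ThreadPoolExecutor(max_workers=2) as pool:
    tasks = {
        "fast": pool.submit(lambda: "ok"),
        "slow": pool.submit(lambda: time.sleep(0.6) or "late"),
    }
    out = collect_with_deadline(tasks, overall_timeout=0.2)

print(out)  # {'fast': 'ok', 'slow': None}
```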
def synthesize_subagent_results(
subagent_results: list[dict[str, Any]],
*,
max_chars_per_result: int = 200,
) -> tuple[str, dict[str, Any]]:
lines: list[str] = []
timings: dict[str, float] = {}
failures: dict[str, str] = {}
for result in subagent_results:
dimension = str(result.get("dimension") or "unknown")
timings[dimension] = float(result.get("elapsed_s") or 0.0)
content = str(result.get("content") or "").strip()
if not result.get("ok"):
failure_reason = str(result.get("error") or "unknown error")
failures[dimension] = failure_reason
content = f"[UNAVAILABLE: {failure_reason}]"
if len(content) > max_chars_per_result:
content = f"{content[:max_chars_per_result - 3]}..."
lines.append(f"[{dimension.upper()}]\n{content or '[NO OUTPUT]'}")
return "\n\n".join(lines), {
"subagent_timings": timings,
"failures": failures,
}
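The synthesis step can likewise be exercised standalone; this sketch mirrors `synthesize_subagent_results` from the hunk above and feeds it one successful and one failed subagent result:

```python
# Mirror of synthesize_subagent_results from the hunk above.
def synthesize_subagent_results(subagent_results, *, max_chars_per_result=200):
    lines = []
    timings = {}
    failures = {}
    for result in subagent_results:
        dimension = str(result.get("dimension") or "unknown")
        timings[dimension] = float(result.get("elapsed_s") or 0.0)
        content = str(result.get("content") or "").strip()
        if not result.get("ok"):
            failure_reason = str(result.get("error") or "unknown error")
            failures[dimension] = failure_reason
            content = f"[UNAVAILABLE: {failure_reason}]"
        if len(content) > max_chars_per_result:
            content = f"{content[:max_chars_per_result - 3]}..."
        lines.append(f"[{dimension.upper()}]\n{content or '[NO OUTPUT]'}")
    return "\n\n".join(lines), {
        "subagent_timings": timings,
        "failures": failures,
    }

report, meta = synthesize_subagent_results([
    {"dimension": "valuation", "content": "Fairly priced.", "ok": True, "elapsed_s": 1.2},
    {"dimension": "momentum", "content": "", "ok": False, "error": "timeout", "elapsed_s": 25.0},
])
print(meta["failures"])  # {'momentum': 'timeout'}
```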

View File

@@ -1,5 +1,6 @@
import time
import logging
import threading
import pandas as pd
import yfinance as yf
@@ -11,6 +12,16 @@ import os
from .config import get_config
logger = logging.getLogger(__name__)
_fallback_session_local = threading.local()
def _get_fallback_session() -> requests.Session:
session = getattr(_fallback_session_local, "session", None)
if session is None:
session = requests.Session()
session.trust_env = False
_fallback_session_local.session = session
return session
def _symbol_to_tencent_code(symbol: str) -> str:
@@ -24,8 +35,7 @@ def _symbol_to_tencent_code(symbol: str) -> str:
def _fetch_tencent_ohlcv(symbol: str, start_date: str, end_date: str) -> pd.DataFrame:
"""Fallback daily OHLCV fetch for A-shares via Tencent."""
    session = _get_fallback_session()
response = session.get(
"https://web.ifzq.gtimg.cn/appstock/app/fqkline/get",
params={
@@ -72,8 +82,7 @@ def _symbol_to_eastmoney_secid(symbol: str) -> str:
def _fetch_eastmoney_ohlcv(symbol: str, start_date: str, end_date: str) -> pd.DataFrame:
"""Fallback daily OHLCV fetch for A-shares via Eastmoney."""
    session = _get_fallback_session()
url = "https://push2his.eastmoney.com/api/qt/stock/kline/get"
response = session.get(
url,

View File

@@ -1,5 +1,6 @@
from dataclasses import dataclass
from typing import Callable, Optional, TypedDict
import re
from .base_client import BaseLLMClient
from .openai_client import OpenAIClient
@@ -12,26 +13,67 @@ _OPENAI_COMPATIBLE = (
"openai", "xai", "deepseek", "qwen", "glm", "ollama", "openrouter",
)
# Compiled pattern cache for validation performance
_COMPILED_PATTERNS: dict[str, list[re.Pattern]] = {}
class ProviderMismatch(TypedDict):
"""Provider validation mismatch details."""
provider: str
backend_url: str
expected_patterns: tuple[str, ...]
@dataclass(frozen=True)
class ProviderSpec:
    """Provider registry entry for LLM client creation.
Attributes:
canonical_name: Primary provider identifier
aliases: Alternative names that resolve to this provider
builder: Factory function to create the client instance
base_url_patterns: Regex patterns for valid base URLs (None = no validation)
"""
canonical_name: str
aliases: tuple[str, ...]
builder: Callable[..., BaseLLMClient]
base_url_patterns: Optional[tuple[str, ...]] = None
_PROVIDER_SPECS: tuple[ProviderSpec, ...] = (
ProviderSpec(
canonical_name="openai",
        aliases=("openai",),
builder=lambda model, base_url=None, **kwargs: OpenAIClient(
model,
base_url,
            provider="openai",
**kwargs,
),
base_url_patterns=(r"api\.openai\.com",),
),
ProviderSpec(
canonical_name="ollama",
aliases=("ollama",),
builder=lambda model, base_url=None, **kwargs: OpenAIClient(
model,
base_url,
provider="ollama",
**kwargs,
),
base_url_patterns=(r"localhost:\d+", r"127\.0\.0\.1:\d+", r"ollama"),
),
ProviderSpec(
canonical_name="openrouter",
aliases=("openrouter",),
builder=lambda model, base_url=None, **kwargs: OpenAIClient(
model,
base_url,
provider="openrouter",
**kwargs,
),
base_url_patterns=(r"openrouter\.ai",),
),
ProviderSpec(
canonical_name="xai",
@@ -42,16 +84,19 @@ _PROVIDER_SPECS: tuple[ProviderSpec, ...] = (
provider="xai",
**kwargs,
),
base_url_patterns=(r"api\.x\.ai",),
),
ProviderSpec(
canonical_name="anthropic",
aliases=("anthropic",),
builder=lambda model, base_url=None, **kwargs: AnthropicClient(model, base_url, **kwargs),
base_url_patterns=(r"api\.anthropic\.com", r"api\.minimaxi\.com/anthropic"),
),
ProviderSpec(
canonical_name="google",
aliases=("google",),
builder=lambda model, base_url=None, **kwargs: GoogleClient(model, base_url, **kwargs),
base_url_patterns=(r"generativelanguage\.googleapis\.com",),
),
)
@@ -92,7 +137,47 @@ def create_llm_client(
"""
provider_lower = provider.lower()
provider_spec = get_provider_spec(provider_lower)
    return provider_spec.builder(model, base_url, **kwargs)
def validate_provider_base_url(provider: str, base_url: str) -> Optional[ProviderMismatch]:
"""Validate provider × base_url compatibility.
Args:
provider: LLM provider name (original, not canonical)
base_url: API endpoint URL
Returns:
None if valid, or ProviderMismatch dict if invalid
"""
if not provider or not base_url:
return None
provider_lower = provider.lower()
base_url_lower = base_url.lower()
try:
spec = get_provider_spec(provider_lower)
except ValueError:
# Unknown provider - no validation rules
return None
if spec.base_url_patterns is None:
# No validation rules defined for this provider
return None
# Use cached compiled patterns for performance
cache_key = spec.canonical_name
if cache_key not in _COMPILED_PATTERNS:
_COMPILED_PATTERNS[cache_key] = [re.compile(p) for p in spec.base_url_patterns]
for pattern in _COMPILED_PATTERNS[cache_key]:
if pattern.search(base_url_lower):
return None # Match found
# No pattern matched - return mismatch details
return {
"provider": provider_lower,
"backend_url": base_url,
"expected_patterns": spec.base_url_patterns,
}
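The caching idea behind `validate_provider_base_url` can be reduced to a standalone sketch; the provider name and patterns below are illustrative, not the module's full registry:

```python
import re

# Compile each provider's patterns once and reuse them on later calls,
# mirroring the _COMPILED_PATTERNS cache above.
_COMPILED: dict[str, list[re.Pattern]] = {}

def url_matches(provider: str, patterns: tuple[str, ...], base_url: str) -> bool:
    if provider not in _COMPILED:
        _COMPILED[provider] = [re.compile(p) for p in patterns]
    return any(p.search(base_url.lower()) for p in _COMPILED[provider])

print(url_matches("openai", (r"api\.openai\.com",), "https://api.openai.com/v1"))  # True
print(url_matches("openai", (r"api\.openai\.com",), "https://api.x.ai/v1"))        # False
```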

View File

@@ -0,0 +1,22 @@
import threading
from tradingagents.dataflows import stockstats_utils
def test_get_fallback_session_reuses_session_in_same_thread(monkeypatch):
created = []
class FakeSession:
def __init__(self):
self.trust_env = True
created.append(self)
monkeypatch.setattr(stockstats_utils, "_fallback_session_local", threading.local())
monkeypatch.setattr(stockstats_utils.requests, "Session", FakeSession)
first = stockstats_utils._get_fallback_session()
second = stockstats_utils._get_fallback_session()
assert first is second
assert len(created) == 1
assert first.trust_env is False

View File

@@ -2,8 +2,8 @@
Portfolio API 自选股持仓每日建议
"""
import asyncio
import json
import os
import uuid
from datetime import datetime
from pathlib import Path
@@ -11,6 +12,16 @@ from typing import Optional
import yfinance
try:
import fcntl
except ImportError: # pragma: no cover - exercised on Windows
import msvcrt
    class _FcntlCompat:
        """Minimal flock shim over msvcrt; shared locks degrade to exclusive."""

        LOCK_SH = 1
        LOCK_EX = 2
        LOCK_UN = 8

        @staticmethod
        def flock(fd: int, operation: int) -> None:
os.lseek(fd, 0, os.SEEK_SET)
if operation == _FcntlCompat.LOCK_UN:
try:
msvcrt.locking(fd, msvcrt.LK_UNLCK, 1)
except OSError:
return
return
if os.fstat(fd).st_size == 0:
os.write(fd, b"\0")
os.lseek(fd, 0, os.SEEK_SET)
msvcrt.locking(fd, msvcrt.LK_LOCK, 1)
fcntl = _FcntlCompat()
# Data directory
DATA_DIR = Path(__file__).parent.parent.parent / "data"
DATA_DIR.mkdir(parents=True, exist_ok=True)
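For context, this is roughly how the shim is consumed at call sites: on POSIX the real `fcntl` is imported and used directly, and on Windows the `_FcntlCompat` object answers the same `flock` calls. The file name and payload here are illustrative:

```python
import json
import os
import tempfile

try:
    import fcntl  # POSIX path; on Windows the module above swaps in _FcntlCompat
except ImportError:
    fcntl = None  # simplified for this sketch only

# Hold an exclusive lock while rewriting a JSON file, then release it.
path = os.path.join(tempfile.mkdtemp(), "positions.json")
with open(path, "w") as fh:
    if fcntl is not None:
        fcntl.flock(fh.fileno(), fcntl.LOCK_EX)
    json.dump({"AAPL": 10}, fh)
    if fcntl is not None:
        fcntl.flock(fh.fileno(), fcntl.LOCK_UN)

with open(path) as fh:
    print(json.load(fh))  # {'AAPL': 10}
```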
@@ -153,7 +181,7 @@ def _fetch_price(ticker: str) -> float | None:
async def _fetch_price_throttled(ticker: str) -> float | None:
"""Fetch price with semaphore throttling."""
async with _yfinance_semaphore:
        return await asyncio.to_thread(_fetch_price, ticker)
async def get_positions(account: Optional[str] = None) -> list:

View File

@@ -1,12 +1,9 @@
"""
Tests for portfolio API covers critical security and correctness fixes.
"""
import asyncio
import json
import os
import tempfile
import pytest
from pathlib import Path
from unittest.mock import patch
class TestRemovePositionMassDeletion:
@@ -261,3 +258,28 @@ class TestConstants:
assert "MAX_CONCURRENT_YFINANCE_REQUESTS" in content
assert "asyncio.Semaphore(MAX_CONCURRENT_YFINANCE_REQUESTS)" in content
def test_portfolio_locking_has_windows_fallback(self):
portfolio_path = Path(__file__).parent.parent / "api" / "portfolio.py"
content = portfolio_path.read_text()
assert "except ImportError" in content
assert "msvcrt" in content
class TestAsyncPriceFetch:
def test_fetch_price_throttled_uses_worker_thread(self, monkeypatch):
from api import portfolio
calls = []
async def fake_to_thread(func, *args):
calls.append((func, args))
return 321.0
monkeypatch.setattr(portfolio.asyncio, "to_thread", fake_to_thread)
result = asyncio.run(portfolio._fetch_price_throttled("AAPL"))
assert result == 321.0
assert calls == [(portfolio._fetch_price, ("AAPL",))]
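The async fix being tested above reduces to a small self-contained pattern: the blocking fetch moves to a worker thread via `asyncio.to_thread`, and a semaphore caps concurrency. Here `fetch_price` is a stub standing in for the real yfinance call:

```python
import asyncio
import time

def fetch_price(ticker: str) -> float:
    time.sleep(0.05)  # simulate blocking network I/O
    return 100.0

async def fetch_price_throttled(sem: asyncio.Semaphore, ticker: str) -> float:
    async with sem:  # throttle concurrent worker threads
        return await asyncio.to_thread(fetch_price, ticker)

async def main() -> list:
    sem = asyncio.Semaphore(2)
    return await asyncio.gather(
        *(fetch_price_throttled(sem, t) for t in ("AAPL", "MSFT", "NVDA"))
    )

prices = asyncio.run(main())
print(prices)  # [100.0, 100.0, 100.0]
```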