The provider itself was healthy, but the legacy dashboard path still ran the heaviest graph shape by default and had no trustworthy stage profiling. This change narrows the default legacy execution settings to the market-only compact path with conservative timeout/retry values, injects those settings through the unified request/runtime surface, and adds a standalone graph-update profiler so stage timing comes from real node completions rather than synthetic script labels; a sketch of that timing approach follows this record.
Constraint: Profiling evidence had to be grounded in the real provider path without adding new dependencies or polluting the runtime contract
Rejected: Keep synthetic STAGE_TIMING in the subprocess protocol | misattributes the heaviest work to the wrong phase and makes the profiling conclusion untrustworthy
Rejected: Broaden the default legacy path and rely on longer timeouts | raises cost and latency while obscuring the true bottleneck
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep operational profiling separate from runtime business contracts unless timings are sourced from real graph-stage boundaries
Tested: python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py -q
Tested: python -m compileall web_dashboard/backend orchestrator/profile_stage_chain.py
Tested: real provider direct invocation returned OK against the MiniMax anthropic-compatible endpoint
Tested: real graph profiling via orchestrator/profile_stage_chain.py produced stage timings for 600519.SS on 2026-04-10 with selected_analysts=market and compact prompt
Not-tested: legacy subprocess full end-to-end success case on the same provider path (the current run still exits via protocol failure after an upstream connection error)
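A minimal sketch of the profiler's timing approach, assuming the graph run yields one update event per completed node (a LangGraph-style stream of dicts keyed by node name); the event shape and the `profile_stages` helper are illustrative assumptions, not the actual `orchestrator/profile_stage_chain.py` interface:

```python
# Minimal sketch: derive stage timings from real node completions instead
# of synthetic STAGE_TIMING labels. Assumes the graph run yields one
# {node_name: state_delta} dict per completed node (an assumption).
import time
from typing import Any, Iterable, Mapping


def profile_stages(updates: Iterable[Mapping[str, Any]]) -> dict[str, float]:
    """Attribute wall-clock time to each stage by measuring the gap
    between consecutive node-completion events."""
    timings: dict[str, float] = {}
    last = time.perf_counter()
    for update in updates:
        now = time.perf_counter()
        for node_name in update:  # one key per node that just completed
            timings[node_name] = timings.get(node_name, 0.0) + (now - last)
        last = now
    return timings
```

Because every timing is anchored to a real completion event, the heaviest stage can no longer be misattributed by a mislabeled synthetic phase.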
The rollout-ready branch still conflated dashboard auth with provider credentials, discarded diagnostics when both signal lanes degraded, and treated RESULT_META as optional even though downstream contracts now depend on it. This change separates provider runtime settings from request auth, preserves source diagnostics/data quality in full-failure contracts, requires RESULT_META in the subprocess protocol, and moves A-share holidays into an updateable calendar data source; the calendar approach is sketched after this record.
Constraint: No external market-calendar dependency is available in env312 and dependency policy forbids adding one casually
Rejected: Keep reading provider keys from request headers | couples dashboard auth to execution and breaks non-anthropic providers
Rejected: Leave both-signals-unavailable as a bare ValueError | loses diagnostics before live/backend contracts can serialize them
Rejected: Keep A-share holidays embedded in Python constants | requires code edits every year and preserves the stopgap design
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: Keep subprocess protocol fields explicit and fail closed when RESULT_META is missing; do not route provider credentials through dashboard auth again
Tested: python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py orchestrator/tests/test_market_calendar.py orchestrator/tests/test_live_mode.py orchestrator/tests/test_application_service.py orchestrator/tests/test_quant_runner.py orchestrator/tests/test_llm_runner.py -q
Tested: python -m compileall orchestrator web_dashboard/backend
Not-tested: real provider-backed execution across openai/google providers
Not-tested: browser/manual verification beyond existing frontend contract consumers
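A minimal sketch of the updateable calendar idea, assuming holidays live in a JSON data file of ISO dates; the file shape, path handling, and function names are assumptions, not the project's actual market-calendar module:

```python
# Sketch of an updateable A-share holiday calendar, assuming a JSON data
# file like {"holidays": ["2026-01-01", ...]} instead of holidays
# hard-coded as Python constants. File shape and names are assumptions.
import json
from datetime import date
from pathlib import Path


def load_holidays(path: Path) -> frozenset[date]:
    """Load the holiday set from a data file that can be refreshed
    yearly without code edits."""
    raw = json.loads(path.read_text(encoding="utf-8"))
    return frozenset(date.fromisoformat(d) for d in raw["holidays"])


def is_trading_day(d: date, holidays: frozenset[date]) -> bool:
    # A-share markets trade Monday-Friday outside the holiday set.
    return d.weekday() < 5 and d not in holidays
```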
Phase 2 moves the dashboard off raw task-state leakage and onto stable public projections. Task status, task listings, progress websocket events, and portfolio recommendation reads now load persisted contracts when available, expose a contract-first envelope, and keep legacy fields inside a compat block instead of scattering them across top-level payloads; the envelope shape is sketched after this record.
Constraint: Existing task-status JSON and recommendation files must continue to load successfully during migration
Rejected: Return raw task_results directly from API and websocket | keeps legacy fields as the public contract and blocks cutover
Rejected: Rewrite stored recommendation files in place | adds risky migration work before rollout gates exist
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: Keep public payload shaping in job/result-store projections, not in ad-hoc route logic
Tested: python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py web_dashboard/backend/tests/test_main_api.py web_dashboard/backend/tests/test_portfolio_api.py -q
Tested: python -m pytest orchestrator/tests/test_application_service.py orchestrator/tests/test_trading_graph_config.py -q
Tested: python -m compileall orchestrator tradingagents web_dashboard/backend
Not-tested: legacy frontend rendering against new compat-wrapped task payloads
Not-tested: real websocket clients and provider-backed end-to-end analysis
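A minimal sketch of the compat-wrapped envelope, assuming legacy task fields nest under a single compat key; the field names are illustrative, not the actual projection code:

```python
# Sketch of the contract-first public envelope. Legacy fields stay
# readable during migration but live under one "compat" key rather than
# being spread across the top level. Field names are assumptions.
from typing import Any


def to_public_envelope(contract: dict[str, Any],
                       legacy_task: dict[str, Any]) -> dict[str, Any]:
    """Project a persisted result contract into the public payload shape."""
    return {
        "schema_version": contract.get("schema_version", "v1alpha1"),
        "status": contract["status"],
        "result": contract.get("result"),
        # Fenced off so legacy fields never become the public contract again.
        "compat": legacy_task,
    }
```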
Phase 1 left the backend halfway between legacy task payloads and the new executor boundary. This commit finishes the review-fix pass so missing protocol markers fail closed, timed-out subprocesses are killed, and successful analysis runs persist a result contract before task state is marked complete; the timeout and marker handling are sketched after this record.
Constraint: env312 lacks pytest-asyncio so async executor tests must run without extra plugins
Rejected: Keep missing-marker fallback as HOLD | masks protocol regressions as neutral signals
Rejected: Leave service success assembly in AnalysisService | breaks contract-first persistence and result_ref wiring
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: Keep backend success state driven by persisted result contracts; do not reintroduce raw stdout parsing in services
Tested: python -m compileall orchestrator tradingagents web_dashboard/backend
Tested: python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py web_dashboard/backend/tests/test_main_api.py web_dashboard/backend/tests/test_portfolio_api.py -q
Tested: python -m pytest orchestrator/tests/test_application_service.py orchestrator/tests/test_trading_graph_config.py -q
Not-tested: real provider-backed MiniMax execution
Not-tested: full dashboard websocket/manual UI flow
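A minimal sketch of the executor hardening, assuming the worker prints a RESULT_META: marker line on success; the command handling, marker format, and exception name are assumptions, not the project's actual executor:

```python
# Sketch of the hardened executor boundary: timed-out workers are killed
# rather than leaked, and a missing protocol marker fails closed instead
# of degrading into a neutral HOLD. Marker format is an assumption.
import json
import subprocess


class ProtocolError(RuntimeError):
    """A missing protocol marker is an error, never a neutral signal."""


def run_worker(cmd: list[str], timeout_s: float) -> dict:
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    try:
        stdout, _ = proc.communicate(timeout=timeout_s)
    except subprocess.TimeoutExpired:
        proc.kill()          # reap the hung worker instead of leaking it
        proc.communicate()   # drain pipes after the kill
        raise
    for line in stdout.splitlines():
        if line.startswith("RESULT_META:"):
            return json.loads(line[len("RESULT_META:"):])
    raise ProtocolError("worker output missing RESULT_META marker")
```

Failing closed here is what retires the rejected HOLD fallback: a protocol regression now surfaces as an error the caller must handle.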
This change set introduces a versioned result contract, shared config schema/loading, provider/data adapter seams, and a no-strategy application-service skeleton so the current research graph, orchestrator layer, and dashboard backend stop drifting further apart. It also keeps the earlier MiniMax compatibility and compact-prompt work aligned with the new contract shape and extends regression coverage so degradation, fallback, and service migration remain testable during the next phases; a sketch of the versioned contract shape follows this record.
Constraint: Must preserve existing FastAPI entrypoints and fallback behavior while introducing an application-service seam
Constraint: Must not turn application service into a new strategy or learning layer
Rejected: Full backend rewrite to service-only execution now | too risky before contract and fallback paths stabilize
Rejected: Leave provider/data/config logic distributed across scripts and endpoints | continues boundary drift and weakens verification
Confidence: high
Scope-risk: broad
Directive: Keep future application-service changes orchestration-only; move any scoring, signal fusion, or learning logic to orchestrator or tradingagents instead
Tested: python -m compileall orchestrator tradingagents web_dashboard/backend
Tested: python -m pytest orchestrator/tests/test_signals.py orchestrator/tests/test_llm_runner.py orchestrator/tests/test_quant_runner.py orchestrator/tests/test_contract_v1alpha1.py orchestrator/tests/test_application_service.py orchestrator/tests/test_provider_adapter.py web_dashboard/backend/tests/test_main_api.py web_dashboard/backend/tests/test_portfolio_api.py web_dashboard/backend/tests/test_api_smoke.py web_dashboard/backend/tests/test_services_migration.py -q
Not-tested: live MiniMax/provider execution against external services
Not-tested: full dashboard/manual websocket flow against a running frontend
Not-tested: omx team runtime end-to-end in the primary workspace
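A minimal sketch of what a versioned result contract can look like, assuming a frozen dataclass with an explicit schema_version field so consumers can gate on the version; field names echo the v1alpha1 naming but are illustrative, not the project's actual schema:

```python
# Sketch of a versioned result contract. The explicit schema_version lets
# projections and stores gate on the contract shape instead of sniffing
# fields. Names and fields are illustrative assumptions.
from dataclasses import asdict, dataclass, field


@dataclass(frozen=True)
class ResultContractV1Alpha1:
    schema_version: str = "v1alpha1"
    ticker: str = ""
    decision: str = ""                     # e.g. "BUY" / "SELL" / "HOLD"
    diagnostics: dict = field(default_factory=dict)

    def to_payload(self) -> dict:
        """Serialize for persistence; public projections read this shape."""
        return asdict(self)
```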