The provider itself was healthy, but the legacy dashboard path still ran the heaviest graph shape by default and had no trustworthy stage-profiling story. This change narrows the default legacy execution settings to the market-only compact path with conservative timeout/retry values, injects those settings through the unified request/runtime surface, and adds a standalone graph-update profiler so stage timing comes from real node completions rather than synthetic script labels.

- **Constraint:** Profiling evidence had to be grounded in the real provider path without adding new dependencies or polluting the runtime contract.
- **Rejected:** Keep synthetic `STAGE_TIMING` in the subprocess protocol | misattributes the heaviest work to the wrong phase and makes the profiling conclusion untrustworthy.
- **Rejected:** Broaden the default legacy path and rely on longer timeouts | raises cost and latency while obscuring the true bottleneck.
- **Confidence:** high
- **Scope-risk:** narrow
- **Reversibility:** clean
- **Directive:** Keep operational profiling separate from runtime business contracts unless timings are sourced from real graph-stage boundaries.
- **Tested:** `python -m pytest web_dashboard/backend/tests/test_executors.py web_dashboard/backend/tests/test_services_migration.py web_dashboard/backend/tests/test_api_smoke.py -q`
- **Tested:** `python -m compileall web_dashboard/backend orchestrator/profile_stage_chain.py`
- **Tested:** real provider direct invoke returned OK against the MiniMax anthropic-compatible endpoint.
- **Tested:** real graph profiling via `orchestrator/profile_stage_chain.py` produced stage timings for 600519.SS on 2026-04-10 with `selected_analysts=market` and compact prompts.
- **Not-tested:** legacy subprocess full end-to-end success on the same provider path (the current run still exits via a protocol failure after an upstream connection error).
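As a rough illustration of what "narrowed defaults injected through a unified settings surface" can look like, here is a minimal sketch. All names (`LegacyRunSettings`, the field names, the specific timeout/retry values) are hypothetical placeholders, not the actual module's API:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LegacyRunSettings:
    """Hypothetical settings object for the legacy execution path.

    Defaults illustrate the narrowed shape: market-only analysts,
    compact prompts, and conservative timeout/retry values, so the
    heaviest graph shape is opt-in rather than the default.
    """
    selected_analysts: tuple = ("market",)
    compact_prompts: bool = True
    request_timeout_s: float = 60.0   # conservative per-request timeout
    max_retries: int = 1              # fail fast instead of masking issues


# A single frozen instance can be injected through the request/runtime
# surface, so every caller sees the same narrowed defaults.
DEFAULTS = LegacyRunSettings()
```

Freezing the dataclass keeps the injected defaults immutable, so a caller that needs the broader graph shape must construct an explicit override rather than mutating shared state.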
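The profiling approach described above, timing stages from real node completions rather than synthetic labels, can be sketched as follows. This is an assumed shape, not the actual `orchestrator/profile_stage_chain.py` implementation; the class and method names are illustrative:

```python
import time
from dataclasses import dataclass, field


@dataclass
class StageProfiler:
    """Records wall-clock stage durations at real node boundaries.

    A duration is only recorded when a node actually completes, so the
    timings reflect the graph's true stage boundaries instead of
    synthetic script labels emitted ahead of the work.
    """
    _starts: dict = field(default_factory=dict)
    durations: dict = field(default_factory=dict)

    def on_node_start(self, stage: str) -> None:
        self._starts[stage] = time.perf_counter()

    def on_node_complete(self, stage: str) -> None:
        # pop() ensures a completion without a matching start raises,
        # rather than silently producing a bogus timing.
        self.durations[stage] = time.perf_counter() - self._starts.pop(stage)


# Usage: wire the callbacks to the graph's node lifecycle events.
profiler = StageProfiler()
profiler.on_node_start("market_analyst")
time.sleep(0.01)  # stand-in for the node's real work
profiler.on_node_complete("market_analyst")
```

Because the profiler lives outside the subprocess protocol, it can be dropped into a standalone script without touching the runtime business contract.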