Use sync functions with pool.submit() instead of async + run_in_executor
to avoid potential asyncio event-loop interaction issues with LangGraph.
Add timing logs to diagnose parallelism.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
LangGraph state proxies serialize concurrent dict access, forcing
threads to run sequentially. Fix by snapshotting the needed fields into
plain dicts before dispatching to ThreadPoolExecutor, the same pattern
used by the working parallel-analysts node.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
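A sketch of the snapshot pattern, assuming hypothetical state keys and worker logic: copy what each worker needs out of the proxy-wrapped graph state into plain dicts on the caller's thread, so no thread ever touches the shared state object.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(snapshot: dict) -> str:
    # Operates only on its private plain-dict copy; never sees the
    # shared state proxy, so nothing serializes the threads.
    return f"{snapshot['role']}: {snapshot['ticker']}"

def run_node(state) -> dict:
    # Snapshot while still on the caller's thread. Each worker gets its
    # own plain dict, so no proxy crosses a thread boundary.
    snapshots = [
        {"role": role, "ticker": state["ticker"]}
        for role in ("bull", "bear")
    ]
    with ThreadPoolExecutor(max_workers=len(snapshots)) as pool:
        results = list(pool.map(analyze, snapshots))
    return {"reports": results}
```

Returning a fresh dict from the node hands the merged results back through the normal state-update path, so only the dispatch side changes.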
Run Bull+Bear researchers concurrently and all 3 risk analysts
(Aggressive/Conservative/Neutral) concurrently instead of sequentially.
With max_debate_rounds=1 there is no back-and-forth, so parallel
execution is safe. Sequential mode is completely unchanged.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
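A hypothetical sketch of gating parallelism on the round count: with a single round the debaters have no dependency on each other's output, so they can run at once; otherwise the original sequential loop is kept. The debate function and its callable-based interface are assumptions, not the project's real signatures.

```python
from concurrent.futures import ThreadPoolExecutor

def debate(debaters, max_debate_rounds: int) -> list[str]:
    if max_debate_rounds == 1:
        # One round: nobody needs to read an opponent's reply, so all
        # debaters (e.g. Aggressive/Conservative/Neutral) run at once.
        with ThreadPoolExecutor(max_workers=len(debaters)) as pool:
            return list(pool.map(lambda fn: fn(), debaters))
    # Multiple rounds: unchanged sequential path, since each round may
    # depend on what the other side said in the previous one.
    out = []
    for _ in range(max_debate_rounds):
        for fn in debaters:
            out.append(fn())
    return out
```

The branch keeps the multi-round code path byte-for-byte sequential, which is what makes the change safe to ship.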
Run all 4 analysts (Market, Social, News, Fundamentals) concurrently
using asyncio.gather instead of sequentially. Each analyst gets its own
isolated message state and tool-calling loop. Cuts analyst phase from
~8-9 min to ~2-3 min (total analysis from ~11 min to ~4-5 min).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
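A sketch of the asyncio.gather fan-out with per-analyst isolated message state. The analyst names match the commit; the coroutine body and its stand-in for the tool-calling loop are illustrative assumptions.

```python
import asyncio

async def run_analyst(name: str) -> dict:
    # Each coroutine owns its own message list, so the four analysts'
    # tool-calling loops cannot interleave into one shared history.
    messages = [{"role": "system", "content": f"You are the {name} analyst"}]
    await asyncio.sleep(0)  # stands in for awaited LLM and tool calls
    messages.append({"role": "assistant", "content": f"{name} report"})
    return {"name": name, "messages": messages}

async def run_all() -> list[dict]:
    names = ["market", "social", "news", "fundamentals"]
    # gather runs all four concurrently; wall time tracks the slowest
    # analyst instead of the sum, which is where the speedup comes from.
    return await asyncio.gather(*(run_analyst(n) for n in names))

results = asyncio.run(run_all())
```

gather returns results in argument order regardless of completion order, so downstream consumers can rely on a stable analyst ordering.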