Switch the default LLM in tradingagents/default_config.py from
nemotron-3-nano:30b-cloud (which was producing low-fidelity
structured output across services) to glm-5.1:cloud. Canonical
values live in .env, which is gitignored.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- asyncpg + psycopg2-binary: same fix as gex-llm — the service had
  DATABASE_URL and an alembic env.py but no Postgres driver installed
- sse-starlette: imported by app.py but never declared; the running
container had it from a prior manual pip install. Rebuilding the
image from canonical requirements dropped it and crashed the
container with ModuleNotFoundError on startup.
Bundles a pre-existing uncommitted restructuring of requirements.txt
into pinned + categorized groups (Core, LLM Clients, Data, Analysis,
Database, CLI & UI, Testing, Utilities).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Alpaca provides 10k calls/min free with 7yr history via IEX feed.
Hybrid approach: Alpaca for price bars, snapshots, sector ETF perf,
and moving averages; yfinance for fundamentals (PE, margins, 13F).
- Add alpaca_data.py: bars, snapshots, MAs, sector ETF perf, news
- Update get_macro_indicators: sector ETF performance via Alpaca
- Update get_sector_rotation: compute relative strength vs SPY
- Update entry timing node: Alpaca MAs from actual bar data
- Add alpaca-py to requirements.txt
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Return an error message to the LLM instead of crashing the pipeline.
Also list supported indicators in the tool docstring.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Changed deep_think_llm and quick_think_llm from gpt-5.2/gpt-5-mini
to glm-5:cloud, and backend_url to ollama.com/v1.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use Ollama Cloud GPU inference instead of self-hosted CPU Ollama.
Per-call latency drops from 2-15 minutes to 1-3 seconds.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- deep_think defaults to claude-sonnet-4-6
- quick_think defaults to claude-haiku-4-5-20251001
- LLM provider defaults to anthropic
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use Groq's free OpenAI-compatible API instead of Anthropic Claude
to avoid API credit costs. Defaults to llama-3.3-70b-versatile.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
A sync ThreadPoolExecutor doesn't truly parallelize inside LangGraph nodes.
Switched to async functions with asyncio.to_thread() + asyncio.gather() —
the same pattern that works for the parallel analyst node.
Result: Research (Bull+Bear) and Risk (Agg+Con+Neu) now run concurrently.
Total analysis time reduced from ~450s to ~280s (~38% faster).
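The pattern looks roughly like this — a hedged sketch, with `bull_researcher`, `bear_researcher`, and the state fields as illustrative placeholders for the real node internals:

```python
import asyncio

def bull_researcher(snapshot: dict) -> str:
    # Stand-in for a blocking LLM call; runs in a worker thread.
    return f"bull view on {snapshot['ticker']}"

def bear_researcher(snapshot: dict) -> str:
    return f"bear view on {snapshot['ticker']}"

async def research_node(state: dict) -> dict:
    # Copy the needed fields out of the graph state before dispatching,
    # then run both blocking calls concurrently via to_thread + gather.
    snapshot = {"ticker": state["ticker"]}
    bull, bear = await asyncio.gather(
        asyncio.to_thread(bull_researcher, snapshot),
        asyncio.to_thread(bear_researcher, snapshot),
    )
    return {"bull_history": bull, "bear_history": bear}

result = asyncio.run(research_node({"ticker": "AAPL"}))
```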
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use sync functions with pool.submit() instead of async+run_in_executor
to avoid potential asyncio event-loop interaction issues with LangGraph.
Added timing logs to diagnose parallelism.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
LangGraph state proxies serialize concurrent dict access, forcing
threads to run sequentially. Fix by snapshotting needed fields into
plain dicts before dispatching to ThreadPoolExecutor — same pattern
used by the working parallel analysts node.
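A minimal sketch of the snapshot-then-dispatch shape (field names and `risk_analyst` are illustrative, not the actual node code):

```python
from concurrent.futures import ThreadPoolExecutor

def risk_analyst(name: str, snapshot: dict) -> str:
    # Reads only the plain-dict snapshot, never the live graph state,
    # so no state-proxy lock serializes the three threads.
    return f"{name}: assessed {snapshot['trader_plan']}"

def risk_node(state: dict) -> dict:
    # Snapshot the fields each worker needs into a plain dict first.
    snapshot = {"trader_plan": state["trader_plan"]}
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(risk_analyst, name, snapshot)
                   for name in ("aggressive", "conservative", "neutral")]
        outputs = [f.result() for f in futures]
    return {"risk_history": outputs}

out = risk_node({"trader_plan": "buy 100 AAPL"})
```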
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Run Bull+Bear researchers concurrently and all 3 risk analysts
(Aggressive/Conservative/Neutral) concurrently instead of sequentially.
With max_debate_rounds=1, there's no back-and-forth so parallel execution
is safe. Sequential mode is completely unchanged.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The final decision block set all agents to "completed" in buf but never
emitted agent_update SSE events for them, leaving the Risk stage dots
cyan (active) and the Decision dot gray in the UI. Now emits agent_update
for any agents not yet shown as completed before the decision event.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
With stream_mode="values", each chunk contains the full accumulated state.
The debate and risk sections were checking data fields (bull/bear history,
aggressive/conservative/neutral history) without guarding against re-processing,
causing completed agents to be reset to "in_progress" on every subsequent
chunk. This made agent and report counts appear stuck at 5/12 and 4/7.
Fix: move the _emitted flag guard to the outer if-block so the entire
section is skipped once its event has been emitted.
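The guard placement can be sketched as follows (the chunk shape and section key are illustrative):

```python
emitted: set[str] = set()

def handle_chunk(chunk: dict, events: list) -> None:
    # stream_mode="values": every chunk carries the full accumulated state,
    # so the guard must wrap the whole section check, not just the emit —
    # otherwise later chunks re-enter the section and reset agent status.
    if chunk.get("bull_history") and "debate" not in emitted:
        events.append(("debate", chunk["bull_history"]))
        emitted.add("debate")

events: list = []
# Two chunks carry the same accumulated debate data; only one event fires.
for chunk in [{"bull_history": "round 1"}, {"bull_history": "round 1"}]:
    handle_chunk(chunk, events)
```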
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove verbose debug prints added during parallel analysts development.
Fix reports showing 4/7 by updating buf.report_sections for investment_plan,
trader_investment_plan, and final_trade_decision (previously only analyst
reports were tracked).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Run all 4 analysts (Market, Social, News, Fundamentals) concurrently
using asyncio.gather instead of sequentially. Each analyst gets its own
isolated message state and tool-calling loop. Cuts analyst phase from
~8-9 min to ~2-3 min (total analysis from ~11 min to ~4-5 min).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace global update_research_team_status() with local buf calls
(was updating CLI's global buffer, not analysis-specific one)
- Add replay buffer: all events stored in memory per analysis
- Support ?last_event=N query param for reconnection replay
- Send event IDs so browser can track position
- Mark analysis as done so replay works after completion
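The replay mechanism above can be sketched like this (class and method names are hypothetical, not the actual app.py code):

```python
class ReplayBuffer:
    """Per-analysis in-memory event log supporting replay after reconnect."""

    def __init__(self) -> None:
        self.events: list[tuple[int, dict]] = []  # (event_id, payload), IDs from 1
        self.done = False  # kept so replay still works after completion

    def append(self, payload: dict) -> int:
        event_id = len(self.events) + 1
        self.events.append((event_id, payload))
        return event_id

    def replay(self, last_event: int = 0) -> list[tuple[int, dict]]:
        # Everything after the client's last-seen ID (?last_event=N), in order.
        return [e for e in self.events if e[0] > last_event]

buf = ReplayBuffer()
buf.append({"type": "agent_update", "agent": "Market Analyst"})
buf.append({"type": "report", "section": "market_report"})
missed = buf.replay(last_event=1)  # client reconnects having seen event 1
```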
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Railway kills idle connections after ~30s. During long LLM calls
between agent stages, the SSE stream goes silent and gets dropped.
Now sends heartbeat events every 15s to keep the connection alive.
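A sketch of the heartbeat loop, assuming events arrive on an asyncio queue (the queue-based shape and event dicts are illustrative; the demo uses a tiny interval so it finishes instantly):

```python
import asyncio

async def event_stream(queue: asyncio.Queue, heartbeat: float = 15.0):
    """Yield analysis events; emit a heartbeat whenever the queue is quiet."""
    while True:
        try:
            event = await asyncio.wait_for(queue.get(), timeout=heartbeat)
        except asyncio.TimeoutError:
            # No event within the heartbeat window: keep the connection alive.
            yield {"event": "heartbeat"}
            continue
        yield event
        if event.get("event") == "done":
            return

async def demo() -> list[dict]:
    q: asyncio.Queue = asyncio.Queue()

    async def producer():
        await asyncio.sleep(0.05)  # simulate a long silent LLM call
        await q.put({"event": "done"})

    task = asyncio.create_task(producer())
    events = [e async for e in event_stream(q, heartbeat=0.01)]
    await task
    return events

received = asyncio.run(demo())
# received: one or more heartbeats while the producer is silent, then done
```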
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Swap the Chainlit chatbot UI for a minimal FastAPI service with:
- POST /analyze to start analysis
- GET /analyze/{id}/stream for SSE progress events
- GET /health for Railway healthcheck
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replaces the barebones web UI with one that mirrors the CLI:
- Agent status table with team/agent/status tracking
- Reuses CLI's MessageBuffer, update_analyst_statuses, classify_message_type
- Shows full debate transcripts (Bull/Bear, Risk team)
- Live stats (LLM calls, tokens, elapsed time)
- Collapsible Steps for each phase with full report content
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The old model ID claude-sonnet-4-5-20241022 returns 404. Updated to
the current claude-sonnet-4-6 for the deep thinking model.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Chainlit 2.9.6 rejects the manually-created config.toml as outdated.
Let it generate its own at runtime.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds a Chainlit-based web interface that wraps TradingAgentsGraph,
streaming analyst reports, research debates, and final decisions
to the browser in real-time. Configured for Anthropic Claude models.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>