Bulk commit of accumulated local changes on the dtarkent2-sys fork.
Spans agents, dataflows, llm_clients, graph orchestration, CLI, and
docs. Primary work areas:
- llm_clients/ — multi-LLM client layer (anthropic, google, openai,
factory, base, validators) for swappable provider support
- dataflows/alpaca_data.py — Alpaca integration alongside existing
alpha_vantage and y_finance flows
- agents/structured/ — portfolio, scoring, and tier1/2/3 layers
- agents/analysts, researchers, risk_mgmt — local prompt and logic
customizations
- graph/ — orchestration tweaks (parallel_analysts, propagation,
reflection, signal_processing, trading_graph)
- alembic scaffolding inherited from prior commit
- chainlit web UI design notes in docs/plans/
This is a single WIP snapshot to preserve work before any upstream
merge. History can be cleaned up with interactive rebase later.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use Ollama Cloud GPU inference instead of self-hosted CPU Ollama.
Per-call latency drops from 2-15 minutes to 1-3 seconds.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- deep_think defaults to claude-sonnet-4-6
- quick_think defaults to claude-haiku-4-5-20251001
- LLM provider defaults to anthropic
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use Groq's free OpenAI-compatible API instead of Anthropic Claude
to avoid API credit costs. Defaults to llama-3.3-70b-versatile.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Run Bull+Bear researchers concurrently and all 3 risk analysts
(Aggressive/Conservative/Neutral) concurrently instead of sequentially.
With max_debate_rounds=1, there's no back-and-forth so parallel execution
is safe. Sequential mode is completely unchanged.
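A minimal sketch of the dispatch logic described above, assuming hypothetical coroutine names (run_bull, run_bear) standing in for the real researcher nodes:

```python
import asyncio

# Illustrative only: run_bull/run_bear stand in for the actual researcher
# nodes; state is a plain dict snapshot.
async def run_bull(state):
    return {"bull_history": state.get("bull_history", "") + "bull argues\n"}

async def run_bear(state):
    return {"bear_history": state.get("bear_history", "") + "bear argues\n"}

async def run_research_round(state, max_debate_rounds=1, parallel=True):
    # With a single round there is no back-and-forth, so both sides can
    # start from the same state snapshot and run concurrently.
    if parallel and max_debate_rounds == 1:
        bull, bear = await asyncio.gather(run_bull(state), run_bear(state))
    else:
        # Sequential mode: bear sees bull's output, exactly as before.
        bull = await run_bull(state)
        bear = await run_bear({**state, **bull})
    return {**state, **bull, **bear}
```

The same gather pattern extends to the three risk analysts, which are mutually independent within a round.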
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The final decision block set all agents to "completed" in buf but never
emitted agent_update SSE events for them. This left Risk stage dots as
cyan (active) and the Decision dot as gray in the UI. The decision block
now emits agent_update for any agent not yet shown as completed, before
the decision event goes out.
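The flush step can be sketched as follows; the names (buf_status, emitted_status) are assumptions, not the actual identifiers:

```python
# Hypothetical sketch of the fix: before the final decision event is sent,
# emit an agent_update for every agent the buffer marks completed but the
# SSE stream has not yet reported as such.
def flush_pending_agent_updates(buf_status, emitted_status):
    events = []
    for agent, status in buf_status.items():
        if status == "completed" and emitted_status.get(agent) != "completed":
            events.append({"type": "agent_update", "agent": agent,
                           "status": "completed"})
            emitted_status[agent] = "completed"
    return events
```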
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
With stream_mode="values", each chunk contains the full accumulated state.
The debate and risk sections were checking data fields (bull/bear history,
aggressive/conservative/neutral history) without guarding against re-processing,
causing completed agents to be reset to "in_progress" on every subsequent
chunk. This made agent and report counts appear stuck at 5/12 and 4/7.
Fix: move the _emitted flag guard to the outer if-block so the entire
section is skipped once its event has been emitted.
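The guard placement can be illustrated with a toy version of one section (field and flag names are illustrative):

```python
# With stream_mode="values" every chunk carries the full accumulated state,
# so the guard must sit on the outer if: once the section's event has been
# emitted, a later chunk that still contains bull/bear history must not
# re-enter the section and reset completed agents to "in_progress".
def process_debate_chunk(chunk, flags, emit):
    if chunk.get("bull_history") and not flags.get("debate_emitted"):
        emit({"type": "agent_update", "agent": "Bull Researcher",
              "status": "in_progress"})
        flags["debate_emitted"] = True
```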
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove verbose debug prints added during parallel analysts development.
Fix the report count stuck at 4/7 by updating buf.report_sections for investment_plan,
trader_investment_plan, and final_trade_decision (previously only analyst
reports were tracked).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Run all 4 analysts (Market, Social, News, Fundamentals) concurrently
using asyncio.gather instead of sequentially. Each analyst gets its own
isolated message state and tool-calling loop. Cuts analyst phase from
~8-9 min to ~2-3 min (total analysis from ~11 min to ~4-5 min).
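The fan-out can be sketched as below; run_analyst is a stand-in for the real analyst nodes, and the per-analyst message list models the isolated state:

```python
import asyncio

# Sketch of the parallel analyst fan-out. Each analyst gets its own
# message list, so the concurrent tool-calling loops never share
# mutable state.
async def run_analyst(name, shared_inputs):
    messages = [{"role": "user", "content": shared_inputs["question"]}]
    # ...the real tool-calling loop would append to this isolated list...
    return name, f"{name} report"

async def run_all_analysts(shared_inputs):
    names = ["Market", "Social", "News", "Fundamentals"]
    results = await asyncio.gather(
        *(run_analyst(n, shared_inputs) for n in names))
    return dict(results)
```

Wall-clock time then tracks the slowest analyst rather than the sum of all four.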
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace global update_research_team_status() with local buf calls
  (it was updating the CLI's global buffer, not the analysis-specific one)
- Add replay buffer: all events stored in memory per analysis
- Support ?last_event=N query param for reconnection replay
- Send event IDs so browser can track position
- Mark analysis as done so replay works after completion
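The replay mechanism above can be sketched like this (class and field names are assumptions about the implementation):

```python
# Minimal sketch of the per-analysis replay buffer.
class ReplayBuffer:
    def __init__(self):
        self.events = []   # every SSE event for this analysis, in order
        self.done = False  # set on completion so late clients still replay

    def append(self, data):
        # Event IDs are 1-based, so a client's ?last_event=N (or the
        # browser's Last-Event-ID) maps directly onto a slice offset.
        self.events.append({"id": len(self.events) + 1, "data": data})

    def replay_after(self, last_event=0):
        # Everything the reconnecting client has not seen yet.
        return self.events[last_event:]
```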
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Railway kills idle connections after ~30s. During long LLM calls
between agent stages, the SSE stream goes silent and gets dropped.
Now sends heartbeat events every 15s to keep the connection alive.
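One way to express the keep-alive, assuming events reach the SSE generator through an asyncio.Queue (the delivery mechanism is an assumption):

```python
import asyncio

# Wait for the next real event, but never let the stream go silent
# longer than the heartbeat interval.
async def sse_stream(queue, heartbeat=15.0):
    while True:
        try:
            event = await asyncio.wait_for(queue.get(), timeout=heartbeat)
        except asyncio.TimeoutError:
            # No agent output within 15s (e.g. a long LLM call): emit a
            # heartbeat so the ~30s idle timeout never fires.
            yield "event: heartbeat\ndata: {}\n\n"
            continue
        if event is None:  # sentinel: analysis finished
            return
        yield f"data: {event}\n\n"
```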
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Swap Chainlit chatbot UI for a minimal FastAPI service with:
- POST /analyze to start analysis
- GET /analyze/{id}/stream for SSE progress events
- GET /health for Railway healthcheck
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the barebones web UI with one that mirrors the CLI:
- Agent status table with team/agent/status tracking
- Reuses CLI's MessageBuffer, update_analyst_statuses, classify_message_type
- Shows full debate transcripts (Bull/Bear, Risk team)
- Live stats (LLM calls, tokens, elapsed time)
- Collapsible Steps for each phase with full report content
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The old model ID claude-sonnet-4-5-20241022 returns 404. Update the
deep-thinking model to the current claude-sonnet-4-6.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a Chainlit-based web interface that wraps TradingAgentsGraph,
streaming analyst reports, research debates, and final decisions
to the browser in real time. Configured for Anthropic Claude models.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>