Bulk commit of accumulated local changes on the dtarkent2-sys fork.
Spans agents, dataflows, llm_clients, graph orchestration, CLI, and
docs. Primary work areas:
- llm_clients/ — multi-LLM client layer (anthropic, google, openai,
factory, base, validators) for swappable provider support (see the
sketch after this list)
- dataflows/alpaca_data.py — Alpaca integration alongside existing
alpha_vantage and y_finance flows
- agents/structured/ — portfolio, scoring, and tier1/2/3 layers
- agents/analysts, researchers, risk_mgmt — local prompt and logic
customizations
- graph/ — orchestration tweaks (parallel_analysts, propagation,
reflection, signal_processing, trading_graph)
- alembic scaffolding inherited from prior commit
- chainlit web UI design notes in docs/plans/
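
A rough sketch of the factory/base pattern behind the swappable-provider
support. Class and function names here are hypothetical; the real
llm_clients/ modules may be organized differently.

```python
# Hypothetical sketch: a common base interface plus a factory that resolves a
# provider name (e.g. from config) to a concrete client.
from abc import ABC, abstractmethod


class BaseLLMClient(ABC):
    """Interface every provider-specific client implements."""

    @abstractmethod
    def invoke(self, prompt: str, **kwargs) -> str: ...


class AnthropicClient(BaseLLMClient):
    def invoke(self, prompt: str, **kwargs) -> str:
        raise NotImplementedError  # would wrap the Anthropic SDK


class OpenAIClient(BaseLLMClient):
    def invoke(self, prompt: str, **kwargs) -> str:
        raise NotImplementedError  # would wrap the OpenAI SDK


_PROVIDERS = {"anthropic": AnthropicClient, "openai": OpenAIClient}


def create_llm_client(provider: str) -> BaseLLMClient:
    try:
        return _PROVIDERS[provider]()
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {provider}") from None
```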
This is a single WIP snapshot to preserve work before any upstream
merge. History can be cleaned up with interactive rebase later.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Run the Bull and Bear researchers concurrently, and all three risk analysts
(Aggressive/Conservative/Neutral) concurrently, instead of sequentially.
With max_debate_rounds=1 there is no back-and-forth between debaters, so
parallel execution is safe. Sequential mode is completely unchanged.
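
A minimal sketch of the concurrency pattern, assuming the researcher and risk
nodes are plain callables over a shared state dict (names hypothetical; the
real wiring lives in graph/).

```python
# Hedged sketch: fan out independent debate nodes on a thread pool and collect
# their outputs. Safe only because max_debate_rounds=1 means no node depends
# on another node's output.
from concurrent.futures import ThreadPoolExecutor


def run_concurrently(state: dict, nodes: dict) -> dict:
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        futures = {name: pool.submit(fn, state) for name, fn in nodes.items()}
        return {name: f.result() for name, f in futures.items()}


# Hypothetical usage:
# research = run_concurrently(state, {"bull": bull_researcher,
#                                     "bear": bear_researcher})
# risk = run_concurrently(state, {"aggressive": aggressive_analyst,
#                                 "conservative": conservative_analyst,
#                                 "neutral": neutral_analyst})
```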
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

- Add save prompt after analysis with organized subfolder structure
- Fix report truncation by using sequential panels instead of Columns
- Add optional full report display prompt
- Add update_analyst_statuses() for unified status logic (pending/in_progress/completed)
- Normalize analyst selection to predefined ANALYST_ORDER for consistent execution
- Add message deduplication to prevent duplicates from stream_mode=values (see sketch after this list)
- Restructure streaming loop so state handlers run on every chunk
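
Because stream_mode=values re-emits the full message list on every chunk, the
deduplication keys on message identity. A minimal sketch, assuming
LangChain-style message objects (attribute names may differ).

```python
# Hedged sketch: remember which messages have already been rendered and skip
# them when the same message arrives again in a later chunk.
seen = set()


def handle_chunk(chunk: dict) -> None:
    for msg in chunk.get("messages", []):
        # Prefer a stable id; fall back to hashing the content.
        key = getattr(msg, "id", None) or hash(str(getattr(msg, "content", msg)))
        if key in seen:
            continue
        seen.add(key)
        print(getattr(msg, "content", msg))  # stand-in for the CLI renderer
```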
- Add StatsCallbackHandler for tracking LLM calls, tool calls, and tokens (see sketch after this list)
- Integrate callbacks into TradingAgentsGraph and all LLM clients
- Dynamic agent/report counts based on selected analysts
- Fix report completion counting (tied to agent completion)
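
A minimal sketch of such a handler on LangChain's callback interface. The
token-usage field is provider-dependent, and the real StatsCallbackHandler
likely tracks more detail.

```python
# Hedged sketch: count LLM calls, tool calls, and (where the provider reports
# them) total tokens via callback hooks.
from langchain_core.callbacks import BaseCallbackHandler


class StatsCallbackHandler(BaseCallbackHandler):
    def __init__(self) -> None:
        self.llm_calls = 0
        self.tool_calls = 0
        self.total_tokens = 0

    def on_llm_start(self, serialized, prompts, **kwargs) -> None:
        self.llm_calls += 1

    def on_llm_end(self, response, **kwargs) -> None:
        # OpenAI-style clients report usage in llm_output; others may not.
        usage = (response.llm_output or {}).get("token_usage", {})
        self.total_tokens += usage.get("total_tokens", 0)

    def on_tool_start(self, serialized, input_str, **kwargs) -> None:
        self.tool_calls += 1
```

Passing one instance via callbacks= on graph and client invocations is one way
to get per-run totals.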
- Added support for running the CLI and an Ollama server via Docker
- Introduced tests for the local embeddings model and the standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER