Bulk commit of accumulated local changes on the dtarkent2-sys fork.
Spans agents, dataflows, llm_clients, graph orchestration, CLI, and
docs. Primary work areas:
- llm_clients/ — multi-LLM client layer (anthropic, google, openai,
factory, base, validators) for swappable provider support
- dataflows/alpaca_data.py — Alpaca integration alongside existing
alpha_vantage and y_finance flows
- agents/structured/ — portfolio, scoring, and tier1/2/3 layers
- agents/analysts, researchers, risk_mgmt — local prompt and logic
customizations
- graph/ — orchestration tweaks (parallel_analysts, propagation,
reflection, signal_processing, trading_graph)
- Alembic scaffolding inherited from prior commit
- Chainlit web UI design notes in docs/plans/
This is a single WIP snapshot to preserve work before any upstream
merge. History can be cleaned up with interactive rebase later.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Switch the default LLM in tradingagents/default_config.py from
nemotron-3-nano:30b-cloud (which was producing low-fidelity
structured output across services) to glm-5.1:cloud. Canonical
values live in .env, which is gitignored.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Alpaca's free tier provides 10k API calls/min and 7 years of history via the IEX feed.
Hybrid approach: Alpaca for price bars, snapshots, sector ETF perf,
and moving averages; yfinance for fundamentals (PE, margins, 13F).
- Add alpaca_data.py: bars, snapshots, MAs, sector ETF perf, news
- Update get_macro_indicators: sector ETF performance via Alpaca
- Update get_sector_rotation: compute relative strength vs SPY
- Update entry timing node: Alpaca MAs from actual bar data
- Add alpaca-py to requirements.txt
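The relative-strength computation in get_sector_rotation can be sketched as follows; the function name, signature, and sample numbers are illustrative, not the repo's actual API:

```python
def relative_strength(etf_closes: list[float], spy_closes: list[float]) -> float:
    """Return a sector ETF's period return minus SPY's return over the same
    window, given two equally sampled close-price series (illustrative helper)."""
    etf_ret = etf_closes[-1] / etf_closes[0] - 1.0
    spy_ret = spy_closes[-1] / spy_closes[0] - 1.0
    # Positive result: the sector outperformed SPY over the window.
    return etf_ret - spy_ret
```

A sector ETF that gained 10% while SPY gained 5% would score +0.05 under this definition.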
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Return error message to LLM instead of crashing the pipeline.
Also list supported indicators in the tool docstring.
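The return-an-error-to-the-LLM pattern can be sketched like this; the indicator set and function name are illustrative stand-ins, not the repo's actual tool:

```python
SUPPORTED_INDICATORS = {"sma", "ema", "rsi", "macd"}  # illustrative subset

def get_indicator(name: str, values: list[float]) -> str:
    """Tool wrapper that returns a descriptive error string instead of raising,
    so the calling LLM sees the failure and can retry with a valid indicator."""
    if name not in SUPPORTED_INDICATORS:
        return (f"Error: unsupported indicator '{name}'. "
                f"Supported: {', '.join(sorted(SUPPORTED_INDICATORS))}")
    if name == "sma":
        return f"sma={sum(values) / len(values):.2f}"
    return f"{name}: not implemented in this sketch"
```

Listing the supported names in the error (and in the docstring) gives the model enough context to self-correct on the next tool call.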
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Changed deep_think_llm and quick_think_llm from gpt-5.2/gpt-5-mini
to glm-5:cloud, and backend_url to ollama.com/v1.
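A minimal sketch of the changed fields; only the keys named above are shown, and the DEFAULT_CONFIG container name and the https:// scheme are assumptions:

```python
# Sketch of the relevant fields in tradingagents/default_config.py.
# Surrounding keys omitted; the dict name and URL scheme are assumed.
DEFAULT_CONFIG = {
    "deep_think_llm": "glm-5:cloud",
    "quick_think_llm": "glm-5:cloud",
    "backend_url": "https://ollama.com/v1",
}
```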
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
A synchronous ThreadPoolExecutor doesn't truly parallelize inside LangGraph nodes.
Switched to async functions with asyncio.to_thread() + asyncio.gather() —
the same pattern that works for the parallel analyst node.
Result: Research (Bull+Bear) and Risk (Agg+Con+Neu) now run concurrently.
Total analysis time reduced from ~450s to ~280s (~38% faster).
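The pattern can be sketched as follows, with a stand-in blocking worker in place of the real researcher/risk nodes:

```python
import asyncio
import time

def slow_worker(name: str, seconds: float) -> str:
    """Stand-in for a blocking LLM-backed node (e.g. the Bull or Bear researcher)."""
    time.sleep(seconds)
    return f"{name} done"

async def run_concurrently() -> list[str]:
    # asyncio.to_thread moves each blocking call onto a worker thread;
    # asyncio.gather awaits them together, so wall time is roughly the
    # slowest worker rather than the sum of all of them.
    return await asyncio.gather(
        asyncio.to_thread(slow_worker, "bull", 0.2),
        asyncio.to_thread(slow_worker, "bear", 0.2),
    )

results = asyncio.run(run_concurrently())
```

gather preserves argument order, so results map back to workers deterministically even though they finish concurrently.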
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use sync functions with pool.submit() instead of async+run_in_executor
to avoid potential asyncio event-loop interaction issues with LangGraph.
Added timing logs to diagnose parallelism.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
LangGraph state proxies serialize concurrent dict access, forcing
threads to run sequentially. Fix by snapshotting needed fields into
plain dicts before dispatching to ThreadPoolExecutor — same pattern
used by the working parallel analysts node.
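A hypothetical sketch of the snapshot pattern: copy what each worker needs out of the proxy-like state into plain dicts up front, so the threads never touch the shared state concurrently (field and function names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def run_risk_analysts(state: dict) -> dict:
    # Snapshot the fields the workers read into a plain dict BEFORE
    # dispatching; each thread then works on an independent copy and
    # never goes through the (serializing) state proxy.
    snapshot = {
        "market_report": state.get("market_report", ""),
        "news_report": state.get("news_report", ""),
    }

    def analyst(stance: str, data: dict) -> str:
        # Stand-in for a blocking LLM call that only reads the snapshot.
        return f"{stance}: saw {len(data)} fields"

    stances = ["aggressive", "conservative", "neutral"]
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(analyst, s, dict(snapshot)) for s in stances]
        return {s: f.result() for s, f in zip(stances, futures)}
```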
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Run Bull+Bear researchers concurrently and all 3 risk analysts
(Aggressive/Conservative/Neutral) concurrently instead of sequentially.
With max_debate_rounds=1, there's no back-and-forth, so parallel execution
is safe. Sequential mode is completely unchanged.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Run all 4 analysts (Market, Social, News, Fundamentals) concurrently
using asyncio.gather instead of sequentially. Each analyst gets its own
isolated message state and tool-calling loop. Cuts analyst phase from
~8-9 min to ~2-3 min (total analysis from ~11 min to ~4-5 min).
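A sketch of the isolation idea: each analyst coroutine owns a private message list, and gather runs all four concurrently (the message shapes and sleep are illustrative stand-ins for the real LLM/tool loop):

```python
import asyncio

async def run_analyst(name: str) -> dict:
    # Each analyst gets its own message list: no shared mutable state,
    # so one analyst's tool-calling loop cannot interleave with another's.
    messages = [{"role": "system", "content": f"You are the {name} analyst."}]
    await asyncio.sleep(0)  # stand-in for awaiting the LLM / tool calls
    messages.append({"role": "assistant", "content": f"{name} report"})
    return {"analyst": name, "messages": messages}

async def run_all() -> list[dict]:
    names = ["market", "social", "news", "fundamentals"]
    return await asyncio.gather(*(run_analyst(n) for n in names))

reports = asyncio.run(run_all())
```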
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add StatsCallbackHandler for tracking LLM calls, tool calls, and tokens
- Integrate callbacks into TradingAgentsGraph and all LLM clients
- Dynamic agent/report counts based on selected analysts
- Fix report completion counting (tied to agent completion)
- Replace hardcoded column indices with column name lookup
- Add mapping for all supported indicators to their expected CSV column names
- Handle missing columns gracefully with descriptive error messages
- Strip whitespace from header parsing for reliability
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Replace FinnHub with Alpha Vantage API in README documentation
- Implement comprehensive Alpha Vantage modules:
- Stock data (daily OHLCV with date filtering)
- Technical indicators (SMA, EMA, MACD, RSI, Bollinger Bands, ATR)
- Fundamental data (overview, balance sheet, cashflow, income statement)
- News and sentiment data with insider transactions
- Update news analyst tools to use ticker-based news search
- Integrate Alpha Vantage vendor methods into interface routing
- Maintain backward compatibility with existing vendor system
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER
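A hypothetical entrypoint sketch of the conditional launch; the function name and echo messages are illustrative, not the repo's actual files:

```shell
#!/bin/sh
# Start the Ollama server only when LLM_PROVIDER selects it; otherwise the
# container just runs the CLI against a remote provider.
maybe_start_ollama() {
  if [ "${LLM_PROVIDER:-}" = "ollama" ]; then
    echo "starting ollama server"
    # ollama serve &          # real launch elided in this sketch
  else
    echo "skipping ollama server (provider: ${LLM_PROVIDER:-unset})"
  fi
}
```

Gating on an environment variable keeps a single image usable for both local-Ollama and hosted-LLM runs.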