Commit Graph

130 Commits

Author SHA1 Message Date
dtarkent2-sys a913416613 chore: default deep_think_llm + quick_think_llm to glm-5.1:cloud
Switch the default LLM in tradingagents/default_config.py from
nemotron-3-nano:30b-cloud (which was producing low-fidelity
structured output across services) to glm-5.1:cloud. Canonical
values live in .env which is gitignored.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 10:06:37 -04:00
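A sketch of the relevant `tradingagents/default_config.py` fields after this change. Only the model names and the `ollama.com/v1` backend URL come from the commits in this log; the surrounding keys and the `llm_provider` value are assumptions, and the canonical values live in the gitignored `.env`:

```python
# Illustrative fragment of tradingagents/default_config.py after this commit.
# Keys other than the two model fields are assumed, not confirmed by the log.
DEFAULT_CONFIG = {
    "llm_provider": "ollama",
    "backend_url": "https://ollama.com/v1",   # per the earlier glm-5:cloud commit
    "deep_think_llm": "glm-5.1:cloud",
    "quick_think_llm": "glm-5.1:cloud",
}
```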
dtarkent2-sys 9550cea393 deps: add postgres drivers + sse-starlette + restructure requirements
- asyncpg + psycopg2-binary: same fix as gex-llm — service had
  DATABASE_URL and alembic env.py but no postgres driver installed
- sse-starlette: imported by app.py but never declared; the running
  container had it from a prior manual pip install. Rebuilding the
  image from canonical requirements dropped it and crashed the
  container with ModuleNotFoundError on startup.

Bundles a pre-existing uncommitted restructuring of requirements.txt
into pinned + categorized groups (Core, LLM Clients, Data, Analysis,
Database, CLI & UI, Testing, Utilities).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:42:04 -04:00
SharkQuant 2e37bc117d migrate off Railway, update Dockerfile and LLM clients 2026-04-11 22:51:22 -04:00
dtarkent2-sys ae18643103 chore: default model to nemotron-3-nano:30b-cloud
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-02 15:50:09 -04:00
dtarkent2-sys ac17d98974 fix: aggressive JSON fallback for LLMs that don't support structured output
- Use SystemMessage forcing JSON-only responses (no markdown)
- Provide concrete example JSON from schema fields
- Better JSON extraction: handles nested braces, code blocks, prose wrappers
- Fixes nemotron/minimax/other Ollama models returning prose instead of JSON

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-02 15:39:08 -04:00
dtarkent2-sys 1d3f5e9c86 fix: 10 reliability and observability fixes for trading pipeline
- invoke_structured() catches ValidationError with safe defaults
- ticker validation (empty/length)
- 60s per-LLM-call timeout
- event buffer capped at 5000
- recursion limit 50→25
- tier 2 low-confidence DataFlags
- tier 3 upstream confidence checks
- heartbeat JSON every 15s
- data source attribution in all prompts
- structured logging replaces print()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 21:08:01 +00:00
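The ValidationError-with-safe-defaults and 60s-timeout fixes could look roughly like this — a sketch only: the schema fields, the `llm_call` awaitable, and the exact signature of `invoke_structured()` are assumptions, not the repo's code:

```python
import asyncio
from pydantic import BaseModel, ValidationError

class TierOutput(BaseModel):
    # Defaults double as the "safe" fallback: neutral score,
    # zero confidence so downstream nodes can discount the result.
    score: int = 50
    confidence: float = 0.0
    summary: str = "no output (validation failed)"

async def invoke_structured(llm_call, schema=TierOutput, timeout=60.0):
    """Run one LLM call with a per-call timeout; on timeout or schema
    validation failure, return the schema's defaults instead of
    crashing the pipeline. `llm_call` is a placeholder awaitable
    returning a raw dict."""
    try:
        raw = await asyncio.wait_for(llm_call(), timeout=timeout)
        return schema.model_validate(raw)
    except (asyncio.TimeoutError, ValidationError):
        return schema()  # safe defaults, confidence=0.0
```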
dtarkent2-sys fe41a2dad9 Add .railwayignore and NVDA analysis outputs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 11:03:59 +00:00
dtarkent2-sys 84f7894768 feat: switch stock data to Alpaca Markets API with yfinance fallback
Alpaca provides 10k calls/min free with 7yr history via IEX feed.
Hybrid approach: Alpaca for price bars, snapshots, sector ETF perf,
and moving averages; yfinance for fundamentals (PE, margins, 13F).

- Add alpaca_data.py: bars, snapshots, MAs, sector ETF perf, news
- Update get_macro_indicators: sector ETF performance via Alpaca
- Update get_sector_rotation: compute relative strength vs SPY
- Update entry timing node: Alpaca MAs from actual bar data
- Add alpaca-py to requirements.txt

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 22:23:58 +00:00
dtarkent2-sys 5e8c81e738 fix: 6 audit issues — missing await, regime range, pct_out scaling, ticker validation, dead code, flag merge
1. app.py: await _update_in_progress (coroutine was silently dropped)
2. models.py + tier1.py: regime_score_adjustment range ±2→±10 (was negligible on 0-100 scale)
3. y_finance.py: pct_out * 100 (was fraction, displayed as percent)
4. app.py: ticker validation accepts dots/hyphens (BRK.B, BF-B)
5. portfolio.py: wire _fetch_peer_basics into theme substitution (was dead code)
6. setup.py: accumulate global_flags across parallel agents (dict.update was dropping them)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 21:56:38 +00:00
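Fix 4 above could be expressed as a single pattern: a plain alphanumeric check rejects class-share tickers like BRK.B and BF-B, so allow one optional dot/hyphen suffix. The length bounds here are assumptions, not from the commit:

```python
import re

# 1-6 alphanumeric chars, plus an optional .X or -X class suffix
# (bounds assumed for illustration; the app's actual limits may differ).
TICKER_RE = re.compile(r"^[A-Z0-9]{1,6}([.-][A-Z0-9]{1,4})?$")

def valid_ticker(raw: str) -> bool:
    """Accept BRK.B / BF-B style tickers that isalnum() would reject."""
    return bool(TICKER_RE.match(raw.strip().upper()))
```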
dtarkent2-sys ee80a42971 feat: add regime awareness, smart-money tracking, theme substitution & position replacement
- MacroRegimeOutput: risk_appetite, liquidity_regime, regime_score_adjustment (-2 to +2)
- InstitutionalFlowOutput: 13F holders, insider transactions, short interest trend, smart_money_signal
- Scoring node applies regime adjustment to master score
- Theme Substitution Engine: identifies best expression of theme, ranks peers, flags overlap
- Position Replacement Agent: compares candidate to theme alternatives, flags replacements
- Pipeline: Scoring → Portfolio Analysis → Debate → Decision
- Final decision narrative includes theme context and replacement flags

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 21:46:03 +00:00
dtarkent2-sys 7ad9e1d1ce feat: rebuild as structured Pydantic equity ranking engine
Replace generic LLM debate system with a tiered, macro-aware equity
ranking pipeline where every agent returns Pydantic structured output
and scoring is deterministic Python — no prose drives downstream decisions.

Architecture: Validation → Tier 1 (Macro+Liquidity parallel) →
Tier 2 (8 agents parallel) → Scoring (Archetype+MasterScore) →
Tier 3 (Bull/Bear debate + Risk + FinalDecision) → END

Master Score: 25% business_quality + 20% macro + 15% institutional_flow
+ 10% valuation + 10% entry_timing + 10% earnings_revisions + 5% backlog
+ 5% crowding. Hard veto gates, confidence penalties, position role
assignment all computed deterministically.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 21:30:46 +00:00
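The weighted sum above (weights total 100%) is the kind of scoring that stays deterministic Python rather than prose. A minimal sketch with the stated weights; the neutral-default handling for missing sub-scores is an assumption:

```python
# Fixed weights from the commit message; they sum to exactly 1.0.
WEIGHTS = {
    "business_quality": 0.25, "macro": 0.20, "institutional_flow": 0.15,
    "valuation": 0.10, "entry_timing": 0.10, "earnings_revisions": 0.10,
    "backlog": 0.05, "crowding": 0.05,
}

def master_score(subs: dict[str, float]) -> float:
    """Weighted sum of 0-100 sub-scores. Missing components fall back
    to a neutral 50 (an assumed convention, not from the commit)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(w * subs.get(k, 50.0) for k, w in WEIGHTS.items())
```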
dtarkent2-sys 24c90bdd5d Fix crash when LLM requests unsupported indicator (e.g. vwap)
Return error message to LLM instead of crashing the pipeline.
Also list supported indicators in the tool docstring.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 17:51:20 +00:00
dtarkent2-sys f8efe0113b Switch LLM models to glm-5:cloud via Ollama
Changed deep_think_llm and quick_think_llm from gpt-5.2/gpt-5-mini
to glm-5:cloud, backend_url to ollama.com/v1.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 05:05:33 +00:00
dtarkent2-sys d1fa7b6004 perf: switch to Ollama Cloud (deepseek-v3.1:671b-cloud)
Use Ollama Cloud GPU inference instead of self-hosted CPU Ollama.
1-3s per call vs 2-15 minutes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 12:18:08 +00:00
dtarkent2-sys 512aff3b40 perf: default to Anthropic Claude models instead of Groq Llama
- deep_think defaults to claude-sonnet-4-6
- quick_think defaults to claude-haiku-4-5-20251001
- LLM provider defaults to anthropic

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 12:06:22 +00:00
dtarkent2-sys 055a8159a4 Add debug logging for LLM provider and type
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 02:30:37 +00:00
dtarkent2-sys c6bf2b570b Switch from Anthropic to Groq for LLM calls
Use Groq's free OpenAI-compatible API instead of Anthropic Claude
to avoid API credit costs. Defaults to llama-3.3-70b-versatile.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 23:33:32 +00:00
dtarkent2-sys e196f5ee36 Add back questionary — also imported transitively by cli/main.py
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 03:36:16 +00:00
dtarkent2-sys 563970ade8 Add back typer, rich, python-dotenv — required by cli/main.py imports
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 03:29:57 +00:00
dtarkent2-sys e0ed485098 Fix .dockerignore: don't exclude requirements.txt
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 03:22:31 +00:00
dtarkent2-sys 3ac1c5ad3d Harden security, fix memory leak, clean up deps
- Add API key auth (AGENTS_API_KEY env var) on /analyze endpoints
- Add CORS_ORIGINS env var instead of hardcoded wildcard
- Add memory cleanup (30min TTL) and concurrency semaphore (max 3)
- Add 10-minute analysis timeout
- Fix ticker validation (alphanumeric check)
- Remove unused deps (redis, backtrader, parsel, rich, typer, questionary)
- Fix pyproject.toml: replace chainlit with actual FastAPI deps
- Add .dockerignore, add eval_results/ to .gitignore

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 03:17:11 +00:00
dtarkent2-sys ba39a81e82 Fix parallel research/risk: use async+asyncio.gather instead of ThreadPoolExecutor
Sync ThreadPoolExecutor doesn't truly parallelize inside LangGraph nodes.
Switched to async functions with asyncio.to_thread() + asyncio.gather() —
the same pattern that works for the parallel analyst node.

Result: Research (Bull+Bear) and Risk (Agg+Con+Neu) now run concurrently.
Total analysis time reduced from ~450s to ~280s (~38% faster).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 18:01:54 +00:00
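The working pattern described here — snapshot state into plain dicts, wrap each sync call in `asyncio.to_thread()`, join with `asyncio.gather()` — sketched with stand-in researcher functions (the real ones are LLM-backed):

```python
import asyncio
import time

def bull_case(snapshot: dict) -> str:   # stand-ins for the sync,
    time.sleep(0.2)                     # LLM-backed researcher calls
    return "bull"

def bear_case(snapshot: dict) -> str:
    time.sleep(0.2)
    return "bear"

async def research_node(state: dict) -> dict:
    # Snapshot into a plain dict so no LangGraph state proxy crosses
    # the thread boundary and serializes access.
    snapshot = dict(state)
    bull, bear = await asyncio.gather(
        asyncio.to_thread(bull_case, snapshot),
        asyncio.to_thread(bear_case, snapshot),
    )
    return {"bull": bull, "bear": bear}
```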
dtarkent2-sys 12e0d507c2 Switch parallel timing logs from logger.info to print for Railway visibility
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 17:18:46 +00:00
dtarkent2-sys 2484bd89e4 Switch parallel research/risk to sync ThreadPoolExecutor with timing logs
Use sync functions with pool.submit() instead of async+run_in_executor
to avoid potential asyncio event-loop interaction issues with LangGraph.
Added timing logs to diagnose parallelism.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 15:39:25 +00:00
dtarkent2-sys 7ff05328a8 Fix parallel research/risk: snapshot state to avoid proxy serialization
LangGraph state proxies serialize concurrent dict access, forcing
threads to run sequentially. Fix by snapshotting needed fields into
plain dicts before dispatching to ThreadPoolExecutor — same pattern
used by the working parallel analysts node.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 15:18:43 +00:00
dtarkent2-sys 3cd0c19b35 Parallelize research & risk debate stages for ~25% faster analysis
Run Bull+Bear researchers concurrently and all 3 risk analysts
(Aggressive/Conservative/Neutral) concurrently instead of sequentially.
With max_debate_rounds=1, there's no back-and-forth so parallel execution
is safe. Sequential mode is completely unchanged.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 14:45:59 +00:00
dtarkent2-sys aa654c9425 Emit final agent_update events so all dots turn green at completion
The final decision block set all agents to "completed" in buf but never
emitted agent_update SSE events for them. This left the Risk stage dots
cyan (active) and the Decision dot gray in the UI. Now emits agent_update
for any agents not yet shown as completed before the decision event.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 11:53:03 +00:00
dtarkent2-sys 67463a2b99 Fix agent status reset bug in stream_mode=values
With stream_mode="values", each chunk contains the full accumulated state.
The debate and risk sections were checking data fields (bull/bear history,
aggressive/conservative/neutral history) without guarding against re-processing,
causing completed agents to be reset to "in_progress" on every subsequent
chunk. This made agent and report counts appear stuck at 5/12 and 4/7.

Fix: move the _emitted flag guard to the outer if-block so the entire
section is skipped once its event has been emitted.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 11:50:32 +00:00
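The fix shape, reduced to its essence: with `stream_mode="values"` every chunk carries the full accumulated state, so the emitted-guard must sit on the outer `if`, skipping the whole section once its event has fired. Field and event names here are illustrative, not the app's exact ones:

```python
def process_chunk(chunk: dict, emitted: set, events: list) -> None:
    """Handle one stream_mode="values" chunk. Guarding the OUTER if
    means later chunks (which still contain the same history fields)
    can't reset a completed agent back to in_progress."""
    if chunk.get("bull_history") and "research" not in emitted:
        emitted.add("research")
        events.append(("research", "completed"))
    if chunk.get("risk_history") and "risk" not in emitted:
        emitted.add("risk")
        events.append(("risk", "completed"))
```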
dtarkent2-sys 47771849ca Clean up debug logging and fix reports count
Remove verbose debug prints added during parallel analysts development.
Fix reports showing 4/7 by updating buf.report_sections for investment_plan,
trader_investment_plan, and final_trade_decision (previously only analyst
reports were tracked).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 11:41:01 +00:00
dtarkent2-sys 05b319c101 debug: add chunk-level logging in stream loop 2026-02-20 11:26:37 +00:00
dtarkent2-sys 64defb3939 debug: add logging to trace analysis execution 2026-02-20 11:19:35 +00:00
dtarkent2-sys 223879bc04 feat: parallelize analyst agents for ~3x speedup
Run all 4 analysts (Market, Social, News, Fundamentals) concurrently
using asyncio.gather instead of sequentially. Each analyst gets its own
isolated message state and tool-calling loop. Cuts analyst phase from
~8-9 min to ~2-3 min (total analysis from ~11 min to ~4-5 min).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 11:13:16 +00:00
dtarkent2-sys f5519b9efe fix: add SSE replay buffer + fix research agent status tracking
- Replace global update_research_team_status() with local buf calls
  (was updating CLI's global buffer, not analysis-specific one)
- Add replay buffer: all events stored in memory per analysis
- Support ?last_event=N query param for reconnection replay
- Send event IDs so browser can track position
- Mark analysis as done so replay works after completion

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 03:48:22 +00:00
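A minimal sketch of the replay-buffer idea — monotonically increasing event IDs stored per analysis, with `?last_event=N` resuming after N and replay still working once the analysis is done. Class and method names are assumptions:

```python
import asyncio

class ReplayBuffer:
    """Per-analysis event log for SSE reconnects."""
    def __init__(self):
        self.events: list[tuple[int, str]] = []  # (event_id, data)
        self.new_event = asyncio.Condition()
        self.done = False

    async def publish(self, data: str) -> None:
        async with self.new_event:
            self.events.append((len(self.events) + 1, data))
            self.new_event.notify_all()

    async def finish(self) -> None:
        async with self.new_event:
            self.done = True
            self.new_event.notify_all()

    async def subscribe(self, last_event: int = 0):
        """Replay everything after last_event, then follow live events
        until the analysis is marked done."""
        idx = last_event
        while True:
            async with self.new_event:
                while idx >= len(self.events) and not self.done:
                    await self.new_event.wait()
                if idx >= len(self.events) and self.done:
                    return
                event = self.events[idx]
            idx += 1
            yield event
```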
dtarkent2-sys 777226722a fix: add SSE heartbeat to prevent Railway proxy timeout
Railway kills idle connections after ~30s. During long LLM calls
between agent stages, the SSE stream goes silent and gets dropped.
Now sends heartbeat events every 15s to keep the connection alive.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 03:25:53 +00:00
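The heartbeat fix amounts to a timeout on the event wait: if nothing arrives within the interval, emit an SSE comment frame so the proxy's ~30s idle cutoff never fires. A sketch with an assumed queue-based event source:

```python
import asyncio

HEARTBEAT_SECS = 15

async def sse_stream(event_queue: asyncio.Queue):
    """Yield SSE frames from a queue; silent gaps longer than the
    heartbeat interval produce a comment frame (": ..." lines are
    ignored by EventSource clients but keep the connection alive)."""
    while True:
        try:
            event = await asyncio.wait_for(
                event_queue.get(), timeout=HEARTBEAT_SECS
            )
        except asyncio.TimeoutError:
            yield ": heartbeat\n\n"
            continue
        if event is None:          # sentinel: analysis finished
            return
        yield f"event: agent_update\ndata: {event}\n\n"
```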
dtarkent2-sys 1ded94388e Fix healthcheck path to /health for FastAPI 2026-02-20 03:17:55 +00:00
dtarkent2-sys 52228414ed Replace Chainlit with FastAPI SSE backend
Swap Chainlit chatbot UI for a minimal FastAPI service with:
- POST /analyze to start analysis
- GET /analyze/{id}/stream for SSE progress events
- GET /health for Railway healthcheck

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 02:43:24 +00:00
dtarkent2-sys ac782d179d feat: rebuild Chainlit UI to match CLI experience
Replaces the barebones web UI with one that mirrors the CLI:
- Agent status table with team/agent/status tracking
- Reuses CLI's MessageBuffer, update_analyst_statuses, classify_message_type
- Shows full debate transcripts (Bull/Bear, Risk team)
- Live stats (LLM calls, tokens, elapsed time)
- Collapsible Steps for each phase with full report content

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 02:04:13 +00:00
dtarkent2-sys 76f1e0abf0 fix: use correct Claude model ID (claude-sonnet-4-6)
The old model ID claude-sonnet-4-5-20241022 returns 404. Updated to
the current claude-sonnet-4-6 for the deep thinking model.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 01:52:46 +00:00
dtarkent2-sys 979ceeb89a fix: remove outdated Chainlit config, let it auto-generate
Chainlit 2.9.6 rejects the manually-created config.toml as outdated.
Let it generate its own at runtime.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 00:57:02 +00:00
dtarkent2-sys eade96f1c9 feat: add Chainlit web UI + Dockerfile for Railway deployment
Adds a Chainlit-based web interface that wraps TradingAgentsGraph,
streaming analyst reports, research debates, and final decisions
to the browser in real-time. Configured for Anthropic Claude models.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 00:52:45 +00:00
dtarkent2-sys 48ef57715e docs: add Chainlit web UI design for Railway deployment
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 00:41:46 +00:00
Yijia Xiao 5fec171a1e chore: add build-system config and update version to 0.2.0 2026-02-07 08:26:51 +00:00
Yijia Xiao 50c82a25b5 chore: consolidate dependencies to pyproject.toml, remove setup.py 2026-02-07 08:18:46 +00:00
Yijia Xiao 8b3068d091 Merge pull request #335 from RinZ27/security/patch-langchain-core-vulnerability
security: Patch LangGrinch vulnerability (CVE-2025-68664) (#335)
2026-02-07 00:04:44 -08:00
RinZ27 66a02b3193 security: patch LangGrinch vulnerability in langchain-core 2026-02-05 11:01:53 +07:00
Yijia Xiao e9470b69c4 TradingAgents v0.2.0: Multi-Provider LLM Support & Optimizations (#331)
Release v0.2.0: Multi-Provider LLM Support
2026-02-03 23:13:43 -08:00
Yijia Xiao b4b133eb2d fix: add typer dependency 2026-02-04 00:39:15 +00:00
Yijia Xiao 80aab35119 docs: update README for v0.2.0 release
- TradingAgents v0.2.0 release
- Trading-R1 announcement
- Multi-provider LLM documentation
2026-02-04 00:13:10 +00:00
Yijia Xiao 393d4c6a1b chore: add data_cache to .gitignore 2026-02-03 23:30:55 +00:00
Yijia Xiao aba1880c8c chore: update .gitignore to official Python template 2026-02-03 23:16:38 +00:00