Move Claude Agent to the top of the provider list so it's the default
highlight. Drop the redundant "(via Claude Code, Max subscription)"
suffix on each model entry — the provider step already makes the auth
path clear — and expose all three tiers (Opus / Sonnet / Haiku) in
both the quick and deep lists, each ordered with the sensible pick first.
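The reordered picker can be sketched as plain list data (names here are illustrative; the repo's actual constants and model ids may differ):

```python
# Illustrative sketch of the reordered picker data; the actual constants
# and ids in the CLI may differ. Claude Agent leads so it is the default
# highlight, and each tier list leads with the sensible pick.
PROVIDERS = [
    "Claude Agent",  # moved to the top: default highlight
    "OpenAI",
    "Anthropic",
    "Google",
]

# Quick thinking leads with the fast tier; deep thinking leads with Opus.
QUICK_MODELS = ["Haiku", "Sonnet", "Opus"]
DEEP_MODELS = ["Opus", "Sonnet", "Haiku"]
```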
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CLI: add "Claude Agent (Max subscription, no API key)" to the provider
picker and register opus/sonnet/haiku model aliases so the CLI flow picks
up the provider registered in factory.py. No effort-level step since the
SDK doesn't expose that knob.
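A minimal sketch of the alias registration (the resolver name, mapping shape, and full model ids are assumptions, not the actual factory.py API):

```python
# Hypothetical alias table; the real registration in factory.py may differ,
# and the full model ids are assumed from Anthropic's naming convention.
MODEL_ALIASES = {
    "opus": "claude-opus-4-6",
    "sonnet": "claude-sonnet-4-6",
    "haiku": "claude-haiku-4-5",
}

def resolve_model(name: str) -> str:
    """Resolve a short alias to a full model id; pass unknown names through."""
    return MODEL_ALIASES.get(name.lower(), name)
```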
Analyst runner: build a concrete user request from company_of_interest +
trade_date instead of echoing the terse ("human", ticker) initial state;
the SDK had nothing to act on when the prompt was just "NVDA". Add opt-in
file-based debug logging (TRADINGAGENTS_CLAUDE_AGENT_DEBUG=1 → /tmp/...log)
for observability during long adaptive-thinking blocks.
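Sketched in Python, assuming the field names from the text above (the helper names and the exact log path are hypothetical):

```python
import os

def build_user_request(company_of_interest: str, trade_date: str) -> str:
    """Spell out the task instead of echoing a bare ticker like "NVDA"."""
    return (
        f"Analyze {company_of_interest} as of {trade_date} and produce "
        "a trading recommendation with supporting evidence."
    )

# Hypothetical log path; the real file under /tmp may be named differently.
DEBUG_LOG = "/tmp/tradingagents_claude_agent.log"

def debug_log(message: str, path: str = DEBUG_LOG) -> None:
    """Append a line only when TRADINGAGENTS_CLAUDE_AGENT_DEBUG=1 is set."""
    if os.environ.get("TRADINGAGENTS_CLAUDE_AGENT_DEBUG") == "1":
        with open(path, "a") as f:
            f.write(message + "\n")
```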
Also adds main_claude_agent.py as a ready-to-run example for a Max-only
end-to-end invocation (verified 12-min NVDA run → SELL).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add effort parameter (high/medium/low) for Claude 4.5+ and 4.6 models,
consistent with OpenAI reasoning_effort and Google thinking_level.
Also add content normalization for Anthropic responses.
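A sketch of how the shared knob could map onto each provider's native parameter, plus the response normalization (the function names and the Anthropic kwarg are assumptions; the OpenAI and Google names come from the text above):

```python
# Sketch only: "reasoning_effort" and "thinking_level" are named above;
# the Anthropic key and both function names are assumptions.
EFFORT_LEVELS = ("high", "medium", "low")

def effort_kwargs(provider: str, effort: str) -> dict:
    """Translate a common effort level into a provider-specific kwarg."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"effort must be one of {EFFORT_LEVELS}")
    key = {
        "openai": "reasoning_effort",
        "google": "thinking_level",
        "anthropic": "effort",
    }[provider]
    return {key: effort}

def normalize_content(content) -> str:
    """Flatten Anthropic's list-of-blocks content into one plain string."""
    if isinstance(content, str):
        return content
    return "".join(
        block.get("text", "") for block in content if isinstance(block, dict)
    )
```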
- OpenAI: add GPT-5.4, GPT-5.4 Pro; remove o-series and legacy GPT-4o
- Anthropic: add Claude Opus 4.6, Sonnet 4.6; remove legacy 4.1/4.0/3.x
- Google: add Gemini 3.1 Pro, 3.1 Flash Lite; remove deprecated
gemini-3-pro-preview and Gemini 2.0 series
- xAI: clean up model list to match current API
- Simplify UnifiedChatOpenAI GPT-5 temperature handling
- Add missing tradingagents/__init__.py (fixes building the package via pip install)
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER
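The conditional launch could look roughly like this (the helper and its place in the Docker entrypoint are assumptions):

```python
import os
import shutil
import subprocess

def maybe_start_ollama():
    """Launch `ollama serve` only when LLM_PROVIDER selects Ollama."""
    if os.environ.get("LLM_PROVIDER", "").lower() != "ollama":
        return None  # other providers need no local server
    if shutil.which("ollama") is None:
        raise RuntimeError("LLM_PROVIDER=ollama but the ollama binary is missing")
    return subprocess.Popen(["ollama", "serve"])
```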