Addresses gemini-code-assist review on #565:
- Type hints: int | None for nullable look_back_days/limit on get_global_news
in alpha_vantage_news.py, yfinance_news.py, and news_data_tools.py.
Adds explicit str type hint to curr_date in alpha_vantage_news.py.
- Config override: news_data_tools.get_global_news no longer hardcodes
look_back_days=7 / limit=5; defaults to None so DEFAULT_CONFIG values
flow through to the dataflow layer.
- Cross-vendor consistency: alpha_vantage_news.get_news now respects the
news_article_limit config (parity with yfinance_news.get_news_yfinance).
- Fallback consistency: alpha_vantage_news.get_global_news fallback now
matches DEFAULT_CONFIG (10) instead of the legacy 50.
Three parameters were hardcoded with no way to override via config:
- count=20 in yf.Ticker.get_news() inside get_news_yfinance
- look_back_days=7 default in get_global_news_yfinance
- limit=10 default in get_global_news_yfinance
Add three new keys to DEFAULT_CONFIG:
- news_article_limit (default 20): max articles per ticker for get_news
- global_news_lookback_days (default 7): lookback window for get_global_news
- global_news_article_limit (default 10): max global articles for get_global_news
Both yfinance and alpha_vantage news implementations now read these values
from config, allowing users running weekly/monthly strategies to increase
coverage without modifying library source code.
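As a sketch of the intended override path (key names and defaults from the list above; DEFAULT_CONFIG's other entries omitted), a weekly/monthly strategy could copy the defaults and raise the limits:

```python
from copy import deepcopy

# Assumed shape: DEFAULT_CONFIG is a plain dict exposed by the library.
DEFAULT_CONFIG = {
    "news_article_limit": 20,           # max articles per ticker for get_news
    "global_news_lookback_days": 7,     # lookback window for get_global_news
    "global_news_article_limit": 10,    # max global articles for get_global_news
}

config = deepcopy(DEFAULT_CONFIG)       # deep copy so the defaults stay intact
config["global_news_lookback_days"] = 30   # e.g. a monthly strategy
config["global_news_article_limit"] = 25
```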
Apply review suggestions: use concise `or` pattern for API key
resolution, consolidate tests into parameterized subTest, move
import to module level per PEP 8.
GoogleClient now accepts the unified `api_key` parameter used by
OpenAI and Anthropic clients, mapping it to the provider-specific
`google_api_key` that ChatGoogleGenerativeAI expects. Legacy
`google_api_key` still works for backward compatibility.
Resolves TODO.md item #2 (inconsistent parameter handling).
Add effort parameter (high/medium/low) for Claude 4.5+ and 4.6 models,
consistent with OpenAI reasoning_effort and Google thinking_level.
Also add content normalization for Anthropic responses.
- Point requirements.txt to pyproject.toml as single source of truth
- Resolve welcome.txt path relative to module for CLI portability
- Include cli/static files in package build
- Extract shared normalize_content for OpenAI Responses API and
Gemini 3 list-format responses into base_client.py
- Update README install and CLI usage instructions
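A hedged sketch of what a shared normalizer like this might do (the list-of-parts shapes are assumptions based on how the OpenAI Responses API and Gemini 3 return content; the real helper in base_client.py may differ):

```python
def normalize_content(content) -> str:
    # Providers may return a plain string or a list of typed parts
    # (strings, or dicts carrying a "text" field). Collapse either
    # shape into a single string; fall back to str() for anything else.
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        parts = []
        for part in content:
            if isinstance(part, str):
                parts.append(part)
            elif isinstance(part, dict) and "text" in part:
                parts.append(part["text"])
        return "".join(parts)
    return str(content)
```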
Enable use_responses_api for native OpenAI provider, which supports
reasoning_effort with function tools across all model families.
Removes the UnifiedChatOpenAI subclass workaround.
Closes #403
- Add http_client and http_async_client parameters to all LLM clients
- OpenAIClient, GoogleClient, AnthropicClient now support custom httpx clients
- Fixes SSL certificate verification errors on Windows Conda environments
- Users can now pass custom httpx.Client with verify=False or custom certs
Fixes #369
- OpenAI: add GPT-5.4, GPT-5.4 Pro; remove o-series and legacy GPT-4o
- Anthropic: add Claude Opus 4.6, Sonnet 4.6; remove legacy 4.1/4.0/3.x
- Google: add Gemini 3.1 Pro, 3.1 Flash Lite; remove deprecated
gemini-3-pro-preview and Gemini 2.0 series
- xAI: clean up model list to match current API
- Simplify UnifiedChatOpenAI GPT-5 temperature handling
- Add missing tradingagents/__init__.py (fixes broken pip install builds)
Add _clean_dataframe() to normalize stock DataFrames before stockstats:
coerce invalid dates/prices, drop rows missing Close, fill price gaps.
Also add on_bad_lines="skip" to all cached CSV reads.
LLMs (especially smaller models) sometimes pass multiple indicator
names as a single comma-separated string instead of making separate
tool calls. Split and process each individually at the tool boundary.
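The split at the tool boundary can be sketched as (function name illustrative):

```python
def expand_indicator_names(indicator: str) -> list[str]:
    # Tolerate "rsi, macd, boll" passed as one argument by splitting on
    # commas and discarding empty fragments and stray whitespace.
    return [name.strip() for name in indicator.split(",") if name.strip()]
```

Each resulting name is then processed as if it had arrived in its own tool call.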
InvestDebateState was missing bull_history, bear_history, judge_decision.
RiskDebateState was missing aggressive_history, conservative_history,
neutral_history, latest_speaker, judge_decision. This caused KeyError
in _log_state() and reflection, especially with edge-case config values.
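A sketch of the fix's shape (factory names are hypothetical; only the fields named above are shown, and any pre-existing fields are omitted): initialize every key the downstream readers touch so a lookup can never raise KeyError.

```python
def default_invest_debate_state() -> dict:
    return {"bull_history": "", "bear_history": "", "judge_decision": ""}

def default_risk_debate_state() -> dict:
    return {"aggressive_history": "", "conservative_history": "",
            "neutral_history": "", "latest_speaker": "", "judge_decision": ""}
```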
Prevents UnicodeEncodeError on Windows where the default encoding
(cp1252/gbk) cannot handle Unicode characters in LLM output.
Closes #77, closes #114, closes #126, closes #215, closes #332