Changes:
- C1: Increase Redis TTL for completed tasks from 1 hour to 4 hours (prevents premature report deletion)
- Word count: Change all analyst prompts from 800-1500 words to 500-1000 words
- Output filter: Update word count validation to the 500-1000 range; hide specific word counts in warnings
- Remove character counts from output to improve clarity
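The TTL change above (C1) can be sketched roughly as follows. This is a minimal illustration, not the actual service code: the key format and function name are hypothetical, and the client is any object with a redis-py-style `setex(key, ttl_seconds, value)` method.

```python
# Hypothetical sketch of C1: completed-task reports now expire after
# 4 hours instead of 1 hour, so users have time to view them.
COMPLETED_TASK_TTL_SECONDS = 4 * 60 * 60  # was 1 * 60 * 60

def store_completed_report(client, task_id: str, report: str) -> None:
    """Persist a finished report under a key that expires after 4 hours.

    `client` is duck-typed; with redis-py it would be a redis.Redis instance.
    """
    client.setex(f"task:{task_id}:report", COMPLETED_TASK_TTL_SECONDS, report)
```

With redis-py, `setex` sets the value and the expiry atomically, so no separate `EXPIRE` call is needed.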
These changes address:
1. Reports being deleted after 1 hour (now 4 hours)
2. Inconsistent word counts causing reruns (now a strict 500-1000 word range)
3. Output showing specific word counts (now just pass/warning)
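The output-filter behavior in points 2 and 3 could look like the sketch below (function name and exact tokenization are assumptions, not the project's code): the validator enforces the strict 500-1000 range but reports only pass/warning, never the specific count.

```python
import re

# Strict range from the prompt update; exact counts are never surfaced.
WORD_MIN, WORD_MAX = 500, 1000

def validate_report_length(text: str) -> str:
    """Return 'pass' or 'warning' without leaking the exact word count."""
    n_words = len(re.findall(r"\S+", text))
    if WORD_MIN <= n_words <= WORD_MAX:
        return "pass"
    # Deliberately vague: no word or character counts in the message.
    return "warning"
```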
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Pass max_debate_rounds and max_risk_discuss_rounds from config to ConditionalLogic
- Pass max_recur_limit from config to Propagator
- Increase default recursion_limit from 100 to 200 in default_config.py
- Increase Propagator default max_recur_limit from 100 to 200
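A rough sketch of the wiring described above. `ConditionalLogic` and `Propagator` are the classes named in the bullets, but their constructor signatures and the config keys shown here are assumptions for illustration.

```python
# Assumed shape of default_config.py after the change: recursion limits
# raised from 100 to 200, round counts threaded through from config.
DEFAULT_CONFIG = {
    "max_debate_rounds": 1,
    "max_risk_discuss_rounds": 1,
    "max_recur_limit": 200,   # raised from 100
    "recursion_limit": 200,   # raised from 100
}

class ConditionalLogic:
    def __init__(self, max_debate_rounds=1, max_risk_discuss_rounds=1):
        self.max_debate_rounds = max_debate_rounds
        self.max_risk_discuss_rounds = max_risk_discuss_rounds

class Propagator:
    def __init__(self, max_recur_limit=200):  # default raised from 100
        self.max_recur_limit = max_recur_limit

def build_graph_components(config=DEFAULT_CONFIG):
    """Pass config values down instead of relying on hardcoded defaults."""
    logic = ConditionalLogic(
        max_debate_rounds=config["max_debate_rounds"],
        max_risk_discuss_rounds=config["max_risk_discuss_rounds"],
    )
    propagator = Propagator(max_recur_limit=config["max_recur_limit"])
    return logic, propagator
```

The point of the change is that a user-supplied config now overrides both round counts and the recursion limit in one place.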
Also includes earlier fixes:
- Add 365-day minimum date range validation to get_stock_data tool
- Update market analyst prompt to specify 1-year data requirement
- Initialize all debate state fields (bull_history, bear_history, judge_decision, etc.)
- Add report completeness logging in trading_service.py
- Add debug logging in frontend results page
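The 365-day minimum validation on `get_stock_data` could be sketched like this (the helper name and error message are illustrative; the real tool's signature is not shown in the log):

```python
from datetime import date, timedelta

# The market analyst prompt now requires a full year of data.
MIN_RANGE_DAYS = 365

def check_date_range(start: date, end: date) -> None:
    """Reject requests whose window is shorter than one year."""
    if end - start < timedelta(days=MIN_RANGE_DAYS):
        raise ValueError("date range must cover at least 365 days")
```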
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Replace FinnHub with Alpha Vantage API in README documentation
- Implement comprehensive Alpha Vantage modules:
  - Stock data (daily OHLCV with date filtering)
  - Technical indicators (SMA, EMA, MACD, RSI, Bollinger Bands, ATR)
  - Fundamental data (overview, balance sheet, cashflow, income statement)
  - News and sentiment data with insider transactions
- Update news analyst tools to use ticker-based news search
- Integrate Alpha Vantage vendor methods into interface routing
- Maintain backward compatibility with existing vendor system
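The date-filtering step for daily stock data might look like the sketch below. It assumes the JSON shape of Alpha Vantage's `TIME_SERIES_DAILY` endpoint, where `"Time Series (Daily)"` maps ISO dates to OHLCV dicts; the function name is illustrative and network fetching is omitted.

```python
def filter_daily_series(payload: dict, start: str, end: str) -> dict:
    """Keep only the daily bars whose date falls in [start, end].

    `payload` is a parsed TIME_SERIES_DAILY response; `start`/`end` are
    ISO-8601 date strings, which compare correctly as plain strings.
    """
    series = payload.get("Time Series (Daily)", {})
    return {d: bar for d, bar in series.items() if start <= d <= end}
```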
Co-Authored-By: Claude <noreply@anthropic.com>
- Add support for running the CLI and Ollama server via Docker
- Introduce tests for the local embeddings model and standalone Docker setup
- Enable conditional Ollama server launch via LLM_PROVIDER
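The conditional launch could be a small guard in the container entrypoint, along these lines (the `LLM_PROVIDER` variable comes from the change above; the script layout and the `"ollama"` value it checks for are assumptions):

```shell
#!/bin/sh
# Entrypoint sketch: start the Ollama server only when the configured
# provider is ollama; otherwise skip straight to the main process.
should_start_ollama() {
    [ "${LLM_PROVIDER:-}" = "ollama" ]
}

if should_start_ollama; then
    echo "starting ollama"
    # ollama serve &   # launched in the background in the real container
fi
```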