Two bugs fixed:
1. Portfolio Manager uses wrong state key (critical):
portfolio_manager.py read state['investment_plan'] (Research Manager's
raw output) instead of state['trader_investment_plan'] (Trader's
memory-refined output). This silently bypassed the Trader agent's
entire contribution — including lessons learned from past trades —
in the final risk assessment and trading decision.
The Trader agent applies FinancialSituationMemory to refine the
Research Manager's plan with insights from similar past situations.
By reading the pre-refinement plan, the Portfolio Manager made its
final decision without this critical context, effectively making the
Trader node a no-op in the decision pipeline.
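A minimal sketch of the fix, assuming a LangGraph-style state dict; the node and output key names below are illustrative, but the two state keys are the ones named above:

```python
def portfolio_manager_node(state: dict) -> dict:
    # Before: read the Research Manager's raw plan, silently skipping
    # the Trader's memory-refined contribution.
    # plan = state["investment_plan"]        # BUG: pre-refinement plan
    # After: read the Trader's refined plan.
    plan = state["trader_investment_plan"]   # fix: post-refinement plan
    decision = f"FINAL DECISION based on: {plan}"
    return {"final_trade_decision": decision}
```

With the fix, the final decision is derived from the Trader's output, so the Trader node is no longer a no-op.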
2. _get_stock_stats_bulk() missing pandas import:
y_finance.py calls pd.isna() in _get_stock_stats_bulk() but never
imports pandas as pd, so every call raises NameError. The fallback
in get_stock_stats_indicators_window() swallows the error silently,
which defeats the bulk optimization: O(n) individual API calls, one
per date, instead of a single O(1) bulk computation.
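An illustrative reduction of the bulk path, assuming the real _get_stock_stats_bulk() computes an indicator column once and then looks up each requested date; the function body here is a hypothetical simplification, not the repo's code:

```python
import pandas as pd  # the missing import: pd.isna() was called without it

def get_stats_bulk(prices: pd.Series, dates: list[str], window: int = 3) -> dict:
    # One rolling pass over the whole series (the "bulk" part), then
    # per-date lookups instead of one API call per date.
    sma = prices.rolling(window).mean()
    out = {}
    for d in dates:
        val = sma.get(d)
        if val is None or pd.isna(val):  # NameError here without the import
            continue  # skip warm-up rows that have no full window yet
        out[d] = float(val)
    return out
```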
Added 7 tests covering both fixes.
LLMs (especially smaller models) sometimes pass multiple indicator
names as a single comma-separated string instead of making separate
tool calls. Split and process each individually at the tool boundary.
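A sketch of that tool-boundary guard; get_indicator and _lookup are hypothetical names standing in for the real tool function and per-indicator computation:

```python
def _lookup(symbol: str, name: str, date: str) -> str:
    # stand-in for the real per-indicator calculation
    return f"<{name} for {symbol} on {date}>"

def get_indicator(symbol: str, indicator: str, date: str) -> str:
    # Tolerate indicator="rsi, macd" from a single tool call: split on
    # commas and process each name as if it had its own call.
    names = [n.strip() for n in indicator.split(",") if n.strip()]
    return "\n".join(f"{n}: {_lookup(symbol, n, date)}" for n in names)
```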
- Replace FinnHub with Alpha Vantage API in README documentation
- Implement comprehensive Alpha Vantage modules:
- Stock data (daily OHLCV with date filtering)
- Technical indicators (SMA, EMA, MACD, RSI, Bollinger Bands, ATR)
- Fundamental data (overview, balance sheet, cashflow, income statement)
- News and sentiment data with insider transactions
- Update news analyst tools to use ticker-based news search
- Integrate Alpha Vantage vendor methods into interface routing
- Maintain backward compatibility with existing vendor system
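A hedged sketch of the date-filtering step from the stock-data bullet, assuming Alpha Vantage's TIME_SERIES_DAILY JSON shape (one entry per date under the "Time Series (Daily)" key); the HTTP fetch itself is omitted so the filtering logic stays self-contained:

```python
def filter_daily_window(payload: dict, start: str, end: str) -> dict:
    # payload: parsed JSON from the TIME_SERIES_DAILY endpoint.
    series = payload.get("Time Series (Daily)", {})
    # ISO dates (YYYY-MM-DD) compare correctly as plain strings,
    # so lexicographic bounds implement the date window.
    return {d: bars for d, bars in series.items() if start <= d <= end}
```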
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER