- **Dynamic Parameter Tuning (The Learning Loop)**: Implemented the full self-reflection cycle. The Reflector agent now parses its own advice into JSON (`rsi_period`, `stop_loss_pct`), persists it to `data_cache/runtime_config.json`, and the Market Analyst loads it to tune the Regime Detector in real time (see the sketches after this list).
- **Audit Archival**: Every tuning event is now archived to `results/{TICKER}/{DATE}/runtime_config.json` for historical auditing, ensuring we can reproduce why parameters changed on any given day.
- **Atomic Persistence**: Implemented `agent_utils.write_json_atomic` to prevent race conditions during config saves (sketched after this list).
- **Centralized Config**: Moved hardcoded paths to `default_config.py` (DRY principle).
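
A minimal sketch of the load side of the learning loop, assuming illustrative default values and a hypothetical `load_runtime_config` helper; only the path and the two parameter keys come from the changelog:

```python
import json

RUNTIME_CONFIG_PATH = "data_cache/runtime_config.json"  # path from the changelog
DEFAULTS = {"rsi_period": 14, "stop_loss_pct": 0.10}    # illustrative defaults

def load_runtime_config(path: str = RUNTIME_CONFIG_PATH) -> dict:
    """Market Analyst side of the loop: read the Reflector's latest
    tuned parameters, falling back to defaults when the file is
    missing or malformed."""
    try:
        with open(path) as f:
            overrides = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        overrides = {}
    return {**DEFAULTS, **overrides}

# Reflector side: persist the parsed advice atomically, e.g.
# write_json_atomic(RUNTIME_CONFIG_PATH, {"rsi_period": 9, "stop_loss_pct": 0.08})
```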
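
And a sketch of the atomic-write pattern behind `agent_utils.write_json_atomic`; the write-to-temp-then-`os.replace` approach matches the stated goal, but the exact signature is an assumption:

```python
import json
import os
import tempfile

def write_json_atomic(path: str, payload: dict) -> None:
    """Dump JSON to a temp file in the target directory, then
    os.replace() it over the destination so concurrent readers never
    observe a half-written config."""
    dir_name = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(payload, f, indent=2)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.remove(tmp_path)
        raise
```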
### Fixed
- **Reflector Logic Gap**: The Reflector was previously "shouting into the void"—making suggestions but having no mechanism to apply them. This circuit is now closed.
- **Relative Strength Determinism**: Upgraded `market_analyst.py` to calculate a deterministic `risk_multiplier` (0.0x - 1.5x) based on the Asset Regime vs. SPY Regime correlation, removing LLM "confidence" hallucinations from position sizing (sketched after this list).
- **Portfolio Awareness (Rule 72)**: Implemented State Persistence (`portfolio`, `cash_balance`) and a hard-coded Stop Loss check in `trading_graph.py`. If a position's unrealized PnL drops below -10%, the system forces a "LIQUIDATE" order, bypassing all AI debate (see the sketch after this list).
- **Self-Tuning Architecture**: Updated `reflection.py` to output a structured JSON block (`UPDATE_PARAMETERS`) instead of prose advice, enabling future automated parameter optimization (an example follows this list).
- **Override Logic Mismatches**: Fixed a critical Enum-to-String type mismatch in `apply_trend_override` that was silencing the "Safety Valve" logic (illustrated after this list).
- **Data Pipeline Failures**: Added robust error handling and type checking in `market_analyst.py` to identify why the RegimeDetector receives invalid data (the cause of "UNKNOWN" regimes); a validation sketch follows this list.
- **Gemini 404 Errors**: Removed invalid/deprecated model names causing 404s.
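
To make the deterministic sizing concrete, here is a sketch of the `risk_multiplier` calculation; the 0.0x-1.5x range comes from the changelog, but the specific regime-to-multiplier mapping below is an assumption:

```python
def compute_risk_multiplier(asset_regime: str, spy_regime: str) -> float:
    """Deterministic position-size scaler (0.0x-1.5x). The mapping is
    illustrative; the changelog specifies only the range and that it
    depends on the asset regime vs. the SPY regime."""
    if spy_regime == "BEARISH":
        # Market-wide downtrend: stand aside regardless of the asset.
        return 0.0
    if asset_regime == "BULLISH" and spy_regime == "BULLISH":
        # Asset and index agree: allow sizing above baseline.
        return 1.5
    if asset_regime == "BULLISH":
        return 1.0
    return 0.5  # mixed or unknown regimes: trade at reduced size
```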
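
A sketch of the hard stop-loss override in `trading_graph.py`; the -10% threshold and the forced "LIQUIDATE" order are from the changelog, while the function name and signature are assumptions:

```python
STOP_LOSS_PCT = -0.10  # -10% threshold from the changelog

def check_stop_loss(entry_price: float, last_price: float) -> str | None:
    """Pre-debate safety check: if unrealized PnL breaches the hard
    stop, return a forced LIQUIDATE order that bypasses the AI debate
    entirely; otherwise return None and proceed to the agents."""
    unrealized_pnl = (last_price - entry_price) / entry_price
    if unrealized_pnl <= STOP_LOSS_PCT:
        return "LIQUIDATE"
    return None
```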
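
For the structured Reflector output, something like the following parsing step closes the loop; the `UPDATE_PARAMETERS` marker is from the changelog, but the exact block format and extraction code are assumptions:

```python
import json
import re

# Illustrative Reflector output: prose plus the structured block.
reflection_output = """
Losses clustered around whipsaws; shorten the RSI lookback.
UPDATE_PARAMETERS
{"rsi_period": 9, "stop_loss_pct": 0.08}
"""

def extract_update_parameters(text: str) -> dict:
    """Pull the JSON object that follows the UPDATE_PARAMETERS marker."""
    match = re.search(r"UPDATE_PARAMETERS\s*(\{.*?\})", text, re.DOTALL)
    return json.loads(match.group(1)) if match else {}

print(extract_update_parameters(reflection_output))
# -> {'rsi_period': 9, 'stop_loss_pct': 0.08}
```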
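
The Enum-to-String mismatch behind the silenced "Safety Valve" is easy to reproduce: a plain `Enum` member never compares equal to its string value, so a string guard silently never fires. Everything below except the name `apply_trend_override` is illustrative:

```python
from enum import Enum

class Regime(Enum):
    BULLISH = "BULLISH"
    BEARISH = "BEARISH"

# The bug class: this comparison is always False, so a guard written
# as `if regime == "BEARISH"` never triggers.
assert (Regime.BEARISH == "BEARISH") is False

def apply_trend_override(regime: Regime) -> bool:
    # Fix: compare against the Enum member (or its .value) explicitly.
    return regime == Regime.BEARISH
```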
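
Finally, a sketch of the kind of input validation added to `market_analyst.py`; the specific checks and the helper name are assumptions, guided only by the "UNKNOWN"-regime symptom:

```python
import math

def validate_price_series(prices) -> list[float]:
    """Reject inputs that would make the RegimeDetector emit an
    "UNKNOWN" regime, and report exactly what was wrong instead of
    failing silently downstream."""
    if not prices:
        raise ValueError("RegimeDetector input is empty or None")
    cleaned = []
    for i, p in enumerate(prices):
        if not isinstance(p, (int, float)) or math.isnan(float(p)):
            raise TypeError(f"non-numeric price at index {i}: {p!r}")
        cleaned.append(float(p))
    return cleaned
```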
- Added support for running the CLI and an Ollama server via Docker
- Introduced tests for the local embeddings model and the standalone Docker setup
- Enabled conditional Ollama server launch via the `LLM_PROVIDER` environment variable (sketch below)
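
A sketch of the conditional launch, assuming `LLM_PROVIDER=ollama` is the trigger value and that the server is started with the standard `ollama serve` command; the helper name is hypothetical:

```python
import os
import shutil
import subprocess

def maybe_start_ollama() -> subprocess.Popen | None:
    """Start a local Ollama server only when LLM_PROVIDER selects it;
    return the process handle, or None when another provider is used."""
    if os.getenv("LLM_PROVIDER", "").lower() != "ollama":
        return None
    if shutil.which("ollama") is None:
        raise RuntimeError("LLM_PROVIDER=ollama but the ollama binary is missing")
    return subprocess.Popen(["ollama", "serve"])
```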