When `past_memories` is empty, all five agents previously injected an empty string into their prompts while still instructing the LLM to "address reflections and learn from past mistakes", causing the LLM to hallucinate fabricated lessons on first run. Each agent now builds its memory section conditionally, only when `past_memories` is non-empty, so both the injection and its instruction are absent when there is nothing to recall.

Also fixes import ordering in `memory.py` (logger now defined after imports).

Tests: `tests/test_hallucination_guard.py` covers empty and populated memory for all five agents (bull, bear, trader, research manager, portfolio manager).

Companion to #563 (memory persistence).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
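A minimal sketch of the conditional-build pattern this PR describes. The function and variable names (`build_memory_section`, `past_memories`) are hypothetical, not the actual identifiers in the agent code:

```python
def build_memory_section(past_memories: list[str]) -> str:
    """Return a prompt fragment only when there are memories to recall.

    Hypothetical helper illustrating the fix: when past_memories is
    empty, both the memory injection and the "learn from past mistakes"
    instruction are omitted, so the LLM is never asked to reflect on
    lessons it was never shown.
    """
    if not past_memories:
        # First run / empty memory: contribute nothing to the prompt.
        return ""
    joined = "\n".join(f"- {m}" for m in past_memories)
    return (
        "Past reflections and lessons:\n"
        f"{joined}\n"
        "Address these reflections and learn from past mistakes.\n"
    )
```

Each agent would concatenate this fragment into its system prompt, so a first run produces a prompt with no memory section at all rather than an empty one paired with a dangling instruction.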