When past_memories was empty, all five agents injected an empty
string into their prompts while still instructing the LLM to
"address reflections and learn from past mistakes", causing it to
hallucinate fabricated lessons on first run.
Each agent now conditionally builds its memory section only when
past_memories is non-empty, so the injection and its instruction
are both absent when there is nothing to recall.
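A minimal sketch of the conditional pattern, assuming illustrative
names (build_prompt, base_instructions) rather than the agents'
actual prompt-assembly code:

```python
# Hypothetical sketch: include the memory section and its instruction
# only when there is something to recall. Names are illustrative.
def build_prompt(base_instructions: str, past_memories: str) -> str:
    if past_memories:
        # Inject the recalled text together with the instruction
        # telling the LLM to use it.
        return (
            f"{base_instructions}\n\n"
            f"Reflections from past mistakes:\n{past_memories}\n"
            "Address these reflections and learn from past mistakes."
        )
    # Empty memories: omit both the section and the instruction, so
    # the LLM is never asked to learn from lessons that do not exist.
    return base_instructions
```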
Also fixes import ordering in memory.py (logger initialization now
follows the imports).
Tests: tests/test_hallucination_guard.py covers empty and populated
memory for all five agents (bull, bear, trader, research manager,
portfolio manager).
Companion to #563 (memory persistence).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Apply review suggestions: use concise `or` pattern for API key
resolution, consolidate tests into parameterized subTest, move
import to module level per PEP 8.
GoogleClient now accepts the unified `api_key` parameter used by
OpenAI and Anthropic clients, mapping it to the provider-specific
`google_api_key` that ChatGoogleGenerativeAI expects. Legacy
`google_api_key` still works for backward compatibility.
Resolves TODO.md item #2 (inconsistent parameter handling).

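A minimal sketch of the `or` resolution pattern; the class body here
is illustrative, not the actual client implementation:

```python
class GoogleClientSketch:
    """Hypothetical stand-in for GoogleClient; only the api_key /
    google_api_key parameter names come from the change described."""

    def __init__(self, api_key=None, google_api_key=None):
        # Concise `or` pattern from the review: prefer the unified
        # `api_key`, fall back to the legacy `google_api_key`, which
        # is the name ChatGoogleGenerativeAI ultimately expects.
        self.google_api_key = api_key or google_api_key
```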
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER
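The conditional launch could look roughly like the sketch below; the
function name and exact environment handling are assumptions, only the
LLM_PROVIDER variable comes from the change above:

```python
import os
import subprocess

def maybe_start_ollama(env=None):
    """Start the Ollama server only when LLM_PROVIDER selects it.

    Hypothetical helper; returns the server process, or None when a
    different provider is configured.
    """
    env = os.environ if env is None else env
    if env.get("LLM_PROVIDER", "").lower() == "ollama":
        # `ollama serve` is Ollama's standard server entry point.
        return subprocess.Popen(["ollama", "serve"])
    return None
```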