- Add quick/mid/deep thinking tiers, each with independent provider selection
- Fetch available Ollama models dynamically via /api/tags instead of hardcoded list
- Add mid-thinking agent selection (select_mid_thinking_agent)
- Support provider-specific thinking config (Gemini thinking level, OpenAI reasoning effort)
- Update default_config and trading_graph to wire three-tier LLM setup

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
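As a minimal sketch of the dynamic model discovery mentioned above: Ollama's `GET /api/tags` endpoint returns a JSON object with a `"models"` array, each entry carrying a `"name"` field. The host URL and function names below are illustrative assumptions, not taken from this repository.

```python
import json
from urllib.request import urlopen

# Default Ollama host; an assumption, adjust for your deployment.
OLLAMA_URL = "http://localhost:11434"


def parse_tags_payload(payload: dict) -> list[str]:
    """Extract model names from an /api/tags response body."""
    # /api/tags returns {"models": [{"name": "llama3:latest", ...}, ...]}
    return [m["name"] for m in payload.get("models", [])]


def list_ollama_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Fetch the installed model names from a running Ollama server."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return parse_tags_payload(json.load(resp))
```

Separating the parsing step from the HTTP call keeps the payload handling testable without a live server.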
Files:
- __init__.py
- conditional_logic.py
- propagation.py
- reflection.py
- setup.py
- signal_processing.py
- trading_graph.py