- Add quick/mid/deep thinking tiers, each with independent provider selection
- Fetch available Ollama models dynamically via /api/tags instead of a hardcoded list
- Add mid-thinking agent selection (select_mid_thinking_agent)
- Support provider-specific thinking config (Gemini thinking level, OpenAI reasoning effort)
- Update default_config and trading_graph to wire up the three-tier LLM setup

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
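The dynamic model discovery mentioned above can be sketched with a plain-stdlib call to Ollama's `/api/tags` endpoint, which returns the locally installed models. This is a minimal sketch, not the repo's actual `llm_clients` code; the function name and fallback behavior are assumptions.

```python
import json
import urllib.request

def fetch_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Return names of locally available Ollama models via GET /api/tags.

    Hypothetical helper; falls back to an empty list if Ollama is unreachable,
    so callers can degrade to a static default instead of crashing.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            payload = json.load(resp)
        # /api/tags responds with {"models": [{"name": "...", ...}, ...]}
        return [m["name"] for m in payload.get("models", [])]
    except OSError:
        return []
```

Returning `[]` on connection failure keeps model selection usable when Ollama is not running.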
Tree at this commit:

- agents
- dataflows
- graph
- llm_clients
- default_config.py
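The three-tier setup the commit describes might be wired in `default_config` along these lines. All key and model names here are illustrative assumptions, not the repo's actual contents; it only shows the shape of per-tier provider selection with provider-specific thinking options.

```python
# Hypothetical three-tier LLM config: each tier picks its own provider,
# and provider-specific knobs (reasoning_effort, thinking_level) ride along.
DEFAULT_CONFIG = {
    "quick_think_llm": {
        "provider": "ollama",          # cheap local model for fast calls
        "model": "llama3.2",
    },
    "mid_think_llm": {
        "provider": "openai",
        "model": "o4-mini",
        "reasoning_effort": "medium",  # OpenAI-specific option
    },
    "deep_think_llm": {
        "provider": "google",
        "model": "gemini-2.5-pro",
        "thinking_level": "high",      # Gemini-specific option
    },
}

def llm_config_for(tier: str) -> dict:
    """Look up the config for one tier ('quick', 'mid', or 'deep')."""
    return DEFAULT_CONFIG[f"{tier}_think_llm"]
```

Keeping provider-specific options inside each tier's dict lets the graph wiring stay provider-agnostic: it passes the whole dict to the matching client.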