The previous fix only checked deep_think_llm when deciding whether to include reasoning_effort, but both LLMs shared the same kwargs. This meant mixing an o-series deep model with a gpt-* quick model would still crash. Now each LLM gets its own kwargs: the model name is passed to _get_provider_kwargs, and reasoning_effort is added only when the specific model being configured is an o-series model.