The previous fix checked only `deep_think_llm` when deciding whether to include `reasoning_effort`, but both LLMs shared the same kwargs, so mixing an o-series deep model with a `gpt-*` quick model would still crash. Now each LLM gets its own kwargs: the model name is passed to `_get_provider_kwargs`, and `reasoning_effort` is added only when the specific model being configured is an o-series model.
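A minimal sketch of the per-model kwargs logic described above. `_get_provider_kwargs`, `deep_think_llm`, and `reasoning_effort` come from the description; the o-series detection helper, config shape, and default values are assumptions for illustration.

```python
# Hypothetical sketch of the fix: kwargs are built per model name,
# so reasoning_effort is never attached to a gpt-* model.

def _is_o_series(model_name: str) -> bool:
    # Assumed heuristic: o-series models (o1, o3, o4-mini, ...) accept
    # reasoning_effort; gpt-* models reject it.
    return model_name.startswith("o")


def _get_provider_kwargs(config: dict, model_name: str) -> dict:
    kwargs = {"model": model_name}
    # Only add reasoning_effort for the specific model being configured,
    # instead of sharing one kwargs dict between both LLMs.
    if _is_o_series(model_name):
        kwargs["reasoning_effort"] = config.get("reasoning_effort", "medium")
    return kwargs


config = {
    "deep_think_llm": "o3",
    "quick_think_llm": "gpt-4o",
    "reasoning_effort": "high",
}
deep_kwargs = _get_provider_kwargs(config, config["deep_think_llm"])
quick_kwargs = _get_provider_kwargs(config, config["quick_think_llm"])
```

With this split, the o-series deep model receives `reasoning_effort` while the `gpt-*` quick model does not, avoiding the crash.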