Per issue #403 analysis: the real problem is that reasoning_effort
with function tools requires the /v1/responses endpoint, but
LangChain's ChatOpenAI only supports /v1/chat/completions. This
affects all models (not just gpt-5.x vs o-series), so the safest
fix is to skip reasoning_effort and log an info message explaining
why.
Once LangChain adds /v1/responses support, this can be re-enabled.

The previous fix only checked deep_think_llm when deciding whether
to include reasoning_effort, but both LLMs shared the same kwargs.
This meant mixing an o-series deep model with a gpt-* quick model
would still crash.
Now each LLM gets its own kwargs by passing the model name to
_get_provider_kwargs. reasoning_effort is only added when the
specific model being configured is an o-series model.

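A minimal sketch of the per-model split, assuming `_get_provider_kwargs` takes the model name (the "medium" default here is illustrative, not the project's setting):

```python
def _get_provider_kwargs(model_name: str) -> dict:
    """Build provider kwargs for one specific model, not a shared dict."""
    kwargs = {"model": model_name}
    # Only o-series models get reasoning_effort on chat completions.
    if model_name.startswith(("o1", "o3", "o4")):
        kwargs["reasoning_effort"] = "medium"  # illustrative default
    return kwargs

# Each LLM is configured independently, so mixing families is safe.
deep_kwargs = _get_provider_kwargs("o3-mini")  # includes reasoning_effort
quick_kwargs = _get_provider_kwargs("gpt-4o")  # omits it
```
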
reasoning_effort is only supported by o-series models (o1, o3,
o3-mini, o4-mini); passing it to gpt-* models returns a 400 error
from the chat completions endpoint.
Now we check if the model name starts with 'o' before including
the parameter.
Fixes #403

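A hedged sketch of that check; using explicit o-series prefixes rather than a bare startswith("o") avoids false positives on other model names:

```python
O_SERIES_PREFIXES = ("o1", "o3", "o4")

def supports_reasoning_effort(model_name: str) -> bool:
    """True only for o-series models (o1, o3, o3-mini, o4-mini)."""
    return model_name.startswith(O_SERIES_PREFIXES)
```
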
InvestDebateState was missing bull_history, bear_history, and judge_decision.
RiskDebateState was missing aggressive_history, conservative_history,
neutral_history, latest_speaker, and judge_decision. This caused a KeyError
in _log_state() and during reflection, especially with edge-case config values.

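One way to guarantee the keys exist is to initialize every field up front; these initializers are a sketch showing only the fields named in the fix, with empty-string defaults assumed:

```python
def make_invest_debate_state() -> dict:
    # All keys present from the start, so _log_state() and reflection
    # never hit a KeyError on a missing field.
    return {"bull_history": "", "bear_history": "", "judge_decision": ""}

def make_risk_debate_state() -> dict:
    return {
        "aggressive_history": "",
        "conservative_history": "",
        "neutral_history": "",
        "latest_speaker": "",
        "judge_decision": "",
    }
```
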
Prevents UnicodeEncodeError on Windows, where the default encoding
(cp1252/gbk) cannot handle Unicode characters in LLM output.
Closes #77, closes #114, closes #126, closes #215, closes #332

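The fix amounts to passing an explicit encoding instead of relying on the locale default; `write_report` is a hypothetical helper illustrating the pattern:

```python
import os
import tempfile

def write_report(path: str, text: str) -> None:
    # Explicit utf-8 avoids UnicodeEncodeError on Windows locales
    # (cp1252/gbk) when LLM output contains emoji or CJK characters.
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

# Round-trip demo with characters cp1252 cannot encode.
_fd, _path = tempfile.mkstemp()
os.close(_fd)
write_report(_path, "signal: 买入 📈")
with open(_path, encoding="utf-8") as f:
    roundtrip = f.read()
os.remove(_path)
```
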
- Add StatsCallbackHandler for tracking LLM calls, tool calls, and tokens
- Integrate callbacks into TradingAgentsGraph and all LLM clients
- Dynamic agent/report counts based on selected analysts
- Fix report completion counting (tied to agent completion)
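The counting idea behind StatsCallbackHandler can be sketched as a plain class; the real handler would subclass LangChain's BaseCallbackHandler and be driven by framework events, and the method signatures here are simplified assumptions:

```python
class StatsCallbackHandler:
    """Track LLM calls, tool calls, and token usage across a run."""

    def __init__(self) -> None:
        self.llm_calls = 0
        self.tool_calls = 0
        self.total_tokens = 0

    def on_llm_end(self, tokens_used: int) -> None:
        self.llm_calls += 1
        self.total_tokens += tokens_used

    def on_tool_end(self) -> None:
        self.tool_calls += 1

# Simulated run: one LLM call and one tool call.
stats = StatsCallbackHandler()
stats.on_llm_end(tokens_used=120)
stats.on_tool_end()
```
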