Move config loading and validation functions from openai_client.py and factory.py into a new shared config_loader.py module. This centralizes configuration handling, reduces code duplication, and improves maintainability. The factory now gracefully falls back to default provider types if config loading fails.
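The graceful fallback can be sketched roughly as follows; the default mapping and key names here are illustrative assumptions, not the project's actual values:

```python
import json

# Hypothetical defaults; the real module defines its own mapping.
DEFAULT_PROVIDER_TYPES = {"openai": "OpenAIClient"}

def load_provider_types(config_path="config.json"):
    """Load the provider-to-client mapping, falling back to defaults on failure."""
    try:
        with open(config_path) as f:
            config = json.load(f)
        return config["LLM_PROVIDER_TYPES"]
    except (OSError, json.JSONDecodeError, KeyError):
        # Graceful fallback: the factory keeps working even if config.json
        # is missing, malformed, or lacks the expected key.
        return DEFAULT_PROVIDER_TYPES
```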
- Add LM Studio as a new provider option in config.json
- Introduce LLM_PROVIDER_TYPES configuration for provider-to-client mapping
- Refactor factory.py to use centralized provider type configuration
- Add results and reports directories to .gitignore
The refactor centralizes provider configuration, making it easier to add new providers in the future without modifying the factory logic. LM Studio support enables local model hosting integration.
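A possible shape for the `LLM_PROVIDER_TYPES` entry in config.json is sketched below. The keys and values are assumptions for illustration; LM Studio exposes an OpenAI-compatible API, so mapping it to the OpenAI client is one plausible choice:

```json
{
  "LLM_PROVIDER_TYPES": {
    "openai": "OpenAIClient",
    "lmstudio": "OpenAIClient"
  }
}
```

With this layout, adding a provider means adding one mapping entry rather than editing factory code.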
- Add http_client and http_async_client parameters to all LLM clients
- OpenAIClient, GoogleClient, AnthropicClient now support custom httpx clients
- Fixes SSL certificate verification errors in Windows Conda environments
- Users can now pass custom httpx.Client with verify=False or custom certs
Fixes #369
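A minimal sketch of the new constructor parameters; the class below is a stand-in (the real clients forward these to the underlying SDKs), and `_FakeHTTPClient` substitutes for `httpx.Client` so the sketch runs without httpx installed:

```python
class _FakeHTTPClient:
    """Stand-in for httpx.Client; in practice you would pass
    httpx.Client(verify=False) or httpx.Client(verify="/path/to/ca.pem")."""
    def __init__(self, verify=True):
        self.verify = verify

class OpenAIClient:
    """Sketch of the described signature; the real class hands these
    through to the SDK's http_client / http_async_client options."""
    def __init__(self, api_key, http_client=None, http_async_client=None):
        self.api_key = api_key
        self.http_client = http_client
        self.http_async_client = http_async_client

# Work around a missing or broken CA bundle (e.g. Windows Conda) by
# supplying a client with verification disabled or a custom cert path.
client = OpenAIClient("sk-...", http_client=_FakeHTTPClient(verify=False))
```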
- Add StatsCallbackHandler for tracking LLM calls, tool calls, and tokens
- Integrate callbacks into TradingAgentsGraph and all LLM clients
- Dynamic agent/report counts based on selected analysts
- Fix report completion counting (tied to agent completion)
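The stats tracking above can be sketched as a small handler; the method names follow common callback conventions but are assumptions about this project's actual interface:

```python
class StatsCallbackHandler:
    """Illustrative sketch: counts LLM calls, tool calls, and tokens."""
    def __init__(self):
        self.llm_calls = 0
        self.tool_calls = 0
        self.total_tokens = 0

    def on_llm_end(self, token_usage):
        # Called once per completed LLM invocation.
        self.llm_calls += 1
        self.total_tokens += token_usage.get("total_tokens", 0)

    def on_tool_end(self):
        # Called once per completed tool invocation.
        self.tool_calls += 1

stats = StatsCallbackHandler()
stats.on_llm_end({"total_tokens": 120})
stats.on_tool_end()
```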