feat: add improved error handling

parent ecca9b1efc
commit 21f1cb1782

New documentation file (@@ -0,0 +1,102 @@):

# Enhanced Error Handling Guide

## Overview

The TradingAgents system now includes comprehensive error handling for LLM API issues, providing clear, actionable feedback to users.

## Supported Scenarios

### 1. Invalid Model Configuration

**Error**: When an invalid model name is configured

**Response**:

- ❌ Clear error message indicating the invalid model
- 📋 List of valid models for the current provider
- 🔧 Specific configuration instructions

### 2. API Quota Exceeded

**Error**: When API usage limits are reached

**Response**:

- ❌ Clear quota exceeded message
- 🔗 Direct links to billing/quota management
- 🔄 Alternative provider suggestions
- 📴 Offline tools recommendation

### 3. Missing API Keys

**Error**: When required environment variables are not set

**Response**:

- ❌ Clear missing API key message
- 🔑 Exact export command to set the key
- 🔗 Links to get API keys

### 4. Connection Issues

**Error**: When network/connectivity problems occur

**Response**:

- ❌ Connection problem identification
- 🌐 Possible causes (network, firewall, service down)
- 🔄 Alternative provider suggestions

## Configuration Options

### Switching Between Providers

```python
# In tradingagents/default_config.py

# For OpenAI
"llm_provider": "openai",
"quick_think_llm": "gpt-4o-mini",
"deep_think_llm": "gpt-4o",

# For Gemini
"llm_provider": "gemini",
"gemini_quick_think_llm": "gemini-1.5-flash",
"gemini_deep_think_llm": "gemini-1.5-pro",
```

### Using Offline Tools

```python
# Disable online tools to use local data sources
"online_tools": False,
```

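The fragments above are edits to the `DEFAULT_CONFIG` dict; at runtime the usual pattern is to copy the defaults and override only the keys you need, as main.py in this commit does with its `config[...] = ...` assignments. A minimal self-contained sketch, with a stand-in dict in place of the real `tradingagents.default_config.DEFAULT_CONFIG`:

```python
# Stand-in for tradingagents.default_config.DEFAULT_CONFIG; only the keys
# relevant to provider switching are shown here.
DEFAULT_CONFIG = {
    "llm_provider": "openai",
    "quick_think_llm": "gpt-4o-mini",
    "deep_think_llm": "gpt-4o",
    "gemini_quick_think_llm": "gemini-1.5-flash",
    "gemini_deep_think_llm": "gemini-1.5-pro",
    "online_tools": True,
}

# Copy before mutating so the module-level defaults stay untouched.
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "gemini"
config["online_tools"] = False  # fall back to local data sources
```

Copying first matters: mutating `DEFAULT_CONFIG` directly would silently change the defaults for every later caller in the same process.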
## Valid Models

### OpenAI Models

- `gpt-4o`
- `gpt-4o-mini`
- `gpt-4-turbo`
- `gpt-4`
- `gpt-3.5-turbo`
- `o1-preview`
- `o1-mini`

### Gemini Models

- `gemini-1.5-pro`
- `gemini-1.5-flash`
- `gemini-1.0-pro`
- `gemini-pro`

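These lists let a configured model name be validated before any network call is made; the interface changes in this commit keep them in a `_get_valid_models` helper. A minimal sketch of that check (the `check_model` name is illustrative, not part of the commit):

```python
# Model allow-lists mirroring the tables above.
VALID_MODELS = {
    "openai": ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4",
               "gpt-3.5-turbo", "o1-preview", "o1-mini"],
    "gemini": ["gemini-1.5-pro", "gemini-1.5-flash",
               "gemini-1.0-pro", "gemini-pro"],
}

def check_model(provider: str, model: str) -> None:
    """Raise early with an actionable message instead of a raw API 404."""
    valid = VALID_MODELS.get(provider, [])
    if model not in valid:
        raise ValueError(
            f"❌ Invalid {provider} model: '{model}'\n"
            f"Valid models: {', '.join(valid)}"
        )

check_model("openai", "gpt-4o-mini")  # valid: passes silently
```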
## Required Environment Variables

### For OpenAI

```bash
export OPENAI_API_KEY=your_openai_key_here
```

### For Gemini

```bash
export GOOGLE_API_KEY=your_google_api_key_here
```

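The code added in this commit checks for these variables before constructing a client and raises a `ValueError` containing the exact export command. A minimal sketch of such a guard (`require_api_key` is an illustrative name, not part of the commit):

```python
import os

def require_api_key(var_name: str, placeholder: str) -> str:
    """Fail fast with the exact export command instead of a late auth error."""
    key = os.getenv(var_name)
    if not key:
        raise ValueError(
            f"❌ {var_name} environment variable is not set.\n"
            f"Please set it with:\n"
            f"  export {var_name}={placeholder}"
        )
    return key
```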
## Error Handling Flow

1. **Agent Tool Called** → Online LLM function invoked
2. **API Error Occurs** → Comprehensive error handling triggers
3. **User-Friendly Message** → Detailed error with solutions returned
4. **Agent Continues** → Can use offline tools or a different approach

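The flow above can be sketched as a tool wrapper: the online call may raise `ValueError` with a detailed message (steps 1–2), the wrapper converts it into a string the agent can read (step 3), and the run continues instead of crashing (step 4). `fetch_online_news` below is a hypothetical stand-in for the real interface call:

```python
def fetch_online_news(ticker: str) -> str:
    # Hypothetical stand-in for the real online LLM call; always fails here
    # to demonstrate the error path.
    raise ValueError("❌ OPENAI_API_KEY environment variable is not set.")

def news_tool(ticker: str) -> str:
    """Tool wrapper: surface API errors as readable text, never crash the run."""
    try:
        return fetch_online_news(ticker)
    except ValueError as e:
        # Return the detailed error message to the agent
        return (
            f"⚠️ Online news tool failed:\n{e}\n\n"
            "Please use alternative offline tools or fix the configuration."
        )

result = news_tool("AAPL")
```

Returning the message (rather than re-raising) is what lets the agent loop keep going and pick an offline tool on the next step.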
## Benefits

- **Clear Problem Identification**: Emoji indicators and specific error types
- **Actionable Solutions**: Multiple alternatives provided for each error
- **Graceful Degradation**: Agents can continue with offline tools
- **User Education**: Links to documentation and setup guides
- **Configuration Guidance**: Exact settings and commands provided


main.py

```diff
@@ -1,3 +1,5 @@
+from dotenv import load_dotenv
+load_dotenv()
 from tradingagents.graph.trading_graph import TradingAgentsGraph
 from tradingagents.default_config import DEFAULT_CONFIG
@@ -8,7 +10,7 @@ config["backend_url"] = "https://generativelanguage.googleapis.com/v1"  # Use a
 config["deep_think_llm"] = "gemini-2.0-flash"  # Use a different model
 config["quick_think_llm"] = "gemini-2.0-flash"  # Use a different model
 config["max_debate_rounds"] = 1  # Increase debate rounds
-config["online_tools"] = True  # Increase debate rounds
+config["online_tools"] = True
 
 # Initialize with custom config
 ta = TradingAgentsGraph(debug=True, config=config)
```

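`load_dotenv()` now runs before any other project import, so keys defined in a local `.env` file are visible to the API-key checks added elsewhere in this commit. For illustration, the core behaviour can be approximated without the python-dotenv dependency (deliberately simplified parser, sketch only):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Tiny .env reader: KEY=VALUE lines; blank lines and '#' comments ignored.
    Existing environment variables are not overwritten (setdefault)."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```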
Toolkit (agent tool wrappers):

```diff
@@ -368,7 +368,7 @@ class Toolkit:
         curr_date: Annotated[str, "Current date in yyyy-mm-dd format"],
     ):
         """
-        Retrieve the latest news about a given stock by using OpenAI's news API.
+        Retrieve the latest news about a given stock by using LLM API (OpenAI/Gemini).
         Args:
             ticker (str): Ticker of a company. e.g. AAPL, TSM
             curr_date (str): Current date in yyyy-mm-dd format
@@ -376,9 +376,12 @@ class Toolkit:
             str: A formatted string containing the latest news about the company on the given date.
         """
 
-        openai_news_results = interface.get_stock_news_openai(ticker, curr_date)
-
-        return openai_news_results
+        try:
+            openai_news_results = interface.get_stock_news_openai(ticker, curr_date)
+            return openai_news_results
+        except ValueError as e:
+            # Return the detailed error message to the agent
+            return f"⚠️ Online news tool failed:\n{str(e)}\n\nPlease use alternative offline tools or fix the configuration."
 
     @staticmethod
     @tool
@@ -386,16 +389,19 @@ class Toolkit:
         curr_date: Annotated[str, "Current date in yyyy-mm-dd format"],
     ):
         """
-        Retrieve the latest macroeconomics news on a given date using OpenAI's macroeconomics news API.
+        Retrieve the latest macroeconomics news on a given date using LLM API (OpenAI/Gemini).
         Args:
             curr_date (str): Current date in yyyy-mm-dd format
         Returns:
             str: A formatted string containing the latest macroeconomic news on the given date.
         """
 
-        openai_news_results = interface.get_global_news_openai(curr_date)
-
-        return openai_news_results
+        try:
+            openai_news_results = interface.get_global_news_openai(curr_date)
+            return openai_news_results
+        except ValueError as e:
+            # Return the detailed error message to the agent
+            return f"⚠️ Online global news tool failed:\n{str(e)}\n\nPlease use alternative offline tools or fix the configuration."
 
     @staticmethod
     @tool
@@ -404,7 +410,7 @@ class Toolkit:
         curr_date: Annotated[str, "Current date in yyyy-mm-dd format"],
     ):
         """
-        Retrieve the latest fundamental information about a given stock on a given date by using OpenAI's news API.
+        Retrieve the latest fundamental information about a given stock on a given date by using LLM API (OpenAI/Gemini).
         Args:
             ticker (str): Ticker of a company. e.g. AAPL, TSM
             curr_date (str): Current date in yyyy-mm-dd format
@@ -412,8 +418,11 @@ class Toolkit:
             str: A formatted string containing the latest fundamental information about the company on the given date.
         """
 
-        openai_fundamentals_results = interface.get_fundamentals_openai(
-            ticker, curr_date
-        )
-        return openai_fundamentals_results
+        try:
+            openai_fundamentals_results = interface.get_fundamentals_openai(
+                ticker, curr_date
+            )
+            return openai_fundamentals_results
+        except ValueError as e:
+            # Return the detailed error message to the agent
+            return f"⚠️ Online fundamentals tool failed:\n{str(e)}\n\nPlease use alternative offline tools or fix the configuration."
```

Dataflows interface:

```diff
@@ -13,6 +13,7 @@ import pandas as pd
 from tqdm import tqdm
 import yfinance as yf
 from openai import OpenAI
+from langchain_google_genai import ChatGoogleGenerativeAI
 from .config import get_config, set_config, DATA_DIR
@@ -702,106 +703,237 @@ def get_YFin_data(
     return filtered_data
 
 
-def get_stock_news_openai(ticker, curr_date):
-    config = get_config()
-    client = OpenAI(base_url=config["backend_url"])
-
-    response = client.responses.create(
-        model=config["quick_think_llm"],
-        input=[
-            {
-                "role": "system",
-                "content": [
-                    {
-                        "type": "input_text",
-                        "text": f"Can you search Social Media for {ticker} from 7 days before {curr_date} to {curr_date}? Make sure you only get the data posted during that period.",
-                    }
-                ],
-            }
-        ],
-        text={"format": {"type": "text"}},
-        reasoning={},
-        tools=[
-            {
-                "type": "web_search_preview",
-                "user_location": {"type": "approximate"},
-                "search_context_size": "low",
-            }
-        ],
-        temperature=1,
-        max_output_tokens=4096,
-        top_p=1,
-        store=True,
-    )
-
-    return response.output[1].content[0].text
+def _get_valid_models(provider):
+    """Get list of valid models for a given provider"""
+    if provider == "gemini":
+        return [
+            "gemini-1.5-pro",
+            "gemini-1.5-flash",
+            "gemini-1.0-pro",
+            "gemini-pro",
+        ]
+    elif provider == "openai":
+        return [
+            "gpt-4o",
+            "gpt-4o-mini",
+            "gpt-4-turbo",
+            "gpt-4",
+            "gpt-3.5-turbo",
+            "o1-preview",
+            "o1-mini",
+        ]
+    else:
+        return []
+
+
+def _call_llm_api(prompt, config):
+    """Helper function to call either OpenAI or Gemini API based on configuration"""
+    provider = config["llm_provider"]
+
+    if provider == "gemini":
+        # Use Gemini
+        import os
+        from google.api_core.exceptions import NotFound, ResourceExhausted
+
+        # Check if API key is available
+        api_key = os.getenv("GOOGLE_API_KEY")
+        if not api_key:
+            raise ValueError(
+                "❌ GOOGLE_API_KEY environment variable is not set.\n"
+                "Please set your Google API key to use Gemini:\n"
+                "export GOOGLE_API_KEY=your_key_here"
+            )
+
+        model = config["gemini_quick_think_llm"]
+        valid_models = _get_valid_models("gemini")
+
+        try:
+            gemini_model = ChatGoogleGenerativeAI(
+                model=model,
+                temperature=1,
+                max_tokens=4096,
+                google_api_key=api_key,
+            )
+            response = gemini_model.invoke(prompt)
+            return response.content
+
+        except NotFound as e:
+            error_msg = (
+                f"❌ Invalid Gemini model: '{model}'\n"
+                "Valid Gemini models are:\n"
+                + "\n".join(f"  • {m}" for m in valid_models)
+                + "\n\nPlease update your configuration in default_config.py:\n"
+                "  'gemini_quick_think_llm': 'gemini-1.5-flash'  # or another valid model"
+            )
+            raise ValueError(error_msg) from e
+
+        except ResourceExhausted as e:
+            error_msg = (
+                f"❌ Gemini API quota exceeded for model '{model}'\n"
+                "You have hit your usage limits for the Gemini API.\n"
+                "Options:\n"
+                "  • Wait for quota to reset (check: https://ai.google.dev/gemini-api/docs/rate-limits)\n"
+                "  • Upgrade your Gemini API plan\n"
+                "  • Switch to OpenAI by setting: 'llm_provider': 'openai' in default_config.py\n"
+                "  • Use offline tools by setting: 'online_tools': False in default_config.py"
+            )
+            raise ValueError(error_msg) from e
+
+        except Exception as e:
+            # Catch any other Gemini API errors
+            error_str = str(e).lower()
+            if "connection" in error_str or "network" in error_str:
+                error_msg = (
+                    "❌ Gemini API connection failed\n"
+                    "Unable to connect to Google's Gemini API\n"
+                    "This could be due to:\n"
+                    "  • Network connectivity issues\n"
+                    "  • Invalid API key\n"
+                    "  • Firewall blocking the connection\n"
+                    "  • Google AI service temporarily unavailable\n"
+                    "\nAlternatives:\n"
+                    "  • Switch to OpenAI: 'llm_provider': 'openai' in default_config.py\n"
+                    "  • Use offline tools: 'online_tools': False in default_config.py"
+                )
+                raise ValueError(error_msg) from e
+            elif "authentication" in error_str or "api key" in error_str:
+                error_msg = (
+                    "❌ Gemini API authentication failed\n"
+                    "Your Google API key appears to be invalid or expired.\n"
+                    "Please check your GOOGLE_API_KEY environment variable.\n"
+                    "Get a valid key from: https://aistudio.google.com/app/apikey"
+                )
+                raise ValueError(error_msg) from e
+            else:
+                # Re-raise other unexpected errors with provider context
+                error_msg = (
+                    f"❌ Gemini API error with model '{model}'\n"
+                    f"Error: {str(e)}\n"
+                    f"Valid models: {', '.join(valid_models[:5])}...\n"
+                    "\nAlternatives:\n"
+                    "  • Switch to OpenAI: 'llm_provider': 'openai' in default_config.py\n"
+                    "  • Use offline tools: 'online_tools': False in default_config.py"
+                )
+                raise ValueError(error_msg) from e
+
+    else:
+        # Use OpenAI (default)
+        import os
+        from openai import OpenAI, AuthenticationError, RateLimitError, NotFoundError
+
+        # Check if API key is available
+        api_key = os.getenv("OPENAI_API_KEY")
+        if not api_key:
+            raise ValueError(
+                "❌ OPENAI_API_KEY environment variable is not set.\n"
+                "Please set your OpenAI API key:\n"
+                "export OPENAI_API_KEY=your_key_here"
+            )
+
+        model = config["quick_think_llm"]
+        valid_models = _get_valid_models("openai")
+
+        try:
+            client = OpenAI(base_url=config["backend_url"])
+            response = client.chat.completions.create(
+                model=model,
+                messages=[
+                    {
+                        "role": "system",
+                        "content": prompt,
+                    }
+                ],
+                temperature=1,
+                max_tokens=4096,
+                top_p=1,
+            )
+            return response.choices[0].message.content
+
+        except NotFoundError as e:
+            error_msg = (
+                f"❌ Invalid OpenAI model: '{model}'\n"
+                "Valid OpenAI models are:\n"
+                + "\n".join(f"  • {m}" for m in valid_models)
+                + "\n\nPlease update your configuration in default_config.py:\n"
+                "  'quick_think_llm': 'gpt-4o-mini'  # or another valid model"
+            )
+            raise ValueError(error_msg) from e
+
+        except RateLimitError as e:
+            error_msg = (
+                f"❌ OpenAI API quota exceeded for model '{model}'\n"
+                "You have hit your usage limits for the OpenAI API.\n"
+                "Options:\n"
+                "  • Check your billing and add credits: https://platform.openai.com/account/billing\n"
+                "  • Wait for quota to reset\n"
+                "  • Switch to Gemini by setting: 'llm_provider': 'gemini' in default_config.py\n"
+                "  • Use offline tools by setting: 'online_tools': False in default_config.py"
+            )
+            raise ValueError(error_msg) from e
+
+        except AuthenticationError as e:
+            error_msg = (
+                "❌ OpenAI API authentication failed\n"
+                "Your API key appears to be invalid or expired.\n"
+                "Please check your OPENAI_API_KEY environment variable.\n"
+                "Get a valid key from: https://platform.openai.com/api-keys"
+            )
+            raise ValueError(error_msg) from e
+
+        except Exception as e:
+            # Catch any other OpenAI API errors (connection, etc.)
+            error_str = str(e).lower()
+            if "connection" in error_str:
+                error_msg = (
+                    "❌ OpenAI API connection failed\n"
+                    f"Unable to connect to OpenAI API at {config['backend_url']}\n"
+                    "This could be due to:\n"
+                    "  • Network connectivity issues\n"
+                    "  • Invalid backend URL\n"
+                    "  • Firewall blocking the connection\n"
+                    "  • OpenAI service temporarily unavailable\n"
+                    "\nAlternatives:\n"
+                    "  • Switch to Gemini: 'llm_provider': 'gemini' in default_config.py\n"
+                    "  • Use offline tools: 'online_tools': False in default_config.py"
+                )
+                raise ValueError(error_msg) from e
+            elif "quota" in error_str or "rate limit" in error_str or "429" in error_str:
+                error_msg = (
+                    "❌ OpenAI API quota/rate limit exceeded\n"
+                    "You have hit your usage limits for the OpenAI API.\n"
+                    "Options:\n"
+                    "  • Check billing: https://platform.openai.com/account/billing\n"
+                    "  • Wait for quota to reset\n"
+                    "  • Switch to Gemini: 'llm_provider': 'gemini' in default_config.py\n"
+                    "  • Use offline tools: 'online_tools': False in default_config.py"
+                )
+                raise ValueError(error_msg) from e
+            else:
+                # Re-raise other unexpected errors with provider context
+                error_msg = (
+                    f"❌ OpenAI API error with model '{model}'\n"
+                    f"Error: {str(e)}\n"
+                    f"Valid models: {', '.join(valid_models[:5])}...\n"
+                    "\nAlternatives:\n"
+                    "  • Switch to Gemini: 'llm_provider': 'gemini' in default_config.py\n"
+                    "  • Use offline tools: 'online_tools': False in default_config.py"
+                )
+                raise ValueError(error_msg) from e
+
+
+def get_stock_news_openai(ticker, curr_date):
+    config = get_config()
+    prompt = f"Can you search Social Media for {ticker} from 7 days before {curr_date} to {curr_date}? Make sure you only get the data posted during that period."
+    return _call_llm_api(prompt, config)
 
 
 def get_global_news_openai(curr_date):
     config = get_config()
-    client = OpenAI(base_url=config["backend_url"])
-
-    response = client.responses.create(
-        model=config["quick_think_llm"],
-        input=[
-            {
-                "role": "system",
-                "content": [
-                    {
-                        "type": "input_text",
-                        "text": f"Can you search global or macroeconomics news from 7 days before {curr_date} to {curr_date} that would be informative for trading purposes? Make sure you only get the data posted during that period.",
-                    }
-                ],
-            }
-        ],
-        text={"format": {"type": "text"}},
-        reasoning={},
-        tools=[
-            {
-                "type": "web_search_preview",
-                "user_location": {"type": "approximate"},
-                "search_context_size": "low",
-            }
-        ],
-        temperature=1,
-        max_output_tokens=4096,
-        top_p=1,
-        store=True,
-    )
-
-    return response.output[1].content[0].text
+    prompt = f"Can you search global or macroeconomics news from 7 days before {curr_date} to {curr_date} that would be informative for trading purposes? Make sure you only get the data posted during that period."
+    return _call_llm_api(prompt, config)
 
 
 def get_fundamentals_openai(ticker, curr_date):
     config = get_config()
-    client = OpenAI(base_url=config["backend_url"])
-
-    response = client.responses.create(
-        model=config["quick_think_llm"],
-        input=[
-            {
-                "role": "system",
-                "content": [
-                    {
-                        "type": "input_text",
-                        "text": f"Can you search Fundamental for discussions on {ticker} during of the month before {curr_date} to the month of {curr_date}. Make sure you only get the data posted during that period. List as a table, with PE/PS/Cash flow/ etc",
-                    }
-                ],
-            }
-        ],
-        text={"format": {"type": "text"}},
-        reasoning={},
-        tools=[
-            {
-                "type": "web_search_preview",
-                "user_location": {"type": "approximate"},
-                "search_context_size": "low",
-            }
-        ],
-        temperature=1,
-        max_output_tokens=4096,
-        top_p=1,
-        store=True,
-    )
-
-    return response.output[1].content[0].text
+    prompt = f"Can you search for fundamental analysis discussions on {ticker} during the month before {curr_date} to the month of {curr_date}. Make sure you only get the data posted during that period. List as a table, with PE/PS/Cash flow/ etc"
+    return _call_llm_api(prompt, config)
```

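In the catch-all `except Exception` branches above, unknown errors are classified by substring matching on the lowered error text. That heuristic can be isolated on its own; a minimal sketch (`classify_api_error` is an illustrative name, not part of the commit, and the category order mirrors the checks in the diff):

```python
def classify_api_error(exc: Exception) -> str:
    """Map an unknown API exception to a coarse category by keyword matching."""
    error_str = str(exc).lower()
    if "connection" in error_str or "network" in error_str:
        return "connection"
    if "quota" in error_str or "rate limit" in error_str or "429" in error_str:
        return "quota"
    if "authentication" in error_str or "api key" in error_str:
        return "auth"
    return "unknown"

category = classify_api_error(RuntimeError("Connection reset by peer"))
```

Keyword matching is brittle by nature, which is why the diff only falls back to it after the typed exceptions (`NotFoundError`, `RateLimitError`, `ResourceExhausted`, etc.) have been handled explicitly.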
tradingagents/default_config.py

```diff
@@ -9,10 +9,13 @@ DEFAULT_CONFIG = {
         "dataflows/data_cache",
     ),
     # LLM settings
-    "llm_provider": "openai",
+    "llm_provider": "openai",  # "openai" or "gemini"
     "deep_think_llm": "o4-mini",
     "quick_think_llm": "gpt-4o-mini",
     "backend_url": "https://api.openai.com/v1",
+    # Gemini settings (used when llm_provider is "gemini")
+    "gemini_deep_think_llm": "gemini-1.5-pro",
+    "gemini_quick_think_llm": "gemini-1.5-flash",
     # Debate and discussion settings
     "max_debate_rounds": 1,
     "max_risk_discuss_rounds": 1,
```