Sagar Roy 2026-03-23 12:35:01 +01:00 committed by GitHub
commit 8012856247
7 changed files with 98 additions and 32 deletions


@ -4,3 +4,8 @@ GOOGLE_API_KEY=
ANTHROPIC_API_KEY=
XAI_API_KEY=
OPENROUTER_API_KEY=
GROQ_API_KEY=
KILO_API_KEY=
# Data Vendors
ALPHA_VANTAGE_API_KEY=


@ -13,13 +13,13 @@
<div align="center">
<!-- Keep these links. Translations will automatically update with the README. -->
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=de">Deutsch</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=es">Español</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=fr">français</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ja">日本語</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ko">한국어</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=pt">Português</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ru">Русский</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=zh">中文</a>
</div>
@ -28,7 +28,8 @@
# TradingAgents: Multi-Agents LLM Financial Trading Framework
## News
- [2026-03] **TradingAgents v0.2.2** released with GPT-5.4/Gemini 3.1/Claude 4.6/Groq/Kilo model coverage, five-tier rating scale, OpenAI Responses API, Anthropic effort control, and cross-platform stability.
- [2026-02] **TradingAgents v0.2.0** released with multi-provider LLM support (GPT-5.x, Gemini 3.x, Claude 4.x, Grok 4.x) and improved system architecture.
- [2026-01] **Trading-R1** [Technical Report](https://arxiv.org/abs/2509.11420) released, with [Terminal](https://github.com/TauricResearch/Trading-R1) expected to land soon.
@ -65,6 +66,7 @@ TradingAgents is a multi-agent trading framework that mirrors the dynamics of re
Our framework decomposes complex trading tasks into specialized roles, giving the system a robust, scalable approach to market analysis and decision-making.
### Analyst Team
- Fundamentals Analyst: Evaluates company financials and performance metrics, identifying intrinsic values and potential red flags.
- Sentiment Analyst: Analyzes social media and public sentiment using sentiment scoring algorithms to gauge short-term market mood.
- News Analyst: Monitors global news and macroeconomic indicators, interpreting the impact of events on market conditions.
@ -75,6 +77,7 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
</p>
### Researcher Team
- Comprises both bullish and bearish researchers who critically assess the insights provided by the Analyst Team. Through structured debates, they balance potential gains against inherent risks.
<p align="center">
@ -82,6 +85,7 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
</p>
### Trader Agent
- Synthesizes the reports from analysts and researchers to make informed trading decisions, determining the timing and magnitude of trades based on comprehensive market insights.
<p align="center">
@ -89,6 +93,7 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
</p>
### Risk Management and Portfolio Manager
- Continuously evaluates portfolio risk by assessing market volatility, liquidity, and other risk factors. The risk management team reviews and adjusts trading strategies, providing assessment reports to the Portfolio Manager for a final decision.
- The Portfolio Manager approves/rejects the transaction proposal. If approved, the order will be sent to the simulated exchange and executed.
@ -101,18 +106,21 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
### Installation
Clone TradingAgents:
```bash
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
```
Create a virtual environment with your favorite environment manager:
```bash
conda create -n tradingagents python=3.13
conda activate tradingagents
```
Install the package and its dependencies:
```bash
pip install .
```
@ -127,12 +135,15 @@ export GOOGLE_API_KEY=... # Google (Gemini)
export ANTHROPIC_API_KEY=... # Anthropic (Claude)
export XAI_API_KEY=... # xAI (Grok)
export OPENROUTER_API_KEY=... # OpenRouter
export GROQ_API_KEY=... # Groq
export KILO_API_KEY=... # Kilo Gateway
export ALPHA_VANTAGE_API_KEY=... # Alpha Vantage
```
For local models, configure Ollama with `llm_provider: "ollama"` in your config.
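For example, a local-model configuration might look like the following sketch; the config keys mirror the Python usage shown later in this README, while the backend URL is Ollama's OpenAI-compatible endpoint and the model names are placeholders for whatever you have pulled locally:

```python
# Hypothetical Ollama configuration; keys mirror the Python usage example
# in this README, model names are illustrative placeholders.
config = {
    "llm_provider": "ollama",
    "backend_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    "deep_think_llm": "llama3.1",    # model for complex reasoning
    "quick_think_llm": "llama3.1",   # model for quick tasks
}
```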
Alternatively, copy `.env.example` to `.env` and fill in your keys:
```bash
cp .env.example .env
```
@ -140,10 +151,12 @@ cp .env.example .env
### CLI Usage
Launch the interactive CLI:
```bash
tradingagents # installed command
python -m cli.main # alternative: run directly from source
```
You will see a screen where you can select your desired tickers, analysis date, LLM provider, research depth, and more.
<p align="center">
@ -164,7 +177,7 @@ An interface will appear showing results as they load, letting you track the age
### Implementation Details
We built TradingAgents with LangGraph to ensure flexibility and modularity. The framework supports multiple LLM providers: OpenAI, Google, Anthropic, xAI, OpenRouter, Groq, Kilo Gateway, and Ollama.
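Several of these providers (OpenRouter, Groq, Kilo Gateway, Ollama) expose OpenAI-compatible endpoints, which is why they can share a single client implementation. A minimal sketch of that routing idea; the function and return values here are illustrative stand-ins, not the repo's actual API:

```python
# Hypothetical sketch of provider routing: OpenAI-compatible providers
# share one client; names are illustrative, not the package's API.
OPENAI_COMPATIBLE = {"openai", "ollama", "openrouter", "groq", "kilo"}

def route_provider(provider: str) -> str:
    """Return which client family would handle the given provider."""
    p = provider.lower()
    if p in OPENAI_COMPATIBLE:
        return "openai-client"
    if p in {"anthropic", "google", "xai"}:
        return f"{p}-client"
    raise ValueError(f"unknown provider: {provider}")
```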
### Python Usage
@ -188,7 +201,7 @@ from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "openai" # openai, google, anthropic, xai, openrouter, groq, kilo, ollama
config["deep_think_llm"] = "gpt-5.2" # Model for complex reasoning
config["quick_think_llm"] = "gpt-5-mini" # Model for quick tasks
config["max_debate_rounds"] = 2
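Before starting a run it can be worth sanity-checking the provider and model fields; a minimal sketch, assuming the provider list from the comment above is exhaustive (`check_config` is a hypothetical helper, not part of the package):

```python
# Hypothetical config sanity check; the provider set mirrors the values
# listed in the llm_provider comment above.
KNOWN_PROVIDERS = {
    "openai", "google", "anthropic", "xai",
    "openrouter", "groq", "kilo", "ollama",
}

def check_config(config: dict) -> None:
    """Raise ValueError if the provider or model fields look invalid."""
    provider = str(config.get("llm_provider", "")).lower()
    if provider not in KNOWN_PROVIDERS:
        raise ValueError(f"unsupported llm_provider: {provider!r}")
    for key in ("deep_think_llm", "quick_think_llm"):
        if not config.get(key):
            raise ValueError(f"missing model for {key}")
```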


@ -546,14 +546,19 @@ def get_user_selections():
)
selected_llm_provider, backend_url = select_llm_provider()
# Normalize provider name once for all uses (Kilo Gateway -> kilo)
normalized_provider = selected_llm_provider.lower().replace(" ", "")
if normalized_provider == "kilogateway":
normalized_provider = "kilo"
# Step 6: Thinking agents
console.print(
create_question_box(
"Step 6: Thinking Agents", "Select your thinking agents for analysis"
)
)
selected_shallow_thinker = select_shallow_thinking_agent(normalized_provider)
selected_deep_thinker = select_deep_thinking_agent(normalized_provider)
# Step 7: Provider-specific thinking configuration
thinking_level = None
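The provider normalization above reads naturally as a small pure function, which makes the display-name-to-config-key mapping easy to test in isolation (a hypothetical refactor, not code from the repo):

```python
# Hypothetical helper isolating the normalization logic shown above.
def normalize_provider(name: str) -> str:
    """Normalize a display name like "Kilo Gateway" to a config key ("kilo")."""
    normalized = name.lower().replace(" ", "")
    return "kilo" if normalized == "kilogateway" else normalized
```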
@ -586,12 +591,13 @@ def get_user_selections():
)
anthropic_effort = ask_anthropic_effort()
# Use already normalized provider from earlier
return {
"ticker": selected_ticker,
"analysis_date": analysis_date,
"analysts": selected_analysts,
"research_depth": selected_research_depth,
"llm_provider": normalized_provider,
"backend_url": backend_url,
"shallow_thinker": selected_shallow_thinker,
"deep_thinker": selected_deep_thinker,
@ -635,19 +641,19 @@ def save_report_to_disk(final_state, ticker: str, save_path: Path):
analyst_parts = []
if final_state.get("market_report"):
analysts_dir.mkdir(exist_ok=True)
(analysts_dir / "market.md").write_text(final_state["market_report"], encoding="utf-8")
analyst_parts.append(("Market Analyst", final_state["market_report"]))
if final_state.get("sentiment_report"):
analysts_dir.mkdir(exist_ok=True)
(analysts_dir / "sentiment.md").write_text(final_state["sentiment_report"], encoding="utf-8")
analyst_parts.append(("Social Analyst", final_state["sentiment_report"]))
if final_state.get("news_report"):
analysts_dir.mkdir(exist_ok=True)
(analysts_dir / "news.md").write_text(final_state["news_report"], encoding="utf-8")
analyst_parts.append(("News Analyst", final_state["news_report"]))
if final_state.get("fundamentals_report"):
analysts_dir.mkdir(exist_ok=True)
(analysts_dir / "fundamentals.md").write_text(final_state["fundamentals_report"], encoding="utf-8")
analyst_parts.append(("Fundamentals Analyst", final_state["fundamentals_report"]))
if analyst_parts:
content = "\n\n".join(f"### {name}\n{text}" for name, text in analyst_parts)
@ -660,15 +666,15 @@ def save_report_to_disk(final_state, ticker: str, save_path: Path):
research_parts = []
if debate.get("bull_history"):
research_dir.mkdir(exist_ok=True)
(research_dir / "bull.md").write_text(debate["bull_history"], encoding="utf-8")
research_parts.append(("Bull Researcher", debate["bull_history"]))
if debate.get("bear_history"):
research_dir.mkdir(exist_ok=True)
(research_dir / "bear.md").write_text(debate["bear_history"], encoding="utf-8")
research_parts.append(("Bear Researcher", debate["bear_history"]))
if debate.get("judge_decision"):
research_dir.mkdir(exist_ok=True)
(research_dir / "manager.md").write_text(debate["judge_decision"], encoding="utf-8")
research_parts.append(("Research Manager", debate["judge_decision"]))
if research_parts:
content = "\n\n".join(f"### {name}\n{text}" for name, text in research_parts)
@ -678,7 +684,7 @@ def save_report_to_disk(final_state, ticker: str, save_path: Path):
if final_state.get("trader_investment_plan"):
trading_dir = save_path / "3_trading"
trading_dir.mkdir(exist_ok=True)
(trading_dir / "trader.md").write_text(final_state["trader_investment_plan"], encoding="utf-8")
sections.append(f"## III. Trading Team Plan\n\n### Trader\n{final_state['trader_investment_plan']}")
# 4. Risk Management
@ -688,15 +694,15 @@ def save_report_to_disk(final_state, ticker: str, save_path: Path):
risk_parts = []
if risk.get("aggressive_history"):
risk_dir.mkdir(exist_ok=True)
(risk_dir / "aggressive.md").write_text(risk["aggressive_history"], encoding="utf-8")
risk_parts.append(("Aggressive Analyst", risk["aggressive_history"]))
if risk.get("conservative_history"):
risk_dir.mkdir(exist_ok=True)
(risk_dir / "conservative.md").write_text(risk["conservative_history"], encoding="utf-8")
risk_parts.append(("Conservative Analyst", risk["conservative_history"]))
if risk.get("neutral_history"):
risk_dir.mkdir(exist_ok=True)
(risk_dir / "neutral.md").write_text(risk["neutral_history"], encoding="utf-8")
risk_parts.append(("Neutral Analyst", risk["neutral_history"]))
if risk_parts:
content = "\n\n".join(f"### {name}\n{text}" for name, text in risk_parts)
@ -706,12 +712,12 @@ def save_report_to_disk(final_state, ticker: str, save_path: Path):
if risk.get("judge_decision"):
portfolio_dir = save_path / "5_portfolio"
portfolio_dir.mkdir(exist_ok=True)
(portfolio_dir / "decision.md").write_text(risk["judge_decision"], encoding="utf-8")
sections.append(f"## V. Portfolio Manager Decision\n\n### Portfolio Manager\n{risk['judge_decision']}")
# Write consolidated report
header = f"# Trading Analysis Report: {ticker}\n\nGenerated: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
(save_path / "complete_report.md").write_text(header + "\n\n".join(sections), encoding="utf-8")
return save_path / "complete_report.md"
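The `encoding="utf-8"` arguments added throughout this function guard against a real failure mode: without an explicit encoding, `Path.write_text` and `open` use the locale's preferred encoding, which on many Windows setups is not UTF-8, so reports containing non-ASCII characters can raise `UnicodeEncodeError`. A minimal self-contained illustration (the helper name is hypothetical):

```python
from pathlib import Path
import tempfile

def write_report(path: Path, text: str) -> None:
    """Write report text as UTF-8 regardless of the platform's locale encoding."""
    path.write_text(text, encoding="utf-8")

# Demonstration: non-ASCII content survives the round trip on any platform.
with tempfile.TemporaryDirectory() as tmp:
    report = Path(tmp) / "report.md"
    write_report(report, "### Market Analyst\nRevenue up 12% (株式会社)")
    recovered = report.read_text(encoding="utf-8")
```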
@ -968,7 +974,7 @@ def run_analysis():
func(*args, **kwargs)
timestamp, message_type, content = obj.messages[-1]
content = content.replace("\n", " ") # Replace newlines with spaces
with open(log_file, "a", encoding="utf-8") as f:
f.write(f"{timestamp} [{message_type}] {content}\n")
return wrapper
@ -979,7 +985,7 @@ def run_analysis():
func(*args, **kwargs)
timestamp, tool_name, args = obj.tool_calls[-1]
args_str = ", ".join(f"{k}={v}" for k, v in args.items())
with open(log_file, "a", encoding="utf-8") as f:
f.write(f"{timestamp} [Tool Call] {tool_name}({args_str})\n")
return wrapper
@ -993,7 +999,7 @@ def run_analysis():
if content:
file_name = f"{section_name}.md"
text = "\n".join(str(item) for item in content) if isinstance(content, list) else content
with open(report_dir / file_name, "w", encoding="utf-8") as f:
f.write(text)
return wrapper


@ -162,6 +162,18 @@ def select_shallow_thinking_agent(provider) -> str:
("Grok 4 Fast (Non-Reasoning) - Speed optimized", "grok-4-fast-non-reasoning"),
("Grok 4.1 Fast (Reasoning) - High-performance, 2M ctx", "grok-4-1-fast-reasoning"),
],
"groq": [
("Llama 3.1 8B Instant - Fastest, cheapest", "llama-3.1-8b-instant"),
("Llama 3 8B - Simple tasks", "llama3-8b-8192"),
("Mixtral 8x7B - Fast mixture of experts", "mixtral-8x7b-32768"),
("GPT-Oss 120B via Groq - Large model", "openai/gpt-oss-120b"),
],
"kilo": [
("Llama 3.1 8B via Kilo - Fast", "llama-3.1-8b-instant"),
("Claude Haiku via Kilo - Fast", "anthropic/claude-haiku-4-5"),
("Gemini 2.5 Flash via Kilo - Balanced", "google/gemini-2.5-flash"),
("MiniMax M2.5 via Kilo - Free", "minimax/minimax-m2.5:free"),
],
"openrouter": [
("NVIDIA Nemotron 3 Nano 30B (free)", "nvidia/nemotron-3-nano-30b-a3b:free"),
("Z.AI GLM 4.5 Air (free)", "z-ai/glm-4.5-air:free"),
@ -229,6 +241,20 @@ def select_deep_thinking_agent(provider) -> str:
("Grok 4 Fast (Reasoning) - High-performance", "grok-4-fast-reasoning"),
("Grok 4.1 Fast (Non-Reasoning) - Speed optimized, 2M ctx", "grok-4-1-fast-non-reasoning"),
],
"groq": [
("Llama 3.3 70B Versatile - Best overall", "llama-3.3-70b-versatile"),
("Llama 3.1 70B Versatile - Strong reasoning", "llama-3.1-70b-versatile"),
("Mixtral 8x22B - Large mixture of experts", "mixtral-8x22b-32768"),
("Llama 3.1 8B Instant - Fastest", "llama-3.1-8b-instant"),
("GPT-Oss 120B via Groq - Large model", "openai/gpt-oss-120b"),
],
"kilo": [
("Claude Sonnet 4.5 via Kilo - Balanced", "anthropic/claude-sonnet-4-5"),
("Claude Opus 4.5 via Kilo - Most capable", "anthropic/claude-opus-4-5"),
("GPT-5 Mini via Kilo - Fast", "openai/gpt-5-mini"),
("Gemini 2.5 Pro via Kilo - Reasoning", "google/gemini-2.5-pro"),
("MiniMax M2.5 via Kilo - Free", "minimax/minimax-m2.5:free"),
],
"openrouter": [
("Z.AI GLM 4.5 Air (free)", "z-ai/glm-4.5-air:free"),
("NVIDIA Nemotron 3 Nano 30B (free)", "nvidia/nemotron-3-nano-30b-a3b:free"),
@ -270,6 +296,8 @@ def select_llm_provider() -> tuple[str, str]:
("Google", "https://generativelanguage.googleapis.com/v1"),
("Anthropic", "https://api.anthropic.com/"),
("xAI", "https://api.x.ai/v1"),
("Groq", "https://api.groq.com/openai/v1"),
("Kilo Gateway", "https://api.kilo.ai/api/gateway"),
("Openrouter", "https://openrouter.ai/api/v1"),
("Ollama", "http://localhost:11434/v1"),
]


@ -34,7 +34,7 @@ def create_llm_client(
"""
provider_lower = provider.lower()
if provider_lower in ("openai", "ollama", "openrouter", "groq", "kilo"):
return OpenAIClient(model, base_url, provider=provider_lower, **kwargs)
if provider_lower == "xai":


@ -29,6 +29,8 @@ _PROVIDER_CONFIG = {
"xai": ("https://api.x.ai/v1", "XAI_API_KEY"),
"openrouter": ("https://openrouter.ai/api/v1", "OPENROUTER_API_KEY"),
"ollama": ("http://localhost:11434/v1", None),
"groq": ("https://api.groq.com/openai/v1", "GROQ_API_KEY"),
"kilo": ("https://api.kilo.ai/api/gateway", "KILO_API_KEY"),
}


@ -48,17 +48,29 @@ VALID_MODELS = {
"grok-4-fast-reasoning",
"grok-4-fast-non-reasoning",
],
"groq": [
# Llama series (hosted by Groq)
"llama-3.3-70b-versatile",
"llama-3.1-70b-versatile",
"llama-3.1-8b-instant",
"llama-3-8b-8192",
# Mixtral series (hosted by Groq)
"mixtral-8x7b-32768",
"mixtral-8x22b-32768",
# Other models via Groq
"openai/gpt-oss-120b",
],
}
def validate_model(provider: str, model: str) -> bool:
"""Check if model name is valid for the given provider.
For ollama, openrouter, kilo - any model is accepted.
"""
provider_lower = provider.lower()
if provider_lower in ("ollama", "openrouter", "kilo"):
return True
if provider_lower not in VALID_MODELS: