feat: add Polymarket prediction market analysis module
Add tradingagents/prediction_market/ as a parallel module adapting the multi-agent framework for Polymarket binary prediction markets.

Key components:
- 4 analysts: Event, Odds, Information, Sentiment
- YES/NO researchers with structured debate
- PM Trader with Kelly Criterion position sizing
- Risk debate team (aggressive/conservative/neutral)
- Polymarket REST API client (Gamma + CLOB, read-only)
- LangGraph state machine mirroring the existing stock flow
- BM25 memory system for reflection/learning
- CLI integration with analysis mode selection and URL resolution

Analysis-only scope - no order placement in v1.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in:
parent
589b351f2a
commit
4a11b4bf55
@ -0,0 +1,294 @@
# Polymarket Prediction Market Agent Module

## Quick Start

### Prerequisites

```bash
# Create virtual environment (Python 3.10+)
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Set up API keys (at least one LLM provider required)
cp .env.example .env
# Edit .env and fill in your API key
```

### Usage

```python
from tradingagents.prediction_market import PMTradingAgentsGraph
from tradingagents.prediction_market.pm_config import PM_DEFAULT_CONFIG
from dotenv import load_dotenv

load_dotenv()

config = PM_DEFAULT_CONFIG.copy()
config["llm_provider"] = "anthropic"  # openai, google, anthropic, xai, openrouter, ollama
config["deep_think_llm"] = "claude-sonnet-4-6"
config["quick_think_llm"] = "claude-sonnet-4-6"

pm = PMTradingAgentsGraph(debug=True, config=config)

# market_id from Polymarket website or Gamma API
_, decision = pm.propagate("<market_id>", "2026-03-23", "Market question (optional)")
print(decision)
```

### How to Get a Market ID

The `market_id` is a numeric ID from the Polymarket Gamma API. You can find it as follows:

```python
import requests

# Option 1: Browse top markets by volume
resp = requests.get("https://gamma-api.polymarket.com/markets", params={
    "active": "true",
    "closed": "false",
    "order": "volume24hr",
    "ascending": "false",
    "limit": 10,
})
for m in resp.json():
    print(f'{m["id"]} | {m["question"]}')

# Option 2: Look up from a Polymarket web URL slug
# e.g. https://polymarket.com/event/xxx → use the slug to search
```

### CLI Usage

You can also use the CLI, which supports pasting Polymarket URLs directly:

```bash
python -m cli.main
# Step 1: Select "Polymarket Market ID (prediction market)"
# Step 2: Paste a Polymarket URL or enter a numeric market ID
```

---

## Architecture Overview

The system is built as a **LangGraph** state machine with 4 phases and 10+ LLM agents:

```
Input: market_id + trade_date + market_question
                 |
                 v
+-------------------------------------+
| Phase 1: Analyst Team (4 Analysts)  |
| Event -> Odds -> Information -> Sent|
+----------------+--------------------+
                 v
+-------------------------------------+
| Phase 2: Research Debate            |
| YES Researcher <-> NO Researcher    |
| -> Research Manager                 |
+----------------+--------------------+
                 v
+-------------------------------------+
| Phase 3: Trading Decision           |
| PM Trader (Kelly Criterion)         |
+----------------+--------------------+
                 v
+-------------------------------------+
| Phase 4: Risk Management            |
| Aggressive <-> Conservative <-> Neut|
| -> Risk Judge                       |
+----------------+--------------------+
                 v
      Structured JSON Output
```

---

## Phase 1: Analyst Team

Four analysts run sequentially, each with a tool loop that calls Polymarket APIs until sufficient data is collected. All four use `quick_think_llm`.

| Analyst | Tools | Responsibility |
|---------|-------|----------------|
| **Event Analyst** | `get_market_info`, `get_resolution_criteria`, `get_event_context` | Analyze the event: what is being predicted, resolution criteria clarity, timeline |
| **Odds Analyst** | `get_market_info`, `get_market_price_history`, `get_order_book` | Market microstructure: current prices, liquidity, bid/ask spread, pricing efficiency |
| **Information Analyst** | `get_news`, `get_global_news`, `get_related_markets`, `search_markets` | Find information not yet priced in, cross-reference related markets |
| **Sentiment Analyst** | `get_news`, `get_global_news` | Public opinion analysis: news, social sentiment, expert vs crowd divergence |

Each analyst produces a report (`event_report`, `odds_report`, `information_report`, `sentiment_report`) that feeds into subsequent phases.
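The sequential report-gathering pattern can be sketched in plain Python. This is an illustrative stand-in only: the real analysts are LLM agents wired into LangGraph, and `run_analyst`, `run_phase1`, and the string-based tool calls below are hypothetical, not part of the module's API.

```python
def run_analyst(name, tools, state):
    """Stand-in for one analyst's tool loop: 'call' each tool, then write a report."""
    gathered = [f"{tool}({state['market_id']})" for tool in tools]  # fake tool calls
    return f"{name} report based on: " + ", ".join(gathered)

def run_phase1(state):
    # Analysts run in a fixed order; each report lands in state for later phases.
    pipeline = [
        ("event", ["get_market_info", "get_resolution_criteria", "get_event_context"]),
        ("odds", ["get_market_info", "get_market_price_history", "get_order_book"]),
        ("information", ["get_news", "get_related_markets"]),
        ("sentiment", ["get_news", "get_global_news"]),
    ]
    for name, tools in pipeline:
        state[f"{name}_report"] = run_analyst(name, tools, state)
    return state

state = run_phase1({"market_id": "123456"})
```

The point of the fixed ordering is that later analysts (and all of Phase 2) can read the reports accumulated in the shared state.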

---

## Phase 2: Research Debate

| Role | LLM | Responsibility |
|------|-----|----------------|
| **YES Researcher** | `quick_think_llm` | Build the case for the event occurring, citing analyst reports |
| **NO Researcher** | `quick_think_llm` | Build the case against, rebutting YES arguments |
| **Research Manager** | `deep_think_llm` | Synthesize both sides into an `investment_plan` |

- YES and NO debate for `max_debate_rounds` rounds (default 1 round = 2 turns)
- Both researchers have a **BM25 memory system** that recalls lessons from past similar markets
- The Research Manager uses the stronger `deep_think_llm` for the final synthesis

---

## Phase 3: Trading Decision

The **PM Trader** (using `quick_think_llm`) receives all reports and the investment plan, then:

1. Estimates the **true probability** based on all analysis
2. Compares it against the **market price** from the Odds report
3. Calculates **edge** = |estimated probability - market price|
4. If edge < the **5% threshold** -> **PASS**
5. If edge >= 5% -> calculates position size using **0.25x Fractional Kelly Criterion**:
   - Kelly fraction = edge / odds_against
   - Position size = 0.25 x Kelly fraction x bankroll

Decision options:
- **BUY_YES**: Estimated probability > market price + 5% (event more likely than the market implies)
- **BUY_NO**: Estimated probability < market price - 5% (event less likely than the market implies)
- **PASS**: Edge below threshold or uncertainty too high
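The sizing rule above can be worked through numerically. This is a minimal sketch of the documented formula with the default `pm_config` values; `size_position` is a hypothetical helper for illustration, not part of the module's API.

```python
def size_position(p_est, market_price, bankroll,
                  kelly_fraction=0.25, min_edge=0.05, max_position_pct=0.05):
    """Quarter-Kelly sizing for a binary market, following the rules above."""
    edge = abs(p_est - market_price)
    if edge < min_edge:
        return "PASS", 0.0
    side = "BUY_YES" if p_est > market_price else "BUY_NO"
    # odds_against: the probability the market assigns to the side we bet against
    odds_against = (1 - market_price) if side == "BUY_YES" else market_price
    kelly = edge / odds_against
    size = kelly_fraction * kelly * bankroll
    # Cap the single position at max_position_pct of bankroll (5% by default)
    return side, min(size, max_position_pct * bankroll)

print(size_position(0.65, 0.50, 10_000))  # quarter-Kelly gives 750, capped to 500.0
```

With `p_est = 0.65` and price `0.50`: edge = 0.15, Kelly fraction = 0.15 / 0.50 = 0.30, quarter-Kelly position = 0.25 x 0.30 x 10,000 = 750, which the 5% cap then reduces to 500.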

---

## Phase 4: Risk Management

Three-way debate + final ruling:

| Role | LLM | Stance |
|------|-----|--------|
| **Aggressive Analyst** | `quick_think_llm` | Advocates for the trade, emphasizes edge and upside |
| **Conservative Analyst** | `quick_think_llm` | Argues against, emphasizes downside risk and uncertainty |
| **Neutral Analyst** | `quick_think_llm` | Balanced perspective, proposes a compromise |
| **Risk Judge** | `deep_think_llm` | Final ruling after hearing all sides |

The three analysts debate for `max_risk_discuss_rounds` rounds (default 1 round = 3 turns, one per analyst).

---

## Output Format

The Risk Judge's natural-language output is converted to structured JSON by the **Signal Processor**:

```json
{
  "signal": "BUY_YES | BUY_NO | PASS",
  "estimated_probability": 0.65,
  "market_price": 0.50,
  "edge": 0.15,
  "position_size": 0.03,
  "confidence": "high | medium | low"
}
```
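A downstream consumer can sanity-check a decision against the field semantics documented above. This is an illustrative check, not part of the module; the field names come from the schema above, and the 5% cap is the `max_position_pct` default from `pm_config`.

```python
import json

raw = """{"signal": "BUY_YES", "estimated_probability": 0.65,
          "market_price": 0.50, "edge": 0.15,
          "position_size": 0.03, "confidence": "high"}"""
decision = json.loads(raw)

# The signal must be one of the three documented actions.
assert decision["signal"] in {"BUY_YES", "BUY_NO", "PASS"}
# Edge is defined as |estimated_probability - market_price|.
assert abs(decision["edge"]
           - abs(decision["estimated_probability"] - decision["market_price"])) < 1e-9
# Position size is a fraction of bankroll, capped at 5% per pm_config.
assert 0.0 <= decision["position_size"] <= 0.05
```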

---

## Reflection & Learning

After a trade resolves, invoke the reflection mechanism to let agents learn from outcomes:

```python
# After the trade resolves, pass the actual returns
pm.reflect_and_remember(returns_losses=1000)
```

The system will:
1. Review each agent's decisions (YES/NO Researcher, Trader, Research Manager, Risk Judge)
2. Analyze which judgments were correct or incorrect, and why
3. Store lessons learned in a BM25 memory system
4. Automatically reference past experience when encountering similar markets in the future
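The BM25 recall step (matching a new market's description against stored lessons) can be illustrated with a self-contained scorer. This is a from-scratch sketch of the standard BM25 formula; the module's actual memory implementation may differ in tokenization and parameters.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with the standard BM25 formula."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n
    df = Counter(t for d in tokenized for t in set(d))  # document frequency per term
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

# Hypothetical stored lessons; in the module these come from past reflections.
lessons = [
    "overweighted polling averages in a close election market",
    "ignored order book depth before a large position",
    "sentiment spike was noise, not new information",
]
scores = bm25_scores("election polling market", lessons)
best = lessons[scores.index(max(scores))]
print(best)  # the election-market lesson ranks highest
```

Because BM25 rewards rare query terms appearing in short documents, the lesson sharing the most vocabulary with the current market question surfaces first.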

---

## Configuration

All parameters are in `tradingagents/prediction_market/pm_config.py`:

```python
PM_DEFAULT_CONFIG = {
    # LLM settings
    "llm_provider": "openai",  # openai, google, anthropic, xai, openrouter, ollama
    "deep_think_llm": "gpt-5.2",  # For Research Manager, Risk Judge (deep reasoning)
    "quick_think_llm": "gpt-5-mini",  # For Analysts, Researchers, Trader (speed priority)

    # Polymarket API
    "polymarket_gamma_url": "https://gamma-api.polymarket.com",
    "polymarket_clob_url": "https://clob.polymarket.com",

    # Trading parameters
    "kelly_fraction": 0.25,  # Conservative Kelly multiplier (quarter Kelly)
    "min_edge_threshold": 0.05,  # Minimum edge threshold (5%)
    "max_position_pct": 0.05,  # Max single position as % of bankroll (5%)
    "max_cluster_exposure_pct": 0.15,  # Max exposure to correlated markets (15%)
    "bankroll": 10000,  # Simulated bankroll

    # Debate settings
    "max_debate_rounds": 1,  # YES/NO debate rounds
    "max_risk_discuss_rounds": 1,  # Risk management debate rounds
}
```

---

## Data Sources

| Source | Purpose | API Key Required |
|--------|---------|------------------|
| **Polymarket Gamma API** | Market info, resolution criteria, event context, search | No (public API) |
| **Polymarket CLOB API** | Price history, order book | No (public API) |
| **yfinance News** | News search (`get_news`, `get_global_news`) | No |

> **Note**: The news tools are shared with the stock analysis module (yfinance-based), so coverage for political markets may be limited.

---

## Current Limitations

- **Analysis only, no order execution**: v1 does not place actual trades on Polymarket
- **Binary markets only**: Supports Yes/No outcomes; multi-outcome and numeric markets are not supported
- **REST API only**: Uses polling, no WebSocket real-time streaming
- **No backtesting**: No historical backtesting framework is included
- **Limited news coverage**: Political market news search is limited since the news tools are designed for stocks

---

## File Structure

```
tradingagents/prediction_market/
├── __init__.py                        # Exports PMTradingAgentsGraph
├── pm_config.py                       # Default configuration
├── agents/
│   ├── analysts/
│   │   ├── event_analyst.py           # Event analysis
│   │   ├── odds_analyst.py            # Odds/pricing analysis
│   │   ├── information_analyst.py     # Information gathering
│   │   └── sentiment_analyst.py       # Sentiment analysis
│   ├── researchers/
│   │   ├── yes_researcher.py          # YES-side researcher
│   │   └── no_researcher.py           # NO-side researcher
│   ├── trader/
│   │   └── pm_trader.py               # Trading decisions (Kelly Criterion)
│   ├── managers/
│   │   ├── research_manager.py        # Research manager (debate synthesis)
│   │   └── risk_manager.py            # Risk manager (final ruling)
│   ├── risk_mgmt/
│   │   ├── aggressive_debator.py      # Aggressive stance
│   │   ├── conservative_debator.py    # Conservative stance
│   │   └── neutral_debator.py         # Neutral stance
│   └── utils/
│       ├── pm_agent_states.py         # LangGraph state definitions
│       ├── pm_agent_utils.py          # Shared utilities
│       └── pm_tools.py                # @tool decorator wrappers
├── dataflows/
│   └── polymarket.py                  # Polymarket API client (Gamma + CLOB)
└── graph/
    ├── pm_trading_graph.py            # Main graph class
    ├── setup.py                       # Graph construction logic
    ├── propagation.py                 # State initialization & propagation
    ├── conditional_logic.py           # Conditional branching (tool loop, debate control)
    ├── signal_processing.py           # JSON output structuring
    └── reflection.py                  # Reflection & learning mechanism
```
172
cli/main.py
@ -24,8 +24,10 @@ from rich.align import Align
from rich.rule import Rule

from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.prediction_market import PMTradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
from cli.models import AnalystType
from tradingagents.prediction_market.pm_config import PM_DEFAULT_CONFIG
from cli.models import AnalysisMode, AnalystType, PMAnalystType
from cli.utils import *
from cli.announcements import fetch_announcements, display_announcements
from cli.stats_handler import StatsCallbackHandler

@ -462,7 +464,7 @@ def update_display(layout, spinner_text=None, stats_handler=None, start_time=Non
def get_user_selections():
    """Get all user selections before starting the analysis display."""
    # Display ASCII art welcome message
    with open(Path(__file__).parent / "static" / "welcome.txt", "r") as f:
    with open("./cli/static/welcome.txt", "r", encoding="utf-8") as f:
        welcome_ascii = f.read()

    # Create welcome box content

@ -498,73 +500,103 @@ def get_user_selections():
        box_content += f"\n[dim]Default: {default}[/dim]"
        return Panel(box_content, border_style="blue", padding=(1, 2))

    # Step 1: Ticker symbol
    # Step 1: Analysis mode (Stock or Polymarket)
    console.print(
        create_question_box(
            "Step 1: Ticker Symbol",
            "Enter the exact ticker symbol to analyze, including exchange suffix when needed (examples: SPY, CNC.TO, 7203.T, 0700.HK)",
            "SPY",
            "Step 1: Ticker Symbol or Polymarket Market ID",
            "Choose between stock analysis or prediction market analysis",
        )
    )
    selected_ticker = get_ticker()
    selected_mode = select_analysis_mode()

    # Step 2: Analysis date
    # Step 2: Ticker / Market ID based on mode
    selected_ticker = None
    market_id = None
    market_question = ""

    if selected_mode == AnalysisMode.STOCK:
        console.print(
            create_question_box(
                "Step 2: Ticker Symbol", "Enter the ticker symbol to analyze", "SPY"
            )
        )
        selected_ticker = get_ticker()
    else:
        console.print(
            create_question_box(
                "Step 2: Polymarket Market",
                "Paste a Polymarket URL or enter a numeric market ID",
            )
        )
        market_id, market_question = get_market_id()

    # Step 3: Analysis date
    default_date = datetime.datetime.now().strftime("%Y-%m-%d")
    console.print(
        create_question_box(
            "Step 2: Analysis Date",
            "Step 3: Analysis Date",
            "Enter the analysis date (YYYY-MM-DD)",
            default_date,
        )
    )
    analysis_date = get_analysis_date()

    # Step 3: Select analysts
    console.print(
        create_question_box(
            "Step 3: Analysts Team", "Select your LLM analyst agents for the analysis"
    # Step 4: Select analysts
    if selected_mode == AnalysisMode.STOCK:
        console.print(
            create_question_box(
                "Step 4: Analysts Team", "Select your LLM analyst agents for the analysis"
            )
        )
        selected_analysts = select_analysts()
        console.print(
            f"[green]Selected analysts:[/green] {', '.join(analyst.value for analyst in selected_analysts)}"
        )
    else:
        console.print(
            create_question_box(
                "Step 4: PM Analysts Team", "Select your prediction market analyst agents"
            )
        )
        selected_analysts = select_pm_analysts()
        console.print(
            f"[green]Selected PM analysts:[/green] {', '.join(analyst.value for analyst in selected_analysts)}"
        )
    )
    selected_analysts = select_analysts()
    console.print(
        f"[green]Selected analysts:[/green] {', '.join(analyst.value for analyst in selected_analysts)}"
    )

    # Step 4: Research depth
    # Step 5: Research depth
    console.print(
        create_question_box(
            "Step 4: Research Depth", "Select your research depth level"
            "Step 5: Research Depth", "Select your research depth level"
        )
    )
    selected_research_depth = select_research_depth()

    # Step 5: OpenAI backend
    # Step 6: LLM provider
    console.print(
        create_question_box(
            "Step 5: OpenAI backend", "Select which service to talk to"
            "Step 6: LLM Provider", "Select which service to talk to"
        )
    )
    selected_llm_provider, backend_url = select_llm_provider()

    # Step 6: Thinking agents
    # Step 7: Thinking agents
    console.print(
        create_question_box(
            "Step 6: Thinking Agents", "Select your thinking agents for analysis"
            "Step 7: Thinking Agents", "Select your thinking agents for analysis"
        )
    )
    selected_shallow_thinker = select_shallow_thinking_agent(selected_llm_provider)
    selected_deep_thinker = select_deep_thinking_agent(selected_llm_provider)

    # Step 7: Provider-specific thinking configuration
    # Step 8: Provider-specific thinking configuration
    thinking_level = None
    reasoning_effort = None
    anthropic_effort = None

    provider_lower = selected_llm_provider.lower()
    if provider_lower == "google":
        console.print(
            create_question_box(
                "Step 7: Thinking Mode",
                "Step 8: Thinking Mode",
                "Configure Gemini thinking mode"
            )
        )

@ -572,22 +604,17 @@ def get_user_selections():
    elif provider_lower == "openai":
        console.print(
            create_question_box(
                "Step 7: Reasoning Effort",
                "Step 8: Reasoning Effort",
                "Configure OpenAI reasoning effort level"
            )
        )
        reasoning_effort = ask_openai_reasoning_effort()
    elif provider_lower == "anthropic":
        console.print(
            create_question_box(
                "Step 7: Effort Level",
                "Configure Claude effort level"
            )
        )
        anthropic_effort = ask_anthropic_effort()

    return {
        "mode": selected_mode,
        "ticker": selected_ticker,
        "market_id": market_id,
        "market_question": market_question,
        "analysis_date": analysis_date,
        "analysts": selected_analysts,
        "research_depth": selected_research_depth,
@ -597,7 +624,6 @@ def get_user_selections():
        "deep_thinker": selected_deep_thinker,
        "google_thinking_level": thinking_level,
        "openai_reasoning_effort": reasoning_effort,
        "anthropic_effort": anthropic_effort,
    }
@ -800,11 +826,9 @@ ANALYST_REPORT_MAP = {

def update_analyst_statuses(message_buffer, chunk):
    """Update analyst statuses based on accumulated report state.
    """Update all analyst statuses based on current report state.

    Logic:
    - Store new report content from the current chunk if present
    - Check accumulated report_sections (not just current chunk) for status
    - Analysts with reports = completed
    - First analyst without report = in_progress
    - Remaining analysts without reports = pending
@ -819,16 +843,11 @@ def update_analyst_statuses(message_buffer, chunk):
        agent_name = ANALYST_AGENT_NAMES[analyst_key]
        report_key = ANALYST_REPORT_MAP[analyst_key]

        # Capture new report content from current chunk
        if chunk.get(report_key):
            message_buffer.update_report_section(report_key, chunk[report_key])

        # Determine status from accumulated sections, not just current chunk
        has_report = bool(message_buffer.report_sections.get(report_key))
        has_report = bool(chunk.get(report_key))

        if has_report:
            message_buffer.update_agent_status(agent_name, "completed")
            message_buffer.update_report_section(report_key, chunk[report_key])
        elif not found_active:
            message_buffer.update_agent_status(agent_name, "in_progress")
            found_active = True
@ -919,6 +938,53 @@ def run_analysis():
    # First get all user selections
    selections = get_user_selections()

    # Branch to Polymarket flow if selected
    if selections["mode"] == AnalysisMode.POLYMARKET:
        config = PM_DEFAULT_CONFIG.copy()
        config["max_debate_rounds"] = selections["research_depth"]
        config["max_risk_discuss_rounds"] = selections["research_depth"]
        config["quick_think_llm"] = selections["shallow_thinker"]
        config["deep_think_llm"] = selections["deep_thinker"]
        config["backend_url"] = selections["backend_url"]
        config["llm_provider"] = selections["llm_provider"].lower()
        config["google_thinking_level"] = selections.get("google_thinking_level")
        config["openai_reasoning_effort"] = selections.get("openai_reasoning_effort")

        pm_analyst_order = ["event", "odds", "information", "sentiment"]
        selected_set = {analyst.value for analyst in selections["analysts"]}
        selected_analyst_keys = [a for a in pm_analyst_order if a in selected_set]

        pm_graph = PMTradingAgentsGraph(
            selected_analyst_keys,
            config=config,
            debug=True,
        )

        market_id = selections["market_id"]
        market_question = selections.get("market_question", "")
        analysis_date = selections["analysis_date"]

        console.print(f"\n[bold cyan]Running Polymarket analysis...[/bold cyan]")
        console.print(f"  Market ID: [green]{market_id}[/green]")
        if market_question:
            console.print(f"  Question: [green]{market_question}[/green]")
        console.print(f"  Date: [green]{analysis_date}[/green]")
        console.print(f"  Analysts: [green]{', '.join(selected_analyst_keys)}[/green]")
        console.print()

        _, decision = pm_graph.propagate(market_id, analysis_date, market_question)

        console.print("\n[bold cyan]Analysis Complete![/bold cyan]\n")
        console.print(Panel(
            Markdown(f"```json\n{decision}\n```"),
            title="[bold]Final Decision[/bold]",
            border_style="green",
            padding=(1, 2),
        ))
        return

    # --- Stock analysis flow (original) ---

    # Create config with selected research depth
    config = DEFAULT_CONFIG.copy()
    config["max_debate_rounds"] = selections["research_depth"]
@ -930,7 +996,6 @@ def run_analysis():
    # Provider-specific thinking configuration
    config["google_thinking_level"] = selections.get("google_thinking_level")
    config["openai_reasoning_effort"] = selections.get("openai_reasoning_effort")
    config["anthropic_effort"] = selections.get("anthropic_effort")

    # Create stats callback handler for tracking LLM/tool calls
    stats_handler = StatsCallbackHandler()
@ -968,7 +1033,7 @@ def run_analysis():
            func(*args, **kwargs)
            timestamp, message_type, content = obj.messages[-1]
            content = content.replace("\n", " ")  # Replace newlines with spaces
            with open(log_file, "a") as f:
            with open(log_file, "a", encoding="utf-8") as f:
                f.write(f"{timestamp} [{message_type}] {content}\n")
        return wrapper
@ -979,7 +1044,7 @@ def run_analysis():
            func(*args, **kwargs)
            timestamp, tool_name, args = obj.tool_calls[-1]
            args_str = ", ".join(f"{k}={v}" for k, v in args.items())
            with open(log_file, "a") as f:
            with open(log_file, "a", encoding="utf-8") as f:
                f.write(f"{timestamp} [Tool Call] {tool_name}({args_str})\n")
        return wrapper
@ -992,9 +1057,8 @@ def run_analysis():
            content = obj.report_sections[section_name]
            if content:
                file_name = f"{section_name}.md"
                text = "\n".join(str(item) for item in content) if isinstance(content, list) else content
                with open(report_dir / file_name, "w") as f:
                    f.write(text)
                with open(report_dir / file_name, "w", encoding="utf-8") as f:
                    f.write(content)
        return wrapper

    message_buffer.add_message = save_message_decorator(message_buffer, "add_message")
@ -3,8 +3,20 @@ from typing import List, Optional, Dict
from pydantic import BaseModel


class AnalysisMode(str, Enum):
    STOCK = "stock"
    POLYMARKET = "polymarket"


class AnalystType(str, Enum):
    MARKET = "market"
    SOCIAL = "social"
    NEWS = "news"
    FUNDAMENTALS = "fundamentals"


class PMAnalystType(str, Enum):
    EVENT = "event"
    ODDS = "odds"
    INFORMATION = "information"
    SENTIMENT = "sentiment"

164
cli/utils.py

@ -3,7 +3,7 @@ from typing import List, Optional, Tuple, Dict

from rich.console import Console

from cli.models import AnalystType
from cli.models import AnalysisMode, AnalystType, PMAnalystType

console = Console()
@ -16,6 +16,168 @@ ANALYST_ORDER = [
    ("Fundamentals Analyst", AnalystType.FUNDAMENTALS),
]

PM_ANALYST_ORDER = [
    ("Event Analyst", PMAnalystType.EVENT),
    ("Odds Analyst", PMAnalystType.ODDS),
    ("Information Analyst", PMAnalystType.INFORMATION),
    ("Sentiment Analyst", PMAnalystType.SENTIMENT),
]


def select_analysis_mode() -> AnalysisMode:
    """Select between Stock and Polymarket analysis."""
    choice = questionary.select(
        "Select Analysis Mode:",
        choices=[
            questionary.Choice("Stock Ticker (e.g. NVDA, TSLA)", value=AnalysisMode.STOCK),
            questionary.Choice("Polymarket Market ID (prediction market)", value=AnalysisMode.POLYMARKET),
        ],
        instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
        style=questionary.Style(
            [
                ("selected", "fg:cyan noinherit"),
                ("highlighted", "fg:cyan noinherit"),
                ("pointer", "fg:cyan noinherit"),
            ]
        ),
    ).ask()

    if choice is None:
        console.print("\n[red]No mode selected. Exiting...[/red]")
        exit(1)

    return choice


def _resolve_polymarket_url(url: str) -> tuple[str, str]:
    """Resolve a Polymarket URL to a (market_id, market_question) tuple.

    Supports formats:
    - https://polymarket.com/event/<event-slug>/<market-slug>
    - https://polymarket.com/event/<market-slug>
    """
    from urllib.parse import urlparse
    import requests

    parsed = urlparse(url)
    parts = [p for p in parsed.path.split("/") if p]

    if len(parts) < 2 or parts[0] != "event":
        return "", ""

    # Last segment is the market slug (or event slug if only 2 parts)
    market_slug = parts[-1]

    # Try as market slug first
    try:
        resp = requests.get(
            "https://gamma-api.polymarket.com/markets",
            params={"slug": market_slug},
            timeout=15,
        )
        data = resp.json()
        if isinstance(data, list) and data:
            return str(data[0]["id"]), data[0].get("question", "")
    except Exception:
        pass

    # If 3+ parts, the second segment is the event slug — resolve event and pick first market
    if len(parts) >= 2:
        event_slug = parts[1]
        try:
            resp = requests.get(
                "https://gamma-api.polymarket.com/events",
                params={"slug": event_slug},
                timeout=15,
            )
            data = resp.json()
            if isinstance(data, list) and data:
                markets = data[0].get("markets", [])
                if markets:
                    return str(markets[0]["id"]), markets[0].get("question", "")
        except Exception:
            pass

    return "", ""


def get_market_id() -> tuple[str, str]:
    """Prompt the user to enter a Polymarket URL or market ID."""
    user_input = questionary.text(
        "Paste a Polymarket URL or enter a numeric market ID:",
        validate=lambda x: len(x.strip()) > 0 or "Please enter a URL or market ID.",
        style=questionary.Style(
            [
                ("text", "fg:green"),
                ("highlighted", "noinherit"),
            ]
        ),
    ).ask()

    if not user_input:
        console.print("\n[red]No input provided. Exiting...[/red]")
        exit(1)

    user_input = user_input.strip()

    # Check if it's a URL
    if "polymarket.com" in user_input:
        console.print("[dim]Resolving Polymarket URL...[/dim]")
        market_id, market_question = _resolve_polymarket_url(user_input)
        if market_id:
            console.print(f"[green]Found:[/green] {market_question} (ID: {market_id})")
            return market_id, market_question
        else:
            console.print("[red]Could not resolve URL. Please enter a numeric market ID instead.[/red]")
            exit(1)

    # Otherwise treat as numeric market ID
    market_id = user_input

    # Try to fetch the question from the API
    market_question = ""
    try:
        import requests
        resp = requests.get(
            f"https://gamma-api.polymarket.com/markets/{market_id}",
            timeout=15,
        )
        if resp.status_code == 200:
            data = resp.json()
            market_question = data.get("question", "")
            if market_question:
                console.print(f"[green]Found:[/green] {market_question}")
    except Exception:
        pass

    return market_id, market_question


def select_pm_analysts() -> List[PMAnalystType]:
    """Select prediction market analysts using an interactive checkbox."""
    choices = questionary.checkbox(
        "Select Your [PM Analysts Team]:",
        choices=[
            questionary.Choice(display, value=value) for display, value in PM_ANALYST_ORDER
        ],
        instruction="\n- Press Space to select/unselect analysts\n- Press 'a' to select/unselect all\n- Press Enter when done",
        validate=lambda x: len(x) > 0 or "You must select at least one analyst.",
        style=questionary.Style(
            [
                ("checkbox-selected", "fg:green"),
                ("selected", "fg:green noinherit"),
                ("highlighted", "noinherit"),
                ("pointer", "noinherit"),
            ]
        ),
    ).ask()

    if not choices:
        console.print("\n[red]No analysts selected. Exiting...[/red]")
        exit(1)

    return choices


def get_ticker() -> str:
    """Prompt the user to enter a ticker symbol."""
@ -0,0 +1,3 @@
from tradingagents.prediction_market.graph.pm_trading_graph import PMTradingAgentsGraph

__all__ = ["PMTradingAgentsGraph"]
@@ -0,0 +1,13 @@
from tradingagents.prediction_market.agents.analysts.event_analyst import create_event_analyst
from tradingagents.prediction_market.agents.analysts.odds_analyst import create_odds_analyst
from tradingagents.prediction_market.agents.analysts.information_analyst import create_information_analyst
from tradingagents.prediction_market.agents.analysts.sentiment_analyst import create_sentiment_analyst
from tradingagents.prediction_market.agents.researchers.yes_researcher import create_yes_researcher
from tradingagents.prediction_market.agents.researchers.no_researcher import create_no_researcher
from tradingagents.prediction_market.agents.managers.research_manager import create_pm_research_manager
from tradingagents.prediction_market.agents.managers.risk_manager import create_pm_risk_manager
from tradingagents.prediction_market.agents.trader.pm_trader import create_pm_trader
from tradingagents.prediction_market.agents.risk_mgmt.aggressive_debator import create_pm_aggressive_debator
from tradingagents.prediction_market.agents.risk_mgmt.conservative_debator import create_pm_conservative_debator
from tradingagents.prediction_market.agents.risk_mgmt.neutral_debator import create_pm_neutral_debator
from tradingagents.prediction_market.agents.utils.pm_agent_utils import create_msg_delete
@@ -0,0 +1,73 @@
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

from tradingagents.prediction_market.agents.utils.pm_agent_utils import (
    get_market_info,
    get_resolution_criteria,
    get_event_context,
)


def create_event_analyst(llm):
    def event_analyst_node(state):
        current_date = state["trade_date"]
        market_id = state["market_id"]
        market_question = state["market_question"]

        tools = [
            get_market_info,
            get_resolution_criteria,
            get_event_context,
        ]

        system_message = (
            "You are an Event Analyst for prediction markets. Your task is to analyze the prediction market event itself. "
            "Understand what is being predicted, how the market resolves, and the timeline. "
            "Use the available tools to gather market info and resolution criteria. "
            "Your analysis should cover:\n"
            "1. Event description and what exactly is being predicted\n"
            "2. Resolution criteria - how will the outcome be determined? Is it clear or ambiguous?\n"
            "3. Key dates and triggers that could cause resolution\n"
            "4. Resolution ambiguity assessment (clear/moderate/ambiguous)\n"
            "5. Related markets within the same event if applicable\n"
            "Do not simply state that the situation is unclear; provide detailed and fine-grained analysis "
            "and insights that may help traders make decisions. "
            "Make sure to append a Markdown table at the end of the report that organizes the key points so they are easy to read."
        )

        prompt = ChatPromptTemplate.from_messages(
            [
                (
                    "system",
                    "You are a helpful AI assistant, collaborating with other assistants."
                    " Use the provided tools to progress towards answering the question."
                    " If you are unable to fully answer, that's OK; another assistant with different tools"
                    " will help where you left off. Execute what you can to make progress."
                    " If you or any other assistant has the FINAL PREDICTION: **YES/NO** or deliverable,"
                    " prefix your response with FINAL PREDICTION: **YES/NO** so the team knows to stop."
                    " You have access to the following tools: {tool_names}.\n{system_message}"
                    "\nFor your reference, the current date is {current_date}. Market ID: {market_id}. Question: {market_question}",
                ),
                MessagesPlaceholder(variable_name="messages"),
            ]
        )

        prompt = prompt.partial(system_message=system_message)
        prompt = prompt.partial(tool_names=", ".join([tool.name for tool in tools]))
        prompt = prompt.partial(current_date=current_date)
        prompt = prompt.partial(market_id=market_id)
        prompt = prompt.partial(market_question=market_question)

        chain = prompt | llm.bind_tools(tools)
        result = chain.invoke(state["messages"])

        report = ""

        if len(result.tool_calls) == 0:
            report = result.content

        return {
            "messages": [result],
            "event_report": report,
        }

    return event_analyst_node
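Each analyst node follows the same contract: while the model keeps emitting tool calls the graph routes back to a tool node, and only a tool-free turn is treated as the finished report (hence the `if len(result.tool_calls) == 0` check above). A plain-Python sketch of that routing decision; `FakeTurn` and `should_continue` are illustrative stand-ins, not names from the module:

```python
from dataclasses import dataclass, field


@dataclass
class FakeTurn:
    # Stand-in for the AIMessage returned by chain.invoke (illustrative only).
    content: str
    tool_calls: list = field(default_factory=list)


def should_continue(result) -> str:
    # Route to the tool node while the model is still requesting data;
    # once a turn carries no tool calls, its content becomes the report.
    return "tools" if len(result.tool_calls) > 0 else "done"
```

In the real graph this decision is wired as a LangGraph conditional edge between the analyst node and its tool node.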
@@ -0,0 +1,73 @@
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

from tradingagents.prediction_market.agents.utils.pm_agent_utils import (
    get_news,
    get_global_news,
    get_related_markets,
)


def create_information_analyst(llm):
    def information_analyst_node(state):
        current_date = state["trade_date"]
        market_id = state["market_id"]
        market_question = state["market_question"]

        tools = [
            get_news,
            get_global_news,
            get_related_markets,
        ]

        system_message = (
            "You are an Information Analyst for prediction markets. Your task is to find and analyze news, "
            "data, and developments that are relevant to the outcome of the prediction market event. "
            "Use the available tools to search for news and related markets. Your analysis should cover:\n"
            "1. Recent news and developments directly related to the event being predicted\n"
            "2. Broader macro or contextual factors that could influence the outcome\n"
            "3. Information the market may not have priced in yet (information edge)\n"
            "4. Assessment of how new information impacts the probability of each outcome\n"
            "5. Related markets and what their prices signal about this event\n"
            "6. Key upcoming catalysts or data releases that could move the market\n"
            "Do not simply state that the information is mixed; provide detailed and fine-grained analysis "
            "and insights that may help traders make decisions. "
            "Make sure to append a Markdown table at the end of the report that organizes the key points so they are easy to read."
        )

        prompt = ChatPromptTemplate.from_messages(
            [
                (
                    "system",
                    "You are a helpful AI assistant, collaborating with other assistants."
                    " Use the provided tools to progress towards answering the question."
                    " If you are unable to fully answer, that's OK; another assistant with different tools"
                    " will help where you left off. Execute what you can to make progress."
                    " If you or any other assistant has the FINAL PREDICTION: **YES/NO** or deliverable,"
                    " prefix your response with FINAL PREDICTION: **YES/NO** so the team knows to stop."
                    " You have access to the following tools: {tool_names}.\n{system_message}"
                    "\nFor your reference, the current date is {current_date}. Market ID: {market_id}. Question: {market_question}",
                ),
                MessagesPlaceholder(variable_name="messages"),
            ]
        )

        prompt = prompt.partial(system_message=system_message)
        prompt = prompt.partial(tool_names=", ".join([tool.name for tool in tools]))
        prompt = prompt.partial(current_date=current_date)
        prompt = prompt.partial(market_id=market_id)
        prompt = prompt.partial(market_question=market_question)

        chain = prompt | llm.bind_tools(tools)
        result = chain.invoke(state["messages"])

        report = ""

        if len(result.tool_calls) == 0:
            report = result.content

        return {
            "messages": [result],
            "information_report": report,
        }

    return information_analyst_node
@@ -0,0 +1,73 @@
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

from tradingagents.prediction_market.agents.utils.pm_agent_utils import (
    get_market_info,
    get_market_price_history,
    get_order_book,
)


def create_odds_analyst(llm):
    def odds_analyst_node(state):
        current_date = state["trade_date"]
        market_id = state["market_id"]
        market_question = state["market_question"]

        tools = [
            get_market_info,
            get_market_price_history,
            get_order_book,
        ]

        system_message = (
            "You are an Odds Analyst for prediction markets. Your task is to analyze the market microstructure "
            "and pricing dynamics of the prediction market. Use the available tools to gather market data, "
            "price history, and order book information. Your analysis should cover:\n"
            "1. Current price/probability and what it implies about market consensus\n"
            "2. Bid-ask spread and liquidity assessment - how easy is it to enter/exit positions?\n"
            "3. Order book depth - are there large resting orders that indicate informed traders?\n"
            "4. Price history trends - has the market been trending, mean-reverting, or volatile?\n"
            "5. Market efficiency assessment - are there signs of mispricing or stale prices?\n"
            "6. Market lifecycle stage (early/mid/late) based on time to resolution and volume patterns\n"
            "Do not simply state that the trends are mixed; provide detailed and fine-grained analysis "
            "and insights that may help traders make decisions. "
            "Make sure to append a Markdown table at the end of the report that organizes the key points so they are easy to read."
        )

        prompt = ChatPromptTemplate.from_messages(
            [
                (
                    "system",
                    "You are a helpful AI assistant, collaborating with other assistants."
                    " Use the provided tools to progress towards answering the question."
                    " If you are unable to fully answer, that's OK; another assistant with different tools"
                    " will help where you left off. Execute what you can to make progress."
                    " If you or any other assistant has the FINAL PREDICTION: **YES/NO** or deliverable,"
                    " prefix your response with FINAL PREDICTION: **YES/NO** so the team knows to stop."
                    " You have access to the following tools: {tool_names}.\n{system_message}"
                    "\nFor your reference, the current date is {current_date}. Market ID: {market_id}. Question: {market_question}",
                ),
                MessagesPlaceholder(variable_name="messages"),
            ]
        )

        prompt = prompt.partial(system_message=system_message)
        prompt = prompt.partial(tool_names=", ".join([tool.name for tool in tools]))
        prompt = prompt.partial(current_date=current_date)
        prompt = prompt.partial(market_id=market_id)
        prompt = prompt.partial(market_question=market_question)

        chain = prompt | llm.bind_tools(tools)
        result = chain.invoke(state["messages"])

        report = ""

        if len(result.tool_calls) == 0:
            report = result.content

        return {
            "messages": [result],
            "odds_report": report,
        }

    return odds_analyst_node
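The spread and implied-probability quantities the Odds Analyst is asked to reason about come straight from the order book. A minimal sketch, assuming bids and asks arrive as `(price, size)` pairs with prices in `[0, 1]` as for a binary YES token; the function and field names are illustrative, not part of the module:

```python
def book_summary(bids, asks):
    """Best bid/ask, spread, and midpoint for a binary-market order book.

    bids / asks: lists of (price, size) pairs. The midpoint is a crude
    market-implied probability of YES; a wide spread signals a thin,
    costly-to-trade market.
    """
    best_bid = max(price for price, _ in bids)
    best_ask = min(price for price, _ in asks)
    return {
        "best_bid": best_bid,
        "best_ask": best_ask,
        "spread": best_ask - best_bid,
        "mid": (best_bid + best_ask) / 2,
    }
```

A real liquidity assessment would also walk the book depth to estimate slippage for the intended position size.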
@@ -0,0 +1,72 @@
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

from tradingagents.prediction_market.agents.utils.pm_agent_utils import (
    get_news,
    search_markets,
)


def create_sentiment_analyst(llm):
    def sentiment_analyst_node(state):
        current_date = state["trade_date"]
        market_id = state["market_id"]
        market_question = state["market_question"]

        tools = [
            get_news,
            search_markets,
        ]

        system_message = (
            "You are a Sentiment Analyst for prediction markets. Your task is to analyze public opinion, "
            "social media discussions, and crowd sentiment around the prediction market event. "
            "Use the available tools to search for news sentiment and related market activity. "
            "Your analysis should cover:\n"
            "1. Public opinion and social media sentiment around the event\n"
            "2. Polls, surveys, or expert forecasts related to the predicted outcome\n"
            "3. Expert vs crowd divergence - where do domain experts disagree with market prices?\n"
            "4. Narrative momentum - is sentiment shifting in a particular direction?\n"
            "5. Sentiment extremes that may signal contrarian opportunities\n"
            "6. Related market sentiment and cross-market signals\n"
            "Do not simply state that the sentiment is mixed; provide detailed and fine-grained analysis "
            "and insights that may help traders make decisions. "
            "Make sure to append a Markdown table at the end of the report that organizes the key points so they are easy to read."
        )

        prompt = ChatPromptTemplate.from_messages(
            [
                (
                    "system",
                    "You are a helpful AI assistant, collaborating with other assistants."
                    " Use the provided tools to progress towards answering the question."
                    " If you are unable to fully answer, that's OK; another assistant with different tools"
                    " will help where you left off. Execute what you can to make progress."
                    " If you or any other assistant has the FINAL PREDICTION: **YES/NO** or deliverable,"
                    " prefix your response with FINAL PREDICTION: **YES/NO** so the team knows to stop."
                    " You have access to the following tools: {tool_names}.\n{system_message}"
                    "\nFor your reference, the current date is {current_date}. Market ID: {market_id}. Question: {market_question}",
                ),
                MessagesPlaceholder(variable_name="messages"),
            ]
        )

        prompt = prompt.partial(system_message=system_message)
        prompt = prompt.partial(tool_names=", ".join([tool.name for tool in tools]))
        prompt = prompt.partial(current_date=current_date)
        prompt = prompt.partial(market_id=market_id)
        prompt = prompt.partial(market_question=market_question)

        chain = prompt | llm.bind_tools(tools)
        result = chain.invoke(state["messages"])

        report = ""

        if len(result.tool_calls) == 0:
            report = result.content

        return {
            "messages": [result],
            "sentiment_report": report,
        }

    return sentiment_analyst_node
@@ -0,0 +1,58 @@
import time
import json


def create_pm_research_manager(llm, memory):
    def research_manager_node(state) -> dict:
        history = state["investment_debate_state"].get("history", "")
        event_report = state["event_report"]
        odds_report = state["odds_report"]
        information_report = state["information_report"]
        sentiment_report = state["sentiment_report"]

        investment_debate_state = state["investment_debate_state"]

        curr_situation = f"{event_report}\n\n{odds_report}\n\n{information_report}\n\n{sentiment_report}"
        past_memories = memory.get_memories(curr_situation, n_matches=2)

        past_memory_str = ""
        for i, rec in enumerate(past_memories, 1):
            past_memory_str += rec["recommendation"] + "\n\n"

        prompt = f"""As the research manager and debate judge for this prediction market analysis, your role is to critically evaluate the YES/NO debate and produce a definitive investment thesis. You must commit to a clear directional view rather than defaulting to neutrality.

Synthesize the key arguments from both the YES and NO analysts, focusing on the most compelling evidence. Your output must include:

1. Estimated True Probability: Your best estimate of the actual probability the event occurs, expressed as a percentage.
2. Market Price Comparison: How your estimated probability compares to the current market-implied odds.
3. Edge Calculation: The difference between your estimated probability and the market price. Positive edge means YES is underpriced; negative edge means YES is overpriced.
4. Confidence Level: How confident you are in your probability estimate (low, medium, or high), with justification.
5. Recommendation: A decisive stance (BUY YES, BUY NO, or HOLD), supported by the strongest arguments from the debate.
6. Rationale: An explanation of why these arguments lead to your conclusion.
7. Strategic Actions: Concrete steps for implementing the recommendation, including position sizing guidance based on edge size and confidence.

Take into account your past mistakes on similar situations. Use these insights to refine your decision-making and ensure you are learning and improving. Present your analysis conversationally, as if speaking naturally, without special formatting.

Here are your past reflections on mistakes:
\"{past_memory_str}\"

Here is the debate:
Debate History:
{history}"""
        response = llm.invoke(prompt)

        new_investment_debate_state = {
            "judge_decision": response.content,
            "history": investment_debate_state.get("history", ""),
            "no_history": investment_debate_state.get("no_history", ""),
            "yes_history": investment_debate_state.get("yes_history", ""),
            "current_response": response.content,
            "count": investment_debate_state["count"],
        }

        return {
            "investment_debate_state": new_investment_debate_state,
            "investment_plan": response.content,
        }

    return research_manager_node
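The edge and position-sizing guidance the manager's deliverables call for have a standard numeric form: edge is estimated probability minus market price, and for buying a YES share at price p with true probability q, the full-Kelly bankroll fraction reduces to (q − p)/(1 − p), since the share costs p and pays 1. A hedged sketch of the arithmetic the PM Trader's Kelly sizing rests on; the function name is illustrative:

```python
def edge_and_kelly(estimated_prob: float, market_price: float) -> tuple[float, float]:
    """Return (edge, full-Kelly bankroll fraction) for buying YES at market_price.

    A YES share costs market_price and pays 1 on resolution, so the net
    odds are (1 - p) / p and the Kelly fraction is (q - p) / (1 - p).
    """
    edge = estimated_prob - market_price
    if edge <= 0 or market_price >= 1.0:
        return edge, 0.0  # no positive edge on YES, so do not size a YES position
    return edge, edge / (1.0 - market_price)
```

In practice a half- or quarter-Kelly multiplier is common, because the probability estimate feeding the formula is itself uncertain.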
@@ -0,0 +1,84 @@
def create_pm_risk_manager(llm, memory):
    def risk_manager_node(state) -> dict:
        market_question = state["market_question"]

        history = state["risk_debate_state"]["history"]
        risk_debate_state = state["risk_debate_state"]
        event_report = state["event_report"]
        odds_report = state["odds_report"]
        information_report = state["information_report"]
        sentiment_report = state["sentiment_report"]
        trader_plan = state["trader_investment_plan"]

        curr_situation = f"{event_report}\n\n{odds_report}\n\n{information_report}\n\n{sentiment_report}"
        past_memories = memory.get_memories(curr_situation, n_matches=2)

        past_memory_str = ""
        if past_memories:
            for i, rec in enumerate(past_memories, 1):
                past_memory_str += rec["recommendation"] + "\n\n"
        else:
            past_memory_str = "No past memories found."

        prompt = f"""As the Risk Management Judge for prediction markets, your goal is to evaluate the debate between three risk analysts -- Aggressive, Neutral, and Conservative -- and determine the best course of action for the trader's proposed position on:

MARKET QUESTION: {market_question}

Your decision must result in a clear recommendation: APPROVE the trade as proposed, MODIFY the trade with specific adjustments, or REJECT the trade entirely. Choose REJECT (final position PASS) only if strongly justified by specific risk arguments, not as a fallback when all sides seem valid. Strive for clarity and decisiveness.

MANDATORY RISK ASSESSMENTS -- You must explicitly address each of the following:

1. **RESOLUTION RISK**: How clear are the resolution criteria? What is the probability of disputed or ambiguous resolution? Could the market resolve on a technicality that differs from the spirit of the question?

2. **LIQUIDITY RISK**: Can the position be exited if the thesis changes? What is the expected slippage? Is the position size appropriate relative to market depth?

3. **CORRELATION RISK**: Does this position create concentrated exposure to a single event type, domain, or correlated outcome? How would correlated losses across similar positions compound?

Guidelines for Decision-Making:
1. **Summarize Key Arguments**: Extract the strongest points from each analyst, focusing on relevance to the prediction market context.
2. **Provide Rationale**: Support your recommendation with direct quotes and counterarguments from the debate.
3. **Refine the Trader's Plan**: Start with the trader's original plan and adjust it based on the analysts' insights. If the edge is insufficient or the risks too high, recommend PASS.
4. **Learn from Past Mistakes**: Use lessons from past reflections to address prior misjudgments and improve the decision you are making now: {past_memory_str}

Deliverables:
- Explicit assessment of resolution risk, liquidity risk, and correlation risk.
- A clear and actionable recommendation: APPROVE (with the proposed sizing), MODIFY (with specific adjustments to size, direction, or conditions), or REJECT (with reasoning).
- If APPROVE or MODIFY, state the final position: BUY_YES or BUY_NO with sizing guidance.
- If REJECT, the final position is PASS.
- Detailed reasoning anchored in the debate and past reflections.

---

**Trader's Proposed Plan:**
{trader_plan}

**Analysts Debate History:**
{history}

---

Focus on actionable insights and continuous improvement. Build on past lessons, critically evaluate all perspectives, and ensure each decision advances better outcomes.

Always conclude your response with 'FINAL TRADE DECISION: **BUY_YES/BUY_NO/PASS**' to confirm your recommendation."""

        response = llm.invoke(prompt)

        new_risk_debate_state = {
            "judge_decision": response.content,
            "history": risk_debate_state["history"],
            "aggressive_history": risk_debate_state["aggressive_history"],
            "conservative_history": risk_debate_state["conservative_history"],
            "neutral_history": risk_debate_state["neutral_history"],
            "latest_speaker": "Judge",
            "current_aggressive_response": risk_debate_state["current_aggressive_response"],
            "current_conservative_response": risk_debate_state["current_conservative_response"],
            "current_neutral_response": risk_debate_state["current_neutral_response"],
            "count": risk_debate_state["count"],
        }

        return {
            "risk_debate_state": new_risk_debate_state,
            "final_trade_decision": response.content,
        }

    return risk_manager_node
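Downstream code needs the verdict out of the judge's free-form answer. Since the prompt pins the closing line to 'FINAL TRADE DECISION: **BUY_YES/BUY_NO/PASS**', a small regex suffices; a sketch with an illustrative helper name:

```python
import re


def extract_final_decision(text: str):
    """Pull BUY_YES, BUY_NO, or PASS from the judge's mandated closing line.

    Returns None when the model failed to emit the line, so callers can
    fall back to a safe default such as PASS.
    """
    match = re.search(r"FINAL TRADE DECISION:\s*\**\s*(BUY_YES|BUY_NO|PASS)", text)
    return match.group(1) if match else None
```

Tolerating optional `**` around the verdict matters in practice, because models are inconsistent about reproducing Markdown bold markers.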
@@ -0,0 +1,61 @@
from langchain_core.messages import AIMessage
import time
import json


def create_no_researcher(llm, memory):
    def no_node(state) -> dict:
        investment_debate_state = state["investment_debate_state"]
        history = investment_debate_state.get("history", "")
        no_history = investment_debate_state.get("no_history", "")

        current_response = investment_debate_state.get("current_response", "")
        event_report = state["event_report"]
        odds_report = state["odds_report"]
        information_report = state["information_report"]
        sentiment_report = state["sentiment_report"]

        curr_situation = f"{event_report}\n\n{odds_report}\n\n{information_report}\n\n{sentiment_report}"
        past_memories = memory.get_memories(curr_situation, n_matches=2)

        past_memory_str = ""
        for i, rec in enumerate(past_memories, 1):
            past_memory_str += rec["recommendation"] + "\n\n"

        prompt = f"""You are a NO Analyst making the case that the prediction market event will NOT occur. Your goal is to present a well-reasoned argument that the YES probability should be lower than the current market price. Leverage the provided research and data to highlight potential obstacles and counter YES arguments effectively.

Key points to focus on:

- Risks and Obstacles: Highlight factors like structural barriers, historical base rates, opposing forces, or conditions that make the event unlikely to occur.
- Market Overpricing: Argue why the current market odds overvalue the YES outcome, identifying where optimism bias or herding behavior may be inflating the price.
- Negative Indicators: Use evidence from event analysis, historical precedent, expert opinions, or recent adverse developments to support your position.
- YES Counterpoints: Critically analyze the YES argument with specific data and sound reasoning, exposing weaknesses or over-optimistic assumptions.
- Engagement: Present your argument in a conversational style, directly engaging with the YES analyst's points and debating effectively rather than simply listing facts.

Resources available:

Event analysis report: {event_report}
Market odds report: {odds_report}
Information and news report: {information_report}
Public sentiment report: {sentiment_report}
Conversation history of the debate: {history}
Last YES argument: {current_response}
Reflections from similar situations and lessons learned: {past_memory_str}
Use this information to deliver a compelling NO argument, refute the YES analyst's claims, and engage in a dynamic debate that demonstrates why the event is less likely to occur than the market currently implies. You must also address reflections and learn from lessons and mistakes you made in the past.
"""

        response = llm.invoke(prompt)

        argument = f"NO Analyst: {response.content}"

        new_investment_debate_state = {
            "history": history + "\n" + argument,
            "no_history": no_history + "\n" + argument,
            "yes_history": investment_debate_state.get("yes_history", ""),
            "current_response": argument,
            "count": investment_debate_state["count"] + 1,
        }

        return {"investment_debate_state": new_investment_debate_state}

    return no_node
@@ -0,0 +1,59 @@
from langchain_core.messages import AIMessage
import time
import json


def create_yes_researcher(llm, memory):
    def yes_node(state) -> dict:
        investment_debate_state = state["investment_debate_state"]
        history = investment_debate_state.get("history", "")
        yes_history = investment_debate_state.get("yes_history", "")

        current_response = investment_debate_state.get("current_response", "")
        event_report = state["event_report"]
        odds_report = state["odds_report"]
        information_report = state["information_report"]
        sentiment_report = state["sentiment_report"]

        curr_situation = f"{event_report}\n\n{odds_report}\n\n{information_report}\n\n{sentiment_report}"
        past_memories = memory.get_memories(curr_situation, n_matches=2)

        past_memory_str = ""
        for i, rec in enumerate(past_memories, 1):
            past_memory_str += rec["recommendation"] + "\n\n"

        prompt = f"""You are a YES Analyst advocating that the prediction market event WILL occur. Your task is to build a strong, evidence-based case that the YES probability should be higher than the current market price. Leverage the provided research and data to address concerns and counter NO arguments effectively.

Key points to focus on:
- Supporting Evidence: Highlight concrete indicators, trends, and data points that suggest the event is likely to occur.
- Probability Assessment: Argue why the current market odds undervalue the YES outcome, identifying where the market may be mispricing risk.
- Positive Catalysts: Emphasize upcoming events, momentum shifts, or developments that increase the likelihood of the event occurring.
- NO Counterpoints: Critically analyze the NO argument with specific data and sound reasoning, addressing concerns thoroughly and showing why the YES perspective holds stronger merit.
- Engagement: Present your argument in a conversational style, engaging directly with the NO analyst's points and debating effectively rather than just listing data.

Resources available:
Event analysis report: {event_report}
Market odds report: {odds_report}
Information and news report: {information_report}
Public sentiment report: {sentiment_report}
Conversation history of the debate: {history}
Last NO argument: {current_response}
Reflections from similar situations and lessons learned: {past_memory_str}
Use this information to deliver a compelling YES argument, refute the NO analyst's concerns, and engage in a dynamic debate that demonstrates why the event is more likely to occur than the market currently implies. You must also address reflections and learn from lessons and mistakes you made in the past.
"""

        response = llm.invoke(prompt)

        argument = f"YES Analyst: {response.content}"

        new_investment_debate_state = {
            "history": history + "\n" + argument,
            "yes_history": yes_history + "\n" + argument,
            "no_history": investment_debate_state.get("no_history", ""),
            "current_response": argument,
            "count": investment_debate_state["count"] + 1,
        }

        return {"investment_debate_state": new_investment_debate_state}

    return yes_node
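In the graph, conditional edges alternate these two researchers until a configured round budget is spent, with `count` tracking turns and each node appending its argument to the shared `history`. A plain-Python sketch of that turn-taking, usable with stub nodes in place of the LLM-backed ones; `run_debate` and `max_rounds` are illustrative names, not part of the module:

```python
def run_debate(yes_node, no_node, state, max_rounds: int = 2) -> dict:
    """Alternate YES and NO turns (YES opens) until 2 * max_rounds turns have run.

    yes_node / no_node follow the factory contract above: each reads
    state["investment_debate_state"] and returns an updated copy of it.
    """
    while state["investment_debate_state"]["count"] < 2 * max_rounds:
        turn = state["investment_debate_state"]["count"]
        node = yes_node if turn % 2 == 0 else no_node
        state = {**state, **node(state)}
    return state
```

The research manager then judges the accumulated `history` once this loop exits.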
@@ -0,0 +1,63 @@
def create_pm_aggressive_debator(llm):
    def aggressive_node(state) -> dict:
        risk_debate_state = state["risk_debate_state"]
        history = risk_debate_state.get("history", "")
        aggressive_history = risk_debate_state.get("aggressive_history", "")

        current_conservative_response = risk_debate_state.get("current_conservative_response", "")
        current_neutral_response = risk_debate_state.get("current_neutral_response", "")

        event_report = state["event_report"]
        odds_report = state["odds_report"]
        information_report = state["information_report"]
        sentiment_report = state["sentiment_report"]

        trader_decision = state["trader_investment_plan"]

        prompt = f"""As the Aggressive Risk Analyst for prediction markets, your role is to actively champion the trader's proposed position, emphasizing the magnitude of the identified edge and the information advantage it represents. When evaluating the trader's decision, focus intently on the potential upside, the strength of the probability estimate, and the favorable risk/reward ratio of the position. Use the provided market data and analysis to strengthen your arguments and challenge the opposing views.

Specifically, respond directly to each point made by the conservative and neutral analysts, countering with data-driven rebuttals and persuasive reasoning. Highlight where their caution might cause the team to miss a profitable opportunity or where their risk concerns are overblown relative to the identified edge.

Key arguments to emphasize:
- The magnitude of the edge between estimated probability and market price justifies the position
- The information advantage from our analyst team gives us superior probability estimates
- Favorable odds structures mean limited downside with asymmetric upside
- Market inefficiencies in prediction markets are well-documented and exploitable
- Conservative concerns about resolution risk or liquidity are often overstated for well-structured markets
- Time value of the position if the event resolves sooner than expected

Here is the trader's decision:

{trader_decision}

Your task is to create a compelling case for the trader's decision by questioning and critiquing the conservative and neutral stances to demonstrate why taking this position offers the best path forward. Incorporate insights from the following sources into your arguments:

Event Analysis Report: {event_report}
Odds Analysis Report: {odds_report}
Information Analysis Report: {information_report}
Sentiment Analysis Report: {sentiment_report}
Here is the current conversation history: {history}
Here are the last arguments from the conservative analyst: {current_conservative_response}
Here are the last arguments from the neutral analyst: {current_neutral_response}
If there are no responses from the other viewpoints, do not hallucinate and just present your point.

Engage actively by addressing any specific concerns raised, refuting the weaknesses in their logic, and asserting the benefits of taking the position to capitalize on the identified edge. Maintain a focus on debating and persuading, not just presenting data. Challenge each counterpoint to underscore why the proposed trade is optimal. Output conversationally as if you are speaking without any special formatting."""

        response = llm.invoke(prompt)

        argument = f"Aggressive Analyst: {response.content}"

        new_risk_debate_state = {
            "history": history + "\n" + argument,
            "aggressive_history": aggressive_history + "\n" + argument,
            "conservative_history": risk_debate_state.get("conservative_history", ""),
            "neutral_history": risk_debate_state.get("neutral_history", ""),
            "latest_speaker": "Aggressive",
            "current_aggressive_response": argument,
            "current_conservative_response": risk_debate_state.get("current_conservative_response", ""),
            "current_neutral_response": risk_debate_state.get("current_neutral_response", ""),
            "count": risk_debate_state["count"] + 1,
        }

        return {"risk_debate_state": new_risk_debate_state}

    return aggressive_node
@@ -0,0 +1,62 @@
def create_pm_conservative_debator(llm):
    def conservative_node(state) -> dict:
        risk_debate_state = state["risk_debate_state"]
        history = risk_debate_state.get("history", "")
        conservative_history = risk_debate_state.get("conservative_history", "")

        current_aggressive_response = risk_debate_state.get("current_aggressive_response", "")
        current_neutral_response = risk_debate_state.get("current_neutral_response", "")

        event_report = state["event_report"]
        odds_report = state["odds_report"]
        information_report = state["information_report"]
        sentiment_report = state["sentiment_report"]

        trader_decision = state["trader_investment_plan"]

        prompt = f"""As the Conservative Risk Analyst for prediction markets, your primary objective is to protect capital and ensure that only positions with genuinely favorable risk/reward profiles are taken. You prioritize preservation of capital, careful assessment of downside scenarios, and thorough evaluation of all risks unique to prediction markets. When evaluating the trader's decision, critically examine high-risk elements and point out where the position may expose us to undue risk.

Key risks to focus on:
- RESOLUTION AMBIGUITY RISK: How clear are the resolution criteria? Could the market resolve in an unexpected way due to vague or disputed criteria? Has the resolution source been reliable historically?
- LIQUIDITY RISK: Can we exit the position if our thesis changes? What is the bid-ask spread? Could we be stuck in an illiquid position as resolution approaches?
- CORRELATION EXPOSURE: Are we already exposed to similar outcomes through other positions? Does this position concentrate risk in a single domain or event type?
- MODEL UNCERTAINTY: How confident can we really be in our probability estimate? What is the estimation error band? Small errors in probability estimation can eliminate the perceived edge entirely.
- TIME DECAY: How long until resolution? Extended time horizons increase the chance of regime changes, new information, or shifts that invalidate our current analysis. Capital locked in long-duration positions has opportunity cost.

Here is the trader's decision:

{trader_decision}

Your task is to actively counter the arguments of the Aggressive and Neutral Analysts, highlighting where their views may overlook potential threats or fail to account for prediction-market-specific risks. Respond directly to their points, drawing from the following data sources to build a convincing case for a cautious approach or outright rejection of the position:

Event Analysis Report: {event_report}
Odds Analysis Report: {odds_report}
Information Analysis Report: {information_report}
Sentiment Analysis Report: {sentiment_report}
Here is the current conversation history: {history} Here is the last response from the aggressive analyst: {current_aggressive_response} Here is the last response from the neutral analyst: {current_neutral_response}. If there are no responses from the other viewpoints, do not hallucinate and just present your point.

Engage by questioning their optimism and emphasizing the potential downsides they may have overlooked. Address each of their counterpoints to showcase why a conservative stance is ultimately the safest path for preserving capital. Focus on debating and critiquing their arguments to demonstrate the strength of a cautious strategy over their approaches. Output conversationally as if you are speaking without any special formatting."""

        response = llm.invoke(prompt)

        argument = f"Conservative Analyst: {response.content}"

        new_risk_debate_state = {
            "history": history + "\n" + argument,
            "aggressive_history": risk_debate_state.get("aggressive_history", ""),
            "conservative_history": conservative_history + "\n" + argument,
            "neutral_history": risk_debate_state.get("neutral_history", ""),
            "latest_speaker": "Conservative",
            "current_aggressive_response": risk_debate_state.get("current_aggressive_response", ""),
            "current_conservative_response": argument,
            "current_neutral_response": risk_debate_state.get("current_neutral_response", ""),
            "count": risk_debate_state["count"] + 1,
        }

        return {"risk_debate_state": new_risk_debate_state}

    return conservative_node
@@ -0,0 +1,60 @@
def create_pm_neutral_debator(llm):
    def neutral_node(state) -> dict:
        risk_debate_state = state["risk_debate_state"]
        history = risk_debate_state.get("history", "")
        neutral_history = risk_debate_state.get("neutral_history", "")

        current_aggressive_response = risk_debate_state.get("current_aggressive_response", "")
        current_conservative_response = risk_debate_state.get("current_conservative_response", "")

        event_report = state["event_report"]
        odds_report = state["odds_report"]
        information_report = state["information_report"]
        sentiment_report = state["sentiment_report"]

        trader_decision = state["trader_investment_plan"]

        prompt = f"""As the Neutral Risk Analyst for prediction markets, your role is to provide a balanced perspective, weighing both the potential upside of the trade and the legitimate risks. You prioritize a well-rounded approach, evaluating the trader's probability estimate, the appropriateness of the position sizing, and whether the risk/reward truly justifies the position.

Key areas to focus on:
- BALANCED RISK/REWARD ASSESSMENT: Does the identified edge truly compensate for the risks involved? Is the trader's probability estimate reasonable given the available evidence, or could it be biased by selective analysis?
- FRACTIONAL KELLY APPROPRIATENESS: Is the proposed 0.25x fractional Kelly sizing appropriate for this specific market? Should it be more conservative (0.1x) given estimation uncertainty, or could a slightly larger fraction be justified if the edge is robust?
- TIME-TO-RESOLUTION IMPACT: How does the time remaining until resolution affect the trade? Shorter durations reduce uncertainty but may also reduce edge as markets become more efficient near resolution. Longer durations increase the chance of new information invalidating the thesis.
- POSITION SIZING CALIBRATION: Even if the direction is correct, is the size right? Consider the impact of estimation errors on Kelly sizing and whether partial positions or scaling strategies would be more prudent.
- ALTERNATIVE STRUCTURES: Could the same thesis be expressed with less risk? For example, could we wait for better entry, use a smaller position, or combine with a correlated market for a hedged expression?

Here is the trader's decision:

{trader_decision}

Your task is to challenge both the Aggressive and Conservative Analysts, pointing out where each perspective may be overly optimistic or overly cautious. Use insights from the following data sources to support a moderate, well-calibrated approach:

Event Analysis Report: {event_report}
Odds Analysis Report: {odds_report}
Information Analysis Report: {information_report}
Sentiment Analysis Report: {sentiment_report}
Here is the current conversation history: {history} Here is the last response from the aggressive analyst: {current_aggressive_response} Here is the last response from the conservative analyst: {current_conservative_response}. If there are no responses from the other viewpoints, do not hallucinate and just present your point.

Engage actively by analyzing both sides critically, addressing weaknesses in the aggressive and conservative arguments to advocate for a properly calibrated approach. Challenge each of their points to illustrate why a balanced assessment of edge, sizing, and timing leads to the most reliable outcomes. Focus on debating rather than simply presenting data, aiming to show that careful calibration of both direction and size produces the best risk-adjusted returns. Output conversationally as if you are speaking without any special formatting."""

        response = llm.invoke(prompt)

        argument = f"Neutral Analyst: {response.content}"

        new_risk_debate_state = {
            "history": history + "\n" + argument,
            "aggressive_history": risk_debate_state.get("aggressive_history", ""),
            "conservative_history": risk_debate_state.get("conservative_history", ""),
            "neutral_history": neutral_history + "\n" + argument,
            "latest_speaker": "Neutral",
            "current_aggressive_response": risk_debate_state.get("current_aggressive_response", ""),
            "current_conservative_response": risk_debate_state.get("current_conservative_response", ""),
            "current_neutral_response": argument,
            "count": risk_debate_state["count"] + 1,
        }

        return {"risk_debate_state": new_risk_debate_state}

    return neutral_node
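All three debator nodes above grow the shared debate state with the same append-and-rotate pattern: concatenate the new argument onto the running history, record the speaker, and bump the counter. A minimal standalone sketch of that pattern (in the real module the rotation is driven by the LangGraph state machine and each argument comes from `llm.invoke`):

```python
# Minimal sketch of the append-and-rotate pattern shared by the debator nodes.
# The placeholder argument stands in for llm.invoke(prompt).content.
state = {"history": "", "latest_speaker": "", "count": 0}

for speaker in ["Aggressive", "Conservative", "Neutral"]:
    argument = f"{speaker} Analyst: ..."  # placeholder for the LLM response
    state = {
        "history": state["history"] + "\n" + argument,
        "latest_speaker": speaker,
        "count": state["count"] + 1,
    }
```

The judge node can then read `history` as a full transcript and `count` to decide when the debate has run enough rounds.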
@@ -0,0 +1,84 @@
import functools


def create_pm_trader(llm, memory):
    def trader_node(state, name):
        market_question = state["market_question"]
        investment_plan = state["investment_plan"]
        event_report = state["event_report"]
        odds_report = state["odds_report"]
        information_report = state["information_report"]
        sentiment_report = state["sentiment_report"]

        curr_situation = f"{event_report}\n\n{odds_report}\n\n{information_report}\n\n{sentiment_report}"
        past_memories = memory.get_memories(curr_situation, n_matches=2)

        past_memory_str = ""
        if past_memories:
            for rec in past_memories:
                past_memory_str += rec["recommendation"] + "\n\n"
        else:
            past_memory_str = "No past memories found."

        context = {
            "role": "user",
            "content": (
                f"You are evaluating a prediction market position for the following question:\n\n"
                f"MARKET QUESTION: {market_question}\n\n"
                f"Based on a comprehensive analysis by a team of analysts, here is the investment plan "
                f"synthesized from event analysis, odds analysis, information research, and sentiment analysis. "
                f"Use this plan as a foundation for your trading decision.\n\n"
                f"Proposed Investment Plan:\n{investment_plan}\n\n"
                f"Event Analysis Report:\n{event_report}\n\n"
                f"Odds Analysis Report:\n{odds_report}\n\n"
                f"Information Analysis Report:\n{information_report}\n\n"
                f"Sentiment Analysis Report:\n{sentiment_report}\n\n"
                f"Leverage these insights to make an informed and strategic trading decision."
            ),
        }

        messages = [
            {
                "role": "system",
                "content": f"""You are a prediction market trader analyzing market data to make trading decisions on binary outcome markets. Your goal is to identify mispriced contracts and exploit the edge between your estimated true probability and the current market price.

DECISION FRAMEWORK:
1. Estimate the TRUE PROBABILITY of the event occurring based on all available analysis.
2. Compare your estimated probability against the current market price (from the odds report).
3. Calculate your EDGE: Edge = |Estimated Probability - Market Price|
4. Apply a MINIMUM EDGE THRESHOLD of 5%. If your edge is below 5%, you MUST recommend PASS regardless of direction.
5. For position sizing, use 0.25x FRACTIONAL KELLY CRITERION:
   - Kelly fraction = edge / odds_against
   - Position size = 0.25 * Kelly fraction * bankroll
   - This conservative sizing protects against estimation errors.

YOUR ANALYSIS MUST INCLUDE:
- Your estimated true probability (with reasoning)
- The current market price
- Your calculated edge (estimated probability minus market price)
- Whether the edge exceeds the 5% minimum threshold
- Position sizing reasoning using fractional Kelly
- Key risks that could invalidate your probability estimate

DECISION OPTIONS:
- BUY_YES: You believe the event is MORE likely than the market implies (your probability > market price + 5%)
- BUY_NO: You believe the event is LESS likely than the market implies (your probability < market price - 5%)
- PASS: Your edge is below 5%, or uncertainty is too high to have conviction

Do not forget to utilize lessons from past decisions to learn from your mistakes. Here are reflections from similar situations you traded in and the lessons learned:
{past_memory_str}

Always conclude your response with 'FINAL TRADE PROPOSAL: **BUY_YES/BUY_NO/PASS**' to confirm your recommendation.""",
            },
            context,
        ]

        result = llm.invoke(messages)

        return {
            "messages": [result],
            "trader_investment_plan": result.content,
            "sender": name,
        }

    return functools.partial(trader_node, name="Trader")
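The sizing rules in the trader's system prompt (5% minimum edge, 0.25x fractional Kelly) can be sketched numerically. This is an illustrative sketch of the stated rules, not part of the module; the helper name `fractional_kelly_size` is hypothetical, and it uses the standard binary-contract Kelly fraction (q - p) / (1 - p) for buying YES at price p with estimated probability q.

```python
def fractional_kelly_size(est_prob, market_price, bankroll,
                          kelly_fraction=0.25, min_edge=0.05):
    """Hypothetical helper illustrating the prompt's sizing rules.

    Returns (decision, dollar_size): PASS below the 5% edge threshold,
    otherwise 0.25x of the full Kelly stake for the favored side.
    """
    edge = est_prob - market_price
    if abs(edge) < min_edge:
        return "PASS", 0.0
    if edge > 0:
        # Buying YES at price p: full Kelly fraction is (q - p) / (1 - p)
        full_kelly = edge / (1 - market_price)
        side = "BUY_YES"
    else:
        # Buying NO at price (1 - p) with probability (1 - q): ((1-q)-(1-p)) / p
        full_kelly = -edge / market_price
        side = "BUY_NO"
    return side, kelly_fraction * full_kelly * bankroll
```

For example, with an estimated probability of 0.62 against a market price of 0.50 and a $1,000 bankroll, the edge is 12%, the full Kelly fraction is 0.24, and the 0.25x fractional stake is $60.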
@@ -0,0 +1,54 @@
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import MessagesState


class PMInvestDebateState(TypedDict):
    yes_history: Annotated[str, "YES side debate history"]
    no_history: Annotated[str, "NO side debate history"]
    history: Annotated[str, "Full debate history"]
    current_response: Annotated[str, "Latest argument"]
    judge_decision: Annotated[str, "Research manager's synthesis"]
    count: Annotated[int, "Length of the current conversation"]


class PMRiskDebateState(TypedDict):
    aggressive_history: Annotated[str, "Aggressive Agent's history"]
    conservative_history: Annotated[str, "Conservative Agent's history"]
    neutral_history: Annotated[str, "Neutral Agent's history"]
    history: Annotated[str, "Full debate history"]
    latest_speaker: Annotated[str, "Analyst that spoke last"]
    current_aggressive_response: Annotated[str, "Latest aggressive response"]
    current_conservative_response: Annotated[str, "Latest conservative response"]
    current_neutral_response: Annotated[str, "Latest neutral response"]
    judge_decision: Annotated[str, "Risk judge's decision"]
    count: Annotated[int, "Length of the current conversation"]


class PMAgentState(MessagesState):
    market_id: Annotated[str, "Polymarket condition ID"]
    market_question: Annotated[str, "Full question text of the prediction market"]
    trade_date: Annotated[str, "Date of analysis"]

    sender: Annotated[str, "Agent that sent this message"]

    # Analyst reports
    event_report: Annotated[str, "Report from the Event Analyst"]
    odds_report: Annotated[str, "Report from the Odds Analyst"]
    information_report: Annotated[str, "Report from the Information Analyst"]
    sentiment_report: Annotated[str, "Report from the Sentiment Analyst"]

    # Researcher debate
    investment_debate_state: Annotated[
        PMInvestDebateState, "State of the YES/NO investment debate"
    ]
    investment_plan: Annotated[str, "Plan generated by the Research Manager"]

    # Trading
    trader_investment_plan: Annotated[str, "Plan generated by the PM Trader"]

    # Risk management debate
    risk_debate_state: Annotated[
        PMRiskDebateState, "State of the risk management debate"
    ]
    final_trade_decision: Annotated[str, "Final decision from the Risk Manager"]
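The debator nodes index into `risk_debate_state` with both `[...]` (for `count`) and `.get(..., "")`, so the state must be seeded before the first risk round. A sketch of an initial value matching the `PMRiskDebateState` fields above (how the actual graph setup seeds it may differ):

```python
# Sketch of seeding the risk debate state the debator nodes expect.
# Field names come from PMRiskDebateState; the real graph setup may differ.
initial_risk_debate_state = {
    "aggressive_history": "",
    "conservative_history": "",
    "neutral_history": "",
    "history": "",
    "latest_speaker": "",
    "current_aggressive_response": "",
    "current_conservative_response": "",
    "current_neutral_response": "",
    "judge_decision": "",
    "count": 0,  # debator nodes read risk_debate_state["count"] + 1
}
```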
@@ -0,0 +1,25 @@
from langchain_core.messages import HumanMessage, RemoveMessage

from tradingagents.prediction_market.agents.utils.pm_tools import (
    get_market_info,
    get_market_price_history,
    get_order_book,
    get_resolution_criteria,
    get_event_context,
    get_related_markets,
    search_markets,
)

# Re-export news tools from the existing stock module (news is useful for PM too)
from tradingagents.agents.utils.agent_utils import get_news, get_global_news


def create_msg_delete():
    def delete_messages(state):
        """Clear messages and add placeholder for Anthropic compatibility."""
        messages = state["messages"]
        removal_operations = [RemoveMessage(id=m.id) for m in messages]
        placeholder = HumanMessage(content="Continue")
        return {"messages": removal_operations + [placeholder]}

    return delete_messages
@@ -0,0 +1,92 @@
"""Tool definitions for prediction market agents.

Each tool is a @tool-decorated function that calls the Polymarket data layer.
"""

from langchain_core.tools import tool

from tradingagents.prediction_market.dataflows.polymarket import (
    get_polymarket_market_info,
    get_polymarket_price_history,
    get_polymarket_order_book,
    get_polymarket_resolution_criteria,
    get_polymarket_event_context,
    get_polymarket_related_markets,
    get_polymarket_search,
)


@tool
def get_market_info(market_id: str, curr_date: str) -> str:
    """Get prediction market info including question, current prices, volume, liquidity, and resolution criteria.

    Args:
        market_id: The Polymarket market/condition ID
        curr_date: Current date for reference (YYYY-MM-DD)
    """
    return get_polymarket_market_info(market_id)


@tool
def get_market_price_history(market_id: str, start_date: str, end_date: str) -> str:
    """Get historical probability time series for a prediction market.

    Args:
        market_id: The Polymarket market/condition ID
        start_date: Start date (YYYY-MM-DD)
        end_date: End date (YYYY-MM-DD)
    """
    return get_polymarket_price_history(market_id, start_date, end_date)


@tool
def get_order_book(market_id: str) -> str:
    """Get current order book depth and spread analysis for a prediction market.

    Args:
        market_id: The Polymarket market/condition ID
    """
    return get_polymarket_order_book(market_id)


@tool
def get_resolution_criteria(market_id: str) -> str:
    """Get detailed resolution criteria, source, and timeline for a prediction market.

    Args:
        market_id: The Polymarket market/condition ID
    """
    return get_polymarket_resolution_criteria(market_id)


@tool
def get_event_context(event_id: str, curr_date: str) -> str:
    """Get all markets grouped under a prediction market event.

    Args:
        event_id: The Polymarket event ID
        curr_date: Current date for reference (YYYY-MM-DD)
    """
    return get_polymarket_event_context(event_id)


@tool
def get_related_markets(query: str, limit: int = 5) -> str:
    """Search for active prediction market events sorted by volume.

    Args:
        query: Search topic (unused for now, returns top by volume)
        limit: Maximum number of results (default 5)
    """
    return get_polymarket_related_markets(query, limit)


@tool
def search_markets(query: str, limit: int = 10) -> str:
    """Search Polymarket for markets matching a query string.

    Args:
        query: Search query (e.g. 'US election', 'Bitcoin', 'Fed rate')
        limit: Maximum number of results (default 10)
    """
    return get_polymarket_search(query, limit)
@@ -0,0 +1,406 @@
"""Polymarket API client for prediction market data.

Uses the public Gamma API and CLOB API — no authentication required for read-only access.
"""

import os
import json
import hashlib
import time
from datetime import datetime, timedelta
from typing import Optional

import requests


GAMMA_BASE = "https://gamma-api.polymarket.com"
CLOB_BASE = "https://clob.polymarket.com"

# Simple file-based cache
_CACHE_DIR = None


def _get_cache_dir():
    global _CACHE_DIR
    if _CACHE_DIR is None:
        _CACHE_DIR = os.path.join(
            os.path.dirname(__file__), "data_cache", "polymarket"
        )
        os.makedirs(_CACHE_DIR, exist_ok=True)
    return _CACHE_DIR


def _cache_key(prefix: str, **kwargs) -> str:
    raw = f"{prefix}:{json.dumps(kwargs, sort_keys=True)}"
    return hashlib.md5(raw.encode()).hexdigest()


def _get_cached(key: str, max_age_seconds: int = 300):
    path = os.path.join(_get_cache_dir(), f"{key}.json")
    if os.path.exists(path):
        mtime = os.path.getmtime(path)
        if time.time() - mtime < max_age_seconds:
            with open(path, "r") as f:
                return json.load(f)
    return None


def _set_cached(key: str, data):
    path = os.path.join(_get_cache_dir(), f"{key}.json")
    with open(path, "w") as f:
        json.dump(data, f)
def _gamma_get(endpoint: str, params: Optional[dict] = None, cache_seconds: int = 300):
    """Make a GET request to the Gamma API with caching."""
    key = _cache_key("gamma", endpoint=endpoint, params=params)
    cached = _get_cached(key, cache_seconds)
    if cached is not None:
        return cached

    url = f"{GAMMA_BASE}{endpoint}"
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    data = resp.json()

    _set_cached(key, data)
    return data


def _clob_get(endpoint: str, params: Optional[dict] = None, cache_seconds: int = 60):
    """Make a GET request to the CLOB API with caching."""
    key = _cache_key("clob", endpoint=endpoint, params=params)
    cached = _get_cached(key, cache_seconds)
    if cached is not None:
        return cached

    url = f"{CLOB_BASE}{endpoint}"
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    data = resp.json()

    _set_cached(key, data)
    return data
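The cache layer keys each request by prefix, endpoint, and params. Because the params are JSON-serialized with `sort_keys=True`, the key is deterministic and insensitive to argument order. A small standalone sketch mirroring `_cache_key` (the local `cache_key` here is a copy for illustration, not the module function):

```python
import hashlib
import json

def cache_key(prefix, **kwargs):
    # Mirrors _cache_key above: JSON serialization with sorted keys means
    # keyword-argument order and dict insertion order do not change the key.
    raw = f"{prefix}:{json.dumps(kwargs, sort_keys=True)}"
    return hashlib.md5(raw.encode()).hexdigest()

k1 = cache_key("gamma", endpoint="/markets/123", params={"a": 1, "b": 2})
k2 = cache_key("gamma", params={"b": 2, "a": 1}, endpoint="/markets/123")
```

Both calls produce the same 32-character hex digest, so repeated requests for the same endpoint and params hit the same cache file.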
def get_polymarket_market_info(market_id: str) -> str:
    """Get comprehensive info for a Polymarket market.

    Returns: question, outcomes, prices, volume, liquidity, dates, resolution info.
    """
    data = _gamma_get(f"/markets/{market_id}")

    if not data:
        return f"No market found with ID: {market_id}"

    outcomes = json.loads(data.get("outcomes", "[]")) if isinstance(data.get("outcomes"), str) else data.get("outcomes", [])
    prices = json.loads(data.get("outcomePrices", "[]")) if isinstance(data.get("outcomePrices"), str) else data.get("outcomePrices", [])

    lines = [
        f"Market: {data.get('question', 'N/A')}",
        f"Market ID: {data.get('id', market_id)}",
        f"Status: {'Active' if data.get('active') else 'Closed' if data.get('closed') else 'Unknown'}",
        f"Accepting Orders: {data.get('acceptingOrders', 'N/A')}",
        "",
        "Outcomes and Prices:",
    ]

    for i, outcome in enumerate(outcomes):
        price = prices[i] if i < len(prices) else "N/A"
        if price != "N/A":
            lines.append(f"  {outcome}: ${price} ({float(price) * 100:.1f}% implied probability)")
        else:
            lines.append(f"  {outcome}: N/A")

    lines.extend([
        "",
        f"Total Volume: ${data.get('volumeNum', data.get('volume', 'N/A'))}",
        f"24h Volume: ${data.get('volume24hr', 'N/A')}",
        f"Liquidity: ${data.get('liquidityNum', data.get('liquidity', 'N/A'))}",
        f"Best Bid: {data.get('bestBid', 'N/A')}",
        f"Best Ask: {data.get('bestAsk', 'N/A')}",
        f"Last Trade Price: {data.get('lastTradePrice', 'N/A')}",
        "",
        f"End Date: {data.get('endDate', 'N/A')}",
        f"Category: {data.get('category', 'N/A')}",
        f"Negative Risk: {data.get('negRisk', False)}",
        f"Maker Fee: {data.get('makerBaseFee', 'N/A')} bps",
        f"Taker Fee: {data.get('takerBaseFee', 'N/A')} bps",
    ])

    # Add CLOB token IDs for reference
    clob_ids = json.loads(data.get("clobTokenIds", "[]")) if isinstance(data.get("clobTokenIds"), str) else data.get("clobTokenIds", [])
    if clob_ids:
        lines.append("")
        lines.append("CLOB Token IDs:")
        for i, tid in enumerate(clob_ids):
            outcome_name = outcomes[i] if i < len(outcomes) else f"Outcome {i}"
            lines.append(f"  {outcome_name}: {tid}")

    return "\n".join(lines)
def get_polymarket_price_history(
    market_id: str, start_date: str, end_date: str
) -> str:
    """Get historical price/probability time series for a market.

    Uses the CLOB API /prices-history endpoint. Accepts a Gamma market ID and
    resolves the CLOB token ID for the YES outcome internally.
    """
    # First get market info to find the CLOB token ID
    market_data = _gamma_get(f"/markets/{market_id}")
    if not market_data:
        return f"No market found with ID: {market_id}"

    clob_ids = json.loads(market_data.get("clobTokenIds", "[]")) if isinstance(market_data.get("clobTokenIds"), str) else market_data.get("clobTokenIds", [])
    if not clob_ids:
        return "No CLOB token IDs found for this market."

    # Use the first token ID (YES outcome)
    token_id = clob_ids[0]

    # Convert dates to unix timestamps
    try:
        start_ts = int(datetime.strptime(start_date, "%Y-%m-%d").timestamp())
        end_ts = int(datetime.strptime(end_date, "%Y-%m-%d").timestamp())
    except ValueError:
        return "Invalid date format. Use YYYY-MM-DD."

    params = {
        "market": token_id,
        "startTs": start_ts,
        "endTs": end_ts,
        "interval": "1d",
    }

    try:
        data = _clob_get("/prices-history", params=params, cache_seconds=300)
    except Exception as e:
        return f"Price history unavailable for this market (API error: {e}). The market may be too new or the date range too large."

    history = data.get("history", [])
    if not history:
        return "No price history available for the specified period."

    lines = [
        f"Price History for: {market_data.get('question', market_id)}",
        f"Period: {start_date} to {end_date}",
        f"Data points: {len(history)}",
        "",
        "Date | YES Price | Implied Probability",
        "--- | --- | ---",
    ]

    for point in history:
        ts = point.get("t", 0)
        price = point.get("p", 0)
        dt = datetime.utcfromtimestamp(ts).strftime("%Y-%m-%d %H:%M")
        lines.append(f"{dt} | ${price:.4f} | {price * 100:.1f}%")

    # Summary stats
    prices = [p.get("p", 0) for p in history]
    if prices:
        lines.extend([
            "",
            "Summary:",
            f"  Current: {prices[-1]:.4f} ({prices[-1] * 100:.1f}%)",
            f"  Min: {min(prices):.4f} ({min(prices) * 100:.1f}%)",
            f"  Max: {max(prices):.4f} ({max(prices) * 100:.1f}%)",
            f"  Change: {(prices[-1] - prices[0]):+.4f} ({(prices[-1] - prices[0]) * 100:+.1f}pp)",
        ])

    return "\n".join(lines)
def get_polymarket_order_book(market_id: str) -> str:
    """Get the current order book for a market."""
    market_data = _gamma_get(f"/markets/{market_id}")
    if not market_data:
        return f"No market found with ID: {market_id}"

    clob_ids = json.loads(market_data.get("clobTokenIds", "[]")) if isinstance(market_data.get("clobTokenIds"), str) else market_data.get("clobTokenIds", [])
    if not clob_ids:
        return "No CLOB token IDs found for this market."

    token_id = clob_ids[0]

    try:
        data = _clob_get("/book", params={"token_id": token_id}, cache_seconds=30)
    except Exception as e:
        return f"Order book unavailable for this market (API error: {e})."

    bids = data.get("bids", [])
    asks = data.get("asks", [])

    lines = [
        f"Order Book for: {market_data.get('question', market_id)}",
        "Token: YES outcome",
        f"Tick Size: {data.get('tick_size', 'N/A')}",
        f"Min Order Size: {data.get('min_order_size', 'N/A')}",
        f"Last Trade Price: {data.get('last_trade_price', 'N/A')}",
        "",
    ]

    # Bids (buyers)
    lines.append("BIDS (Buyers):")
    lines.append("Price | Size")
    lines.append("--- | ---")
    for bid in bids[:10]:
        lines.append(f"${bid.get('price', 'N/A')} | {bid.get('size', 'N/A')}")

    lines.append("")

    # Asks (sellers)
    lines.append("ASKS (Sellers):")
    lines.append("Price | Size")
    lines.append("--- | ---")
    for ask in asks[:10]:
        lines.append(f"${ask.get('price', 'N/A')} | {ask.get('size', 'N/A')}")

    # Spread analysis
    if bids and asks:
        best_bid = float(bids[0].get("price", 0))
        best_ask = float(asks[0].get("price", 0))
        spread = best_ask - best_bid
        mid = (best_ask + best_bid) / 2
        lines.extend([
            "",
            "Spread Analysis:",
            f"  Best Bid: ${best_bid:.4f}",
            f"  Best Ask: ${best_ask:.4f}",
            f"  Spread: ${spread:.4f} ({spread / mid * 100:.2f}%)" if mid > 0 else f"  Spread: ${spread:.4f}",
            f"  Midpoint: ${mid:.4f} ({mid * 100:.1f}% implied)",
        ])

    return "\n".join(lines)
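The spread analysis above reduces the top of the book to three numbers: absolute spread, spread as a percentage of the midpoint, and the midpoint itself (which doubles as an implied probability for a binary contract). The arithmetic as a standalone sketch:

```python
def spread_stats(best_bid: float, best_ask: float):
    # Same arithmetic as the Spread Analysis section above: absolute spread,
    # midpoint, and spread as a percentage of the midpoint.
    spread = best_ask - best_bid
    mid = (best_ask + best_bid) / 2
    spread_pct = spread / mid * 100 if mid > 0 else None
    return spread, mid, spread_pct

spread, mid, pct = spread_stats(0.48, 0.52)
```

For a book quoted 0.48 / 0.52, this gives a $0.04 spread around a 0.50 midpoint (50% implied probability), i.e. an 8% relative spread, which is the kind of liquidity signal the conservative debator is asked to weigh.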
def get_polymarket_resolution_criteria(market_id: str) -> str:
    """Get the resolution criteria for a market."""
    data = _gamma_get(f"/markets/{market_id}")
    if not data:
        return f"No market found with ID: {market_id}"

    lines = [
        f"Resolution Criteria for: {data.get('question', market_id)}",
        "",
        f"End Date: {data.get('endDate', 'N/A')}",
        f"Description: {data.get('description', 'No description available')}",
        "",
        f"Negative Risk: {data.get('negRisk', False)}",
        f"UMA Bond: {data.get('umaBond', 'N/A')}",
        f"UMA Reward: {data.get('umaReward', 'N/A')}",
    ]

    return "\n".join(lines)
def get_polymarket_event_context(event_id: str) -> str:
|
||||
"""Get all markets grouped under a prediction market event."""
|
||||
try:
|
||||
data = _gamma_get(f"/events/{event_id}")
|
||||
except Exception:
|
||||
return f"No event found with ID: {event_id}. Note: this may be a market ID, not an event ID. Use get_market_info with the market ID instead."
|
||||
if not data:
|
||||
return f"No event found with ID: {event_id}. Note: this may be a market ID, not an event ID. Use get_market_info with the market ID instead."
|
||||
|
||||
lines = [
|
||||
f"Event: {data.get('title', 'N/A')}",
|
||||
f"Description: {data.get('description', 'N/A')}",
|
||||
f"Negative Risk: {data.get('negRisk', False)}",
|
||||
"",
|
||||
"Markets in this event:",
|
||||
"",
|
||||
]
|
||||
|
||||
markets = data.get("markets", [])
|
||||
for i, market in enumerate(markets, 1):
|
||||
outcomes = json.loads(market.get("outcomes", "[]")) if isinstance(market.get("outcomes"), str) else market.get("outcomes", [])
|
||||
prices = json.loads(market.get("outcomePrices", "[]")) if isinstance(market.get("outcomePrices"), str) else market.get("outcomePrices", [])
|
||||
|
||||
lines.append(f"{i}. {market.get('question', 'N/A')}")
|
||||
lines.append(f" ID: {market.get('id', 'N/A')}")
|
||||
|
||||
for j, outcome in enumerate(outcomes):
|
||||
price = prices[j] if j < len(prices) else "N/A"
|
||||
lines.append(f" {outcome}: ${price}")
|
||||
|
||||
lines.append(f" Volume: ${market.get('volumeNum', market.get('volume', 'N/A'))}")
|
||||
lines.append(f" Active: {market.get('active', 'N/A')}")
|
||||
lines.append("")
|
||||
|
||||
return "\n".join(lines)
|
||||
|
||||
|
||||
def get_polymarket_related_markets(query: str, limit: int = 5) -> str:
|
||||
"""Search for related prediction market events."""
|
||||
params = {
|
||||
"active": "true",
|
||||
"closed": "false",
|
||||
"order": "volume24hr",
|
||||
"ascending": "false",
|
||||
"limit": limit,
|
||||
}
|
||||
|
||||
data = _gamma_get("/events", params=params, cache_seconds=600)
|
||||
|
||||
if not data:
|
||||
return "No events found."
|
||||
|
||||
events = data if isinstance(data, list) else [data]
|
||||
|
||||
lines = [
|
||||
f"Top {limit} Active Events by 24h Volume:",
|
||||
"",
|
||||
]
|
||||
|
||||
for i, event in enumerate(events[:limit], 1):
|
||||
lines.append(f"{i}. {event.get('title', 'N/A')}")
|
||||
markets = event.get("markets", [])
|
||||
total_volume = sum(
|
||||
float(m.get("volume24hr", 0) or 0) for m in markets
|
||||
)
|
||||
lines.append(f" Markets: {len(markets)} | 24h Volume: ${total_volume:,.0f}")
|
||||
lines.append(f" ID: {event.get('id', 'N/A')}")
|
||||
lines.append("")
|
||||
|
||||
return "\n".join(lines)
|
||||
|
||||
|
||||
def get_polymarket_search(query: str, limit: int = 10) -> str:
|
||||
"""Search Polymarket for markets matching a query."""
|
||||
params = {
|
||||
"active": "true",
|
||||
"closed": "false",
|
||||
"order": "volume24hr",
|
||||
"ascending": "false",
|
||||
"limit": limit,
|
||||
}
|
||||
if query:
|
||||
params["tag"] = query
|
||||
data = _gamma_get("/markets", params=params, cache_seconds=300)
|
||||
|
||||
if not data:
|
||||
return f"No results found for: {query}"
|
||||
|
||||
markets = data if isinstance(data, list) else data.get("markets", [])
|
||||
|
||||
lines = [
|
||||
f"Search results for: '{query}'",
|
||||
"",
|
||||
]
|
||||
|
||||
for i, item in enumerate(markets[:limit], 1):
|
||||
lines.append(f"{i}. {item.get('question', item.get('title', 'N/A'))}")
|
||||
lines.append(f" ID: {item.get('id', 'N/A')}")
|
||||
|
||||
prices = item.get("outcomePrices")
|
||||
if prices:
|
||||
if isinstance(prices, str):
|
||||
prices = json.loads(prices)
|
||||
if prices:
|
||||
lines.append(f" YES: ${prices[0]} | NO: ${prices[1] if len(prices) > 1 else 'N/A'}")
|
||||
|
||||
lines.append(f" Volume: ${item.get('volumeNum', item.get('volume', 'N/A'))}")
|
||||
lines.append(f" Active: {item.get('active', 'N/A')}")
|
||||
lines.append("")
|
||||
|
||||
return "\n".join(lines)
|
||||
|
|
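The spread analysis above reduces to simple arithmetic on the best bid and ask. A minimal standalone sketch with hypothetical order-book levels (prices are YES-share prices in [0, 1]; the sample values are illustrative, not real market data):

```python
# Hypothetical top-of-book levels for a binary market
bids = [{"price": "0.47", "size": "1200"}]
asks = [{"price": "0.49", "size": "800"}]

best_bid = float(bids[0]["price"])
best_ask = float(asks[0]["price"])
spread = best_ask - best_bid          # cost of crossing the book
mid = (best_ask + best_bid) / 2       # midpoint = rough implied probability

print(f"spread={spread:.4f} mid={mid:.4f} implied={mid * 100:.1f}%")
# → spread=0.0200 mid=0.4800 implied=48.0%
```

Because prices sum to roughly 1 across YES/NO, the midpoint doubles as the market's implied probability of the YES outcome.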
@@ -0,0 +1,67 @@
# TradingAgents/prediction_market/graph/conditional_logic.py

from tradingagents.prediction_market.agents.utils.pm_agent_states import PMAgentState


class PMConditionalLogic:
    """Handles conditional logic for determining prediction market graph flow."""

    def __init__(self, max_debate_rounds=1, max_risk_discuss_rounds=1):
        """Initialize with configuration parameters."""
        self.max_debate_rounds = max_debate_rounds
        self.max_risk_discuss_rounds = max_risk_discuss_rounds

    def should_continue_event(self, state: PMAgentState):
        """Determine if event analysis should continue."""
        last_message = state["messages"][-1]
        if last_message.tool_calls:
            return "tools_event"
        return "Msg Clear Event"

    def should_continue_odds(self, state: PMAgentState):
        """Determine if odds analysis should continue."""
        last_message = state["messages"][-1]
        if last_message.tool_calls:
            return "tools_odds"
        return "Msg Clear Odds"

    def should_continue_information(self, state: PMAgentState):
        """Determine if information analysis should continue."""
        last_message = state["messages"][-1]
        if last_message.tool_calls:
            return "tools_information"
        return "Msg Clear Information"

    def should_continue_sentiment(self, state: PMAgentState):
        """Determine if sentiment analysis should continue."""
        last_message = state["messages"][-1]
        if last_message.tool_calls:
            return "tools_sentiment"
        return "Msg Clear Sentiment"

    def should_continue_debate(self, state: PMAgentState) -> str:
        """Determine if the YES/NO debate should continue."""
        if (
            state["investment_debate_state"]["count"] >= 2 * self.max_debate_rounds
        ):  # rounds of back-and-forth between 2 agents
            return "Research Manager"
        if state["investment_debate_state"]["current_response"].startswith("YES"):
            return "NO Researcher"
        return "YES Researcher"

    def should_continue_risk_analysis(self, state: PMAgentState) -> str:
        """Determine if risk analysis should continue."""
        if (
            state["risk_debate_state"]["count"] >= 3 * self.max_risk_discuss_rounds
        ):  # rounds of back-and-forth between 3 agents
            return "Risk Judge"
        if state["risk_debate_state"]["latest_speaker"].startswith("Aggressive"):
            return "Conservative Analyst"
        if state["risk_debate_state"]["latest_speaker"].startswith("Conservative"):
            return "Neutral Analyst"
        return "Aggressive Analyst"
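The debate router above can be illustrated in isolation. This is a hypothetical standalone sketch (the `route` helper is illustrative, not part of the module): the `count` field tracks total turns, so with `max_debate_rounds=1` the debate ends after two turns, one per researcher, and until then the floor alternates between sides based on who spoke last.

```python
max_debate_rounds = 1  # mirrors the PMConditionalLogic default

def route(count: int, current_response: str) -> str:
    # Same rule as should_continue_debate, minus the state dict plumbing
    if count >= 2 * max_debate_rounds:
        return "Research Manager"
    if current_response.startswith("YES"):
        return "NO Researcher"
    return "YES Researcher"

print(route(0, "YES: strong base rates"))  # floor passes to the NO side
print(route(2, "NO: polling disagrees"))   # turn budget spent, judge decides
```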
@@ -0,0 +1,291 @@
# TradingAgents/prediction_market/graph/pm_trading_graph.py

import os
from pathlib import Path
import json
from datetime import date
from typing import Dict, Any, Tuple, List, Optional

from langgraph.prebuilt import ToolNode

from tradingagents.llm_clients import create_llm_client

from tradingagents.prediction_market.agents import *
from tradingagents.prediction_market.pm_config import PM_DEFAULT_CONFIG
from tradingagents.agents.utils.memory import FinancialSituationMemory
from tradingagents.prediction_market.agents.utils.pm_agent_states import (
    PMAgentState,
    PMInvestDebateState,
    PMRiskDebateState,
)

# Import PM tool functions
from tradingagents.prediction_market.agents.utils.pm_agent_utils import (
    get_market_info,
    get_market_price_history,
    get_order_book,
    get_resolution_criteria,
    get_event_context,
    get_related_markets,
    search_markets,
    get_news,
    get_global_news,
)

from .conditional_logic import PMConditionalLogic
from .setup import PMGraphSetup
from .propagation import PMPropagator
from .reflection import PMReflector
from .signal_processing import PMSignalProcessor


class PMTradingAgentsGraph:
    """Main class that orchestrates the prediction market trading agents framework."""

    def __init__(
        self,
        selected_analysts=["event", "odds", "information", "sentiment"],
        debug=False,
        config: Dict[str, Any] = None,
        callbacks: Optional[List] = None,
    ):
        """Initialize the prediction market trading agents graph and components.

        Args:
            selected_analysts: List of analyst types to include
            debug: Whether to run in debug mode
            config: Configuration dictionary. If None, uses PM default config
            callbacks: Optional list of callback handlers (e.g., for tracking LLM/tool stats)
        """
        self.debug = debug
        self.config = config or PM_DEFAULT_CONFIG
        self.callbacks = callbacks or []

        # Create necessary directories
        os.makedirs(
            os.path.join(self.config["project_dir"], "dataflows/data_cache"),
            exist_ok=True,
        )

        # Initialize LLMs with provider-specific thinking configuration
        llm_kwargs = self._get_provider_kwargs()

        # Add callbacks to kwargs if provided (passed to LLM constructor)
        if self.callbacks:
            llm_kwargs["callbacks"] = self.callbacks

        deep_client = create_llm_client(
            provider=self.config["llm_provider"],
            model=self.config["deep_think_llm"],
            base_url=self.config.get("backend_url"),
            **llm_kwargs,
        )
        quick_client = create_llm_client(
            provider=self.config["llm_provider"],
            model=self.config["quick_think_llm"],
            base_url=self.config.get("backend_url"),
            **llm_kwargs,
        )

        self.deep_thinking_llm = deep_client.get_llm()
        self.quick_thinking_llm = quick_client.get_llm()

        # Initialize memories
        self.yes_memory = FinancialSituationMemory("yes_memory", self.config)
        self.no_memory = FinancialSituationMemory("no_memory", self.config)
        self.trader_memory = FinancialSituationMemory("trader_memory", self.config)
        self.invest_judge_memory = FinancialSituationMemory("invest_judge_memory", self.config)
        self.risk_manager_memory = FinancialSituationMemory("risk_manager_memory", self.config)

        # Create tool nodes
        self.tool_nodes = self._create_tool_nodes()

        # Initialize components
        self.conditional_logic = PMConditionalLogic(
            max_debate_rounds=self.config["max_debate_rounds"],
            max_risk_discuss_rounds=self.config["max_risk_discuss_rounds"],
        )
        self.graph_setup = PMGraphSetup(
            self.quick_thinking_llm,
            self.deep_thinking_llm,
            self.tool_nodes,
            self.yes_memory,
            self.no_memory,
            self.trader_memory,
            self.invest_judge_memory,
            self.risk_manager_memory,
            self.conditional_logic,
        )

        self.propagator = PMPropagator()
        self.reflector = PMReflector(self.quick_thinking_llm)
        self.signal_processor = PMSignalProcessor(self.quick_thinking_llm)

        # State tracking
        self.curr_state = None
        self.market_id = None
        self.log_states_dict = {}  # date to full state dict

        # Set up the graph
        self.graph = self.graph_setup.setup_graph(selected_analysts)

    def _get_provider_kwargs(self) -> Dict[str, Any]:
        """Get provider-specific kwargs for LLM client creation."""
        kwargs = {}
        provider = self.config.get("llm_provider", "").lower()

        if provider == "google":
            thinking_level = self.config.get("google_thinking_level")
            if thinking_level:
                kwargs["thinking_level"] = thinking_level

        elif provider == "openai":
            reasoning_effort = self.config.get("openai_reasoning_effort")
            if reasoning_effort:
                kwargs["reasoning_effort"] = reasoning_effort

        return kwargs

    def _create_tool_nodes(self) -> Dict[str, ToolNode]:
        """Create tool nodes for different prediction market data sources."""
        return {
            "event": ToolNode(
                [
                    # Event context and resolution
                    get_market_info,
                    get_resolution_criteria,
                    get_event_context,
                ]
            ),
            "odds": ToolNode(
                [
                    # Price, order book, and market data
                    get_market_info,
                    get_market_price_history,
                    get_order_book,
                ]
            ),
            "information": ToolNode(
                [
                    # News and related markets
                    get_news,
                    get_global_news,
                    get_related_markets,
                    search_markets,
                ]
            ),
            "sentiment": ToolNode(
                [
                    # News for sentiment analysis
                    get_news,
                    get_global_news,
                ]
            ),
        }

    def propagate(self, market_id, trade_date, market_question=""):
        """Run the prediction market trading agents graph for a market on a specific date.

        Args:
            market_id: The Polymarket condition ID or market identifier
            trade_date: The date of analysis
            market_question: Optional full text of the market question
        """
        self.market_id = market_id

        # Initialize state
        init_agent_state = self.propagator.create_initial_state(
            market_id, trade_date, market_question
        )
        args = self.propagator.get_graph_args()

        if self.debug:
            # Debug mode with tracing
            trace = []
            for chunk in self.graph.stream(init_agent_state, **args):
                if chunk["messages"]:
                    chunk["messages"][-1].pretty_print()
                    trace.append(chunk)

            final_state = trace[-1]
        else:
            # Standard mode without tracing
            final_state = self.graph.invoke(init_agent_state, **args)

        # Store current state for reflection
        self.curr_state = final_state

        # Log state
        self._log_state(trade_date, final_state)

        # Return decision and processed signal
        return final_state, self.process_signal(final_state["final_trade_decision"])

    def _log_state(self, trade_date, final_state):
        """Log the final state to a JSON file."""
        self.log_states_dict[str(trade_date)] = {
            "market_id": final_state["market_id"],
            "market_question": final_state["market_question"],
            "trade_date": final_state["trade_date"],
            "event_report": final_state["event_report"],
            "odds_report": final_state["odds_report"],
            "information_report": final_state["information_report"],
            "sentiment_report": final_state["sentiment_report"],
            "investment_debate_state": {
                "yes_history": final_state["investment_debate_state"]["yes_history"],
                "no_history": final_state["investment_debate_state"]["no_history"],
                "history": final_state["investment_debate_state"]["history"],
                "current_response": final_state["investment_debate_state"]["current_response"],
                "judge_decision": final_state["investment_debate_state"]["judge_decision"],
            },
            "trader_investment_decision": final_state["trader_investment_plan"],
            "risk_debate_state": {
                "aggressive_history": final_state["risk_debate_state"]["aggressive_history"],
                "conservative_history": final_state["risk_debate_state"]["conservative_history"],
                "neutral_history": final_state["risk_debate_state"]["neutral_history"],
                "history": final_state["risk_debate_state"]["history"],
                "judge_decision": final_state["risk_debate_state"]["judge_decision"],
            },
            "investment_plan": final_state["investment_plan"],
            "final_trade_decision": final_state["final_trade_decision"],
        }

        # Save to file
        directory = Path(f"eval_results/{self.market_id}/PMTradingAgentsStrategy_logs/")
        directory.mkdir(parents=True, exist_ok=True)

        with open(
            directory / f"full_states_log_{trade_date}.json",
            "w",
            encoding="utf-8",
        ) as f:
            json.dump(self.log_states_dict, f, indent=4)

    def reflect_and_remember(self, returns_losses):
        """Reflect on decisions and update memory based on returns."""
        self.reflector.reflect_yes_researcher(
            self.curr_state, returns_losses, self.yes_memory
        )
        self.reflector.reflect_no_researcher(
            self.curr_state, returns_losses, self.no_memory
        )
        self.reflector.reflect_trader(
            self.curr_state, returns_losses, self.trader_memory
        )
        self.reflector.reflect_invest_judge(
            self.curr_state, returns_losses, self.invest_judge_memory
        )
        self.reflector.reflect_risk_manager(
            self.curr_state, returns_losses, self.risk_manager_memory
        )

    def process_signal(self, full_signal):
        """Process a signal to extract the core decision."""
        return self.signal_processor.process_signal(full_signal)
@@ -0,0 +1,70 @@
# TradingAgents/prediction_market/graph/propagation.py

from typing import Dict, Any, List, Optional
from tradingagents.prediction_market.agents.utils.pm_agent_states import (
    PMAgentState,
    PMInvestDebateState,
    PMRiskDebateState,
)


class PMPropagator:
    """Handles state initialization and propagation through the prediction market graph."""

    def __init__(self, max_recur_limit=100):
        """Initialize with configuration parameters."""
        self.max_recur_limit = max_recur_limit

    def create_initial_state(
        self, market_id: str, trade_date: str, market_question: str = ""
    ) -> Dict[str, Any]:
        """Create the initial state for the prediction market agent graph."""
        return {
            "messages": [("human", market_question or market_id)],
            "market_id": market_id,
            "market_question": market_question,
            "trade_date": str(trade_date),
            "investment_debate_state": PMInvestDebateState(
                {
                    "yes_history": "",
                    "no_history": "",
                    "history": "",
                    "current_response": "",
                    "judge_decision": "",
                    "count": 0,
                }
            ),
            "risk_debate_state": PMRiskDebateState(
                {
                    "aggressive_history": "",
                    "conservative_history": "",
                    "neutral_history": "",
                    "history": "",
                    "latest_speaker": "",
                    "current_aggressive_response": "",
                    "current_conservative_response": "",
                    "current_neutral_response": "",
                    "judge_decision": "",
                    "count": 0,
                }
            ),
            "event_report": "",
            "odds_report": "",
            "information_report": "",
            "sentiment_report": "",
        }

    def get_graph_args(self, callbacks: Optional[List] = None) -> Dict[str, Any]:
        """Get arguments for the graph invocation.

        Args:
            callbacks: Optional list of callback handlers for tool execution tracking.
                Note: LLM callbacks are handled separately via the LLM constructor.
        """
        config = {"recursion_limit": self.max_recur_limit}
        if callbacks:
            config["callbacks"] = callbacks
        return {
            "stream_mode": "values",
            "config": config,
        }
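The argument assembly in `get_graph_args` can be exercised on its own. A minimal sketch (the free function below is illustrative, restating the method's logic outside the class): the recursion limit always lands in the LangGraph config dict, and callback handlers are added only when supplied.

```python
def get_graph_args(max_recur_limit=100, callbacks=None):
    # Same shape as PMPropagator.get_graph_args: config rides alongside stream_mode
    config = {"recursion_limit": max_recur_limit}
    if callbacks:
        config["callbacks"] = callbacks
    return {"stream_mode": "values", "config": config}

print(get_graph_args())
# → {'stream_mode': 'values', 'config': {'recursion_limit': 100}}
```

Keeping callbacks out of the config when none are given avoids passing an empty `callbacks` list through LangGraph's invocation machinery.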
@@ -0,0 +1,121 @@
# TradingAgents/prediction_market/graph/reflection.py

from typing import Dict, Any
from langchain_openai import ChatOpenAI


class PMReflector:
    """Handles reflection on prediction market decisions and updating memory."""

    def __init__(self, quick_thinking_llm: ChatOpenAI):
        """Initialize the reflector with an LLM."""
        self.quick_thinking_llm = quick_thinking_llm
        self.reflection_system_prompt = self._get_reflection_prompt()

    def _get_reflection_prompt(self) -> str:
        """Get the system prompt for prediction market reflection."""
        return """
You are an expert prediction market analyst tasked with reviewing trading decisions/analysis and providing a comprehensive, step-by-step analysis.
Your goal is to deliver detailed insights into prediction market decisions and highlight opportunities for improvement, adhering strictly to the following guidelines:

1. Reasoning:
   - For each trading decision, determine whether it was correct or incorrect. A correct decision results in an increase in returns, while an incorrect decision does the opposite.
   - Analyze the contributing factors to each success or mistake. Consider:
     - Event analysis and understanding of the underlying question.
     - Odds and probability estimation accuracy.
     - Market price movement and order book analysis.
     - Information gathering quality and completeness.
     - Sentiment analysis from news and social media.
     - Calibration: was the estimated probability well-calibrated relative to the actual outcome?
     - Edge detection: was the perceived edge real or illusory?
   - Weight the importance of each factor in the decision-making process.

2. Improvement:
   - For any incorrect decisions, propose revisions to maximize returns.
   - Provide a detailed list of corrective actions or improvements, including specific recommendations (e.g., changing a decision from PASS to BUY_YES on a particular market).
   - Assess whether probability estimates were systematically biased (overconfident, underconfident, etc.).

3. Summary:
   - Summarize the lessons learned from the successes and mistakes.
   - Highlight how these lessons can be adapted for future prediction market scenarios and draw connections between similar market types to apply the knowledge gained.

4. Query:
   - Extract key insights from the summary into a concise sentence of no more than 1000 tokens.
   - Ensure the condensed sentence captures the essence of the lessons and reasoning for easy reference.

Adhere strictly to these instructions, and ensure your output is detailed, accurate, and actionable. You will also be given objective descriptions of the market from event, odds, information, and sentiment perspectives to provide more context for your analysis.
"""

    def _extract_current_situation(self, current_state: Dict[str, Any]) -> str:
        """Extract the current market situation from the state."""
        curr_event_report = current_state["event_report"]
        curr_odds_report = current_state["odds_report"]
        curr_information_report = current_state["information_report"]
        curr_sentiment_report = current_state["sentiment_report"]

        return f"{curr_event_report}\n\n{curr_odds_report}\n\n{curr_information_report}\n\n{curr_sentiment_report}"

    def _reflect_on_component(
        self, component_type: str, report: str, situation: str, returns_losses
    ) -> str:
        """Generate reflection for a component."""
        messages = [
            ("system", self.reflection_system_prompt),
            (
                "human",
                f"Returns: {returns_losses}\n\nAnalysis/Decision: {report}\n\nObjective Market Reports for Reference: {situation}",
            ),
        ]

        result = self.quick_thinking_llm.invoke(messages).content
        return result

    def reflect_yes_researcher(self, current_state, returns_losses, yes_memory):
        """Reflect on the YES researcher's analysis and update memory."""
        situation = self._extract_current_situation(current_state)
        yes_debate_history = current_state["investment_debate_state"]["yes_history"]

        result = self._reflect_on_component(
            "YES", yes_debate_history, situation, returns_losses
        )
        yes_memory.add_situations([(situation, result)])

    def reflect_no_researcher(self, current_state, returns_losses, no_memory):
        """Reflect on the NO researcher's analysis and update memory."""
        situation = self._extract_current_situation(current_state)
        no_debate_history = current_state["investment_debate_state"]["no_history"]

        result = self._reflect_on_component(
            "NO", no_debate_history, situation, returns_losses
        )
        no_memory.add_situations([(situation, result)])

    def reflect_trader(self, current_state, returns_losses, trader_memory):
        """Reflect on the trader's decision and update memory."""
        situation = self._extract_current_situation(current_state)
        trader_decision = current_state["trader_investment_plan"]

        result = self._reflect_on_component(
            "TRADER", trader_decision, situation, returns_losses
        )
        trader_memory.add_situations([(situation, result)])

    def reflect_invest_judge(self, current_state, returns_losses, invest_judge_memory):
        """Reflect on the investment judge's decision and update memory."""
        situation = self._extract_current_situation(current_state)
        judge_decision = current_state["investment_debate_state"]["judge_decision"]

        result = self._reflect_on_component(
            "INVEST JUDGE", judge_decision, situation, returns_losses
        )
        invest_judge_memory.add_situations([(situation, result)])

    def reflect_risk_manager(self, current_state, returns_losses, risk_manager_memory):
        """Reflect on the risk manager's decision and update memory."""
        situation = self._extract_current_situation(current_state)
        judge_decision = current_state["risk_debate_state"]["judge_decision"]

        result = self._reflect_on_component(
            "RISK JUDGE", judge_decision, situation, returns_losses
        )
        risk_manager_memory.add_situations([(situation, result)])
@ -0,0 +1,202 @@
|
|||
# TradingAgents/prediction_market/graph/setup.py
|
||||
|
||||
from typing import Dict, Any
|
||||
from langchain_openai import ChatOpenAI
|
||||
from langgraph.graph import END, StateGraph, START
|
||||
from langgraph.prebuilt import ToolNode
|
||||
|
||||
from tradingagents.prediction_market.agents import *
|
||||
from tradingagents.prediction_market.agents.utils.pm_agent_states import PMAgentState
|
||||
|
||||
from .conditional_logic import PMConditionalLogic
|
||||
|
||||
|
||||
class PMGraphSetup:
|
||||
"""Handles the setup and configuration of the prediction market agent graph."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
quick_thinking_llm: ChatOpenAI,
|
||||
deep_thinking_llm: ChatOpenAI,
|
||||
tool_nodes: Dict[str, ToolNode],
|
||||
yes_memory,
|
||||
no_memory,
|
||||
trader_memory,
|
||||
invest_judge_memory,
|
||||
risk_manager_memory,
|
||||
conditional_logic: PMConditionalLogic,
|
||||
):
|
||||
"""Initialize with required components."""
|
||||
self.quick_thinking_llm = quick_thinking_llm
|
||||
self.deep_thinking_llm = deep_thinking_llm
|
||||
self.tool_nodes = tool_nodes
|
||||
self.yes_memory = yes_memory
|
||||
self.no_memory = no_memory
|
||||
self.trader_memory = trader_memory
|
||||
self.invest_judge_memory = invest_judge_memory
|
||||
self.risk_manager_memory = risk_manager_memory
|
||||
self.conditional_logic = conditional_logic
|
||||
|
||||
def setup_graph(
|
||||
self, selected_analysts=["event", "odds", "information", "sentiment"]
|
||||
):
|
||||
"""Set up and compile the prediction market agent workflow graph.
|
||||
|
||||
Args:
|
||||
selected_analysts (list): List of analyst types to include. Options are:
|
||||
- "event": Event analyst
|
||||
- "odds": Odds analyst
|
||||
- "information": Information analyst
|
||||
- "sentiment": Sentiment analyst
|
||||
"""
|
||||
if len(selected_analysts) == 0:
|
||||
raise ValueError("PM Graph Setup Error: no analysts selected!")
|
||||
|
||||
# Create analyst nodes
|
||||
analyst_nodes = {}
|
||||
delete_nodes = {}
|
||||
tool_nodes = {}
|
||||
|
||||
if "event" in selected_analysts:
|
||||
analyst_nodes["event"] = create_event_analyst(
|
||||
self.quick_thinking_llm
|
||||
)
|
||||
delete_nodes["event"] = create_msg_delete()
|
||||
tool_nodes["event"] = self.tool_nodes["event"]
|
||||
|
||||
if "odds" in selected_analysts:
|
||||
analyst_nodes["odds"] = create_odds_analyst(
|
||||
self.quick_thinking_llm
|
||||
)
|
||||
delete_nodes["odds"] = create_msg_delete()
|
||||
tool_nodes["odds"] = self.tool_nodes["odds"]
|
||||
|
||||
if "information" in selected_analysts:
|
||||
analyst_nodes["information"] = create_information_analyst(
|
||||
self.quick_thinking_llm
|
||||
)
|
||||
delete_nodes["information"] = create_msg_delete()
|
||||
tool_nodes["information"] = self.tool_nodes["information"]
|
||||
|
||||
if "sentiment" in selected_analysts:
|
||||
analyst_nodes["sentiment"] = create_sentiment_analyst(
|
||||
self.quick_thinking_llm
|
||||
)
|
||||
delete_nodes["sentiment"] = create_msg_delete()
|
||||
tool_nodes["sentiment"] = self.tool_nodes["sentiment"]
|
||||
|
||||
# Create researcher and manager nodes
|
||||
yes_researcher_node = create_yes_researcher(
|
||||
self.quick_thinking_llm, self.yes_memory
|
||||
)
|
||||
no_researcher_node = create_no_researcher(
|
||||
self.quick_thinking_llm, self.no_memory
|
||||
)
|
||||
research_manager_node = create_pm_research_manager(
|
||||
self.deep_thinking_llm, self.invest_judge_memory
|
||||
)
|
||||
trader_node = create_pm_trader(self.quick_thinking_llm, self.trader_memory)
|
||||
|
||||
# Create risk analysis nodes
|
||||
aggressive_analyst = create_pm_aggressive_debator(self.quick_thinking_llm)
|
||||
neutral_analyst = create_pm_neutral_debator(self.quick_thinking_llm)
|
||||
conservative_analyst = create_pm_conservative_debator(self.quick_thinking_llm)
risk_manager_node = create_pm_risk_manager(
    self.deep_thinking_llm, self.risk_manager_memory
)

# Create workflow
workflow = StateGraph(PMAgentState)

# Add analyst nodes to the graph
for analyst_type, node in analyst_nodes.items():
    workflow.add_node(f"{analyst_type.capitalize()} Analyst", node)
    workflow.add_node(
        f"Msg Clear {analyst_type.capitalize()}", delete_nodes[analyst_type]
    )
    workflow.add_node(f"tools_{analyst_type}", tool_nodes[analyst_type])

# Add other nodes
workflow.add_node("YES Researcher", yes_researcher_node)
workflow.add_node("NO Researcher", no_researcher_node)
workflow.add_node("Research Manager", research_manager_node)
workflow.add_node("Trader", trader_node)
workflow.add_node("Aggressive Analyst", aggressive_analyst)
workflow.add_node("Neutral Analyst", neutral_analyst)
workflow.add_node("Conservative Analyst", conservative_analyst)
workflow.add_node("Risk Judge", risk_manager_node)

# Define edges
# Start with the first analyst
first_analyst = selected_analysts[0]
workflow.add_edge(START, f"{first_analyst.capitalize()} Analyst")

# Connect analysts in sequence
for i, analyst_type in enumerate(selected_analysts):
    current_analyst = f"{analyst_type.capitalize()} Analyst"
    current_tools = f"tools_{analyst_type}"
    current_clear = f"Msg Clear {analyst_type.capitalize()}"

    # Add conditional edges for the current analyst
    workflow.add_conditional_edges(
        current_analyst,
        getattr(self.conditional_logic, f"should_continue_{analyst_type}"),
        [current_tools, current_clear],
    )
    workflow.add_edge(current_tools, current_analyst)

    # Connect to the next analyst, or to the YES Researcher if this is the last analyst
    if i < len(selected_analysts) - 1:
        next_analyst = f"{selected_analysts[i + 1].capitalize()} Analyst"
        workflow.add_edge(current_clear, next_analyst)
    else:
        workflow.add_edge(current_clear, "YES Researcher")

# Add remaining edges
workflow.add_conditional_edges(
    "YES Researcher",
    self.conditional_logic.should_continue_debate,
    {
        "NO Researcher": "NO Researcher",
        "Research Manager": "Research Manager",
    },
)
workflow.add_conditional_edges(
    "NO Researcher",
    self.conditional_logic.should_continue_debate,
    {
        "YES Researcher": "YES Researcher",
        "Research Manager": "Research Manager",
    },
)
workflow.add_edge("Research Manager", "Trader")
workflow.add_edge("Trader", "Aggressive Analyst")
workflow.add_conditional_edges(
    "Aggressive Analyst",
    self.conditional_logic.should_continue_risk_analysis,
    {
        "Conservative Analyst": "Conservative Analyst",
        "Risk Judge": "Risk Judge",
    },
)
workflow.add_conditional_edges(
    "Conservative Analyst",
    self.conditional_logic.should_continue_risk_analysis,
    {
        "Neutral Analyst": "Neutral Analyst",
        "Risk Judge": "Risk Judge",
    },
)
workflow.add_conditional_edges(
    "Neutral Analyst",
    self.conditional_logic.should_continue_risk_analysis,
    {
        "Aggressive Analyst": "Aggressive Analyst",
        "Risk Judge": "Risk Judge",
    },
)

workflow.add_edge("Risk Judge", END)

# Compile and return
return workflow.compile()
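The YES/NO debate edges above assume routing predicates on the conditional-logic object. A minimal, self-contained sketch of what `should_continue_debate` could look like — the state field names here are assumptions, not the module's actual schema:

```python
def should_continue_debate(state: dict) -> str:
    """Route the structured debate: alternate speakers until each
    researcher has spoken max_debate_rounds times, then escalate to
    the Research Manager. Field names are hypothetical."""
    if state["debate_turns"] >= 2 * state["max_debate_rounds"]:
        return "Research Manager"
    if state["last_speaker"] == "YES Researcher":
        return "NO Researcher"
    return "YES Researcher"
```

The returned strings must match the keys of the conditional-edge mapping registered on the graph, which is why both researcher nodes map every possible return value to a node name.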
@ -0,0 +1,71 @@
# TradingAgents/prediction_market/graph/signal_processing.py

import json

from langchain_openai import ChatOpenAI


class PMSignalProcessor:
    """Processes prediction market trading signals to extract actionable decisions."""

    def __init__(self, quick_thinking_llm: ChatOpenAI):
        """Initialize with an LLM for processing."""
        self.quick_thinking_llm = quick_thinking_llm

    def process_signal(self, full_signal: str) -> str:
        """
        Process a full prediction market trading signal to extract the core decision
        and structured data.

        Args:
            full_signal: Complete trading signal text from the risk manager

        Returns:
            JSON string with signal, estimated_probability, market_price, edge,
            position_size, and confidence
        """
        messages = [
            (
                "system",
                """You are an efficient assistant designed to analyze paragraphs or financial reports provided by a group of prediction market analysts. Your task is to extract the investment decision and key metrics.

Extract the following from the report:
1. signal: The investment decision - must be exactly one of: BUY_YES, BUY_NO, or PASS
2. estimated_probability: The estimated true probability (0.0 to 1.0), or null if not stated
3. market_price: The current market price/probability (0.0 to 1.0), or null if not stated
4. edge: The perceived edge (estimated_probability - market_price for YES, or market_price - estimated_probability for NO), or null if not stated
5. position_size: The recommended position size as a fraction (0.0 to 1.0), or null if not stated
6. confidence: The confidence level (low, medium, high), or null if not stated

Respond with ONLY valid JSON, no other text. Example:
{"signal": "BUY_YES", "estimated_probability": 0.65, "market_price": 0.50, "edge": 0.15, "position_size": 0.03, "confidence": "medium"}""",
            ),
            ("human", full_signal),
        ]

        result = self.quick_thinking_llm.invoke(messages).content

        # Try to parse as JSON; if that fails, fall back to keyword extraction
        try:
            parsed = json.loads(result)
            # Ensure the signal field is valid
            if parsed.get("signal") not in ("BUY_YES", "BUY_NO", "PASS"):
                parsed["signal"] = "PASS"
            return json.dumps(parsed)
        except (json.JSONDecodeError, TypeError):
            # Fallback: extract just the signal keyword
            upper_result = result.upper()
            if "BUY_YES" in upper_result:
                signal = "BUY_YES"
            elif "BUY_NO" in upper_result:
                signal = "BUY_NO"
            else:
                signal = "PASS"

            return json.dumps({
                "signal": signal,
                "estimated_probability": None,
                "market_price": None,
                "edge": None,
                "position_size": None,
                "confidence": None,
            })
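The non-JSON fallback path above reduces to a pure keyword scan. Pulled out as a standalone function (a hypothetical refactor, not part of the module), it behaves like this:

```python
def extract_signal_keyword(raw: str) -> str:
    """Keyword fallback mirroring process_signal: a case-insensitive
    scan for BUY_YES first, then BUY_NO, defaulting to PASS.
    Checking BUY_YES first is safe because neither keyword is a
    substring of the other."""
    upper = raw.upper()
    if "BUY_YES" in upper:
        return "BUY_YES"
    if "BUY_NO" in upper:
        return "BUY_NO"
    return "PASS"
```

Defaulting to PASS on unparseable output is the conservative choice for an analysis-only module: a malformed LLM response can never produce a position recommendation.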
@ -0,0 +1,31 @@
|
|||
import os
|
||||
|
||||
PM_DEFAULT_CONFIG = {
|
||||
"project_dir": os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
|
||||
"results_dir": os.getenv("TRADINGAGENTS_RESULTS_DIR", "./results"),
|
||||
"data_cache_dir": os.path.join(
|
||||
os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
|
||||
"dataflows/data_cache",
|
||||
),
|
||||
# LLM settings
|
||||
"llm_provider": "openai",
|
||||
"deep_think_llm": "gpt-5.2",
|
||||
"quick_think_llm": "gpt-5-mini",
|
||||
"backend_url": "https://api.openai.com/v1",
|
||||
# Provider-specific thinking configuration
|
||||
"google_thinking_level": None,
|
||||
"openai_reasoning_effort": None,
|
||||
# Polymarket API
|
||||
"polymarket_gamma_url": "https://gamma-api.polymarket.com",
|
||||
"polymarket_clob_url": "https://clob.polymarket.com",
|
||||
# Trading parameters
|
||||
"kelly_fraction": 0.25,
|
||||
"min_edge_threshold": 0.05,
|
||||
"max_position_pct": 0.05,
|
||||
"max_cluster_exposure_pct": 0.15,
|
||||
"bankroll": 10000,
|
||||
# Debate and discussion settings
|
||||
"max_debate_rounds": 1,
|
||||
"max_risk_discuss_rounds": 1,
|
||||
"max_recur_limit": 100,
|
||||
}
|
||||
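The trading parameters combine in the standard fractional-Kelly way for a binary market. A sketch of how a sizer might consume them for a YES position — a hypothetical helper, the module's actual Trader logic may differ:

```python
def kelly_position(est_prob: float, market_price: float, config: dict) -> float:
    """Dollar size for buying YES at market_price, given an estimated
    true probability. Net odds b = (1 - price) / price; full Kelly
    f* = (b*q - (1-q)) / b, scaled by kelly_fraction and capped at
    max_position_pct of bankroll. Hypothetical helper for illustration."""
    edge = est_prob - market_price
    if edge < config["min_edge_threshold"]:
        return 0.0  # below the minimum-edge gate: PASS
    b = (1 - market_price) / market_price
    f_star = max(0.0, (b * est_prob - (1 - est_prob)) / b)
    fraction = min(f_star * config["kelly_fraction"], config["max_position_pct"])
    return fraction * config["bankroll"]
```

With the defaults above (kelly_fraction 0.25, max_position_pct 0.05, bankroll 10000), an estimate of 0.65 against a 0.50 market gives full Kelly 0.30, fractional Kelly 0.075, which the 5% cap trims to a $500 position.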