feat: medium-term positioning upgrade (debate rounds, TTM, peer comparison, macro regime) (#14)

* docs: add implementation plan for medium-term positioning upgrade

Covers 4 objectives: increased debate rounds, 8-quarter TTM fundamentals,
sector/peer relative performance, and macro regime classification.

https://claude.ai/code/session_01TuPpssTo83whKkNgSu57HH

* feat: medium-term positioning upgrade (debate rounds, TTM, peer comparison, macro regime)

## Changes

### Step 1: Agentic Debate Depth
- Increase `max_debate_rounds` and `max_risk_discuss_rounds` from 1 → 2 in `default_config.py`
- Fix bug in `trading_graph.py`: wire config values into `ConditionalLogic()` (was ignoring config, using hardcoded defaults)

### Step 2: 8-Quarter TTM Fundamental Analysis
- New `tradingagents/dataflows/ttm_analysis.py`: parses quarterly income/balance/cashflow CSV strings, computes TTM (sum of last 4 quarters), QoQ/YoY growth rates, margin trends across 8 quarters
- New `@tool get_ttm_analysis` in `fundamental_data_tools.py`
- Wire into fundamentals ToolNode; register in `TOOLS_CATEGORIES`
- Update fundamentals analyst prompt: "last 8 quarters (2 years)" focus

### Step 3: Sector & Peer Relative Performance
- New `tradingagents/dataflows/peer_comparison.py`: sector peer lookup, 1W/1M/3M/6M/YTD return ranking, alpha vs sector ETF
- New `@tool get_peer_comparison` and `@tool get_sector_relative`
- Wire into fundamentals ToolNode

### Step 4: Macro Regime Flag
- New `tradingagents/dataflows/macro_regime.py`: 6-signal classifier (VIX level/trend, credit spread HYG/LQD, yield curve TLT/SHY, market breadth SPX vs 200-SMA, sector rotation) → risk-on / transition / risk-off
- New `@tool get_macro_regime`; add `macro_regime_report` field to AgentState
- Wire into market ToolNode; feed into research_manager and risk_manager prompts

### Step 5: Tests (88 new unit tests, 0 integration)
- `tests/test_debate_rounds.py` (17 tests)
- `tests/test_ttm_analysis.py` (18 tests)
- `tests/test_peer_comparison.py` (11 tests)
- `tests/test_macro_regime.py` (16 tests)
- `tests/test_config_wiring.py` (12 tests)

All 88 new unit tests pass; no regressions in existing tests.

https://claude.ai/code/session_01TuPpssTo83whKkNgSu57HH

* test: mark live yfinance network tests as integration

TestYfinanceIndustryPerformance, TestRouteToVendorFallback, and TestFallbackRouting
all make live HTTP calls to yfinance (yfinance.Sector / market movers). Mark them
@pytest.mark.integration so they're skipped in standard offline runs.

https://claude.ai/code/session_01TuPpssTo83whKkNgSu57HH

* docs: update memory files for medium-term positioning upgrade

- PROGRESS.md: add milestone section with all new files and changes
- DECISIONS.md: add decisions 008-010 (macro regime, TTM data source, peer comparison)
- MISTAKES.md: add mistakes 10-11 (Python 3.11 f-string, mock data precision)

https://claude.ai/code/session_01TuPpssTo83whKkNgSu57HH

* docs: document git remote setup (origin = aguzererler fork)

origin points to aguzererler/TradingAgents which IS the fork.
No upstream remote configured. All feature branches push to origin.

https://claude.ai/code/session_01TuPpssTo83whKkNgSu57HH

* docs: redirect tracking files to memory system

Replace DECISIONS.md/MISTAKES.md/PROGRESS.md references in CLAUDE.md
with instructions to use /remember memory system. A PreToolUse hook
in ~/.claude/settings.json enforces this by blocking writes to those files.

https://claude.ai/code/session_01TuPpssTo83whKkNgSu57HH

* Initial plan

* Add integration tests for yfinance and Alpha Vantage APIs (78 tests, all passing)

Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>

* Initial plan

* fix: allow .env variables to override DEFAULT_CONFIG values

Merged origin/main and resolved all 8 conflicting files:
- CLAUDE.md: merged MISTAKES.md ref + Project Tracking section + env override docs
- cli/main.py: kept top-level json import, kept try/except in run_pipeline
- tool_runner.py: kept descriptive comments for MAX_TOOL_ROUNDS
- alpha_vantage_common.py: kept thread-safe rate limiter, robust error handling
- interface.py: kept broader exception catch (AlphaVantageError + ConnectionError + TimeoutError)
- default_config.py: kept _env()/_env_int() env var overrides with load_dotenv() at module level
- scanner_graph.py: kept debug mode fix (stream for debug, invoke for result)
- macro_bridge.py: kept get_running_loop() over deprecated get_event_loop()

Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>

* fix: move rate limiter sleep outside lock to avoid blocking threads
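
A hypothetical minimal sketch of the pattern (not the actual `alpha_vantage_common.py` code): hold the lock only long enough to compute the wait and reserve the next slot, then sleep after releasing it.

```python
import threading
import time


class RateLimiter:
    """Allow at most `rate` calls per `period` seconds across threads.

    Sketch of the pattern: the lock protects only the bookkeeping;
    the sleep happens outside so other threads are not blocked.
    """

    def __init__(self, rate: int, period: float = 60.0):
        self.interval = period / rate      # minimum spacing between calls
        self.lock = threading.Lock()
        self.next_slot = 0.0               # monotonic time of next free slot

    def acquire(self) -> None:
        with self.lock:                    # short critical section
            now = time.monotonic()
            wait = max(0.0, self.next_slot - now)
            self.next_slot = max(now, self.next_slot) + self.interval
        if wait > 0:
            time.sleep(wait)               # sleep WITHOUT holding the lock
```

With the sleep inside the lock, every waiting thread would serialize behind the sleeper; moving it out keeps the critical section to a few microseconds.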

Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>

* docs: update PROGRESS, DECISIONS, MISTAKES, CLAUDE with env override implementation

- PROGRESS.md: added env override milestone, updated test count (38 total),
  marked Mistake #9 as resolved, added all new/modified files from PR #9
- DECISIONS.md: added Decision 008 (env var config overrides),
  Decision 009 (thread-safe rate limiter), Decision 010 (broader
  vendor fallback exceptions), updated Decision 007 status to superseded
- MISTAKES.md: updated Mistake #9 status to RESOLVED, added Mistake #10
  (rate limiter held lock during sleep)
- CLAUDE.md: added env var override convention docs, updated critical
  patterns with rate limiter and config fallback key lessons, updated
  mistake count to 10

Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>

* merge main into branch (-X theirs) and fix tests to pass against current main code

Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>

* feat: add scanner tests, global demo key in conftest, remove 48 inline key patches

Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>

* feat: add agentic memory scaffold and migrate tracking files to docs/agent/

Migrate DECISIONS.md, MISTAKES.md, PROGRESS.md, agents/, plans/, and
tradingagents/llm_clients/TODO.md into a structured docs/agent/ scaffold
with ADR-style decisions, plans, templates, and a live state tracker.

This gives agent workflows a standard memory structure for decisions,
plans, logs, and session continuity via CURRENT_STATE.md.

Agent-Ref: docs/agent/plans/global-macro-scanner.md
State-Updated: Yes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: improve Industry Deep Dive report quality with enriched data, sector routing, and tool-call nudge

* Initial plan

* Improve Industry Deep Dive quality: enrich tool data, explicit sector keys, tool-call nudge

- Enrich get_industry_performance_yfinance with 1-day/1-week/1-month price returns
  via batched yf.download() for top 10 tickers (Step 1)
- Add VALID_SECTOR_KEYS, _DISPLAY_TO_KEY, _extract_top_sectors() to industry_deep_dive.py
  to pre-extract top sectors from Phase 1 report and inject them into the prompt (Step 2)
- Add tool-call nudge to run_tool_loop: if first LLM response has no tool calls and is
  under 500 chars, re-prompt with explicit instruction to call tools (Step 3)
- Update scanner_tools.py get_industry_performance docstring to list all valid sector keys (Step 4)
- Add 15 unit tests covering _extract_top_sectors, tool_runner nudge, and enriched output (Step 5)

Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>

* Address code review: move cols[3] access into try block for IndexError safety

Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>

* fix: align display row count with download count in get_industry_performance_yfinance

The enriched function downloads price data for top 10 tickers but displayed
20 rows, causing rows 11-20 to show N/A in all price columns. This broke
test_industry_perf_falls_back_to_yfinance which asserts N/A count < 5.
Now both download and display use head(10) for consistency.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>
Co-authored-by: Ahmet Guzererler <guzererler@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* docs: update memory files after PR #13 (Industry Deep Dive quality fix)

- CURRENT_STATE.md: remove Industry Deep Dive blocker (resolved), update
  test count 38 → 53, add PR #13 to Recent Progress, update milestone focus
- decisions/009-industry-deep-dive-quality.md: new ADR documenting the
  three-pronged fix (enriched data, explicit sector routing, tool-call nudge)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add architecture-coordinator skill for mandatory ADR reading protocol

New Claude Code skill that enforces reading docs/agent/CURRENT_STATE.md,
decisions/, and plans/ before any code changes. Includes conflict resolution
protocol that stops work and quotes the violated ADR rule when user requests
conflict with established architectural decisions.

Files:
- .claude/skills/architecture-coordinator/SKILL.md
- .claude/skills/architecture-coordinator/references/adr-template.md
- .claude/skills/architecture-coordinator/references/reading-checklist.md

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>
Commit 7728f79e8d (parent 7f22b8e889), committed by ahmet guzererler via GitHub on 2026-03-17 22:27:40 +01:00.
22 changed files with 2648 additions and 14 deletions.

PLAN.md (new file, +299 lines)

# Implementation Plan: Medium-Term Positioning Upgrade
## Lead Architect Overview
Four objectives to upgrade TradingAgents for medium-term (1-3 month) positioning:
1. **Agentic Debate** — Increase debate rounds to 2-3
2. **Fundamental Data** — Extend look-back to 8 quarters with TTM trend computation
3. **Relative Performance** — Sector & peer comparison tools
4. **Macro Regime Flag** — Classify market as risk-on / risk-off / transition
---
## Step 1: Agentic Debate — Increase Rounds (Architect + API Integrator)
**Assigned to: API Integrator Agent**
**Risk: LOW** — Config-only change, conditional logic already supports arbitrary round counts.
### Changes:
- **File:** `tradingagents/default_config.py`
- Change `"max_debate_rounds": 1` → `"max_debate_rounds": 2`
- Change `"max_risk_discuss_rounds": 1` → `"max_risk_discuss_rounds": 2`
- **File:** `tradingagents/graph/trading_graph.py` (line 146)
- Pass config values to `ConditionalLogic`:
```python
self.conditional_logic = ConditionalLogic(
max_debate_rounds=self.config.get("max_debate_rounds", 2),
max_risk_discuss_rounds=self.config.get("max_risk_discuss_rounds", 2),
)
```
- **NOTE:** Currently `ConditionalLogic()` is called with no args, so it uses its own defaults of 1. The config values are never actually wired in. This is a bug fix.
### Verification:
- Investment debate: count threshold = `2 * 2 = 4` → Bull speaks 2×, Bear speaks 2× before judge
- Risk debate: count threshold = `3 * 2 = 6` → Each of 3 analysts speaks 2× before judge
- `max_recur_limit` of 100 remains sufficient (worst case is ~20 graph steps)
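
The thresholds above are simply speakers multiplied by rounds; a sanity-check sketch:

```python
def debate_threshold(max_rounds: int, speakers: int) -> int:
    """Each of `speakers` debaters speaks once per round, so the judge
    takes over once the debate count reaches speakers * max_rounds."""
    return speakers * max_rounds


investment = debate_threshold(max_rounds=2, speakers=2)  # bull + bear → 4
risk = debate_threshold(max_rounds=2, speakers=3)        # 3 risk analysts → 6
```
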
---
## Step 2: Fundamental Data — 8-Quarter TTM Trend (API Integrator + Economist)
**Assigned to: API Integrator (data layer) + Economist (TTM computation logic)**
**Risk: MEDIUM** — Requires new data tool + prompt update + TTM computation module.
### 2A: New TTM Computation Module (Economist Agent)
- **New file:** `tradingagents/dataflows/ttm_analysis.py`
- `compute_ttm_metrics(income_df, balance_df, cashflow_df) -> dict`
- Sum last 4 quarters of income stmt for flow items (Revenue, Net Income, EBITDA, Operating Income, Gross Profit)
- Use latest quarter for balance sheet (stock items: Total Assets, Total Debt, Equity)
- Compute key ratios: Revenue Growth (QoQ and YoY), Margin trends (Gross, Operating, Net), ROE trend, Debt/Equity trend, FCF trend
- `format_ttm_report(metrics: dict, ticker: str) -> str`
- Markdown report with 8-quarter trend table + TTM summary + quarter-over-quarter trajectory
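
The core TTM arithmetic is small; a minimal sketch assuming quarterly values ordered newest-first (the real `compute_ttm_metrics` operates on full DataFrames across all three statements):

```python
def ttm_sum(quarterly_values: list[float]) -> float:
    """TTM for a flow item = sum of the last 4 quarters (newest first)."""
    return sum(quarterly_values[:4])


def yoy_growth(quarterly_values: list[float]) -> float:
    """YoY growth: latest quarter vs the same quarter one year earlier."""
    latest, year_ago = quarterly_values[0], quarterly_values[4]
    return (latest - year_ago) / abs(year_ago)


# 8 quarters of revenue, newest first
revenue = [120.0, 110.0, 105.0, 100.0, 95.0, 90.0, 88.0, 85.0]
ttm_revenue = ttm_sum(revenue)   # 120 + 110 + 105 + 100 = 435.0
growth = yoy_growth(revenue)     # (120 - 95) / 95 ≈ 0.263
```

Flow items (revenue, net income) are summed this way; stock items (assets, debt, equity) would instead take only the latest quarter, as noted above.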
### 2B: New Tool — `get_ttm_analysis` (API Integrator Agent)
- **File:** `tradingagents/agents/utils/fundamental_data_tools.py`
- Add new `@tool` function `get_ttm_analysis(ticker, curr_date) -> str`
- Internally calls existing vendor-routed `get_income_statement`, `get_balance_sheet`, `get_cashflow` with `freq="quarterly"`
- Passes raw data to `compute_ttm_metrics()` and `format_ttm_report()`
- **File:** `tradingagents/agents/utils/agent_utils.py`
- Export `get_ttm_analysis` tool
### 2C: Update Fundamentals Analyst Prompt (Economist Agent)
- **File:** `tradingagents/agents/analysts/fundamentals_analyst.py`
- Add `get_ttm_analysis` to tools list
- Update system prompt from "past week" to:
> "You are a researcher tasked with analyzing fundamental information covering the last 8 quarters (2 years) for a company. First call `get_ttm_analysis` to obtain a Trailing Twelve Months (TTM) trend report including revenue growth, margin trajectories, and key ratio trends. Then supplement with `get_fundamentals` for the latest snapshot. Write a comprehensive report covering multi-quarter trends, not just the most recent filing."
- **File:** `tradingagents/graph/trading_graph.py` → `_create_tool_nodes()`
- Add `get_ttm_analysis` to the `"fundamentals"` ToolNode
### 2D: Data Layer — Ensure 8 Quarters Available
- **yfinance:** `ticker.quarterly_income_stmt` returns only the last 4-5 quarters by default, so 8 quarters requires more than one source of data.
- Approach: Call `ticker.quarterly_income_stmt` for the available recent quarters, plus `ticker.income_stmt` (annual) for older periods, and combine to reconstruct 8 quarters.
- **Alternative (preferred):** yfinance `ticker.get_income_stmt(freq="quarterly", as_dict=False)` may return more history. Test this.
- **Fallback:** Alpha Vantage INCOME_STATEMENT endpoint returns up to 20 quarterly reports — use this as the configured vendor for TTM.
- **File:** `tradingagents/default_config.py`
- Add to `tool_vendors`: `"get_ttm_analysis": "alpha_vantage,yfinance"` to prefer Alpha Vantage for richer quarterly history
### Data Source Assessment:
| Source | Quarters Available | Notes |
|--------|-------------------|-------|
| yfinance `quarterly_income_stmt` | 4-5 | Limited but free |
| Alpha Vantage `INCOME_STATEMENT` | Up to 20 quarterly | Best option, needs API key |
| Alpha Vantage `BALANCE_SHEET` | Up to 20 quarterly | Same |
| Alpha Vantage `CASH_FLOW` | Up to 20 quarterly | Same |
---
## Step 3: Relative Performance — Sector & Peer Comparison (API Integrator + Economist)
**Assigned to: API Integrator (tools) + Economist (comparison logic)**
**Risk: MEDIUM** — New tools leveraging existing scanner infrastructure.
### 3A: New Peer Comparison Module (Economist Agent)
- **New file:** `tradingagents/dataflows/peer_comparison.py`
- `get_sector_peers(ticker) -> list[str]`
- Use yfinance `Ticker.info["sector"]` to identify sector
- Return top 5-8 peers from same sector (use existing `_SECTOR_TICKERS` mapping from `alpha_vantage_scanner.py`, or yfinance Sector.top_companies)
- `compute_relative_performance(ticker, peers, period="6mo") -> str`
- Download price history for ticker + peers via `yf.download()`
- Compute: 1-week, 1-month, 3-month, 6-month returns for each
- Rank ticker among peers
- Compute ticker's alpha vs sector ETF
- Return markdown table with relative positioning
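
The return/ranking math in `compute_relative_performance` can be sketched like this (assuming a DataFrame of closes with one column per ticker, e.g. from `yf.download(...)["Close"]`; helper names are illustrative):

```python
import pandas as pd


def window_return(prices: pd.Series, trading_days: int) -> float:
    """Simple return over the last `trading_days` observations."""
    return prices.iloc[-1] / prices.iloc[-1 - trading_days] - 1.0


def rank_among_peers(closes: pd.DataFrame, ticker: str, trading_days: int = 21) -> int:
    """1-based rank of `ticker` by window return among all columns (1 = best)."""
    returns = {col: window_return(closes[col], trading_days) for col in closes.columns}
    ordered = sorted(returns, key=returns.get, reverse=True)
    return ordered.index(ticker) + 1


def alpha_vs_etf(stock_return: float, etf_return: float) -> float:
    """Simple alpha: stock return minus sector-ETF return over the same window."""
    return stock_return - etf_return


# Synthetic closes over 30 sessions: a riser, a flat name, and a decliner
closes = pd.DataFrame({
    "AAA": [100.0 + i for i in range(30)],
    "BBB": [100.0] * 30,
    "CCC": [100.0 - 0.5 * i for i in range(30)],
})
```
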
### 3B: New Tools — `get_peer_comparison` and `get_sector_relative` (API Integrator)
- **File:** `tradingagents/agents/utils/fundamental_data_tools.py`
- New `@tool`: `get_peer_comparison(ticker, curr_date) -> str`
- Calls `get_sector_peers()` and `compute_relative_performance()`
- Returns ranked peer table with ticker highlighted
- New `@tool`: `get_sector_relative(ticker, curr_date) -> str`
- Compares ticker vs its sector ETF over multiple time frames
- Returns outperformance/underperformance metrics
- **File:** `tradingagents/agents/utils/agent_utils.py`
- Export both new tools
### 3C: Wire Into Fundamentals Analyst (API Integrator)
- **File:** `tradingagents/agents/analysts/fundamentals_analyst.py`
- Add `get_peer_comparison` and `get_sector_relative` to tools list
- Update prompt to instruct: "Also analyze how the company performs relative to sector peers and its sector ETF benchmark over 1-week, 1-month, 3-month, and 6-month periods."
- **File:** `tradingagents/graph/trading_graph.py` → `_create_tool_nodes()`
- Add both tools to `"fundamentals"` ToolNode
### 3D: Vendor Routing
- These tools use yfinance directly (no Alpha Vantage endpoint for peer comparison)
- No vendor routing needed — direct yfinance calls inside the module
- Register in `TOOLS_CATEGORIES` under `"fundamental_data"` for consistency
---
## Step 4: Macro Regime Flag (Economist Agent)
**Assigned to: Economist Agent**
**Risk: MEDIUM** — New module + new state field + integration into Research Manager.
### 4A: Macro Regime Classifier Module (Economist)
- **New file:** `tradingagents/dataflows/macro_regime.py`
- `classify_macro_regime(curr_date: str = None) -> dict`
- Returns: `{"regime": "risk-on"|"risk-off"|"transition", "confidence": float, "signals": dict, "summary": str}`
- Signal sources (all via yfinance — free, no API key needed):
1. **VIX level**: `yf.Ticker("^VIX")` — <16 risk-on, 16-25 transition, >25 risk-off
2. **VIX trend**: 5-day vs 20-day SMA — rising = risk-off signal
3. **Credit spread proxy**: `yf.Ticker("HYG")` vs `yf.Ticker("LQD")` — HYG/LQD ratio declining = risk-off
4. **Yield curve proxy**: `yf.Ticker("TLT")` (20yr) vs `yf.Ticker("SHY")` (1-3yr) — TLT outperforming = risk-off (flight to safety)
5. **Market breadth**: S&P 500 (`^GSPC`) above/below 200-SMA
6. **Sector rotation signal**: Defensive sectors (XLU, XLP, XLV) outperforming cyclicals (XLY, XLK, XLI) = risk-off
- Scoring: Each signal contributes -1 (risk-off), 0 (neutral), or +1 (risk-on). Aggregate:
- Sum >= 3: "risk-on"
- Sum <= -3: "risk-off"
- Otherwise: "transition"
- `format_macro_report(regime_data: dict) -> str`
- Markdown report with signal breakdown, regime classification, and confidence level
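
The scoring above is a plain sum of per-signal votes; a self-contained sketch with one example threshold function (fetching the signals via yfinance is omitted):

```python
def score_vix_level(vix: float) -> int:
    """+1 risk-on below 16, -1 risk-off above 25, else neutral."""
    if vix < 16:
        return 1
    if vix > 25:
        return -1
    return 0


def aggregate_regime(signal_scores: dict[str, int]) -> str:
    """Each signal votes -1/0/+1; the sum decides the regime."""
    total = sum(signal_scores.values())
    if total >= 3:
        return "risk-on"
    if total <= -3:
        return "risk-off"
    return "transition"


# Example: four supportive signals, two neutral → sum = 4 → "risk-on"
signals = {
    "vix_level": score_vix_level(14.2),  # +1
    "vix_trend": 1,
    "credit_spread": 1,
    "yield_curve": 0,
    "breadth": 1,
    "sector_rotation": 0,
}
regime = aggregate_regime(signals)
```
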
### 4B: New Tool — `get_macro_regime` (API Integrator)
- **File:** `tradingagents/agents/utils/fundamental_data_tools.py` (or new `macro_tools.py`)
- New `@tool`: `get_macro_regime(curr_date) -> str`
- Calls `classify_macro_regime()` and `format_macro_report()`
- **File:** `tradingagents/agents/utils/agent_utils.py`
- Export `get_macro_regime`
### 4C: Add Macro Regime to Agent State
- **File:** `tradingagents/agents/utils/agent_states.py`
- Add to `AgentState`:
```python
macro_regime_report: Annotated[str, "Macro regime classification (risk-on/risk-off/transition)"]
```
### 4D: Wire Into Market Analyst (API Integrator)
- **File:** `tradingagents/agents/analysts/market_analyst.py`
- Add `get_macro_regime` to tools list
- Update prompt to include: "Before analyzing individual stock technicals, call `get_macro_regime` to determine the current market environment (risk-on, risk-off, or transition). Interpret all subsequent technical signals through this macro lens."
- Return `macro_regime_report` in output dict
- **File:** `tradingagents/graph/trading_graph.py` → `_create_tool_nodes()`
- Add `get_macro_regime` to `"market"` ToolNode
### 4E: Feed Macro Regime Into Downstream Agents
- **File:** `tradingagents/agents/managers/research_manager.py`
- Add `macro_regime_report` to the `curr_situation` string that gets passed to the judge
- Update prompt to reference macro regime in decision-making
- **File:** `tradingagents/agents/managers/risk_manager.py`
- Include macro regime context in risk assessment prompt
- **File:** `tradingagents/graph/trading_graph.py` → `_log_state()`
- Add `macro_regime_report` to logged state
---
## Step 5: Integration Tests (Tester Agent)
**Assigned to: Tester Agent**
### 5A: Test Debate Rounds — `tests/test_debate_rounds.py`
- Test `ConditionalLogic` with `max_debate_rounds=2`:
- Verify bull/bear alternate correctly for 4 turns
- Verify routing to "Research Manager" after count >= 4
- Test `ConditionalLogic` with `max_risk_discuss_rounds=2`:
- Verify aggressive→conservative→neutral rotation for 6 turns
- Verify routing to "Risk Judge" after count >= 6
- Test config values are properly wired from `TradingAgentsGraph` config to `ConditionalLogic`
### 5B: Test TTM Analysis — `tests/test_ttm_analysis.py`
- Unit test `compute_ttm_metrics()` with mock 8-quarter DataFrames
- Verify TTM revenue = sum of last 4 quarters
- Verify margin calculations
- Verify QoQ and YoY growth rates
- Unit test `format_ttm_report()` output contains expected sections
- Integration test `get_ttm_analysis` tool with real ticker (mark `@pytest.mark.integration`)
### 5C: Test Peer Comparison — `tests/test_peer_comparison.py`
- Unit test `get_sector_peers()` returns valid tickers for known sectors
- Unit test `compute_relative_performance()` with mock price data
- Verify correct return calculations
- Verify ranking logic
- Integration test with real ticker (mark `@pytest.mark.integration`)
### 5D: Test Macro Regime — `tests/test_macro_regime.py`
- Unit test `classify_macro_regime()` with mocked yfinance data:
- All risk-on signals → "risk-on"
- All risk-off signals → "risk-off"
- Mixed signals → "transition"
- Unit test `format_macro_report()` output format
- Unit test scoring edge cases (VIX at boundaries, missing data gracefully handled)
- Integration test with real market data (mark `@pytest.mark.integration`)
### 5E: Test Config Wiring — `tests/test_config_wiring.py`
- Test that `TradingAgentsGraph(config={...})` properly passes debate rounds to `ConditionalLogic`
- Test that new tools appear in the correct ToolNodes
- Test that new state fields exist in `AgentState`
---
## File Change Summary
| File | Action | Objective |
|------|--------|-----------|
| `tradingagents/default_config.py` | EDIT | #1 debate rounds, #2 tool vendor |
| `tradingagents/graph/trading_graph.py` | EDIT | #1 wire config, #2/#3/#4 add tools to ToolNodes, #4 log macro |
| `tradingagents/graph/conditional_logic.py` | NO CHANGE | Already supports arbitrary rounds |
| `tradingagents/agents/utils/agent_states.py` | EDIT | #4 add macro_regime_report |
| `tradingagents/agents/analysts/fundamentals_analyst.py` | EDIT | #2/#3 new tools + prompt |
| `tradingagents/agents/analysts/market_analyst.py` | EDIT | #4 macro regime tool + prompt |
| `tradingagents/agents/managers/research_manager.py` | EDIT | #4 include macro in decision |
| `tradingagents/agents/managers/risk_manager.py` | EDIT | #4 include macro in risk assessment |
| `tradingagents/agents/utils/fundamental_data_tools.py` | EDIT | #2/#3 new tool functions |
| `tradingagents/agents/utils/agent_utils.py` | EDIT | #2/#3/#4 export new tools |
| `tradingagents/agents/__init__.py` | NO CHANGE | Tools don't need agent-level export |
| `tradingagents/dataflows/ttm_analysis.py` | NEW | #2 TTM computation |
| `tradingagents/dataflows/peer_comparison.py` | NEW | #3 peer comparison logic |
| `tradingagents/dataflows/macro_regime.py` | NEW | #4 macro regime classifier |
| `tradingagents/dataflows/interface.py` | EDIT | #2/#3 register new tools in TOOLS_CATEGORIES |
| `tests/test_debate_rounds.py` | NEW | #1 tests |
| `tests/test_ttm_analysis.py` | NEW | #2 tests |
| `tests/test_peer_comparison.py` | NEW | #3 tests |
| `tests/test_macro_regime.py` | NEW | #4 tests |
| `tests/test_config_wiring.py` | NEW | Integration wiring tests |
---
## Execution Order
1. **Step 1** (Debate Rounds) — Independent, can start immediately
2. **Step 4A** (Macro Regime Module) — Independent, can start in parallel
3. **Step 2A** (TTM Module) — Independent, can start in parallel
4. **Step 3A** (Peer Comparison Module) — Independent, can start in parallel
5. **Steps 2B-2D** (TTM Integration) — Depends on 2A
6. **Steps 3B-3D** (Peer Integration) — Depends on 3A
7. **Steps 4B-4E** (Macro Integration) — Depends on 4A
8. **Step 5** (All Tests) — Depends on all above
Steps 1, 2A, 3A, 4A can all run in parallel.
---
## Risk Mitigation
- **yfinance quarterly data limit:** If yfinance returns <8 quarters, TTM module gracefully computes with available data and notes the gap. Alpha Vantage fallback provides full 20 quarters.
- **New state field (macro_regime_report):** Default empty string. All existing agents that don't produce it will leave it empty — no reducer conflicts.
- **Rate limits:** Macro regime and peer comparison both call yfinance, which imposes no documented rate limit. Sector performance via Alpha Vantage is already rate-limited at 75/min.
- **Backward compatibility:** All changes are additive. `max_debate_rounds=1` still works. New tools are optional in prompts. `macro_regime_report` defaults to empty.

pyproject.toml (modified, +5 lines)

```toml
[dependency-groups]
dev = [
    "pytest>=9.0.2",
]
```

tests/test_config_wiring.py (new file, +118 lines)

```python
"""Tests for config wiring — new tools in ToolNodes, new state fields, etc."""
import pytest


class TestAgentStateFields:
    def test_macro_regime_report_field_exists(self):
        """AgentState should have macro_regime_report field."""
        from tradingagents.agents.utils.agent_states import AgentState

        # TypedDict fields are accessible via __annotations__
        assert "macro_regime_report" in AgentState.__annotations__

    def test_all_original_fields_still_present(self):
        from tradingagents.agents.utils.agent_states import AgentState

        expected_fields = [
            "company_of_interest", "trade_date", "sender",
            "market_report", "sentiment_report", "news_report", "fundamentals_report",
            "investment_debate_state", "investment_plan", "trader_investment_plan",
            "risk_debate_state", "final_trade_decision",
        ]
        for field in expected_fields:
            assert field in AgentState.__annotations__, f"Missing field: {field}"


class TestNewToolsExported:
    def test_get_ttm_analysis_exported(self):
        from tradingagents.agents.utils.agent_utils import get_ttm_analysis
        assert callable(get_ttm_analysis)

    def test_get_peer_comparison_exported(self):
        from tradingagents.agents.utils.agent_utils import get_peer_comparison
        assert callable(get_peer_comparison)

    def test_get_sector_relative_exported(self):
        from tradingagents.agents.utils.agent_utils import get_sector_relative
        assert callable(get_sector_relative)

    def test_get_macro_regime_exported(self):
        from tradingagents.agents.utils.agent_utils import get_macro_regime
        assert callable(get_macro_regime)

    def test_tools_are_langchain_tools(self):
        """All new tools should be LangChain @tool decorated (have .name attribute)."""
        from tradingagents.agents.utils.agent_utils import (
            get_ttm_analysis, get_peer_comparison, get_sector_relative, get_macro_regime
        )
        for tool in [get_ttm_analysis, get_peer_comparison, get_sector_relative, get_macro_regime]:
            assert hasattr(tool, "name"), f"{tool} is not a LangChain tool"


class TestTTMToolInCategory:
    def test_ttm_in_fundamental_data_category(self):
        from tradingagents.dataflows.interface import TOOLS_CATEGORIES
        assert "get_ttm_analysis" in TOOLS_CATEGORIES["fundamental_data"]["tools"]


class TestConditionalLogicWiring:
    def test_default_config_debate_rounds(self):
        from tradingagents.default_config import DEFAULT_CONFIG
        assert DEFAULT_CONFIG["max_debate_rounds"] == 2
        assert DEFAULT_CONFIG["max_risk_discuss_rounds"] == 2

    def test_conditional_logic_accepts_config_values(self):
        from tradingagents.graph.conditional_logic import ConditionalLogic
        cl = ConditionalLogic(max_debate_rounds=3, max_risk_discuss_rounds=3)
        assert cl.max_debate_rounds == 3
        assert cl.max_risk_discuss_rounds == 3

    def test_debate_threshold_calculation(self):
        """Threshold = 2 * max_debate_rounds."""
        from tradingagents.graph.conditional_logic import ConditionalLogic
        from tradingagents.agents.utils.agent_states import InvestDebateState

        cl = ConditionalLogic(max_debate_rounds=2)
        # At count=4, should route to Research Manager
        state = {
            "investment_debate_state": InvestDebateState(
                bull_history="", bear_history="", history="",
                current_response="Bull: argument", judge_decision="", count=4,
            )
        }
        result = cl.should_continue_debate(state)
        assert result == "Research Manager"

    def test_risk_threshold_calculation(self):
        """Threshold = 3 * max_risk_discuss_rounds."""
        from tradingagents.graph.conditional_logic import ConditionalLogic
        from tradingagents.agents.utils.agent_states import RiskDebateState

        cl = ConditionalLogic(max_risk_discuss_rounds=2)
        state = {
            "risk_debate_state": RiskDebateState(
                aggressive_history="", conservative_history="", neutral_history="",
                history="", latest_speaker="Aggressive",
                current_aggressive_response="", current_conservative_response="",
                current_neutral_response="", judge_decision="", count=6,
            )
        }
        result = cl.should_continue_risk_analysis(state)
        assert result == "Risk Judge"


class TestNewModulesImportable:
    def test_ttm_analysis_importable(self):
        from tradingagents.dataflows.ttm_analysis import compute_ttm_metrics, format_ttm_report
        assert callable(compute_ttm_metrics)
        assert callable(format_ttm_report)

    def test_peer_comparison_importable(self):
        from tradingagents.dataflows.peer_comparison import (
            get_sector_peers, compute_relative_performance,
            get_peer_comparison_report, get_sector_relative_report,
        )
        assert callable(get_sector_peers)
        assert callable(compute_relative_performance)

    def test_macro_regime_importable(self):
        from tradingagents.dataflows.macro_regime import classify_macro_regime, format_macro_report
        assert callable(classify_macro_regime)
        assert callable(format_macro_report)
```

tests/test_debate_rounds.py (new file, +172 lines)

```python
"""Tests for agentic debate round configuration and conditional logic."""
import pytest

from tradingagents.graph.conditional_logic import ConditionalLogic
from tradingagents.agents.utils.agent_states import InvestDebateState, RiskDebateState

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

def _make_invest_state(count: int, current_response: str = "Bull: some argument") -> dict:
    return {
        "investment_debate_state": InvestDebateState(
            bull_history="",
            bear_history="",
            history="",
            current_response=current_response,
            judge_decision="",
            count=count,
        )
    }


def _make_risk_state(count: int, latest_speaker: str = "Aggressive") -> dict:
    return {
        "risk_debate_state": RiskDebateState(
            aggressive_history="",
            conservative_history="",
            neutral_history="",
            history="",
            latest_speaker=latest_speaker,
            current_aggressive_response="",
            current_conservative_response="",
            current_neutral_response="",
            judge_decision="",
            count=count,
        )
    }

# ---------------------------------------------------------------------------
# ConditionalLogic default initialization
# ---------------------------------------------------------------------------

class TestConditionalLogicDefaults:
    def test_default_max_debate_rounds(self):
        cl = ConditionalLogic()
        assert cl.max_debate_rounds == 1

    def test_default_max_risk_discuss_rounds(self):
        cl = ConditionalLogic()
        assert cl.max_risk_discuss_rounds == 1

# ---------------------------------------------------------------------------
# Investment debate routing — 2 rounds
# ---------------------------------------------------------------------------

class TestInvestDebateRounds2:
    def setup_method(self):
        self.cl = ConditionalLogic(max_debate_rounds=2)

    def test_bull_speaks_first(self):
        # count=0, current_response starts with "Bull" → go to Bear
        state = _make_invest_state(count=0, current_response="Bull: bullish case")
        result = self.cl.should_continue_debate(state)
        assert result == "Bear Researcher"

    def test_bear_speaks_second(self):
        # count=1, current_response does NOT start with "Bull" → go to Bull
        state = _make_invest_state(count=1, current_response="Bear: bearish case")
        result = self.cl.should_continue_debate(state)
        assert result == "Bull Researcher"

    def test_bull_speaks_third(self):
        # count=2, threshold=2*2=4, not reached; Bull spoke last so Bear goes
        state = _make_invest_state(count=2, current_response="Bull: second argument")
        result = self.cl.should_continue_debate(state)
        assert result == "Bear Researcher"

    def test_bear_speaks_fourth(self):
        # count=3, threshold=4, not reached; Bear spoke last so Bull goes
        state = _make_invest_state(count=3, current_response="Bear: second rebuttal")
        result = self.cl.should_continue_debate(state)
        assert result == "Bull Researcher"

    def test_routes_to_manager_at_threshold(self):
        # count=4 == 2*2=4 → route to Research Manager
        state = _make_invest_state(count=4, current_response="Bull: final word")
        result = self.cl.should_continue_debate(state)
        assert result == "Research Manager"

    def test_routes_to_manager_above_threshold(self):
        # count=6 > threshold → still route to Research Manager
        state = _make_invest_state(count=6, current_response="Bull: anything")
        result = self.cl.should_continue_debate(state)
        assert result == "Research Manager"

# ---------------------------------------------------------------------------
# Investment debate routing — 3 rounds
# ---------------------------------------------------------------------------

class TestInvestDebateRounds3:
    def setup_method(self):
        self.cl = ConditionalLogic(max_debate_rounds=3)

    def test_threshold_is_6(self):
        # count=5, threshold=3*2=6, not reached
        state = _make_invest_state(count=5, current_response="Bull: fifth turn")
        result = self.cl.should_continue_debate(state)
        assert result == "Bear Researcher"

    def test_routes_to_manager_at_6(self):
        state = _make_invest_state(count=6, current_response="Bull: sixth turn")
        result = self.cl.should_continue_debate(state)
        assert result == "Research Manager"

# ---------------------------------------------------------------------------
```
# Risk debate routing — 2 rounds
# ---------------------------------------------------------------------------
class TestRiskDebateRounds2:
def setup_method(self):
self.cl = ConditionalLogic(max_risk_discuss_rounds=2)
def test_aggressive_goes_to_conservative(self):
state = _make_risk_state(count=0, latest_speaker="Aggressive")
result = self.cl.should_continue_risk_analysis(state)
assert result == "Conservative Analyst"
def test_conservative_goes_to_neutral(self):
state = _make_risk_state(count=1, latest_speaker="Conservative")
result = self.cl.should_continue_risk_analysis(state)
assert result == "Neutral Analyst"
def test_neutral_goes_to_aggressive(self):
state = _make_risk_state(count=2, latest_speaker="Neutral")
result = self.cl.should_continue_risk_analysis(state)
assert result == "Aggressive Analyst"
def test_threshold_at_6(self):
# count=6 == 3*2=6 → route to Risk Judge
state = _make_risk_state(count=6, latest_speaker="Aggressive")
result = self.cl.should_continue_risk_analysis(state)
assert result == "Risk Judge"
def test_continues_at_count_5(self):
state = _make_risk_state(count=5, latest_speaker="Aggressive")
result = self.cl.should_continue_risk_analysis(state)
assert result == "Conservative Analyst"
# ---------------------------------------------------------------------------
# Config wiring — verify TradingAgentsGraph passes config to ConditionalLogic
# ---------------------------------------------------------------------------
class TestConfigWiring:
def test_trading_graph_wires_debate_rounds(self):
"""ConditionalLogic should use config values, not hardcoded defaults."""
from tradingagents.graph.conditional_logic import ConditionalLogic
cl = ConditionalLogic(max_debate_rounds=2, max_risk_discuss_rounds=2)
assert cl.max_debate_rounds == 2
assert cl.max_risk_discuss_rounds == 2
def test_default_config_has_updated_values(self):
"""Default config should now ship with max_debate_rounds=2."""
from tradingagents.default_config import DEFAULT_CONFIG
assert DEFAULT_CONFIG["max_debate_rounds"] == 2
assert DEFAULT_CONFIG["max_risk_discuss_rounds"] == 2

tests/test_macro_regime.py

"""Tests for macro regime classifier (risk-on / transition / risk-off)."""
import pytest
import pandas as pd
import numpy as np
from unittest.mock import patch, MagicMock
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _make_series(values: list[float], freq: str = "B") -> pd.Series:
dates = pd.date_range("2025-09-01", periods=len(values), freq=freq)
return pd.Series(values, index=dates)
def _flat_series(value: float, n: int = 100) -> pd.Series:
return _make_series([value] * n)
def _trending_series(start: float, end: float, n: int = 100) -> pd.Series:
return _make_series(list(np.linspace(start, end, n)))
# ---------------------------------------------------------------------------
# Individual signal tests
# ---------------------------------------------------------------------------
class TestSignalVixLevel:
def setup_method(self):
from tradingagents.dataflows.macro_regime import _signal_vix_level
self.fn = _signal_vix_level
def test_low_vix_is_risk_on(self):
score, desc = self.fn(14.0)
assert score == 1
assert "risk-on" in desc
def test_high_vix_is_risk_off(self):
score, desc = self.fn(30.0)
assert score == -1
assert "risk-off" in desc
def test_mid_vix_is_neutral(self):
score, desc = self.fn(20.0)
assert score == 0
def test_none_vix_is_neutral(self):
score, desc = self.fn(None)
assert score == 0
assert "unavailable" in desc
def test_boundary_at_16(self):
# Exactly at threshold — not below, so transition
score, _ = self.fn(16.0)
assert score == 0
def test_boundary_at_25(self):
# Exactly at threshold — not above, so transition
score, _ = self.fn(25.0)
assert score == 0
class TestSignalVixTrend:
def setup_method(self):
from tradingagents.dataflows.macro_regime import _signal_vix_trend
self.fn = _signal_vix_trend
def test_declining_vix_is_risk_on(self):
# SMA5 < SMA20: VIX is falling
vix = _trending_series(30, 15, 30)
score, desc = self.fn(vix)
assert score == 1
assert "risk-on" in desc
def test_rising_vix_is_risk_off(self):
# SMA5 > SMA20: VIX is rising
vix = _trending_series(10, 30, 30)
score, desc = self.fn(vix)
assert score == -1
assert "risk-off" in desc
def test_insufficient_history_is_neutral(self):
vix = _make_series([20.0] * 4)
score, desc = self.fn(vix)
assert score == 0
def test_none_series_is_neutral(self):
score, desc = self.fn(None)
assert score == 0
class TestSignalCreditSpread:
def setup_method(self):
from tradingagents.dataflows.macro_regime import _signal_credit_spread
self.fn = _signal_credit_spread
def test_improving_spread_is_risk_on(self):
# HYG/LQD ratio rising by >0.5% over 1 month
hyg = _trending_series(80, 85, 30)
lqd = _flat_series(100, 30)
score, desc = self.fn(hyg, lqd)
assert score == 1
def test_deteriorating_spread_is_risk_off(self):
# HYG/LQD ratio falling by >0.5%
hyg = _trending_series(85, 80, 30)
lqd = _flat_series(100, 30)
score, desc = self.fn(hyg, lqd)
assert score == -1
def test_none_data_is_neutral(self):
score, _ = self.fn(None, None)
assert score == 0
class TestSignalMarketBreadth:
    def setup_method(self):
        from tradingagents.dataflows.macro_regime import _signal_market_breadth
        self.fn = _signal_market_breadth

    def test_above_200sma_is_risk_on(self):
        # Upward-trending series — the latest value sits above the 200-day SMA
        spx = _trending_series(4000, 6000, 250)
        score, desc = self.fn(spx)
        assert score == 1
        assert "risk-on" in desc

    def test_below_200sma_is_risk_off(self):
        # Downward-trending series — the latest value sits below the 200-day SMA
        spx = _trending_series(6000, 4000, 250)
        score, desc = self.fn(spx)
        assert score == -1
        assert "risk-off" in desc

    def test_insufficient_history_is_neutral(self):
        # Fewer than 200 data points, so the SMA200 cannot be computed
        spx = _make_series([5000.0] * 100)
        score, _ = self.fn(spx)
        assert score == 0
# ---------------------------------------------------------------------------
# Classify macro regime
# ---------------------------------------------------------------------------

class TestClassifyMacroRegime:
    def _mock_download(self, scenario: str):
        """Return mock yfinance download data for different scenarios."""
        n = 250
        if scenario == "risk_on":
            vix = _trending_series(30, 12, n)  # VIX falling → +1 trend AND +1 level at end
            spx = _trending_series(4000, 6000, n)  # Above 200-SMA → +1
            hyg = _trending_series(75, 90, n)  # HYG rising sharply (credit improving) → +1
            lqd = _flat_series(100, n)
            tlt = _flat_series(100, n)  # TLT flat (no flight to safety) → 0
            shy = _flat_series(100, n)
            xlu = _flat_series(60, n); xlp = _flat_series(70, n); xlv = _flat_series(80, n)
            xly = _trending_series(100, 120, n); xlk = _trending_series(100, 120, n); xli = _trending_series(100, 120, n)  # cyclicals up → +1
        elif scenario == "risk_off":
            vix = _flat_series(30.0, n)  # High VIX
            spx = _trending_series(6000, 4000, n)  # Below 200-SMA
            hyg = _trending_series(85, 80, n)  # Deteriorating credit
            lqd = _flat_series(100, n)
            tlt = _trending_series(95, 105, n)  # TLT outperforming (flight to safety)
            shy = _flat_series(100, n)
            xlu = _trending_series(60, 66, n); xlp = _trending_series(70, 77, n); xlv = _trending_series(80, 88, n)
            xly = _flat_series(150, n); xlk = _flat_series(180, n); xli = _flat_series(100, n)
        else:  # transition
            vix = _flat_series(20.0, n)  # Mid VIX
            spx = _trending_series(4900, 5100, n)  # Near 200-SMA
            hyg = _flat_series(82, n)
            lqd = _flat_series(100, n)
            tlt = _flat_series(100, n)
            shy = _flat_series(100, n)
            xlu = _flat_series(60, n); xlp = _flat_series(70, n); xlv = _flat_series(80, n)
            xly = _flat_series(150, n); xlk = _flat_series(180, n); xli = _flat_series(100, n)
        return {
            "^VIX": vix, "^GSPC": spx,
            "HYG": hyg, "LQD": lqd,
            "TLT": tlt, "SHY": shy,
            "XLU": xlu, "XLP": xlp, "XLV": xlv,
            "XLY": xly, "XLK": xlk, "XLI": xli,
        }

    def _patch_download(self, scenario: str):
        series_map = self._mock_download(scenario)

        def fake_download(symbols, **kwargs):
            if isinstance(symbols, str):
                symbols = [symbols]
            data = {s: series_map[s] for s in symbols if s in series_map}
            if not data:
                return pd.DataFrame()
            df = pd.DataFrame(data)
            return pd.concat({"Close": df}, axis=1)

        return patch("yfinance.download", side_effect=fake_download)

    def test_risk_on_regime(self):
        with self._patch_download("risk_on"):
            from tradingagents.dataflows.macro_regime import classify_macro_regime
            result = classify_macro_regime()
            assert result["regime"] == "risk-on"
            assert result["score"] >= 3

    def test_risk_off_regime(self):
        with self._patch_download("risk_off"):
            from tradingagents.dataflows.macro_regime import classify_macro_regime
            result = classify_macro_regime()
            assert result["regime"] == "risk-off"
            assert result["score"] <= -3

    def test_result_has_required_keys(self):
        with self._patch_download("transition"):
            from tradingagents.dataflows.macro_regime import classify_macro_regime
            result = classify_macro_regime()
            for key in ("regime", "score", "confidence", "signals", "summary"):
                assert key in result

    def test_signals_list_has_6_entries(self):
        with self._patch_download("transition"):
            from tradingagents.dataflows.macro_regime import classify_macro_regime
            result = classify_macro_regime()
            assert len(result["signals"]) == 6

    def test_each_signal_has_score_and_description(self):
        with self._patch_download("transition"):
            from tradingagents.dataflows.macro_regime import classify_macro_regime
            result = classify_macro_regime()
            for sig in result["signals"]:
                assert "score" in sig
                assert "description" in sig
                assert sig["score"] in (-1, 0, 1)

    def test_confidence_is_valid(self):
        with self._patch_download("risk_on"):
            from tradingagents.dataflows.macro_regime import classify_macro_regime
            result = classify_macro_regime()
            assert result["confidence"] in ("high", "medium", "low")
# ---------------------------------------------------------------------------
# Format macro report
# ---------------------------------------------------------------------------

class TestFormatMacroReport:
    def setup_method(self):
        from tradingagents.dataflows.macro_regime import format_macro_report
        self.format = format_macro_report

    def _sample_regime(self, regime: str) -> dict:
        return {
            "regime": regime,
            "score": 3 if regime == "risk-on" else -3 if regime == "risk-off" else 0,
            "confidence": "high",
            "vix": 14.5,
            "signals": [
                {"name": "vix_level", "score": 1, "description": "VIX low"},
                {"name": "vix_trend", "score": 1, "description": "VIX declining"},
                {"name": "credit_spread", "score": 1, "description": "Improving"},
                {"name": "yield_curve", "score": 0, "description": "Neutral"},
                {"name": "market_breadth", "score": 0, "description": "Above SMA"},
                {"name": "sector_rotation", "score": 0, "description": "Cyclicals lead"},
            ],
            "summary": f"Regime: {regime}",
        }

    def test_report_contains_regime_label(self):
        for regime in ("risk-on", "risk-off", "transition"):
            report = self.format(self._sample_regime(regime))
            assert regime.upper() in report

    def test_report_contains_signal_table(self):
        report = self.format(self._sample_regime("risk-on"))
        assert "Signal Breakdown" in report
        assert "Vix Level" in report

    def test_report_contains_trading_implications(self):
        for regime in ("risk-on", "risk-off", "transition"):
            report = self.format(self._sample_regime(regime))
            assert "What This Means for Trading" in report

    def test_risk_on_suggests_cyclicals(self):
        report = self.format(self._sample_regime("risk-on"))
        assert "cyclicals" in report.lower() or "growth" in report.lower()

    def test_risk_off_suggests_defensives(self):
        report = self.format(self._sample_regime("risk-off"))
        assert "defensive" in report.lower()


# ---------------------------------------------------------------------------
# Integration test
# ---------------------------------------------------------------------------

@pytest.mark.integration
class TestMacroRegimeIntegration:
    def test_get_macro_regime_tool(self):
        from tradingagents.agents.utils.fundamental_data_tools import get_macro_regime
        result = get_macro_regime.invoke({"curr_date": "2026-03-17"})
        assert isinstance(result, str)
        assert len(result) > 100
        assert any(r in result.upper() for r in ("RISK-ON", "RISK-OFF", "TRANSITION"))
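The score aggregation these tests imply can be sketched directly from the assertions above (six votes in {-1, 0, +1}; totals of +3 and -3 mark the regime boundaries). The thresholds and the "transition" fallback are assumptions inferred from the tests, not the actual `classify_macro_regime` source:

```python
# Hedged reconstruction of the regime aggregation rule implied by the tests.
def aggregate_regime(signal_scores: list[int]) -> str:
    # Six signals (VIX level, VIX trend, credit spread, yield curve,
    # market breadth, sector rotation) each vote -1, 0, or +1.
    total = sum(signal_scores)
    if total >= 3:
        return "risk-on"
    if total <= -3:
        return "risk-off"
    return "transition"
```

Under this reading, a single contrarian signal cannot flip the regime; at least three net votes in one direction are required, which is what `test_risk_on_regime` and `test_risk_off_regime` assert on `result["score"]`.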

tests/test_peer_comparison.py

"""Tests for sector and peer relative performance comparison."""
import pytest
import pandas as pd
from unittest.mock import patch, MagicMock
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _make_price_series(n: int = 130, start: float = 100.0, growth: float = 0.001) -> pd.Series:
"""Create a synthetic daily price series."""
import numpy as np
dates = pd.date_range("2025-09-01", periods=n, freq="B")
prices = [start * (1 + growth) ** i for i in range(n)]
return pd.Series(prices, index=dates)
# ---------------------------------------------------------------------------
# Unit tests for get_sector_peers
# ---------------------------------------------------------------------------
class TestGetSectorPeers:
def test_technology_sector_returns_peers(self):
"""get_sector_peers should return known tech tickers for a tech stock."""
mock_info = {"sector": "Technology"}
with patch("yfinance.Ticker") as mock_ticker:
mock_ticker.return_value.info = mock_info
from tradingagents.dataflows.peer_comparison import get_sector_peers
sector_display, sector_key, peers = get_sector_peers("AAPL")
assert sector_key == "technology"
assert len(peers) > 0
assert "AAPL" not in peers # ticker excluded from its own peers
assert any(p in peers for p in ["MSFT", "NVDA", "GOOGL"])
def test_healthcare_sector(self):
mock_info = {"sector": "Healthcare"}
with patch("yfinance.Ticker") as mock_ticker:
mock_ticker.return_value.info = mock_info
from tradingagents.dataflows.peer_comparison import get_sector_peers
sector_display, sector_key, peers = get_sector_peers("JNJ")
assert sector_key == "healthcare"
assert "JNJ" not in peers
def test_unknown_sector_returns_empty_peers(self):
mock_info = {"sector": "Foobar"}
with patch("yfinance.Ticker") as mock_ticker:
mock_ticker.return_value.info = mock_info
from tradingagents.dataflows.peer_comparison import get_sector_peers
sector_display, sector_key, peers = get_sector_peers("XYZ")
assert peers == []
def test_network_error_returns_empty(self):
with patch("yfinance.Ticker") as mock_ticker:
mock_ticker.return_value.info = {}
mock_ticker.side_effect = Exception("network error")
from tradingagents.dataflows.peer_comparison import get_sector_peers
sector_display, sector_key, peers = get_sector_peers("AAPL")
assert peers == []
# ---------------------------------------------------------------------------
# Unit tests for compute_relative_performance
# ---------------------------------------------------------------------------
class TestComputeRelativePerformance:
def _mock_download(self, tickers: list[str]) -> pd.DataFrame:
"""Create mock multi-ticker close price DataFrame."""
n = 130
data = {}
for i, t in enumerate(tickers):
data[t] = _make_price_series(n=n, start=100.0, growth=0.001 * (i + 1))
df = pd.DataFrame(data, index=pd.date_range("2025-09-01", periods=n, freq="B"))
return pd.concat({"Close": df}, axis=1)
def test_returns_markdown_table(self):
tickers = ["AAPL", "MSFT", "NVDA", "XLK"]
mock_hist = self._mock_download(tickers)
with patch("yfinance.download", return_value=mock_hist):
from tradingagents.dataflows.peer_comparison import compute_relative_performance
result = compute_relative_performance("AAPL", "technology", ["MSFT", "NVDA", "XLK"])
assert "| Symbol |" in result
assert "AAPL" in result
assert "TARGET" in result
def test_ticker_appears_as_target(self):
tickers = ["AAPL", "MSFT", "XLK"]
mock_hist = self._mock_download(tickers)
with patch("yfinance.download", return_value=mock_hist):
from tradingagents.dataflows.peer_comparison import compute_relative_performance
result = compute_relative_performance("AAPL", "technology", ["MSFT"])
assert "► TARGET" in result
def test_etf_appears_as_benchmark(self):
tickers = ["AAPL", "MSFT", "XLK"]
mock_hist = self._mock_download(tickers)
with patch("yfinance.download", return_value=mock_hist):
from tradingagents.dataflows.peer_comparison import compute_relative_performance
result = compute_relative_performance("AAPL", "technology", ["MSFT"])
assert "ETF Benchmark" in result
def test_alpha_section_present(self):
tickers = ["AAPL", "XLK"]
mock_hist = self._mock_download(tickers)
with patch("yfinance.download", return_value=mock_hist):
from tradingagents.dataflows.peer_comparison import compute_relative_performance
result = compute_relative_performance("AAPL", "technology", [])
assert "Alpha vs Sector ETF" in result
def test_download_failure_returns_error_string(self):
with patch("yfinance.download", side_effect=Exception("timeout")):
from tradingagents.dataflows.peer_comparison import compute_relative_performance
result = compute_relative_performance("AAPL", "technology", ["MSFT"])
assert "Error" in result
# ---------------------------------------------------------------------------
# Unit tests for get_sector_relative_report
# ---------------------------------------------------------------------------
class TestGetSectorRelativeReport:
def _mock_download(self) -> pd.DataFrame:
n = 130
data = {
"AAPL": _make_price_series(n=n, start=150.0, growth=0.002),
"XLK": _make_price_series(n=n, start=200.0, growth=0.001),
}
df = pd.DataFrame(data, index=pd.date_range("2025-09-01", periods=n, freq="B"))
return pd.concat({"Close": df}, axis=1)
def test_returns_table_with_all_periods(self):
mock_hist = self._mock_download()
mock_info = {"sector": "Technology"}
with patch("yfinance.Ticker") as mock_ticker, \
patch("yfinance.download", return_value=mock_hist):
mock_ticker.return_value.info = mock_info
from tradingagents.dataflows.peer_comparison import get_sector_relative_report
result = get_sector_relative_report("AAPL")
for period in ["1-Week", "1-Month", "3-Month", "6-Month", "YTD"]:
assert period in result
def test_unknown_sector_returns_graceful_message(self):
mock_info = {"sector": "UnknownSector"}
with patch("yfinance.Ticker") as mock_ticker:
mock_ticker.return_value.info = mock_info
from tradingagents.dataflows.peer_comparison import get_sector_relative_report
result = get_sector_relative_report("XYZ")
assert "No ETF benchmark" in result
# ---------------------------------------------------------------------------
# Integration test
# ---------------------------------------------------------------------------
@pytest.mark.integration
class TestPeerComparisonIntegration:
def test_peer_comparison_tool(self):
from tradingagents.agents.utils.fundamental_data_tools import get_peer_comparison
result = get_peer_comparison.invoke({"ticker": "AAPL", "curr_date": "2026-03-17"})
assert isinstance(result, str)
assert len(result) > 50
def test_sector_relative_tool(self):
from tradingagents.agents.utils.fundamental_data_tools import get_sector_relative
result = get_sector_relative.invoke({"ticker": "AAPL", "curr_date": "2026-03-17"})
assert isinstance(result, str)
assert len(result) > 50
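Behind `compute_relative_performance`, the core arithmetic is trailing returns over fixed trading-day windows plus a return spread against the sector ETF. A minimal sketch, with hypothetical helper names and plain lists standing in for the price DataFrames:

```python
def trailing_return(prices: list[float], trading_days: int) -> float:
    # Percent change over the last `trading_days` observations.
    return (prices[-1] / prices[-1 - trading_days] - 1) * 100


def alpha_vs_sector_etf(stock: list[float], etf: list[float],
                        trading_days: int = 63) -> float:
    # "Alpha" in the report's sense: the stock's trailing return minus the
    # sector ETF's trailing return over the same window (63 days ≈ 3 months).
    return trailing_return(stock, trading_days) - trailing_return(etf, trading_days)
```

With daily business-day data, the 1W/1M/3M/6M windows roughly correspond to 5, 21, 63, and 126 trading days.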

@ -54,6 +54,7 @@ class TestYfinanceSectorPerformance:
         assert "Error:" not in day_pct, f"Error in 1-day for {cols[0]}: {day_pct}"

+@pytest.mark.integration
 class TestYfinanceIndustryPerformance:
     """Verify yfinance industry performance uses index for ticker symbols."""

@ -92,6 +93,7 @@ class TestAlphaVantageFailoverRaise:
             get_industry_performance_alpha_vantage("technology")

+@pytest.mark.integration
 class TestRouteToVendorFallback:
     """Verify route_to_vendor falls back from AV to yfinance."""

@ -50,6 +50,7 @@ class TestScannerRouting:
         assert "News" in result

+@pytest.mark.integration
 class TestFallbackRouting:
     def setup_method(self):

tests/test_ttm_analysis.py

"""Tests for TTM (Trailing Twelve Months) analysis module."""
import pytest
import pandas as pd
from io import StringIO
# ---------------------------------------------------------------------------
# Fixtures — synthetic quarterly data
# ---------------------------------------------------------------------------
def _make_income_csv(n_quarters: int = 8) -> str:
"""Create synthetic income statement CSV (yfinance layout: rows=metrics, cols=dates)."""
dates = [f"2023-0{i+1}-01" if i < 9 else f"2023-{i+1}-01" for i in range(n_quarters)]
# Revenue grows 5% each quarter
revenues = [10_000_000_000 * (1.05 ** i) for i in range(n_quarters)]
# Gross profit = 40% of revenue
gross_profits = [r * 0.40 for r in revenues]
# Operating income = 20% of revenue
op_incomes = [r * 0.20 for r in revenues]
# Net income = 15% of revenue
net_incomes = [r * 0.15 for r in revenues]
data = {
"Total Revenue": revenues,
"Gross Profit": gross_profits,
"Operating Income": op_incomes,
"Net Income": net_incomes,
}
df = pd.DataFrame(data, index=pd.to_datetime(dates))
return df.to_csv()
def _make_balance_csv(n_quarters: int = 8) -> str:
dates = [f"2023-0{i+1}-01" if i < 9 else f"2023-{i+1}-01" for i in range(n_quarters)]
data = {
"Total Assets": [50_000_000_000] * n_quarters,
"Total Debt": [10_000_000_000] * n_quarters,
"Stockholders Equity": [20_000_000_000] * n_quarters,
}
df = pd.DataFrame(data, index=pd.to_datetime(dates))
return df.to_csv()
def _make_cashflow_csv(n_quarters: int = 8) -> str:
dates = [f"2023-0{i+1}-01" if i < 9 else f"2023-{i+1}-01" for i in range(n_quarters)]
data = {
"Free Cash Flow": [2_000_000_000] * n_quarters,
"Operating Cash Flow": [3_000_000_000] * n_quarters,
}
df = pd.DataFrame(data, index=pd.to_datetime(dates))
return df.to_csv()
# ---------------------------------------------------------------------------
# Unit tests for compute_ttm_metrics
# ---------------------------------------------------------------------------
class TestComputeTTMMetrics:
def setup_method(self):
from tradingagents.dataflows.ttm_analysis import compute_ttm_metrics
self.compute = compute_ttm_metrics
def test_quarters_available_8(self):
result = self.compute(
_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8)
)
assert result["quarters_available"] == 8
def test_quarters_available_4(self):
"""Gracefully handles <8 quarters."""
result = self.compute(
_make_income_csv(4), _make_balance_csv(4), _make_cashflow_csv(4)
)
assert result["quarters_available"] == 4
def test_ttm_revenue_is_sum_of_last_4_quarters(self):
result = self.compute(
_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8)
)
# Last 4 quarters have indices 4,5,6,7 with revenues:
# 10B * 1.05^4, ..., 10B * 1.05^7
expected = sum(10_000_000_000 * (1.05 ** i) for i in range(4, 8))
actual = result["ttm"]["revenue"]
assert actual is not None
assert abs(actual - expected) / expected < 0.001 # within 0.1%
def test_ttm_net_income_is_sum_of_last_4_quarters(self):
result = self.compute(
_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8)
)
expected = sum(10_000_000_000 * (1.05 ** i) * 0.15 for i in range(4, 8))
actual = result["ttm"]["net_income"]
assert actual is not None
assert abs(actual - expected) / expected < 0.001
def test_ttm_gross_margin_approximately_40pct(self):
result = self.compute(
_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8)
)
gm = result["ttm"]["gross_margin_pct"]
assert gm is not None
assert abs(gm - 40.0) < 0.5
def test_ttm_net_margin_approximately_15pct(self):
result = self.compute(
_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8)
)
nm = result["ttm"]["net_margin_pct"]
assert nm is not None
assert abs(nm - 15.0) < 0.5
def test_ttm_roe_is_computed(self):
result = self.compute(
_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8)
)
roe = result["ttm"]["roe_pct"]
assert roe is not None
assert roe > 0
def test_ttm_debt_to_equity(self):
result = self.compute(
_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8)
)
de = result["ttm"]["debt_to_equity"]
assert de is not None
# Debt=10B, Equity=20B → D/E = 0.5
assert abs(de - 0.5) < 0.01
def test_quarterly_count(self):
result = self.compute(
_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8)
)
assert len(result["quarterly"]) == 8
def test_revenue_trend_fields(self):
result = self.compute(
_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8)
)
trends = result["trends"]
assert "revenue_qoq_pct" in trends
assert "revenue_yoy_pct" in trends
# Revenue growing at 5% QoQ
qoq = trends["revenue_qoq_pct"]
assert qoq is not None
assert abs(qoq - 5.0) < 0.5
def test_margin_trend_expanding(self):
"""Expanding margin should be detected."""
# Create data where net margin expands over time
dates = [f"2023-0{i+1}-01" for i in range(5)]
revenues = [10_000_000_000] * 5
# Net margin goes from 10% to 20% linearly
net_incomes = [10_000_000_000 * (0.10 + i * 0.025) for i in range(5)]
data = {"Total Revenue": revenues, "Net Income": net_incomes}
df = pd.DataFrame(data, index=pd.to_datetime(dates))
income_csv = df.to_csv()
result = self.compute(income_csv, _make_balance_csv(5), _make_cashflow_csv(5))
assert result["trends"].get("net_margin_direction") == "expanding"
def test_graceful_empty_income(self):
result = self.compute("", _make_balance_csv(4), _make_cashflow_csv(4))
assert result["quarters_available"] == 0
assert "income statement parse failed" in result["metadata"]["parse_errors"]
def test_graceful_partial_data(self):
"""Should work with just income data, returning None for balance/cashflow fields."""
result = self.compute(_make_income_csv(4), "", "")
assert result["quarters_available"] == 4
assert result["ttm"]["revenue"] is not None
assert result["ttm"]["total_debt"] is None
# ---------------------------------------------------------------------------
# Unit tests for format_ttm_report
# ---------------------------------------------------------------------------
class TestFormatTTMReport:
def setup_method(self):
from tradingagents.dataflows.ttm_analysis import compute_ttm_metrics, format_ttm_report
self.compute = compute_ttm_metrics
self.format = format_ttm_report
def test_report_contains_ticker(self):
metrics = self.compute(_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8))
report = self.format(metrics, "AAPL")
assert "AAPL" in report
def test_report_contains_ttm_section(self):
metrics = self.compute(_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8))
report = self.format(metrics, "AAPL")
assert "Trailing Twelve Months" in report
def test_report_contains_quarterly_history(self):
metrics = self.compute(_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8))
report = self.format(metrics, "AAPL")
assert "Quarter" in report
def test_report_contains_trend_signals(self):
metrics = self.compute(_make_income_csv(8), _make_balance_csv(8), _make_cashflow_csv(8))
report = self.format(metrics, "AAPL")
assert "Trend Signals" in report
def test_empty_data_report(self):
metrics = self.compute("", "", "")
report = self.format(metrics, "AAPL")
assert "No quarterly data available" in report
# ---------------------------------------------------------------------------
# Integration test — real ticker (requires network)
# ---------------------------------------------------------------------------
@pytest.mark.integration
class TestTTMIntegration:
def test_get_ttm_analysis_tool(self):
"""End-to-end: get_ttm_analysis tool returns a non-empty report."""
from tradingagents.agents.utils.fundamental_data_tools import get_ttm_analysis
result = get_ttm_analysis.invoke({"ticker": "AAPL", "curr_date": "2026-03-17"})
assert isinstance(result, str)
assert len(result) > 100
assert "AAPL" in result.upper()

@ -1,7 +1,16 @@
 from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
 import time
 import json
-from tradingagents.agents.utils.agent_utils import get_fundamentals, get_balance_sheet, get_cashflow, get_income_statement, get_insider_transactions
+from tradingagents.agents.utils.agent_utils import (
+    get_fundamentals,
+    get_balance_sheet,
+    get_cashflow,
+    get_income_statement,
+    get_insider_transactions,
+    get_ttm_analysis,
+    get_peer_comparison,
+    get_sector_relative,
+)
 from tradingagents.dataflows.config import get_config

@ -12,16 +21,26 @@ def create_fundamentals_analyst(llm):
     company_name = state["company_of_interest"]

     tools = [
+        get_ttm_analysis,
         get_fundamentals,
         get_balance_sheet,
         get_cashflow,
         get_income_statement,
+        get_peer_comparison,
+        get_sector_relative,
     ]

     system_message = (
-        "You are a researcher tasked with analyzing fundamental information over the past week about a company. Please write a comprehensive report of the company's fundamental information such as financial documents, company profile, basic company financials, and company financial history to gain a full view of the company's fundamental information to inform traders. Make sure to include as much detail as possible. Do not simply state the trends are mixed, provide detailed and finegrained analysis and insights that may help traders make decisions."
-        + " Make sure to append a Markdown table at the end of the report to organize key points in the report, organized and easy to read."
-        + " Use the available tools: `get_fundamentals` for comprehensive company analysis, `get_balance_sheet`, `get_cashflow`, and `get_income_statement` for specific financial statements.",
+        "You are a researcher tasked with performing deep fundamental analysis of a company over the last 8 quarters (2 years) to support medium-term investment decisions."
+        " Follow this sequence:"
+        " 1. Call `get_ttm_analysis` first — this provides a Trailing Twelve Months (TTM) trend report covering revenue growth (QoQ and YoY), margin trajectories (gross, operating, net), return on equity trend, debt/equity trend, and free cash flow over 8 quarters."
+        " 2. Call `get_fundamentals` for the latest snapshot of key ratios (PE, PEG, price-to-book, beta, 52-week range)."
+        " 3. Call `get_peer_comparison` to see how the company ranks against sector peers over 1-week, 1-month, 3-month, and 6-month periods."
+        " 4. Call `get_sector_relative` to compute the company's alpha vs its sector ETF benchmark."
+        " 5. Optionally call `get_balance_sheet`, `get_cashflow`, or `get_income_statement` for additional detail."
+        " Write a comprehensive report covering: multi-quarter revenue and margin trends, TTM metrics, relative valuation vs peers, sector outperformance or underperformance, and a clear medium-term fundamental thesis."
+        " Do not simply state trends are mixed — provide detailed, fine-grained analysis that identifies inflection points, acceleration or deceleration in growth, and specific risks and opportunities."
+        " Make sure to append a Markdown summary table at the end of the report organising key metrics for easy reference.",
     )

     prompt = ChatPromptTemplate.from_messages(

View File

@@ -1,7 +1,7 @@
 from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
 import time
 import json
-from tradingagents.agents.utils.agent_utils import get_stock_data, get_indicators
+from tradingagents.agents.utils.agent_utils import get_stock_data, get_indicators, get_macro_regime
 from tradingagents.dataflows.config import get_config
@@ -13,12 +13,15 @@ def create_market_analyst(llm):
         company_name = state["company_of_interest"]
         tools = [
+            get_macro_regime,
             get_stock_data,
             get_indicators,
         ]
         system_message = (
-            """You are a trading assistant tasked with analyzing financial markets. Your role is to select the **most relevant indicators** for a given market condition or trading strategy from the following list. The goal is to choose up to **8 indicators** that provide complementary insights without redundancy. Categories and each category's indicators are:
+            """You are a trading assistant tasked with analyzing financial markets. Start by calling `get_macro_regime` to classify the current macro environment as risk-on, risk-off, or transition. Use this macro context to frame all subsequent technical analysis — for example, in risk-off environments weight bearish signals more heavily, and in risk-on environments favour momentum and breakout signals.
+Then, select the **most relevant indicators** for the given market condition from the following list. The goal is to choose up to **8 indicators** that provide complementary insights without redundancy. Categories and each category's indicators are:
 Moving Averages:
 - close_50_sma: 50 SMA: A medium-term trend indicator. Usage: Identify trend direction and serve as dynamic support/resistance. Tips: It lags price; combine with faster indicators for timely signals.
@@ -73,13 +76,18 @@ Volume-Based Indicators:
         result = chain.invoke(state["messages"])
         report = ""
+        macro_regime_report = ""
         if len(result.tool_calls) == 0:
             report = result.content
+            # Extract macro regime section if present
+            if "Macro Regime Classification" in report or "RISK-ON" in report.upper() or "RISK-OFF" in report.upper() or "TRANSITION" in report.upper():
+                macro_regime_report = report
         return {
             "messages": [result],
             "market_report": report,
+            "macro_regime_report": macro_regime_report,
         }
     return market_analyst_node

View File

@@ -9,17 +9,22 @@ def create_research_manager(llm, memory):
         sentiment_report = state["sentiment_report"]
         news_report = state["news_report"]
         fundamentals_report = state["fundamentals_report"]
+        macro_regime_report = state.get("macro_regime_report", "")
         investment_debate_state = state["investment_debate_state"]
-        curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
+        macro_section = f"\n\nMacro Regime:\n{macro_regime_report}" if macro_regime_report else ""
+        curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}{macro_section}"
         past_memories = memory.get_memories(curr_situation, n_matches=2)
         past_memory_str = ""
         for i, rec in enumerate(past_memories, 1):
             past_memory_str += rec["recommendation"] + "\n\n"
+        macro_context = f"\n\nCurrent Macro Regime:\n{macro_regime_report}\nWeight your decision in line with this macro environment — a risk-off regime raises the bar for BUY decisions, while risk-on supports them.\n" if macro_regime_report else ""
         prompt = f"""As the portfolio manager and debate facilitator, your role is to critically evaluate this round of debate and make a definitive decision: align with the bear analyst, the bull analyst, or choose Hold only if it is strongly justified based on the arguments presented.
+{macro_context}
 Summarize the key points from both sides concisely, focusing on the most compelling evidence or reasoning. Your recommendation—Buy, Sell, or Hold—must be clear and actionable. Avoid defaulting to Hold simply because both sides have valid points; commit to a stance grounded in the debate's strongest arguments.

View File

@@ -14,15 +14,20 @@ def create_risk_manager(llm, memory):
         fundamentals_report = state["fundamentals_report"]
         sentiment_report = state["sentiment_report"]
         trader_plan = state["investment_plan"]
-        curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}"
+        macro_regime_report = state.get("macro_regime_report", "")
+        macro_section = f"\n\nMacro Regime:\n{macro_regime_report}" if macro_regime_report else ""
+        curr_situation = f"{market_research_report}\n\n{sentiment_report}\n\n{news_report}\n\n{fundamentals_report}{macro_section}"
         past_memories = memory.get_memories(curr_situation, n_matches=2)
         past_memory_str = ""
         for i, rec in enumerate(past_memories, 1):
             past_memory_str += rec["recommendation"] + "\n\n"
+        macro_context = f"\n\nCurrent Macro Regime:\n{macro_regime_report}\nEnsure your risk assessment reflects the macro environment — in risk-off regimes, apply higher standards for position entry and tighter risk controls.\n" if macro_regime_report else ""
         prompt = f"""As the Risk Management Judge and Debate Facilitator, your goal is to evaluate the debate between three risk analysts—Aggressive, Neutral, and Conservative—and determine the best course of action for the trader. Your decision must result in a clear recommendation: Buy, Sell, or Hold. Choose Hold only if strongly justified by specific arguments, not as a fallback when all sides seem valid. Strive for clarity and decisiveness.
+{macro_context}
 Guidelines for Decision-Making:
 1. **Summarize Key Arguments**: Extract the strongest points from each analyst, focusing on relevance to the context.

View File

@@ -74,3 +74,6 @@ class AgentState(MessagesState):
         RiskDebateState, "Current state of the debate on evaluating risk"
     ]
     final_trade_decision: Annotated[str, "Final decision made by the Risk Analysts"]
+
+    # macro regime
+    macro_regime_report: Annotated[str, "Macro regime classification (risk-on/risk-off/transition) from market analyst"]

View File

@@ -11,7 +11,11 @@ from tradingagents.agents.utils.fundamental_data_tools import (
     get_fundamentals,
     get_balance_sheet,
     get_cashflow,
-    get_income_statement
+    get_income_statement,
+    get_ttm_analysis,
+    get_peer_comparison,
+    get_sector_relative,
+    get_macro_regime,
 )
 from tradingagents.agents.utils.news_data_tools import (
     get_news,

View File

@ -1,6 +1,9 @@
from langchain_core.tools import tool from langchain_core.tools import tool
from typing import Annotated from typing import Annotated
from tradingagents.dataflows.interface import route_to_vendor from tradingagents.dataflows.interface import route_to_vendor
from tradingagents.dataflows.ttm_analysis import compute_ttm_metrics, format_ttm_report
from tradingagents.dataflows.peer_comparison import get_peer_comparison_report, get_sector_relative_report
from tradingagents.dataflows.macro_regime import classify_macro_regime, format_macro_report
@tool @tool
@ -74,4 +77,79 @@ def get_income_statement(
Returns: Returns:
str: A formatted report containing income statement data str: A formatted report containing income statement data
""" """
return route_to_vendor("get_income_statement", ticker, freq, curr_date) return route_to_vendor("get_income_statement", ticker, freq, curr_date)
@tool
def get_ttm_analysis(
ticker: Annotated[str, "ticker symbol"],
curr_date: Annotated[str, "current date you are trading at, yyyy-mm-dd"] = None,
) -> str:
"""
Retrieve an 8-quarter Trailing Twelve Months (TTM) trend analysis for a company.
Computes revenue growth (QoQ and YoY), margin trajectories (gross, operating, net),
return on equity trend, debt/equity trend, and free cash flow trend across up to
8 quarterly periods.
Args:
ticker (str): Ticker symbol of the company
curr_date (str): Current date you are trading at, yyyy-mm-dd
Returns:
str: Formatted Markdown report with TTM summary and quarterly trend table
"""
income_csv = route_to_vendor("get_income_statement", ticker, "quarterly", curr_date)
balance_csv = route_to_vendor("get_balance_sheet", ticker, "quarterly", curr_date)
cashflow_csv = route_to_vendor("get_cashflow", ticker, "quarterly", curr_date)
metrics = compute_ttm_metrics(income_csv, balance_csv, cashflow_csv, n_quarters=8)
return format_ttm_report(metrics, ticker)
@tool
def get_peer_comparison(
ticker: Annotated[str, "ticker symbol"],
curr_date: Annotated[str, "current date you are trading at, yyyy-mm-dd"] = None,
) -> str:
"""
Compare a stock's performance vs its sector peers over 1-week, 1-month, 3-month,
6-month and YTD periods. Returns a ranked table and alpha vs sector ETF.
Args:
ticker (str): Ticker symbol of the company
curr_date (str): Current date you are trading at, yyyy-mm-dd
Returns:
str: Formatted Markdown report with peer ranking table
"""
return get_peer_comparison_report(ticker, curr_date)
@tool
def get_sector_relative(
ticker: Annotated[str, "ticker symbol"],
curr_date: Annotated[str, "current date you are trading at, yyyy-mm-dd"] = None,
) -> str:
"""
Compare a stock's return vs its sector ETF benchmark over multiple time horizons.
Shows 1-week, 1-month, 3-month, 6-month, and YTD alpha.
Args:
ticker (str): Ticker symbol of the company
curr_date (str): Current date you are trading at, yyyy-mm-dd
Returns:
str: Formatted Markdown report with outperformance/underperformance metrics
"""
return get_sector_relative_report(ticker, curr_date)
@tool
def get_macro_regime(
curr_date: Annotated[str, "current date you are trading at, yyyy-mm-dd"] = None,
) -> str:
"""
Classify the current macro market regime as risk-on, risk-off, or transition.
Uses 6 signals: VIX level, VIX trend, credit spread proxy (HYG/LQD),
yield curve proxy (TLT/SHY), S&P 500 market breadth, and sector rotation
(defensive vs cyclical). Returns a composite score and actionable interpretation.
Args:
curr_date (str): Current date you are trading at, yyyy-mm-dd (informational)
Returns:
str: Formatted Markdown report with regime classification and signal breakdown
"""
regime_data = classify_macro_regime(curr_date)
return format_macro_report(regime_data)
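The TTM arithmetic that `get_ttm_analysis` relies on lives in `tradingagents/dataflows/ttm_analysis.py`, which is not shown in this diff. As a rough illustration of the idea — a minimal sketch, not the module's actual code — TTM for a flow metric (revenue, net income, FCF) is the sum of the last four quarters, and YoY TTM growth compares that window against the four quarters before it:

```python
from typing import Optional


def ttm_sum(quarterly: list[float]) -> Optional[float]:
    """TTM value of a flow metric: sum of the most recent 4 quarters."""
    if len(quarterly) < 4:
        return None
    return sum(quarterly[-4:])


def ttm_yoy_growth(quarterly: list[float]) -> Optional[float]:
    """YoY TTM growth: current 4-quarter window vs the prior 4-quarter window, in %."""
    if len(quarterly) < 8:
        return None
    current = sum(quarterly[-4:])
    prior = sum(quarterly[-8:-4])
    if prior == 0:
        return None
    return (current - prior) / prior * 100


# 8 quarters of revenue, oldest first
revenue = [90, 95, 100, 105, 110, 115, 120, 125]
print(ttm_sum(revenue))         # prints 470
print(ttm_yoy_growth(revenue))  # ≈ 20.51 (% YoY)
```

This is why the tool fetches 8 quarters of statements: the eighth quarter back is the oldest input needed to form one full prior-year TTM window for the YoY comparison.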

View File

@@ -61,7 +61,8 @@ TOOLS_CATEGORIES = {
         "get_fundamentals",
         "get_balance_sheet",
         "get_cashflow",
-        "get_income_statement"
+        "get_income_statement",
+        "get_ttm_analysis",
     ]
     },
     "news_data": {

View File

@@ -0,0 +1,379 @@
"""Macro regime classifier: risk-on / transition / risk-off."""
from __future__ import annotations
from datetime import datetime
from typing import Optional
import pandas as pd
import yfinance as yf
# ---------------------------------------------------------------------------
# Signal thresholds
# ---------------------------------------------------------------------------
VIX_RISK_ON_THRESHOLD = 16.0 # VIX < 16 → risk-on
VIX_RISK_OFF_THRESHOLD = 25.0 # VIX > 25 → risk-off
REGIME_RISK_ON_THRESHOLD = 3 # score ≥ 3 → risk-on
REGIME_RISK_OFF_THRESHOLD = -3 # score ≤ -3 → risk-off
# Sector ETFs used for rotation signal
_DEFENSIVE_ETFS = ["XLU", "XLP", "XLV"] # Utilities, Staples, Health Care
_CYCLICAL_ETFS = ["XLY", "XLK", "XLI"] # Discretionary, Technology, Industrials
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _download(symbols: list[str], period: str = "3mo") -> Optional[pd.DataFrame]:
"""Download closing prices, returning None on failure."""
try:
hist = yf.download(symbols, period=period, auto_adjust=True, progress=False, threads=True)
if hist.empty:
return None
if len(symbols) == 1:
closes = hist["Close"]
if isinstance(closes, pd.DataFrame):
closes = closes.iloc[:, 0]
return closes.to_frame(name=symbols[0]).dropna()
return hist["Close"].dropna(how="all")
except Exception:
return None
def _latest(series: pd.Series) -> Optional[float]:
if series is None or series.empty:
return None
v = series.dropna()
return float(v.iloc[-1]) if len(v) > 0 else None
def _sma(series: pd.Series, window: int) -> Optional[float]:
if series is None or len(series.dropna()) < window:
return None
return float(series.dropna().rolling(window).mean().iloc[-1])
def _pct_change_n(series: pd.Series, n: int) -> Optional[float]:
s = series.dropna()
if len(s) < n + 1:
return None
base = float(s.iloc[-(n + 1)])
current = float(s.iloc[-1])
if base == 0:
return None
return (current - base) / base * 100
def _fmt_pct(val: Optional[float]) -> str:
if val is None:
return "N/A"
sign = "+" if val >= 0 else ""
return f"{sign}{val:.2f}%"
# ---------------------------------------------------------------------------
# Individual signal evaluators (each returns +1, 0, or -1)
# ---------------------------------------------------------------------------
def _signal_vix_level(vix_price: Optional[float]) -> tuple[int, str]:
"""VIX level: <16 risk-on (+1), >25 risk-off (-1), else transition (0)."""
if vix_price is None:
return 0, "VIX level: unavailable (neutral)"
if vix_price < VIX_RISK_ON_THRESHOLD:
return 1, f"VIX level: {vix_price:.1f} < {VIX_RISK_ON_THRESHOLD} → risk-on"
if vix_price > VIX_RISK_OFF_THRESHOLD:
return -1, f"VIX level: {vix_price:.1f} > {VIX_RISK_OFF_THRESHOLD} → risk-off"
return 0, f"VIX level: {vix_price:.1f} (neutral zone {VIX_RISK_ON_THRESHOLD}–{VIX_RISK_OFF_THRESHOLD})"
def _signal_vix_trend(vix_series: Optional[pd.Series]) -> tuple[int, str]:
"""VIX 5-day SMA vs 20-day SMA: rising VIX = risk-off."""
if vix_series is None:
return 0, "VIX trend: unavailable (neutral)"
sma5 = _sma(vix_series, 5)
sma20 = _sma(vix_series, 20)
if sma5 is None or sma20 is None:
return 0, "VIX trend: insufficient history (neutral)"
if sma5 < sma20:
return 1, f"VIX trend: declining (SMA5={sma5:.1f} < SMA20={sma20:.1f}) → risk-on"
if sma5 > sma20:
return -1, f"VIX trend: rising (SMA5={sma5:.1f} > SMA20={sma20:.1f}) → risk-off"
return 0, f"VIX trend: flat (SMA5={sma5:.1f} ≈ SMA20={sma20:.1f}) → neutral"
def _signal_credit_spread(hyg_series: Optional[pd.Series], lqd_series: Optional[pd.Series]) -> tuple[int, str]:
"""HYG/LQD ratio: declining ratio = credit spreads widening = risk-off."""
if hyg_series is None or lqd_series is None:
return 0, "Credit spread proxy (HYG/LQD): unavailable (neutral)"
# Align on common dates
hyg = hyg_series.dropna()
lqd = lqd_series.dropna()
common = hyg.index.intersection(lqd.index)
if len(common) < 22:
return 0, "Credit spread proxy: insufficient history (neutral)"
hyg_c = hyg.loc[common]
lqd_c = lqd.loc[common]
ratio = hyg_c / lqd_c
ratio_1m = _pct_change_n(ratio, 21)
if ratio_1m is None:
return 0, "Credit spread proxy: cannot compute 1-month change (neutral)"
if ratio_1m > 0.5:
return 1, f"Credit spread (HYG/LQD) 1M: {_fmt_pct(ratio_1m)} → improving (risk-on)"
if ratio_1m < -0.5:
return -1, f"Credit spread (HYG/LQD) 1M: {_fmt_pct(ratio_1m)} → deteriorating (risk-off)"
return 0, f"Credit spread (HYG/LQD) 1M: {_fmt_pct(ratio_1m)} → stable (neutral)"
def _signal_yield_curve(tlt_series: Optional[pd.Series], shy_series: Optional[pd.Series]) -> tuple[int, str]:
"""TLT (20yr) vs SHY (1-3yr): TLT outperforming = flight to safety = risk-off."""
if tlt_series is None or shy_series is None:
return 0, "Yield curve proxy (TLT vs SHY): unavailable (neutral)"
tlt = tlt_series.dropna()
shy = shy_series.dropna()
tlt_1m = _pct_change_n(tlt, 21)
shy_1m = _pct_change_n(shy, 21)
if tlt_1m is None or shy_1m is None:
return 0, "Yield curve proxy: insufficient history (neutral)"
spread = tlt_1m - shy_1m
if spread > 1.0:
return -1, f"Yield curve: TLT {_fmt_pct(tlt_1m)} vs SHY {_fmt_pct(shy_1m)} → flight to safety (risk-off)"
if spread < -1.0:
return 1, f"Yield curve: TLT {_fmt_pct(tlt_1m)} vs SHY {_fmt_pct(shy_1m)} → risk appetite (risk-on)"
return 0, f"Yield curve: TLT {_fmt_pct(tlt_1m)} vs SHY {_fmt_pct(shy_1m)} → neutral"
def _signal_market_breadth(spx_series: Optional[pd.Series]) -> tuple[int, str]:
"""S&P 500 above/below 200-day SMA."""
if spx_series is None:
return 0, "Market breadth (SPX vs 200 SMA): unavailable (neutral)"
spx = spx_series.dropna()
sma200 = _sma(spx, 200)
current = _latest(spx)
if sma200 is None or current is None:
return 0, "Market breadth: insufficient history (neutral)"
pct_from_sma = (current - sma200) / sma200 * 100
if current > sma200:
return 1, f"Market breadth: SPX {pct_from_sma:+.1f}% above 200-SMA → risk-on"
return -1, f"Market breadth: SPX {pct_from_sma:+.1f}% below 200-SMA → risk-off"
def _signal_sector_rotation(
defensive_closes: dict[str, pd.Series],
cyclical_closes: dict[str, pd.Series],
) -> tuple[int, str]:
"""Defensive vs cyclical sector rotation over 1 month."""
def avg_return(closes_dict: dict[str, pd.Series], days: int) -> Optional[float]:
returns = []
for sym, s in closes_dict.items():
pct = _pct_change_n(s.dropna(), days)
if pct is not None:
returns.append(pct)
return sum(returns) / len(returns) if returns else None
def_ret = avg_return(defensive_closes, 21)
cyc_ret = avg_return(cyclical_closes, 21)
if def_ret is None or cyc_ret is None:
return 0, "Sector rotation: unavailable (neutral)"
spread = def_ret - cyc_ret
if spread > 1.0:
return -1, (
f"Sector rotation: defensives {_fmt_pct(def_ret)} vs cyclicals {_fmt_pct(cyc_ret)} "
f"(defensives leading → risk-off)"
)
if spread < -1.0:
return 1, (
f"Sector rotation: cyclicals {_fmt_pct(cyc_ret)} vs defensives {_fmt_pct(def_ret)} "
f"(cyclicals leading → risk-on)"
)
return 0, (
f"Sector rotation: defensives {_fmt_pct(def_ret)} vs cyclicals {_fmt_pct(cyc_ret)} → neutral"
)
# ---------------------------------------------------------------------------
# Main classifier
# ---------------------------------------------------------------------------
def classify_macro_regime(curr_date: str = None) -> dict:
"""
Classify current macro regime using 6 market signals.
Args:
curr_date: Optional reference date (informational only; always uses latest data)
Returns:
dict with keys:
regime (str): "risk-on" | "transition" | "risk-off"
score (int): Sum of signal scores (-6 to +6)
confidence (str): "high" | "medium" | "low"
signals (list[dict]): Per-signal breakdowns
summary (str): Human-readable summary
"""
signals = []
total_score = 0
# --- Download all required data ---
vix_data = _download(["^VIX"], period="3mo")
market_data = _download(["^GSPC"], period="14mo") # 14mo for 200-SMA
hyg_lqd_data = _download(["HYG", "LQD"], period="3mo")
tlt_shy_data = _download(["TLT", "SHY"], period="3mo")
sector_data = _download(_DEFENSIVE_ETFS + _CYCLICAL_ETFS, period="3mo")
# Extract series
vix_series = vix_data["^VIX"] if vix_data is not None and "^VIX" in vix_data.columns else None
spx_series = market_data["^GSPC"] if market_data is not None and "^GSPC" in market_data.columns else None
hyg_series = (hyg_lqd_data["HYG"] if hyg_lqd_data is not None and "HYG" in hyg_lqd_data.columns else None)
lqd_series = (hyg_lqd_data["LQD"] if hyg_lqd_data is not None and "LQD" in hyg_lqd_data.columns else None)
tlt_series = (tlt_shy_data["TLT"] if tlt_shy_data is not None and "TLT" in tlt_shy_data.columns else None)
shy_series = (tlt_shy_data["SHY"] if tlt_shy_data is not None and "SHY" in tlt_shy_data.columns else None)
defensive_closes: dict[str, pd.Series] = {}
cyclical_closes: dict[str, pd.Series] = {}
if sector_data is not None:
for sym in _DEFENSIVE_ETFS:
if sym in sector_data.columns:
defensive_closes[sym] = sector_data[sym]
for sym in _CYCLICAL_ETFS:
if sym in sector_data.columns:
cyclical_closes[sym] = sector_data[sym]
vix_price = _latest(vix_series)
# --- Evaluate each signal ---
evaluators = [
_signal_vix_level(vix_price),
_signal_vix_trend(vix_series),
_signal_credit_spread(hyg_series, lqd_series),
_signal_yield_curve(tlt_series, shy_series),
_signal_market_breadth(spx_series),
_signal_sector_rotation(defensive_closes, cyclical_closes),
]
signal_names = [
"vix_level", "vix_trend", "credit_spread",
"yield_curve", "market_breadth", "sector_rotation",
]
for name, (score, description) in zip(signal_names, evaluators):
signals.append({"name": name, "score": score, "description": description})
total_score += score
# --- Classify regime ---
if total_score >= REGIME_RISK_ON_THRESHOLD:
regime = "risk-on"
elif total_score <= REGIME_RISK_OFF_THRESHOLD:
regime = "risk-off"
else:
regime = "transition"
# Confidence based on how decisive the score is
abs_score = abs(total_score)
if abs_score >= 4:
confidence = "high"
elif abs_score >= 2:
confidence = "medium"
else:
confidence = "low"
risk_on_count = sum(1 for s in signals if s["score"] > 0)
risk_off_count = sum(1 for s in signals if s["score"] < 0)
neutral_count = sum(1 for s in signals if s["score"] == 0)
base_summary = (
f"Macro regime: **{regime.upper()}** "
f"(score {total_score:+d}/6, confidence: {confidence}). "
f"{risk_on_count} risk-on signals, {risk_off_count} risk-off signals, {neutral_count} neutral."
)
summary = f"{base_summary} VIX: {vix_price:.1f}" if vix_price is not None else base_summary
return {
"regime": regime,
"score": total_score,
"confidence": confidence,
"vix": vix_price,
"signals": signals,
"summary": summary,
}
def format_macro_report(regime_data: dict) -> str:
"""Format classify_macro_regime output as a Markdown report."""
regime = regime_data.get("regime", "unknown")
score = regime_data.get("score", 0)
confidence = regime_data.get("confidence", "unknown")
vix = regime_data.get("vix")
signals = regime_data.get("signals", [])
summary = regime_data.get("summary", "")
# Emoji-free regime indicator
regime_display = regime.upper()
lines = [
"# Macro Regime Classification",
f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
"",
f"## Regime: {regime_display}",
"",
f"| Attribute | Value |",
f"|-----------|-------|",
f"| Regime | **{regime_display}** |",
f"| Composite Score | {score:+d} / 6 |",
f"| Confidence | {confidence.title()} |",
f"| VIX | {f'{vix:.2f}' if vix is not None else 'N/A'} |",
"",
"## Signal Breakdown",
"",
"| Signal | Score | Assessment |",
"|--------|-------|------------|",
]
score_labels = {1: "+1 (risk-on)", 0: " 0 (neutral)", -1: "-1 (risk-off)"}
for sig in signals:
score_label = score_labels.get(sig["score"], str(sig["score"]))
lines.append(f"| {sig['name'].replace('_', ' ').title()} | {score_label} | {sig['description']} |")
lines += [
"",
"## Interpretation",
"",
summary,
"",
"### What This Means for Trading",
"",
]
if regime == "risk-on":
lines += [
"- **Prefer:** Growth, cyclicals, small-caps, high-beta equities",
"- **Reduce:** Defensive sectors, cash, long-duration bonds",
"- **Technicals:** Favour breakout entries; momentum strategies work well",
]
elif regime == "risk-off":
lines += [
"- **Prefer:** Defensive sectors (utilities, staples, healthcare), quality, low-beta",
"- **Reduce:** Cyclicals, high-beta names, speculative positions",
"- **Technicals:** Tighten stop-losses; favour mean-reversion over momentum",
]
else: # transition
lines += [
"- **Mixed signals:** No strong directional bias — size positions conservatively",
"- **Watch:** Upcoming catalysts (FOMC, earnings, geopolitical events) may resolve direction",
"- **Technicals:** Use wider stops; avoid overconfident entries",
]
return "\n".join(lines)
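To make the scoring contract above concrete, here is a small standalone sketch (the threshold and confidence values are duplicated from `macro_regime.py` purely for illustration) showing how six per-signal scores roll up into a regime label and a confidence level:

```python
def classify(scores: list[int]) -> tuple[str, str]:
    """Map six per-signal scores (+1 / 0 / -1 each) to (regime, confidence)."""
    total = sum(scores)
    # Regime thresholds: >= +3 risk-on, <= -3 risk-off, otherwise transition
    if total >= 3:
        regime = "risk-on"
    elif total <= -3:
        regime = "risk-off"
    else:
        regime = "transition"
    # Confidence grows with how decisive the composite score is
    if abs(total) >= 4:
        confidence = "high"
    elif abs(total) >= 2:
        confidence = "medium"
    else:
        confidence = "low"
    return regime, confidence


print(classify([1, 1, 1, 1, 0, -1]))    # ('risk-on', 'medium')
print(classify([-1, -1, -1, -1, -1, 0]))  # ('risk-off', 'high')
print(classify([1, -1, 0, 0, 1, -1]))   # ('transition', 'low')
```

Note the asymmetry this encodes: a score of +3 is enough to call risk-on, but only with medium confidence — four or more agreeing signals are needed before the classifier reports high confidence.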

View File

@@ -0,0 +1,345 @@
"""Sector and peer relative performance comparison using yfinance."""
from __future__ import annotations
from datetime import datetime
from typing import Optional
import yfinance as yf
import pandas as pd
# ---------------------------------------------------------------------------
# Reuse sector/ETF mappings from alpha_vantage_scanner to stay DRY
# ---------------------------------------------------------------------------
# Sector key (lowercase-dashes) → SPDR ETF
_SECTOR_ETFS: dict[str, str] = {
"technology": "XLK",
"healthcare": "XLV",
"financials": "XLF",
"energy": "XLE",
"consumer-discretionary": "XLY",
"consumer-staples": "XLP",
"industrials": "XLI",
"materials": "XLB",
"real-estate": "XLRE",
"utilities": "XLU",
"communication-services": "XLC",
}
# Representative large-cap peers per sector (same as alpha_vantage_scanner)
_SECTOR_TICKERS: dict[str, list[str]] = {
"technology": ["AAPL", "MSFT", "NVDA", "GOOGL", "META", "AVGO", "ADBE", "CRM", "AMD", "INTC"],
"healthcare": ["UNH", "JNJ", "LLY", "PFE", "ABT", "MRK", "TMO", "ABBV", "DHR", "AMGN"],
"financials": ["JPM", "BAC", "WFC", "GS", "MS", "BLK", "SCHW", "AXP", "C", "USB"],
"energy": ["XOM", "CVX", "COP", "SLB", "EOG", "MPC", "PSX", "VLO", "OXY", "HES"],
"consumer-discretionary": ["AMZN", "TSLA", "HD", "MCD", "NKE", "SBUX", "LOW", "TJX", "BKNG", "CMG"],
"consumer-staples": ["PG", "KO", "PEP", "COST", "WMT", "PM", "MDLZ", "CL", "KHC", "GIS"],
"industrials": ["CAT", "HON", "UNP", "UPS", "BA", "RTX", "DE", "LMT", "GE", "MMM"],
"materials": ["LIN", "APD", "SHW", "ECL", "FCX", "NEM", "NUE", "DOW", "DD", "PPG"],
"real-estate": ["PLD", "AMT", "CCI", "EQIX", "SPG", "PSA", "O", "WELL", "DLR", "AVB"],
"utilities": ["NEE", "DUK", "SO", "D", "AEP", "SRE", "EXC", "XEL", "WEC", "ED"],
"communication-services": ["META", "GOOGL", "NFLX", "DIS", "CMCSA", "T", "VZ", "CHTR", "TMUS", "EA"],
}
# Yahoo Finance sector string → normalised key
_SECTOR_NORMALISE: dict[str, str] = {
"Technology": "technology",
"Healthcare": "healthcare",
"Health Care": "healthcare",
"Financial Services": "financials",
"Financials": "financials",
"Energy": "energy",
"Consumer Cyclical": "consumer-discretionary",
"Consumer Discretionary": "consumer-discretionary",
"Consumer Defensive": "consumer-staples",
"Consumer Staples": "consumer-staples",
"Industrials": "industrials",
"Basic Materials": "materials",
"Materials": "materials",
"Real Estate": "real-estate",
"Utilities": "utilities",
"Communication Services": "communication-services",
}
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _safe_pct(closes: pd.Series, days_back: int) -> Optional[float]:
if len(closes) < days_back + 1:
return None
base = closes.iloc[-(days_back + 1)]
current = closes.iloc[-1]
if base == 0:
return None
return (current - base) / base * 100
def _ytd_pct(closes: pd.Series) -> Optional[float]:
if closes.empty:
return None
current_year = closes.index[-1].year
year_closes = closes[closes.index.year == current_year]
if len(year_closes) < 2:
return None
base = year_closes.iloc[0]
if base == 0:
return None
return (closes.iloc[-1] - base) / base * 100
def _fmt_pct(val: Optional[float]) -> str:
if val is None:
return "N/A"
sign = "+" if val >= 0 else ""
return f"{sign}{val:.2f}%"
# ---------------------------------------------------------------------------
# Public functions
# ---------------------------------------------------------------------------
def get_sector_peers(ticker: str) -> tuple[str, str, list[str]]:
"""
Identify a ticker's sector and return peer tickers.
Returns:
(sector_display_name, sector_key, peer_tickers)
sector_key is lowercase-dashed (e.g. "technology")
If sector cannot be identified, returns ("Unknown", "", [])
"""
try:
info = yf.Ticker(ticker.upper()).info
raw_sector = info.get("sector", "")
sector_key = _SECTOR_NORMALISE.get(raw_sector, raw_sector.lower().replace(" ", "-"))
peers = _SECTOR_TICKERS.get(sector_key, [])
# Exclude the ticker itself from peers
peers = [p for p in peers if p.upper() != ticker.upper()]
return raw_sector or "Unknown", sector_key, peers
except Exception:
return "Unknown", "", []
def compute_relative_performance(
ticker: str,
sector_key: str,
peers: list[str],
) -> str:
"""
Compare ticker's returns vs peers and sector ETF over multiple horizons.
Args:
ticker: The stock being analysed
sector_key: Normalised sector key (lowercase-dashes)
peers: List of peer ticker symbols
Returns:
Formatted Markdown report with ranked performance table.
"""
etf = _SECTOR_ETFS.get(sector_key)
# Build list of all symbols to download (max 8 peers + ticker + ETF)
all_symbols = [ticker.upper()] + peers[:8]
if etf and etf not in all_symbols:
all_symbols.append(etf)
try:
hist = yf.download(
all_symbols,
period="6mo",
auto_adjust=True,
progress=False,
threads=True,
)
except Exception as e:
return f"Error downloading price data for peer comparison: {e}"
if hist.empty:
return "No price data available for peer comparison."
    # Extract closing prices. yfinance returns MultiIndex columns for
    # multi-symbol downloads and may return a flat frame or a Series for one.
    closes_raw = hist["Close"] if "Close" in hist.columns else pd.DataFrame()
    if isinstance(closes_raw, pd.Series):
        closes_raw = closes_raw.to_frame(name=all_symbols[0])
rows = []
for sym in all_symbols:
try:
if sym in closes_raw.columns:
s = closes_raw[sym].dropna()
else:
continue
if s.empty:
continue
w1 = _safe_pct(s, 5)
m1 = _safe_pct(s, 21)
m3 = _safe_pct(s, 63)
m6 = _safe_pct(s, 126)
ytd = _ytd_pct(s)
rows.append({
"symbol": sym,
"1W": w1, "1M": m1, "3M": m3, "6M": m6, "YTD": ytd,
"is_target": sym.upper() == ticker.upper(),
"is_etf": sym == etf,
})
except Exception:
continue
if not rows:
return "Unable to compute returns — no price data retrieved."
# Sort by 3-month return (descending) for ranking
rows.sort(key=lambda r: r["3M"] if r["3M"] is not None else float("-inf"), reverse=True)
    # Determine ticker rank among stocks only (exclude the ETF benchmark,
    # so the rank and the denominator count the same set of rows)
    stock_rows = [r for r in rows if not r["is_etf"]]
    target_rank = next(
        (i + 1 for i, r in enumerate(stock_rows) if r["is_target"]), None
    )
    n_peers = len(stock_rows)
header = [
f"# Relative Performance Analysis: {ticker.upper()}",
f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
f"# Sector: {sector_key.replace('-', ' ').title()} | Peer rank (3M): {target_rank}/{n_peers}",
"",
"| Symbol | Role | 1-Week | 1-Month | 3-Month | 6-Month | YTD |",
"|--------|------|--------|---------|---------|---------|-----|",
]
table_rows = []
for r in rows:
role = "► TARGET" if r["is_target"] else ("ETF Benchmark" if r["is_etf"] else "Peer")
table_rows.append(
f"| {r['symbol']} | {role} "
f"| {_fmt_pct(r['1W'])} "
f"| {_fmt_pct(r['1M'])} "
f"| {_fmt_pct(r['3M'])} "
f"| {_fmt_pct(r['6M'])} "
f"| {_fmt_pct(r['YTD'])} |"
)
# Alpha vs ETF
target_row = next((r for r in rows if r["is_target"]), None)
etf_row = next((r for r in rows if r["is_etf"]), None)
alpha_lines = []
if target_row and etf_row:
alpha_lines.append("")
alpha_lines.append("## Alpha vs Sector ETF")
alpha_lines.append("")
for period, tk, bm in [
("1-Month", target_row["1M"], etf_row["1M"]),
("3-Month", target_row["3M"], etf_row["3M"]),
("6-Month", target_row["6M"], etf_row["6M"]),
]:
if tk is not None and bm is not None:
alpha = tk - bm
alpha_lines.append(f"- **{period}**: {_fmt_pct(tk)} vs ETF {_fmt_pct(bm)} → Alpha {_fmt_pct(alpha)}")
else:
alpha_lines.append(f"- **{period}**: N/A")
return "\n".join(header + table_rows + alpha_lines)
def get_peer_comparison_report(ticker: str, curr_date: Optional[str] = None) -> str:
"""
Full peer comparison report for a ticker.
Args:
ticker: Stock ticker symbol
curr_date: Current trading date (informational only)
Returns:
Formatted Markdown report
"""
sector_display, sector_key, peers = get_sector_peers(ticker)
if not peers:
return (
f"# Peer Comparison: {ticker.upper()}\n\n"
f"Could not identify sector peers for {ticker}. "
f"Sector detected: '{sector_display}'"
)
return compute_relative_performance(ticker, sector_key, peers)
def get_sector_relative_report(ticker: str, curr_date: Optional[str] = None) -> str:
"""
Focused sector-vs-ticker comparison (ETF benchmark focus).
Args:
ticker: Stock ticker symbol
curr_date: Current trading date (informational only)
Returns:
Formatted Markdown report comparing ticker vs sector ETF only.
"""
sector_display, sector_key, _ = get_sector_peers(ticker)
etf = _SECTOR_ETFS.get(sector_key)
if not etf:
return (
f"# Sector Relative Performance: {ticker.upper()}\n\n"
f"No ETF benchmark found for sector '{sector_display}'."
)
try:
symbols = [ticker.upper(), etf]
hist = yf.download(
symbols,
period="6mo",
auto_adjust=True,
progress=False,
threads=True,
)
except Exception as e:
return f"Error downloading data for {ticker} vs {etf}: {e}"
if hist.empty:
return f"No price data available for {ticker} or {etf}."
closes = hist.get("Close", pd.DataFrame())
lines = [
f"# Sector Relative Performance: {ticker.upper()} vs {etf} ({sector_display})",
f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
"",
"| Period | Stock Return | ETF Return | Alpha |",
"|--------|-------------|------------|-------|",
]
for period_label, days_back in [("1-Week", 5), ("1-Month", 21), ("3-Month", 63), ("6-Month", 126)]:
tk_ret = etf_ret = None
for sym, col_type in [(ticker.upper(), "stock"), (etf, "etf")]:
if sym in closes.columns:
s = closes[sym].dropna()
pct = _safe_pct(s, days_back)
if col_type == "stock":
tk_ret = pct
else:
etf_ret = pct
alpha = (tk_ret - etf_ret) if tk_ret is not None and etf_ret is not None else None
lines.append(
f"| {period_label} | {_fmt_pct(tk_ret)} | {_fmt_pct(etf_ret)} | {_fmt_pct(alpha)} |"
)
# YTD
tk_ytd = etf_ytd = None
for sym, col_type in [(ticker.upper(), "stock"), (etf, "etf")]:
if sym in closes.columns:
s = closes[sym].dropna()
pct = _ytd_pct(s)
if col_type == "stock":
tk_ytd = pct
else:
etf_ytd = pct
ytd_alpha = (tk_ytd - etf_ytd) if tk_ytd is not None and etf_ytd is not None else None
lines.append(f"| YTD | {_fmt_pct(tk_ytd)} | {_fmt_pct(etf_ytd)} | {_fmt_pct(ytd_alpha)} |")
return "\n".join(lines)


@@ -0,0 +1,418 @@
"""Trailing Twelve Months (TTM) trend analysis across 8 quarters."""
from __future__ import annotations
from datetime import datetime
from io import StringIO
from typing import Optional
import pandas as pd
# ---------------------------------------------------------------------------
# Column name normalisers for inconsistent vendor schemas
# ---------------------------------------------------------------------------
_INCOME_REVENUE_COLS = [
"Total Revenue", "TotalRevenue", "totalRevenue",
"Revenue", "revenue",
]
_INCOME_GROSS_PROFIT_COLS = [
"Gross Profit", "GrossProfit", "grossProfit",
]
_INCOME_OPERATING_INCOME_COLS = [
"Operating Income", "OperatingIncome", "operatingIncome",
"Total Operating Income As Reported",
]
_INCOME_EBITDA_COLS = [
"EBITDA", "Ebitda", "ebitda",
"Normalized EBITDA",
]
_INCOME_NET_INCOME_COLS = [
"Net Income", "NetIncome", "netIncome",
"Net Income From Continuing Operation Net Minority Interest",
]
_BALANCE_TOTAL_ASSETS_COLS = [
"Total Assets", "TotalAssets", "totalAssets",
]
_BALANCE_TOTAL_DEBT_COLS = [
"Total Debt", "TotalDebt", "totalDebt",
"Long Term Debt", "LongTermDebt",
]
_BALANCE_EQUITY_COLS = [
"Stockholders Equity", "StockholdersEquity",
"Total Stockholder Equity", "TotalStockholderEquity",
"Common Stock Equity", "CommonStockEquity",
]
_CASHFLOW_FCF_COLS = [
"Free Cash Flow", "FreeCashFlow", "freeCashFlow",
]
_CASHFLOW_OPERATING_COLS = [
"Operating Cash Flow", "OperatingCashflow", "operatingCashflow",
"Total Cash From Operating Activities",
]
_CASHFLOW_CAPEX_COLS = [
"Capital Expenditure", "CapitalExpenditure", "capitalExpenditure",
"Capital Expenditures",
]
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _find_col(df: pd.DataFrame, candidates: list[str]) -> Optional[str]:
"""Return the first matching column name, or None."""
for col in candidates:
if col in df.columns:
return col
return None
def _parse_financial_csv(csv_text: str) -> Optional[pd.DataFrame]:
"""
Parse a CSV string returned by vendor data functions.
Alpha Vantage and yfinance both return CSV strings where:
- Rows are metrics, columns are dates (transposed layout for AV)
- OR columns are metrics, rows are dates (yfinance layout)
We normalise to: index=date (ascending), columns=metrics.
"""
if not csv_text or not csv_text.strip():
return None
try:
df = pd.read_csv(StringIO(csv_text), index_col=0)
except Exception:
return None
if df.empty:
return None
# Detect orientation: if index looks like dates, columns are metrics.
# If columns look like dates, transpose.
def _looks_like_dates(values) -> bool:
count = 0
for v in list(values)[:5]:
try:
pd.to_datetime(str(v))
count += 1
except Exception:
pass
return count >= min(2, len(list(values)[:5]))
if _looks_like_dates(df.columns):
# AV-style: rows=metrics, cols=dates — transpose
df = df.T
# Parse index as dates
try:
df.index = pd.to_datetime(df.index)
except Exception:
return None
df.sort_index(inplace=True) # ascending (oldest first)
# Convert all columns to numeric
for col in df.columns:
df[col] = pd.to_numeric(df[col], errors="coerce")
return df
def _safe_get(df: pd.DataFrame, col_candidates: list[str], row_idx: int) -> Optional[float]:
"""Get a value from a DataFrame by column candidates and row index."""
col = _find_col(df, col_candidates)
if col is None:
return None
try:
val = df.iloc[row_idx][col]
return float(val) if pd.notna(val) else None
except (IndexError, KeyError, TypeError, ValueError):
return None
def _pct_change(new: Optional[float], old: Optional[float]) -> Optional[float]:
if new is None or old is None or old == 0:
return None
return (new - old) / abs(old) * 100
def _fmt(val: Optional[float], billions: bool = True, suffix: str = "") -> str:
if val is None:
return "N/A"
if billions:
return f"${val / 1e9:.2f}B{suffix}"
return f"{val:.2f}{suffix}"
def _fmt_pct(val: Optional[float]) -> str:
if val is None:
return "N/A"
sign = "+" if val >= 0 else ""
return f"{sign}{val:.1f}%"
# ---------------------------------------------------------------------------
# Core computation
# ---------------------------------------------------------------------------
def compute_ttm_metrics(
income_csv: str,
balance_csv: str,
cashflow_csv: str,
n_quarters: int = 8,
) -> dict:
"""
Compute TTM and multi-quarter trend metrics from vendor CSV strings.
Args:
income_csv: CSV text from get_income_statement (quarterly)
balance_csv: CSV text from get_balance_sheet (quarterly)
cashflow_csv: CSV text from get_cashflow (quarterly)
n_quarters: Number of quarters to include (default 8)
Returns:
dict with keys: quarters_available, ttm, quarterly, trends, metadata
"""
income_df = _parse_financial_csv(income_csv)
balance_df = _parse_financial_csv(balance_csv)
cashflow_df = _parse_financial_csv(cashflow_csv)
result = {
"quarters_available": 0,
"ttm": {},
"quarterly": [],
"trends": {},
"metadata": {"parse_errors": []},
}
if income_df is None:
result["metadata"]["parse_errors"].append("income statement parse failed")
if balance_df is None:
result["metadata"]["parse_errors"].append("balance sheet parse failed")
if cashflow_df is None:
result["metadata"]["parse_errors"].append("cash flow parse failed")
# Use income statement to anchor quarters
if income_df is None:
return result
# Limit to last n_quarters
income_df = income_df.tail(n_quarters)
n = len(income_df)
result["quarters_available"] = n
if balance_df is not None:
balance_df = balance_df.tail(n_quarters)
if cashflow_df is not None:
cashflow_df = cashflow_df.tail(n_quarters)
# --- TTM: sum last 4 quarters for flow items ---
ttm_n = min(4, n)
ttm_income = income_df.tail(ttm_n)
def _ttm_sum(df, cols) -> Optional[float]:
col = _find_col(df, cols)
if col is None:
return None
vals = pd.to_numeric(df.tail(ttm_n)[col], errors="coerce").dropna()
return float(vals.sum()) if len(vals) > 0 else None
def _ttm_latest(df, cols) -> Optional[float]:
"""Stock items: use most recent value."""
if df is None:
return None
col = _find_col(df, cols)
if col is None:
return None
series = pd.to_numeric(df[col], errors="coerce").dropna()
return float(series.iloc[-1]) if len(series) > 0 else None
ttm_revenue = _ttm_sum(ttm_income, _INCOME_REVENUE_COLS)
ttm_gross_profit = _ttm_sum(ttm_income, _INCOME_GROSS_PROFIT_COLS)
ttm_operating_income = _ttm_sum(ttm_income, _INCOME_OPERATING_INCOME_COLS)
ttm_ebitda = _ttm_sum(ttm_income, _INCOME_EBITDA_COLS)
ttm_net_income = _ttm_sum(ttm_income, _INCOME_NET_INCOME_COLS)
ttm_total_assets = _ttm_latest(balance_df, _BALANCE_TOTAL_ASSETS_COLS)
ttm_total_debt = _ttm_latest(balance_df, _BALANCE_TOTAL_DEBT_COLS)
ttm_equity = _ttm_latest(balance_df, _BALANCE_EQUITY_COLS)
ttm_fcf = _ttm_sum(cashflow_df, _CASHFLOW_FCF_COLS) if cashflow_df is not None else None
ttm_operating_cf = _ttm_sum(cashflow_df, _CASHFLOW_OPERATING_COLS) if cashflow_df is not None else None
# Derived ratios
ttm_gross_margin = (ttm_gross_profit / ttm_revenue * 100) if ttm_revenue and ttm_gross_profit else None
ttm_operating_margin = (ttm_operating_income / ttm_revenue * 100) if ttm_revenue and ttm_operating_income else None
ttm_net_margin = (ttm_net_income / ttm_revenue * 100) if ttm_revenue and ttm_net_income else None
ttm_roe = (ttm_net_income / ttm_equity * 100) if ttm_net_income and ttm_equity and ttm_equity != 0 else None
ttm_debt_to_equity = (ttm_total_debt / ttm_equity) if ttm_total_debt and ttm_equity and ttm_equity != 0 else None
result["ttm"] = {
"revenue": ttm_revenue,
"gross_profit": ttm_gross_profit,
"operating_income": ttm_operating_income,
"ebitda": ttm_ebitda,
"net_income": ttm_net_income,
"free_cash_flow": ttm_fcf,
"operating_cash_flow": ttm_operating_cf,
"total_assets": ttm_total_assets,
"total_debt": ttm_total_debt,
"equity": ttm_equity,
"gross_margin_pct": ttm_gross_margin,
"operating_margin_pct": ttm_operating_margin,
"net_margin_pct": ttm_net_margin,
"roe_pct": ttm_roe,
"debt_to_equity": ttm_debt_to_equity,
}
# --- Quarterly breakdown ---
quarterly = []
for i in range(n):
q_date = income_df.index[i].strftime("%Y-%m-%d") if hasattr(income_df.index[i], "strftime") else str(income_df.index[i])
q_rev = _safe_get(income_df, _INCOME_REVENUE_COLS, i)
q_gp = _safe_get(income_df, _INCOME_GROSS_PROFIT_COLS, i)
q_oi = _safe_get(income_df, _INCOME_OPERATING_INCOME_COLS, i)
q_ni = _safe_get(income_df, _INCOME_NET_INCOME_COLS, i)
q_gm = (q_gp / q_rev * 100) if q_rev and q_gp else None
q_om = (q_oi / q_rev * 100) if q_rev and q_oi else None
q_nm = (q_ni / q_rev * 100) if q_rev and q_ni else None
q_eq = _safe_get(balance_df, _BALANCE_EQUITY_COLS, i) if balance_df is not None and i < len(balance_df) else None
q_debt = _safe_get(balance_df, _BALANCE_TOTAL_DEBT_COLS, i) if balance_df is not None and i < len(balance_df) else None
q_fcf = _safe_get(cashflow_df, _CASHFLOW_FCF_COLS, i) if cashflow_df is not None and i < len(cashflow_df) else None
quarterly.append({
"date": q_date,
"revenue": q_rev,
"gross_profit": q_gp,
"operating_income": q_oi,
"net_income": q_ni,
"gross_margin_pct": q_gm,
"operating_margin_pct": q_om,
"net_margin_pct": q_nm,
"equity": q_eq,
"total_debt": q_debt,
"free_cash_flow": q_fcf,
})
result["quarterly"] = quarterly
# --- Trend analysis ---
if n >= 2:
latest_rev = quarterly[-1]["revenue"]
prev_rev = quarterly[-2]["revenue"]
        yoy_rev = quarterly[-5]["revenue"] if n >= 5 else None  # same quarter one year ago (4 quarters back)
result["trends"] = {
"revenue_qoq_pct": _pct_change(latest_rev, prev_rev),
"revenue_yoy_pct": _pct_change(latest_rev, yoy_rev),
"gross_margin_direction": _margin_trend([q["gross_margin_pct"] for q in quarterly]),
"operating_margin_direction": _margin_trend([q["operating_margin_pct"] for q in quarterly]),
"net_margin_direction": _margin_trend([q["net_margin_pct"] for q in quarterly]),
}
return result
def _margin_trend(margins: list) -> str:
"""Classify margin trend from list of quarterly values (oldest first)."""
clean = [m for m in margins if m is not None]
if len(clean) < 3:
return "insufficient data"
recent = clean[-3:]
if recent[-1] > recent[0]:
return "expanding"
elif recent[-1] < recent[0]:
return "contracting"
return "stable"
# ---------------------------------------------------------------------------
# Report formatting
# ---------------------------------------------------------------------------
def format_ttm_report(metrics: dict, ticker: str) -> str:
"""Format compute_ttm_metrics output as a detailed Markdown report."""
n = metrics["quarters_available"]
ttm = metrics["ttm"]
quarterly = metrics["quarterly"]
trends = metrics.get("trends", {})
errors = metrics["metadata"].get("parse_errors", [])
lines = [
f"# TTM Fundamental Analysis: {ticker.upper()}",
f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
f"# Quarters available: {n} (target: 8)",
"",
]
if errors:
lines.append(f"**Data warnings:** {'; '.join(errors)}")
lines.append("")
if n == 0:
lines.append("_No quarterly data available._")
return "\n".join(lines)
# TTM Summary
lines += [
"## Trailing Twelve Months (TTM) Summary",
"",
        "| Metric | TTM Value |",
        "|--------|-----------|",
f"| Revenue | {_fmt(ttm.get('revenue'))} |",
f"| Gross Profit | {_fmt(ttm.get('gross_profit'))} |",
f"| Operating Income | {_fmt(ttm.get('operating_income'))} |",
f"| EBITDA | {_fmt(ttm.get('ebitda'))} |",
f"| Net Income | {_fmt(ttm.get('net_income'))} |",
f"| Free Cash Flow | {_fmt(ttm.get('free_cash_flow'))} |",
f"| Operating Cash Flow | {_fmt(ttm.get('operating_cash_flow'))} |",
f"| Total Debt | {_fmt(ttm.get('total_debt'))} |",
f"| Equity | {_fmt(ttm.get('equity'))} |",
f"| Gross Margin | {_fmt_pct(ttm.get('gross_margin_pct'))} |",
f"| Operating Margin | {_fmt_pct(ttm.get('operating_margin_pct'))} |",
f"| Net Margin | {_fmt_pct(ttm.get('net_margin_pct'))} |",
f"| Return on Equity | {_fmt_pct(ttm.get('roe_pct'))} |",
f"| Debt / Equity | {(str(round(ttm['debt_to_equity'], 2)) + 'x') if ttm.get('debt_to_equity') is not None else 'N/A'} |",
"",
]
# Trend signals
if trends:
lines += [
"## Trend Signals",
"",
            "| Signal | Value |",
            "|--------|-------|",
f"| Revenue QoQ Growth | {_fmt_pct(trends.get('revenue_qoq_pct'))} |",
f"| Revenue YoY Growth | {_fmt_pct(trends.get('revenue_yoy_pct'))} |",
f"| Gross Margin Trend | {trends.get('gross_margin_direction', 'N/A')} |",
f"| Operating Margin Trend | {trends.get('operating_margin_direction', 'N/A')} |",
f"| Net Margin Trend | {trends.get('net_margin_direction', 'N/A')} |",
"",
]
# 8-quarter table
if quarterly:
lines += [
f"## {n}-Quarter Revenue & Margin History (oldest → newest)",
"",
"| Quarter | Revenue | Gross Margin | Operating Margin | Net Margin | FCF |",
"|---------|---------|--------------|------------------|------------|-----|",
]
for q in quarterly:
lines.append(
f"| {q['date']} "
f"| {_fmt(q['revenue'])} "
f"| {_fmt_pct(q['gross_margin_pct'])} "
f"| {_fmt_pct(q['operating_margin_pct'])} "
f"| {_fmt_pct(q['net_margin_pct'])} "
f"| {_fmt(q['free_cash_flow'])} |"
)
lines.append("")
return "\n".join(lines)
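The core arithmetic `compute_ttm_metrics` applies can be sketched standalone on a toy 8-quarter revenue series (values invented; `_pct_change` is copied from the module):

```python
# Toy illustration of the TTM math on 8 quarters of revenue, oldest first:
# TTM = sum of the last 4 quarters; QoQ = latest vs previous quarter;
# YoY = latest vs the same quarter one year ago (4 quarters back, index -5).
from typing import Optional


def _pct_change(new: Optional[float], old: Optional[float]) -> Optional[float]:
    if new is None or old is None or old == 0:
        return None
    return (new - old) / abs(old) * 100


revenue = [10e9, 11e9, 12e9, 13e9, 14e9, 15e9, 16e9, 17e9]  # oldest -> newest

ttm_revenue = sum(revenue[-4:])              # 14 + 15 + 16 + 17 = 62 (billions)
qoq = _pct_change(revenue[-1], revenue[-2])  # (17 - 16) / 16 -> 6.25%
yoy = _pct_change(revenue[-1], revenue[-5])  # (17 - 13) / 13 -> ~30.77%

print(f"TTM ${ttm_revenue / 1e9:.2f}B | QoQ {qoq:+.2f}% | YoY {yoy:+.2f}%")
```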


@@ -30,7 +30,11 @@ from tradingagents.agents.utils.agent_utils import (
     get_income_statement,
     get_news,
     get_insider_transactions,
-    get_global_news
+    get_global_news,
+    get_ttm_analysis,
+    get_peer_comparison,
+    get_sector_relative,
+    get_macro_regime,
 )
 from .conditional_logic import ConditionalLogic
@@ -142,8 +146,11 @@ class TradingAgentsGraph:
         # Create tool nodes
         self.tool_nodes = self._create_tool_nodes()
 
-        # Initialize components
-        self.conditional_logic = ConditionalLogic()
+        # Initialize components — wire debate/risk rounds from config
+        self.conditional_logic = ConditionalLogic(
+            max_debate_rounds=self.config.get("max_debate_rounds", 2),
+            max_risk_discuss_rounds=self.config.get("max_risk_discuss_rounds", 2),
+        )
         self.graph_setup = GraphSetup(
             self.quick_thinking_llm,
             self.mid_thinking_llm,
@@ -210,6 +217,8 @@
                     get_stock_data,
                     # Technical indicators
                     get_indicators,
+                    # Macro regime classification
+                    get_macro_regime,
                 ]
             ),
             "social": ToolNode(
@@ -233,6 +242,11 @@
                     get_balance_sheet,
                     get_cashflow,
                     get_income_statement,
+                    # TTM trend analysis (8 quarters)
+                    get_ttm_analysis,
+                    # Relative performance tools
+                    get_peer_comparison,
+                    get_sector_relative,
                 ]
             ),
         }
@@ -278,6 +292,7 @@
             "company_of_interest": final_state["company_of_interest"],
             "trade_date": final_state["trade_date"],
             "market_report": final_state["market_report"],
+            "macro_regime_report": final_state.get("macro_regime_report", ""),
             "sentiment_report": final_state["sentiment_report"],
             "news_report": final_state["news_report"],
             "fundamentals_report": final_state["fundamentals_report"],
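A minimal sketch of why the `ConditionalLogic()` wiring fix above matters. The class below is a stand-in with the same keyword parameters seen in the diff (and the hardcoded defaults of 1 described in the commit message), not the real implementation:

```python
# Before the fix, ConditionalLogic() was constructed with no arguments, so its
# hardcoded defaults won and the config's debate-round settings were silently
# ignored. Stand-in class for illustration only.
class ConditionalLogic:
    def __init__(self, max_debate_rounds: int = 1, max_risk_discuss_rounds: int = 1):
        self.max_debate_rounds = max_debate_rounds
        self.max_risk_discuss_rounds = max_risk_discuss_rounds


config = {"max_debate_rounds": 2, "max_risk_discuss_rounds": 2}

before = ConditionalLogic()  # old wiring: config never consulted
after = ConditionalLogic(    # fixed wiring: config values passed through
    max_debate_rounds=config.get("max_debate_rounds", 2),
    max_risk_discuss_rounds=config.get("max_risk_discuss_rounds", 2),
)

print(before.max_debate_rounds, after.max_debate_rounds)  # 1 2
```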

uv.lock

@@ -752,6 +752,7 @@ wheels = [
 { url = "https://files.pythonhosted.org/packages/92/db/b4c12cff13ebac2786f4f217f06588bccd8b53d260453404ef22b121fc3a/greenlet-3.2.3-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:1afd685acd5597349ee6d7a88a8bec83ce13c106ac78c196ee9dde7c04fe87be", size = 268977, upload-time = "2025-06-05T16:10:24.001Z" },
 { url = "https://files.pythonhosted.org/packages/52/61/75b4abd8147f13f70986df2801bf93735c1bd87ea780d70e3b3ecda8c165/greenlet-3.2.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:761917cac215c61e9dc7324b2606107b3b292a8349bdebb31503ab4de3f559ac", size = 627351, upload-time = "2025-06-05T16:38:50.685Z" },
 { url = "https://files.pythonhosted.org/packages/35/aa/6894ae299d059d26254779a5088632874b80ee8cf89a88bca00b0709d22f/greenlet-3.2.3-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:a433dbc54e4a37e4fff90ef34f25a8c00aed99b06856f0119dcf09fbafa16392", size = 638599, upload-time = "2025-06-05T16:41:34.057Z" },
+{ url = "https://files.pythonhosted.org/packages/30/64/e01a8261d13c47f3c082519a5e9dbf9e143cc0498ed20c911d04e54d526c/greenlet-3.2.3-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:72e77ed69312bab0434d7292316d5afd6896192ac4327d44f3d613ecb85b037c", size = 634482, upload-time = "2025-06-05T16:48:16.26Z" },
 { url = "https://files.pythonhosted.org/packages/47/48/ff9ca8ba9772d083a4f5221f7b4f0ebe8978131a9ae0909cf202f94cd879/greenlet-3.2.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:68671180e3849b963649254a882cd544a3c75bfcd2c527346ad8bb53494444db", size = 633284, upload-time = "2025-06-05T16:13:01.599Z" },
 { url = "https://files.pythonhosted.org/packages/e9/45/626e974948713bc15775b696adb3eb0bd708bec267d6d2d5c47bb47a6119/greenlet-3.2.3-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:49c8cfb18fb419b3d08e011228ef8a25882397f3a859b9fe1436946140b6756b", size = 582206, upload-time = "2025-06-05T16:12:48.51Z" },
 { url = "https://files.pythonhosted.org/packages/b1/8e/8b6f42c67d5df7db35b8c55c9a850ea045219741bb14416255616808c690/greenlet-3.2.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:efc6dc8a792243c31f2f5674b670b3a95d46fa1c6a912b8e310d6f542e7b0712", size = 1111412, upload-time = "2025-06-05T16:36:45.479Z" },
@@ -760,6 +761,7 @@ wheels = [
 { url = "https://files.pythonhosted.org/packages/fc/2e/d4fcb2978f826358b673f779f78fa8a32ee37df11920dc2bb5589cbeecef/greenlet-3.2.3-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:784ae58bba89fa1fa5733d170d42486580cab9decda3484779f4759345b29822", size = 270219, upload-time = "2025-06-05T16:10:10.414Z" },
 { url = "https://files.pythonhosted.org/packages/16/24/929f853e0202130e4fe163bc1d05a671ce8dcd604f790e14896adac43a52/greenlet-3.2.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0921ac4ea42a5315d3446120ad48f90c3a6b9bb93dd9b3cf4e4d84a66e42de83", size = 630383, upload-time = "2025-06-05T16:38:51.785Z" },
 { url = "https://files.pythonhosted.org/packages/d1/b2/0320715eb61ae70c25ceca2f1d5ae620477d246692d9cc284c13242ec31c/greenlet-3.2.3-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:d2971d93bb99e05f8c2c0c2f4aa9484a18d98c4c3bd3c62b65b7e6ae33dfcfaf", size = 642422, upload-time = "2025-06-05T16:41:35.259Z" },
+{ url = "https://files.pythonhosted.org/packages/bd/49/445fd1a210f4747fedf77615d941444349c6a3a4a1135bba9701337cd966/greenlet-3.2.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:c667c0bf9d406b77a15c924ef3285e1e05250948001220368e039b6aa5b5034b", size = 638375, upload-time = "2025-06-05T16:48:18.235Z" },
 { url = "https://files.pythonhosted.org/packages/7e/c8/ca19760cf6eae75fa8dc32b487e963d863b3ee04a7637da77b616703bc37/greenlet-3.2.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:592c12fb1165be74592f5de0d70f82bc5ba552ac44800d632214b76089945147", size = 637627, upload-time = "2025-06-05T16:13:02.858Z" },
 { url = "https://files.pythonhosted.org/packages/65/89/77acf9e3da38e9bcfca881e43b02ed467c1dedc387021fc4d9bd9928afb8/greenlet-3.2.3-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:29e184536ba333003540790ba29829ac14bb645514fbd7e32af331e8202a62a5", size = 585502, upload-time = "2025-06-05T16:12:49.642Z" },
 { url = "https://files.pythonhosted.org/packages/97/c6/ae244d7c95b23b7130136e07a9cc5aadd60d59b5951180dc7dc7e8edaba7/greenlet-3.2.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:93c0bb79844a367782ec4f429d07589417052e621aa39a5ac1fb99c5aa308edc", size = 1114498, upload-time = "2025-06-05T16:36:46.598Z" },
@ -768,6 +770,7 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f3/94/ad0d435f7c48debe960c53b8f60fb41c2026b1d0fa4a99a1cb17c3461e09/greenlet-3.2.3-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:25ad29caed5783d4bd7a85c9251c651696164622494c00802a139c00d639242d", size = 271992, upload-time = "2025-06-05T16:11:23.467Z" }, { url = "https://files.pythonhosted.org/packages/f3/94/ad0d435f7c48debe960c53b8f60fb41c2026b1d0fa4a99a1cb17c3461e09/greenlet-3.2.3-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:25ad29caed5783d4bd7a85c9251c651696164622494c00802a139c00d639242d", size = 271992, upload-time = "2025-06-05T16:11:23.467Z" },
{ url = "https://files.pythonhosted.org/packages/93/5d/7c27cf4d003d6e77749d299c7c8f5fd50b4f251647b5c2e97e1f20da0ab5/greenlet-3.2.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:88cd97bf37fe24a6710ec6a3a7799f3f81d9cd33317dcf565ff9950c83f55e0b", size = 638820, upload-time = "2025-06-05T16:38:52.882Z" }, { url = "https://files.pythonhosted.org/packages/93/5d/7c27cf4d003d6e77749d299c7c8f5fd50b4f251647b5c2e97e1f20da0ab5/greenlet-3.2.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:88cd97bf37fe24a6710ec6a3a7799f3f81d9cd33317dcf565ff9950c83f55e0b", size = 638820, upload-time = "2025-06-05T16:38:52.882Z" },
{ url = "https://files.pythonhosted.org/packages/c6/7e/807e1e9be07a125bb4c169144937910bf59b9d2f6d931578e57f0bce0ae2/greenlet-3.2.3-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:baeedccca94880d2f5666b4fa16fc20ef50ba1ee353ee2d7092b383a243b0b0d", size = 653046, upload-time = "2025-06-05T16:41:36.343Z" }, { url = "https://files.pythonhosted.org/packages/c6/7e/807e1e9be07a125bb4c169144937910bf59b9d2f6d931578e57f0bce0ae2/greenlet-3.2.3-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:baeedccca94880d2f5666b4fa16fc20ef50ba1ee353ee2d7092b383a243b0b0d", size = 653046, upload-time = "2025-06-05T16:41:36.343Z" },
{ url = "https://files.pythonhosted.org/packages/9d/ab/158c1a4ea1068bdbc78dba5a3de57e4c7aeb4e7fa034320ea94c688bfb61/greenlet-3.2.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:be52af4b6292baecfa0f397f3edb3c6092ce071b499dd6fe292c9ac9f2c8f264", size = 647701, upload-time = "2025-06-05T16:48:19.604Z" },
{ url = "https://files.pythonhosted.org/packages/cc/0d/93729068259b550d6a0288da4ff72b86ed05626eaf1eb7c0d3466a2571de/greenlet-3.2.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0cc73378150b8b78b0c9fe2ce56e166695e67478550769536a6742dca3651688", size = 649747, upload-time = "2025-06-05T16:13:04.628Z" },
{ url = "https://files.pythonhosted.org/packages/f6/f6/c82ac1851c60851302d8581680573245c8fc300253fc1ff741ae74a6c24d/greenlet-3.2.3-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:706d016a03e78df129f68c4c9b4c4f963f7d73534e48a24f5f5a7101ed13dbbb", size = 605461, upload-time = "2025-06-05T16:12:50.792Z" },
{ url = "https://files.pythonhosted.org/packages/98/82/d022cf25ca39cf1200650fc58c52af32c90f80479c25d1cbf57980ec3065/greenlet-3.2.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:419e60f80709510c343c57b4bb5a339d8767bf9aef9b8ce43f4f143240f88b7c", size = 1121190, upload-time = "2025-06-05T16:36:48.59Z" },
@@ -776,6 +779,7 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b1/cf/f5c0b23309070ae93de75c90d29300751a5aacefc0a3ed1b1d8edb28f08b/greenlet-3.2.3-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:500b8689aa9dd1ab26872a34084503aeddefcb438e2e7317b89b11eaea1901ad", size = 270732, upload-time = "2025-06-05T16:10:08.26Z" },
{ url = "https://files.pythonhosted.org/packages/48/ae/91a957ba60482d3fecf9be49bc3948f341d706b52ddb9d83a70d42abd498/greenlet-3.2.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:a07d3472c2a93117af3b0136f246b2833fdc0b542d4a9799ae5f41c28323faef", size = 639033, upload-time = "2025-06-05T16:38:53.983Z" },
{ url = "https://files.pythonhosted.org/packages/6f/df/20ffa66dd5a7a7beffa6451bdb7400d66251374ab40b99981478c69a67a8/greenlet-3.2.3-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:8704b3768d2f51150626962f4b9a9e4a17d2e37c8a8d9867bbd9fa4eb938d3b3", size = 652999, upload-time = "2025-06-05T16:41:37.89Z" },
{ url = "https://files.pythonhosted.org/packages/51/b4/ebb2c8cb41e521f1d72bf0465f2f9a2fd803f674a88db228887e6847077e/greenlet-3.2.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:5035d77a27b7c62db6cf41cf786cfe2242644a7a337a0e155c80960598baab95", size = 647368, upload-time = "2025-06-05T16:48:21.467Z" },
{ url = "https://files.pythonhosted.org/packages/8e/6a/1e1b5aa10dced4ae876a322155705257748108b7fd2e4fae3f2a091fe81a/greenlet-3.2.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2d8aa5423cd4a396792f6d4580f88bdc6efcb9205891c9d40d20f6e670992efb", size = 650037, upload-time = "2025-06-05T16:13:06.402Z" },
{ url = "https://files.pythonhosted.org/packages/26/f2/ad51331a157c7015c675702e2d5230c243695c788f8f75feba1af32b3617/greenlet-3.2.3-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2c724620a101f8170065d7dded3f962a2aea7a7dae133a009cada42847e04a7b", size = 608402, upload-time = "2025-06-05T16:12:51.91Z" },
{ url = "https://files.pythonhosted.org/packages/26/bc/862bd2083e6b3aff23300900a956f4ea9a4059de337f5c8734346b9b34fc/greenlet-3.2.3-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:873abe55f134c48e1f2a6f53f7d1419192a3d1a4e873bace00499a4e45ea6af0", size = 1119577, upload-time = "2025-06-05T16:36:49.787Z" },
@@ -784,6 +788,7 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d8/ca/accd7aa5280eb92b70ed9e8f7fd79dc50a2c21d8c73b9a0856f5b564e222/greenlet-3.2.3-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:3d04332dddb10b4a211b68111dabaee2e1a073663d117dc10247b5b1642bac86", size = 271479, upload-time = "2025-06-05T16:10:47.525Z" },
{ url = "https://files.pythonhosted.org/packages/55/71/01ed9895d9eb49223280ecc98a557585edfa56b3d0e965b9fa9f7f06b6d9/greenlet-3.2.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:8186162dffde068a465deab08fc72c767196895c39db26ab1c17c0b77a6d8b97", size = 683952, upload-time = "2025-06-05T16:38:55.125Z" },
{ url = "https://files.pythonhosted.org/packages/ea/61/638c4bdf460c3c678a0a1ef4c200f347dff80719597e53b5edb2fb27ab54/greenlet-3.2.3-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f4bfbaa6096b1b7a200024784217defedf46a07c2eee1a498e94a1b5f8ec5728", size = 696917, upload-time = "2025-06-05T16:41:38.959Z" },
{ url = "https://files.pythonhosted.org/packages/22/cc/0bd1a7eb759d1f3e3cc2d1bc0f0b487ad3cc9f34d74da4b80f226fde4ec3/greenlet-3.2.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:ed6cfa9200484d234d8394c70f5492f144b20d4533f69262d530a1a082f6ee9a", size = 692443, upload-time = "2025-06-05T16:48:23.113Z" },
{ url = "https://files.pythonhosted.org/packages/67/10/b2a4b63d3f08362662e89c103f7fe28894a51ae0bc890fabf37d1d780e52/greenlet-3.2.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:02b0df6f63cd15012bed5401b47829cfd2e97052dc89da3cfaf2c779124eb892", size = 692995, upload-time = "2025-06-05T16:13:07.972Z" },
{ url = "https://files.pythonhosted.org/packages/5a/c6/ad82f148a4e3ce9564056453a71529732baf5448ad53fc323e37efe34f66/greenlet-3.2.3-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:86c2d68e87107c1792e2e8d5399acec2487a4e993ab76c792408e59394d52141", size = 655320, upload-time = "2025-06-05T16:12:53.453Z" },
{ url = "https://files.pythonhosted.org/packages/5c/4f/aab73ecaa6b3086a4c89863d94cf26fa84cbff63f52ce9bc4342b3087a06/greenlet-3.2.3-cp314-cp314-win_amd64.whl", hash = "sha256:8c47aae8fbbfcf82cc13327ae802ba13c9c36753b67e760023fd116bc124a62a", size = 301236, upload-time = "2025-06-05T16:15:20.111Z" },
@@ -961,6 +966,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/59/91/aa6bde563e0085a02a435aa99b49ef75b0a4b062635e606dab23ce18d720/inflection-0.5.1-py2.py3-none-any.whl", hash = "sha256:f38b2b640938a4f35ade69ac3d053042959b62a0f1076a5bbaa1b9526605a8a2", size = 9454, upload-time = "2020-08-22T08:16:27.816Z" },
]
[[package]]
name = "iniconfig"
version = "2.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
]
[[package]]
name = "jinja2"
version = "3.1.6"
@@ -2599,6 +2613,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/fe/39/979e8e21520d4e47a0bbe349e2713c0aac6f3d853d0e5b34d76206c439aa/platformdirs-4.3.8-py3-none-any.whl", hash = "sha256:ff7059bb7eb1179e2685604f4aaf157cfd9535242bd23742eadc3c13542139b4", size = 18567, upload-time = "2025-05-07T22:47:40.376Z" },
]
[[package]]
name = "pluggy"
version = "1.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
]
[[package]]
name = "posthog"
version = "3.25.0"
@@ -2907,6 +2930,24 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/61/ad/689f02752eeec26aed679477e80e632ef1b682313be70793d798c1d5fc8f/PyJWT-2.10.1-py3-none-any.whl", hash = "sha256:dcdd193e30abefd5debf142f9adfcdd2b58004e644f25406ffaebd50bd98dacb", size = 22997, upload-time = "2024-11-28T03:43:27.893Z" },
]
[[package]]
name = "pytest"
version = "9.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "exceptiongroup", marker = "python_full_version < '3.11'" },
{ name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "pygments" },
{ name = "tomli", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
]
[[package]]
name = "python-dateutil"
version = "2.9.0.post0"
@@ -3526,6 +3567,11 @@ dependencies = [
{ name = "yfinance" },
]
[package.dev-dependencies]
dev = [
{ name = "pytest" },
]
[package.metadata]
requires-dist = [
{ name = "backtrader", specifier = ">=1.9.78.123" },
@@ -3552,6 +3598,9 @@ requires-dist = [
{ name = "yfinance", specifier = ">=0.2.63" },
]
[package.metadata.requires-dev]
dev = [{ name = "pytest", specifier = ">=9.0.2" }]
[[package]]
name = "typer"
version = "0.21.1"