research(autonomous): 2026-04-14 — automated research run

github-actions[bot] 2026-04-14 20:22:15 +00:00 committed by Youssef Aitousarrah
parent 37029e554e
commit 17e77f036f
4 changed files with 217 additions and 0 deletions


@@ -14,6 +14,7 @@
| social_dd | scanners/social_dd.md | 2026-04-14 | 57.1% 30d win rate (+1.41% avg 30d, n=26) — only scanner positive at 30d; eval horizon mismatch persists |
| volume_accumulation | scanners/volume_accumulation.md | — | No data yet |
| short_squeeze | scanners/short_squeeze.md | 2026-04-14 | 60% 7d win rate (n=11), best 7d performer; BUT 30% 30d — short-term signal only, degrades at 30d |
| earnings_beat | scanners/earnings_beat.md | 2026-04-14 | New PEAD scanner: recent EPS beats ≥5% surprise; 15% annualized academic edge; distinct from earnings_calendar |
## Research
@@ -21,6 +22,7 @@
|-------|------|------|---------|
| Short Interest Squeeze Scanner | research/2026-04-12-short-interest-squeeze.md | 2026-04-12 | High SI (>20%) + DTC >5 as squeeze-risk discovery; implemented as short_squeeze scanner |
| 52-Week High Breakout Momentum | research/2026-04-13-52-week-high-breakout.md | 2026-04-13 | George & Hwang (2004) validated: 52w high crossing + 1.5x volume = 72% win rate, +11.4% avg over 31d; implemented as high_52w_breakout scanner |
| PEAD Post-Earnings Drift | research/2026-04-14-pead-earnings-beat.md | 2026-04-14 | Bernard & Thomas (1989): 18% annualized PEAD; QuantPedia: 15% annualized (1987-2004); implemented as earnings_beat scanner (distinct from earnings_calendar's upcoming-only scope) |
| reddit_dd | scanners/reddit_dd.md | — | No data yet |
| reddit_trending | scanners/reddit_trending.md | — | No data yet |
| semantic_news | scanners/semantic_news.md | — | No data yet |


@@ -0,0 +1,81 @@
# Research: Post-Earnings Announcement Drift (PEAD)
**Date:** 2026-04-14
**Mode:** autonomous
## Summary
PEAD is one of finance's most-studied anomalies: stocks that beat earnings estimates
continue drifting upward for days to weeks after the announcement. QuantPedia backtests
(1987–2004) show 15% annualized returns; the effect is strongest in small-to-mid caps
with >10% EPS surprise. Our pipeline has an `earnings_calendar` scanner that flags
upcoming earnings, but nothing that captures the drift *after* a beat — this is the gap.
## Sources Reviewed
- **QuantPedia — Post-Earnings Announcement Effect**: Combined EAR+SUE strategy generates
~12.5% abnormal returns p.a. (1987–2004); optimal hold ~60 trading days; effect strongest
in small caps; most returns on long side; -11.2% max drawdown observed.
- **Ball & Brown (1968) / Bernard & Thomas (1989)**: Foundational PEAD literature;
B&T (1989) documented ~18% annualized abnormal returns; magnitude has declined since
but effect persists — particularly in small caps.
- **DayTrading.com PEAD guide**: Drift persists through approximately day 9 before
plateauing; 5–20 day hold periods are optimal for tactical implementations.
- **SSRN / Philadelphia Fed (PEAD.txt, 2021)**: NLP-enhanced PEAD achieves 8.01%
drift over 1-year window; suggests signal is durable when combined with text signals.
- **QuantConnect price+earnings momentum**: Combined momentum strategy showed mixed results
(Sharpe -0.27) when using *price* momentum alongside earnings growth — not the same as
surprise-based PEAD.
- **Alpha Architect — 13F data quality warning**: 13F-based institutional signals have 45-day
lag and data quality issues — screened out as alternative. PEAD is clearly superior for
short-horizon plays.
- **Finnhub API docs / finnhub-python**: `earnings_calendar(from_date, to_date)` returns
`epsActual` and `epsEstimate` for all US stocks in the window. Surprise detection requires
only a lookback call — no extra data sources needed.
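The lookback-and-filter step described above can be sketched as a small helper. This is illustrative, not code from the repo: the commented-out client call shows the finnhub-python shape, and `parse_beats` plus the sample payload are hypothetical.

```python
# Sketch: detect recent EPS beats from Finnhub earnings-calendar entries.
# The live call would look like this (finnhub-python returns a dict whose
# "earningsCalendar" key holds the entry list):
#
#   import finnhub
#   client = finnhub.Client(api_key="...")
#   resp = client.earnings_calendar(_from="2026-03-31", to="2026-04-14",
#                                   symbol="", international=False)
#   entries = resp["earningsCalendar"]

def parse_beats(entries, min_surprise_pct=5.0):
    """Return (symbol, surprise_pct) pairs for entries beating estimates."""
    beats = []
    for e in entries:
        actual, estimate = e.get("epsActual"), e.get("epsEstimate")
        if actual is None or estimate in (None, 0):
            continue  # can't compute a surprise without both fields
        surprise = (actual - estimate) / abs(estimate) * 100
        if surprise >= min_surprise_pct:
            beats.append((e["symbol"], round(surprise, 1)))
    # Largest beats first, matching the scanner's sort order
    return sorted(beats, key=lambda t: t[1], reverse=True)

sample = [
    {"symbol": "AAA", "epsActual": 0.60, "epsEstimate": 0.50},   # +20% beat
    {"symbol": "BBB", "epsActual": 0.51, "epsEstimate": 0.50},   # +2%, filtered
    {"symbol": "CCC", "epsActual": -0.40, "epsEstimate": -0.50}, # loss, but a +20% beat
]
print(parse_beats(sample))  # [('AAA', 20.0), ('CCC', 20.0)]
```

Note the `abs(estimate)` denominator: it keeps the sign of the surprise correct when the estimate is negative, so a smaller-than-expected loss (CCC) still registers as a beat.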
## Fit Evaluation
| Dimension | Score | Notes |
|-----------|-------|-------|
| Data availability | ✅ | `finnhub_api.get_earnings_calendar()` already integrated; returns `epsActual` + `epsEstimate`; lookback call detects recent beats |
| Complexity | moderate | ~3h: query past-14d earnings calendar, filter for beats, compute surprise%, sort by magnitude |
| Signal uniqueness | low overlap | `earnings_calendar` scanner = UPCOMING earnings; PEAD scanner = RECENT beats + drift capture; different timing and signal |
| Evidence quality | backtested | QuantPedia: 15% annualized returns (1987–2004); Bernard & Thomas (1989); 60+ years of academic literature |
## Recommendation
**Implement** — All auto-implement thresholds pass.
Key implementation notes:
- Focus on small-to-mid cap stocks where PEAD effect is strongest (B&T 1989)
- Minimum 5% surprise threshold to filter noise
- CRITICAL at ≥20% surprise, HIGH at 10–20%, MEDIUM at 5–10%
- Hold horizon: 7–14 days (primary drift window per DayTrading.com)
- Declining US large-cap PEAD mitigated by: small-cap bias + significant surprise filter
## Known Failure Modes
- US large-cap PEAD has declined since 1989 (more efficient pricing); strategy most
effective for small/mid caps and significant surprises (>10%)
- SUE reversal after 3 quarters (price reverts on next earnings); this is beyond our
30d evaluation window so not immediately harmful
- Overlapping earnings: same ticker may appear in `earnings_calendar` (upcoming) and
`earnings_beat` (recent); ranker should treat these as separate signals
## Proposed Scanner Spec
- **Scanner name:** `earnings_beat`
- **Strategy:** `pead_drift`
- **Pipeline:** `events`
- **Data source:** `tradingagents/dataflows/finnhub_api.py` → `get_earnings_calendar(from_date, to_date, return_structured=True)`
- **Signal logic:**
- Query past `lookback_days` (default 14) of earnings calendar
- Compute `surprise_pct = (epsActual - epsEstimate) / abs(epsEstimate) * 100`
- Filter: `surprise_pct >= min_surprise_pct` (default 5.0%)
- Filter: `epsEstimate != 0` and both fields not None
- Sort by `surprise_pct` descending
- **Priority rules:**
- CRITICAL if `surprise_pct >= 20`
- HIGH if `surprise_pct >= 10`
- MEDIUM otherwise
- **Context format:** `"Earnings beat Xd ago: actual $A vs est $B (+Z% surprise) — PEAD drift window open"`
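To make the priority rules above concrete, a worked example as a standalone function (the numbers and the function itself are illustrative, not part of the scanner):

```python
def pead_priority(eps_actual, eps_estimate):
    """Map an EPS result to the spec's priority tier (None = below the 5% floor)."""
    surprise_pct = (eps_actual - eps_estimate) / abs(eps_estimate) * 100
    if surprise_pct < 5.0:
        return None  # below min_surprise_pct, not a candidate
    if surprise_pct >= 20:
        return "CRITICAL"
    if surprise_pct >= 10:
        return "HIGH"
    return "MEDIUM"

print(pead_priority(0.75, 0.50))  # +50.0% surprise -> CRITICAL
print(pead_priority(0.56, 0.50))  # +12.0% surprise -> HIGH
print(pead_priority(0.58, 0.55))  # +5.5% surprise  -> MEDIUM
print(pead_priority(0.51, 0.50))  # +2.0% surprise  -> None (filtered)
```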


@@ -3,6 +3,7 @@
# Import all scanners to trigger registration
from . import (
analyst_upgrades, # noqa: F401
earnings_beat, # noqa: F401
earnings_calendar, # noqa: F401
high_52w_breakout, # noqa: F401
insider_buying, # noqa: F401


@@ -0,0 +1,133 @@
"""Post-Earnings Announcement Drift (PEAD) scanner.
Surfaces stocks that recently reported significant EPS beats, capturing
the well-documented post-earnings drift effect: stocks that beat estimates tend
to keep drifting upward for 7–30 days after the announcement.
Research basis: docs/iterations/research/2026-04-14-pead-earnings-beat.md
Key insight: PEAD edge is strongest for small-to-mid caps with >10% EPS
surprise (Bernard & Thomas 1989; QuantPedia 15% annualized, 1987–2004).
Hold window: 7–14 days (primary drift window; effect plateaus ~day 9).
"""
from datetime import datetime, timedelta
from typing import Any, Dict, List
from tradingagents.dataflows.discovery.scanner_registry import SCANNER_REGISTRY, BaseScanner
from tradingagents.dataflows.discovery.utils import Priority
from tradingagents.utils.logger import get_logger
logger = get_logger(__name__)
class EarningsBeatScanner(BaseScanner):
"""Scan for recent EPS beats to capture post-earnings drift (PEAD)."""
name = "earnings_beat"
pipeline = "events"
strategy = "pead_drift"
def __init__(self, config: Dict[str, Any]):
super().__init__(config)
self.lookback_days = self.scanner_config.get("lookback_days", 14)
self.min_surprise_pct = self.scanner_config.get("min_surprise_pct", 5.0)
def scan(self, state: Dict[str, Any]) -> List[Dict[str, Any]]:
if not self.is_enabled():
return []
logger.info(
f"📈 Scanning earnings beats (past {self.lookback_days}d, "
f">={self.min_surprise_pct}% surprise)..."
)
try:
from tradingagents.dataflows.finnhub_api import get_earnings_calendar
to_date = datetime.now().strftime("%Y-%m-%d")
from_date = (datetime.now() - timedelta(days=self.lookback_days)).strftime("%Y-%m-%d")
earnings = get_earnings_calendar(
from_date=from_date,
to_date=to_date,
return_structured=True,
)
if not earnings:
logger.info("No recent earnings data found")
return []
today = datetime.now().date()
candidates = []
for event in earnings:
ticker = (event.get("symbol") or "").upper().strip()
if not ticker:
continue
eps_actual = event.get("epsActual")
eps_estimate = event.get("epsEstimate")
earnings_date_str = event.get("date", "")
# Need both actual and estimate to compute surprise
if eps_actual is None or eps_estimate is None:
continue
# Avoid division by zero; skip stub/loss estimates near zero
if eps_estimate == 0:
continue
surprise_pct = ((eps_actual - eps_estimate) / abs(eps_estimate)) * 100
if surprise_pct < self.min_surprise_pct:
continue
# Days since announcement
try:
earnings_date = datetime.strptime(earnings_date_str, "%Y-%m-%d").date()
days_ago = (today - earnings_date).days
except (ValueError, TypeError):
days_ago = None
# Priority by surprise magnitude
if surprise_pct >= 20:
priority = Priority.CRITICAL.value
elif surprise_pct >= 10:
priority = Priority.HIGH.value
else:
priority = Priority.MEDIUM.value
days_ago_str = f"{days_ago}d ago" if days_ago is not None else "recently"
context = (
f"Earnings beat {days_ago_str}: actual ${eps_actual:.2f} vs "
f"est ${eps_estimate:.2f} (+{surprise_pct:.1f}% surprise) "
f"— PEAD drift window open"
)
candidates.append(
{
"ticker": ticker,
"source": self.name,
"context": context,
"priority": priority,
"strategy": self.strategy,
"eps_surprise_pct": surprise_pct,
"eps_actual": eps_actual,
"eps_estimate": eps_estimate,
"days_since_earnings": days_ago,
}
)
# Sort by surprise magnitude (largest beats first)
candidates.sort(key=lambda x: x.get("eps_surprise_pct", 0), reverse=True)
candidates = candidates[: self.limit]
logger.info(f"Earnings beats (PEAD): {len(candidates)} candidates")
return candidates
except Exception as e:
logger.warning(f"⚠️ Earnings beat scanner failed: {e}")
return []
SCANNER_REGISTRY.register(EarningsBeatScanner)