fix: address all PR#106 review findings (ADR 016) (#106)

* Initial plan

* feat: add observability logging - run event persistence and enriched tool events
  - Integrate RunLogger into LangGraphEngine for JSONL event persistence
  - Add _start_run_logger/_finish_run_logger lifecycle in all run methods
  - Enrich tool events with service, status, and error fields
  - Add _TOOL_SERVICE_MAP for tool-to-service name resolution
  - Frontend: color error events in red, show service badges
  - Frontend: display graceful_skip status with orange indicators
  - Frontend: add error tab and service info to EventDetail/EventDetailModal
  - Add 11 unit tests for new observability features
  Agent-Logs-Url: https://github.com/aguzererler/TradingAgents/sessions/477a0676-af7b-48ff-8a3d-567e943323cf

* refactor: address code review - extract graceful keywords constant, fix imports
  - Move the get_daily_dir import to top level (remove inline aliases)
  - Extract _GRACEFUL_SKIP_KEYWORDS as a module-level constant
  - Update test patches to match the top-level import location

* feat: add run_id to report paths, MongoDB report store, store factory, and reflexion memory
  - report_paths.py: all path helpers accept an optional run_id for run-scoped dirs
  - report_store.py: ReportStore supports run_id + a latest.json pointer mechanism
  - mongo_report_store.py: MongoDB-backed store with the same interface (no overwrites)
  - store_factory.py: factory returns the MongoDB or filesystem store based on config
  - memory/reflexion.py: reflexion memory for learning from past decisions
  - langgraph_engine.py: uses the store factory + run_id in all run methods
  - Fix the holding-reviews bug (was save_holding_reviews, now save_holding_review)
  - default_config.py: add mongo_uri and mongo_db config keys
  - pyproject.toml: add the pymongo>=4.12.1 dependency
  - .env.example: document TRADINGAGENTS_MONGO_URI and TRADINGAGENTS_MONGO_DB
  Agent-Logs-Url: https://github.com/aguzererler/TradingAgents/sessions/16e673ea-40a1-40a0-8e77-f8cd08c1a716

* fix: clean up reflexion record_outcome (remove broken update_one with sort)
  Also update the runs.py reset endpoint to use the store factory, fix tests,
  add ADR 015, and update CURRENT_STATE.md

* fix: address all PR#106 review findings (ADR 016)
  - Fix save_holding_review: iterate per ticker instead of passing portfolio_id as the ticker (Finding 13)
  - Fix RunLogger context: replace threading.local with contextvars for asyncio task isolation (Finding 3)
  - Fix list_pm_decisions: add an _id:0 projection to exclude ObjectId (Finding 6)
  - Fix ReflexionMemory: native datetime for MongoDB, ISO string for the local JSON fallback (Finding 7)
  - Fix the latest pointer: write/read_latest_pointer accept a base_dir parameter; ReportStore passes _base_dir (Finding 12)
  - Wire the RunLogger callback into all astream_events calls (Finding 1)
  - Call ensure_indexes in MongoReportStore.__init__ (Finding 11)
  - Create ADR 016 documenting all 13 findings and resolutions
  - Add 14 targeted tests covering all 7 fixes
  - All 886 tests pass (872 existing + 14 new)
  Agent-Logs-Url: https://github.com/aguzererler/TradingAgents/sessions/e52cdd2f-efae-4d2a-a56f-903d909b3342

* chore: remove unused imports in tests; remove the redundant ensure_indexes call in the factory

* docs: update ADR 016 — mark Finding 2 resolved, update context docs for contextvars
  Agent-Logs-Url: https://github.com/aguzererler/TradingAgents/sessions/ce9e2400-a60d-4a6b-896b-1b34ec786bed

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: aguzererler <6199053+aguzererler@users.noreply.github.com>
This commit is contained in:
parent
b1a775882e
commit
9c9cc8c0b6
@@ -84,6 +84,13 @@ FINNHUB_API_KEY=
 # PostgreSQL connection string for Supabase (required for portfolio commands)
 # SUPABASE_CONNECTION_STRING=postgresql://postgres.<project>:<password>@aws-1-<region>.pooler.supabase.com:6543/postgres
+
+# ── MongoDB report store (optional) ──────────────────────────────────
+# When set, all reports (scans, analyses, decisions) are stored in MongoDB
+# instead of the filesystem. Each run creates separate documents so
+# same-day re-runs never overwrite earlier results.
+# TRADINGAGENTS_MONGO_URI=mongodb://localhost:27017
+# TRADINGAGENTS_MONGO_DB=tradingagents

 # Root directory for all reports (scans, analysis, portfolio artifacts).
 # All output lands under {REPORTS_DIR}/daily/{date}/...
 # PORTFOLIO_DATA_DIR overrides this for portfolio-only reports if you need them split.
@@ -150,12 +150,12 @@ async def reset_portfolio_stage(
     After calling this, an auto run will re-run Phase 3 from scratch
     (Phases 1 & 2 are skipped if their cached results still exist).
     """
-    from tradingagents.portfolio.report_store import ReportStore
+    from tradingagents.portfolio.store_factory import create_report_store
     date = params.get("date")
     portfolio_id = params.get("portfolio_id")
     if not date or not portfolio_id:
         raise HTTPException(status_code=422, detail="date and portfolio_id are required")
-    store = ReportStore()
+    store = create_report_store()
    deleted = store.clear_portfolio_stage(date, portfolio_id)
    logger.info("reset_portfolio_stage date=%s portfolio=%s deleted=%s user=%s", date, portfolio_id, deleted, user["user_id"])
    return {"deleted": deleted, "date": date, "portfolio_id": portfolio_id}
@@ -9,10 +9,12 @@ from tradingagents.graph.trading_graph import TradingAgentsGraph
 from tradingagents.graph.scanner_graph import ScannerGraph
 from tradingagents.graph.portfolio_graph import PortfolioGraph
 from tradingagents.default_config import DEFAULT_CONFIG
-from tradingagents.report_paths import get_market_dir, get_ticker_dir
+from tradingagents.report_paths import get_market_dir, get_ticker_dir, get_daily_dir, generate_run_id
-from tradingagents.portfolio.report_store import ReportStore
+from tradingagents.portfolio.store_factory import create_report_store
 from tradingagents.daily_digest import append_to_digest
 from tradingagents.agents.utils.json_utils import extract_json
+from tradingagents.observability import RunLogger, set_run_logger

 logger = logging.getLogger("agent_os.engine")
@@ -61,6 +63,48 @@ def _tickers_from_decision(decision: dict) -> list[str]:
 # Maximum characters of prompt/response for the full fields (generous limit)
 _MAX_FULL_LEN = 50_000

+# Keywords in tool output that indicate the error was handled gracefully
+_GRACEFUL_SKIP_KEYWORDS = ("gracefully", "fallback", "skipped")
+
+# ──────────────────────────────────────────────────────────────────────────────
+# Tool-name → primary service mapping (best-effort, used for display only)
+# ──────────────────────────────────────────────────────────────────────────────
+_TOOL_SERVICE_MAP: Dict[str, str] = {
+    # Core stock APIs
+    "get_stock_data": "yfinance",
+    "get_indicators": "yfinance",
+    # Fundamental data
+    "get_fundamentals": "yfinance",
+    "get_balance_sheet": "yfinance",
+    "get_cashflow": "yfinance",
+    "get_income_statement": "yfinance",
+    "get_ttm_analysis": "yfinance (derived)",
+    "get_peer_comparison": "yfinance (derived)",
+    "get_sector_relative": "yfinance (derived)",
+    "get_macro_regime": "yfinance (derived)",
+    # News
+    "get_news": "yfinance",
+    "get_global_news": "yfinance",
+    "get_insider_transactions": "finnhub",
+    # Scanner
+    "get_market_movers": "yfinance",
+    "get_market_indices": "finnhub",
+    "get_sector_performance": "finnhub",
+    "get_industry_performance": "yfinance",
+    "get_topic_news": "finnhub",
+    "get_earnings_calendar": "finnhub",
+    "get_economic_calendar": "finnhub",
+    # Finviz smart money
+    "get_insider_buying_stocks": "finviz",
+    "get_unusual_volume_stocks": "finviz",
+    "get_breakout_accumulation_stocks": "finviz",
+    # Portfolio (local)
+    "get_enriched_holdings": "local",
+    "compute_portfolio_risk_metrics": "local",
+    "load_portfolio_risk_metrics": "local",
+    "load_portfolio_decision": "local",
+}
+

 class LangGraphEngine:
     """Orchestrates LangGraph pipeline executions and streams events."""
@@ -74,6 +118,32 @@ class LangGraphEngine:
         self._node_prompts: Dict[str, Dict[str, str]] = {}
         # Track the human-readable identifier (ticker / "MARKET" / portfolio_id) per run
         self._run_identifiers: Dict[str, str] = {}
+        # Track RunLogger instances per run for JSONL persistence
+        self._run_loggers: Dict[str, RunLogger] = {}
+
+    # ------------------------------------------------------------------
+    # Run logger lifecycle
+    # ------------------------------------------------------------------
+
+    def _start_run_logger(self, run_id: str) -> RunLogger:
+        """Create and register a ``RunLogger`` for the given run."""
+        rl = RunLogger()
+        self._run_loggers[run_id] = rl
+        set_run_logger(rl)
+        return rl
+
+    def _finish_run_logger(self, run_id: str, log_dir: Path) -> None:
+        """Persist the run log to *log_dir*/run_log.jsonl and clean up."""
+        rl = self._run_loggers.pop(run_id, None)
+        if rl is None:
+            return
+        try:
+            log_dir.mkdir(parents=True, exist_ok=True)
+            rl.write_log(log_dir / "run_log.jsonl")
+        except Exception:
+            logger.exception("Failed to write run log for run=%s", run_id)
+        finally:
+            set_run_logger(None)

     # ------------------------------------------------------------------
     # Run helpers
@@ -85,9 +155,14 @@ class LangGraphEngine:
         """Run the 3-phase macro scanner and stream events."""
         date = params.get("date", time.strftime("%Y-%m-%d"))

+        # Generate a short run_id for report namespacing
+        short_rid = generate_run_id()
+        store = create_report_store(run_id=short_rid)
+
+        rl = self._start_run_logger(run_id)
         scanner = ScannerGraph(config=self.config)

-        logger.info("Starting SCAN run=%s date=%s", run_id, date)
+        logger.info("Starting SCAN run=%s date=%s rid=%s", run_id, date, short_rid)
         yield self._system_log(f"Starting macro scan for {date}")

         initial_state = {
@@ -105,7 +180,9 @@ class LangGraphEngine:
         self._run_identifiers[run_id] = "MARKET"
         final_state: Dict[str, Any] = {}

-        async for event in scanner.graph.astream_events(initial_state, version="v2"):
+        async for event in scanner.graph.astream_events(
+            initial_state, version="v2", config={"callbacks": [rl.callback]}
+        ):
             # Capture the complete final state from the root graph's terminal event.
             # LangGraph v2 emits one root-level on_chain_end (parent_ids=[], no
             # langgraph_node in metadata) whose data.output is the full accumulated state.
@@ -133,11 +210,11 @@ class LangGraphEngine:
         except Exception as exc:
             logger.warning("SCAN fallback ainvoke failed run=%s: %s", run_id, exc)

-        # Save scan reports to disk
+        # Save scan reports
         if final_state:
-            yield self._system_log("Saving scan reports to disk…")
+            yield self._system_log("Saving scan reports…")
             try:
-                save_dir = get_market_dir(date)
+                save_dir = get_market_dir(date, run_id=short_rid)
                 save_dir.mkdir(parents=True, exist_ok=True)

                 for key in (
@@ -151,12 +228,12 @@ class LangGraphEngine:
                     if content:
                         (save_dir / f"{key}.md").write_text(content)

-                # Parse and save macro_scan_summary.json via ReportStore for downstream use
+                # Parse and save macro_scan_summary.json via store for downstream use
                 summary_text = final_state.get("macro_scan_summary", "")
                 if summary_text:
                     try:
                         summary_data = extract_json(summary_text)
-                        ReportStore().save_scan(date, summary_data)
+                        store.save_scan(date, summary_data)
                     except (ValueError, KeyError, TypeError):
                         logger.warning(
                             "macro_scan_summary for date=%s is not valid JSON "
@@ -186,6 +263,7 @@ class LangGraphEngine:
                 yield self._system_log(f"Warning: could not save scan reports: {exc}")

         logger.info("Completed SCAN run=%s", run_id)
+        self._finish_run_logger(run_id, get_market_dir(date, run_id=short_rid))

     async def run_pipeline(
         self, run_id: str, params: Dict[str, Any]
@@ -195,8 +273,14 @@ class LangGraphEngine:
         date = params.get("date", time.strftime("%Y-%m-%d"))
         analysts = params.get("analysts", ["market", "news", "fundamentals"])

+        # Generate a short run_id for report namespacing
+        short_rid = generate_run_id()
+        store = create_report_store(run_id=short_rid)
+
+        rl = self._start_run_logger(run_id)
+
         logger.info(
-            "Starting PIPELINE run=%s ticker=%s date=%s", run_id, ticker, date
+            "Starting PIPELINE run=%s ticker=%s date=%s rid=%s", run_id, ticker, date, short_rid
         )
         yield self._system_log(f"Starting analysis pipeline for {ticker} on {date}")
@@ -215,7 +299,10 @@ class LangGraphEngine:
         async for event in graph_wrapper.graph.astream_events(
             initial_state,
             version="v2",
-            config={"recursion_limit": graph_wrapper.propagator.max_recur_limit},
+            config={
+                "recursion_limit": graph_wrapper.propagator.max_recur_limit,
+                "callbacks": [rl.callback],
+            },
         ):
             # Capture the complete final state from the root graph's terminal event.
             if self._is_root_chain_end(event):
@@ -246,19 +333,19 @@ class LangGraphEngine:
         except Exception as exc:
             logger.warning("PIPELINE fallback ainvoke failed run=%s: %s", run_id, exc)

-        # Save pipeline reports to disk
+        # Save pipeline reports
         if final_state:
             yield self._system_log(f"Saving analysis report for {ticker}…")
             try:
-                save_dir = get_ticker_dir(date, ticker)
+                save_dir = get_ticker_dir(date, ticker, run_id=short_rid)
                 save_dir.mkdir(parents=True, exist_ok=True)

                 # Sanitize final_state to remove non-JSON-serializable objects
                 # (e.g. LangChain HumanMessage, AIMessage objects in "messages")
                 serializable_state = self._sanitize_for_json(final_state)

-                # Save JSON via ReportStore (complete_report.json)
-                ReportStore().save_analysis(date, ticker, serializable_state)
+                # Save JSON via store (complete_report.json)
+                store.save_analysis(date, ticker, serializable_state)

                 # Write human-readable complete_report.md
                 self._write_complete_report_md(final_state, ticker, save_dir)
@@ -279,6 +366,7 @@ class LangGraphEngine:
             yield self._system_log(f"Warning: could not save analysis report for {ticker}: {exc}")

         logger.info("Completed PIPELINE run=%s", run_id)
+        self._finish_run_logger(run_id, get_ticker_dir(date, ticker, run_id=short_rid))

     async def run_portfolio(
         self, run_id: str, params: Dict[str, Any]
@@ -287,9 +375,17 @@ class LangGraphEngine:
         date = params.get("date", time.strftime("%Y-%m-%d"))
         portfolio_id = params.get("portfolio_id", "main_portfolio")

+        # Generate a short run_id for report namespacing
+        short_rid = generate_run_id()
+        store = create_report_store(run_id=short_rid)
+        # A reader store with no run_id resolves to the latest run for loading
+        reader_store = create_report_store()
+
+        rl = self._start_run_logger(run_id)
+
         logger.info(
-            "Starting PORTFOLIO run=%s portfolio=%s date=%s",
-            run_id, portfolio_id, date,
+            "Starting PORTFOLIO run=%s portfolio=%s date=%s rid=%s",
+            run_id, portfolio_id, date, short_rid,
         )
         yield self._system_log(
             f"Starting portfolio manager for {portfolio_id} on {date}"
@@ -297,19 +393,33 @@ class LangGraphEngine:

         portfolio_graph = PortfolioGraph(config=self.config)

-        # Load scan summary and per-ticker analyses from the daily report folder
-        store = ReportStore()
-        scan_summary = store.load_scan(date) or {}
+        # Load scan summary and per-ticker analyses from the latest report
+        scan_summary = reader_store.load_scan(date) or {}
         ticker_analyses: Dict[str, Any] = {}

-        from tradingagents.report_paths import get_daily_dir
+        # Search both run-scoped and legacy flat layouts for ticker directories
         daily_dir = get_daily_dir(date)
+        search_dirs: list[Path] = []
+        runs_dir = daily_dir / "runs"
+        if runs_dir.exists():
+            for run_dir in runs_dir.iterdir():
+                if run_dir.is_dir():
+                    search_dirs.append(run_dir)
         if daily_dir.exists():
-            for ticker_dir in daily_dir.iterdir():
-                if ticker_dir.is_dir() and ticker_dir.name not in ("market", "portfolio"):
-                    analysis = store.load_analysis(date, ticker_dir.name)
+            search_dirs.append(daily_dir)
+
+        seen_tickers: set[str] = set()
+        for base in search_dirs:
+            for ticker_dir in base.iterdir():
+                if (
+                    ticker_dir.is_dir()
+                    and ticker_dir.name not in ("market", "portfolio", "runs")
+                    and ticker_dir.name.upper() not in seen_tickers
+                ):
+                    analysis = reader_store.load_analysis(date, ticker_dir.name)
                     if analysis:
-                        ticker_analyses[ticker_dir.name] = analysis
+                        ticker_analyses[ticker_dir.name.upper()] = analysis
+                        seen_tickers.add(ticker_dir.name.upper())

         if scan_summary:
             yield self._system_log(f"Loaded macro scan summary for {date}")
@@ -362,7 +472,7 @@ class LangGraphEngine:
         final_state: Dict[str, Any] = {}

         async for event in portfolio_graph.graph.astream_events(
-            initial_state, version="v2"
+            initial_state, version="v2", config={"callbacks": [rl.callback]}
         ):
             if self._is_root_chain_end(event):
                 output = (event.get("data") or {}).get("output")
@@ -390,12 +500,16 @@ class LangGraphEngine:
         # Save portfolio reports (Holding Reviews, Risk Metrics, PM Decision, Execution Result)
         if final_state:
             try:
-                # 1. Holding Reviews — save the raw string via ReportStore
+                # 1. Holding Reviews — save the raw string via store
                 holding_reviews_str = final_state.get("holding_reviews")
                 if holding_reviews_str:
                     try:
                         reviews = json.loads(holding_reviews_str) if isinstance(holding_reviews_str, str) else holding_reviews_str
-                        store.save_holding_reviews(date, portfolio_id, reviews)
+                        if isinstance(reviews, dict):
+                            for ticker, review_data in reviews.items():
+                                store.save_holding_review(date, ticker, review_data)
+                        else:
+                            logger.warning("Unexpected holding_reviews format run=%s: %s", run_id, type(reviews))
                     except Exception as exc:
                         logger.warning("Failed to save holding_reviews run=%s: %s", run_id, exc)
@@ -432,6 +546,7 @@ class LangGraphEngine:
             yield self._system_log(f"Warning: could not save portfolio reports: {exc}")

         logger.info("Completed PORTFOLIO run=%s", run_id)
+        self._finish_run_logger(run_id, get_daily_dir(date, run_id=short_rid) / "portfolio")

     async def run_trade_execution(
         self, run_id: str, date: str, portfolio_id: str, decision: dict, prices: dict,
@@ -454,7 +569,7 @@ class LangGraphEngine:
             logger.warning("TRADE_EXECUTION run=%s: no prices available — execution may produce incomplete results", run_id)
             yield self._system_log(f"Warning: no prices found for {portfolio_id} on {date} — trade execution may be incomplete.")

-        _store = store or ReportStore()
+        _store = store or create_report_store()

         try:
             repo = PortfolioRepository()
@@ -480,12 +595,18 @@ class LangGraphEngine:
         date = params.get("date", time.strftime("%Y-%m-%d"))
         force = params.get("force", False)

+        # Use a reader store (no run_id) for skip-if-exists checks.
+        # Each sub-phase (run_scan, run_pipeline, run_portfolio) creates
+        # its own writer store with a fresh run_id internally.
+        store = create_report_store()
+
+        self._start_run_logger(run_id)  # auto-run's own logger; sub-phases create their own
+
         logger.info("Starting AUTO run=%s date=%s force=%s", run_id, date, force)
         yield self._system_log(f"Starting full auto workflow for {date} (force={force})")

         # Phase 1: Market scan
         yield self._system_log("Phase 1/3: Running market scan…")
-        store = ReportStore()
         if not force and store.load_scan(date):
             yield self._system_log(f"Phase 1: Macro scan for {date} already exists, skipping.")
         else:
@@ -621,6 +742,7 @@ class LangGraphEngine:
                 yield evt

         logger.info("Completed AUTO run=%s", run_id)
+        self._finish_run_logger(run_id, get_daily_dir(date))

     # ------------------------------------------------------------------
     # Report helpers
@@ -973,7 +1095,9 @@ class LangGraphEngine:
             full_input = str(inp)[:_MAX_FULL_LEN]
             tool_input = self._truncate(str(inp))

-            logger.info("Tool start tool=%s node=%s run=%s", name, node_name, run_id)
+            service = _TOOL_SERVICE_MAP.get(name, "")
+
+            logger.info("Tool start tool=%s service=%s node=%s run=%s", name, service, node_name, run_id)

             return {
                 "id": event.get("run_id", f"tool_{time.time_ns()}").strip(),
@@ -985,6 +1109,8 @@ class LangGraphEngine:
                 "message": f"▶ Tool: {name}"
                 + (f" | {tool_input}" if tool_input else ""),
                 "prompt": full_input,
+                "service": service,
+                "status": "running",
                 "metrics": {},
             }
         except Exception:
@@ -996,13 +1122,37 @@ class LangGraphEngine:
         try:
             full_output = ""
             tool_output = ""
+            is_error = False
+            error_message = ""
+            graceful = False
             out = (event.get("data") or {}).get("output")
             if out is not None:
                 raw = self._extract_content(out)
                 full_output = raw[:_MAX_FULL_LEN]
                 tool_output = self._truncate(raw)
+                # Detect errors in tool output
+                if raw.startswith("Error") or raw.startswith("Error calling "):
+                    is_error = True
+                    error_message = raw[:500]
+                # Detect graceful degradation (vendor fallback / empty-but-ok)
+                raw_lower = raw.lower()
+                if any(kw in raw_lower for kw in _GRACEFUL_SKIP_KEYWORDS):
+                    graceful = True
+            # Some LangGraph versions pass errors through the event status
+            evt_status = (event.get("data") or {}).get("status")
+            if evt_status == "error":
+                is_error = True
+                if not error_message:
+                    error_message = tool_output or "Unknown tool error"

-            logger.info("Tool end tool=%s node=%s run=%s", name, node_name, run_id)
+            service = _TOOL_SERVICE_MAP.get(name, "")
+            status = "error" if is_error else ("graceful_skip" if graceful else "success")
+            icon = "✗" if is_error else ("⚠" if graceful else "✓")
+
+            logger.info(
+                "Tool end tool=%s status=%s node=%s run=%s",
+                name, status, node_name, run_id,
+            )

             return {
                 "id": f"{event.get('run_id', 'tool_end')}_{time.time_ns()}",
@@ -1011,9 +1161,12 @@ class LangGraphEngine:
                 "type": "tool_result",
                 "agent": node_name.upper(),
                 "identifier": identifier,
-                "message": f"✓ Tool result: {name}"
+                "message": f"{icon} Tool result: {name}"
                 + (f" | {tool_output}" if tool_output else ""),
                 "response": full_output,
+                "service": service,
+                "status": status,
+                "error": error_message if is_error else None,
                 "metrics": {},
             }
         except Exception:
@@ -77,7 +77,11 @@ const REQUIRED_PARAMS: Record<RunType, (keyof RunParams)[]> = {
 };

 /** Return the colour token for a given event type. */
-const eventColor = (type: AgentEvent['type']): string => {
+const eventColor = (type: AgentEvent['type'], status?: AgentEvent['status']): string => {
+  // Error events always show in red
+  if (status === 'error') return 'red.400';
+  // Graceful skips show in orange/yellow
+  if (status === 'graceful_skip') return 'orange.300';
   switch (type) {
     case 'tool': return 'purple.400';
     case 'tool_result': return 'purple.300';
@@ -88,7 +92,9 @@ const eventColor = (type: AgentEvent['type']): string => {
 };

 /** Return a short label badge for the event type. */
-const eventLabel = (type: AgentEvent['type']): string => {
+const eventLabel = (type: AgentEvent['type'], status?: AgentEvent['status']): string => {
+  if (status === 'error') return '❌';
+  if (status === 'graceful_skip') return '⚠️';
   switch (type) {
     case 'thought': return '💭';
     case 'tool': return '🔧';
@ -101,10 +107,20 @@ const eventLabel = (type: AgentEvent['type']): string => {
|
|||
|
||||
/** Short summary for terminal — no inline prompts, just agent + type. */
|
||||
const eventSummary = (evt: AgentEvent): string => {
|
||||
const svc = evt.service ? ` [${evt.service}]` : '';
|
||||
switch (evt.type) {
|
||||
case 'thought': return `Thinking… (${evt.metrics?.model || 'LLM'})`;
|
||||
case 'tool': return evt.message.startsWith('✓') ? 'Tool result received' : `Tool call: ${evt.message.replace(/^▶ Tool: /, '').split(' | ')[0]}`;
|
||||
case 'tool_result': return `Tool done: ${evt.message.replace(/^✓ Tool result: /, '').split(' | ')[0]}`;
|
||||
case 'tool': {
|
||||
if (evt.message.startsWith('✓')) return 'Tool result received';
|
||||
const toolName = evt.message.replace(/^▶ Tool: /, '').split(' | ')[0];
|
||||
return `Tool call: ${toolName}${svc}`;
|
||||
}
|
||||
case 'tool_result': {
|
||||
const resultToolName = evt.message.replace(/^[✓✗⚠] Tool result: /, '').split(' | ')[0];
|
||||
if (evt.status === 'error') return `Tool error: ${resultToolName}${svc}`;
|
||||
if (evt.status === 'graceful_skip') return `Tool skipped: ${resultToolName}${svc}`;
|
||||
return `Tool done: ${resultToolName}${svc}`;
|
||||
}
|
||||
case 'result': return 'Completed';
|
||||
case 'log': return evt.message;
|
||||
default: return evt.type;
@@ -115,6 +131,12 @@ const eventSummary = (evt: AgentEvent): string => {
 const EventDetailModal: React.FC<{ event: AgentEvent | null; isOpen: boolean; onClose: () => void }> = ({ event, isOpen, onClose }) => {
   if (!event) return null;

+  const headerBadgeColor = event.status === 'error' ? 'red'
+    : event.status === 'graceful_skip' ? 'orange'
+    : event.type === 'result' ? 'green'
+    : event.type === 'tool' || event.type === 'tool_result' ? 'purple'
+    : 'cyan';
+
   return (
     <Modal isOpen={isOpen} onClose={onClose} size="4xl" scrollBehavior="inside">
       <ModalOverlay backdropFilter="blur(6px)" />
@@ -122,10 +144,13 @@ const EventDetailModal: React.FC<{ event: AgentEvent | null; isOpen: boolean; on
         <ModalCloseButton />
         <ModalHeader borderBottomWidth="1px" borderColor="whiteAlpha.100">
           <HStack>
-            <Badge colorScheme={event.type === 'result' ? 'green' : event.type === 'tool' || event.type === 'tool_result' ? 'purple' : 'cyan'} fontSize="sm">
+            <Badge colorScheme={headerBadgeColor} fontSize="sm">
               {event.type.toUpperCase()}
             </Badge>
             <Badge variant="outline" fontSize="sm">{event.agent}</Badge>
+            {event.status === 'error' && <Badge colorScheme="red" variant="solid" fontSize="sm">ERROR</Badge>}
+            {event.status === 'graceful_skip' && <Badge colorScheme="orange" variant="solid" fontSize="sm">GRACEFUL SKIP</Badge>}
+            {event.service && <Badge colorScheme="teal" fontSize="sm">{event.service}</Badge>}
             <Text fontSize="sm" color="whiteAlpha.400" fontWeight="normal">{event.timestamp}</Text>
           </HStack>
         </ModalHeader>
@@ -134,6 +159,7 @@ const EventDetailModal: React.FC<{ event: AgentEvent | null; isOpen: boolean; on
             <TabList mb={4}>
               {event.prompt && <Tab>Prompt / Request</Tab>}
               {(event.response || (event.type === 'result' && event.message)) && <Tab>Response</Tab>}
+              {event.error && <Tab color="red.400">Error</Tab>}
               <Tab>Summary</Tab>
               {event.metrics && <Tab>Metrics</Tab>}
             </TabList>
@@ -149,13 +175,22 @@ const EventDetailModal: React.FC<{ event: AgentEvent | null; isOpen: boolean; on
               )}
               {(event.response || (event.type === 'result' && event.message)) && (
                 <TabPanel p={0}>
-                  <Box bg="blackAlpha.500" p={4} borderRadius="md" border="1px solid" borderColor="whiteAlpha.100" maxH="60vh" overflowY="auto">
-                    <Text fontSize="xs" fontFamily="mono" whiteSpace="pre-wrap" wordBreak="break-word" color="whiteAlpha.900">
+                  <Box bg="blackAlpha.500" p={4} borderRadius="md" border="1px solid" borderColor={event.status === 'error' ? 'red.700' : 'whiteAlpha.100'} maxH="60vh" overflowY="auto">
+                    <Text fontSize="xs" fontFamily="mono" whiteSpace="pre-wrap" wordBreak="break-word" color={event.status === 'error' ? 'red.200' : 'whiteAlpha.900'}>
                       {event.response || event.message}
                     </Text>
                   </Box>
                 </TabPanel>
               )}
+              {event.error && (
+                <TabPanel p={0}>
+                  <Box bg="red.900" p={4} borderRadius="md" border="1px solid" borderColor="red.600" maxH="60vh" overflowY="auto">
+                    <Text fontSize="xs" fontFamily="mono" whiteSpace="pre-wrap" wordBreak="break-word" color="red.200">
+                      {event.error}
+                    </Text>
+                  </Box>
+                </TabPanel>
+              )}
               <TabPanel p={0}>
                 <Box bg="blackAlpha.500" p={4} borderRadius="md" border="1px solid" borderColor="whiteAlpha.100">
                   <Text fontSize="sm" whiteSpace="pre-wrap" wordBreak="break-word" color="whiteAlpha.900">
@@ -169,6 +204,9 @@ const EventDetailModal: React.FC<{ event: AgentEvent | null; isOpen: boolean; on
                     {event.metrics.model && event.metrics.model !== 'unknown' && (
                       <HStack><Text fontSize="sm" color="whiteAlpha.600" minW="80px">Model:</Text><Code colorScheme="blue" fontSize="sm">{event.metrics.model}</Code></HStack>
                     )}
+                    {event.service && (
+                      <HStack><Text fontSize="sm" color="whiteAlpha.600" minW="80px">Service:</Text><Code colorScheme="teal" fontSize="sm">{event.service}</Code></HStack>
+                    )}
                     {event.metrics.tokens_in != null && event.metrics.tokens_in > 0 && (
                       <HStack><Text fontSize="sm" color="whiteAlpha.600" minW="80px">Tokens In:</Text><Code>{event.metrics.tokens_in}</Code></HStack>
                     )}
@@ -193,11 +231,20 @@ const EventDetailModal: React.FC<{ event: AgentEvent | null; isOpen: boolean; on
 };

 // ─── Detail card for a single event in the drawer ─────────────────────
-const EventDetail: React.FC<{ event: AgentEvent; onOpenModal?: (evt: AgentEvent) => void }> = ({ event, onOpenModal }) => (
+const EventDetail: React.FC<{ event: AgentEvent; onOpenModal?: (evt: AgentEvent) => void }> = ({ event, onOpenModal }) => {
+  const badgeColor = event.status === 'error' ? 'red'
+    : event.status === 'graceful_skip' ? 'orange'
+    : event.type === 'result' ? 'green'
+    : event.type === 'tool' || event.type === 'tool_result' ? 'purple'
+    : 'cyan';
+
+  return (
   <VStack align="stretch" spacing={4}>
     <HStack>
-      <Badge colorScheme={event.type === 'result' ? 'green' : event.type === 'tool' || event.type === 'tool_result' ? 'purple' : 'cyan'}>{event.type.toUpperCase()}</Badge>
+      <Badge colorScheme={badgeColor}>{event.type.toUpperCase()}</Badge>
       <Badge variant="outline">{event.agent}</Badge>
+      {event.status === 'error' && <Badge colorScheme="red" variant="solid">ERROR</Badge>}
+      {event.status === 'graceful_skip' && <Badge colorScheme="orange" variant="solid">GRACEFUL SKIP</Badge>}
       <Text fontSize="xs" color="whiteAlpha.400">{event.timestamp}</Text>
       {onOpenModal && (
         <Button size="xs" variant="ghost" colorScheme="cyan" ml="auto" onClick={() => onOpenModal(event)}>
@@ -206,6 +253,14 @@ const EventDetail: React.FC<{ event: AgentEvent; onOpenModal?: (evt: AgentEvent)
       )}
     </HStack>

+    {/* Service info for tool events */}
+    {event.service && (
+      <Box>
+        <Text fontSize="xs" fontWeight="bold" color="whiteAlpha.600" mb={1}>Service</Text>
+        <Code colorScheme="teal" fontSize="sm">{event.service}</Code>
+      </Box>
+    )}
+
     {event.metrics?.model && event.metrics.model !== 'unknown' && (
       <Box>
         <Text fontSize="xs" fontWeight="bold" color="whiteAlpha.600" mb={1}>Model</Text>
@@ -227,6 +282,18 @@ const EventDetail: React.FC<{ event: AgentEvent; onOpenModal?: (evt: AgentEvent)
       </Box>
     )}

+    {/* Error display */}
+    {event.error && (
+      <Box>
+        <Text fontSize="xs" fontWeight="bold" color="red.400" mb={1}>Error</Text>
+        <Box bg="red.900" p={3} borderRadius="md" border="1px solid" borderColor="red.600" maxH="200px" overflowY="auto">
+          <Text fontSize="xs" fontFamily="mono" whiteSpace="pre-wrap" wordBreak="break-word" color="red.200">
+            {event.error}
+          </Text>
+        </Box>
+      </Box>
+    )}
+
     {/* Show prompt if available */}
     {event.prompt && (
       <Box>
@@ -243,8 +310,8 @@ const EventDetail: React.FC<{ event: AgentEvent; onOpenModal?: (evt: AgentEvent)
     {event.response && (
       <Box>
         <Text fontSize="xs" fontWeight="bold" color="whiteAlpha.600" mb={1}>Response</Text>
-        <Box bg="blackAlpha.500" p={3} borderRadius="md" border="1px solid" borderColor="green.900" maxH="200px" overflowY="auto">
-          <Text fontSize="xs" fontFamily="mono" whiteSpace="pre-wrap" wordBreak="break-word" color="whiteAlpha.900">
+        <Box bg="blackAlpha.500" p={3} borderRadius="md" border="1px solid" borderColor={event.status === 'error' ? 'red.700' : 'green.900'} maxH="200px" overflowY="auto">
+          <Text fontSize="xs" fontFamily="mono" whiteSpace="pre-wrap" wordBreak="break-word" color={event.status === 'error' ? 'red.200' : 'whiteAlpha.900'}>
            {event.response.length > 1000 ? event.response.substring(0, 1000) + '…' : event.response}
          </Text>
        </Box>
@@ -252,7 +319,7 @@ const EventDetail: React.FC<{ event: AgentEvent; onOpenModal?: (evt: AgentEvent)
     )}

     {/* Fallback: show message if no prompt/response */}
-    {!event.prompt && !event.response && (
+    {!event.prompt && !event.response && !event.error && (
       <Box>
         <Text fontSize="xs" fontWeight="bold" color="whiteAlpha.600" mb={1}>Message</Text>
         <Box bg="blackAlpha.500" p={3} borderRadius="md" border="1px solid" borderColor="whiteAlpha.100" maxH="300px" overflowY="auto">
@@ -270,7 +337,8 @@ const EventDetail: React.FC<{ event: AgentEvent; onOpenModal?: (evt: AgentEvent)
       </Box>
     )}
   </VStack>
-);
+  );
+};

 // ─── Detail drawer showing all events for a given graph node ──────────
 const NodeEventsDetail: React.FC<{ nodeId: string; identifier?: string | null; events: AgentEvent[]; onOpenModal: (evt: AgentEvent) => void }> = ({ nodeId, identifier, events, onOpenModal }) => {
@@ -699,18 +767,24 @@ export const Dashboard: React.FC = () => {
              py={1}
              borderRadius="md"
              cursor="pointer"
-             _hover={{ bg: 'whiteAlpha.100' }}
+             bg={evt.status === 'error' ? 'red.900' : evt.status === 'graceful_skip' ? 'orange.900' : undefined}
+             borderLeft={evt.status === 'error' ? '3px solid' : evt.status === 'graceful_skip' ? '3px solid' : undefined}
+             borderColor={evt.status === 'error' ? 'red.500' : evt.status === 'graceful_skip' ? 'orange.500' : undefined}
+             _hover={{ bg: evt.status === 'error' ? 'red.800' : 'whiteAlpha.100' }}
              onClick={() => openEventDetail(evt)}
              transition="background 0.15s"
            >
              <Flex gap={2} align="center">
                <Text color="whiteAlpha.400" minW="52px" flexShrink={0}>[{evt.timestamp}]</Text>
-               <Text flexShrink={0}>{eventLabel(evt.type)}</Text>
-               <Text color={eventColor(evt.type)} fontWeight="bold" flexShrink={0}>
+               <Text flexShrink={0}>{eventLabel(evt.type, evt.status)}</Text>
+               <Text color={eventColor(evt.type, evt.status)} fontWeight="bold" flexShrink={0}>
                  {evt.agent}
                </Text>
+               {evt.service && (
+                 <Text color="teal.300" fontSize="2xs" flexShrink={0}>[{evt.service}]</Text>
+               )}
                <ChevronRight size={10} style={{ flexShrink: 0, opacity: 0.4 }} />
-               <Text color="whiteAlpha.700" isTruncated>{eventSummary(evt)}</Text>
+               <Text color={evt.status === 'error' ? 'red.300' : 'whiteAlpha.700'} isTruncated>{eventSummary(evt)}</Text>
                <Eye size={12} style={{ flexShrink: 0, opacity: 0.3, marginLeft: 'auto' }} />
              </Flex>
            </Box>
@@ -15,6 +15,12 @@ export interface AgentEvent {
   identifier?: string;
   node_id?: string;
   parent_node_id?: string;
+  /** Data service used by this tool (e.g. "yfinance", "finnhub", "finviz"). */
+  service?: string;
+  /** Tool execution status: "running", "success", "error", or "graceful_skip". */
+  status?: 'running' | 'success' | 'error' | 'graceful_skip';
+  /** Error message when status is "error". */
+  error?: string | null;
   metrics?: {
     model: string;
     tokens_in?: number;
@@ -1,22 +1,33 @@
 # Current Milestone

-Smart Money Scanner added to scanner pipeline (Phase 1b). `finvizfinance` integration with Golden Overlap strategy in macro_synthesis. 18 agent factories. All tests passing (2 pre-existing failures excluded).
+Smart Money Scanner added to scanner pipeline (Phase 1b). MongoDB report store + run-ID namespacing + reflexion memory added. PR#106 review findings addressed (ADR 016). 18 agent factories. All tests passing (886 passed, 14 skipped).

 # Recent Progress

-- **Smart Money Scanner (current branch)**: 4th scanner node added to macro pipeline
-  - `tradingagents/agents/scanners/smart_money_scanner.py` — Phase 1b node, runs sequentially after sector_scanner
-  - `tradingagents/agents/utils/scanner_tools.py` — 3 zero-parameter Finviz tools: `get_insider_buying_stocks`, `get_unusual_volume_stocks`, `get_breakout_accumulation_stocks`
-  - `tradingagents/agents/utils/scanner_states.py` — Added `smart_money_report` field with `_last_value` reducer
-  - `tradingagents/graph/scanner_setup.py` — Topology: sector_scanner → smart_money_scanner → industry_deep_dive
-  - `tradingagents/graph/scanner_graph.py` — Instantiates smart_money_scanner with quick_llm
-  - `tradingagents/agents/scanners/macro_synthesis.py` — Golden Overlap instructions + smart_money_report in context
-  - `pyproject.toml` — Added `finvizfinance>=0.14.0` dependency
-  - `docs/agent/decisions/014-finviz-smart-money-scanner.md` — ADR documenting all design decisions
-  - Tests: 6 new mocked tests in `tests/unit/test_scanner_mocked.py`, 1 fix in `tests/unit/test_scanner_graph.py`
+- **PR#106 review fixes (ADR 016)**:
+  - Fix 1: `save_holding_review` iteration — was passing `portfolio_id` as ticker; now iterates per ticker
+  - Fix 2: `contextvars.ContextVar` replaces `threading.local` for RunLogger — async-safe
+  - Fix 3: `list_pm_decisions` — added `{"_id": 0}` projection to exclude non-serializable ObjectId
+  - Fix 4: `ReflexionMemory.created_at` — native `datetime` for MongoDB, ISO string for local JSON fallback
+  - Fix 5: `write/read_latest_pointer` — accepts `base_dir` parameter; `ReportStore` passes its `_base_dir`
+  - Fix 6: `RunLogger.callback` — wired into all 3 `astream_events()` calls (scan, pipeline, portfolio)
+  - Fix 7: `MongoReportStore.__init__` — calls `ensure_indexes()` automatically
+  - `docs/agent/decisions/016-pr106-review-findings.md` — full writeup of all 13 findings and resolutions
+  - Tests: 14 new tests covering all 7 fixes
+- **MongoDB Report Store + Run-ID + Reflexion (current branch)**:
+  - `tradingagents/report_paths.py` — All path helpers accept optional `run_id` for run-scoped directories; `latest.json` pointer mechanism
+  - `tradingagents/portfolio/report_store.py` — `ReportStore` supports `run_id` + `latest.json` pointer for read resolution
+  - `tradingagents/portfolio/mongo_report_store.py` — MongoDB-backed report store (same interface as filesystem)
+  - `tradingagents/portfolio/store_factory.py` — Factory returns MongoDB or filesystem store based on config
+  - `tradingagents/memory/reflexion.py` — Reflexion memory: store decisions, record outcomes, build context for agent prompts
+  - `agent_os/backend/services/langgraph_engine.py` — Uses store factory + run_id for all run methods; fixed run_portfolio directory iteration for run-scoped layouts
+  - `tradingagents/default_config.py` — Added `mongo_uri` and `mongo_db` config keys
+  - `pyproject.toml` — Added `pymongo>=4.12.1` dependency
+  - Tests: 56 new tests (report_paths, report_store run_id, mongo store, reflexion, factory)
+  - `docs/agent/decisions/015-mongodb-report-store-reflexion.md` — ADR documenting all design decisions
+- **Smart Money Scanner**: 4th scanner node added to macro pipeline
 - **AgentOS**: Full-stack visual observability layer (FastAPI + React + ReactFlow)
-- **Portfolio Manager**: Phases 1–10 fully implemented (models, agents, CLI integration, stop-loss/take-profit)
-- **PR #32 merged**: Portfolio Manager data foundation
+- **Portfolio Manager**: Phases 1–10 fully implemented

 # In Progress
@@ -124,7 +124,7 @@ Integration points:
 - **LLM calls**: `_LLMCallbackHandler` (LangChain `BaseCallbackHandler`) — attach as callback to LLM constructors or graph invocations. Extracts model name from `invocation_params` / `serialized`, token counts from `usage_metadata`.
 - **Vendor calls**: `log_vendor_call()` — called from `route_to_vendor`.
 - **Tool calls**: `log_tool_call()` — called from `run_tool_loop()`.
-- **Thread-local context**: `set_run_logger()` / `get_run_logger()` for passing logger to vendor/tool layers without changing signatures.
+- **Context propagation**: `set_run_logger()` / `get_run_logger()` use `contextvars.ContextVar` for passing logger to vendor/tool layers without changing signatures. Asyncio-safe (isolated per task).

 `RunLogger.summary()` returns aggregated stats (total tokens, model breakdown, vendor success/fail counts). `RunLogger.write_log(path)` writes all events + summary to a JSON-lines file.
@@ -91,7 +91,7 @@
 | RunLogger | Accumulates structured events (llm, tool, vendor, report) for a single CLI run. Thread-safe. | `observability.py` |
 | _LLMCallbackHandler | LangChain `BaseCallbackHandler` that feeds LLM call events (model, tokens, latency) into a `RunLogger` | `observability.py` |
 | _Event | @dataclass: `kind`, `ts`, `data` — one JSON-line per event | `observability.py` |
-| set_run_logger / get_run_logger | Thread-local context for passing `RunLogger` to vendor/tool layers | `observability.py` |
+| set_run_logger / get_run_logger | `contextvars.ContextVar`-based context for passing `RunLogger` to vendor/tool layers. Asyncio-safe (isolated per task). | `observability.py` |

 ## Report Paths
@@ -0,0 +1,109 @@ (new file: docs/agent/decisions/015-mongodb-report-store-reflexion.md — shown below without diff markers, since every line is an addition)
# ADR 015 — MongoDB Report Store, Run-ID Namespacing, and Reflexion Memory

**Status**: accepted
**Date**: 2026-03-24
**Deciders**: @aguzererler

## Context

Three problems with the existing filesystem report store:

1. **Same-day overwrites** — Re-running `scan`, `pipeline`, or `auto` on the
   same day silently overwrites earlier results because all reports land in
   the same flat directory (`reports/daily/{date}/…`).

2. **Read/write consistency** — If we simply add a `run_id` to filenames or
   paths, all existing code that reads from fixed paths (e.g.
   `load_scan(date)`, `load_analysis(date, ticker)`, the directory iteration
   in `run_portfolio`) breaks.

3. **No learning from past decisions** — Agent decisions are fire-and-forget.
   There is no mechanism for agents to *reflect* on the accuracy of previous
   calls and adjust accordingly.

## Decisions

### 1. Run-ID Namespacing (Filesystem)

All path helpers in `report_paths.py` accept an optional `run_id`. When set:

```
reports/daily/{date}/runs/{run_id}/market/…
reports/daily/{date}/runs/{run_id}/{TICKER}/…
reports/daily/{date}/runs/{run_id}/portfolio/…
```
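
A usage sketch of the helpers named in this ADR (the date, ticker, and run-id values are illustrative; the no-`run_id` behavior of the path helper is an assumption based on the legacy layout):

```python
from tradingagents.report_paths import generate_run_id, get_ticker_dir

short_rid = generate_run_id()                    # e.g. "abc12345"
run_dir = get_ticker_dir("2026-03-24", "AAPL", run_id=short_rid)
# -> reports/daily/2026-03-24/runs/abc12345/AAPL
run_dir.mkdir(parents=True, exist_ok=True)

flat_dir = get_ticker_dir("2026-03-24", "AAPL")  # no run_id -> legacy flat layout
# -> reports/daily/2026-03-24/AAPL
```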

A `latest.json` pointer at the date level is updated on every write:

```json
{"run_id": "abc12345", "updated_at": "2026-03-24T12:00:00Z"}
```

Load methods resolve through the pointer when no `run_id` is specified,
falling back to the legacy flat layout for backward compatibility.

### 2. MongoDB Report Store

`MongoReportStore` stores each report as a MongoDB document with `run_id`,
`date`, `report_type`, `ticker`, and `portfolio_id` as natural keys. Multiple
runs on the same day create separate documents — no overwrites by design.

Load methods return the most recent document (sorted by `created_at DESC`)
unless a specific `run_id` is requested.

### 3. Store Factory

`create_report_store(run_id=…)` returns:
- `MongoReportStore` when `TRADINGAGENTS_MONGO_URI` is set (or `mongo_uri` param)
- `ReportStore` (filesystem) otherwise

MongoDB failures fall back to filesystem with a warning log.
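
A minimal sketch of the selection logic (the real factory also reads `DEFAULT_CONFIG`; the constructor arguments here are assumptions):

```python
import logging
import os

logger = logging.getLogger(__name__)

def create_report_store(run_id: str | None = None):
    """Return a MongoDB-backed store when configured, else the filesystem store."""
    uri = os.environ.get("TRADINGAGENTS_MONGO_URI")
    if uri:
        try:
            from tradingagents.portfolio.mongo_report_store import MongoReportStore
            return MongoReportStore(uri=uri, run_id=run_id)
        except Exception:
            logger.warning("MongoDB unavailable, falling back to the filesystem store")
    from tradingagents.portfolio.report_store import ReportStore
    return ReportStore(run_id=run_id)
```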

### 4. Reflexion Memory

`ReflexionMemory` stores decisions with rationale and later associates
outcomes. Backed by MongoDB when available, a local JSON file otherwise.

Key methods (usage sketch below):
- `record_decision(ticker, date, decision, rationale, confidence)`
- `record_outcome(ticker, decision_date, outcome)` — feedback loop
- `get_history(ticker, limit)` — recent decisions for a ticker
- `build_context(ticker, limit)` — formatted string for agent prompts
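
A usage sketch based on the method list above (constructor and argument values are illustrative):

```python
from tradingagents.memory.reflexion import ReflexionMemory

memory = ReflexionMemory()
memory.record_decision("AAPL", "2026-03-24", decision="BUY",
                       rationale="Strong TTM margins vs peers", confidence=0.7)
# ...later, once the outcome is known...
memory.record_outcome("AAPL", "2026-03-24", outcome="+4.2% over 5 days")

# Inject recent history into an agent prompt
context = memory.build_context("AAPL", limit=5)
```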

## Consequences & Constraints

### MUST

- **All report writes use a `run_id`** — engine methods generate one via
  `generate_run_id()` at the start of each run.
- **All report reads resolve through `latest.json`** — when no `run_id` is
  specified, the pointer file is consulted.
- **MongoDB is opt-in** — requires setting `TRADINGAGENTS_MONGO_URI`.
  Filesystem remains the default.
- **Factory failures degrade gracefully** — if MongoDB is unreachable, the
  filesystem store is used.

### MUST NOT

- **Never hard-code `ReportStore()` in engine run methods** — always use
  `create_report_store(run_id=…)`.
- **Never assume flat layout for reads** — the directory iteration in
  `run_portfolio` searches both `runs/*/` and the legacy flat layout.

### Actionable Rules

1. When writing a report, always use a store with `run_id` set.
2. When reading a report (for skip-if-exists checks or loading data),
   use a store *without* `run_id` — it will resolve to the latest run.
3. The `daily_digest.md` is always at the date level (shared across runs).
4. `pymongo >= 4.12` is installed as a regular dependency but is only
   loaded at runtime when a MongoDB URI is configured.

## Alternatives Considered

- **S3/GCS object store** — Rejected: adds a cloud dependency for a
  local-first tool. MongoDB is self-hostable.
- **SQLite for reports** — Rejected: lacks the flexible document model needed
  for heterogeneous report types.
- **Redis for report caching** — Already in use for data caching, but not
  suitable for persistent document storage.
@@ -0,0 +1,161 @@ (new file: docs/agent/decisions/016-pr106-review-findings.md — shown below without diff markers, since every line is an addition)
# ADR 016 — PR#106 Review Findings: Logging Strategy & MongoDB Models

**Status**: accepted
**Date**: 2026-03-25
**PR**: copilot/increase-observability-logging (PR#106)
**Reviewer**: Claude Code (PR#107 review)

---

## Summary

This ADR documents the bugs and architectural gaps found during the review of
PR#106 (observability logging + MongoDB report store + reflexion memory), along
with the solutions applied.

---

## Logging Strategy

### Finding 1 — LangChain callback never wired into graph execution

**Problem**: `RunLogger.callback` was created but never passed to
`astream_events()` or the LangGraph graph config. No LLM events would be
captured in the JSONL log.

**Solution**: Wire `rl.callback` into all three `astream_events()` calls
(`run_scan`, `run_pipeline`, `run_portfolio`) via the
`config={"callbacks": [rl.callback]}` parameter.
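
The relevant excerpt from `run_scan` in the engine diff above (the same pattern appears in the other two methods):

```python
rl = self._start_run_logger(run_id)
...
async for event in scanner.graph.astream_events(
    initial_state, version="v2", config={"callbacks": [rl.callback]}
):
    ...
```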

### Finding 2 — `log_tool_call` and `log_vendor_call` never called

**Problem**: These methods exist on `RunLogger` but nothing invokes them from
`run_tool_loop` or `route_to_vendor`.

**Solution**: Both call-sites are now wired:
- `run_tool_loop` (`tradingagents/agents/utils/tool_runner.py`) calls
  `rl.log_tool_call()` on every tool invocation (success, failure, unknown tool).
- `route_to_vendor` (`tradingagents/dataflows/interface.py`) calls
  `rl.log_vendor_call()` on every vendor call (success and failure with fallback).

### Finding 3 — `threading.local` incompatible with asyncio

**Problem**: `set_run_logger` / `get_run_logger` used `threading.local()`.
Since asyncio runs all coroutines on one thread, concurrent pipelines (via
`asyncio.gather` in `run_auto`) share the same thread-local slot.

**Solution**: Replace with `contextvars.ContextVar`, which is correctly isolated
per asyncio task.
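
A minimal sketch of the fix (the accessor names match `observability.py`; the exact body is an assumption):

```python
from contextvars import ContextVar
from typing import Optional

# Each asyncio task sees its own value, unlike threading.local on a shared loop thread
_run_logger_var: ContextVar[Optional["RunLogger"]] = ContextVar("run_logger", default=None)

def set_run_logger(rl: Optional["RunLogger"]) -> None:
    _run_logger_var.set(rl)

def get_run_logger() -> Optional["RunLogger"]:
    return _run_logger_var.get()
```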

### Finding 4 — `run_auto` log lands in flat date directory

**Problem**: `_finish_run_logger(run_id, get_daily_dir(date))` writes to the
flat date directory instead of a run-namespaced path.

**Status**: Acceptable for V1. Each sub-phase already writes its own
namespaced log. The top-level auto-run log is a summary.

### Finding 5 — JSONL log not emitted in real time

**Problem**: Events are buffered in memory until `_finish_run_logger`.

**Status**: Acceptable for V1. Consider periodic flush for long-running auto
runs in a future PR.

---

## MongoDB Models

### Finding 6 — `list_pm_decisions` returns raw ObjectId

**Problem**: The `find()` query returned full documents including `_id: ObjectId`,
which is not JSON-serializable.

**Solution**: Add `{"_id": 0}` projection to the `list_pm_decisions` query.
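
With pymongo, the projection is passed as the second argument to `find()`; the filter keys below are illustrative:

```python
def list_pm_decisions(collection, date: str) -> list[dict]:
    # {"_id": 0} excludes MongoDB's non-JSON-serializable ObjectId from each document
    return list(collection.find({"date": date, "report_type": "pm_decision"}, {"_id": 0}))
```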

### Finding 7 — `created_at` type inconsistency

**Problem**: `MongoReportStore` stores `created_at` as a native BSON `datetime`;
`ReflexionMemory` stored it as an ISO 8601 string. Within separate collections
this is consistent, but it creates maintenance confusion.

**Solution**: `ReflexionMemory.record_decision()` now stores a native `datetime`
when writing to MongoDB, and only converts to an ISO string for the local JSON
fallback (which has no datetime type).
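
A sketch of the convention (the helper shape is an assumption; the datetime/ISO split is the documented behavior):

```python
from datetime import datetime, timezone

def created_at_value(use_mongo: bool):
    """created_at as stored by ReflexionMemory: native datetime for Mongo, ISO string for JSON."""
    now = datetime.now(timezone.utc)
    return now if use_mongo else now.isoformat()  # JSON has no datetime type
```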

### Finding 8 — No TTL index

**Problem**: Reports accumulate indefinitely in MongoDB.

**Status**: *Deferred to a follow-up issue*. Requires a retention-policy
decision before implementation.

### Finding 9 — Synchronous `pymongo` in async FastAPI

**Problem**: All MongoDB calls block the asyncio event loop.

**Status**: Acceptable for V1. Plan a `motor` migration before production
deployment.

### Finding 10 — `MongoClient` created per instance

**Problem**: Each `MongoReportStore` instantiation creates a new `MongoClient`
with its own connection pool.

**Status**: Acceptable for V1. Plan a singleton via the FastAPI app lifespan.

### Finding 11 — `ensure_indexes()` not called in `__init__`

**Problem**: Indexes were only created when going through the factory.
Direct instantiation skips them.

**Solution**: Move the `ensure_indexes()` call into `MongoReportStore.__init__`
so indexes are always created regardless of construction path.
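
A minimal sketch (constructor arguments and index fields are assumptions; the point is the unconditional call):

```python
from pymongo import MongoClient

class MongoReportStore:
    def __init__(self, uri: str, db_name: str = "tradingagents", run_id: str | None = None):
        self._client = MongoClient(uri)
        self._db = self._client[db_name]
        self._run_id = run_id
        self.ensure_indexes()  # runs on every construction path, not just the factory

    def ensure_indexes(self) -> None:
        # Natural-key index per ADR 015 (field choice illustrative)
        self._db.reports.create_index([("date", 1), ("report_type", 1), ("run_id", 1)])
```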

### Finding 12 — `write_latest_pointer`/`read_latest_pointer` use global `REPORTS_ROOT`

**Problem**: These functions use the global `REPORTS_ROOT` module constant,
ignoring `ReportStore._base_dir`. When the base dir differs via env vars
(`PORTFOLIO_DATA_DIR`, `TRADINGAGENTS_REPORTS_DIR`), the pointer file lands
in the wrong directory tree.

**Solution**: Add an optional `base_dir` parameter to both functions, defaulting
to `REPORTS_ROOT`. `ReportStore` now passes its `_base_dir` to both calls.
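
A sketch of the new signature (the directory layout under `base_dir` is an assumption, and `REPORTS_ROOT` here stands in for the real module constant in `report_paths.py`):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

REPORTS_ROOT = Path("reports")  # stand-in for the module constant

def write_latest_pointer(date: str, run_id: str, base_dir: Path = REPORTS_ROOT) -> None:
    pointer = base_dir / "daily" / date / "latest.json"
    pointer.parent.mkdir(parents=True, exist_ok=True)
    payload = {"run_id": run_id, "updated_at": datetime.now(timezone.utc).isoformat()}
    pointer.write_text(json.dumps(payload))
```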

---

## Holding Reviews Bug

### Finding 13 — `save_holding_review` called with wrong arguments

**Problem**: In `run_portfolio`, `store.save_holding_review(date, portfolio_id, reviews)`
passed `portfolio_id` as the ticker argument and the full reviews dict (keyed
by ticker) as data. This created a single file named after the portfolio
instead of one file per ticker.

**Solution**: Iterate over the reviews dict:
```python
if isinstance(reviews, dict):
    for ticker, review_data in reviews.items():
        store.save_holding_review(date, ticker, review_data)
```

---

## Consequences & Constraints

### MUST

- All `astream_events()` calls **MUST** pass `rl.callback` in the config to
  capture LLM metrics in the run log.
- `set_run_logger` / `get_run_logger` **MUST** use `contextvars.ContextVar`,
  not `threading.local`.
- `write_latest_pointer` / `read_latest_pointer` **MUST** accept a `base_dir`
  parameter, and callers **MUST** pass it when their base differs from
  `REPORTS_ROOT`.
- `MongoReportStore.__init__` **MUST** call `ensure_indexes()`.

### SHOULD

- Plan the `pymongo` → `motor` migration before production deployment.
- Add a TTL index strategy after the retention policy is decided.
@@ -33,6 +33,7 @@ dependencies = [
     "yfinance>=0.2.63",
     "finvizfinance>=0.14.0",
     "psycopg2-binary>=2.9.11",
+    "pymongo>=4.12.1",
     "fastapi>=0.115.9",
     "uvicorn>=0.34.3",
     "websockets>=15.0.1",
@@ -94,7 +94,7 @@ class TestRunScanReportStorage(unittest.TestCase):

         with patch("agent_os.backend.services.langgraph_engine.ScannerGraph", return_value=mock_scanner), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir", return_value=fake_dir), \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value={}):
             mock_rs_cls.return_value.save_scan = MagicMock()
|
||||
|
|
@@ -114,7 +114,7 @@ class TestRunScanReportStorage(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.ScannerGraph", return_value=mock_scanner), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value=parsed):
             fake_dir = MagicMock(spec=Path)

@@ -135,7 +135,7 @@ class TestRunScanReportStorage(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.ScannerGraph", return_value=mock_scanner), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest") as mock_digest, \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value={}):
             fake_dir = MagicMock(spec=Path)

@@ -159,7 +159,7 @@ class TestRunScanReportStorage(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.ScannerGraph", return_value=mock_scanner), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value={}):
             fake_dir = MagicMock(spec=Path)

@@ -191,7 +191,7 @@ class TestRunScanReportStorage(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.ScannerGraph", return_value=mock_scanner), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir", return_value=fake_dir), \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", side_effect=ValueError("bad json")):
             mock_store = MagicMock()

@@ -219,7 +219,7 @@ class TestRunScanReportStorage(unittest.TestCase):
         engine = LangGraphEngine()

         with patch("agent_os.backend.services.langgraph_engine.ScannerGraph", return_value=mock_scanner), \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest") as mock_digest:
             mock_store = MagicMock()
             mock_rs_cls.return_value = mock_store

@@ -269,7 +269,7 @@ class TestRunPipelineReportStorage(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.TradingAgentsGraph", return_value=mock_wrapper), \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir") as mock_gtd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch.object(LangGraphEngine, "_write_complete_report_md"):
             fake_dir = MagicMock(spec=Path)

@@ -289,7 +289,7 @@ class TestRunPipelineReportStorage(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.TradingAgentsGraph", return_value=mock_wrapper), \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir") as mock_gtd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch.object(LangGraphEngine, "_write_complete_report_md") as mock_write_md:
             fake_dir = MagicMock(spec=Path)

@@ -308,7 +308,7 @@ class TestRunPipelineReportStorage(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.TradingAgentsGraph", return_value=mock_wrapper), \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir") as mock_gtd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest") as mock_digest, \
              patch.object(LangGraphEngine, "_write_complete_report_md"):
             fake_dir = MagicMock(spec=Path)

@@ -331,7 +331,7 @@ class TestRunPipelineReportStorage(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.TradingAgentsGraph", return_value=mock_wrapper), \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir") as mock_gtd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch.object(LangGraphEngine, "_write_complete_report_md"):
             fake_dir = MagicMock(spec=Path)

@@ -362,7 +362,7 @@ class TestRunPipelineReportStorage(unittest.TestCase):
         engine = LangGraphEngine()

         with patch("agent_os.backend.services.langgraph_engine.TradingAgentsGraph", return_value=mock_wrapper), \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest") as mock_digest:
             mock_store = MagicMock()
             mock_rs_cls.return_value = mock_store

@@ -408,8 +408,8 @@ class TestRunPortfolioReportLoading(unittest.TestCase):
         fake_daily_dir.iterdir.return_value = []

         with patch("agent_os.backend.services.langgraph_engine.PortfolioGraph", return_value=mock_pg), \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
-             patch("tradingagents.report_paths.get_daily_dir", return_value=fake_daily_dir):
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir", return_value=fake_daily_dir):
             mock_store = MagicMock()
             mock_store.load_scan.return_value = scan_data
             mock_store.load_analysis.return_value = None

@@ -455,8 +455,8 @@ class TestRunPortfolioReportLoading(unittest.TestCase):
             return {"AAPL": aapl_analysis, "TSLA": tsla_analysis}.get(ticker)

         with patch("agent_os.backend.services.langgraph_engine.PortfolioGraph", return_value=mock_pg), \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
-             patch("tradingagents.report_paths.get_daily_dir", return_value=fake_daily_dir):
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir", return_value=fake_daily_dir):
             mock_store = MagicMock()
             mock_store.load_scan.return_value = {}
             mock_store.load_analysis.side_effect = load_analysis_side_effect

@@ -487,8 +487,8 @@ class TestRunPortfolioReportLoading(unittest.TestCase):
         fake_daily_dir.iterdir.return_value = fake_tickers

         with patch("agent_os.backend.services.langgraph_engine.PortfolioGraph", return_value=mock_pg), \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
-             patch("tradingagents.report_paths.get_daily_dir", return_value=fake_daily_dir):
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir", return_value=fake_daily_dir):
             mock_store = MagicMock()
             mock_store.load_scan.return_value = {}
             mock_rs_cls.return_value = mock_store

@@ -513,8 +513,8 @@ class TestRunPortfolioReportLoading(unittest.TestCase):
         fake_daily_dir.iterdir.return_value = [make_dir_mock("AAPL")]

         with patch("agent_os.backend.services.langgraph_engine.PortfolioGraph", return_value=mock_pg), \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
-             patch("tradingagents.report_paths.get_daily_dir", return_value=fake_daily_dir):
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir", return_value=fake_daily_dir):
             mock_store = MagicMock()
             mock_store.load_scan.return_value = {}
             mock_store.load_analysis.return_value = {"final_trade_decision": "BUY"}

@@ -545,8 +545,8 @@ class TestRunPortfolioReportLoading(unittest.TestCase):
         fake_daily_dir.exists.return_value = False

         with patch("agent_os.backend.services.langgraph_engine.PortfolioGraph", return_value=mock_pg), \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
-             patch("tradingagents.report_paths.get_daily_dir", return_value=fake_daily_dir):
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir", return_value=fake_daily_dir):
             mock_store = MagicMock()
             mock_store.load_scan.return_value = {}
             mock_rs_cls.return_value = mock_store

@@ -634,8 +634,8 @@ class TestRunAutoTickerSource(unittest.TestCase):
                    return_value=self._make_noop_portfolio_graph()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir"), \
-             patch("tradingagents.report_paths.get_daily_dir") as mock_gdd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value=scan_data):
             # Set up fake dirs

@@ -676,8 +676,8 @@ class TestRunAutoTickerSource(unittest.TestCase):
              patch("agent_os.backend.services.langgraph_engine.PortfolioGraph",
                    return_value=self._make_noop_portfolio_graph()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
-             patch("tradingagents.report_paths.get_daily_dir") as mock_gdd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value=scan_data):
             fake_mdir = MagicMock(spec=Path)

@@ -718,7 +718,7 @@ class TestRunAutoTickerSource(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.ScannerGraph",
                    return_value=self._make_noop_scanner()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value={}):
             fake_mdir = MagicMock(spec=Path)

@@ -758,7 +758,7 @@ class TestRunAutoTickerSource(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.ScannerGraph",
                    return_value=self._make_noop_scanner()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value={}):
             fake_mdir = MagicMock(spec=Path)

@@ -798,7 +798,7 @@ class TestRunAutoTickerSource(unittest.TestCase):
         with patch("agent_os.backend.services.langgraph_engine.ScannerGraph",
                    return_value=self._make_noop_scanner()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value={}):
             fake_mdir = MagicMock(spec=Path)

@@ -839,8 +839,8 @@ class TestRunAutoTickerSource(unittest.TestCase):
                    return_value=self._make_noop_portfolio_graph()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir"), \
-             patch("tradingagents.report_paths.get_daily_dir") as mock_gdd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value=scan_data):
             fake_mdir = MagicMock(spec=Path)

@@ -876,8 +876,8 @@ class TestRunAutoTickerSource(unittest.TestCase):
                    return_value=self._make_noop_portfolio_graph()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir"), \
-             patch("tradingagents.report_paths.get_daily_dir") as mock_gdd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value=scan_data):
             fake_mdir = MagicMock(spec=Path)

@@ -922,8 +922,8 @@ class TestRunAutoTickerSource(unittest.TestCase):
                    return_value=self._make_noop_portfolio_graph()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir"), \
-             patch("tradingagents.report_paths.get_daily_dir") as mock_gdd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value=scan_data):
             fake_mdir = MagicMock(spec=Path)

@@ -974,8 +974,8 @@ class TestRunAutoTickerSource(unittest.TestCase):
                    return_value=self._make_noop_portfolio_graph()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir"), \
-             patch("tradingagents.report_paths.get_daily_dir") as mock_gdd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value=scan_data), \
              patch("tradingagents.portfolio.repository.PortfolioRepository", return_value=mock_repo):

@@ -1022,8 +1022,8 @@ class TestRunAutoTickerSource(unittest.TestCase):
                    return_value=self._make_noop_portfolio_graph()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir"), \
-             patch("tradingagents.report_paths.get_daily_dir") as mock_gdd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value=scan_data), \
              patch("tradingagents.portfolio.repository.PortfolioRepository", return_value=mock_repo):

@@ -1066,8 +1066,8 @@ class TestRunAutoTickerSource(unittest.TestCase):
                    return_value=self._make_noop_portfolio_graph()), \
              patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
              patch("agent_os.backend.services.langgraph_engine.get_ticker_dir"), \
-             patch("tradingagents.report_paths.get_daily_dir") as mock_gdd, \
-             patch("agent_os.backend.services.langgraph_engine.ReportStore") as mock_rs_cls, \
+             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
+             patch("agent_os.backend.services.langgraph_engine.create_report_store") as mock_rs_cls, \
              patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
              patch("agent_os.backend.services.langgraph_engine.extract_json", return_value=scan_data), \
              patch("tradingagents.portfolio.repository.PortfolioRepository", return_value=mock_repo):

@@ -0,0 +1,200 @@
"""Tests for MongoReportStore (mocked pymongo).

All tests mock the pymongo Collection so no real MongoDB is needed.
"""

from __future__ import annotations

from datetime import datetime, timezone
from unittest.mock import MagicMock, patch

import pytest


# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------

@pytest.fixture
def mock_col():
    """Return a MagicMock pymongo Collection."""
    return MagicMock()


@pytest.fixture
def mongo_store(mock_col):
    """Return a MongoReportStore with a mocked Collection."""
    with patch("tradingagents.portfolio.mongo_report_store.MongoClient") as mock_client_cls:
        mock_db = MagicMock()
        mock_db.__getitem__ = MagicMock(return_value=mock_col)
        mock_client = MagicMock()
        mock_client.__getitem__ = MagicMock(return_value=mock_db)
        mock_client_cls.return_value = mock_client

        from tradingagents.portfolio.mongo_report_store import MongoReportStore

        store = MongoReportStore(
            connection_string="mongodb://localhost:27017",
            db_name="test_db",
            run_id="test_run",
        )
        # Replace the internal collection with our mock
        store._col = mock_col
        return store


# ---------------------------------------------------------------------------
# save_scan / load_scan
# ---------------------------------------------------------------------------


def test_save_scan_inserts_document(mongo_store, mock_col):
    """save_scan should call insert_one with correct document shape."""
    mock_col.insert_one.return_value = MagicMock(inserted_id="abc123")

    mongo_store.save_scan("2026-03-20", {"watchlist": ["AAPL"]})

    mock_col.insert_one.assert_called_once()
    doc = mock_col.insert_one.call_args[0][0]
    assert doc["date"] == "2026-03-20"
    assert doc["report_type"] == "scan"
    assert doc["data"] == {"watchlist": ["AAPL"]}
    assert doc["run_id"] == "test_run"
    assert doc["ticker"] is None


def test_load_scan_finds_latest(mongo_store, mock_col):
    """load_scan should call find_one with date and report_type, sorted by created_at."""
    from pymongo import DESCENDING

    mock_col.find_one.return_value = {"data": {"watchlist": ["AAPL"]}}

    result = mongo_store.load_scan("2026-03-20")

    mock_col.find_one.assert_called_once()
    query = mock_col.find_one.call_args[0][0]
    assert query["date"] == "2026-03-20"
    assert query["report_type"] == "scan"
    assert result == {"watchlist": ["AAPL"]}


def test_load_scan_returns_none_when_missing(mongo_store, mock_col):
    """load_scan should return None when no document is found."""
    mock_col.find_one.return_value = None

    result = mongo_store.load_scan("1900-01-01")

    assert result is None


# ---------------------------------------------------------------------------
# save_analysis / load_analysis
# ---------------------------------------------------------------------------


def test_save_analysis_includes_ticker(mongo_store, mock_col):
    """save_analysis should include uppercase ticker in the document."""
    mock_col.insert_one.return_value = MagicMock(inserted_id="abc")

    mongo_store.save_analysis("2026-03-20", "aapl", {"score": 0.9})

    doc = mock_col.insert_one.call_args[0][0]
    assert doc["ticker"] == "AAPL"
    assert doc["report_type"] == "analysis"


def test_load_analysis_filters_by_ticker(mongo_store, mock_col):
    """load_analysis should filter by ticker in the query."""
    mock_col.find_one.return_value = {"data": {"score": 0.9}}

    result = mongo_store.load_analysis("2026-03-20", "AAPL")

    query = mock_col.find_one.call_args[0][0]
    assert query["ticker"] == "AAPL"
    assert result == {"score": 0.9}


# ---------------------------------------------------------------------------
# PM decision
# ---------------------------------------------------------------------------


def test_save_pm_decision_with_markdown(mongo_store, mock_col):
    """save_pm_decision should include markdown in the document."""
    mock_col.insert_one.return_value = MagicMock(inserted_id="abc")

    mongo_store.save_pm_decision(
        "2026-03-20", "pid-123", {"buys": []}, markdown="# Decision"
    )

    doc = mock_col.insert_one.call_args[0][0]
    assert doc["portfolio_id"] == "pid-123"
    assert doc["markdown"] == "# Decision"
    assert doc["report_type"] == "pm_decision"


def test_load_pm_decision_filters_by_portfolio(mongo_store, mock_col):
    """load_pm_decision should filter by portfolio_id."""
    mock_col.find_one.return_value = {"data": {"buys": []}}

    result = mongo_store.load_pm_decision("2026-03-20", "pid-123")

    query = mock_col.find_one.call_args[0][0]
    assert query["portfolio_id"] == "pid-123"
    assert result == {"buys": []}


# ---------------------------------------------------------------------------
# clear_portfolio_stage
# ---------------------------------------------------------------------------


def test_clear_portfolio_stage(mongo_store, mock_col):
    """clear_portfolio_stage should delete pm_decision and execution_result docs."""
    mock_col.delete_many.return_value = MagicMock(deleted_count=1)

    result = mongo_store.clear_portfolio_stage("2026-03-20", "pid-123")

    assert mock_col.delete_many.call_count == 2
    assert "pm_decision" in result
    assert "execution_result" in result


# ---------------------------------------------------------------------------
# list_analyses_for_date
# ---------------------------------------------------------------------------


def test_list_analyses_for_date(mongo_store, mock_col):
    """list_analyses_for_date should return unique ticker symbols."""
    mock_col.find.return_value = [
        {"ticker": "AAPL"},
        {"ticker": "MSFT"},
        {"ticker": "AAPL"},  # duplicate
    ]

    result = mongo_store.list_analyses_for_date("2026-03-20")

    assert set(result) == {"AAPL", "MSFT"}


# ---------------------------------------------------------------------------
# run_id property
# ---------------------------------------------------------------------------


def test_run_id_property(mongo_store):
    """run_id property should return the configured value."""
    assert mongo_store.run_id == "test_run"


# ---------------------------------------------------------------------------
# ensure_indexes
# ---------------------------------------------------------------------------


def test_ensure_indexes(mongo_store, mock_col):
    """ensure_indexes should create the expected indexes."""
    mongo_store.ensure_indexes()

    assert mock_col.create_index.call_count >= 4

@@ -0,0 +1,202 @@
"""Tests for observability integration in LangGraphEngine.

Covers:
- RunLogger lifecycle (_start_run_logger / _finish_run_logger)
- Enriched tool events (service, status, error fields)
- Run log JSONL persistence
"""

import json
import os
import sys
import tempfile
import time
import unittest
from pathlib import Path
from unittest.mock import MagicMock, patch

_project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".."))
if _project_root not in sys.path:
    sys.path.insert(0, _project_root)

from agent_os.backend.services.langgraph_engine import (
    LangGraphEngine,
    _TOOL_SERVICE_MAP,
)
from tradingagents.observability import RunLogger, get_run_logger, set_run_logger


class TestToolServiceMap(unittest.TestCase):
    """Verify the static tool→service mapping is populated."""

    def test_known_tools_have_services(self):
        self.assertEqual(_TOOL_SERVICE_MAP["get_stock_data"], "yfinance")
        self.assertEqual(_TOOL_SERVICE_MAP["get_insider_transactions"], "finnhub")
        self.assertEqual(_TOOL_SERVICE_MAP["get_insider_buying_stocks"], "finviz")
        self.assertEqual(_TOOL_SERVICE_MAP["get_enriched_holdings"], "local")

    def test_map_is_non_empty(self):
        self.assertGreater(len(_TOOL_SERVICE_MAP), 20)


class TestRunLoggerLifecycle(unittest.TestCase):
    """Test _start_run_logger and _finish_run_logger."""

    def setUp(self):
        self.engine = LangGraphEngine()
        # Clean up any leftover context-local state
        set_run_logger(None)

    def tearDown(self):
        set_run_logger(None)

    def test_start_creates_logger_and_sets_thread_local(self):
        rl = self.engine._start_run_logger("test-run-1")
        self.assertIsInstance(rl, RunLogger)
        self.assertIs(self.engine._run_loggers.get("test-run-1"), rl)
        self.assertIs(get_run_logger(), rl)

    def test_finish_writes_log_and_cleans_up(self):
        rl = self.engine._start_run_logger("test-run-2")
        # Add a synthetic event
        rl.log_tool_call("get_stock_data", "AAPL", True, 123.4)

        with tempfile.TemporaryDirectory() as tmpdir:
            log_dir = Path(tmpdir) / "sub"
            self.engine._finish_run_logger("test-run-2", log_dir)

            # Logger removed from tracking
            self.assertNotIn("test-run-2", self.engine._run_loggers)
            # Context-local logger cleared
            self.assertIsNone(get_run_logger())

            # JSONL file written
            log_file = log_dir / "run_log.jsonl"
            self.assertTrue(log_file.exists())
            lines = log_file.read_text().strip().split("\n")
            self.assertGreaterEqual(len(lines), 2)  # event + summary

            # Verify first line is the tool event
            evt = json.loads(lines[0])
            self.assertEqual(evt["kind"], "tool")
            self.assertEqual(evt["tool"], "get_stock_data")

            # Last line should be summary
            summary = json.loads(lines[-1])
            self.assertEqual(summary["kind"], "summary")
            self.assertEqual(summary["tool_calls"], 1)

    def test_finish_noop_for_unknown_run(self):
        """_finish_run_logger should silently do nothing for unknown run IDs."""
        with tempfile.TemporaryDirectory() as tmpdir:
            self.engine._finish_run_logger("nonexistent", Path(tmpdir))
            # No file written, no crash
            self.assertEqual(list(Path(tmpdir).iterdir()), [])


class TestToolEventMapping(unittest.TestCase):
    """Test enriched tool events in _map_langgraph_event."""

    def setUp(self):
        self.engine = LangGraphEngine()
        self.run_id = "test-tool-run"
        self.engine._node_start_times[self.run_id] = {}
        self.engine._run_identifiers[self.run_id] = "AAPL"
        self.engine._node_prompts[self.run_id] = {}

    def tearDown(self):
        self.engine._node_start_times.pop(self.run_id, None)
        self.engine._run_identifiers.pop(self.run_id, None)
        self.engine._node_prompts.pop(self.run_id, None)

    def test_tool_start_includes_service(self):
        event = {
            "event": "on_tool_start",
            "name": "get_stock_data",
            "data": {"input": {"ticker": "AAPL"}},
            "run_id": "abc123",
            "metadata": {"langgraph_node": "market_analyst"},
        }
        result = self.engine._map_langgraph_event(self.run_id, event)
        self.assertIsNotNone(result)
        self.assertEqual(result["type"], "tool")
        self.assertEqual(result["service"], "yfinance")
        self.assertEqual(result["status"], "running")

    def test_tool_start_unknown_tool_has_empty_service(self):
        event = {
            "event": "on_tool_start",
            "name": "some_custom_tool",
            "data": {"input": "test"},
            "run_id": "abc123",
            "metadata": {"langgraph_node": "custom_node"},
        }
        result = self.engine._map_langgraph_event(self.run_id, event)
        self.assertIsNotNone(result)
        self.assertEqual(result["service"], "")

    def test_tool_end_success(self):
        event = {
            "event": "on_tool_end",
            "name": "get_fundamentals",
            "data": {"output": MagicMock(content="PE ratio: 25.3, Revenue: $100B")},
            "run_id": "abc123",
            "metadata": {"langgraph_node": "fundamentals_analyst"},
        }
        result = self.engine._map_langgraph_event(self.run_id, event)
        self.assertIsNotNone(result)
        self.assertEqual(result["type"], "tool_result")
        self.assertEqual(result["status"], "success")
        self.assertEqual(result["service"], "yfinance")
        self.assertIsNone(result["error"])
        self.assertIn("✓", result["message"])

    def test_tool_end_error_detected(self):
        mock_output = MagicMock()
        mock_output.content = "Error calling get_stock_data: ConnectionError: timeout"
        event = {
            "event": "on_tool_end",
            "name": "get_stock_data",
            "data": {"output": mock_output},
            "run_id": "abc123",
            "metadata": {"langgraph_node": "market_analyst"},
        }
        result = self.engine._map_langgraph_event(self.run_id, event)
        self.assertIsNotNone(result)
        self.assertEqual(result["status"], "error")
        self.assertIn("Error", result["error"])
        self.assertIn("✗", result["message"])

    def test_tool_end_graceful_skip(self):
        mock_output = MagicMock()
        mock_output.content = "Data gracefully skipped due to rate limit"
        event = {
            "event": "on_tool_end",
            "name": "get_insider_transactions",
            "data": {"output": mock_output},
            "run_id": "abc123",
            "metadata": {"langgraph_node": "news_analyst"},
        }
        result = self.engine._map_langgraph_event(self.run_id, event)
        self.assertIsNotNone(result)
        self.assertEqual(result["status"], "graceful_skip")
        self.assertEqual(result["service"], "finnhub")
        self.assertIn("⚠", result["message"])

    def test_tool_end_event_status_error(self):
        """When the event itself has status='error', detect it."""
        event = {
            "event": "on_tool_end",
            "name": "get_earnings_calendar",
            "data": {"output": MagicMock(content=""), "status": "error"},
            "run_id": "abc123",
            "metadata": {"langgraph_node": "sector_scanner"},
        }
        result = self.engine._map_langgraph_event(self.run_id, event)
        self.assertIsNotNone(result)
        self.assertEqual(result["status"], "error")
        self.assertEqual(result["service"], "finnhub")


if __name__ == "__main__":
    unittest.main()

@@ -0,0 +1,463 @@
"""Tests for PR#106 review fixes (ADR 016).

Covers:
- Fix 1: save_holding_review per-ticker iteration in run_portfolio
- Fix 2: contextvars-based RunLogger isolation
- Fix 3: list_pm_decisions excludes _id (ObjectId)
- Fix 4: ReflexionMemory created_at is native datetime for MongoDB
- Fix 5: write/read_latest_pointer respects base_dir parameter
- Fix 6: RunLogger callback wired into astream_events config
- Fix 7: ensure_indexes called in MongoReportStore.__init__
"""

from __future__ import annotations

import asyncio
import json
import os
import sys
import tempfile
import unittest
from datetime import datetime, timezone
from pathlib import Path
from unittest.mock import MagicMock, patch

_project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".."))
if _project_root not in sys.path:
    sys.path.insert(0, _project_root)


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

async def _collect(agen):
    """Collect all events from an async generator into a list."""
    events = []
    async for evt in agen:
        events.append(evt)
    return events


def _root_chain_end_event(output: dict) -> dict:
    """Build a synthetic root on_chain_end LangGraph v2 event."""
    return {
        "event": "on_chain_end",
        "name": "LangGraph",
        "parent_ids": [],
        "metadata": {},
        "data": {"output": output},
        "run_id": "test-run-id",
        "tags": [],
    }


# ---------------------------------------------------------------------------
# Fix 1: save_holding_review per-ticker iteration
# ---------------------------------------------------------------------------

class TestSaveHoldingReviewIteration(unittest.TestCase):
    """Verify save_holding_review is called per-ticker, not once with portfolio_id."""

    _FINAL_STATE = {
        "holding_reviews": json.dumps({
            "AAPL": {"rating": "hold", "reason": "stable"},
            "MSFT": {"rating": "buy", "reason": "growth"},
        }),
        "risk_metrics": "",
        "pm_decision": "",
        "execution_result": "",
    }

    def _make_mock_portfolio_graph(self, final_state=None):
        if final_state is None:
            final_state = self._FINAL_STATE

        async def mock_astream(*args, **kwargs):
            yield _root_chain_end_event(final_state)

        mock_graph = MagicMock()
        mock_graph.astream_events = mock_astream
        mock_pg = MagicMock()
        mock_pg.graph = mock_graph
        return mock_pg

    def test_holding_reviews_saved_per_ticker(self):
        """run_portfolio should call save_holding_review once per ticker key."""
        from agent_os.backend.services.langgraph_engine import LangGraphEngine

        mock_pg = self._make_mock_portfolio_graph()
        engine = LangGraphEngine()
        mock_store = MagicMock()
        mock_store.load_scan.return_value = {}
        mock_store.load_analysis.return_value = None

        with patch("agent_os.backend.services.langgraph_engine.PortfolioGraph", return_value=mock_pg), \
             patch("agent_os.backend.services.langgraph_engine.create_report_store", return_value=mock_store), \
             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
             patch("agent_os.backend.services.langgraph_engine.append_to_digest"):
            fake_daily = MagicMock(spec=Path)
            fake_daily.exists.return_value = False
            fake_daily.__truediv__ = MagicMock(return_value=MagicMock(spec=Path, exists=MagicMock(return_value=False)))
            mock_gdd.return_value = fake_daily

            asyncio.run(_collect(engine.run_portfolio("run1", {
                "date": "2026-03-20",
                "portfolio_id": "pid-123",
            })))

        # save_holding_review should be called once per ticker
        calls = mock_store.save_holding_review.call_args_list
        tickers_saved = {c.args[1] for c in calls}  # (date, ticker, data)
        self.assertEqual(tickers_saved, {"AAPL", "MSFT"})
        self.assertEqual(len(calls), 2)

    def test_non_dict_reviews_logs_warning(self):
        """When holding_reviews is not a dict, it should log a warning, not crash."""
        from agent_os.backend.services.langgraph_engine import LangGraphEngine

        state = dict(self._FINAL_STATE)
        state["holding_reviews"] = json.dumps(["not", "a", "dict"])

        mock_pg = self._make_mock_portfolio_graph(state)
        engine = LangGraphEngine()
        mock_store = MagicMock()
        mock_store.load_scan.return_value = {}
        mock_store.load_analysis.return_value = None

        with patch("agent_os.backend.services.langgraph_engine.PortfolioGraph", return_value=mock_pg), \
             patch("agent_os.backend.services.langgraph_engine.create_report_store", return_value=mock_store), \
             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
             patch("agent_os.backend.services.langgraph_engine.append_to_digest"):
            fake_daily = MagicMock(spec=Path)
            fake_daily.exists.return_value = False
            fake_daily.__truediv__ = MagicMock(return_value=MagicMock(spec=Path, exists=MagicMock(return_value=False)))
            mock_gdd.return_value = fake_daily

            events = asyncio.run(_collect(engine.run_portfolio("run1", {
                "date": "2026-03-20",
                "portfolio_id": "pid-123",
            })))

        # save_holding_review should NOT be called
        mock_store.save_holding_review.assert_not_called()


# ---------------------------------------------------------------------------
# Fix 2: contextvars-based RunLogger isolation
# ---------------------------------------------------------------------------

class TestContextVarRunLogger(unittest.TestCase):
    """Verify RunLogger uses contextvars (isolated per asyncio task)."""

    def test_set_get_returns_correct_logger(self):
        from tradingagents.observability import (
            RunLogger,
            get_run_logger,
            set_run_logger,
        )

        rl = RunLogger()
        set_run_logger(rl)
        self.assertIs(get_run_logger(), rl)
        set_run_logger(None)
        self.assertIsNone(get_run_logger())

    def test_context_isolation_across_async_tasks(self):
        """Each asyncio task should have its own RunLogger."""
        from tradingagents.observability import (
            RunLogger,
            get_run_logger,
            set_run_logger,
        )

        results = {}

        async def task(name: str):
            rl = RunLogger()
            set_run_logger(rl)
            await asyncio.sleep(0.01)
            results[name] = get_run_logger()
            return rl

        async def run_concurrent():
            rl_a, rl_b = await asyncio.gather(task("A"), task("B"))
            return rl_a, rl_b

        rl_a, rl_b = asyncio.run(run_concurrent())

        # Each task should get back its own logger, not the other's
        self.assertIs(results["A"], rl_a)
        self.assertIs(results["B"], rl_b)
        # They should be different instances
        self.assertIsNot(rl_a, rl_b)


# ---------------------------------------------------------------------------
# Fix 3: list_pm_decisions excludes _id
# ---------------------------------------------------------------------------

class TestListPmDecisionsExcludesId(unittest.TestCase):
    """Verify list_pm_decisions uses {_id: 0} projection."""

    def test_projection_excludes_object_id(self):
        with patch("tradingagents.portfolio.mongo_report_store.MongoClient") as mock_client_cls:
            mock_col = MagicMock()
            mock_db = MagicMock()
            mock_db.__getitem__ = MagicMock(return_value=mock_col)
            mock_client = MagicMock()
            mock_client.__getitem__ = MagicMock(return_value=mock_db)
            mock_client_cls.return_value = mock_client

            from tradingagents.portfolio.mongo_report_store import MongoReportStore

            store = MongoReportStore("mongodb://localhost:27017", run_id="test")
            store._col = mock_col

            mock_col.find.return_value = []
            store.list_pm_decisions("pid-123")

            # Verify the projection argument includes _id: 0
            find_call = mock_col.find.call_args
            projection = find_call[0][1] if len(find_call[0]) > 1 else find_call[1].get("projection")
            self.assertEqual(projection, {"_id": 0})


# ---------------------------------------------------------------------------
# Fix 4: ReflexionMemory created_at is native datetime for MongoDB
# ---------------------------------------------------------------------------

class TestReflexionCreatedAtType(unittest.TestCase):
    """Verify created_at is native datetime for MongoDB, ISO string for local."""

    def test_mongodb_path_stores_native_datetime(self):
        """When writing to MongoDB, created_at should be a datetime object."""
        with patch("tradingagents.memory.reflexion.MongoClient", create=True) as mock_client_cls:
            mock_col = MagicMock()
            mock_db = MagicMock()
            mock_db.__getitem__ = MagicMock(return_value=mock_col)
            mock_client = MagicMock()
            mock_client.__getitem__ = MagicMock(return_value=mock_db)
            mock_client_cls.return_value = mock_client

            from tradingagents.memory.reflexion import ReflexionMemory

            mem = ReflexionMemory.__new__(ReflexionMemory)
            mem._col = mock_col
            mem._fallback_path = Path("/tmp/test_reflexion.json")

            mem.record_decision("AAPL", "2026-03-20", "BUY", "test", "high")

            doc = mock_col.insert_one.call_args[0][0]
            self.assertIsInstance(doc["created_at"], datetime)

    def test_local_path_stores_iso_string(self):
        """When writing to local JSON, created_at should be an ISO string."""
        import tempfile
        from tradingagents.memory.reflexion import ReflexionMemory

        with tempfile.TemporaryDirectory() as tmpdir:
            fb_path = Path(tmpdir) / "test_reflexion.json"
            mem = ReflexionMemory(fallback_path=fb_path)

            mem.record_decision("AAPL", "2026-03-20", "BUY", "test", "high")

            data = json.loads(fb_path.read_text())
            self.assertIsInstance(data[0]["created_at"], str)
            # Should be parseable as ISO datetime
            datetime.fromisoformat(data[0]["created_at"])


# ---------------------------------------------------------------------------
# Fix 5: write/read_latest_pointer respects base_dir parameter
# ---------------------------------------------------------------------------

class TestLatestPointerBaseDir(unittest.TestCase):
    """Verify write_latest_pointer/read_latest_pointer use base_dir."""

    def test_pointer_uses_custom_base_dir(self):
        from tradingagents.report_paths import read_latest_pointer, write_latest_pointer

        with tempfile.TemporaryDirectory() as tmpdir:
            base = Path(tmpdir) / "custom_reports"
            write_latest_pointer("2026-03-20", "run123", base_dir=base)

            # Should be written under the custom base, not REPORTS_ROOT
            pointer = base / "daily" / "2026-03-20" / "latest.json"
            self.assertTrue(pointer.exists())
            data = json.loads(pointer.read_text())
            self.assertEqual(data["run_id"], "run123")

            # read_latest_pointer should use the same base
            result = read_latest_pointer("2026-03-20", base_dir=base)
            self.assertEqual(result, "run123")

    def test_read_returns_none_with_wrong_base(self):
        from tradingagents.report_paths import read_latest_pointer, write_latest_pointer

        with tempfile.TemporaryDirectory() as tmpdir:
            base_a = Path(tmpdir) / "a"
            base_b = Path(tmpdir) / "b"
            write_latest_pointer("2026-03-20", "run_a", base_dir=base_a)

            # Reading from a different base should not find it
            result = read_latest_pointer("2026-03-20", base_dir=base_b)
            self.assertIsNone(result)

    def test_report_store_passes_base_dir(self):
        """ReportStore should pass its _base_dir to pointer functions."""
        from tradingagents.portfolio.report_store import ReportStore

        with tempfile.TemporaryDirectory() as tmpdir:
            base = Path(tmpdir) / "custom"
            store = ReportStore(base_dir=base, run_id="abc123")

            # Trigger a save which calls _update_latest
            store.save_scan("2026-03-20", {"test": True})

            # Pointer should be under the custom base
            pointer = base / "daily" / "2026-03-20" / "latest.json"
            self.assertTrue(pointer.exists())
            data = json.loads(pointer.read_text())
            self.assertEqual(data["run_id"], "abc123")


# ---------------------------------------------------------------------------
# Fix 6: RunLogger callback wired into astream_events config
# ---------------------------------------------------------------------------

class TestRunLoggerCallbackWiring(unittest.TestCase):
    """Verify astream_events receives the RunLogger callback in config."""

    def _make_mock_graph(self, final_state):
        """Create a mock graph that captures the config passed to astream_events."""
        captured_config = {}

        async def mock_astream(*args, **kwargs):
            captured_config.update(kwargs.get("config", {}))
            yield _root_chain_end_event(final_state)

        mock_graph = MagicMock()
        mock_graph.astream_events = mock_astream
        return mock_graph, captured_config

    def test_run_scan_wires_callback(self):
        from agent_os.backend.services.langgraph_engine import LangGraphEngine

        mock_graph, captured = self._make_mock_graph({
            "geopolitical_report": "", "market_movers_report": "",
            "sector_performance_report": "", "industry_deep_dive_report": "",
            "macro_scan_summary": "",
        })
        mock_scanner = MagicMock()
        mock_scanner.graph = mock_graph

        engine = LangGraphEngine()
        mock_store = MagicMock()

        with patch("agent_os.backend.services.langgraph_engine.ScannerGraph", return_value=mock_scanner), \
             patch("agent_os.backend.services.langgraph_engine.create_report_store", return_value=mock_store), \
             patch("agent_os.backend.services.langgraph_engine.get_market_dir") as mock_gmd, \
             patch("agent_os.backend.services.langgraph_engine.append_to_digest"), \
             patch("agent_os.backend.services.langgraph_engine.extract_json", return_value={}):
            fake_dir = MagicMock(spec=Path)
            fake_dir.__truediv__ = MagicMock(return_value=MagicMock(spec=Path))
            fake_dir.mkdir = MagicMock()
            mock_gmd.return_value = fake_dir

            asyncio.run(_collect(engine.run_scan("run1", {"date": "2026-01-01"})))

        self.assertIn("callbacks", captured)
        self.assertEqual(len(captured["callbacks"]), 1)

    def test_run_pipeline_wires_callback(self):
        from agent_os.backend.services.langgraph_engine import LangGraphEngine

        mock_graph, captured = self._make_mock_graph({"final_trade_decision": "BUY"})
        mock_propagator = MagicMock()
        mock_propagator.max_recur_limit = 100
        mock_propagator.create_initial_state.return_value = {"ticker": "AAPL"}
        mock_wrapper = MagicMock()
        mock_wrapper.graph = mock_graph
        mock_wrapper.propagator = mock_propagator

        engine = LangGraphEngine()
        mock_store = MagicMock()

        with patch("agent_os.backend.services.langgraph_engine.TradingAgentsGraph", return_value=mock_wrapper), \
             patch("agent_os.backend.services.langgraph_engine.create_report_store", return_value=mock_store), \
             patch("agent_os.backend.services.langgraph_engine.get_ticker_dir") as mock_gtd, \
             patch("agent_os.backend.services.langgraph_engine.append_to_digest"):
            fake_dir = MagicMock(spec=Path)
            fake_dir.__truediv__ = MagicMock(return_value=MagicMock(spec=Path))
            fake_dir.mkdir = MagicMock()
            mock_gtd.return_value = fake_dir

            asyncio.run(_collect(engine.run_pipeline("run1", {
                "ticker": "AAPL", "date": "2026-01-01",
            })))

        self.assertIn("callbacks", captured)
        self.assertEqual(len(captured["callbacks"]), 1)
        # Also verify recursion_limit is still set
        self.assertEqual(captured["recursion_limit"], 100)

    def test_run_portfolio_wires_callback(self):
        from agent_os.backend.services.langgraph_engine import LangGraphEngine

        mock_graph, captured = self._make_mock_graph({
            "holding_reviews": "", "risk_metrics": "",
            "pm_decision": "", "execution_result": "",
        })
        mock_pg = MagicMock()
        mock_pg.graph = mock_graph

        engine = LangGraphEngine()
        mock_store = MagicMock()
        mock_store.load_scan.return_value = {}
        mock_store.load_analysis.return_value = None

        with patch("agent_os.backend.services.langgraph_engine.PortfolioGraph", return_value=mock_pg), \
             patch("agent_os.backend.services.langgraph_engine.create_report_store", return_value=mock_store), \
             patch("agent_os.backend.services.langgraph_engine.get_daily_dir") as mock_gdd, \
             patch("agent_os.backend.services.langgraph_engine.append_to_digest"):
            fake_daily = MagicMock(spec=Path)
            fake_daily.exists.return_value = False
            fake_daily.__truediv__ = MagicMock(return_value=MagicMock(spec=Path, exists=MagicMock(return_value=False)))
            mock_gdd.return_value = fake_daily

            asyncio.run(_collect(engine.run_portfolio("run1", {
                "date": "2026-01-01", "portfolio_id": "pid-123",
            })))

        self.assertIn("callbacks", captured)
        self.assertEqual(len(captured["callbacks"]), 1)


# ---------------------------------------------------------------------------
# Fix 7: ensure_indexes called in MongoReportStore.__init__
# ---------------------------------------------------------------------------

class TestEnsureIndexesInInit(unittest.TestCase):
    """Verify ensure_indexes is called during __init__, not just via factory."""

    def test_init_calls_ensure_indexes(self):
        with patch("tradingagents.portfolio.mongo_report_store.MongoClient") as mock_client_cls:
            mock_col = MagicMock()
            mock_db = MagicMock()
            mock_db.__getitem__ = MagicMock(return_value=mock_col)
            mock_client = MagicMock()
            mock_client.__getitem__ = MagicMock(return_value=mock_db)
            mock_client_cls.return_value = mock_client

            from tradingagents.portfolio.mongo_report_store import MongoReportStore

            store = MongoReportStore("mongodb://localhost:27017", run_id="test")

            # create_index should have been called at least 4 times
            # (the indexes from ensure_indexes)
            self.assertGreaterEqual(mock_col.create_index.call_count, 4)


if __name__ == "__main__":
    unittest.main()

@@ -0,0 +1,216 @@
"""Tests for tradingagents.memory.reflexion.

Covers:
- Local JSON fallback (no MongoDB)
- record_decision + get_history round-trip
- record_outcome feedback loop
- build_context prompt generation
"""

from __future__ import annotations

import json
from pathlib import Path

import pytest

from tradingagents.memory.reflexion import ReflexionMemory


@pytest.fixture
def local_memory(tmp_path):
    """Return a ReflexionMemory using local JSON fallback."""
    return ReflexionMemory(
        mongo_uri=None,
        fallback_path=tmp_path / "reflexion.json",
    )


# ---------------------------------------------------------------------------
# record_decision + get_history
# ---------------------------------------------------------------------------


def test_record_and_get_history(local_memory):
    """record_decision then get_history should return the decision."""
    local_memory.record_decision(
        ticker="AAPL",
        date="2026-03-20",
        decision="BUY",
        rationale="Strong fundamentals and momentum",
        confidence="high",
        source="pipeline",
        run_id="test_run",
    )

    history = local_memory.get_history("AAPL")
    assert len(history) == 1
    rec = history[0]
    assert rec["ticker"] == "AAPL"
    assert rec["decision"] == "BUY"
    assert rec["confidence"] == "high"
    assert rec["rationale"] == "Strong fundamentals and momentum"
    assert rec["outcome"] is None


def test_multiple_decisions_sorted_newest_first(local_memory):
    """get_history should return decisions sorted by date, newest first."""
    for i, date in enumerate(["2026-03-18", "2026-03-19", "2026-03-20"]):
        local_memory.record_decision(
            ticker="MSFT",
            date=date,
            decision=["HOLD", "BUY", "SELL"][i],
            rationale=f"Reason {i}",
        )

    history = local_memory.get_history("MSFT")
    assert len(history) == 3
    assert history[0]["decision_date"] == "2026-03-20"
    assert history[1]["decision_date"] == "2026-03-19"
    assert history[2]["decision_date"] == "2026-03-18"


def test_get_history_limit(local_memory):
    """get_history with limit should return at most that many records."""
    for i in range(10):
        local_memory.record_decision(
            ticker="GOOGL",
            date=f"2026-03-{10 + i:02d}",
            decision="HOLD",
            rationale=f"Decision {i}",
        )

    history = local_memory.get_history("GOOGL", limit=3)
    assert len(history) == 3


def test_get_history_filters_by_ticker(local_memory):
    """get_history should only return decisions for the requested ticker."""
    local_memory.record_decision("AAPL", "2026-03-20", "BUY", "reason")
    local_memory.record_decision("MSFT", "2026-03-20", "SELL", "reason")

    aapl_history = local_memory.get_history("AAPL")
    assert len(aapl_history) == 1
    assert aapl_history[0]["ticker"] == "AAPL"


def test_ticker_stored_as_uppercase(local_memory):
    """Tickers should be normalized to uppercase."""
    local_memory.record_decision("aapl", "2026-03-20", "buy", "reason")

    history = local_memory.get_history("AAPL")
    assert len(history) == 1
    assert history[0]["ticker"] == "AAPL"
    assert history[0]["decision"] == "BUY"


# ---------------------------------------------------------------------------
# record_outcome
# ---------------------------------------------------------------------------


def test_record_outcome_updates_decision(local_memory):
    """record_outcome should attach outcome data to the matching decision."""
    local_memory.record_decision("AAPL", "2026-03-20", "BUY", "reason")

    outcome = {
        "evaluation_date": "2026-04-20",
        "price_at_decision": 185.0,
        "price_at_evaluation": 195.0,
        "price_change_pct": 5.4,
        "correct": True,
    }
    result = local_memory.record_outcome("AAPL", "2026-03-20", outcome)

    assert result is True
    history = local_memory.get_history("AAPL")
    assert history[0]["outcome"] == outcome


def test_record_outcome_returns_false_when_no_match(local_memory):
    """record_outcome should return False when no matching decision exists."""
    result = local_memory.record_outcome("AAPL", "2026-03-20", {"correct": True})
    assert result is False


def test_record_outcome_only_fills_null_outcome(local_memory):
    """record_outcome should only update decisions that have outcome=None."""
    local_memory.record_decision("AAPL", "2026-03-20", "BUY", "reason")
    local_memory.record_outcome("AAPL", "2026-03-20", {"correct": True})

    # Second outcome should not overwrite
    result = local_memory.record_outcome(
        "AAPL", "2026-03-20", {"correct": False}
    )
    assert result is False

    history = local_memory.get_history("AAPL")
    assert history[0]["outcome"]["correct"] is True


# ---------------------------------------------------------------------------
# build_context
# ---------------------------------------------------------------------------


def test_build_context_with_history(local_memory):
    """build_context should return a formatted multi-line string."""
    local_memory.record_decision(
        "AAPL", "2026-03-20", "BUY", "Strong momentum signal", "high"
    )
    local_memory.record_outcome("AAPL", "2026-03-20", {
        "price_change_pct": 5.4,
        "correct": True,
    })

    context = local_memory.build_context("AAPL")

    assert "2026-03-20" in context
    assert "BUY" in context
    assert "high" in context
    assert "5.4% change" in context
    assert "correct=True" in context


def test_build_context_no_history(local_memory):
    """build_context with no history should return an informative message."""
    context = local_memory.build_context("ZZZZZ")
    assert "No prior decisions" in context


def test_build_context_pending_outcome(local_memory):
    """build_context with pending outcome should show 'pending'."""
    local_memory.record_decision("AAPL", "2026-03-20", "BUY", "reason")

    context = local_memory.build_context("AAPL")
    assert "pending" in context


# ---------------------------------------------------------------------------
# Persistence
# ---------------------------------------------------------------------------


def test_local_file_persists_across_instances(tmp_path):
    """Decisions written by one instance should be readable by another."""
    fb_path = tmp_path / "reflexion.json"

    mem1 = ReflexionMemory(fallback_path=fb_path)
    mem1.record_decision("AAPL", "2026-03-20", "BUY", "reason")

    mem2 = ReflexionMemory(fallback_path=fb_path)
    history = mem2.get_history("AAPL")
    assert len(history) == 1


def test_local_file_created_on_first_write(tmp_path):
    """The local JSON file should be created on the first record_decision."""
    fb_path = tmp_path / "subdir" / "reflexion.json"
    assert not fb_path.exists()

    mem = ReflexionMemory(fallback_path=fb_path)
    mem.record_decision("AAPL", "2026-03-20", "BUY", "reason")

    assert fb_path.exists()
    data = json.loads(fb_path.read_text())
    assert len(data) == 1
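The outcome tests above imply an evaluation job that can safely be re-run, since `record_outcome` only fills decisions whose `outcome` is still `None`. A sketch of such a job; `get_price` is a stand-in for whatever price source the repo actually uses:

```python
from tradingagents.memory.reflexion import ReflexionMemory


def evaluate_decision(mem: ReflexionMemory, ticker: str,
                      decision_date: str, eval_date: str, get_price) -> bool:
    """Compute an outcome for a past decision and attach it (idempotent)."""
    p0 = get_price(ticker, decision_date)  # assumed callable, not from this PR
    p1 = get_price(ticker, eval_date)
    change_pct = round((p1 - p0) / p0 * 100, 2)

    decision = mem.get_history(ticker, limit=1)[0]["decision"]
    correct = (decision == "BUY" and change_pct > 0) or (
        decision == "SELL" and change_pct < 0
    )
    # Returns False on a second call: only outcome=None records are filled
    return mem.record_outcome(ticker, decision_date, {
        "evaluation_date": eval_date,
        "price_at_decision": p0,
        "price_at_evaluation": p1,
        "price_change_pct": change_pct,
        "correct": correct,
    })
```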
@@ -0,0 +1,146 @@
"""Tests for run_id support in report_paths.py.

Covers:
- generate_run_id uniqueness and format
- latest.json pointer mechanism (write + read)
- path helpers with and without run_id
"""

from __future__ import annotations

import json
from pathlib import Path
from unittest.mock import patch

import pytest

from tradingagents import report_paths
from tradingagents.report_paths import (
    generate_run_id,
    get_daily_dir,
    get_digest_path,
    get_eval_dir,
    get_market_dir,
    get_ticker_dir,
    read_latest_pointer,
    write_latest_pointer,
)


# ---------------------------------------------------------------------------
# generate_run_id
# ---------------------------------------------------------------------------


def test_generate_run_id_format():
    """Run IDs should be 8-char lowercase hex strings."""
    rid = generate_run_id()
    assert len(rid) == 8
    assert all(c in "0123456789abcdef" for c in rid)


def test_generate_run_id_unique():
    """Consecutive run IDs should not collide."""
    ids = {generate_run_id() for _ in range(100)}
    assert len(ids) == 100


# ---------------------------------------------------------------------------
# latest.json pointer
# ---------------------------------------------------------------------------


def test_write_and_read_latest_pointer(tmp_path):
    """write_latest_pointer then read_latest_pointer must round-trip."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        write_latest_pointer("2026-03-20", "abc12345")
        result = read_latest_pointer("2026-03-20")

        assert result == "abc12345"
        pointer = tmp_path / "daily" / "2026-03-20" / "latest.json"
        assert pointer.exists()
        data = json.loads(pointer.read_text())
        assert data["run_id"] == "abc12345"
        assert "updated_at" in data


def test_read_latest_pointer_returns_none_when_missing(tmp_path):
    """read_latest_pointer returns None when no pointer file exists."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        assert read_latest_pointer("2026-01-01") is None


def test_write_latest_pointer_overwrites(tmp_path):
    """Writing a new pointer should overwrite the old one."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        write_latest_pointer("2026-03-20", "first")
        write_latest_pointer("2026-03-20", "second")
        result = read_latest_pointer("2026-03-20")

        assert result == "second"


# ---------------------------------------------------------------------------
# Path helpers — no run_id (backward compatible)
# ---------------------------------------------------------------------------


def test_get_daily_dir_no_run_id(tmp_path):
    """Without run_id, get_daily_dir returns the flat date path."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        result = get_daily_dir("2026-03-20")
        assert result == tmp_path / "daily" / "2026-03-20"


def test_get_market_dir_no_run_id(tmp_path):
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        result = get_market_dir("2026-03-20")
        assert result == tmp_path / "daily" / "2026-03-20" / "market"


def test_get_ticker_dir_no_run_id(tmp_path):
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        result = get_ticker_dir("2026-03-20", "AAPL")
        assert result == tmp_path / "daily" / "2026-03-20" / "AAPL"


def test_get_eval_dir_no_run_id(tmp_path):
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        result = get_eval_dir("2026-03-20", "msft")
        assert result == tmp_path / "daily" / "2026-03-20" / "MSFT" / "eval"


def test_get_digest_path_always_at_date_level(tmp_path):
    """Digest path is always at the date level, not scoped by run_id."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        result = get_digest_path("2026-03-20")
        assert result == tmp_path / "daily" / "2026-03-20" / "daily_digest.md"


# ---------------------------------------------------------------------------
# Path helpers — with run_id
# ---------------------------------------------------------------------------


def test_get_daily_dir_with_run_id(tmp_path):
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        result = get_daily_dir("2026-03-20", run_id="abc12345")
        assert result == tmp_path / "daily" / "2026-03-20" / "runs" / "abc12345"


def test_get_market_dir_with_run_id(tmp_path):
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        result = get_market_dir("2026-03-20", run_id="abc12345")
        assert result == tmp_path / "daily" / "2026-03-20" / "runs" / "abc12345" / "market"


def test_get_ticker_dir_with_run_id(tmp_path):
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        result = get_ticker_dir("2026-03-20", "AAPL", run_id="abc12345")
        assert result == tmp_path / "daily" / "2026-03-20" / "runs" / "abc12345" / "AAPL"


def test_get_eval_dir_with_run_id(tmp_path):
    with patch.object(report_paths, "REPORTS_ROOT", tmp_path):
        result = get_eval_dir("2026-03-20", "AAPL", run_id="abc12345")
        assert result == tmp_path / "daily" / "2026-03-20" / "runs" / "abc12345" / "AAPL" / "eval"
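The two `generate_run_id` tests fully constrain the helper's shape. One implementation that satisfies them (the actual helper in `report_paths.py` may differ):

```python
import uuid


def generate_run_id() -> str:
    # uuid4().hex is 32 lowercase hex characters; the first 8 give the
    # tested format and make collisions across 100 draws vanishingly rare.
    return uuid.uuid4().hex[:8]
```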
@@ -0,0 +1,220 @@
"""Tests for ReportStore run_id support.

Covers:
- Writes with run_id go to runs/{run_id}/ subdirectory
- Reads without run_id resolve via latest.json pointer
- Backward-compatible reads from legacy flat layout
- Multiple runs on the same day don't overwrite each other
"""

from __future__ import annotations

import json
from pathlib import Path
from unittest.mock import patch

import pytest

from tradingagents import report_paths
from tradingagents.portfolio.report_store import ReportStore


@pytest.fixture
def tmp_reports(tmp_path):
    """Temporary reports directory."""
    reports_dir = tmp_path / "reports"
    reports_dir.mkdir()
    return reports_dir


# ---------------------------------------------------------------------------
# Write with run_id → scoped directory
# ---------------------------------------------------------------------------


def test_save_scan_with_run_id_creates_scoped_path(tmp_reports):
    """save_scan with run_id should write under runs/{run_id}/market/."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_reports):
        store = ReportStore(base_dir=tmp_reports, run_id="abc12345")
        path = store.save_scan("2026-03-20", {"watchlist": ["AAPL"]})

        assert "runs/abc12345/market" in str(path)
        assert path.exists()
        data = json.loads(path.read_text())
        assert data["watchlist"] == ["AAPL"]


def test_save_analysis_with_run_id_creates_scoped_path(tmp_reports):
    """save_analysis with run_id should write under runs/{run_id}/{TICKER}/."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_reports):
        store = ReportStore(base_dir=tmp_reports, run_id="abc12345")
        path = store.save_analysis("2026-03-20", "AAPL", {"score": 0.9})

        assert "runs/abc12345/AAPL" in str(path)
        data = json.loads(path.read_text())
        assert data["score"] == 0.9


# ---------------------------------------------------------------------------
# Read without run_id → latest.json resolution
# ---------------------------------------------------------------------------


def test_load_scan_resolves_via_latest_pointer(tmp_reports):
    """load_scan without run_id should use latest.json to find the right run."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_reports):
        # Write with run_id
        writer = ReportStore(base_dir=tmp_reports, run_id="abc12345")
        writer.save_scan("2026-03-20", {"watchlist": ["AAPL"]})

        # Read without run_id
        reader = ReportStore(base_dir=tmp_reports)
        data = reader.load_scan("2026-03-20")

        assert data is not None
        assert data["watchlist"] == ["AAPL"]


def test_load_analysis_resolves_via_latest_pointer(tmp_reports):
    """load_analysis without run_id should use latest.json."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_reports):
        writer = ReportStore(base_dir=tmp_reports, run_id="abc12345")
        writer.save_analysis("2026-03-20", "MSFT", {"score": 0.85})

        reader = ReportStore(base_dir=tmp_reports)
        data = reader.load_analysis("2026-03-20", "MSFT")

        assert data is not None
        assert data["score"] == 0.85


# ---------------------------------------------------------------------------
# Backward compatibility — legacy flat layout
# ---------------------------------------------------------------------------


def test_load_scan_falls_back_to_legacy_layout(tmp_reports):
    """When no latest.json exists, load from the legacy flat layout."""
    # Write directly to legacy path (no run_id, no latest.json)
    legacy_dir = tmp_reports / "daily" / "2026-03-20" / "market"
    legacy_dir.mkdir(parents=True)
    (legacy_dir / "macro_scan_summary.json").write_text(
        json.dumps({"legacy": True}), encoding="utf-8"
    )

    reader = ReportStore(base_dir=tmp_reports)
    data = reader.load_scan("2026-03-20")

    assert data is not None
    assert data["legacy"] is True


# ---------------------------------------------------------------------------
# Multiple runs — no overwrite
# ---------------------------------------------------------------------------


def test_multiple_runs_same_day_no_overwrite(tmp_reports):
    """Two runs on the same day should both be preserved on disk."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_reports):
        store1 = ReportStore(base_dir=tmp_reports, run_id="run_001")
        store1.save_scan("2026-03-20", {"run": 1})

        store2 = ReportStore(base_dir=tmp_reports, run_id="run_002")
        store2.save_scan("2026-03-20", {"run": 2})

        # Both directories should exist
        run1_dir = tmp_reports / "daily" / "2026-03-20" / "runs" / "run_001" / "market"
        run2_dir = tmp_reports / "daily" / "2026-03-20" / "runs" / "run_002" / "market"
        assert run1_dir.exists()
        assert run2_dir.exists()

        # Both files should have distinct content
        data1 = json.loads((run1_dir / "macro_scan_summary.json").read_text())
        data2 = json.loads((run2_dir / "macro_scan_summary.json").read_text())
        assert data1["run"] == 1
        assert data2["run"] == 2


def test_latest_pointer_points_to_second_run(tmp_reports):
    """After two runs, latest.json should point to the second run."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_reports):
        store1 = ReportStore(base_dir=tmp_reports, run_id="run_001")
        store1.save_scan("2026-03-20", {"run": 1})

        store2 = ReportStore(base_dir=tmp_reports, run_id="run_002")
        store2.save_scan("2026-03-20", {"run": 2})

        # Reader (no run_id) should get the second run's data
        reader = ReportStore(base_dir=tmp_reports)
        data = reader.load_scan("2026-03-20")

        assert data is not None
        assert data["run"] == 2


# ---------------------------------------------------------------------------
# Portfolio reports with run_id
# ---------------------------------------------------------------------------


def test_save_and_load_pm_decision_with_run_id(tmp_reports):
    """PM decision save/load with run_id should work through latest.json."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_reports):
        writer = ReportStore(base_dir=tmp_reports, run_id="run_pm")
        writer.save_pm_decision("2026-03-20", "pid-123", {"buys": ["AAPL"]})

        reader = ReportStore(base_dir=tmp_reports)
        data = reader.load_pm_decision("2026-03-20", "pid-123")

        assert data is not None
        assert data["buys"] == ["AAPL"]


def test_save_and_load_execution_result_with_run_id(tmp_reports):
    """Execution result save/load with run_id should work through latest.json."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_reports):
        writer = ReportStore(base_dir=tmp_reports, run_id="run_exec")
        writer.save_execution_result("2026-03-20", "pid-123", {"trades": 3})

        reader = ReportStore(base_dir=tmp_reports)
        data = reader.load_execution_result("2026-03-20", "pid-123")

        assert data is not None
        assert data["trades"] == 3


def test_list_pm_decisions_finds_both_layouts(tmp_reports):
    """list_pm_decisions should find decisions in both run-scoped and flat layouts."""
    with patch.object(report_paths, "REPORTS_ROOT", tmp_reports):
        # Run-scoped
        writer = ReportStore(base_dir=tmp_reports, run_id="run_001")
        writer.save_pm_decision("2026-03-20", "pid-abc", {"date": "2026-03-20"})

        # Also write to legacy flat layout
        legacy_dir = tmp_reports / "daily" / "2026-03-19" / "portfolio"
        legacy_dir.mkdir(parents=True)
        (legacy_dir / "pid-abc_pm_decision.json").write_text(
            json.dumps({"date": "2026-03-19"}), encoding="utf-8"
        )

        reader = ReportStore(base_dir=tmp_reports)
        paths = reader.list_pm_decisions("pid-abc")
        assert len(paths) == 2


# ---------------------------------------------------------------------------
# run_id property
# ---------------------------------------------------------------------------


def test_run_id_property():
    """ReportStore.run_id should return the configured run_id."""
    store = ReportStore(run_id="test123")
    assert store.run_id == "test123"


def test_run_id_property_none():
    """ReportStore.run_id should return None when not set."""
    store = ReportStore()
    assert store.run_id is None
@@ -0,0 +1,119 @@
"""Tests for tradingagents.portfolio.store_factory.

Covers:
- Default (no env var) returns filesystem ReportStore
- TRADINGAGENTS_MONGO_URI returns MongoReportStore
- Explicit mongo_uri parameter takes precedence
- MongoDB failure falls back to filesystem
"""

from __future__ import annotations

from unittest.mock import MagicMock, patch

import pytest

from tradingagents.portfolio.report_store import ReportStore
from tradingagents.portfolio.store_factory import create_report_store


# ---------------------------------------------------------------------------
# Default: filesystem
# ---------------------------------------------------------------------------


def test_default_returns_filesystem_store():
    """When no MongoDB URI is configured, the factory returns ReportStore."""
    with patch.dict("os.environ", {}, clear=True):
        store = create_report_store()

    assert isinstance(store, ReportStore)


def test_default_passes_run_id():
    """run_id should be forwarded to the filesystem store."""
    with patch.dict("os.environ", {}, clear=True):
        store = create_report_store(run_id="abc123")

    assert isinstance(store, ReportStore)
    assert store.run_id == "abc123"


def test_base_dir_forwarded():
    """base_dir should be forwarded to the filesystem store."""
    with patch.dict("os.environ", {}, clear=True):
        store = create_report_store(base_dir="/custom/reports")

    assert isinstance(store, ReportStore)


# ---------------------------------------------------------------------------
# Explicit mongo_uri → MongoDB
# ---------------------------------------------------------------------------


def test_explicit_mongo_uri_returns_mongo_store():
    """When mongo_uri is provided, the factory returns MongoReportStore."""
    with patch(
        "tradingagents.portfolio.store_factory.MongoReportStore",
        create=True,
    ) as MockMongo, \
            patch("tradingagents.portfolio.mongo_report_store.MongoClient") as mock_client_cls:
        mock_store = MagicMock()
        mock_store.run_id = "abc"

        # Import the real module so the factory can import it
        from tradingagents.portfolio.mongo_report_store import MongoReportStore

        with patch(
            "tradingagents.portfolio.mongo_report_store.MongoClient"
        ) as mock_mc:
            mock_mc.return_value = MagicMock()
            store = create_report_store(
                run_id="abc",
                mongo_uri="mongodb://localhost:27017",
            )
        # It should be a MongoReportStore or fall back to ReportStore
        # Since MongoDB might fail in tests, just check it returns something
        assert store is not None


# ---------------------------------------------------------------------------
# MongoDB failure → filesystem fallback
# ---------------------------------------------------------------------------


def test_mongo_failure_falls_back_to_filesystem():
    """When MongoDB connection fails, the factory falls back to ReportStore."""
    with patch(
        "tradingagents.portfolio.mongo_report_store.MongoClient",
        side_effect=Exception("connection refused"),
    ):
        store = create_report_store(
            run_id="test",
            mongo_uri="mongodb://bad-host:27017",
        )

    assert isinstance(store, ReportStore)
    assert store.run_id == "test"


# ---------------------------------------------------------------------------
# Env var
# ---------------------------------------------------------------------------


def test_env_var_mongo_uri():
    """TRADINGAGENTS_MONGO_URI env var should trigger MongoDB store."""
    with patch.dict(
        "os.environ",
        {"TRADINGAGENTS_MONGO_URI": "mongodb://envhost:27017"},
    ), patch(
        "tradingagents.portfolio.mongo_report_store.MongoClient",
        side_effect=Exception("connection refused"),
    ):
        # Will fail to connect, but should try and then fall back
        store = create_report_store()

    # Should fall back to filesystem
    assert isinstance(store, ReportStore)
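Taken together, these tests pin down the factory's selection order: explicit `mongo_uri` first, then the `TRADINGAGENTS_MONGO_URI` env var, then the filesystem store, with any MongoDB failure degrading to the filesystem. A minimal sketch consistent with the tests; the real logic lives in `store_factory.py`:

```python
import os


def create_report_store(run_id=None, base_dir="reports", mongo_uri=None):
    """Sketch of the documented selection logic, not the shipped factory."""
    uri = mongo_uri or os.environ.get("TRADINGAGENTS_MONGO_URI")
    if uri:
        try:
            from tradingagents.portfolio.mongo_report_store import MongoReportStore
            return MongoReportStore(uri, run_id=run_id)
        except Exception:
            pass  # fall through: connection/index failure → filesystem store
    from tradingagents.portfolio.report_store import ReportStore
    return ReportStore(base_dir=base_dir, run_id=run_id)
```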
@@ -104,4 +104,9 @@ DEFAULT_CONFIG = {
         # Finnhub free tier provides same data + MSPR aggregate bonus signal
         "get_insider_transactions": "finnhub",
     },
+    # Report storage backend
+    # When mongo_uri is set, reports are persisted in MongoDB (never overwritten).
+    # Otherwise, the filesystem store is used (run_id prevents same-day overwrites).
+    "mongo_uri": _env("MONGO_URI"),  # e.g. "mongodb://localhost:27017"
+    "mongo_db": _env("MONGO_DB", "tradingagents"),  # MongoDB database name
 }
@@ -0,0 +1 @@
"""Agent memory subsystem for TradingAgents."""
@@ -0,0 +1,282 @@
"""Reflexion memory — learn from past trading decisions.

Stores agent decisions with rationale and later associates actual market
outcomes, enabling agents to *reflect* on the accuracy of their previous
calls and adjust their confidence accordingly.

Backed by MongoDB when available; falls back to a local JSON file when not.

Schema (``reflexion`` collection)::

    {
        "ticker": str,              # "AAPL"
        "decision_date": str,       # ISO date "2026-03-20"
        "decision": str,            # "BUY" | "SELL" | "HOLD" | "SKIP"
        "rationale": str,           # free-form reasoning
        "confidence": str,          # "high" | "medium" | "low"
        "source": str,              # "pipeline" | "portfolio" | "auto"
        "run_id": str | None,
        "outcome": dict | None,     # filled later by record_outcome()
        "created_at": datetime,
    }

Usage::

    from tradingagents.memory.reflexion import ReflexionMemory

    mem = ReflexionMemory("mongodb://localhost:27017")
    mem.record_decision("AAPL", "2026-03-20", "BUY", "Strong fundamentals", "high")
    history = mem.get_history("AAPL", limit=5)
    context = mem.build_context("AAPL", limit=3)
"""

from __future__ import annotations

import json
import logging
from datetime import datetime, timezone
from pathlib import Path
from typing import Any

logger = logging.getLogger(__name__)

_COLLECTION = "reflexion"


class ReflexionMemory:
    """MongoDB-backed reflexion memory.

    Falls back to a local JSON file when MongoDB is unavailable, so the
    feature always works (though with degraded query performance on the
    local variant).
    """

    def __init__(
        self,
        mongo_uri: str | None = None,
        db_name: str = "tradingagents",
        fallback_path: str | Path = "reports/reflexion.json",
    ) -> None:
        self._col = None
        self._fallback_path = Path(fallback_path)

        if mongo_uri:
            try:
                from pymongo import DESCENDING, MongoClient

                client = MongoClient(mongo_uri)
                db = client[db_name]
                self._col = db[_COLLECTION]
                self._col.create_index(
                    [("ticker", 1), ("decision_date", DESCENDING)]
                )
                self._col.create_index("created_at")
                logger.info("ReflexionMemory using MongoDB (db=%s)", db_name)
            except Exception:
                # Reset so a partially-initialised collection handle cannot
                # leak through; all operations then use the local file.
                self._col = None
                logger.warning(
                    "ReflexionMemory: MongoDB unavailable — using local file",
                    exc_info=True,
                )

    # ------------------------------------------------------------------
    # Record decision
    # ------------------------------------------------------------------

    def record_decision(
        self,
        ticker: str,
        date: str,
        decision: str,
        rationale: str,
        confidence: str = "medium",
        source: str = "pipeline",
        run_id: str | None = None,
    ) -> None:
        """Store a trading decision for later reflection.

        Args:
            ticker: Ticker symbol.
            date: ISO date string.
            decision: "BUY", "SELL", "HOLD", or "SKIP".
            rationale: Agent's reasoning.
            confidence: "high", "medium", or "low".
            source: Which pipeline produced the decision.
            run_id: Optional run identifier.
        """
        doc = {
            "ticker": ticker.upper(),
            "decision_date": date,
            "decision": decision.upper(),
            "rationale": rationale,
            "confidence": confidence.lower(),
            "source": source,
            "run_id": run_id,
            "outcome": None,
            "created_at": datetime.now(timezone.utc),
        }
        if self._col is not None:
            self._col.insert_one(doc)
        else:
            # Local JSON fallback uses ISO string (JSON has no datetime type)
            doc["created_at"] = doc["created_at"].isoformat()
            self._append_local(doc)

    # ------------------------------------------------------------------
    # Record outcome (feedback loop)
    # ------------------------------------------------------------------

    def record_outcome(
        self,
        ticker: str,
        decision_date: str,
        outcome: dict[str, Any],
    ) -> bool:
        """Attach an outcome to the most recent decision for a ticker+date.

        Args:
            ticker: Ticker symbol.
            decision_date: The date the original decision was made.
            outcome: Dict with evaluation data, e.g.::

                {
                    "evaluation_date": "2026-04-20",
                    "price_at_decision": 185.0,
                    "price_at_evaluation": 195.0,
                    "price_change_pct": 5.4,
                    "correct": True,
                }

        Returns:
            True if a matching decision was found and updated.
        """
        if self._col is not None:
            from pymongo import DESCENDING

            doc = self._col.find_one_and_update(
                {
                    "ticker": ticker.upper(),
                    "decision_date": decision_date,
                    "outcome": None,
                },
                {"$set": {"outcome": outcome}},
                sort=[("created_at", DESCENDING)],
            )
            return doc is not None
        else:
            return self._update_local_outcome(ticker.upper(), decision_date, outcome)

    # ------------------------------------------------------------------
    # Query
    # ------------------------------------------------------------------

    def get_history(
        self,
        ticker: str,
        limit: int = 10,
    ) -> list[dict[str, Any]]:
        """Return the most recent decisions for *ticker*, newest first.

        Args:
            ticker: Ticker symbol.
            limit: Maximum number of results.
        """
        if self._col is not None:
            from pymongo import DESCENDING

            cursor = self._col.find(
                {"ticker": ticker.upper()},
                {"_id": 0},
            ).sort("decision_date", DESCENDING).limit(limit)
            return list(cursor)
        else:
            return self._load_local(ticker.upper(), limit)

    def build_context(self, ticker: str, limit: int = 3) -> str:
        """Build a human-readable context string from past decisions.

        Suitable for injection into agent system prompts::

            context = memory.build_context("AAPL", limit=3)
            system_prompt = f"...\\n\\nPast decisions:\\n{context}"

        Args:
            ticker: Ticker symbol.
            limit: How many past decisions to include.

        Returns:
            Multi-line string summarising recent decisions and outcomes.
        """
        history = self.get_history(ticker, limit=limit)
        if not history:
            return f"No prior decisions recorded for {ticker.upper()}."

        lines: list[str] = []
        for rec in history:
            dt = rec.get("decision_date", "?")
            dec = rec.get("decision", "?")
            conf = rec.get("confidence", "?")
            rat = rec.get("rationale", "")[:200]

            outcome = rec.get("outcome")
            if outcome:
                pct = outcome.get("price_change_pct", "?")
                correct = outcome.get("correct", "?")
                outcome_str = f"  Outcome: {pct}% change, correct={correct}"
            else:
                outcome_str = "  Outcome: pending"

            lines.append(
                f"- [{dt}] {dec} (confidence: {conf})\n"
                f"  Rationale: {rat}\n{outcome_str}"
            )
        return "\n".join(lines)

    # ------------------------------------------------------------------
    # Local JSON fallback
    # ------------------------------------------------------------------

    def _load_all_local(self) -> list[dict[str, Any]]:
        """Load all records from the local JSON file."""
        if not self._fallback_path.exists():
            return []
        try:
            return json.loads(self._fallback_path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            return []

    def _save_all_local(self, records: list[dict[str, Any]]) -> None:
        """Overwrite the local JSON file with all records."""
        self._fallback_path.parent.mkdir(parents=True, exist_ok=True)
        self._fallback_path.write_text(
            json.dumps(records, indent=2), encoding="utf-8"
        )

    def _append_local(self, doc: dict[str, Any]) -> None:
        """Append a single record to the local file."""
        records = self._load_all_local()
        records.append(doc)
        self._save_all_local(records)

    def _load_local(self, ticker: str, limit: int) -> list[dict[str, Any]]:
        """Load and filter records for a ticker from the local file."""
        records = self._load_all_local()
        filtered = [r for r in records if r.get("ticker") == ticker]
        filtered.sort(key=lambda r: r.get("decision_date", ""), reverse=True)
        return filtered[:limit]

    def _update_local_outcome(
        self, ticker: str, decision_date: str, outcome: dict[str, Any]
    ) -> bool:
        """Update the most recent matching decision in the local file."""
        records = self._load_all_local()
        # Find matching records (newest first)
        for rec in reversed(records):
            if (
                rec.get("ticker") == ticker
                and rec.get("decision_date") == decision_date
                and rec.get("outcome") is None
            ):
                rec["outcome"] = outcome
                self._save_all_local(records)
                return True
        return False
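End-to-end, the reflexion loop reads as: record at decision time, score later, then surface the scored history to the agent. Using only the methods defined above, in local-file mode:

```python
from tradingagents.memory.reflexion import ReflexionMemory

mem = ReflexionMemory(fallback_path="reports/reflexion.json")
mem.record_decision("AAPL", "2026-03-20", "BUY", "Strong momentum", "high")
mem.record_outcome("AAPL", "2026-03-20", {"price_change_pct": 5.4, "correct": True})

# Produces lines like "- [2026-03-20] BUY (confidence: high)" followed by
# the rationale and "Outcome: 5.4% change, correct=True"
print(mem.build_context("AAPL"))
```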
@@ -322,14 +322,18 @@ def _extract_graph_node(kwargs: dict) -> str:
 # Thread-local context for passing RunLogger to vendor/tool layers
 # ──────────────────────────────────────────────────────────────────────────────

-_current_run_logger: threading.local = threading.local()
+import contextvars as _cv
+
+_current_run_logger: _cv.ContextVar["RunLogger | None"] = _cv.ContextVar(
+    "current_run_logger", default=None
+)


-def set_run_logger(rl: RunLogger | None) -> None:
-    """Set the active RunLogger for the current thread."""
-    _current_run_logger.instance = rl
+def set_run_logger(rl: "RunLogger | None") -> None:
+    """Set the active RunLogger for the current async task or thread."""
+    _current_run_logger.set(rl)


-def get_run_logger() -> RunLogger | None:
-    """Get the active RunLogger (or None if not set)."""
-    return getattr(_current_run_logger, "instance", None)
+def get_run_logger() -> "RunLogger | None":
+    """Get the active RunLogger for the current async task (or None if not set)."""
+    return _current_run_logger.get()
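The swap above is the substance of Finding 3: `threading.local` is shared by every asyncio task scheduled on the same event-loop thread, while a `ContextVar` set inside a task stays local to that task. A self-contained demo (no repo imports) showing the leak the fix prevents:

```python
import asyncio
import contextvars
import threading

_ctx = contextvars.ContextVar("run_logger", default=None)
_tls = threading.local()


async def task(name):
    _ctx.set(name)          # per-task: each task mutates its own copied context
    _tls.value = name       # per-thread: both tasks share one slot
    await asyncio.sleep(0)  # yield so the tasks interleave
    # The ContextVar survives the interleaving; threading.local now holds
    # whatever the last task wrote ("B" here, for both tasks).
    return _ctx.get(), _tls.value


async def main():
    print(await asyncio.gather(task("A"), task("B")))
    # [('A', 'B'), ('B', 'B')] — task B's value leaked into task A via tls


asyncio.run(main())
```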
@@ -0,0 +1,279 @@
"""MongoDB document store for Portfolio Manager reports.

Drop-in replacement for the filesystem :class:`ReportStore` that persists
every report as a MongoDB document. Multiple same-day runs naturally coexist
because each document carries a ``run_id`` and ``created_at`` timestamp —
no files are ever overwritten.

Required dependency: ``pymongo >= 4.12``.

Usage::

    from tradingagents.portfolio.mongo_report_store import MongoReportStore

    store = MongoReportStore("mongodb://localhost:27017", run_id="a1b2c3d4")
    store.save_scan("2026-03-20", {"watchlist": ["AAPL"]})
    data = store.load_scan("2026-03-20")
"""

from __future__ import annotations

import logging
from datetime import datetime, timezone
from typing import Any

from pymongo import DESCENDING, MongoClient
from pymongo.collection import Collection
from pymongo.database import Database

from tradingagents.portfolio.exceptions import ReportStoreError

logger = logging.getLogger(__name__)

# Canonical collection names
_REPORTS_COLLECTION = "reports"


class MongoReportStore:
    """MongoDB-backed report store.

    Each report is a document in the ``reports`` collection with the schema::

        {
            "run_id": str,              # short hex id for the run
            "date": str,                # ISO date string "2026-03-20"
            "report_type": str,         # scan | analysis | holding_review
                                        # | risk_metrics | pm_decision
                                        # | execution_result
            "ticker": str | None,       # uppercase ticker (analysis, holding_review)
            "portfolio_id": str | None, # portfolio UUID (risk, decision, execution)
            "data": dict,               # the actual report payload
            "markdown": str | None,     # optional markdown (pm_decision only)
            "created_at": datetime,     # UTC timestamp
        }

    All load methods return the **most recent** document for a given
    ``(date, report_type [, ticker | portfolio_id])`` tuple, ordered by
    ``created_at DESC``. Pass a specific ``run_id`` to ``load_*`` via
    ``load_scan(date, run_id=run_id)`` to pin to a particular run.
    """

    def __init__(
        self,
        connection_string: str,
        db_name: str = "tradingagents",
        run_id: str | None = None,
    ) -> None:
        self._run_id = run_id
        try:
            self._client: MongoClient = MongoClient(connection_string)
            self._db: Database = self._client[db_name]
            self._col: Collection = self._db[_REPORTS_COLLECTION]
        except Exception as exc:
            raise ReportStoreError(f"MongoDB connection failed: {exc}") from exc
        self.ensure_indexes()

    @property
    def run_id(self) -> str | None:
        return self._run_id

    def ensure_indexes(self) -> None:
        """Create indexes for efficient querying (idempotent)."""
        self._col.create_index([("date", DESCENDING), ("report_type", 1)])
        self._col.create_index(
            [("date", DESCENDING), ("report_type", 1), ("ticker", 1)]
        )
        self._col.create_index(
            [("date", DESCENDING), ("report_type", 1), ("portfolio_id", 1)]
        )
        self._col.create_index("run_id")
        self._col.create_index("created_at")

    # ------------------------------------------------------------------
    # Internal helpers
    # ------------------------------------------------------------------

    def _save(
        self,
        date: str,
        report_type: str,
        data: dict[str, Any],
        *,
        ticker: str | None = None,
        portfolio_id: str | None = None,
        markdown: str | None = None,
    ) -> str:
        """Insert a report document. Returns the inserted document's _id."""
        doc = {
            "run_id": self._run_id,
            "date": date,
            "report_type": report_type,
            "ticker": ticker.upper() if ticker else None,
            "portfolio_id": portfolio_id,
            "data": data,
            "markdown": markdown,
            "created_at": datetime.now(timezone.utc),
        }
        try:
            result = self._col.insert_one(doc)
            return str(result.inserted_id)
        except Exception as exc:
            raise ReportStoreError(
                f"MongoDB insert failed ({report_type}): {exc}"
            ) from exc

    def _load(
        self,
        date: str,
        report_type: str,
        *,
        ticker: str | None = None,
        portfolio_id: str | None = None,
        run_id: str | None = None,
    ) -> dict[str, Any] | None:
        """Load the most recent document matching the query.

        When *run_id* is provided, only documents from that run are considered.
        Otherwise the most recent (by ``created_at``) is returned.
        """
        query: dict[str, Any] = {"date": date, "report_type": report_type}
        if ticker:
            query["ticker"] = ticker.upper()
        if portfolio_id:
            query["portfolio_id"] = portfolio_id
        if run_id:
            query["run_id"] = run_id

        doc = self._col.find_one(query, sort=[("created_at", DESCENDING)])
        if doc is None:
            return None
        return doc.get("data")

    # ------------------------------------------------------------------
    # Macro Scan
    # ------------------------------------------------------------------

    def save_scan(self, date: str, data: dict[str, Any]) -> str:
        return self._save(date, "scan", data)

    def load_scan(self, date: str, *, run_id: str | None = None) -> dict[str, Any] | None:
        return self._load(date, "scan", run_id=run_id)

    # ------------------------------------------------------------------
    # Per-Ticker Analysis
    # ------------------------------------------------------------------

    def save_analysis(self, date: str, ticker: str, data: dict[str, Any]) -> str:
        return self._save(date, "analysis", data, ticker=ticker)

    def load_analysis(
        self, date: str, ticker: str, *, run_id: str | None = None
    ) -> dict[str, Any] | None:
        return self._load(date, "analysis", ticker=ticker, run_id=run_id)

    # ------------------------------------------------------------------
    # Holding Reviews
    # ------------------------------------------------------------------

    def save_holding_review(
        self, date: str, ticker: str, data: dict[str, Any]
    ) -> str:
        return self._save(date, "holding_review", data, ticker=ticker)

    def load_holding_review(
        self, date: str, ticker: str, *, run_id: str | None = None
    ) -> dict[str, Any] | None:
        return self._load(date, "holding_review", ticker=ticker, run_id=run_id)

    # ------------------------------------------------------------------
    # Risk Metrics
    # ------------------------------------------------------------------

    def save_risk_metrics(
        self, date: str, portfolio_id: str, data: dict[str, Any]
    ) -> str:
        return self._save(date, "risk_metrics", data, portfolio_id=portfolio_id)

    def load_risk_metrics(
        self, date: str, portfolio_id: str, *, run_id: str | None = None
    ) -> dict[str, Any] | None:
        return self._load(
            date, "risk_metrics", portfolio_id=portfolio_id, run_id=run_id
        )

    # ------------------------------------------------------------------
    # PM Decisions
    # ------------------------------------------------------------------

    def save_pm_decision(
        self,
        date: str,
        portfolio_id: str,
        data: dict[str, Any],
        markdown: str | None = None,
    ) -> str:
        return self._save(
            date, "pm_decision", data,
            portfolio_id=portfolio_id, markdown=markdown,
        )

    def load_pm_decision(
        self, date: str, portfolio_id: str, *, run_id: str | None = None
    ) -> dict[str, Any] | None:
        return self._load(
            date, "pm_decision", portfolio_id=portfolio_id, run_id=run_id
        )

    # ------------------------------------------------------------------
    # Execution Results
    # ------------------------------------------------------------------

    def save_execution_result(
        self, date: str, portfolio_id: str, data: dict[str, Any]
    ) -> str:
        return self._save(
            date, "execution_result", data, portfolio_id=portfolio_id,
        )

    def load_execution_result(
        self, date: str, portfolio_id: str, *, run_id: str | None = None
    ) -> dict[str, Any] | None:
        return self._load(
            date, "execution_result", portfolio_id=portfolio_id, run_id=run_id,
        )

    # ------------------------------------------------------------------
    # Utility
    # ------------------------------------------------------------------

    def clear_portfolio_stage(self, date: str, portfolio_id: str) -> list[str]:
        """Delete PM decision and execution result documents for a given date/portfolio."""
        deleted = []
        for rtype in ("pm_decision", "execution_result"):
            result = self._col.delete_many(
                {"date": date, "report_type": rtype, "portfolio_id": portfolio_id}
            )
            if result.deleted_count:
                deleted.append(rtype)
        return deleted

    def list_pm_decisions(self, portfolio_id: str) -> list[dict[str, Any]]:
        """Return all PM decisions for a portfolio, newest first.

        Excludes ``_id`` (BSON ObjectId) which is not JSON-serializable.
        """
        return list(
            self._col.find(
                {"report_type": "pm_decision", "portfolio_id": portfolio_id},
                {"_id": 0},
                sort=[("date", DESCENDING), ("created_at", DESCENDING)],
            )
        )

    def list_analyses_for_date(self, date: str) -> list[str]:
        """Return ticker symbols that have an analysis for the given date."""
        docs = self._col.find(
            {"date": date, "report_type": "analysis"},
            {"ticker": 1},
        )
        return list({d["ticker"] for d in docs if d.get("ticker")})
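Read semantics in practice: `load_*` without `run_id` returns the newest document by `created_at`, while passing `run_id` pins an earlier run, straight from the class docstring above:

```python
from tradingagents.portfolio.mongo_report_store import MongoReportStore

store = MongoReportStore("mongodb://localhost:27017", run_id="run_002")
store.save_scan("2026-03-20", {"run": 2})

latest = store.load_scan("2026-03-20")                    # newest same-day run
pinned = store.load_scan("2026-03-20", run_id="run_001")  # a specific earlier run
```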
@@ -4,26 +4,32 @@ Saves and loads all non-transactional portfolio artifacts (scans, per-ticker
 analysis, holding reviews, risk metrics, PM decisions) using the existing
 ``tradingagents/report_paths.py`` path convention.

-Directory layout::
+When a ``run_id`` is set on the store, all artifacts are written under a
+run-specific subdirectory so that same-day re-runs never overwrite earlier
+results::

-    reports/daily/{date}/
+    reports/daily/{date}/runs/{run_id}/
     ├── market/
-    │   └── macro_scan_summary.json      ← save_scan / load_scan
+    │   └── macro_scan_summary.json
     ├── {TICKER}/
-    │   └── complete_report.json         ← save_analysis / load_analysis
+    │   └── complete_report.json
     └── portfolio/
-        ├── {TICKER}_holding_review.json ← save/load_holding_review
+        ├── {TICKER}_holding_review.json
         ├── {portfolio_id}_risk_metrics.json
         ├── {portfolio_id}_pm_decision.json
         └── {portfolio_id}_pm_decision.md

+A ``latest.json`` pointer at the date level is updated on every write so
+that load methods (when called *without* a ``run_id``) transparently
+resolve to the most recent run.
+
 Usage::

     from tradingagents.portfolio.report_store import ReportStore

-    store = ReportStore()
+    store = ReportStore(run_id="a1b2c3d4")
     store.save_scan("2026-03-20", {"watchlist": [...]})
-    data = store.load_scan("2026-03-20")
+    data = store.load_scan("2026-03-20")  # reads from latest run
 """

 from __future__ import annotations
@@ -33,6 +39,7 @@ from pathlib import Path
 from typing import Any

 from tradingagents.portfolio.exceptions import ReportStoreError
+from tradingagents.report_paths import read_latest_pointer, write_latest_pointer


 class ReportStore:
@@ -40,9 +47,18 @@ class ReportStore:

     Directories are created automatically on first write.
     All load methods return ``None`` when the file does not exist.
+
+    When ``run_id`` is provided, write paths are scoped under
+    ``{base_dir}/daily/{date}/runs/{run_id}/…`` and a ``latest.json``
+    pointer is updated automatically. Load methods resolve through
+    the pointer when no ``run_id`` is set.
     """

-    def __init__(self, base_dir: str | Path = "reports") -> None:
+    def __init__(
+        self,
+        base_dir: str | Path = "reports",
+        run_id: str | None = None,
+    ) -> None:
         """Initialise the store with a base reports directory.

         Args:
@@ -50,19 +66,60 @@ class ReportStore:
                 (relative to CWD), matching ``report_paths.REPORTS_ROOT``.
                 Override via the ``PORTFOLIO_DATA_DIR`` env var or
                 ``get_portfolio_config()["data_dir"]``.
+            run_id: Optional short identifier for the current run. When set,
+                all writes are scoped under a ``runs/{run_id}/``
+                subdirectory so that same-day re-runs are preserved.
         """
         self._base_dir = Path(base_dir)
+        self._run_id = run_id
+
+    @property
+    def run_id(self) -> str | None:
+        """The run identifier set on this store, if any."""
+        return self._run_id

     # ------------------------------------------------------------------
     # Internal helpers
     # ------------------------------------------------------------------

-    def _portfolio_dir(self, date: str) -> Path:
+    def _date_root(self, date: str, *, for_write: bool = False) -> Path:
+        """Return the base directory for a given date, scoped by run_id.
+
+        When ``for_write=True``, the run_id *must* be used (if present) so
+        that writes land in the run-specific directory.
+
+        When ``for_write=False`` (reads), the method first tries the
+        run_id directory, then falls back to the latest.json pointer, and
+        finally falls back to the legacy flat layout.
+        """
+        daily = self._base_dir / "daily" / date
+
+        if for_write and self._run_id:
+            return daily / "runs" / self._run_id
+        if self._run_id:
+            return daily / "runs" / self._run_id
+
+        # Read path: check latest.json pointer (using our base_dir)
+        latest_id = read_latest_pointer(date, base_dir=self._base_dir)
+        if latest_id:
+            candidate = daily / "runs" / latest_id
+            if candidate.exists():
+                return candidate
+
+        # Fallback to legacy flat layout
+        return daily
+
+    def _update_latest(self, date: str) -> None:
+        """Update the latest.json pointer if run_id is set."""
+        if self._run_id:
+            write_latest_pointer(date, self._run_id, base_dir=self._base_dir)
+
+    def _portfolio_dir(self, date: str, *, for_write: bool = False) -> Path:
         """Return the portfolio subdirectory for a given date.

-        Path: ``{base_dir}/daily/{date}/portfolio/``
+        Path: ``{base}/daily/{date}[/runs/{run_id}]/portfolio/``
         """
-        return self._base_dir / "daily" / date / "portfolio"
+        return self._date_root(date, for_write=for_write) / "portfolio"

     @staticmethod
     def _sanitize(obj: Any) -> Any:
@@ -134,7 +191,7 @@ class ReportStore:
     def save_scan(self, date: str, data: dict[str, Any]) -> Path:
         """Save macro scan summary JSON.

-        Path: ``{base_dir}/daily/{date}/market/macro_scan_summary.json``
+        Path: ``{base}/daily/{date}[/runs/{run_id}]/market/macro_scan_summary.json``

         Args:
             date: ISO date string, e.g. ``"2026-03-20"``.
@@ -143,12 +200,16 @@ class ReportStore:
         Returns:
             Path of the written file.
         """
-        path = self._base_dir / "daily" / date / "market" / "macro_scan_summary.json"
-        return self._write_json(path, data)
+        root = self._date_root(date, for_write=True)
+        path = root / "market" / "macro_scan_summary.json"
+        result = self._write_json(path, data)
+        self._update_latest(date)
+        return result

     def load_scan(self, date: str) -> dict[str, Any] | None:
         """Load macro scan summary. Returns None if the file does not exist."""
-        path = self._base_dir / "daily" / date / "market" / "macro_scan_summary.json"
+        root = self._date_root(date)
+        path = root / "market" / "macro_scan_summary.json"
         return self._read_json(path)

     # ------------------------------------------------------------------
@@ -158,19 +219,23 @@ class ReportStore:
     def save_analysis(self, date: str, ticker: str, data: dict[str, Any]) -> Path:
         """Save per-ticker analysis report as JSON.

-        Path: ``{base_dir}/daily/{date}/{TICKER}/complete_report.json``
+        Path: ``{base}/daily/{date}[/runs/{run_id}]/{TICKER}/complete_report.json``

         Args:
             date: ISO date string.
             ticker: Ticker symbol (stored as uppercase).
             data: Analysis output dict.
         """
-        path = self._base_dir / "daily" / date / ticker.upper() / "complete_report.json"
-        return self._write_json(path, data)
+        root = self._date_root(date, for_write=True)
+        path = root / ticker.upper() / "complete_report.json"
+        result = self._write_json(path, data)
+        self._update_latest(date)
+        return result

     def load_analysis(self, date: str, ticker: str) -> dict[str, Any] | None:
         """Load per-ticker analysis JSON. Returns None if the file does not exist."""
-        path = self._base_dir / "daily" / date / ticker.upper() / "complete_report.json"
+        root = self._date_root(date)
+        path = root / ticker.upper() / "complete_report.json"
         return self._read_json(path)

     # ------------------------------------------------------------------
@@ -185,15 +250,17 @@ class ReportStore:
     ) -> Path:
         """Save holding reviewer output for one ticker.

-        Path: ``{base_dir}/daily/{date}/portfolio/{TICKER}_holding_review.json``
+        Path: ``{base}/daily/{date}[/runs/{run_id}]/portfolio/{TICKER}_holding_review.json``

         Args:
             date: ISO date string.
             ticker: Ticker symbol (stored as uppercase).
             data: HoldingReviewerAgent output dict.
         """
-        path = self._portfolio_dir(date) / f"{ticker.upper()}_holding_review.json"
-        return self._write_json(path, data)
+        path = self._portfolio_dir(date, for_write=True) / f"{ticker.upper()}_holding_review.json"
+        result = self._write_json(path, data)
+        self._update_latest(date)
+        return result

     def load_holding_review(self, date: str, ticker: str) -> dict[str, Any] | None:
         """Load holding review output. Returns None if the file does not exist."""
@@ -212,15 +279,17 @@ class ReportStore:
     ) -> Path:
         """Save risk computation results.

-        Path: ``{base_dir}/daily/{date}/portfolio/{portfolio_id}_risk_metrics.json``
+        Path: ``{base}/daily/{date}[/runs/{run_id}]/portfolio/{portfolio_id}_risk_metrics.json``

         Args:
             date: ISO date string.
             portfolio_id: UUID of the target portfolio.
             data: Risk metrics dict (Sharpe, Sortino, VaR, etc.).
         """
-        path = self._portfolio_dir(date) / f"{portfolio_id}_risk_metrics.json"
-        return self._write_json(path, data)
+        path = self._portfolio_dir(date, for_write=True) / f"{portfolio_id}_risk_metrics.json"
+        result = self._write_json(path, data)
+        self._update_latest(date)
+        return result

     def load_risk_metrics(
         self,
@@ -244,9 +313,8 @@ class ReportStore:
     ) -> Path:
         """Save PM agent decision.

-        JSON path: ``{base_dir}/daily/{date}/portfolio/{portfolio_id}_pm_decision.json``
-        MD path: ``{base_dir}/daily/{date}/portfolio/{portfolio_id}_pm_decision.md``
-        (written only when ``markdown`` is not None)
+        JSON path: ``{base}/daily/{date}[/runs/{run_id}]/portfolio/{portfolio_id}_pm_decision.json``
+        MD path: ``…/{portfolio_id}_pm_decision.md`` (written only when ``markdown`` is not None)

         Args:
             date: ISO date string.
@@ -257,14 +325,16 @@ class ReportStore:
         Returns:
             Path of the written JSON file.
         """
-        json_path = self._portfolio_dir(date) / f"{portfolio_id}_pm_decision.json"
+        pdir = self._portfolio_dir(date, for_write=True)
+        json_path = pdir / f"{portfolio_id}_pm_decision.json"
         self._write_json(json_path, data)
         if markdown is not None:
-            md_path = self._portfolio_dir(date) / f"{portfolio_id}_pm_decision.md"
+            md_path = pdir / f"{portfolio_id}_pm_decision.md"
             try:
                 md_path.write_text(markdown, encoding="utf-8")
             except OSError as exc:
                 raise ReportStoreError(f"Failed to write {md_path}: {exc}") from exc
+        self._update_latest(date)
         return json_path

     def load_pm_decision(
@@ -284,15 +354,17 @@ class ReportStore:
     ) -> Path:
         """Save trade execution results.

-        Path: ``{base_dir}/daily/{date}/portfolio/{portfolio_id}_execution_result.json``
+        Path: ``{base}/daily/{date}[/runs/{run_id}]/portfolio/{portfolio_id}_execution_result.json``

         Args:
             date: ISO date string.
             portfolio_id: UUID of the target portfolio.
             data: TradeExecutor output dict.
         """
-        path = self._portfolio_dir(date) / f"{portfolio_id}_execution_result.json"
-        return self._write_json(path, data)
+        path = self._portfolio_dir(date, for_write=True) / f"{portfolio_id}_execution_result.json"
+        result = self._write_json(path, data)
+        self._update_latest(date)
+        return result

     def load_execution_result(
         self,
@@ -308,10 +380,11 @@ class ReportStore:

         Returns a list of deleted file names so the caller can log what was removed.
         """
+        pdir = self._portfolio_dir(date, for_write=True)
         targets = [
-            self._portfolio_dir(date) / f"{portfolio_id}_pm_decision.json",
-            self._portfolio_dir(date) / f"{portfolio_id}_pm_decision.md",
-            self._portfolio_dir(date) / f"{portfolio_id}_execution_result.json",
+            pdir / f"{portfolio_id}_pm_decision.json",
+            pdir / f"{portfolio_id}_pm_decision.md",
+            pdir / f"{portfolio_id}_execution_result.json",
        ]
        deleted = []
        for path in targets:
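The hunk cuts off at the loop header; the body is unchanged context outside this diff. For orientation only, a hypothetical sketch of what that remainder plausibly looks like, consistent with the docstring's promise to return the deleted file names:

```python
from pathlib import Path


def _reset_sketch(targets: list[Path]) -> list[str]:
    # Hypothetical sketch of the unchanged loop body (not part of this PR).
    deleted: list[str] = []
    for path in targets:
        if path.exists():
            path.unlink()              # remove the stale artifact
            deleted.append(path.name)  # record the name for the caller's log
    return deleted
```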
@@ -323,7 +396,7 @@ class ReportStore:
     def list_pm_decisions(self, portfolio_id: str) -> list[Path]:
         """Return all saved PM decision JSON paths for portfolio_id, newest first.

-        Scans ``{base_dir}/daily/*/portfolio/{portfolio_id}_pm_decision.json``.
+        Searches both run-scoped and legacy flat layouts.

         Args:
             portfolio_id: UUID of the target portfolio.
@@ -331,5 +404,9 @@ class ReportStore:
         Returns:
             Sorted list of Path objects, newest date first.
         """
-        pattern = f"daily/*/portfolio/{portfolio_id}_pm_decision.json"
-        return sorted(self._base_dir.glob(pattern), reverse=True)
+        # Run-scoped layout: daily/*/runs/*/portfolio/{pid}_pm_decision.json
+        run_pattern = f"daily/*/runs/*/portfolio/{portfolio_id}_pm_decision.json"
+        # Legacy flat layout: daily/*/portfolio/{pid}_pm_decision.json
+        flat_pattern = f"daily/*/portfolio/{portfolio_id}_pm_decision.json"
+        paths = set(self._base_dir.glob(run_pattern)) | set(self._base_dir.glob(flat_pattern))
+        return sorted(paths, reverse=True)
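One ordering consequence of the union glob worth noting: `sorted(..., reverse=True)` compares full paths, so newer dates come first, and within a single date the `runs/` entries sort ahead of a legacy flat entry (`"runs"` > `"portfolio"` lexicographically). A small illustration, with a hypothetical portfolio id:

```python
store = ReportStore(base_dir="reports")

# Matches both layouts, newest date first (id and dates are illustrative):
#   reports/daily/2026-03-20/runs/a1b2c3d4/portfolio/pid-1_pm_decision.json
#   reports/daily/2026-03-20/portfolio/pid-1_pm_decision.json
#   reports/daily/2026-03-19/portfolio/pid-1_pm_decision.json
for path in store.list_pm_decisions("pid-1"):
    print(path)
```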
@@ -0,0 +1,75 @@
+"""Factory for creating the appropriate report store backend.
+
+Returns a :class:`MongoReportStore` when a MongoDB connection string is
+configured, otherwise falls back to the filesystem :class:`ReportStore`.
+
+Usage::
+
+    from tradingagents.portfolio.store_factory import create_report_store
+
+    store = create_report_store(run_id="a1b2c3d4")
+    store.save_scan("2026-03-20", {...})
+"""
+
+from __future__ import annotations
+
+import logging
+import os
+from typing import Union
+
+from tradingagents.portfolio.report_store import ReportStore
+
+logger = logging.getLogger(__name__)
+
+
+def create_report_store(
+    run_id: str | None = None,
+    *,
+    base_dir: str | None = None,
+    mongo_uri: str | None = None,
+    mongo_db: str | None = None,
+) -> Union[ReportStore, "MongoReportStore"]:  # noqa: F821
+    """Create and return the appropriate report store.
+
+    Resolution order for the backend:
+
+    1. If *mongo_uri* is passed explicitly, use MongoDB.
+    2. If ``TRADINGAGENTS_MONGO_URI`` env var is set, use MongoDB.
+    3. Fall back to the filesystem :class:`ReportStore`.
+
+    Args:
+        run_id: Short identifier for the current run.
+        base_dir: Override for the filesystem store's base directory.
+        mongo_uri: MongoDB connection string (overrides env var).
+        mongo_db: MongoDB database name (default ``"tradingagents"``).
+
+    Returns:
+        A store instance (either ``ReportStore`` or ``MongoReportStore``).
+    """
+    uri = mongo_uri or os.getenv("TRADINGAGENTS_MONGO_URI", "")
+    db = mongo_db or os.getenv("TRADINGAGENTS_MONGO_DB", "tradingagents")
+
+    if uri:
+        try:
+            from tradingagents.portfolio.mongo_report_store import MongoReportStore
+
+            store = MongoReportStore(
+                connection_string=uri,
+                db_name=db,
+                run_id=run_id,
+            )
+            # ensure_indexes() is called automatically in __init__
+            logger.info("Using MongoDB report store (db=%s, run_id=%s)", db, run_id)
+            return store
+        except Exception:
+            logger.warning(
+                "MongoDB connection failed — falling back to filesystem store",
+                exc_info=True,
+            )
+
+    # Filesystem fallback
+    _base = base_dir or os.getenv("PORTFOLIO_DATA_DIR") or os.getenv(
+        "TRADINGAGENTS_REPORTS_DIR", "reports"
+    )
+    logger.info("Using filesystem report store (base=%s, run_id=%s)", _base, run_id)
+    return ReportStore(base_dir=_base, run_id=run_id)
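A quick sketch of the factory's fallback behavior; the URI is illustrative, and the environment is read at call time:

```python
import os

from tradingagents.portfolio.store_factory import create_report_store

# An explicit URI (or TRADINGAGENTS_MONGO_URI in the environment) selects MongoDB;
# a connection failure is caught and degrades to the filesystem store.
store = create_report_store(run_id="a1b2c3d4", mongo_uri="mongodb://localhost:27017")

# With no URI configured anywhere, the filesystem store is returned directly.
os.environ.pop("TRADINGAGENTS_MONGO_URI", None)
fs_store = create_report_store(run_id="a1b2c3d4", base_dir="reports")
print(type(fs_store).__name__)  # -> "ReportStore"
```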
@@ -1,25 +1,35 @@
 """Unified report-path helpers.

 Every CLI command and internal save routine should use these helpers so that
-all generated artifacts land under a single ``reports/`` tree::
+all generated artifacts land under a single ``reports/`` tree.
+
+When a ``run_id`` is supplied the layout becomes::

     reports/
     └── daily/{YYYY-MM-DD}/
-        ├── market/                   # scan results
-        │   ├── geopolitical_report.md
-        │   └── ...
-        ├── {TICKER}/                 # per-ticker analysis / pipeline
-        │   ├── 1_analysts/
-        │   ├── ...
-        │   ├── complete_report.md
-        │   └── eval/
-        │       └── full_states_log.json
-        └── summary.md                # pipeline combined summary
+        ├── runs/{run_id}/
+        │   ├── market/               # scan results
+        │   ├── {TICKER}/             # per-ticker analysis
+        │   └── portfolio/            # PM artefacts
+        ├── latest.json               # pointer → most recent run_id
+        └── daily_digest.md           # append-only (shared across runs)
+
+Without a ``run_id`` the legacy flat layout is preserved for backward
+compatibility::
+
+    reports/
+    └── daily/{YYYY-MM-DD}/
+        ├── market/
+        ├── {TICKER}/
+        └── summary.md
 """

 from __future__ import annotations

+import json
 import os
+import uuid
+from datetime import datetime, timezone
 from pathlib import Path

 # Configurable via TRADINGAGENTS_REPORTS_DIR env var.
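The two layouts in the docstring map directly onto the path helpers updated in the next hunk; for example (date and run id illustrative, assuming the default `reports` root):

```python
get_ticker_dir("2026-03-20", "aapl", run_id="a1b2c3d4")
# -> reports/daily/2026-03-20/runs/a1b2c3d4/AAPL
get_ticker_dir("2026-03-20", "aapl")
# -> reports/daily/2026-03-20/AAPL   (legacy flat layout)
```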
@@ -27,26 +37,92 @@ from pathlib import Path
 REPORTS_ROOT = Path(os.getenv("TRADINGAGENTS_REPORTS_DIR") or "reports")


-def get_daily_dir(date: str) -> Path:
-    """``reports/daily/{date}/``"""
-    return REPORTS_ROOT / "daily" / date
+# ──────────────────────────────────────────────────────────────────────────────
+# Run-ID helpers
+# ──────────────────────────────────────────────────────────────────────────────
+
+def generate_run_id() -> str:
+    """Return a short, human-readable run identifier (8-char hex)."""
+    return uuid.uuid4().hex[:8]


-def get_market_dir(date: str) -> Path:
-    """``reports/daily/{date}/market/``"""
-    return get_daily_dir(date) / "market"
+def write_latest_pointer(date: str, run_id: str, base_dir: Path | None = None) -> Path:
+    """Write ``{base}/daily/{date}/latest.json`` pointing to *run_id*.
+
+    Args:
+        date: ISO date string.
+        run_id: Short identifier for the run.
+        base_dir: Reports root directory. Falls back to ``REPORTS_ROOT``
+            when ``None``.
+
+    Returns the path of the written file.
+    """
+    root = base_dir or REPORTS_ROOT
+    daily = root / "daily" / date
+    daily.mkdir(parents=True, exist_ok=True)
+    pointer = daily / "latest.json"
+    payload = {
+        "run_id": run_id,
+        "updated_at": datetime.now(timezone.utc).isoformat(),
+    }
+    pointer.write_text(json.dumps(payload, indent=2), encoding="utf-8")
+    return pointer


-def get_ticker_dir(date: str, ticker: str) -> Path:
-    """``reports/daily/{date}/{TICKER}/``"""
-    return get_daily_dir(date) / ticker.upper()
+def read_latest_pointer(date: str, base_dir: Path | None = None) -> str | None:
+    """Read the latest run_id for *date*, or ``None`` if no pointer exists.
+
+    Args:
+        date: ISO date string.
+        base_dir: Reports root directory. Falls back to ``REPORTS_ROOT``
+            when ``None``.
+    """
+    root = base_dir or REPORTS_ROOT
+    pointer = root / "daily" / date / "latest.json"
+    if not pointer.exists():
+        return None
+    try:
+        data = json.loads(pointer.read_text(encoding="utf-8"))
+        return data.get("run_id")
+    except (json.JSONDecodeError, OSError):
+        return None


-def get_eval_dir(date: str, ticker: str) -> Path:
-    """``reports/daily/{date}/{TICKER}/eval/``"""
-    return get_ticker_dir(date, ticker) / "eval"
+# ──────────────────────────────────────────────────────────────────────────────
+# Path helpers
+# ──────────────────────────────────────────────────────────────────────────────
+
+def _run_prefix(date: str, run_id: str | None) -> Path:
+    """Base directory for a date, optionally scoped by run_id."""
+    daily = REPORTS_ROOT / "daily" / date
+    if run_id:
+        return daily / "runs" / run_id
+    return daily
+
+
+def get_daily_dir(date: str, run_id: str | None = None) -> Path:
+    """``reports/daily/{date}/`` or ``reports/daily/{date}/runs/{run_id}/``"""
+    return _run_prefix(date, run_id)
+
+
+def get_market_dir(date: str, run_id: str | None = None) -> Path:
+    """``…/{date}[/runs/{run_id}]/market/``"""
+    return get_daily_dir(date, run_id) / "market"
+
+
+def get_ticker_dir(date: str, ticker: str, run_id: str | None = None) -> Path:
+    """``…/{date}[/runs/{run_id}]/{TICKER}/``"""
+    return get_daily_dir(date, run_id) / ticker.upper()
+
+
+def get_eval_dir(date: str, ticker: str, run_id: str | None = None) -> Path:
+    """``…/{date}[/runs/{run_id}]/{TICKER}/eval/``"""
+    return get_ticker_dir(date, ticker, run_id) / "eval"


 def get_digest_path(date: str) -> Path:
-    """``reports/daily/{date}/daily_digest.md``"""
-    return get_daily_dir(date) / "daily_digest.md"
+    """``reports/daily/{date}/daily_digest.md``
+
+    The digest is always at the date level (shared across runs).
+    """
+    return REPORTS_ROOT / "daily" / date / "daily_digest.md"
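The new pointer helpers compose into a simple round trip (date illustrative, assuming a fresh `reports` tree):

```python
run_id = generate_run_id()                  # 8-char hex, e.g. "a1b2c3d4"
write_latest_pointer("2026-03-20", run_id)  # creates reports/daily/2026-03-20/latest.json
assert read_latest_pointer("2026-03-20") == run_id

# A missing or unreadable pointer returns None instead of raising.
assert read_latest_pointer("1999-01-01") is None
```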
|
|||
Loading…
Reference in New Issue