Add scheduled Codex report automation
This commit is contained in:
parent 50865711e3
commit 8c24dcce61
.github/workflows/daily-codex-analysis.yml
@ -0,0 +1,123 @@
name: Daily Codex Analysis

on:
  schedule:
    # 00:13 UTC = 09:13 Asia/Seoul
    - cron: "13 0 * * *"
  workflow_dispatch:
    inputs:
      tickers:
        description: "Optional comma-separated tickers override"
        required: false
        type: string
      trade_date:
        description: "Optional YYYY-MM-DD trade date override"
        required: false
        type: string
      site_only:
        description: "Only rebuild GitHub Pages from archived runs"
        required: false
        type: boolean
        default: false

permissions:
  contents: read
  pages: write
  id-token: write

concurrency:
  group: daily-codex-analysis
  cancel-in-progress: false

jobs:
  analyze:
    runs-on: [self-hosted, Windows]
    timeout-minutes: 240
    env:
      PYTHONUTF8: "1"
      PIP_DISABLE_PIP_VERSION_CHECK: "1"
      TRADINGAGENTS_SITE_DIR: ${{ github.workspace }}\site
      TRADINGAGENTS_ARCHIVE_DIR: ${{ vars.TRADINGAGENTS_ARCHIVE_DIR }}
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Configure GitHub Pages
        uses: actions/configure-pages@v5

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.13"

      - name: Install TradingAgents
        shell: pwsh
        run: |
          python -m pip install --upgrade pip
          python -m pip install -e .

      - name: Verify Codex login and model availability
        shell: pwsh
        run: |
          $workspaceDir = Join-Path $env:GITHUB_WORKSPACE ".codex-preflight"
          @"
          from tradingagents.llm_clients.codex_preflight import run_codex_preflight

          result = run_codex_preflight(
              codex_binary=None,
              model="gpt-5.4",
              request_timeout=30.0,
              workspace_dir=r"$workspaceDir",
              cleanup_threads=True,
          )
          print("Codex account:", result.account)
          print("First available models:", ", ".join(result.models[:8]))
          "@ | python -

      - name: Run scheduled analysis and build site
        shell: pwsh
        run: |
          $configPath = "config/scheduled_analysis.toml"
          if (-not (Test-Path $configPath)) {
            throw "Missing config/scheduled_analysis.toml. Copy config/scheduled_analysis.example.toml, set your real tickers, and commit the file before enabling the schedule."
          }

          # Avoid PowerShell's automatic $args variable; use a dedicated name for splatting.
          $cliArgs = @("-m", "tradingagents.scheduled", "--config", $configPath, "--site-dir", $env:TRADINGAGENTS_SITE_DIR, "--label", "github-actions")

          if (-not [string]::IsNullOrWhiteSpace($env:TRADINGAGENTS_ARCHIVE_DIR)) {
            $cliArgs += @("--archive-dir", $env:TRADINGAGENTS_ARCHIVE_DIR)
          } else {
            Write-Warning "TRADINGAGENTS_ARCHIVE_DIR is not set. Run history will live under the repository checkout unless the config overrides it."
          }

          $manualTickers = "${{ github.event.inputs.tickers }}"
          if (-not [string]::IsNullOrWhiteSpace($manualTickers)) {
            $cliArgs += @("--tickers", $manualTickers)
          }

          $manualTradeDate = "${{ github.event.inputs.trade_date }}"
          if (-not [string]::IsNullOrWhiteSpace($manualTradeDate)) {
            $cliArgs += @("--trade-date", $manualTradeDate)
          }

          $siteOnly = "${{ github.event.inputs.site_only }}"
          if ($siteOnly -eq "true") {
            $cliArgs += "--site-only"
          }

          python @cliArgs

      - name: Upload GitHub Pages artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: site

  deploy:
    needs: analyze
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
@ -0,0 +1,316 @@
# TradingAgents Daily Automated Report Setup Guide

This document walks through, end to end, how to operate the `self-hosted runner + Codex + GitHub Actions + GitHub Pages` automation that has been added to this repository.

Scope:
- Repository: `nornen0202/TradingAgents`
- Default tickers: `GOOGL`, `NVDA`
- LLM provider: `codex`
- Model: `gpt-5.4`
- Analyst lineup: `market`, `social`, `news`, `fundamentals`
- Report language: `Korean`

Related files:
- Config file: [config/scheduled_analysis.toml](/C:/Projects/TradingAgents/config/scheduled_analysis.toml)
- Example config: [config/scheduled_analysis.example.toml](/C:/Projects/TradingAgents/config/scheduled_analysis.example.toml)
- Run entry point: [tradingagents/scheduled/runner.py](/C:/Projects/TradingAgents/tradingagents/scheduled/runner.py)
- Static site generator: [tradingagents/scheduled/site.py](/C:/Projects/TradingAgents/tradingagents/scheduled/site.py)
- GitHub Actions workflow: [.github/workflows/daily-codex-analysis.yml](/C:/Projects/TradingAgents/.github/workflows/daily-codex-analysis.yml)
## 1. What is already in place

This repository already implements the following.

- A non-interactive scheduled runner
  - Runs multiple tickers sequentially.
  - Resolves the most recent trading day automatically in `latest_available` mode.
  - Can keep processing the remaining tickers even when one ticker fails.
- Result archiving
  - Stores a run manifest, final state, markdown report, and graph log for every run.
- Web report generation
  - Produces static HTML/CSS/JSON that can be published to GitHub Pages as-is.
- A GitHub Actions workflow
  - A cron trigger runs it daily at `09:13 KST`.
  - Manual runs are also supported.
## 2. What you must do yourself

These steps cannot be done for you.

### 2-1. Push the changes to the GitHub repository

The implementation is complete in the local repository, but to get it into the remote GitHub repository you must do one of the following:

1. Commit and push the changes yourself
2. In the next turn, hand off the commit message and the push/PR work
### 2-2. Prepare a self-hosted runner

Because the Codex login state must persist, a self-hosted runner is required rather than a GitHub-hosted one.

Recommended:
- One Windows machine that is always on, or at least on before the scheduled time
- A persistent checkout of this repository
- Python 3.13
- A working `codex` executable
### 2-3. Codex login

You must log in once on the runner machine.

PowerShell:

```powershell
where.exe codex
codex login
```

If browser-based login is not practical:

```powershell
codex login --device-auth
```

Verify:

```powershell
codex --version
```

Notes:
- In this environment, `codex --version` initially failed because of the WindowsApps alias, but the TradingAgents preflight auto-detected the real Codex binary and passed anyway.
- In other words, TradingAgents can still work even when the `codex` alias is ambiguous.
- Even so, on the runner machine it is safer to make sure `where.exe codex` and an actual `codex login` work reliably.
### 2-4. Configure GitHub Pages

In the GitHub repository settings:

1. Go to the repository `Settings`
2. Select `Pages` on the left
3. Under `Build and deployment`, set `Source` to `GitHub Actions`

This step requires GitHub UI permissions, so you must do it yourself.
### 2-5. Register the self-hosted runner

Register a runner under the repository's `Settings > Actions > Runners`.

Typical sequence:
1. Repository `Settings`
2. `Actions`
3. `Runners`
4. `New self-hosted runner`
5. Select Windows
6. Run the registration script GitHub shows you on the runner machine

The workflow currently requires these runner labels:

```yaml
runs-on: [self-hosted, Windows]
```

That is, any runner carrying both the `self-hosted` and `Windows` labels will do.
### 2-6. Optional but strongly recommended: persist the archive path

The current default writes to `./.runtime/tradingagents-archive` inside the repository checkout.
For more stable operation, set the following GitHub repository variable:

- Name: `TRADINGAGENTS_ARCHIVE_DIR`
- Example value: `D:\TradingAgentsData\archive`

With this in place, run history survives even a fresh checkout of the repository.

Where repository variables live:
- `Settings > Secrets and variables > Actions > Variables`
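The variable's precedence can be sketched in a few lines of Python. This is an illustration of the workflow-step logic, not the runner's actual resolution code, and the config file may still override either value downstream:

```python
import os

# In-repo default from config/scheduled_analysis.toml.
DEFAULT_ARCHIVE = "./.runtime/tradingagents-archive"


def resolve_archive_dir() -> str:
    # Mirror of the workflow step: use TRADINGAGENTS_ARCHIVE_DIR when it is
    # set and non-blank, otherwise fall back to the in-repo default.
    env_value = os.environ.get("TRADINGAGENTS_ARCHIVE_DIR", "")
    return env_value.strip() or DEFAULT_ARCHIVE


os.environ.pop("TRADINGAGENTS_ARCHIVE_DIR", None)
assert resolve_archive_dir() == DEFAULT_ARCHIVE

os.environ["TRADINGAGENTS_ARCHIVE_DIR"] = r"D:\TradingAgentsData\archive"
assert resolve_archive_dir() == r"D:\TradingAgentsData\archive"
```

A blank or whitespace-only variable falls back to the default, matching the workflow's `IsNullOrWhiteSpace` check.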
## 3. Quick start sequence

### 3-1. Local verification

```powershell
Set-Location C:\Projects\TradingAgents
.\.venv-codex\Scripts\Activate.ps1
python -m pip install -e .
python -m tradingagents.scheduled --config config/scheduled_analysis.toml --label manual-local
```

Paths to check after the run:
- Archive: the `archive_dir` set in [config/scheduled_analysis.toml](/C:/Projects/TradingAgents/config/scheduled_analysis.toml)
- Site: [site](/C:/Projects/TradingAgents/site)
### 3-2. Manual GitHub Actions run

1. Open the repository's `Actions` tab
2. Select `Daily Codex Analysis`
3. Click `Run workflow`
4. If needed:
   - `tickers`: e.g. `GOOGL,NVDA,MSFT`
   - `trade_date`: e.g. `2026-04-04`
   - `site_only`: `true` or `false`
5. Run it

What the inputs mean:
- `tickers`: a one-off override of the tickers in the config file.
- `trade_date`: forces a specific date instead of `latest_available`.
- `site_only`: redeploys the existing archive to Pages without running a new analysis.
## 4. How the daily automatic run works

Current workflow cron:

```yaml
- cron: "13 0 * * *"
```

This value is in UTC, so in Korean time it is `09:13` every day.

Why `09:13` rather than `09:00`:
- Per the GitHub docs, scheduled workflows can be delayed or dropped at high-load times, especially around the top of the hour.
- Scheduling a few minutes off the hour is therefore safer.
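The UTC-to-KST conversion can be sanity-checked with Python's stdlib `zoneinfo`. The date below is arbitrary; only the wall-clock time matters, since Asia/Seoul is UTC+9 year-round:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The workflow cron "13 0 * * *" fires at 00:13 UTC.
fire_utc = datetime(2026, 4, 5, 0, 13, tzinfo=timezone.utc)

# Convert that instant to Asia/Seoul to confirm it is 09:13 local time.
fire_kst = fire_utc.astimezone(ZoneInfo("Asia/Seoul"))
print(fire_kst.strftime("%H:%M"))  # 09:13
```

If you change the cron, rerunning this two-liner is an easy way to double-check the local fire time.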
## 5. Output layout

Example run directory:

```text
archive/
  latest-run.json
  runs/
    2026/
      20260405T080047_real-smoke/
        run.json
        tickers/
          GOOGL/
          NVDA/
        engine-results/
```

Key per-ticker files:
- `analysis.json`: run summary
- `final_state.json`: final TradingAgents state
- `report/complete_report.md`: consolidated markdown report
- `full_states_log_<date>.json`: graph state log
- `error.json` on failure

Site layout:

```text
site/
  index.html
  feed.json
  runs/<run_id>/index.html
  runs/<run_id>/<ticker>.html
  downloads/<run_id>/<ticker>/*
```
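The run manifest (`run.json`) can be inspected with a few lines of Python. The field names below (`run_id`, `started_at`, `status`, `summary`) match what this commit's scheduled-runner tests assert, but treat the exact schema as an assumption; the manifest dict here is a hypothetical sample, not real output:

```python
import json
from pathlib import Path

# Hypothetical manifest mirroring the fields the scheduled-runner tests check.
manifest_text = json.dumps({
    "run_id": "20260405T080047_real-smoke",
    "started_at": "2026-04-05T08:00:47+09:00",
    "status": "success",
    "summary": {"successful_tickers": 2, "failed_tickers": 0},
})

manifest = json.loads(manifest_text)

# Runs are sharded by year: archive/runs/<YYYY>/<run_id>/.
year = manifest["started_at"][:4]
run_dir = Path("archive") / "runs" / year / manifest["run_id"]
print(run_dir.as_posix())
print(manifest["status"], manifest["summary"]["successful_tickers"])
```

The year shard comes from the first four characters of `started_at`, which is how the tests in this commit reconstruct the run directory.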
## 6. Changing the configuration

To change the default tickers, edit only this part of [config/scheduled_analysis.toml](/C:/Projects/TradingAgents/config/scheduled_analysis.toml):

```toml
[run]
tickers = ["GOOGL", "NVDA"]
```

To increase research depth:

```toml
max_debate_rounds = 3
max_risk_discuss_rounds = 3
```

Caution:
- Larger values increase both run time and Codex usage.
## 7. Recommended operating modes

### Minimal operation

- One runner machine
- One-time `codex login`
- Public GitHub Pages deployment
- Daily runs for `GOOGL` and `NVDA`

### Stable operation

- One dedicated runner machine
- `TRADINGAGENTS_ARCHIVE_DIR` pointed at a persistent path outside the repository
- Runner auto-starts when Windows boots
- Review the Actions run history about once a week
## 8. 트러블슈팅
|
||||
|
||||
### `Missing config/scheduled_analysis.toml`
|
||||
|
||||
원인:
|
||||
- 실제 설정 파일이 아직 저장소에 없음
|
||||
|
||||
해결:
|
||||
- 현재는 이미 [config/scheduled_analysis.toml](/C:/Projects/TradingAgents/config/scheduled_analysis.toml)을 추가해 두었습니다.
|
||||
|
||||
### Codex 인증 오류
|
||||
|
||||
원인:
|
||||
- runner 머신에서 로그인 안 됨
|
||||
|
||||
해결:
|
||||
|
||||
```powershell
|
||||
codex login
|
||||
```
|
||||
|
||||
또는:
|
||||
|
||||
```powershell
|
||||
codex login --device-auth
|
||||
```
|
||||
|
||||
### Pages가 비어 있음
|
||||
|
||||
확인 순서:
|
||||
1. `Actions` 탭에서 `Daily Codex Analysis` 실행 성공 여부 확인
|
||||
2. `Settings > Pages`에서 Source가 `GitHub Actions`인지 확인
|
||||
3. workflow의 `deploy` job 성공 여부 확인
|
||||
|
||||
### 스케줄이 안 뜸
|
||||
|
||||
확인 순서:
|
||||
1. workflow 파일이 default branch에 있는지 확인
|
||||
2. 저장소에 최근 60일 내 활동이 있었는지 확인
|
||||
3. cron이 UTC 기준임을 확인
|
||||
|
||||
## 9. What has already been verified directly

The following were confirmed in this repository's local environment.

- `Codex preflight` succeeded
  - Codex account read successfully
  - `gpt-5.4` present in the model list
- Mock-based automated tests passed
- A real single-ticker `SPY` end-to-end smoke run succeeded
  - Start: `2026-04-05 08:00:47 +09:00`
  - End: `2026-04-05 08:06:24 +09:00`
  - Resolved trade date: `2026-04-02`
  - Final decision: `SELL`
## 10. Summary for this request

Given the current state, the minimum you need to do is:

1. Push the changes to the remote GitHub repository
2. Register the self-hosted runner
3. Run `codex login` on the runner machine
4. Set the GitHub Pages Source to `GitHub Actions`
5. Optionally add the `TRADINGAGENTS_ARCHIVE_DIR` repository variable

Everything else (repository code, config files, workflow, docs) is already in place in this repository.
## Reference links

- GitHub Actions `schedule` event: https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows
- GitHub Pages custom workflow: https://docs.github.com/en/pages/getting-started-with-github-pages/using-custom-workflows-with-github-pages
- GitHub Pages publishing source: https://docs.github.com/en/pages/getting-started-with-github-pages/configuring-a-publishing-source-for-your-github-pages-site
- OpenAI Codex cloud docs: https://developers.openai.com/codex/cloud
- OpenAI Codex app announcement: https://openai.com/index/introducing-the-codex-app/
96 cli/main.py
@ -25,6 +25,7 @@ from rich.rule import Rule

 from tradingagents.graph.trading_graph import TradingAgentsGraph
 from tradingagents.default_config import DEFAULT_CONFIG
+from tradingagents.reporting import save_report_bundle
 from cli.models import AnalystType
 from cli.utils import *
 from cli.announcements import fetch_announcements, display_announcements
@ -462,7 +463,7 @@ def update_display(layout, spinner_text=None, stats_handler=None, start_time=None):
 def get_user_selections():
     """Get all user selections before starting the analysis display."""
     # Display ASCII art welcome message
-    with open(Path(__file__).parent / "static" / "welcome.txt", "r") as f:
+    with open(Path(__file__).parent / "static" / "welcome.txt", "r", encoding="utf-8") as f:
         welcome_ascii = f.read()

     # Create welcome box content
@ -647,92 +648,7 @@ def get_analysis_date():

 def save_report_to_disk(final_state, ticker: str, save_path: Path):
     """Save complete analysis report to disk with organized subfolders."""
-    save_path.mkdir(parents=True, exist_ok=True)
-    sections = []
-
-    # 1. Analysts
-    analysts_dir = save_path / "1_analysts"
-    analyst_parts = []
-    if final_state.get("market_report"):
-        analysts_dir.mkdir(exist_ok=True)
-        (analysts_dir / "market.md").write_text(final_state["market_report"])
-        analyst_parts.append(("Market Analyst", final_state["market_report"]))
-    if final_state.get("sentiment_report"):
-        analysts_dir.mkdir(exist_ok=True)
-        (analysts_dir / "sentiment.md").write_text(final_state["sentiment_report"])
-        analyst_parts.append(("Social Analyst", final_state["sentiment_report"]))
-    if final_state.get("news_report"):
-        analysts_dir.mkdir(exist_ok=True)
-        (analysts_dir / "news.md").write_text(final_state["news_report"])
-        analyst_parts.append(("News Analyst", final_state["news_report"]))
-    if final_state.get("fundamentals_report"):
-        analysts_dir.mkdir(exist_ok=True)
-        (analysts_dir / "fundamentals.md").write_text(final_state["fundamentals_report"])
-        analyst_parts.append(("Fundamentals Analyst", final_state["fundamentals_report"]))
-    if analyst_parts:
-        content = "\n\n".join(f"### {name}\n{text}" for name, text in analyst_parts)
-        sections.append(f"## I. Analyst Team Reports\n\n{content}")
-
-    # 2. Research
-    if final_state.get("investment_debate_state"):
-        research_dir = save_path / "2_research"
-        debate = final_state["investment_debate_state"]
-        research_parts = []
-        if debate.get("bull_history"):
-            research_dir.mkdir(exist_ok=True)
-            (research_dir / "bull.md").write_text(debate["bull_history"])
-            research_parts.append(("Bull Researcher", debate["bull_history"]))
-        if debate.get("bear_history"):
-            research_dir.mkdir(exist_ok=True)
-            (research_dir / "bear.md").write_text(debate["bear_history"])
-            research_parts.append(("Bear Researcher", debate["bear_history"]))
-        if debate.get("judge_decision"):
-            research_dir.mkdir(exist_ok=True)
-            (research_dir / "manager.md").write_text(debate["judge_decision"])
-            research_parts.append(("Research Manager", debate["judge_decision"]))
-        if research_parts:
-            content = "\n\n".join(f"### {name}\n{text}" for name, text in research_parts)
-            sections.append(f"## II. Research Team Decision\n\n{content}")
-
-    # 3. Trading
-    if final_state.get("trader_investment_plan"):
-        trading_dir = save_path / "3_trading"
-        trading_dir.mkdir(exist_ok=True)
-        (trading_dir / "trader.md").write_text(final_state["trader_investment_plan"])
-        sections.append(f"## III. Trading Team Plan\n\n### Trader\n{final_state['trader_investment_plan']}")
-
-    # 4. Risk Management
-    if final_state.get("risk_debate_state"):
-        risk_dir = save_path / "4_risk"
-        risk = final_state["risk_debate_state"]
-        risk_parts = []
-        if risk.get("aggressive_history"):
-            risk_dir.mkdir(exist_ok=True)
-            (risk_dir / "aggressive.md").write_text(risk["aggressive_history"])
-            risk_parts.append(("Aggressive Analyst", risk["aggressive_history"]))
-        if risk.get("conservative_history"):
-            risk_dir.mkdir(exist_ok=True)
-            (risk_dir / "conservative.md").write_text(risk["conservative_history"])
-            risk_parts.append(("Conservative Analyst", risk["conservative_history"]))
-        if risk.get("neutral_history"):
-            risk_dir.mkdir(exist_ok=True)
-            (risk_dir / "neutral.md").write_text(risk["neutral_history"])
-            risk_parts.append(("Neutral Analyst", risk["neutral_history"]))
-        if risk_parts:
-            content = "\n\n".join(f"### {name}\n{text}" for name, text in risk_parts)
-            sections.append(f"## IV. Risk Management Team Decision\n\n{content}")
-
-    # 5. Portfolio Manager
-    if risk.get("judge_decision"):
-        portfolio_dir = save_path / "5_portfolio"
-        portfolio_dir.mkdir(exist_ok=True)
-        (portfolio_dir / "decision.md").write_text(risk["judge_decision"])
-        sections.append(f"## V. Portfolio Manager Decision\n\n### Portfolio Manager\n{risk['judge_decision']}")
-
-    # Write consolidated report
-    header = f"# Trading Analysis Report: {ticker}\n\nGenerated: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
-    (save_path / "complete_report.md").write_text(header + "\n\n".join(sections))
-    return save_path / "complete_report.md"
+    return save_report_bundle(final_state, ticker, save_path)


 def display_complete_report(final_state):
@ -990,7 +906,7 @@ def run_analysis():
             func(*args, **kwargs)
             timestamp, message_type, content = obj.messages[-1]
             content = content.replace("\n", " ")  # Replace newlines with spaces
-            with open(log_file, "a") as f:
+            with open(log_file, "a", encoding="utf-8") as f:
                 f.write(f"{timestamp} [{message_type}] {content}\n")
         return wrapper
@ -1001,7 +917,7 @@ def run_analysis():
             func(*args, **kwargs)
             timestamp, tool_name, args = obj.tool_calls[-1]
             args_str = ", ".join(f"{k}={v}" for k, v in args.items())
-            with open(log_file, "a") as f:
+            with open(log_file, "a", encoding="utf-8") as f:
                 f.write(f"{timestamp} [Tool Call] {tool_name}({args_str})\n")
         return wrapper
@ -1015,7 +931,7 @@ def run_analysis():
             if content:
                 file_name = f"{section_name}.md"
                 text = "\n".join(str(item) for item in content) if isinstance(content, list) else content
-                with open(report_dir / file_name, "w") as f:
+                with open(report_dir / file_name, "w", encoding="utf-8") as f:
                     f.write(text)
         return wrapper
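These `encoding="utf-8"` additions matter on Windows, where `open()` without an explicit encoding uses the ANSI code page (e.g. cp949 on Korean systems), and writing a character outside that code page raises `UnicodeEncodeError`. A minimal illustration (the emoji stands in for any character the legacy code page cannot represent):

```python
import tempfile
from pathlib import Path

# Report text mixing Hangul (which cp949 can encode) with an emoji (which it cannot).
line = "market report 🚀 시장 보고서"

try:
    line.encode("cp949")
    cp949_ok = True
except UnicodeEncodeError:
    cp949_ok = False  # taken: cp949 has no mapping for the emoji

# An explicit UTF-8 encoding round-trips the text regardless of the
# platform's locale encoding.
path = Path(tempfile.mkdtemp()) / "report.md"
path.write_text(line, encoding="utf-8")
assert path.read_text(encoding="utf-8") == line
```

Setting `PYTHONUTF8=1`, as the workflow does, addresses the same problem process-wide; the explicit `encoding="utf-8"` arguments make the code safe even without that environment variable.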
config/scheduled_analysis.example.toml
@ -0,0 +1,36 @@
# Copy this file to `config/scheduled_analysis.toml` and adjust the values for your runner.

[run]
tickers = ["NVDA", "MSFT", "TSLA"]
analysts = ["market", "social", "news", "fundamentals"]
output_language = "Korean"
trade_date_mode = "latest_available"
timezone = "Asia/Seoul"
max_debate_rounds = 1
max_risk_discuss_rounds = 1
latest_market_data_lookback_days = 14
continue_on_ticker_error = true

[llm]
provider = "codex"
# TradingAgents' current Codex provider path uses the frontier model id `gpt-5.4`
# for Codex 5.4 sessions.
quick_model = "gpt-5.4"
deep_model = "gpt-5.4"
codex_reasoning_effort = "medium"
codex_summary = "none"
codex_personality = "none"
codex_request_timeout = 180.0
codex_max_retries = 2
codex_cleanup_threads = true

[storage]
# For stable run history on a self-hosted runner, prefer a persistent path outside the repo checkout.
# Example on Windows: "C:/TradingAgentsData/archive"
archive_dir = "./.runtime/tradingagents-archive"
site_dir = "./site"

[site]
title = "TradingAgents Daily Reports"
subtitle = "Self-hosted Codex automation for scheduled multi-ticker analysis"
max_runs_on_homepage = 30
config/scheduled_analysis.toml
@ -0,0 +1,34 @@
[run]
tickers = ["GOOGL", "NVDA"]
analysts = ["market", "social", "news", "fundamentals"]
output_language = "Korean"
trade_date_mode = "latest_available"
timezone = "Asia/Seoul"
max_debate_rounds = 1
max_risk_discuss_rounds = 1
latest_market_data_lookback_days = 14
continue_on_ticker_error = true

[llm]
provider = "codex"
# TradingAgents' current Codex provider path uses the frontier model id `gpt-5.4`
# for Codex 5.4 sessions.
quick_model = "gpt-5.4"
deep_model = "gpt-5.4"
codex_reasoning_effort = "medium"
codex_summary = "none"
codex_personality = "none"
codex_request_timeout = 180.0
codex_max_retries = 2
codex_cleanup_threads = true

[storage]
# For a self-hosted runner, it is better to override this to a persistent absolute path
# via the TRADINGAGENTS_ARCHIVE_DIR repository variable or by editing this file.
archive_dir = "./.runtime/tradingagents-archive"
site_dir = "./site"

[site]
title = "TradingAgents Daily Reports"
subtitle = "Self-hosted Codex automation for scheduled multi-ticker analysis"
max_runs_on_homepage = 30
pyproject.toml
@ -34,6 +34,7 @@ dependencies = [

 [project.scripts]
 tradingagents = "cli.main:app"
+tradingagents-scheduled = "tradingagents.scheduled.runner:main"

 [tool.setuptools.packages.find]
 include = ["tradingagents*", "cli*"]
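The new `tradingagents-scheduled` console script points at `tradingagents.scheduled.runner:main`. The `module:callable` spec splits mechanically, which is roughly what the installer-generated launcher does under the hood (a simplified sketch; the comment shows the general launcher pattern, not this project's actual generated stub):

```python
# Console-script specs have the form "package.module:callable".
spec = "tradingagents.scheduled.runner:main"
module_name, _, func_name = spec.partition(":")
print(module_name)  # tradingagents.scheduled.runner
print(func_name)    # main

# A launcher typically does something like:
#   import importlib, sys
#   func = getattr(importlib.import_module(module_name), func_name)
#   sys.exit(func())
```

This is why `tradingagents-scheduled` and `python -m tradingagents.scheduled` reach the same entry logic.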
@ -0,0 +1,93 @@
import tempfile
import unittest
from pathlib import Path
from types import SimpleNamespace
from unittest.mock import patch

from cli.main import run_analysis
from cli.models import AnalystType


class _DummyLive:
    def __init__(self, *args, **kwargs):
        pass

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False


class _FakePropagator:
    def create_initial_state(self, ticker, analysis_date):
        return {"ticker": ticker, "analysis_date": analysis_date}

    def get_graph_args(self, callbacks=None):
        return {}


class _FakeGraphRunner:
    def stream(self, init_state, **kwargs):
        yield {
            "messages": [SimpleNamespace(id="msg-1", tool_calls=[])],
            "market_report": "시장 보고서 — 한글 검증",
            "final_trade_decision": "HOLD — 포지션 유지",
        }


class _FakeTradingAgentsGraph:
    def __init__(self, *args, **kwargs):
        self.propagator = _FakePropagator()
        self.graph = _FakeGraphRunner()

    def process_signal(self, signal):
        return signal


class CliUnicodeLoggingTests(unittest.TestCase):
    def test_run_analysis_writes_logs_and_reports_as_utf8(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            results_dir = Path(tmpdir) / "results"
            selections = {
                "ticker": "GOOGL",
                "analysis_date": "2026-04-05",
                "output_language": "Korean",
                "analysts": [AnalystType.MARKET],
                "research_depth": 1,
                "llm_provider": "codex",
                "backend_url": None,
                "shallow_thinker": "gpt-5.4",
                "deep_thinker": "gpt-5.4",
                "codex_reasoning_effort": "medium",
            }

            with (
                patch("cli.main.get_user_selections", return_value=selections),
                patch("cli.main.DEFAULT_CONFIG", {"results_dir": str(results_dir)}),
                patch("cli.main.TradingAgentsGraph", _FakeTradingAgentsGraph),
                patch("cli.main.StatsCallbackHandler", return_value=SimpleNamespace()),
                patch("cli.main.Live", _DummyLive),
                patch("cli.main.create_layout", return_value=object()),
                patch("cli.main.update_display"),
                patch("cli.main.update_analyst_statuses"),
                patch(
                    "cli.main.classify_message_type",
                    return_value=("Agent", "유니코드 메시지 — 로그 저장 검증"),
                ),
                patch("cli.main.typer.prompt", side_effect=["N", "N"]),
                patch("cli.main.console.print"),
            ):
                run_analysis()

            log_file = results_dir / "GOOGL" / "2026-04-05" / "message_tool.log"
            report_file = results_dir / "GOOGL" / "2026-04-05" / "reports" / "market_report.md"

            self.assertTrue(log_file.exists())
            self.assertTrue(report_file.exists())
            self.assertIn("유니코드 메시지 — 로그 저장 검증", log_file.read_text(encoding="utf-8"))
            self.assertIn("시장 보고서 — 한글 검증", report_file.read_text(encoding="utf-8"))


if __name__ == "__main__":
    unittest.main()
@ -119,16 +119,60 @@ class CodexProviderTests(unittest.TestCase):

        self.assertEqual(resolved, str(candidate))

    def test_resolve_codex_binary_skips_unusable_path_alias_on_windows(self):
        fake_home = Path("C:/Users/tester")
        alias_path = "C:/Program Files/WindowsApps/OpenAI.Codex/app/resources/codex.exe"
        candidate = fake_home / ".vscode/extensions/openai.chatgpt-1.0.0/bin/windows-x86_64/codex.exe"

        with (
            patch("tradingagents.llm_clients.codex_binary.os.name", "nt"),
            patch("tradingagents.llm_clients.codex_binary.Path.home", return_value=fake_home),
            patch("tradingagents.llm_clients.codex_binary.shutil.which", return_value=alias_path),
            patch(
                "tradingagents.llm_clients.codex_binary.Path.glob",
                return_value=[candidate],
            ),
            patch("pathlib.Path.is_file", return_value=True),
            patch("pathlib.Path.exists", return_value=True),
            patch("pathlib.Path.stat") as mocked_stat,
            patch(
                "tradingagents.llm_clients.codex_binary._is_usable_codex_binary",
                side_effect=lambda path: path != alias_path,
            ),
        ):
            mocked_stat.return_value.st_mtime = 1
            resolved = resolve_codex_binary(None)

        self.assertEqual(resolved, str(candidate))

    def test_resolve_codex_binary_uses_env_override(self):
        with (
            patch("tradingagents.llm_clients.codex_binary.os.name", "nt"),
            patch("tradingagents.llm_clients.codex_binary.shutil.which", return_value=None),
            patch.dict("os.environ", {"CODEX_BINARY": "C:/custom/codex.exe"}, clear=False),
            patch("pathlib.Path.is_file", return_value=True),
            patch(
                "tradingagents.llm_clients.codex_binary._is_usable_codex_binary",
                return_value=True,
            ),
        ):
            resolved = resolve_codex_binary(None)

        self.assertEqual(Path(resolved), Path("C:/custom/codex.exe"))

    def test_resolve_codex_binary_checks_explicit_binary_usability(self):
        with (
            patch("tradingagents.llm_clients.codex_binary.os.name", "nt"),
            patch("pathlib.Path.is_file", return_value=True),
            patch(
                "tradingagents.llm_clients.codex_binary._is_usable_codex_binary",
                return_value=False,
            ),
        ):
            resolved = resolve_codex_binary("C:/custom/codex.exe")

        self.assertEqual(Path(resolved), Path("C:/custom/codex.exe"))

    def test_message_normalization_supports_str_messages_and_openai_dicts(self):
        normalized = normalize_input_messages(
            [
@ -0,0 +1,211 @@
import json
import tempfile
import unittest
from pathlib import Path
from unittest.mock import patch

from tradingagents.scheduled.runner import execute_scheduled_run, load_scheduled_config, main


class _FakeStatsHandler:
    def get_stats(self):
        return {
            "llm_calls": 12,
            "tool_calls": 7,
            "tokens_in": 1024,
            "tokens_out": 2048,
        }


class _FakeTradingAgentsGraph:
    def __init__(self, selected_analysts, debug=False, config=None, callbacks=None):
        self.selected_analysts = selected_analysts
        self.debug = debug
        self.config = config or {}
        self.callbacks = callbacks or []

    def propagate(self, ticker, trade_date):
        if ticker == "FAIL":
            raise RuntimeError("synthetic failure")

        final_state = {
            "company_of_interest": ticker,
            "trade_date": trade_date,
            "market_report": f"## Market\n{ticker} market analysis",
            "sentiment_report": f"## Sentiment\n{ticker} sentiment analysis",
            "news_report": f"## News\n{ticker} news analysis",
            "fundamentals_report": f"## Fundamentals\n{ticker} fundamentals analysis",
            "investment_debate_state": {
                "bull_history": f"{ticker} bull case",
                "bear_history": f"{ticker} bear case",
                "history": "debate transcript",
                "current_response": "",
                "judge_decision": f"{ticker} research manager decision",
            },
            "trader_investment_plan": f"{ticker} trading plan",
            "investment_plan": f"{ticker} investment plan",
            "risk_debate_state": {
                "aggressive_history": f"{ticker} aggressive case",
                "conservative_history": f"{ticker} conservative case",
                "neutral_history": f"{ticker} neutral case",
                "history": "risk transcript",
                "judge_decision": f"{ticker} final portfolio decision",
            },
            "final_trade_decision": f"{ticker} final trade decision",
        }
        return final_state, "BUY"


class ScheduledAnalysisTests(unittest.TestCase):
    def test_execute_scheduled_run_archives_outputs_and_builds_site(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            root = Path(tmpdir)
            config_path = root / "scheduled_analysis.toml"
            archive_dir = root / "archive"
            site_dir = root / "site"
            config_path.write_text(
                f"""
[run]
tickers = ["NVDA", "FAIL"]
analysts = ["market", "social", "news", "fundamentals"]
output_language = "Korean"
trade_date_mode = "latest_available"
timezone = "Asia/Seoul"
continue_on_ticker_error = true

[llm]
provider = "codex"
quick_model = "gpt-5.4"
deep_model = "gpt-5.4"
codex_reasoning_effort = "medium"

[storage]
archive_dir = "{archive_dir.as_posix()}"
site_dir = "{site_dir.as_posix()}"

[site]
title = "Daily Reports"
subtitle = "Automated"
""",
                encoding="utf-8",
            )

            config = load_scheduled_config(config_path)
            with (
                patch("tradingagents.scheduled.runner.TradingAgentsGraph", _FakeTradingAgentsGraph),
                patch("tradingagents.scheduled.runner.StatsCallbackHandler", _FakeStatsHandler),
                patch("tradingagents.scheduled.runner.resolve_trade_date", return_value="2026-04-04"),
            ):
                manifest = execute_scheduled_run(config, run_label="test")

            self.assertEqual(manifest["status"], "partial_failure")
            self.assertEqual(manifest["summary"]["successful_tickers"], 1)
            self.assertEqual(manifest["summary"]["failed_tickers"], 1)
            self.assertEqual(manifest["settings"]["provider"], "codex")
            self.assertEqual(manifest["settings"]["deep_model"], "gpt-5.4")
            self.assertEqual(manifest["settings"]["quick_model"], "gpt-5.4")

            run_dir = archive_dir / "runs" / manifest["started_at"][:4] / manifest["run_id"]
            self.assertTrue((run_dir / "run.json").exists())
            self.assertTrue((run_dir / "tickers" / "NVDA" / "report" / "complete_report.md").exists())
            self.assertTrue((run_dir / "tickers" / "FAIL" / "error.json").exists())

            index_html = (site_dir / "index.html").read_text(encoding="utf-8")
            run_html = (site_dir / "runs" / manifest["run_id"] / "index.html").read_text(encoding="utf-8")
            ticker_html = (site_dir / "runs" / manifest["run_id"] / "NVDA.html").read_text(encoding="utf-8")

            self.assertIn("Daily Reports", index_html)
            self.assertIn("partial failure", index_html)
            self.assertIn("NVDA", run_html)
            self.assertIn("Rendered report", ticker_html)
            self.assertTrue((site_dir / "downloads" / manifest["run_id"] / "NVDA" / "complete_report.md").exists())

    def test_main_site_only_rebuilds_from_existing_archive(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            root = Path(tmpdir)
            archive_dir = root / "archive"
            site_dir = root / "site"
            run_dir = archive_dir / "runs" / "2026" / "20260405T091300_seed"
            ticker_dir = run_dir / "tickers" / "NVDA" / "report"
|
||||
ticker_dir.mkdir(parents=True, exist_ok=True)
|
||||
(ticker_dir / "complete_report.md").write_text("# Test report", encoding="utf-8")
|
||||
analysis_dir = run_dir / "tickers" / "NVDA"
|
||||
(analysis_dir / "analysis.json").write_text("{}", encoding="utf-8")
|
||||
(analysis_dir / "final_state.json").write_text("{}", encoding="utf-8")
|
||||
(run_dir / "run.json").write_text(
|
||||
json.dumps(
|
||||
{
|
||||
"version": 1,
|
||||
"run_id": "20260405T091300_seed",
|
||||
"label": "seed",
|
||||
"status": "success",
|
||||
"started_at": "2026-04-05T09:13:00+09:00",
|
||||
"finished_at": "2026-04-05T09:20:00+09:00",
|
||||
"timezone": "Asia/Seoul",
|
||||
"settings": {
|
||||
"provider": "codex",
|
||||
"quick_model": "gpt-5.4",
|
||||
"deep_model": "gpt-5.4",
|
||||
"codex_reasoning_effort": "medium",
|
||||
"output_language": "Korean",
|
||||
"analysts": ["market", "social", "news", "fundamentals"],
|
||||
"trade_date_mode": "latest_available",
|
||||
"max_debate_rounds": 1,
|
||||
"max_risk_discuss_rounds": 1,
|
||||
},
|
||||
"summary": {
|
||||
"total_tickers": 1,
|
||||
"successful_tickers": 1,
|
||||
"failed_tickers": 0,
|
||||
},
|
||||
"tickers": [
|
||||
{
|
||||
"ticker": "NVDA",
|
||||
"status": "success",
|
||||
"trade_date": "2026-04-04",
|
||||
"decision": "BUY",
|
||||
"started_at": "2026-04-05T09:13:00+09:00",
|
||||
"finished_at": "2026-04-05T09:20:00+09:00",
|
||||
"duration_seconds": 420.0,
|
||||
"metrics": {
|
||||
"llm_calls": 10,
|
||||
"tool_calls": 7,
|
||||
"tokens_in": 1000,
|
||||
"tokens_out": 2000,
|
||||
},
|
||||
"artifacts": {
|
||||
"analysis_json": "tickers/NVDA/analysis.json",
|
||||
"report_markdown": "tickers/NVDA/report/complete_report.md",
|
||||
"final_state_json": "tickers/NVDA/final_state.json",
|
||||
"graph_log_json": None,
|
||||
},
|
||||
}
|
||||
],
|
||||
},
|
||||
ensure_ascii=False,
|
||||
),
|
||||
encoding="utf-8",
|
||||
)
|
||||
|
||||
config_path = root / "scheduled_analysis.toml"
|
||||
config_path.write_text(
|
||||
f"""
|
||||
[run]
|
||||
tickers = ["NVDA"]
|
||||
|
||||
[storage]
|
||||
archive_dir = "{archive_dir.as_posix()}"
|
||||
site_dir = "{site_dir.as_posix()}"
|
||||
""",
|
||||
encoding="utf-8",
|
||||
)
|
||||
|
||||
exit_code = main(["--config", str(config_path), "--site-only"])
|
||||
|
||||
self.assertEqual(exit_code, 0)
|
||||
self.assertTrue((site_dir / "index.html").exists())
|
||||
self.assertIn("NVDA", (site_dir / "runs" / "20260405T091300_seed" / "NVDA.html").read_text(encoding="utf-8"))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
unittest.main()
|
||||
|
|
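The status values asserted above (`success`, `partial_failure`, `failed`) follow a simple derivation from per-ticker outcomes; a minimal standalone sketch (the function name here is illustrative, not part of the package):

```python
def derive_run_status(ticker_statuses: list[str]) -> str:
    # Mirrors the manifest logic: failures alongside successes mean
    # "partial_failure"; only failures mean "failed"; otherwise "success".
    failures = sum(1 for s in ticker_statuses if s != "success")
    successes = len(ticker_statuses) - failures
    if failures and successes:
        return "partial_failure"
    if failures:
        return "failed"
    return "success"

print(derive_run_status(["success", "failed"]))  # -> partial_failure
```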
@@ -2,27 +2,40 @@ from __future__ import annotations

 import os
 import shutil
+import subprocess
 from pathlib import Path


 def resolve_codex_binary(codex_binary: str | None) -> str | None:
-    explicit = _normalize_explicit_binary(codex_binary)
-    if explicit:
-        return explicit
-
-    env_value = _normalize_explicit_binary(os.getenv("CODEX_BINARY"))
-    if env_value:
-        return env_value
+    requested_candidates = [
+        _normalize_explicit_binary(codex_binary),
+        _normalize_explicit_binary(os.getenv("CODEX_BINARY")),
+    ]
+    for candidate in requested_candidates:
+        if candidate and _is_usable_codex_binary(candidate):
+            return candidate

+    discovered_candidates = []
     path_binary = shutil.which("codex")
     if path_binary:
-        return path_binary
+        discovered_candidates.append(path_binary)

-    for candidate in _windows_codex_candidates():
-        if candidate.is_file():
-            return str(candidate)
+    discovered_candidates.extend(str(candidate) for candidate in _windows_codex_candidates())

-    return None
+    first_existing = None
+    for candidate in _dedupe_candidates(discovered_candidates):
+        if not Path(candidate).is_file():
+            continue
+        if first_existing is None:
+            first_existing = candidate
+        if _is_usable_codex_binary(candidate):
+            return candidate
+
+    for candidate in requested_candidates:
+        if candidate:
+            return candidate
+
+    return first_existing


 def codex_binary_error_message(codex_binary: str | None) -> str:
@@ -62,8 +75,39 @@ def _windows_codex_candidates() -> list[Path]:
     )
     candidates.extend(
         [
             home / ".codex" / ".sandbox-bin" / "codex.exe",
             home / ".codex" / "bin" / "codex.exe",
             home / "AppData" / "Local" / "Programs" / "Codex" / "codex.exe",
         ]
     )
     return candidates
+
+
+def _dedupe_candidates(candidates: list[str]) -> list[str]:
+    unique = []
+    seen = set()
+    for candidate in candidates:
+        normalized = os.path.normcase(os.path.normpath(candidate))
+        if normalized in seen:
+            continue
+        seen.add(normalized)
+        unique.append(candidate)
+    return unique
+
+
+def _is_usable_codex_binary(binary: str) -> bool:
+    if os.name != "nt":
+        return True
+
+    try:
+        completed = subprocess.run(
+            [binary, "--version"],
+            capture_output=True,
+            text=True,
+            timeout=5,
+            check=False,
+        )
+    except (OSError, subprocess.SubprocessError):
+        return False
+
+    return completed.returncode == 0
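The path dedupe added in `_dedupe_candidates` normalizes case and separators before comparing, so spellings that differ only in case or in redundant path segments collapse to one entry. A standalone re-statement for illustration (not an import from the package):

```python
import os


def dedupe_candidates(candidates: list[str]) -> list[str]:
    # Normalize with normcase + normpath so equivalent spellings of the
    # same path (case differences on Windows, "./" segments anywhere)
    # count as one candidate; keep the first spelling encountered.
    unique: list[str] = []
    seen: set[str] = set()
    for candidate in candidates:
        normalized = os.path.normcase(os.path.normpath(candidate))
        if normalized in seen:
            continue
        seen.add(normalized)
        unique.append(candidate)
    return unique


print(dedupe_candidates(["/usr/bin/codex", "/usr/bin/./codex", "/opt/codex"]))
# -> ['/usr/bin/codex', '/opt/codex']
```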
@@ -0,0 +1,123 @@
from __future__ import annotations

import datetime as dt
from pathlib import Path
from typing import Any, Mapping


def save_report_bundle(
    final_state: Mapping[str, Any],
    ticker: str,
    save_path: Path,
    *,
    generated_at: dt.datetime | None = None,
) -> Path:
    """Persist a complete TradingAgents report bundle to disk."""

    generated_at = generated_at or dt.datetime.now()
    save_path = Path(save_path)
    save_path.mkdir(parents=True, exist_ok=True)

    sections: list[str] = []

    analysts_dir = save_path / "1_analysts"
    analyst_parts: list[tuple[str, str]] = []
    for file_name, title, key in (
        ("market.md", "Market Analyst", "market_report"),
        ("sentiment.md", "Social Analyst", "sentiment_report"),
        ("news.md", "News Analyst", "news_report"),
        ("fundamentals.md", "Fundamentals Analyst", "fundamentals_report"),
    ):
        content = _coerce_text(final_state.get(key))
        if not content:
            continue
        analysts_dir.mkdir(exist_ok=True)
        _write_text(analysts_dir / file_name, content)
        analyst_parts.append((title, content))

    if analyst_parts:
        sections.append(
            "## I. Analyst Team Reports\n\n"
            + "\n\n".join(f"### {title}\n{content}" for title, content in analyst_parts)
        )

    debate = final_state.get("investment_debate_state") or {}
    research_dir = save_path / "2_research"
    research_parts: list[tuple[str, str]] = []
    for file_name, title, key in (
        ("bull.md", "Bull Researcher", "bull_history"),
        ("bear.md", "Bear Researcher", "bear_history"),
        ("manager.md", "Research Manager", "judge_decision"),
    ):
        content = _coerce_text(debate.get(key))
        if not content:
            continue
        research_dir.mkdir(exist_ok=True)
        _write_text(research_dir / file_name, content)
        research_parts.append((title, content))

    if research_parts:
        sections.append(
            "## II. Research Team Decision\n\n"
            + "\n\n".join(f"### {title}\n{content}" for title, content in research_parts)
        )

    trader_plan = _coerce_text(final_state.get("trader_investment_plan"))
    if trader_plan:
        trading_dir = save_path / "3_trading"
        trading_dir.mkdir(exist_ok=True)
        _write_text(trading_dir / "trader.md", trader_plan)
        sections.append(f"## III. Trading Team Plan\n\n### Trader\n{trader_plan}")

    risk = final_state.get("risk_debate_state") or {}
    risk_dir = save_path / "4_risk"
    risk_parts: list[tuple[str, str]] = []
    for file_name, title, key in (
        ("aggressive.md", "Aggressive Analyst", "aggressive_history"),
        ("conservative.md", "Conservative Analyst", "conservative_history"),
        ("neutral.md", "Neutral Analyst", "neutral_history"),
    ):
        content = _coerce_text(risk.get(key))
        if not content:
            continue
        risk_dir.mkdir(exist_ok=True)
        _write_text(risk_dir / file_name, content)
        risk_parts.append((title, content))

    if risk_parts:
        sections.append(
            "## IV. Risk Management Team Decision\n\n"
            + "\n\n".join(f"### {title}\n{content}" for title, content in risk_parts)
        )

    portfolio_decision = _coerce_text(risk.get("judge_decision"))
    if portfolio_decision:
        portfolio_dir = save_path / "5_portfolio"
        portfolio_dir.mkdir(exist_ok=True)
        _write_text(portfolio_dir / "decision.md", portfolio_decision)
        sections.append(
            "## V. Portfolio Manager Decision\n\n"
            f"### Portfolio Manager\n{portfolio_decision}"
        )

    header = (
        f"# Trading Analysis Report: {ticker}\n\n"
        f"Generated: {generated_at.strftime('%Y-%m-%d %H:%M:%S')}\n\n"
    )
    complete_report = save_path / "complete_report.md"
    _write_text(complete_report, header + "\n\n".join(sections))
    return complete_report


def _coerce_text(value: Any) -> str:
    if value is None:
        return ""
    if isinstance(value, str):
        return value
    if isinstance(value, list):
        return "\n".join(str(item) for item in value)
    return str(value)


def _write_text(path: Path, content: str) -> None:
    path.write_text(content, encoding="utf-8")
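The bundle layout above writes each non-empty section to its own file and then joins everything into one `complete_report.md` behind a generated header. A minimal standalone sketch of that pattern (names here are illustrative, not the package API):

```python
import datetime as dt
import tempfile
from pathlib import Path


def write_bundle(sections: dict[str, str], ticker: str, out_dir: Path) -> Path:
    # Write one .md file per section, then a combined report whose header
    # mirrors the "# Trading Analysis Report: ..." / "Generated: ..." shape.
    out_dir.mkdir(parents=True, exist_ok=True)
    parts = []
    for name, content in sections.items():
        (out_dir / f"{name}.md").write_text(content, encoding="utf-8")
        parts.append(f"## {name}\n{content}")
    header = (
        f"# Trading Analysis Report: {ticker}\n\n"
        f"Generated: {dt.datetime.now():%Y-%m-%d %H:%M:%S}\n\n"
    )
    report = out_dir / "complete_report.md"
    report.write_text(header + "\n\n".join(parts), encoding="utf-8")
    return report


with tempfile.TemporaryDirectory() as tmp:
    path = write_bundle({"market": "Up", "news": "Quiet"}, "NVDA", Path(tmp))
    print(path.read_text(encoding="utf-8").splitlines()[0])
    # -> # Trading Analysis Report: NVDA
```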
@@ -0,0 +1,11 @@
from .config import ScheduledAnalysisConfig, load_scheduled_config
from .runner import execute_scheduled_run, main
from .site import build_site

__all__ = [
    "ScheduledAnalysisConfig",
    "build_site",
    "execute_scheduled_run",
    "load_scheduled_config",
    "main",
]
@@ -0,0 +1,5 @@
from .runner import main


if __name__ == "__main__":
    raise SystemExit(main())
@@ -0,0 +1,220 @@
from __future__ import annotations

import os
import tomllib
from dataclasses import dataclass, field, replace
from pathlib import Path
from typing import Iterable
from zoneinfo import ZoneInfo

from cli.utils import normalize_ticker_symbol


ALL_ANALYSTS = ("market", "social", "news", "fundamentals")
VALID_TRADE_DATE_MODES = {"latest_available", "today", "previous_business_day", "explicit"}


@dataclass(frozen=True)
class RunSettings:
    tickers: list[str]
    analysts: list[str] = field(default_factory=lambda: list(ALL_ANALYSTS))
    output_language: str = "Korean"
    trade_date_mode: str = "latest_available"
    explicit_trade_date: str | None = None
    timezone: str = "Asia/Seoul"
    max_debate_rounds: int = 1
    max_risk_discuss_rounds: int = 1
    latest_market_data_lookback_days: int = 14
    continue_on_ticker_error: bool = True


@dataclass(frozen=True)
class LLMSettings:
    provider: str = "codex"
    deep_model: str = "gpt-5.4"
    quick_model: str = "gpt-5.4"
    codex_reasoning_effort: str = "medium"
    codex_summary: str = "none"
    codex_personality: str = "none"
    codex_request_timeout: float = 180.0
    codex_max_retries: int = 2
    codex_cleanup_threads: bool = True
    codex_workspace_dir: str | None = None
    codex_binary: str | None = None


@dataclass(frozen=True)
class StorageSettings:
    archive_dir: Path
    site_dir: Path


@dataclass(frozen=True)
class SiteSettings:
    title: str = "TradingAgents Daily Reports"
    subtitle: str = "Automated multi-agent market analysis powered by Codex"
    max_runs_on_homepage: int = 30


@dataclass(frozen=True)
class ScheduledAnalysisConfig:
    run: RunSettings
    llm: LLMSettings
    storage: StorageSettings
    site: SiteSettings
    config_path: Path


def load_scheduled_config(path: str | Path) -> ScheduledAnalysisConfig:
    config_path = Path(path).resolve()
    with config_path.open("rb") as handle:
        raw = tomllib.load(handle)

    run_raw = raw.get("run") or {}
    llm_raw = raw.get("llm") or {}
    storage_raw = raw.get("storage") or {}
    site_raw = raw.get("site") or {}

    tickers = _normalize_tickers(run_raw.get("tickers") or [])
    if not tickers:
        raise ValueError("Scheduled analysis config must declare at least one ticker in [run].tickers.")

    analysts = _normalize_analysts(run_raw.get("analysts") or list(ALL_ANALYSTS))

    trade_date_mode = str(run_raw.get("trade_date_mode", "latest_available")).strip().lower()
    explicit_trade_date = None
    if run_raw.get("trade_date"):
        trade_date_mode = "explicit"
        explicit_trade_date = _validate_trade_date(str(run_raw["trade_date"]))
    elif trade_date_mode == "explicit":
        explicit_trade_date = _validate_trade_date(str(run_raw.get("explicit_trade_date", "")).strip())

    if trade_date_mode not in VALID_TRADE_DATE_MODES:
        raise ValueError(
            f"Unsupported trade_date_mode '{trade_date_mode}'. "
            f"Expected one of: {', '.join(sorted(VALID_TRADE_DATE_MODES))}."
        )

    timezone_name = str(run_raw.get("timezone", "Asia/Seoul")).strip()
    ZoneInfo(timezone_name)

    base_dir = config_path.parent
    archive_dir = _resolve_path(storage_raw.get("archive_dir", ".tradingagents-scheduled/archive"), base_dir)
    site_dir = _resolve_path(storage_raw.get("site_dir", "site"), base_dir)

    return ScheduledAnalysisConfig(
        run=RunSettings(
            tickers=tickers,
            analysts=analysts,
            output_language=str(run_raw.get("output_language", "Korean")).strip() or "Korean",
            trade_date_mode=trade_date_mode,
            explicit_trade_date=explicit_trade_date,
            timezone=timezone_name,
            max_debate_rounds=int(run_raw.get("max_debate_rounds", 1)),
            max_risk_discuss_rounds=int(run_raw.get("max_risk_discuss_rounds", 1)),
            latest_market_data_lookback_days=int(run_raw.get("latest_market_data_lookback_days", 14)),
            continue_on_ticker_error=bool(run_raw.get("continue_on_ticker_error", True)),
        ),
        llm=LLMSettings(
            provider=str(llm_raw.get("provider", "codex")).strip().lower() or "codex",
            deep_model=str(llm_raw.get("deep_model", "gpt-5.4")).strip() or "gpt-5.4",
            quick_model=str(llm_raw.get("quick_model", "gpt-5.4")).strip() or "gpt-5.4",
            codex_reasoning_effort=str(llm_raw.get("codex_reasoning_effort", "medium")).strip() or "medium",
            codex_summary=str(llm_raw.get("codex_summary", "none")).strip() or "none",
            codex_personality=str(llm_raw.get("codex_personality", "none")).strip() or "none",
            codex_request_timeout=float(llm_raw.get("codex_request_timeout", 180.0)),
            codex_max_retries=int(llm_raw.get("codex_max_retries", 2)),
            codex_cleanup_threads=bool(llm_raw.get("codex_cleanup_threads", True)),
            codex_workspace_dir=_optional_string(llm_raw.get("codex_workspace_dir")),
            codex_binary=_optional_string(llm_raw.get("codex_binary")),
        ),
        storage=StorageSettings(
            archive_dir=archive_dir,
            site_dir=site_dir,
        ),
        site=SiteSettings(
            title=str(site_raw.get("title", "TradingAgents Daily Reports")).strip() or "TradingAgents Daily Reports",
            subtitle=str(
                site_raw.get(
                    "subtitle",
                    "Automated multi-agent market analysis powered by Codex",
                )
            ).strip()
            or "Automated multi-agent market analysis powered by Codex",
            max_runs_on_homepage=int(site_raw.get("max_runs_on_homepage", 30)),
        ),
        config_path=config_path,
    )


def with_overrides(
    config: ScheduledAnalysisConfig,
    *,
    archive_dir: str | Path | None = None,
    site_dir: str | Path | None = None,
    tickers: Iterable[str] | None = None,
    trade_date: str | None = None,
) -> ScheduledAnalysisConfig:
    run = config.run
    storage = config.storage

    if tickers is not None:
        run = replace(run, tickers=_normalize_tickers(tickers))
    if trade_date:
        run = replace(run, trade_date_mode="explicit", explicit_trade_date=_validate_trade_date(trade_date))
    if archive_dir:
        storage = replace(storage, archive_dir=Path(archive_dir).expanduser().resolve())
    if site_dir:
        storage = replace(storage, site_dir=Path(site_dir).expanduser().resolve())

    return replace(config, run=run, storage=storage)


def _normalize_tickers(values: Iterable[str]) -> list[str]:
    normalized: list[str] = []
    seen: set[str] = set()
    for value in values:
        ticker = normalize_ticker_symbol(str(value))
        if not ticker or ticker in seen:
            continue
        seen.add(ticker)
        normalized.append(ticker)
    return normalized


def _normalize_analysts(values: Iterable[str]) -> list[str]:
    normalized: list[str] = []
    seen: set[str] = set()
    for value in values:
        analyst = str(value).strip().lower()
        if analyst not in ALL_ANALYSTS:
            raise ValueError(
                f"Unsupported analyst '{analyst}'. Expected only: {', '.join(ALL_ANALYSTS)}."
            )
        if analyst in seen:
            continue
        seen.add(analyst)
        normalized.append(analyst)
    return normalized or list(ALL_ANALYSTS)


def _resolve_path(value: str | os.PathLike[str], base_dir: Path) -> Path:
    expanded = os.path.expanduser(os.path.expandvars(str(value)))
    path = Path(expanded)
    if not path.is_absolute():
        path = (base_dir / path).resolve()
    return path


def _optional_string(value: object) -> str | None:
    if value is None:
        return None
    text = str(value).strip()
    return text or None


def _validate_trade_date(value: str) -> str:
    text = value.strip()
    if len(text) != 10 or text[4] != "-" or text[7] != "-":
        raise ValueError(f"Invalid trade date '{value}'. Expected YYYY-MM-DD.")
    return text
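Note that `_validate_trade_date` above only checks the shape of the string (length and dash positions), so an impossible date like `2026-13-40` would pass. A stricter standalone variant, sketched here with `datetime.strptime` (illustrative, not the package's implementation):

```python
import datetime as dt


def validate_trade_date(value: str) -> str:
    # strptime enforces real calendar dates, not just the YYYY-MM-DD shape,
    # so "2026-13-40" is rejected along with malformed strings.
    text = value.strip()
    try:
        dt.datetime.strptime(text, "%Y-%m-%d")
    except ValueError:
        raise ValueError(f"Invalid trade date '{value}'. Expected YYYY-MM-DD.") from None
    return text


print(validate_trade_date(" 2026-04-04 "))  # -> 2026-04-04
```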
@@ -0,0 +1,346 @@
from __future__ import annotations

import argparse
import json
import traceback
from datetime import date, datetime, timedelta
from pathlib import Path
from time import perf_counter
from typing import Any
from zoneinfo import ZoneInfo

import yfinance as yf

from cli.stats_handler import StatsCallbackHandler
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.reporting import save_report_bundle

from .config import ScheduledAnalysisConfig, load_scheduled_config, with_overrides
from .site import build_site


def main(argv: list[str] | None = None) -> int:
    parser = argparse.ArgumentParser(
        description="Run a non-interactive scheduled TradingAgents analysis and build a static report site."
    )
    parser.add_argument("--config", default="config/scheduled_analysis.toml", help="Path to scheduled analysis TOML config.")
    parser.add_argument("--archive-dir", help="Override archive directory for run history.")
    parser.add_argument("--site-dir", help="Override generated site output directory.")
    parser.add_argument("--tickers", help="Comma-separated ticker override.")
    parser.add_argument("--trade-date", help="Optional YYYY-MM-DD override for all tickers.")
    parser.add_argument("--site-only", action="store_true", help="Only rebuild the static site from archived runs.")
    parser.add_argument("--strict", action="store_true", help="Return a non-zero exit code if any ticker fails.")
    parser.add_argument("--label", default="github-actions", help="Run label for archived metadata.")
    args = parser.parse_args(argv)

    config = with_overrides(
        load_scheduled_config(args.config),
        archive_dir=args.archive_dir,
        site_dir=args.site_dir,
        tickers=_parse_ticker_override(args.tickers),
        trade_date=args.trade_date,
    )

    if args.site_only:
        manifests = build_site(config.storage.archive_dir, config.storage.site_dir, config.site)
        print(
            f"Rebuilt static site at {config.storage.site_dir} from {len(manifests)} archived run(s)."
        )
        return 0

    manifest = execute_scheduled_run(config, run_label=args.label)
    print(
        f"Completed run {manifest['run_id']} with status {manifest['status']} "
        f"({manifest['summary']['successful_tickers']} success / {manifest['summary']['failed_tickers']} failed)."
    )
    return 1 if args.strict and manifest["summary"]["failed_tickers"] else 0


def execute_scheduled_run(
    config: ScheduledAnalysisConfig,
    *,
    run_label: str = "manual",
) -> dict[str, Any]:
    tz = ZoneInfo(config.run.timezone)
    started_at = datetime.now(tz)
    run_id = _build_run_id(started_at, run_label)
    run_dir = config.storage.archive_dir / "runs" / started_at.strftime("%Y") / run_id
    run_dir.mkdir(parents=True, exist_ok=True)

    ticker_summaries: list[dict[str, Any]] = []
    engine_results_dir = run_dir / "engine-results"

    for ticker in config.run.tickers:
        ticker_summary = _run_single_ticker(
            config=config,
            ticker=ticker,
            run_dir=run_dir,
            engine_results_dir=engine_results_dir,
        )
        ticker_summaries.append(ticker_summary)

        if ticker_summary["status"] != "success" and not config.run.continue_on_ticker_error:
            break

    finished_at = datetime.now(tz)
    failures = sum(1 for item in ticker_summaries if item["status"] != "success")
    successes = len(ticker_summaries) - failures
    status = "success"
    if failures and successes:
        status = "partial_failure"
    elif failures:
        status = "failed"

    manifest = {
        "version": 1,
        "run_id": run_id,
        "label": run_label,
        "status": status,
        "started_at": started_at.isoformat(),
        "finished_at": finished_at.isoformat(),
        "timezone": config.run.timezone,
        "settings": _settings_snapshot(config),
        "summary": {
            "total_tickers": len(ticker_summaries),
            "successful_tickers": successes,
            "failed_tickers": failures,
        },
        "tickers": ticker_summaries,
    }

    _write_json(run_dir / "run.json", manifest)
    _write_json(config.storage.archive_dir / "latest-run.json", manifest)
    build_site(config.storage.archive_dir, config.storage.site_dir, config.site)
    return manifest


def resolve_trade_date(
    ticker: str,
    config: ScheduledAnalysisConfig,
) -> str:
    mode = config.run.trade_date_mode
    if mode == "explicit" and config.run.explicit_trade_date:
        return config.run.explicit_trade_date

    now = datetime.now(ZoneInfo(config.run.timezone))
    if mode == "today":
        return now.date().isoformat()
    if mode == "previous_business_day":
        return _previous_business_day(now.date()).isoformat()

    history = yf.Ticker(ticker).history(
        period=f"{config.run.latest_market_data_lookback_days}d",
        interval="1d",
        auto_adjust=False,
    )
    if history.empty:
        raise RuntimeError(
            f"Could not resolve the latest available trade date for {ticker}; yfinance returned no rows."
        )

    last_index = history.index[-1]
    last_value = getattr(last_index, "to_pydatetime", lambda: last_index)()
    last_date = last_value.date() if hasattr(last_value, "date") else last_value
    if not isinstance(last_date, date):
        raise RuntimeError(f"Unexpected trade date index value for {ticker}: {last_index!r}")
    return last_date.isoformat()


def _run_single_ticker(
    *,
    config: ScheduledAnalysisConfig,
    ticker: str,
    run_dir: Path,
    engine_results_dir: Path,
) -> dict[str, Any]:
    ticker_dir = run_dir / "tickers" / ticker
    ticker_dir.mkdir(parents=True, exist_ok=True)

    ticker_started = datetime.now(ZoneInfo(config.run.timezone))
    timer_start = perf_counter()

    try:
        trade_date = resolve_trade_date(ticker, config)
        stats_handler = StatsCallbackHandler()
        graph = TradingAgentsGraph(
            config.run.analysts,
            debug=False,
            config=_graph_config(config, engine_results_dir),
            callbacks=[stats_handler],
        )
        final_state, decision = graph.propagate(ticker, trade_date)

        report_dir = ticker_dir / "report"
        report_file = save_report_bundle(final_state, ticker, report_dir, generated_at=ticker_started)
        final_state_path = ticker_dir / "final_state.json"
        _write_json(final_state_path, _serialize_final_state(final_state))

        graph_log = (
            engine_results_dir
            / ticker
            / "TradingAgentsStrategy_logs"
            / f"full_states_log_{trade_date}.json"
        )
        copied_graph_log = None
        if graph_log.exists():
            copied_graph_log = ticker_dir / graph_log.name
            copied_graph_log.write_text(graph_log.read_text(encoding="utf-8"), encoding="utf-8")

        metrics = stats_handler.get_stats()
        analysis_payload = {
            "ticker": ticker,
            "status": "success",
            "trade_date": trade_date,
            "decision": str(decision),
            "started_at": ticker_started.isoformat(),
            "finished_at": datetime.now(ZoneInfo(config.run.timezone)).isoformat(),
            "duration_seconds": round(perf_counter() - timer_start, 2),
            "metrics": metrics,
            "provider": config.llm.provider,
            "models": {
                "quick_model": config.llm.quick_model,
                "deep_model": config.llm.deep_model,
            },
        }
        analysis_path = ticker_dir / "analysis.json"
        _write_json(analysis_path, analysis_payload)

        return {
            "ticker": ticker,
            "status": "success",
            "trade_date": trade_date,
            "decision": str(decision),
            "started_at": ticker_started.isoformat(),
            "finished_at": analysis_payload["finished_at"],
            "duration_seconds": analysis_payload["duration_seconds"],
            "metrics": metrics,
            "artifacts": {
                "analysis_json": _relative_to_run(run_dir, analysis_path),
                "report_markdown": _relative_to_run(run_dir, report_file),
                "final_state_json": _relative_to_run(run_dir, final_state_path),
                "graph_log_json": _relative_to_run(run_dir, copied_graph_log) if copied_graph_log else None,
            },
        }
    except Exception as exc:
        error_payload = {
            "ticker": ticker,
            "status": "failed",
            "error": str(exc),
            "traceback": traceback.format_exc(),
            "started_at": ticker_started.isoformat(),
            "finished_at": datetime.now(ZoneInfo(config.run.timezone)).isoformat(),
            "duration_seconds": round(perf_counter() - timer_start, 2),
        }
        error_path = ticker_dir / "error.json"
        _write_json(error_path, error_payload)

        return {
            "ticker": ticker,
            "status": "failed",
            "trade_date": None,
            "decision": None,
            "error": str(exc),
            "started_at": error_payload["started_at"],
            "finished_at": error_payload["finished_at"],
            "duration_seconds": error_payload["duration_seconds"],
            "metrics": {"llm_calls": 0, "tool_calls": 0, "tokens_in": 0, "tokens_out": 0},
            "artifacts": {
                "error_json": _relative_to_run(run_dir, error_path),
            },
        }


def _graph_config(config: ScheduledAnalysisConfig, engine_results_dir: Path) -> dict[str, Any]:
    graph_config = DEFAULT_CONFIG.copy()
    graph_config["results_dir"] = str(engine_results_dir)
    graph_config["llm_provider"] = config.llm.provider
    graph_config["quick_think_llm"] = config.llm.quick_model
    graph_config["deep_think_llm"] = config.llm.deep_model
    graph_config["max_debate_rounds"] = config.run.max_debate_rounds
    graph_config["max_risk_discuss_rounds"] = config.run.max_risk_discuss_rounds
    graph_config["output_language"] = config.run.output_language
    graph_config["codex_reasoning_effort"] = config.llm.codex_reasoning_effort
    graph_config["codex_summary"] = config.llm.codex_summary
    graph_config["codex_personality"] = config.llm.codex_personality
    graph_config["codex_request_timeout"] = config.llm.codex_request_timeout
    graph_config["codex_max_retries"] = config.llm.codex_max_retries
    graph_config["codex_cleanup_threads"] = config.llm.codex_cleanup_threads
    if config.llm.codex_workspace_dir:
        graph_config["codex_workspace_dir"] = config.llm.codex_workspace_dir
    if config.llm.codex_binary:
        graph_config["codex_binary"] = config.llm.codex_binary
    return graph_config


def _serialize_final_state(final_state: dict[str, Any]) -> dict[str, Any]:
    investment_debate = final_state.get("investment_debate_state") or {}
    risk_debate = final_state.get("risk_debate_state") or {}
    return {
        "company_of_interest": final_state.get("company_of_interest"),
        "trade_date": final_state.get("trade_date"),
        "market_report": final_state.get("market_report"),
        "sentiment_report": final_state.get("sentiment_report"),
        "news_report": final_state.get("news_report"),
        "fundamentals_report": final_state.get("fundamentals_report"),
        "investment_debate_state": {
            "bull_history": investment_debate.get("bull_history", ""),
            "bear_history": investment_debate.get("bear_history", ""),
            "history": investment_debate.get("history", ""),
            "current_response": investment_debate.get("current_response", ""),
            "judge_decision": investment_debate.get("judge_decision", ""),
        },
        "trader_investment_plan": final_state.get("trader_investment_plan", ""),
        "investment_plan": final_state.get("investment_plan", ""),
        "risk_debate_state": {
            "aggressive_history": risk_debate.get("aggressive_history", ""),
            "conservative_history": risk_debate.get("conservative_history", ""),
            "neutral_history": risk_debate.get("neutral_history", ""),
            "history": risk_debate.get("history", ""),
            "judge_decision": risk_debate.get("judge_decision", ""),
        },
        "final_trade_decision": final_state.get("final_trade_decision", ""),
    }


def _settings_snapshot(config: ScheduledAnalysisConfig) -> dict[str, Any]:
    return {
        "provider": config.llm.provider,
        "quick_model": config.llm.quick_model,
        "deep_model": config.llm.deep_model,
        "codex_reasoning_effort": config.llm.codex_reasoning_effort,
        "output_language": config.run.output_language,
        "analysts": list(config.run.analysts),
        "trade_date_mode": config.run.trade_date_mode,
        "max_debate_rounds": config.run.max_debate_rounds,
|
||||
"max_risk_discuss_rounds": config.run.max_risk_discuss_rounds,
|
||||
}
|
||||
|
||||
|
||||
def _build_run_id(started_at: datetime, run_label: str) -> str:
|
||||
clean_label = "".join(ch if ch.isalnum() or ch in ("-", "_") else "-" for ch in run_label.strip()) or "run"
|
||||
return f"{started_at.strftime('%Y%m%dT%H%M%S')}_{clean_label}"
|
||||
|
||||
|
||||
def _parse_ticker_override(value: str | None) -> list[str] | None:
|
||||
if not value:
|
||||
return None
|
||||
return [item.strip() for item in value.split(",") if item.strip()]
|
||||
|
||||
|
||||
def _previous_business_day(current: date) -> date:
|
||||
candidate = current - timedelta(days=1)
|
||||
while candidate.weekday() >= 5:
|
||||
candidate -= timedelta(days=1)
|
||||
return candidate
|
||||
|
||||
|
||||
def _relative_to_run(run_dir: Path, path: Path | None) -> str | None:
|
||||
if path is None:
|
||||
return None
|
||||
return path.relative_to(run_dir).as_posix()
|
||||
|
||||
|
||||
def _write_json(path: Path, payload: dict[str, Any]) -> None:
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
path.write_text(json.dumps(payload, indent=2, ensure_ascii=False), encoding="utf-8")
|
||||
|
|
@ -0,0 +1,483 @@
from __future__ import annotations

import html
import json
import shutil
from datetime import datetime
from pathlib import Path
from typing import Any

from .config import SiteSettings

try:
    from markdown_it import MarkdownIt
except ImportError:  # pragma: no cover
    MarkdownIt = None


_MARKDOWN = (
    MarkdownIt("commonmark", {"html": False, "linkify": True}).enable(["table", "strikethrough"])
    if MarkdownIt
    else None
)


def build_site(archive_dir: Path, site_dir: Path, settings: SiteSettings) -> list[dict[str, Any]]:
    archive_dir = Path(archive_dir)
    site_dir = Path(site_dir)
    manifests = _load_run_manifests(archive_dir)

    if site_dir.exists():
        shutil.rmtree(site_dir)
    (site_dir / "assets").mkdir(parents=True, exist_ok=True)
    _write_text(site_dir / "assets" / "style.css", _STYLE_CSS)

    for manifest in manifests:
        run_dir = Path(manifest["_run_dir"])
        _copy_artifacts(site_dir, run_dir, manifest)
        _write_text(
            site_dir / "runs" / manifest["run_id"] / "index.html",
            _render_run_page(manifest, settings),
        )
        for ticker_summary in manifest.get("tickers", []):
            _write_text(
                site_dir / "runs" / manifest["run_id"] / f"{ticker_summary['ticker']}.html",
                _render_ticker_page(manifest, ticker_summary, settings),
            )

    _write_text(site_dir / "index.html", _render_index_page(manifests, settings))
    _write_json(
        site_dir / "feed.json",
        {
            "generated_at": datetime.now().isoformat(),
            "runs": [
                {key: value for key, value in manifest.items() if key != "_run_dir"}
                for manifest in manifests
            ],
        },
    )
    return manifests


def _load_run_manifests(archive_dir: Path) -> list[dict[str, Any]]:
    manifests: list[dict[str, Any]] = []
    runs_root = archive_dir / "runs"
    if not runs_root.exists():
        return manifests

    for path in runs_root.rglob("run.json"):
        payload = json.loads(path.read_text(encoding="utf-8"))
        payload["_run_dir"] = str(path.parent)
        manifests.append(payload)

    manifests.sort(key=lambda item: item.get("started_at", ""), reverse=True)
    return manifests


def _copy_artifacts(site_dir: Path, run_dir: Path, manifest: dict[str, Any]) -> None:
    for ticker_summary in manifest.get("tickers", []):
        download_dir = site_dir / "downloads" / manifest["run_id"] / ticker_summary["ticker"]
        download_dir.mkdir(parents=True, exist_ok=True)
        for relative_path in (ticker_summary.get("artifacts") or {}).values():
            if not relative_path:
                continue
            source = run_dir / relative_path
            if source.is_file():
                shutil.copy2(source, download_dir / source.name)


def _render_index_page(manifests: list[dict[str, Any]], settings: SiteSettings) -> str:
    latest = manifests[0] if manifests else None
    latest_html = (
        f"""
<section class="hero">
  <div>
    <p class="eyebrow">Latest automated run</p>
    <h1>{_escape(settings.title)}</h1>
    <p class="subtitle">{_escape(settings.subtitle)}</p>
  </div>
  <div class="hero-card">
    <div class="status {latest['status']}">{_escape(latest['status'].replace('_', ' '))}</div>
    <p><strong>Run ID</strong><span>{_escape(latest['run_id'])}</span></p>
    <p><strong>Started</strong><span>{_escape(latest['started_at'])}</span></p>
    <p><strong>Tickers</strong><span>{latest['summary']['total_tickers']}</span></p>
    <p><strong>Success</strong><span>{latest['summary']['successful_tickers']}</span></p>
    <p><strong>Failed</strong><span>{latest['summary']['failed_tickers']}</span></p>
    <a class="button" href="runs/{_escape(latest['run_id'])}/index.html">Open latest run</a>
  </div>
</section>
"""
        if latest
        else f"""
<section class="hero">
  <div>
    <p class="eyebrow">Waiting for first run</p>
    <h1>{_escape(settings.title)}</h1>
    <p class="subtitle">{_escape(settings.subtitle)}</p>
  </div>
  <div class="hero-card">
    <div class="status pending">no data yet</div>
    <p>The scheduled workflow has not produced an archived run yet.</p>
  </div>
</section>
"""
    )

    cards = []
    for manifest in manifests[: settings.max_runs_on_homepage]:
        cards.append(
            f"""
<article class="run-card">
  <div class="run-card-header">
    <a href="runs/{_escape(manifest['run_id'])}/index.html">{_escape(manifest['run_id'])}</a>
    <span class="status {manifest['status']}">{_escape(manifest['status'].replace('_', ' '))}</span>
  </div>
  <p>{_escape(manifest['started_at'])}</p>
  <p>{manifest['summary']['successful_tickers']} succeeded, {manifest['summary']['failed_tickers']} failed</p>
  <p>{_escape(manifest['settings']['provider'])} / {_escape(manifest['settings']['deep_model'])}</p>
</article>
"""
        )

    body = latest_html + f"""
<section class="section">
  <div class="section-head">
    <h2>Recent runs</h2>
    <p>{len(manifests)} archived run(s)</p>
  </div>
  <div class="run-grid">
    {''.join(cards) if cards else '<p class="empty">No archived runs were found.</p>'}
  </div>
</section>
"""
    return _page_template(settings.title, body, prefix="")


def _render_run_page(manifest: dict[str, Any], settings: SiteSettings) -> str:
    ticker_cards = []
    for ticker_summary in manifest.get("tickers", []):
        ticker_cards.append(
            f"""
<article class="ticker-card">
  <div class="ticker-card-header">
    <a href="{_escape(ticker_summary['ticker'])}.html">{_escape(ticker_summary['ticker'])}</a>
    <span class="status {ticker_summary['status']}">{_escape(ticker_summary['status'])}</span>
  </div>
  <p><strong>Trade date</strong><span>{_escape(ticker_summary.get('trade_date') or '-')}</span></p>
  <p><strong>Duration</strong><span>{ticker_summary.get('duration_seconds', 0):.1f}s</span></p>
  <p><strong>Decision</strong><span>{_escape(ticker_summary.get('decision') or ticker_summary.get('error') or '-')}</span></p>
</article>
"""
        )

    body = f"""
<nav class="breadcrumbs"><a href="../../index.html">Home</a></nav>
<section class="hero compact">
  <div>
    <p class="eyebrow">Run detail</p>
    <h1>{_escape(manifest['run_id'])}</h1>
    <p class="subtitle">{_escape(manifest['started_at'])}</p>
  </div>
  <div class="hero-card">
    <div class="status {manifest['status']}">{_escape(manifest['status'].replace('_', ' '))}</div>
    <p><strong>Provider</strong><span>{_escape(manifest['settings']['provider'])}</span></p>
    <p><strong>Deep model</strong><span>{_escape(manifest['settings']['deep_model'])}</span></p>
    <p><strong>Quick model</strong><span>{_escape(manifest['settings']['quick_model'])}</span></p>
    <p><strong>Language</strong><span>{_escape(manifest['settings']['output_language'])}</span></p>
  </div>
</section>
<section class="section">
  <div class="section-head">
    <h2>Tickers</h2>
    <p>{manifest['summary']['successful_tickers']} success / {manifest['summary']['failed_tickers']} failed</p>
  </div>
  <div class="ticker-grid">
    {''.join(ticker_cards)}
  </div>
</section>
"""
    return _page_template(f"{manifest['run_id']} | {settings.title}", body, prefix="../../")


def _render_ticker_page(
    manifest: dict[str, Any],
    ticker_summary: dict[str, Any],
    settings: SiteSettings,
) -> str:
    run_dir = Path(manifest["_run_dir"])
    report_html = "<p class='empty'>No report markdown was generated for this ticker.</p>"
    report_relative = (ticker_summary.get("artifacts") or {}).get("report_markdown")
    if report_relative:
        report_path = run_dir / report_relative
        if report_path.exists():
            report_html = _render_markdown(report_path.read_text(encoding="utf-8"))

    download_links = []
    for relative_path in (ticker_summary.get("artifacts") or {}).values():
        if not relative_path:
            continue
        artifact_name = Path(relative_path).name
        download_links.append(
            f"<a class='pill' href='../../downloads/{_escape(manifest['run_id'])}/{_escape(ticker_summary['ticker'])}/{_escape(artifact_name)}'>{_escape(artifact_name)}</a>"
        )

    failure_html = ""
    if ticker_summary["status"] != "success":
        failure_html = (
            "<section class='section'>"
            "<div class='section-head'><h2>Failure</h2></div>"
            f"<pre class='error-block'>{_escape(ticker_summary.get('error') or 'Unknown error')}</pre>"
            "</section>"
        )

    body = f"""
<nav class="breadcrumbs">
  <a href="../../index.html">Home</a>
  <a href="index.html">{_escape(manifest['run_id'])}</a>
</nav>
<section class="hero compact">
  <div>
    <p class="eyebrow">Ticker report</p>
    <h1>{_escape(ticker_summary['ticker'])}</h1>
    <p class="subtitle">{_escape(ticker_summary.get('trade_date') or '-')} / {_escape(ticker_summary['status'])}</p>
  </div>
  <div class="hero-card">
    <div class="status {ticker_summary['status']}">{_escape(ticker_summary['status'])}</div>
    <p><strong>Decision</strong><span>{_escape(ticker_summary.get('decision') or '-')}</span></p>
    <p><strong>Duration</strong><span>{ticker_summary.get('duration_seconds', 0):.1f}s</span></p>
    <p><strong>LLM calls</strong><span>{ticker_summary.get('metrics', {}).get('llm_calls', 0)}</span></p>
    <p><strong>Tool calls</strong><span>{ticker_summary.get('metrics', {}).get('tool_calls', 0)}</span></p>
  </div>
</section>
<section class="section">
  <div class="section-head">
    <h2>Artifacts</h2>
  </div>
  <div class="pill-row">
    {''.join(download_links) if download_links else "<span class='empty'>No downloadable artifacts</span>"}
  </div>
</section>
{failure_html}
<section class="section prose">
  <div class="section-head">
    <h2>Rendered report</h2>
  </div>
  {report_html}
</section>
"""
    return _page_template(
        f"{ticker_summary['ticker']} | {settings.title}",
        body,
        prefix="../../",
    )


def _page_template(title: str, body: str, *, prefix: str) -> str:
    return f"""<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <title>{_escape(title)}</title>
  <link rel="stylesheet" href="{prefix}assets/style.css" />
</head>
<body>
  <main class="shell">
    {body}
  </main>
</body>
</html>
"""


def _render_markdown(content: str) -> str:
    if _MARKDOWN is None:
        return f"<pre>{_escape(content)}</pre>"
    return _MARKDOWN.render(content)


def _write_text(path: Path, content: str) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content, encoding="utf-8")


def _write_json(path: Path, payload: dict[str, Any]) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(payload, indent=2, ensure_ascii=False), encoding="utf-8")


def _escape(value: object) -> str:
    return html.escape(str(value))


_STYLE_CSS = """
:root {
  --bg: #f4efe7;
  --paper: rgba(255, 255, 255, 0.84);
  --ink: #132238;
  --muted: #5d6c7d;
  --line: rgba(19, 34, 56, 0.12);
  --accent: #0f7c82;
  --success: #1f7a4d;
  --warning: #c46a1c;
  --danger: #b23b3b;
  --shadow: 0 18px 45px rgba(17, 34, 51, 0.12);
}

* { box-sizing: border-box; }

body {
  margin: 0;
  color: var(--ink);
  font-family: Aptos, "Segoe UI", "Noto Sans KR", sans-serif;
  background:
    radial-gradient(circle at top right, rgba(15, 124, 130, 0.16), transparent 34%),
    radial-gradient(circle at top left, rgba(196, 106, 28, 0.16), transparent 28%),
    linear-gradient(180deg, #f8f3eb 0%, #eef4f5 100%);
}

a { color: inherit; }

.shell {
  width: min(1180px, calc(100% - 32px));
  margin: 0 auto;
  padding: 24px 0 56px;
}

.hero {
  display: grid;
  grid-template-columns: minmax(0, 1.7fr) minmax(280px, 0.9fr);
  gap: 20px;
  padding: 28px;
  border: 1px solid var(--line);
  border-radius: 28px;
  background: linear-gradient(135deg, rgba(255,255,255,0.9), rgba(248,251,252,0.9));
  box-shadow: var(--shadow);
}

.hero h1, .section h2 {
  margin: 0;
  font-family: Georgia, "Times New Roman", serif;
  letter-spacing: -0.03em;
}

.hero h1 {
  font-size: clamp(2.1rem, 4vw, 3.4rem);
  line-height: 0.95;
}

.subtitle, .section-head p, .hero-card p, .run-card p, .ticker-card p, .breadcrumbs, .empty {
  color: var(--muted);
}

.eyebrow {
  margin: 0 0 14px;
  text-transform: uppercase;
  letter-spacing: 0.16em;
  font-size: 0.78rem;
  color: var(--accent);
}

.hero-card, .run-card, .ticker-card, .section, .error-block, .prose pre {
  border: 1px solid var(--line);
  border-radius: 22px;
  background: var(--paper);
  box-shadow: var(--shadow);
}

.hero-card, .run-card, .ticker-card, .section { padding: 18px 20px; }

.hero-card p, .ticker-card p {
  display: flex;
  justify-content: space-between;
  gap: 12px;
  margin: 10px 0;
}

.status {
  display: inline-flex;
  align-items: center;
  padding: 8px 12px;
  border-radius: 999px;
  font-size: 0.82rem;
  font-weight: 700;
  text-transform: uppercase;
  letter-spacing: 0.06em;
  margin-bottom: 12px;
}

.status.success { background: rgba(31, 122, 77, 0.12); color: var(--success); }
.status.partial_failure, .status.pending { background: rgba(196, 106, 28, 0.14); color: var(--warning); }
.status.failed { background: rgba(178, 59, 59, 0.12); color: var(--danger); }

.button, .pill {
  display: inline-flex;
  align-items: center;
  text-decoration: none;
  border-radius: 999px;
  padding: 10px 16px;
  font-weight: 600;
  border: 1px solid rgba(15, 124, 130, 0.22);
  background: rgba(15, 124, 130, 0.12);
}

.section { margin-top: 20px; }

.section-head, .run-card-header, .ticker-card-header {
  display: flex;
  justify-content: space-between;
  gap: 16px;
  align-items: baseline;
}

.run-grid, .ticker-grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(240px, 1fr));
  gap: 16px;
}

.breadcrumbs {
  display: flex;
  gap: 12px;
  margin: 0 0 12px;
}

.breadcrumbs a::after {
  content: "/";
  margin-left: 12px;
  opacity: 0.4;
}

.breadcrumbs a:last-child::after { display: none; }

.pill-row {
  display: flex;
  flex-wrap: wrap;
  gap: 10px;
}

.prose { line-height: 1.65; }
.prose h1, .prose h2, .prose h3 { font-family: Georgia, "Times New Roman", serif; }
.prose pre, .error-block {
  padding: 16px;
  overflow: auto;
  white-space: pre-wrap;
  font-family: Consolas, "Courier New", monospace;
}

.prose table {
  width: 100%;
  border-collapse: collapse;
}

.prose th, .prose td {
  border: 1px solid var(--line);
  padding: 10px;
  text-align: left;
}

@media (max-width: 840px) {
  .hero { grid-template-columns: 1fr; }
  .shell { width: min(100% - 20px, 1180px); }
}
"""