Add Settings UI, Analysis Pipeline visualization, and comprehensive documentation
Features:
- API key management with secure browser localStorage
- Model selection for Deep Think (Opus) and Quick Think (Sonnet/Haiku)
- Configurable max debate rounds (1-5)
- Full analysis pipeline visualization with 9-step progress tracking
- Agent reports display (Market, News, Social, Fundamentals analysts)
- Investment debate viewer (Bull vs Bear with Research Manager decision)
- Risk debate viewer (Aggressive vs Conservative vs Neutral)
- Data sources tracking panel
- Dark mode support throughout
- Bulk "Analyze All" functionality for all 50 stocks

Backend:
- Added analysis config parameters to API endpoints
- Support for provider/model selection in analysis requests
- Indian market data integration improvements

Documentation:
- Comprehensive README with 10 feature screenshots
- API endpoint documentation
- Project structure guide
- Getting started instructions
@@ -1,73 +1,210 @@
-# React + TypeScript + Vite
-
-This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.
-
-Currently, two official plugins are available:
-
-- [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react) uses [Babel](https://babeljs.io/) (or [oxc](https://oxc.rs) when used in [rolldown-vite](https://vite.dev/guide/rolldown)) for Fast Refresh
-- [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react-swc) uses [SWC](https://swc.rs/) for Fast Refresh
-
-## React Compiler
-
-The React Compiler is not enabled on this template because of its impact on dev & build performances. To add it, see [this documentation](https://react.dev/learn/react-compiler/installation).
-
-## Expanding the ESLint configuration
-
-If you are developing a production application, we recommend updating the configuration to enable type-aware lint rules:
-
-```js
-export default defineConfig([
-  globalIgnores(['dist']),
-  {
-    files: ['**/*.{ts,tsx}'],
-    extends: [
-      // Other configs...
-
-      // Remove tseslint.configs.recommended and replace with this
-      tseslint.configs.recommendedTypeChecked,
-      // Alternatively, use this for stricter rules
-      tseslint.configs.strictTypeChecked,
-      // Optionally, add this for stylistic rules
-      tseslint.configs.stylisticTypeChecked,
-
-      // Other configs...
-    ],
-    languageOptions: {
-      parserOptions: {
-        project: ['./tsconfig.node.json', './tsconfig.app.json'],
-        tsconfigRootDir: import.meta.dirname,
-      },
-      // other options...
-    },
-  },
-])
-```
-
-You can also install [eslint-plugin-react-x](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-x) and [eslint-plugin-react-dom](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-dom) for React-specific lint rules:
-
-```js
-// eslint.config.js
-import reactX from 'eslint-plugin-react-x'
-import reactDom from 'eslint-plugin-react-dom'
-
-export default defineConfig([
-  globalIgnores(['dist']),
-  {
-    files: ['**/*.{ts,tsx}'],
-    extends: [
-      // Other configs...
-      // Enable lint rules for React
-      reactX.configs['recommended-typescript'],
-      // Enable lint rules for React DOM
-      reactDom.configs.recommended,
-    ],
-    languageOptions: {
-      parserOptions: {
-        project: ['./tsconfig.node.json', './tsconfig.app.json'],
-        tsconfigRootDir: import.meta.dirname,
-      },
-      // other options...
-    },
-  },
-])
-```
+# Nifty50 AI Trading Dashboard
+
+A modern, feature-rich frontend for the TradingAgents multi-agent AI stock analysis system. This dashboard provides real-time AI-powered recommendations for all 50 stocks in the Nifty 50 index, with full visibility into the analysis pipeline, agent reports, and debate processes.
+
+## Features Overview
+
+### Dashboard - Main View
+
+The main dashboard displays AI recommendations for all 50 Nifty stocks with:
+
+- **Summary Statistics**: Quick view of Buy/Hold/Sell distribution
+- **Top Picks**: Highlighted stocks with the strongest buy signals
+- **Stocks to Avoid**: High-confidence sell recommendations
+- **Analyze All**: One-click bulk analysis of all stocks
+- **Filter & Search**: Filter by recommendation type or search by symbol
+
+### Dark Mode Support
+
+Full dark mode support with automatic system theme detection.
+
+### Settings Panel
+
+Configure the AI analysis system directly from the browser:
+
+- **LLM Provider Selection**: Choose between Claude Subscription or Anthropic API
+- **API Key Management**: Securely store API keys in browser localStorage
+- **Model Selection**: Configure Deep Think (Opus) and Quick Think (Sonnet/Haiku) models
+- **Analysis Settings**: Adjust max debate rounds for thoroughness vs. speed
+
+### Stock Detail View
+
+Detailed analysis view for individual stocks with:
+
+- **Price Chart**: Interactive price history with buy/sell/hold signal markers
+- **Recommendation Details**: Decision, confidence level, and risk assessment
+- **Recommendation History**: Historical AI decisions for the stock
+- **AI Analysis Summary**: Expandable detailed analysis sections
+
+### Analysis Pipeline Visualization
+
+See exactly how the AI reached its decision with the full analysis pipeline:
+
+- **9-Step Pipeline**: Track progress through data collection, analysis, debates, and the final decision
+- **Agent Reports**: View individual reports from the Market, News, Social Media, and Fundamentals analysts
+- **Real-time Status**: See which steps are completed, running, or pending
+
+### Investment Debates
+
+The AI uses a debate system where Bull and Bear analysts argue their cases:
+
+- **Bull vs Bear**: Opposing viewpoints with detailed arguments
+- **Research Manager Decision**: Final judgment weighing both sides
+- **Full Debate History**: Complete transcript of the debate rounds
+
+#### Expanded Debate View
+
+Full debate content with Bull and Bear arguments.
+
+### Data Sources Tracking
+
+View all raw data sources used for analysis:
+
+- **Source Types**: Market data, news, fundamentals, social media
+- **Fetch Status**: Success/failure indicators for each data source
+- **Data Preview**: Expandable view of fetched data
+
+### How It Works Page
+
+Educational content explaining the multi-agent AI system:
+
+- **Multi-Agent Architecture**: Overview of the specialized AI agents
+- **Analysis Process**: Step-by-step breakdown of the pipeline
+- **Agent Profiles**: Details about each analyst type
+- **Debate Process**: Explanation of how consensus is reached
+
+### Historical Analysis & Backtesting
+
+Track AI performance over time with comprehensive analytics:
+
+- **Prediction Accuracy**: Overall and per-recommendation-type accuracy
+- **Accuracy Trend**: Visualize accuracy over time
+- **Risk Metrics**: Sharpe ratio, max drawdown, win rate
+- **Portfolio Simulator**: Test different investment amounts
+- **AI vs Nifty50**: Compare AI strategy performance against the index
+- **Return Distribution**: Histogram of next-day returns
+
+## Tech Stack
+
+- **Frontend**: React 18 + TypeScript + Vite
+- **Styling**: Tailwind CSS with dark mode support
+- **Charts**: Recharts for interactive visualizations
+- **Icons**: Lucide React
+- **State Management**: React Context API
+- **Backend**: FastAPI (Python) with SQLite database
+
+## Getting Started
+
+### Prerequisites
+
+- Node.js 18+
+- Python 3.10+
+- npm or yarn
+
+### Installation
+
+1. **Install frontend dependencies:**
+
+   ```bash
+   cd frontend
+   npm install
+   ```
+
+2. **Install backend dependencies:**
+
+   ```bash
+   cd frontend/backend
+   pip install -r requirements.txt
+   ```
+
+### Running the Application
+
+1. **Start the backend server:**
+
+   ```bash
+   cd frontend/backend
+   python server.py
+   ```
+
+   The backend runs on `http://localhost:8001`.
+
+2. **Start the frontend development server:**
+
+   ```bash
+   cd frontend
+   npm run dev
+   ```
+
+   The frontend runs on `http://localhost:5173`.
+
+## Project Structure
+
+```
+frontend/
+├── src/
+│   ├── components/
+│   │   ├── pipeline/              # Pipeline visualization components
+│   │   │   ├── PipelineOverview.tsx
+│   │   │   ├── AgentReportCard.tsx
+│   │   │   ├── DebateViewer.tsx
+│   │   │   ├── RiskDebateViewer.tsx
+│   │   │   └── DataSourcesPanel.tsx
+│   │   ├── Header.tsx
+│   │   ├── SettingsModal.tsx
+│   │   └── ...
+│   ├── contexts/
+│   │   └── SettingsContext.tsx    # Settings state management
+│   ├── pages/
+│   │   ├── Dashboard.tsx
+│   │   ├── StockDetail.tsx
+│   │   ├── History.tsx
+│   │   └── About.tsx
+│   ├── services/
+│   │   └── api.ts                 # API client
+│   ├── types/
+│   │   └── pipeline.ts            # TypeScript types for pipeline data
+│   └── App.tsx
+├── backend/
+│   ├── server.py                  # FastAPI server
+│   ├── database.py                # SQLite database operations
+│   └── recommendations.db         # SQLite database
+└── docs/
+    └── screenshots/               # Feature screenshots
+```
+
+## API Endpoints
+
+### Recommendations
+
+- `GET /recommendations/{date}` - Get all recommendations for a date
+- `GET /recommendations/{date}/{symbol}` - Get the recommendation for a specific stock
+- `POST /recommendations` - Save new recommendations
+
+### Pipeline Data
+
+- `GET /recommendations/{date}/{symbol}/pipeline` - Get full pipeline data
+- `GET /recommendations/{date}/{symbol}/agents` - Get agent reports
+- `GET /recommendations/{date}/{symbol}/debates` - Get debate history
+- `GET /recommendations/{date}/{symbol}/data-sources` - Get data source logs
+
+### Analysis
+
+- `POST /analyze/{symbol}` - Run analysis for a single stock
+- `POST /analyze/all` - Run analysis for all Nifty 50 stocks
+- `GET /analyze/all/status` - Get bulk analysis progress
+- `GET /analyze/running` - List currently running analyses
+
+## Configuration
+
+Settings are stored in browser localStorage and include:
+
+- `provider`: LLM provider (`claude_subscription` / `anthropic_api`)
+- `anthropicApiKey`: API key for the Anthropic API provider
+- `deepThinkModel`: Model for complex analysis (opus/sonnet/haiku)
+- `quickThinkModel`: Model for fast operations (opus/sonnet/haiku)
+- `maxDebateRounds`: Number of debate rounds (1-5)
+
+## Contributing
+
+1. Fork the repository
+2. Create a feature branch
+3. Make your changes
+4. Run tests and linting
+5. Submit a pull request
+
+## License
+
+This project is part of the TradingAgents research project.
+
+## Disclaimer
+
+AI-generated recommendations are for educational and informational purposes only. They do not constitute financial advice. Always conduct your own research and consult a qualified financial advisor before making investment decisions.
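The Configuration keys above are camelCase on the frontend, while the backend's `AnalysisConfig` model (in the server changes that follow) uses snake_case. As a minimal sketch of that mapping — the helper name and the exact request shape the UI sends are assumptions inferred from the documented fields and defaults:

```python
# Sketch only: maps the camelCase localStorage settings documented in the
# README to the snake_case fields of the backend's AnalysisConfig model.
# build_analysis_payload is a hypothetical helper, not actual client code.

def build_analysis_payload(settings: dict) -> dict:
    """Build a JSON body for POST /analyze/{symbol} from frontend settings."""
    return {
        "deep_think_model": settings.get("deepThinkModel", "opus"),
        "quick_think_model": settings.get("quickThinkModel", "sonnet"),
        "provider": settings.get("provider", "claude_subscription"),
        "api_key": settings.get("anthropicApiKey") or None,
        "max_debate_rounds": int(settings.get("maxDebateRounds", 1)),
    }

print(build_analysis_payload({"deepThinkModel": "opus", "maxDebateRounds": 2}))
```

Missing keys fall back to the same defaults the server applies, so an empty settings object still yields a valid request.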
@@ -129,15 +129,34 @@ class SavePipelineDataRequest(BaseModel):
     data_sources: Optional[list] = None
 
 
+class AnalysisConfig(BaseModel):
+    deep_think_model: Optional[str] = "opus"
+    quick_think_model: Optional[str] = "sonnet"
+    provider: Optional[str] = "claude_subscription"  # claude_subscription or anthropic_api
+    api_key: Optional[str] = None
+    max_debate_rounds: Optional[int] = 1
+
+
+class RunAnalysisRequest(BaseModel):
+    symbol: str
+    date: Optional[str] = None  # Defaults to today if not provided
+    config: Optional[AnalysisConfig] = None
+
+
-def run_analysis_task(symbol: str, date: str):
+def run_analysis_task(symbol: str, date: str, analysis_config: dict = None):
     """Background task to run trading analysis for a stock."""
     global running_analyses
 
+    # Default config values
+    if analysis_config is None:
+        analysis_config = {}
+
+    deep_think_model = analysis_config.get("deep_think_model", "opus")
+    quick_think_model = analysis_config.get("quick_think_model", "sonnet")
+    provider = analysis_config.get("provider", "claude_subscription")
+    api_key = analysis_config.get("api_key")
+    max_debate_rounds = analysis_config.get("max_debate_rounds", 1)
+
     try:
         running_analyses[symbol] = {
            "status": "initializing",
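One gap worth noting: the README and UI document max debate rounds as 1-5, but the `AnalysisConfig` model above does not enforce that range. A hypothetical server-side clamp (not part of this diff) could look like:

```python
# Hypothetical helper, not in the actual server code: keeps
# max_debate_rounds inside the 1-5 range the settings UI exposes.

def clamp_debate_rounds(value, lo: int = 1, hi: int = 5) -> int:
    """Coerce a debate-round count to an int and clamp it to [lo, hi]."""
    try:
        value = int(value)
    except (TypeError, ValueError):
        return lo  # fall back to the documented default of 1 round
    return max(lo, min(hi, value))
```

Alternatively, a Pydantic field constraint on the model itself would reject out-of-range values at request-validation time instead of silently clamping.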
@@ -151,15 +170,19 @@ def run_analysis_task(symbol: str, date: str):
 
         running_analyses[symbol]["progress"] = "Initializing analysis pipeline..."
 
-        # Create config
+        # Create config from user settings
         config = DEFAULT_CONFIG.copy()
         config["llm_provider"] = "anthropic"  # Use Claude for all LLM
-        config["deep_think_llm"] = "opus"  # Claude Opus (Claude Max CLI alias)
-        config["quick_think_llm"] = "sonnet"  # Claude Sonnet (Claude Max CLI alias)
-        config["max_debate_rounds"] = 1
+        config["deep_think_llm"] = deep_think_model
+        config["quick_think_llm"] = quick_think_model
+        config["max_debate_rounds"] = max_debate_rounds
+
+        # If using API provider and key is provided, set it in environment
+        if provider == "anthropic_api" and api_key:
+            os.environ["ANTHROPIC_API_KEY"] = api_key
 
         running_analyses[symbol]["status"] = "running"
-        running_analyses[symbol]["progress"] = "Running market analysis..."
+        running_analyses[symbol]["progress"] = f"Running market analysis (model: {deep_think_model})..."
 
         # Initialize and run
         ta = TradingAgentsGraph(debug=False, config=config)
@@ -368,8 +391,145 @@ async def save_pipeline_data(request: SavePipelineDataRequest):
 
 # ============== Analysis Endpoints ==============
 
+# Track bulk analysis state
+bulk_analysis_state = {
+    "status": "idle",  # idle, running, completed, error
+    "total": 0,
+    "completed": 0,
+    "failed": 0,
+    "current_symbol": None,
+    "started_at": None,
+    "completed_at": None,
+    "results": {}
+}
+
+# List of Nifty 50 stocks
+NIFTY_50_SYMBOLS = [
+    "RELIANCE", "TCS", "HDFCBANK", "INFY", "ICICIBANK", "HINDUNILVR", "ITC", "SBIN",
+    "BHARTIARTL", "KOTAKBANK", "LT", "AXISBANK", "ASIANPAINT", "MARUTI", "HCLTECH",
+    "SUNPHARMA", "TITAN", "BAJFINANCE", "WIPRO", "ULTRACEMCO", "NESTLEIND", "NTPC",
+    "POWERGRID", "M&M", "TATAMOTORS", "ONGC", "JSWSTEEL", "TATASTEEL", "ADANIENT",
+    "ADANIPORTS", "COALINDIA", "BAJAJFINSV", "TECHM", "HDFCLIFE", "SBILIFE", "GRASIM",
+    "DIVISLAB", "DRREDDY", "CIPLA", "BRITANNIA", "EICHERMOT", "APOLLOHOSP", "INDUSINDBK",
+    "HEROMOTOCO", "TATACONSUM", "BPCL", "UPL", "HINDALCO", "BAJAJ-AUTO", "LTIM"
+]
+
+
+class BulkAnalysisRequest(BaseModel):
+    deep_think_model: Optional[str] = "opus"
+    quick_think_model: Optional[str] = "sonnet"
+    provider: Optional[str] = "claude_subscription"
+    api_key: Optional[str] = None
+    max_debate_rounds: Optional[int] = 1
+
+
+@app.post("/analyze/all")
+async def run_bulk_analysis(request: Optional[BulkAnalysisRequest] = None, date: Optional[str] = None):
+    """Trigger analysis for all Nifty 50 stocks. Runs in background."""
+    global bulk_analysis_state
+
+    # Check if bulk analysis is already running
+    if bulk_analysis_state.get("status") == "running":
+        return {
+            "message": "Bulk analysis already running",
+            "status": bulk_analysis_state
+        }
+
+    # Use today's date if not provided
+    if not date:
+        date = datetime.now().strftime("%Y-%m-%d")
+
+    # Build analysis config from request
+    analysis_config = {}
+    if request:
+        analysis_config = {
+            "deep_think_model": request.deep_think_model,
+            "quick_think_model": request.quick_think_model,
+            "provider": request.provider,
+            "api_key": request.api_key,
+            "max_debate_rounds": request.max_debate_rounds
+        }
+
+    # Start bulk analysis in background thread
+    def run_bulk():
+        global bulk_analysis_state
+        bulk_analysis_state = {
+            "status": "running",
+            "total": len(NIFTY_50_SYMBOLS),
+            "completed": 0,
+            "failed": 0,
+            "current_symbol": None,
+            "started_at": datetime.now().isoformat(),
+            "completed_at": None,
+            "results": {}
+        }
+
+        for symbol in NIFTY_50_SYMBOLS:
+            try:
+                bulk_analysis_state["current_symbol"] = symbol
+                run_analysis_task(symbol, date, analysis_config)
+
+                # Wait for completion
+                import time
+                while symbol in running_analyses and running_analyses[symbol].get("status") == "running":
+                    time.sleep(2)
+
+                if symbol in running_analyses:
+                    status = running_analyses[symbol].get("status", "unknown")
+                    bulk_analysis_state["results"][symbol] = status
+                    if status == "completed":
+                        bulk_analysis_state["completed"] += 1
+                    else:
+                        bulk_analysis_state["failed"] += 1
+                else:
+                    bulk_analysis_state["results"][symbol] = "unknown"
+                    bulk_analysis_state["failed"] += 1
+
+            except Exception as e:
+                bulk_analysis_state["results"][symbol] = f"error: {str(e)}"
+                bulk_analysis_state["failed"] += 1
+
+        bulk_analysis_state["status"] = "completed"
+        bulk_analysis_state["current_symbol"] = None
+        bulk_analysis_state["completed_at"] = datetime.now().isoformat()
+
+    thread = threading.Thread(target=run_bulk)
+    thread.start()
+
+    return {
+        "message": "Bulk analysis started for all Nifty 50 stocks",
+        "date": date,
+        "total_stocks": len(NIFTY_50_SYMBOLS),
+        "status": "started"
+    }
+
+
+@app.get("/analyze/all/status")
+async def get_bulk_analysis_status():
+    """Get the status of bulk analysis."""
+    return bulk_analysis_state
+
+
+@app.get("/analyze/running")
+async def get_running_analyses():
+    """Get all currently running analyses."""
+    running = {k: v for k, v in running_analyses.items() if v.get("status") == "running"}
+    return {
+        "running": running,
+        "count": len(running)
+    }
+
+
+class SingleAnalysisRequest(BaseModel):
+    deep_think_model: Optional[str] = "opus"
+    quick_think_model: Optional[str] = "sonnet"
+    provider: Optional[str] = "claude_subscription"
+    api_key: Optional[str] = None
+    max_debate_rounds: Optional[int] = 1
+
+
 @app.post("/analyze/{symbol}")
-async def run_analysis(symbol: str, background_tasks: BackgroundTasks, date: Optional[str] = None):
+async def run_analysis(symbol: str, background_tasks: BackgroundTasks, request: Optional[SingleAnalysisRequest] = None, date: Optional[str] = None):
     """Trigger analysis for a stock. Runs in background."""
     symbol = symbol.upper()
 
@@ -384,8 +544,19 @@ async def run_analysis(symbol: str, background_tasks: BackgroundTasks, date: Opt
     if not date:
         date = datetime.now().strftime("%Y-%m-%d")
 
+    # Build analysis config from request
+    analysis_config = {}
+    if request:
+        analysis_config = {
+            "deep_think_model": request.deep_think_model,
+            "quick_think_model": request.quick_think_model,
+            "provider": request.provider,
+            "api_key": request.api_key,
+            "max_debate_rounds": request.max_debate_rounds
+        }
+
     # Start analysis in background thread
-    thread = threading.Thread(target=run_analysis_task, args=(symbol, date))
+    thread = threading.Thread(target=run_analysis_task, args=(symbol, date, analysis_config))
     thread.start()
 
     return {
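Both analysis endpoints above return immediately and do their work in background threads, so a client has to poll `GET /analyze/all/status` for the `bulk_analysis_state` payload. A sketch of how a hypothetical client might interpret that payload — the field names come from the diff, but the helper functions and the polling loop are illustrative:

```python
# Sketch: interpreting the bulk_analysis_state dict returned by
# GET /analyze/all/status. Field names match the server diff above;
# is_finished/summarize are hypothetical client-side helpers.
import json
import time
from urllib.request import urlopen  # plain-stdlib client, for illustration

def is_finished(state: dict) -> bool:
    """True once the background bulk run has left the running state."""
    return state.get("status") in ("completed", "error")

def summarize(state: dict) -> str:
    """One-line progress summary from the bulk_analysis_state fields."""
    return f'{state.get("completed", 0)}/{state.get("total", 0)} done, {state.get("failed", 0)} failed'

# Illustrative polling loop (requires the server from this diff on port 8001):
# while True:
#     state = json.load(urlopen("http://localhost:8001/analyze/all/status"))
#     print(summarize(state))
#     if is_finished(state):
#         break
#     time.sleep(5)
```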
After Width: | Height: | Size: 321 KiB
After Width: | Height: | Size: 81 KiB
After Width: | Height: | Size: 149 KiB
After Width: | Height: | Size: 171 KiB
After Width: | Height: | Size: 148 KiB
After Width: | Height: | Size: 512 KiB
After Width: | Height: | Size: 63 KiB
After Width: | Height: | Size: 319 KiB
After Width: | Height: | Size: 400 KiB
After Width: | Height: | Size: 226 KiB
@@ -1,7 +1,9 @@
 import { Routes, Route } from 'react-router-dom';
 import { ThemeProvider } from './contexts/ThemeContext';
+import { SettingsProvider } from './contexts/SettingsContext';
 import Header from './components/Header';
 import Footer from './components/Footer';
+import SettingsModal from './components/SettingsModal';
 import Dashboard from './pages/Dashboard';
 import History from './pages/History';
 import StockDetail from './pages/StockDetail';
@@ -10,18 +12,21 @@ import About from './pages/About';
 function App() {
   return (
     <ThemeProvider>
-      <div className="min-h-screen flex flex-col bg-gray-50 dark:bg-slate-900 transition-colors">
-        <Header />
-        <main className="flex-1 max-w-7xl mx-auto w-full px-3 sm:px-4 lg:px-6 py-4">
-          <Routes>
-            <Route path="/" element={<Dashboard />} />
-            <Route path="/history" element={<History />} />
-            <Route path="/stock/:symbol" element={<StockDetail />} />
-            <Route path="/about" element={<About />} />
-          </Routes>
-        </main>
-        <Footer />
-      </div>
+      <SettingsProvider>
+        <div className="min-h-screen flex flex-col bg-gray-50 dark:bg-slate-900 transition-colors">
+          <Header />
+          <main className="flex-1 max-w-7xl mx-auto w-full px-3 sm:px-4 lg:px-6 py-4">
+            <Routes>
+              <Route path="/" element={<Dashboard />} />
+              <Route path="/history" element={<History />} />
+              <Route path="/stock/:symbol" element={<StockDetail />} />
+              <Route path="/about" element={<About />} />
+            </Routes>
+          </main>
+          <Footer />
+          <SettingsModal />
+        </div>
+      </SettingsProvider>
     </ThemeProvider>
   );
 }
@@ -1,11 +1,13 @@
 import { Link, useLocation } from 'react-router-dom';
-import { TrendingUp, BarChart3, History, Menu, X, Sparkles } from 'lucide-react';
+import { TrendingUp, BarChart3, History, Menu, X, Sparkles, Settings } from 'lucide-react';
 import { useState } from 'react';
 import ThemeToggle from './ThemeToggle';
+import { useSettings } from '../contexts/SettingsContext';
 
 export default function Header() {
   const location = useLocation();
   const [mobileMenuOpen, setMobileMenuOpen] = useState(false);
+  const { openSettings } = useSettings();
 
   const navItems = [
     { path: '/', label: 'Dashboard', icon: BarChart3 },
@@ -46,8 +48,17 @@ export default function Header() {
           ))}
         </nav>
 
-        {/* Theme Toggle & Mobile Menu */}
+        {/* Settings, Theme Toggle & Mobile Menu */}
         <div className="flex items-center gap-2">
+          {/* Settings Button */}
+          <button
+            onClick={openSettings}
+            className="p-2 rounded-lg hover:bg-gray-100 dark:hover:bg-slate-800 transition-colors text-gray-600 dark:text-gray-300"
+            aria-label="Open settings"
+            title="Settings"
+          >
+            <Settings className="w-4 h-4" />
+          </button>
           <div className="hidden md:block">
             <ThemeToggle />
           </div>
@@ -0,0 +1,297 @@
+import { useState } from 'react';
+import {
+  X, Settings, Cpu, Key, Zap, Brain, Sparkles,
+  Eye, EyeOff, Check, AlertCircle, RefreshCw
+} from 'lucide-react';
+import { useSettings, MODELS, PROVIDERS } from '../contexts/SettingsContext';
+import type { ModelId, ProviderId } from '../contexts/SettingsContext';
+
+export default function SettingsModal() {
+  const { settings, updateSettings, resetSettings, isSettingsOpen, closeSettings } = useSettings();
+  const [showApiKey, setShowApiKey] = useState(false);
+  const [isTesting, setIsTesting] = useState(false);
+  const [testResult, setTestResult] = useState<{ success: boolean; message: string } | null>(null);
+
+  if (!isSettingsOpen) return null;
+
+  const handleProviderChange = (providerId: ProviderId) => {
+    updateSettings({ provider: providerId });
+  };
+
+  const handleModelChange = (type: 'deepThinkModel' | 'quickThinkModel', modelId: ModelId) => {
+    updateSettings({ [type]: modelId });
+  };
+
+  const handleApiKeyChange = (value: string) => {
+    updateSettings({ anthropicApiKey: value });
+    setTestResult(null);
+  };
+
+  const testApiKey = async () => {
+    if (!settings.anthropicApiKey) {
+      setTestResult({ success: false, message: 'Please enter an API key' });
+      return;
+    }
+
+    setIsTesting(true);
+    setTestResult(null);
+
+    try {
+      // Simple validation - just check format
+      if (!settings.anthropicApiKey.startsWith('sk-ant-')) {
+        setTestResult({ success: false, message: 'Invalid API key format. Should start with sk-ant-' });
+      } else {
+        setTestResult({ success: true, message: 'API key format looks valid' });
+      }
+    } catch (error) {
+      setTestResult({ success: false, message: 'Failed to validate API key' });
+    } finally {
+      setIsTesting(false);
+    }
+  };
+
+  const selectedProvider = PROVIDERS[settings.provider];
+
+  return (
+    <div className="fixed inset-0 z-50 overflow-y-auto">
+      {/* Backdrop */}
+      <div
+        className="fixed inset-0 bg-black/50 backdrop-blur-sm transition-opacity"
+        onClick={closeSettings}
+      />
+
+      {/* Modal */}
+      <div className="flex min-h-full items-center justify-center p-4">
+        <div className="relative w-full max-w-lg bg-white dark:bg-slate-900 rounded-2xl shadow-2xl transform transition-all">
+          {/* Header */}
+          <div className="flex items-center justify-between p-4 border-b border-gray-200 dark:border-slate-700">
+            <div className="flex items-center gap-3">
+              <div className="p-2 bg-nifty-100 dark:bg-nifty-900/30 rounded-lg">
+                <Settings className="w-5 h-5 text-nifty-600 dark:text-nifty-400" />
+              </div>
+              <div>
+                <h2 className="text-lg font-semibold text-gray-900 dark:text-gray-100">Settings</h2>
+                <p className="text-xs text-gray-500 dark:text-gray-400">Configure AI models and API settings</p>
+              </div>
+            </div>
+            <button
+              onClick={closeSettings}
+              className="p-2 rounded-lg hover:bg-gray-100 dark:hover:bg-slate-800 transition-colors"
+            >
+              <X className="w-5 h-5 text-gray-500" />
+            </button>
+          </div>
+
+          {/* Content */}
+          <div className="p-4 space-y-6 max-h-[70vh] overflow-y-auto">
+            {/* Provider Selection */}
+            <section>
+              <h3 className="flex items-center gap-2 text-sm font-semibold text-gray-900 dark:text-gray-100 mb-3">
+                <Zap className="w-4 h-4 text-amber-500" />
+                LLM Provider
+              </h3>
+              <div className="grid gap-2">
+                {Object.values(PROVIDERS).map(provider => (
+                  <button
+                    key={provider.id}
+                    onClick={() => handleProviderChange(provider.id as ProviderId)}
+                    className={`
+                      flex items-start gap-3 p-3 rounded-xl border-2 transition-all text-left
+                      ${settings.provider === provider.id
+                        ? 'border-nifty-500 bg-nifty-50 dark:bg-nifty-900/20'
+                        : 'border-gray-200 dark:border-slate-700 hover:border-gray-300 dark:hover:border-slate-600'
+                      }
+                    `}
+                  >
+                    <div className={`
+                      w-5 h-5 rounded-full border-2 flex items-center justify-center mt-0.5
+                      ${settings.provider === provider.id
+                        ? 'border-nifty-500 bg-nifty-500'
+                        : 'border-gray-300 dark:border-slate-600'
+                      }
+                    `}>
+                      {settings.provider === provider.id && (
+                        <Check className="w-3 h-3 text-white" />
+                      )}
+                    </div>
+                    <div>
+                      <div className="font-medium text-gray-900 dark:text-gray-100">
+                        {provider.name}
+                      </div>
+                      <div className="text-xs text-gray-500 dark:text-gray-400">
+                        {provider.description}
+                      </div>
+                    </div>
+                  </button>
+                ))}
+              </div>
+            </section>
+
+            {/* API Key (only shown for API provider) */}
+            {selectedProvider.requiresApiKey && (
+              <section>
+                <h3 className="flex items-center gap-2 text-sm font-semibold text-gray-900 dark:text-gray-100 mb-3">
+                  <Key className="w-4 h-4 text-purple-500" />
+                  API Key
+                </h3>
+                <div className="space-y-2">
+                  <div className="relative">
+                    <input
+                      type={showApiKey ? 'text' : 'password'}
+                      value={settings.anthropicApiKey}
+                      onChange={(e) => handleApiKeyChange(e.target.value)}
+                      placeholder="sk-ant-..."
+                      className="w-full px-4 py-2.5 pr-20 rounded-xl border border-gray-200 dark:border-slate-700 bg-white dark:bg-slate-800 text-gray-900 dark:text-gray-100 placeholder-gray-400 dark:placeholder-gray-500 focus:outline-none focus:ring-2 focus:ring-nifty-500 font-mono text-sm"
+                    />
+                    <button
+                      type="button"
+                      onClick={() => setShowApiKey(!showApiKey)}
+                      className="absolute right-2 top-1/2 -translate-y-1/2 p-2 text-gray-400 hover:text-gray-600 dark:hover:text-gray-300"
+                    >
+                      {showApiKey ? <EyeOff className="w-4 h-4" /> : <Eye className="w-4 h-4" />}
+                    </button>
+                  </div>
+                  <div className="flex items-center gap-2">
+                    <button
+                      onClick={testApiKey}
+                      disabled={isTesting || !settings.anthropicApiKey}
+                      className="flex items-center gap-2 px-3 py-1.5 text-xs font-medium bg-gray-100 dark:bg-slate-800 text-gray-700 dark:text-gray-300 rounded-lg hover:bg-gray-200 dark:hover:bg-slate-700 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
+                    >
+                      {isTesting ? (
+                        <RefreshCw className="w-3 h-3 animate-spin" />
+                      ) : (
+                        <Check className="w-3 h-3" />
+                      )}
+                      Validate Key
+                    </button>
+                    {testResult && (
+                      <span className={`flex items-center gap-1 text-xs ${testResult.success ? 'text-green-600' : 'text-red-600'}`}>
+                        {testResult.success ? <Check className="w-3 h-3" /> : <AlertCircle className="w-3 h-3" />}
+                        {testResult.message}
+                      </span>
+                    )}
+                  </div>
+                  <p className="text-xs text-gray-500 dark:text-gray-400">
+                    Your API key is stored locally in your browser and never sent to our servers.
+                  </p>
+                </div>
+              </section>
+            )}
+
+            {/* Model Selection */}
+            <section>
+              <h3 className="flex items-center gap-2 text-sm font-semibold text-gray-900 dark:text-gray-100 mb-3">
+                <Cpu className="w-4 h-4 text-blue-500" />
+                Model Selection
+              </h3>
+
+              {/* Deep Think Model */}
+              <div className="mb-4">
+                <label className="flex items-center gap-2 text-xs font-medium text-gray-600 dark:text-gray-400 mb-2">
+                  <Brain className="w-3 h-3" />
+                  Deep Think Model (Complex Analysis)
+                </label>
+                <div className="grid grid-cols-3 gap-2">
+                  {Object.values(MODELS).map(model => (
+                    <button
+                      key={model.id}
+                      onClick={() => handleModelChange('deepThinkModel', model.id as ModelId)}
+                      className={`
+                        p-2 rounded-lg border-2 transition-all text-center
+                        ${settings.deepThinkModel === model.id
+                          ? 'border-blue-500 bg-blue-50 dark:bg-blue-900/20'
+                          : 'border-gray-200 dark:border-slate-700 hover:border-gray-300 dark:hover:border-slate-600'
+                        }
+                      `}
+                    >
+                      <div className={`text-sm font-medium ${
+                        settings.deepThinkModel === model.id
+                          ? 'text-blue-700 dark:text-blue-300'
+                          : 'text-gray-700 dark:text-gray-300'
+                      }`}>
+                        {model.name.replace('Claude ', '')}
+                      </div>
+                    </button>
+                  ))}
+                </div>
+              </div>
+
+              {/* Quick Think Model */}
+              <div>
+                <label className="flex items-center gap-2 text-xs font-medium text-gray-600 dark:text-gray-400 mb-2">
+                  <Sparkles className="w-3 h-3" />
+                  Quick Think Model (Fast Operations)
+                </label>
+                <div className="grid grid-cols-3 gap-2">
+                  {Object.values(MODELS).map(model => (
+                    <button
+                      key={model.id}
+                      onClick={() => handleModelChange('quickThinkModel', model.id as ModelId)}
+                      className={`
+                        p-2 rounded-lg border-2 transition-all text-center
+                        ${settings.quickThinkModel === model.id
+                          ? 'border-green-500 bg-green-50 dark:bg-green-900/20'
+                          : 'border-gray-200 dark:border-slate-700 hover:border-gray-300 dark:hover:border-slate-600'
+                        }
+                      `}
+                    >
+                      <div className={`text-sm font-medium ${
+                        settings.quickThinkModel === model.id
+                          ? 'text-green-700 dark:text-green-300'
+                          : 'text-gray-700 dark:text-gray-300'
+                      }`}>
+                        {model.name.replace('Claude ', '')}
+                      </div>
+                    </button>
+                  ))}
+                </div>
+              </div>
+            </section>
+
+            {/* Analysis Settings */}
+            <section>
|
||||
<h3 className="flex items-center gap-2 text-sm font-semibold text-gray-900 dark:text-gray-100 mb-3">
|
||||
<Settings className="w-4 h-4 text-gray-500" />
|
||||
Analysis Settings
|
||||
</h3>
|
||||
<div>
|
||||
<label className="flex items-center justify-between text-xs font-medium text-gray-600 dark:text-gray-400 mb-2">
|
||||
<span>Max Debate Rounds</span>
|
||||
<span className="text-nifty-600 dark:text-nifty-400">{settings.maxDebateRounds}</span>
|
||||
</label>
|
||||
<input
|
||||
type="range"
|
||||
min="1"
|
||||
max="5"
|
||||
value={settings.maxDebateRounds}
|
||||
onChange={(e) => updateSettings({ maxDebateRounds: parseInt(e.target.value) })}
|
||||
className="w-full h-2 bg-gray-200 dark:bg-slate-700 rounded-lg appearance-none cursor-pointer accent-nifty-600"
|
||||
/>
|
||||
<div className="flex justify-between text-xs text-gray-400 mt-1">
|
||||
<span>1 (Faster)</span>
|
||||
<span>5 (More thorough)</span>
|
||||
</div>
|
||||
</div>
|
||||
</section>
|
||||
</div>
|
||||
|
||||
{/* Footer */}
|
||||
<div className="flex items-center justify-between p-4 border-t border-gray-200 dark:border-slate-700">
|
||||
<button
|
||||
onClick={resetSettings}
|
||||
className="px-4 py-2 text-sm font-medium text-gray-600 dark:text-gray-400 hover:text-gray-900 dark:hover:text-gray-100 transition-colors"
|
||||
>
|
||||
Reset to Defaults
|
||||
</button>
|
||||
<button
|
||||
onClick={closeSettings}
|
||||
className="px-4 py-2 text-sm font-medium bg-nifty-600 text-white rounded-lg hover:bg-nifty-700 transition-colors"
|
||||
>
|
||||
Done
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
|
@@ -1,7 +1,7 @@
import { useState } from 'react';
import {
  Database, ChevronDown, ChevronUp, CheckCircle,
-  XCircle, Clock, ExternalLink, Server
+  XCircle, Clock, Server
} from 'lucide-react';
import type { DataSourceLog } from '../../types/pipeline';

@@ -1,7 +1,7 @@
import { useState } from 'react';
import {
  TrendingUp, TrendingDown, Scale, ChevronDown, ChevronUp,
-  MessageSquare, Award, Clock
+  MessageSquare, Award
} from 'lucide-react';
import type { DebateHistory } from '../../types/pipeline';

@@ -85,7 +85,7 @@ export function PipelineOverview({ steps, onStepClick, compact = false }: Pipeli
  if (compact) {
    return (
      <div className="flex items-center gap-1">
-        {displaySteps.map((step, index) => {
+        {displaySteps.map((step) => {
          const styles = STATUS_STYLES[step.status];
          return (
            <div
@@ -117,7 +117,7 @@ export function PipelineOverview({ steps, onStepClick, compact = false }: Pipeli

      {/* Pipeline steps */}
      <div className="flex flex-wrap gap-2">
-        {displaySteps.map((step, index) => {
+        {displaySteps.map((step) => {
          const StepIcon = STEP_ICONS[step.step_name] || Database;
          const styles = STATUS_STYLES[step.status];
          const StatusIcon = styles.icon;

@@ -0,0 +1,127 @@
import { createContext, useContext, useState, useEffect } from 'react';
import type { ReactNode } from 'react';

// Model options
export const MODELS = {
  opus: { id: 'opus', name: 'Claude Opus', description: 'Most capable, best for complex reasoning' },
  sonnet: { id: 'sonnet', name: 'Claude Sonnet', description: 'Balanced performance and speed' },
  haiku: { id: 'haiku', name: 'Claude Haiku', description: 'Fastest, good for simple tasks' },
} as const;

// Provider options
export const PROVIDERS = {
  claude_subscription: {
    id: 'claude_subscription',
    name: 'Claude Subscription',
    description: 'Use your Claude Max subscription (no API key needed)',
    requiresApiKey: false
  },
  anthropic_api: {
    id: 'anthropic_api',
    name: 'Anthropic API',
    description: 'Use Anthropic API directly with your API key',
    requiresApiKey: true
  },
} as const;

export type ModelId = keyof typeof MODELS;
export type ProviderId = keyof typeof PROVIDERS;

interface Settings {
  // Model settings
  deepThinkModel: ModelId;
  quickThinkModel: ModelId;

  // Provider settings
  provider: ProviderId;

  // API keys (only used when provider is anthropic_api)
  anthropicApiKey: string;

  // Analysis settings
  maxDebateRounds: number;
}

interface SettingsContextType {
  settings: Settings;
  updateSettings: (newSettings: Partial<Settings>) => void;
  resetSettings: () => void;
  isSettingsOpen: boolean;
  openSettings: () => void;
  closeSettings: () => void;
}

const DEFAULT_SETTINGS: Settings = {
  deepThinkModel: 'opus',
  quickThinkModel: 'sonnet',
  provider: 'claude_subscription',
  anthropicApiKey: '',
  maxDebateRounds: 1,
};

const STORAGE_KEY = 'nifty50ai_settings';

const SettingsContext = createContext<SettingsContextType | undefined>(undefined);

export function SettingsProvider({ children }: { children: ReactNode }) {
  const [settings, setSettings] = useState<Settings>(() => {
    // Load from localStorage on initial render
    if (typeof window !== 'undefined') {
      const stored = localStorage.getItem(STORAGE_KEY);
      if (stored) {
        try {
          const parsed = JSON.parse(stored);
          return { ...DEFAULT_SETTINGS, ...parsed };
        } catch (e) {
          console.error('Failed to parse settings from localStorage:', e);
        }
      }
    }
    return DEFAULT_SETTINGS;
  });

  const [isSettingsOpen, setIsSettingsOpen] = useState(false);

  // Persist settings to localStorage whenever they change
  useEffect(() => {
    if (typeof window !== 'undefined') {
      // Note: the API key is persisted here in plain text; exclude or encrypt it in production
      localStorage.setItem(STORAGE_KEY, JSON.stringify(settings));
    }
  }, [settings]);

  const updateSettings = (newSettings: Partial<Settings>) => {
    setSettings(prev => ({ ...prev, ...newSettings }));
  };

  const resetSettings = () => {
    setSettings(DEFAULT_SETTINGS);
    if (typeof window !== 'undefined') {
      localStorage.removeItem(STORAGE_KEY);
    }
  };

  const openSettings = () => setIsSettingsOpen(true);
  const closeSettings = () => setIsSettingsOpen(false);

  return (
    <SettingsContext.Provider value={{
      settings,
      updateSettings,
      resetSettings,
      isSettingsOpen,
      openSettings,
      closeSettings,
    }}>
      {children}
    </SettingsContext.Provider>
  );
}

export function useSettings() {
  const context = useContext(SettingsContext);
  if (context === undefined) {
    throw new Error('useSettings must be used within a SettingsProvider');
  }
  return context;
}
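
On load, the provider spreads the stored payload over `DEFAULT_SETTINGS`, so settings saved by an older build automatically pick up defaults for fields added later. The TypeScript spread `{ ...DEFAULT_SETTINGS, ...parsed }` behaves like a dict merge; a minimal Python sketch of the same semantics (the values here are illustrative, not read from any real store):

```python
DEFAULT_SETTINGS = {
    "deepThinkModel": "opus",
    "quickThinkModel": "sonnet",
    "provider": "claude_subscription",
    "maxDebateRounds": 1,
}

# A payload saved before maxDebateRounds existed in the schema
stored = {"deepThinkModel": "haiku"}

# Later mapping wins on key conflicts; missing keys fall back to defaults
merged = {**DEFAULT_SETTINGS, **stored}
```

This is why `resetSettings` can simply drop the stored key: the next load falls back to `DEFAULT_SETTINGS` unchanged.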

@@ -1,11 +1,13 @@
-import { useState, useMemo } from 'react';
+import { useState, useMemo, useEffect } from 'react';
import { Link } from 'react-router-dom';
-import { Calendar, RefreshCw, Filter, ChevronRight, TrendingUp, TrendingDown, Minus, History, Search, X } from 'lucide-react';
+import { Calendar, RefreshCw, Filter, ChevronRight, TrendingUp, TrendingDown, Minus, History, Search, X, Play, Loader2 } from 'lucide-react';
import TopPicks, { StocksToAvoid } from '../components/TopPicks';
import { DecisionBadge } from '../components/StockCard';
import HowItWorks from '../components/HowItWorks';
import BackgroundSparkline from '../components/BackgroundSparkline';
import { getLatestRecommendation, getBacktestResult } from '../data/recommendations';
import { api } from '../services/api';
import { useSettings } from '../contexts/SettingsContext';
import type { Decision, StockAnalysis } from '../types';

type FilterType = 'ALL' | Decision;

@@ -14,6 +16,84 @@ export default function Dashboard() {
  const recommendation = getLatestRecommendation();
  const [filter, setFilter] = useState<FilterType>('ALL');
  const [searchQuery, setSearchQuery] = useState('');
  const { settings } = useSettings();

  // Bulk analysis state
  const [isAnalyzing, setIsAnalyzing] = useState(false);
  const [analysisProgress, setAnalysisProgress] = useState<{
    status: string;
    total: number;
    completed: number;
    failed: number;
    current_symbol: string | null;
  } | null>(null);

  // Check for running analysis on mount
  useEffect(() => {
    const checkAnalysisStatus = async () => {
      try {
        const status = await api.getBulkAnalysisStatus();
        if (status.status === 'running') {
          setIsAnalyzing(true);
          setAnalysisProgress(status);
        }
      } catch (e) {
        console.error('Failed to check analysis status:', e);
      }
    };
    checkAnalysisStatus();
  }, []);

  // Poll for analysis progress
  useEffect(() => {
    if (!isAnalyzing) return;

    const pollInterval = setInterval(async () => {
      try {
        const status = await api.getBulkAnalysisStatus();
        setAnalysisProgress(status);

        if (status.status === 'completed' || status.status === 'idle') {
          setIsAnalyzing(false);
          clearInterval(pollInterval);
          // Refresh the page to show updated data
          window.location.reload();
        }
      } catch (e) {
        console.error('Failed to poll analysis status:', e);
      }
    }, 3000);

    return () => clearInterval(pollInterval);
  }, [isAnalyzing]);

  const handleAnalyzeAll = async () => {
    if (isAnalyzing) return;

    setIsAnalyzing(true);
    setAnalysisProgress({
      status: 'starting',
      total: 50,
      completed: 0,
      failed: 0,
      current_symbol: null
    });

    try {
      // Pass settings from context to the API
      await api.runBulkAnalysis(undefined, {
        deep_think_model: settings.deepThinkModel,
        quick_think_model: settings.quickThinkModel,
        provider: settings.provider,
        api_key: settings.provider === 'anthropic_api' ? settings.anthropicApiKey : undefined,
        max_debate_rounds: settings.maxDebateRounds
      });
    } catch (e) {
      console.error('Failed to start bulk analysis:', e);
      setIsAnalyzing(false);
      setAnalysisProgress(null);
    }
  };

  if (!recommendation) {
    return (

@@ -64,8 +144,29 @@ export default function Dashboard() {
          </div>
        </div>

-        {/* Inline Stats */}
+        {/* Analyze All Button + Inline Stats */}
        <div className="flex items-center gap-3" role="group" aria-label="Summary statistics">
          {/* Analyze All Button */}
          <button
            onClick={handleAnalyzeAll}
            disabled={isAnalyzing}
            className={`
              flex items-center gap-2 px-4 py-2 rounded-lg text-sm font-semibold transition-all
              ${isAnalyzing
                ? 'bg-amber-100 text-amber-700 dark:bg-amber-900/30 dark:text-amber-300 cursor-not-allowed'
                : 'bg-nifty-600 text-white hover:bg-nifty-700 shadow-sm hover:shadow-md'
              }
            `}
            title={isAnalyzing ? 'Analysis in progress...' : 'Run AI analysis for all 50 stocks'}
          >
            {isAnalyzing ? (
              <Loader2 className="w-4 h-4 animate-spin" />
            ) : (
              <Play className="w-4 h-4" />
            )}
            {isAnalyzing ? 'Analyzing...' : 'Analyze All'}
          </button>

          <div className="flex items-center gap-1.5 px-3 py-1.5 bg-green-50 dark:bg-green-900/30 rounded-lg cursor-pointer hover:bg-green-100 dark:hover:bg-green-900/50 transition-colors" onClick={() => setFilter('BUY')} title="Click to filter Buy stocks">
            <TrendingUp className="w-4 h-4 text-green-600 dark:text-green-400" aria-hidden="true" />
            <span className="font-bold text-green-700 dark:text-green-400">{buy}</span>

@@ -92,6 +193,34 @@ export default function Dashboard() {
            <div className="bg-red-500 transition-all" style={{ width: `${sellPct}%` }} />
          </div>
        </div>

        {/* Analysis Progress Banner */}
        {isAnalyzing && analysisProgress && (
          <div className="mt-3 p-3 bg-blue-50 dark:bg-blue-900/30 rounded-lg border border-blue-200 dark:border-blue-800">
            <div className="flex items-center justify-between mb-2">
              <div className="flex items-center gap-2">
                <Loader2 className="w-4 h-4 animate-spin text-blue-600 dark:text-blue-400" />
                <span className="text-sm font-medium text-blue-700 dark:text-blue-300">
                  Analyzing {analysisProgress.current_symbol || 'stocks'}...
                </span>
              </div>
              <span className="text-xs text-blue-600 dark:text-blue-400">
                {analysisProgress.completed + analysisProgress.failed} / {analysisProgress.total} stocks
              </span>
            </div>
            <div className="w-full bg-blue-200 dark:bg-blue-800 rounded-full h-2">
              <div
                className="bg-blue-600 dark:bg-blue-500 h-2 rounded-full transition-all duration-300"
                style={{ width: `${((analysisProgress.completed + analysisProgress.failed) / analysisProgress.total) * 100}%` }}
              />
            </div>
            {analysisProgress.failed > 0 && (
              <p className="text-xs text-amber-600 dark:text-amber-400 mt-1">
                {analysisProgress.failed} failed
              </p>
            )}
          </div>
        )}
      </section>

      {/* How It Works Section */}

@@ -18,6 +18,7 @@ import {
  DataSourcesPanel
} from '../components/pipeline';
import { api } from '../services/api';
import { useSettings } from '../contexts/SettingsContext';
import type { FullPipelineData, AgentType } from '../types/pipeline';

type TabType = 'overview' | 'pipeline' | 'debates' | 'data';

@@ -30,6 +31,7 @@ export default function StockDetail() {
  const [isRefreshing, setIsRefreshing] = useState(false);
  const [lastRefresh, setLastRefresh] = useState<string | null>(null);
  const [refreshMessage, setRefreshMessage] = useState<string | null>(null);
  const { settings } = useSettings();

  // Analysis state
  const [isAnalysisRunning, setIsAnalysisRunning] = useState(false);

@@ -116,8 +118,14 @@ export default function StockDetail() {
    setAnalysisProgress('Starting analysis...');

    try {
-      // Trigger analysis
-      await api.runAnalysis(symbol, latestRecommendation.date);
+      // Trigger analysis with settings from context
+      await api.runAnalysis(symbol, latestRecommendation.date, {
+        deep_think_model: settings.deepThinkModel,
+        quick_think_model: settings.quickThinkModel,
+        provider: settings.provider,
+        api_key: settings.provider === 'anthropic_api' ? settings.anthropicApiKey : undefined,
+        max_debate_rounds: settings.maxDebateRounds
+      });
      setAnalysisStatus('running');

      // Poll for status

@@ -70,6 +70,17 @@ export interface StockHistory {
  risk?: string;
}

/**
 * Analysis configuration options
 */
export interface AnalysisConfig {
  deep_think_model?: string;
  quick_think_model?: string;
  provider?: string;
  api_key?: string;
  max_debate_rounds?: number;
}

class ApiService {
  private baseUrl: string;

|
|||
/**
|
||||
* Start analysis for a stock
|
||||
*/
|
||||
async runAnalysis(symbol: string, date?: string): Promise<{
|
||||
async runAnalysis(symbol: string, date?: string, config?: AnalysisConfig): Promise<{
|
||||
message: string;
|
||||
symbol: string;
|
||||
date: string;
|
||||
|
|
@@ -251,7 +262,7 @@ class ApiService {
    const url = date ? `/analyze/${symbol}?date=${date}` : `/analyze/${symbol}`;
    return this.fetch(url, {
      method: 'POST',
-      body: JSON.stringify({}),
+      body: JSON.stringify(config || {}),
      noCache: true,
      headers: {
        'Cache-Control': 'no-cache, no-store, must-revalidate',

|
|||
}> {
|
||||
return this.fetch('/analyze/running', { noCache: true });
|
||||
}
|
||||
|
||||
/**
|
||||
* Start bulk analysis for all Nifty 50 stocks
|
||||
*/
|
||||
async runBulkAnalysis(date?: string, config?: {
|
||||
deep_think_model?: string;
|
||||
quick_think_model?: string;
|
||||
provider?: string;
|
||||
api_key?: string;
|
||||
max_debate_rounds?: number;
|
||||
}): Promise<{
|
||||
message: string;
|
||||
date: string;
|
||||
total_stocks: number;
|
||||
status: string;
|
||||
}> {
|
||||
const url = date ? `/analyze/all?date=${date}` : '/analyze/all';
|
||||
return this.fetch(url, {
|
||||
method: 'POST',
|
||||
body: JSON.stringify(config || {}),
|
||||
noCache: true
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Get bulk analysis status
|
||||
*/
|
||||
async getBulkAnalysisStatus(): Promise<{
|
||||
status: string;
|
||||
total: number;
|
||||
completed: number;
|
||||
failed: number;
|
||||
current_symbol: string | null;
|
||||
started_at: string | null;
|
||||
completed_at: string | null;
|
||||
results: Record<string, string>;
|
||||
}> {
|
||||
return this.fetch('/analyze/all/status', { noCache: true });
|
||||
}
|
||||
}
|
||||
|
||||
export const api = new ApiService();
|
||||
|
|
|
|||
|
|
@@ -76,7 +76,7 @@ export interface PipelineStep {
export interface DataSourceLog {
  source_type: string;
  source_name: string;
-  data_fetched?: Record<string, unknown>;
+  data_fetched?: Record<string, unknown> | string;
  fetch_timestamp?: string;
  success: boolean;
  error_message?: string;

@@ -17,7 +17,8 @@ class FinancialSituationMemory:
        # Use ChromaDB's default embedding function (uses all-MiniLM-L6-v2 internally)
        self.embedding_fn = embedding_functions.DefaultEmbeddingFunction()
        self.chroma_client = chromadb.Client(Settings(allow_reset=True))
-        self.situation_collection = self.chroma_client.create_collection(
+        # Use get_or_create to avoid errors when collection already exists
+        self.situation_collection = self.chroma_client.get_or_create_collection(
            name=name,
            embedding_function=self.embedding_fn
        )

@@ -8,7 +8,9 @@ with Max subscription authentication instead of API keys.
import os
import subprocess
import json
-from typing import Any, Dict, List, Optional, Iterator
+import re
+import copy
+from typing import Any, Dict, List, Optional, Iterator, Sequence, Union

from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import (

@@ -16,9 +18,12 @@ from langchain_core.messages import (
    BaseMessage,
    HumanMessage,
    SystemMessage,
    ToolMessage,
)
from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.tools import BaseTool
from langchain_core.runnables import Runnable


class ClaudeMaxLLM(BaseChatModel):

@@ -33,6 +38,10 @@ class ClaudeMaxLLM(BaseChatModel):
    max_tokens: int = 4096
    temperature: float = 0.7
    claude_cli_path: str = "claude"
    tools: List[Any] = []  # Bound tools

    class Config:
        arbitrary_types_allowed = True

    @property
    def _llm_type(self) -> str:

@@ -46,19 +55,94 @@ class ClaudeMaxLLM(BaseChatModel):
            "temperature": self.temperature,
        }

    def bind_tools(
        self,
        tools: Sequence[Union[Dict[str, Any], BaseTool, Any]],
        **kwargs: Any,
    ) -> "ClaudeMaxLLM":
        """Bind tools to the model for function calling.

        Args:
            tools: A list of tools to bind to the model.
            **kwargs: Additional arguments (ignored for compatibility).

        Returns:
            A new ClaudeMaxLLM instance with tools bound.
        """
        # Create a copy with tools bound
        new_instance = ClaudeMaxLLM(
            model=self.model,
            max_tokens=self.max_tokens,
            temperature=self.temperature,
            claude_cli_path=self.claude_cli_path,
            tools=list(tools),
        )
        return new_instance

    def _format_tools_for_prompt(self) -> str:
        """Format bound tools as a string for the prompt."""
        if not self.tools:
            return ""

        tool_descriptions = []
        for tool in self.tools:
            if hasattr(tool, 'name') and hasattr(tool, 'description'):
                # LangChain BaseTool
                name = tool.name
                desc = tool.description
                args = ""
                if hasattr(tool, 'args_schema') and tool.args_schema:
                    schema = tool.args_schema.schema() if hasattr(tool.args_schema, 'schema') else {}
                    if 'properties' in schema:
                        args = ", ".join(f"{k}: {v.get('type', 'any')}" for k, v in schema['properties'].items())
                tool_descriptions.append(f"- {name}({args}): {desc}")
            elif isinstance(tool, dict):
                # Dict format
                name = tool.get('name', 'unknown')
                desc = tool.get('description', '')
                tool_descriptions.append(f"- {name}: {desc}")
            else:
                # Try to get function info
                name = getattr(tool, '__name__', str(tool))
                desc = getattr(tool, '__doc__', '') or ''
                tool_descriptions.append(f"- {name}: {desc[:100]}")

        return "\n\nAvailable tools:\n" + "\n".join(tool_descriptions) + "\n\nTo use a tool, respond with: TOOL_CALL: tool_name(arguments)\n"
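
Because the Claude CLI has no native function-calling API, bound tools are advertised in-prompt and the model is asked to answer with a `TOOL_CALL:` line. A self-contained sketch of this formatting, covering only the dict-shaped case above (the tool name and description here are illustrative, not taken from the repo):

```python
def format_tools_for_prompt(tools):
    # Mirrors the dict branch of _format_tools_for_prompt: one bullet per tool,
    # plus the TOOL_CALL calling convention appended at the end.
    if not tools:
        return ""
    lines = [f"- {t.get('name', 'unknown')}: {t.get('description', '')}" for t in tools]
    return (
        "\n\nAvailable tools:\n" + "\n".join(lines)
        + "\n\nTo use a tool, respond with: TOOL_CALL: tool_name(arguments)\n"
    )

prompt = format_tools_for_prompt([
    {"name": "get_fundamentals", "description": "Fetch company fundamentals"},
])
```

The downstream prompt builder prepends this string, so the model sees the tool list before any conversation turns.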

    def _format_messages_for_prompt(self, messages: List[BaseMessage]) -> str:
        """Convert LangChain messages to a single prompt string."""
        formatted_parts = []

        # Add tools description if tools are bound
        tools_prompt = self._format_tools_for_prompt()
        if tools_prompt:
            formatted_parts.append(tools_prompt)

        for msg in messages:
-            if isinstance(msg, SystemMessage):
+            # Handle dict messages (LangChain sometimes passes these)
+            if isinstance(msg, dict):
+                role = msg.get("role", msg.get("type", "human"))
+                content = msg.get("content", str(msg))
+                if role in ("system",):
+                    formatted_parts.append(f"<system>\n{content}\n</system>\n")
+                elif role in ("human", "user"):
+                    formatted_parts.append(f"Human: {content}\n")
+                elif role in ("ai", "assistant"):
+                    formatted_parts.append(f"Assistant: {content}\n")
+                else:
+                    formatted_parts.append(f"{content}\n")
+            elif isinstance(msg, SystemMessage):
                formatted_parts.append(f"<system>\n{msg.content}\n</system>\n")
            elif isinstance(msg, HumanMessage):
                formatted_parts.append(f"Human: {msg.content}\n")
            elif isinstance(msg, AIMessage):
                formatted_parts.append(f"Assistant: {msg.content}\n")
-            else:
+            elif isinstance(msg, ToolMessage):
+                formatted_parts.append(f"Tool Result ({msg.name}): {msg.content}\n")
+            elif hasattr(msg, 'content'):
                formatted_parts.append(f"{msg.content}\n")
+            else:
+                formatted_parts.append(f"{str(msg)}\n")

        return "\n".join(formatted_parts)

@@ -68,12 +152,12 @@ class ClaudeMaxLLM(BaseChatModel):
        env = os.environ.copy()
        env.pop("ANTHROPIC_API_KEY", None)

-        # Build the command
+        # Build the command - pass the prompt explicitly via the -p flag
        cmd = [
            self.claude_cli_path,
            "--print",  # Non-interactive mode
            "--model", self.model,
-            prompt
+            "-p", prompt  # Use -p flag for prompt
        ]

        try:

@@ -86,7 +170,9 @@ class ClaudeMaxLLM(BaseChatModel):
        )

        if result.returncode != 0:
-            raise RuntimeError(f"Claude CLI error: {result.stderr}")
+            # Include both stdout and stderr for better debugging
+            error_info = result.stderr or result.stdout or "No output"
+            raise RuntimeError(f"Claude CLI error (code {result.returncode}): {error_info}")

        return result.stdout.strip()
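
The invoke-and-report pattern above can be exercised against any subprocess; a sketch with the Python interpreter standing in for the `claude` binary (`run_cli` is a hypothetical helper name, not part of the wrapper):

```python
import subprocess
import sys

def run_cli(cmd):
    # Same error-reporting fallback as the wrapper: prefer stderr, fall back
    # to stdout, then a placeholder, and surface the exit code in the message.
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        error_info = result.stderr or result.stdout or "No output"
        raise RuntimeError(f"CLI error (code {result.returncode}): {error_info}")
    return result.stdout.strip()

ok = run_cli([sys.executable, "-c", "print('hello')"])
```

The `or`-chain matters in practice: some CLIs print their error to stdout with an empty stderr, which the original single-field message would have hidden.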

@@ -120,7 +206,14 @@ class ClaudeMaxLLM(BaseChatModel):

        return ChatResult(generations=[generation])

-    def invoke(self, input: Any, **kwargs) -> AIMessage:
+    def invoke(
+        self,
+        input: Any,
+        config: Optional[Dict[str, Any]] = None,
+        *,
+        stop: Optional[List[str]] = None,
+        **kwargs: Any
+    ) -> AIMessage:
        """Invoke the model with the given input."""
        if isinstance(input, str):
            messages = [HumanMessage(content=input)]

@@ -129,11 +222,11 @@ class ClaudeMaxLLM(BaseChatModel):
        else:
            messages = [HumanMessage(content=str(input))]

-        result = self._generate(messages, **kwargs)
+        result = self._generate(messages, stop=stop, **kwargs)
        return result.generations[0].message


-def get_claude_max_llm(model: str = "claude-sonnet-4-5-20250514", **kwargs) -> ClaudeMaxLLM:
+def get_claude_max_llm(model: str = "sonnet", **kwargs) -> ClaudeMaxLLM:
    """
    Factory function to create a ClaudeMaxLLM instance.

@@ -151,7 +244,7 @@ def test_claude_max():
    """Test the Claude Max LLM wrapper."""
    print("Testing Claude Max LLM wrapper...")

-    llm = ClaudeMaxLLM(model="claude-sonnet-4-5-20250514")
+    llm = ClaudeMaxLLM(model="sonnet")

    # Test with a simple prompt
    response = llm.invoke("Say 'Hello, I am using Claude Max subscription!' in exactly those words.")

@@ -1,13 +1,78 @@
from datetime import datetime, timedelta
from .alpha_vantage_common import _make_api_request
import json


def _filter_reports_by_date(data_str: str, curr_date: str, report_keys: list = None) -> str:
    """
    Filter Alpha Vantage fundamentals data to only include reports available as of curr_date.
    This ensures point-in-time accuracy for backtesting.

    Financial reports are typically published ~45 days after the fiscal date ending.
    We filter to only include reports that would have been published by curr_date.

    Args:
        data_str: JSON string from Alpha Vantage API
        curr_date: The backtest date in yyyy-mm-dd format
        report_keys: List of keys containing report arrays (e.g., ['quarterlyReports', 'annualReports'])

    Returns:
        Filtered JSON string with only point-in-time available reports
    """
    if curr_date is None:
        return data_str

    if report_keys is None:
        report_keys = ['quarterlyReports', 'annualReports']

    try:
        data = json.loads(data_str)
        curr_date_dt = datetime.strptime(curr_date, "%Y-%m-%d")
        # Financial reports typically published ~45 days after fiscal date ending
        publication_delay_days = 45

        for key in report_keys:
            if key in data and isinstance(data[key], list):
                filtered_reports = []
                for report in data[key]:
                    fiscal_date = report.get('fiscalDateEnding')
                    if fiscal_date:
                        try:
                            fiscal_date_dt = datetime.strptime(fiscal_date, "%Y-%m-%d")
                            # Estimate when this report would have been published
                            estimated_publish_date = fiscal_date_dt + timedelta(days=publication_delay_days)
                            if estimated_publish_date <= curr_date_dt:
                                filtered_reports.append(report)
                        except ValueError:
                            # If date parsing fails, keep the report
                            filtered_reports.append(report)
                    else:
                        # If no fiscal date, keep the report
                        filtered_reports.append(report)
                data[key] = filtered_reports

        # Add point-in-time metadata
        data['_point_in_time_date'] = curr_date
        data['_filtered_for_backtesting'] = True

        return json.dumps(data, indent=2)

    except Exception as e:
        # If parsing fails, return original data with warning
        print(f"Warning: Could not filter Alpha Vantage data by date: {e}")
        return data_str
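
The point-in-time rule above (keep a report only if fiscal end plus the assumed 45-day publication lag is on or before the backtest date) can be condensed into a self-contained sketch. The helper name `filter_reports` and the fiscal dates are fabricated for illustration; only the filtering logic mirrors the function:

```python
import json
from datetime import datetime, timedelta

PUBLICATION_DELAY_DAYS = 45  # assumed lag between fiscal period end and publication

def filter_reports(data_str, curr_date, keys=("quarterlyReports",)):
    # Keep only reports whose estimated publication date is <= curr_date
    data = json.loads(data_str)
    cutoff = datetime.strptime(curr_date, "%Y-%m-%d")
    for key in keys:
        data[key] = [
            r for r in data.get(key, [])
            if datetime.strptime(r["fiscalDateEnding"], "%Y-%m-%d")
            + timedelta(days=PUBLICATION_DELAY_DAYS) <= cutoff
        ]
    data["_point_in_time_date"] = curr_date
    return json.dumps(data)

raw = json.dumps({"quarterlyReports": [
    {"fiscalDateEnding": "2024-03-31"},  # est. publication 2024-05-15, visible on 2024-07-01
    {"fiscalDateEnding": "2024-06-30"},  # est. publication 2024-08-14, not yet visible
]})
filtered = json.loads(filter_reports(raw, "2024-07-01"))
```

For a backtest dated 2024-07-01, only the March quarter survives; the June quarter would still have been unpublished.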
|
||||
|
||||
|
||||
def get_fundamentals(ticker: str, curr_date: str = None) -> str:
|
||||
"""
|
||||
Retrieve comprehensive fundamental data for a given ticker symbol using Alpha Vantage.
|
||||
|
||||
Note: OVERVIEW endpoint returns current snapshot data only. For backtesting,
|
||||
this may not reflect the exact fundamentals as of the historical date.
|
||||
|
||||
Args:
|
||||
ticker (str): Ticker symbol of the company
|
||||
curr_date (str): Current date you are trading at, yyyy-mm-dd (not used for Alpha Vantage)
|
||||
curr_date (str): Current date you are trading at, yyyy-mm-dd (used for documentation)
|
||||
|
||||
Returns:
|
||||
str: Company overview data including financial ratios and key metrics
|
||||
|
|
@@ -16,62 +81,91 @@ def get_fundamentals(ticker: str, curr_date: str = None) -> str:
         "symbol": ticker,
     }
 
-    return _make_api_request("OVERVIEW", params)
+    result = _make_api_request("OVERVIEW", params)
+
+    # Add warning about point-in-time accuracy for OVERVIEW data
+    if curr_date and result and not result.startswith("Error"):
+        try:
+            data = json.loads(result)
+            data['_warning'] = (
+                "OVERVIEW data is current snapshot only. For accurate backtesting, "
+                "fundamental ratios may differ from actual values as of " + curr_date
+            )
+            data['_requested_date'] = curr_date
+            return json.dumps(data, indent=2)
+        except:
+            pass
+
+    return result
 
 
 def get_balance_sheet(ticker: str, freq: str = "quarterly", curr_date: str = None) -> str:
     """
     Retrieve balance sheet data for a given ticker symbol using Alpha Vantage.
+    Filtered by curr_date for point-in-time backtesting accuracy.
 
     Args:
         ticker (str): Ticker symbol of the company
-        freq (str): Reporting frequency: annual/quarterly (default quarterly) - not used for Alpha Vantage
-        curr_date (str): Current date you are trading at, yyyy-mm-dd (not used for Alpha Vantage)
+        freq (str): Reporting frequency: annual/quarterly (default quarterly)
+        curr_date (str): Current date you are trading at, yyyy-mm-dd (used for point-in-time filtering)
 
     Returns:
-        str: Balance sheet data with normalized fields
+        str: Balance sheet data with normalized fields, filtered to only include
+            reports that would have been published by curr_date
     """
     params = {
         "symbol": ticker,
     }
 
-    return _make_api_request("BALANCE_SHEET", params)
+    result = _make_api_request("BALANCE_SHEET", params)
+
+    # Filter reports to only include those available as of curr_date
+    return _filter_reports_by_date(result, curr_date)
 
 
 def get_cashflow(ticker: str, freq: str = "quarterly", curr_date: str = None) -> str:
     """
     Retrieve cash flow statement data for a given ticker symbol using Alpha Vantage.
+    Filtered by curr_date for point-in-time backtesting accuracy.
 
     Args:
         ticker (str): Ticker symbol of the company
-        freq (str): Reporting frequency: annual/quarterly (default quarterly) - not used for Alpha Vantage
-        curr_date (str): Current date you are trading at, yyyy-mm-dd (not used for Alpha Vantage)
+        freq (str): Reporting frequency: annual/quarterly (default quarterly)
+        curr_date (str): Current date you are trading at, yyyy-mm-dd (used for point-in-time filtering)
 
     Returns:
-        str: Cash flow statement data with normalized fields
+        str: Cash flow statement data with normalized fields, filtered to only include
+            reports that would have been published by curr_date
     """
     params = {
         "symbol": ticker,
     }
 
-    return _make_api_request("CASH_FLOW", params)
+    result = _make_api_request("CASH_FLOW", params)
+
+    # Filter reports to only include those available as of curr_date
+    return _filter_reports_by_date(result, curr_date)
 
 
 def get_income_statement(ticker: str, freq: str = "quarterly", curr_date: str = None) -> str:
     """
     Retrieve income statement data for a given ticker symbol using Alpha Vantage.
+    Filtered by curr_date for point-in-time backtesting accuracy.
 
     Args:
         ticker (str): Ticker symbol of the company
-        freq (str): Reporting frequency: annual/quarterly (default quarterly) - not used for Alpha Vantage
-        curr_date (str): Current date you are trading at, yyyy-mm-dd (not used for Alpha Vantage)
+        freq (str): Reporting frequency: annual/quarterly (default quarterly)
+        curr_date (str): Current date you are trading at, yyyy-mm-dd (used for point-in-time filtering)
 
     Returns:
-        str: Income statement data with normalized fields
+        str: Income statement data with normalized fields, filtered to only include
+            reports that would have been published by curr_date
     """
     params = {
         "symbol": ticker,
    }
 
-    return _make_api_request("INCOME_STATEMENT", params)
+    result = _make_api_request("INCOME_STATEMENT", params)
+
+    # Filter reports to only include those available as of curr_date
+    return _filter_reports_by_date(result, curr_date)
@@ -220,11 +220,12 @@ def _get_stock_stats_bulk(
             raise Exception("Stockstats fail: Yahoo Finance data not fetched yet!")
     else:
         # Online data fetching with caching
-        today_date = pd.Timestamp.today()
+        # IMPORTANT: Use curr_date as end_date for backtesting accuracy
+        # This ensures we only use data available at the backtest date (point-in-time)
+        curr_date_dt = pd.to_datetime(curr_date)
 
-        end_date = today_date
-        start_date = today_date - pd.DateOffset(years=15)
+        end_date = curr_date_dt  # Use backtest date, NOT today's date
+        start_date = curr_date_dt - pd.DateOffset(years=15)
         start_date_str = start_date.strftime("%Y-%m-%d")
         end_date_str = end_date.strftime("%Y-%m-%d")
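The hunk above swaps `pd.Timestamp.today()` for the backtest date so indicator windows never peek past the point in time being simulated. A minimal sketch of that window computation in isolation (the helper name is hypothetical):

```python
import pandas as pd

def point_in_time_window(curr_date: str, years: int = 15):
    """Return (start, end) date strings for a lookback window ending at the
    backtest date, NOT at today's date, preserving point-in-time accuracy."""
    end_date = pd.to_datetime(curr_date)            # backtest date, not pd.Timestamp.today()
    start_date = end_date - pd.DateOffset(years=years)
    return start_date.strftime("%Y-%m-%d"), end_date.strftime("%Y-%m-%d")

print(point_in_time_window("2024-06-01"))
# → ('2009-06-01', '2024-06-01')
```

Anchoring `end_date` to today instead would leak future prices into every historical indicator, which is the lookahead bug this hunk removes.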
@@ -297,30 +298,80 @@ def get_stockstats_indicator(
     return str(indicator_value)
 
 
+def _filter_fundamentals_by_date(data, curr_date):
+    """
+    Filter fundamentals data to only include reports available on or before curr_date.
+    This ensures point-in-time accuracy for backtesting.
+
+    yfinance returns fundamentals with report dates as column headers.
+    Financial reports are typically published 30-45 days after quarter end.
+    We filter to only include columns (report dates) that are at least 45 days before curr_date.
+    """
+    import pandas as pd
+
+    if data.empty or curr_date is None:
+        return data
+
+    try:
+        curr_date_dt = pd.to_datetime(curr_date)
+        # Financial reports are typically published ~45 days after the report date
+        # So for a report dated 2024-03-31, it would be available around mid-May
+        publication_delay_days = 45
+
+        # Filter columns (report dates) to only include those available at curr_date
+        valid_columns = []
+        for col in data.columns:
+            try:
+                report_date = pd.to_datetime(col)
+                # Report would have been published ~45 days after report_date
+                estimated_publish_date = report_date + pd.Timedelta(days=publication_delay_days)
+                if estimated_publish_date <= curr_date_dt:
+                    valid_columns.append(col)
+            except:
+                # If column can't be parsed as date, keep it (might be a label column)
+                valid_columns.append(col)
+
+        if valid_columns:
+            return data[valid_columns]
+        else:
+            return data.iloc[:, :0]  # Return empty dataframe with same index
+    except Exception as e:
+        print(f"Warning: Could not filter fundamentals by date: {e}")
+        return data
+
+
 def get_balance_sheet(
     ticker: Annotated[str, "ticker symbol of the company"],
     freq: Annotated[str, "frequency of data: 'annual' or 'quarterly'"] = "quarterly",
-    curr_date: Annotated[str, "current date (not used for yfinance)"] = None
+    curr_date: Annotated[str, "current date for point-in-time filtering"] = None
 ):
-    """Get balance sheet data from yfinance."""
+    """Get balance sheet data from yfinance, filtered by curr_date for backtesting accuracy."""
     try:
         # Normalize symbol for yfinance (adds .NS suffix for NSE stocks)
         normalized_ticker = normalize_symbol(ticker, target="yfinance")
         ticker_obj = yf.Ticker(normalized_ticker)
 
         if freq.lower() == "quarterly":
             data = ticker_obj.quarterly_balance_sheet
         else:
             data = ticker_obj.balance_sheet
 
         if data.empty:
             return f"No balance sheet data found for symbol '{normalized_ticker}'"
 
+        # Filter by curr_date for point-in-time accuracy in backtesting
+        data = _filter_fundamentals_by_date(data, curr_date)
+
+        if data.empty:
+            return f"No balance sheet data available for {normalized_ticker} as of {curr_date}"
+
         # Convert to CSV string for consistency with other functions
         csv_string = data.to_csv()
 
         # Add header information
         header = f"# Balance Sheet data for {normalized_ticker} ({freq})\n"
+        if curr_date:
+            header += f"# Point-in-time data as of: {curr_date}\n"
         header += f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
 
         return header + csv_string
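Unlike the Alpha Vantage path, which filters a list of report dicts, the yfinance path filters DataFrame *columns*, since yfinance puts report dates in the column headers. A self-contained sketch of `_filter_fundamentals_by_date` on a toy frame (the underscore-free name and sample numbers are hypothetical):

```python
import pandas as pd

def filter_fundamentals_by_date(data, curr_date, publication_delay_days=45):
    """Drop columns whose report date would not yet be published by curr_date."""
    curr_date_dt = pd.to_datetime(curr_date)
    valid_columns = []
    for col in data.columns:
        try:
            publish = pd.to_datetime(col) + pd.Timedelta(days=publication_delay_days)
            if publish <= curr_date_dt:
                valid_columns.append(col)
        except (ValueError, TypeError):
            valid_columns.append(col)  # non-date column (e.g. a label): keep it
    # No surviving columns: return an empty frame that keeps the row index
    return data[valid_columns] if valid_columns else data.iloc[:, :0]

df = pd.DataFrame(
    {"2024-03-31": [100.0], "2024-06-30": [110.0]},
    index=["TotalAssets"],
)
filtered = filter_fundamentals_by_date(df, "2024-06-01")
print(list(filtered.columns))
# → ['2024-03-31']  (the June quarter is not yet published on 2024-06-01)
```

The `data.iloc[:, :0]` fallback in the diff does the same thing as the final return here: an empty DataFrame that preserves the row index, so downstream `data.empty` checks behave consistently.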
@@ -332,9 +383,9 @@ def get_balance_sheet(
 def get_cashflow(
     ticker: Annotated[str, "ticker symbol of the company"],
     freq: Annotated[str, "frequency of data: 'annual' or 'quarterly'"] = "quarterly",
-    curr_date: Annotated[str, "current date (not used for yfinance)"] = None
+    curr_date: Annotated[str, "current date for point-in-time filtering"] = None
 ):
-    """Get cash flow data from yfinance."""
+    """Get cash flow data from yfinance, filtered by curr_date for backtesting accuracy."""
     try:
         # Normalize symbol for yfinance (adds .NS suffix for NSE stocks)
         normalized_ticker = normalize_symbol(ticker, target="yfinance")
@@ -348,11 +399,19 @@ def get_cashflow(
         if data.empty:
             return f"No cash flow data found for symbol '{normalized_ticker}'"
 
+        # Filter by curr_date for point-in-time accuracy in backtesting
+        data = _filter_fundamentals_by_date(data, curr_date)
+
+        if data.empty:
+            return f"No cash flow data available for {normalized_ticker} as of {curr_date}"
+
         # Convert to CSV string for consistency with other functions
         csv_string = data.to_csv()
 
         # Add header information
         header = f"# Cash Flow data for {normalized_ticker} ({freq})\n"
+        if curr_date:
+            header += f"# Point-in-time data as of: {curr_date}\n"
         header += f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
 
         return header + csv_string
@@ -364,9 +423,9 @@ def get_cashflow(
 def get_income_statement(
     ticker: Annotated[str, "ticker symbol of the company"],
     freq: Annotated[str, "frequency of data: 'annual' or 'quarterly'"] = "quarterly",
-    curr_date: Annotated[str, "current date (not used for yfinance)"] = None
+    curr_date: Annotated[str, "current date for point-in-time filtering"] = None
 ):
-    """Get income statement data from yfinance."""
+    """Get income statement data from yfinance, filtered by curr_date for backtesting accuracy."""
     try:
         # Normalize symbol for yfinance (adds .NS suffix for NSE stocks)
         normalized_ticker = normalize_symbol(ticker, target="yfinance")
@@ -380,11 +439,19 @@ def get_income_statement(
         if data.empty:
             return f"No income statement data found for symbol '{normalized_ticker}'"
 
+        # Filter by curr_date for point-in-time accuracy in backtesting
+        data = _filter_fundamentals_by_date(data, curr_date)
+
+        if data.empty:
+            return f"No income statement data available for {normalized_ticker} as of {curr_date}"
+
         # Convert to CSV string for consistency with other functions
         csv_string = data.to_csv()
 
         # Add header information
         header = f"# Income Statement data for {normalized_ticker} ({freq})\n"
+        if curr_date:
+            header += f"# Point-in-time data as of: {curr_date}\n"
         header += f"# Data retrieved on: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
 
         return header + csv_string
@@ -1,11 +1,17 @@
 # TradingAgents/graph/trading_graph.py
 
 import os
+import sys
+from pathlib import Path
 import json
-from datetime import date
+from datetime import date, datetime
 from typing import Dict, Any, Tuple, List, Optional
 
+# Add frontend backend to path for database access
+FRONTEND_BACKEND_PATH = Path(__file__).parent.parent.parent / "frontend" / "backend"
+if str(FRONTEND_BACKEND_PATH) not in sys.path:
+    sys.path.insert(0, str(FRONTEND_BACKEND_PATH))
+
 from langchain_openai import ChatOpenAI
 from langchain_anthropic import ChatAnthropic
 from langchain_google_genai import ChatGoogleGenerativeAI
@@ -191,6 +197,9 @@ class TradingAgentsGraph:
         # Log state
         self._log_state(trade_date, final_state)
 
+        # Save to frontend database for UI display
+        self._save_to_frontend_db(trade_date, final_state)
+
         # Return decision and processed signal
         return final_state, self.process_signal(final_state["final_trade_decision"])
@@ -236,6 +245,93 @@ class TradingAgentsGraph:
         ) as f:
             json.dump(self.log_states_dict, f, indent=4)
 
+    def _save_to_frontend_db(self, trade_date: str, final_state: Dict[str, Any]):
+        """Save pipeline data to the frontend database for UI display.
+
+        Args:
+            trade_date: The date of the analysis
+            final_state: The final state from the graph execution
+        """
+        try:
+            from database import (
+                init_db,
+                save_agent_report,
+                save_debate_history,
+                save_pipeline_steps_bulk,
+                save_data_source_logs_bulk
+            )
+
+            # Initialize database if needed
+            init_db()
+
+            symbol = final_state.get("company_of_interest", self.ticker)
+            now = datetime.now().isoformat()
+
+            # 1. Save agent reports
+            agent_reports = [
+                ("market", final_state.get("market_report", "")),
+                ("news", final_state.get("news_report", "")),
+                ("social_media", final_state.get("sentiment_report", "")),
+                ("fundamentals", final_state.get("fundamentals_report", "")),
+            ]
+
+            for agent_type, content in agent_reports:
+                if content:
+                    save_agent_report(
+                        date=trade_date,
+                        symbol=symbol,
+                        agent_type=agent_type,
+                        report_content=content,
+                        data_sources_used=[]
+                    )
+
+            # 2. Save investment debate
+            invest_debate = final_state.get("investment_debate_state", {})
+            if invest_debate:
+                save_debate_history(
+                    date=trade_date,
+                    symbol=symbol,
+                    debate_type="investment",
+                    bull_arguments=invest_debate.get("bull_history", ""),
+                    bear_arguments=invest_debate.get("bear_history", ""),
+                    judge_decision=invest_debate.get("judge_decision", ""),
+                    full_history=invest_debate.get("history", "")
+                )
+
+            # 3. Save risk debate
+            risk_debate = final_state.get("risk_debate_state", {})
+            if risk_debate:
+                save_debate_history(
+                    date=trade_date,
+                    symbol=symbol,
+                    debate_type="risk",
+                    risky_arguments=risk_debate.get("risky_history", ""),
+                    safe_arguments=risk_debate.get("safe_history", ""),
+                    neutral_arguments=risk_debate.get("neutral_history", ""),
+                    judge_decision=risk_debate.get("judge_decision", ""),
+                    full_history=risk_debate.get("history", "")
+                )
+
+            # 4. Save pipeline steps (tracking the stages)
+            pipeline_steps = [
+                {"step_number": 1, "step_name": "initialize", "status": "completed", "started_at": now, "completed_at": now, "output_summary": "Pipeline initialized"},
+                {"step_number": 2, "step_name": "market_analysis", "status": "completed", "started_at": now, "completed_at": now, "output_summary": "Market analysis complete" if final_state.get("market_report") else "Skipped"},
+                {"step_number": 3, "step_name": "news_analysis", "status": "completed", "started_at": now, "completed_at": now, "output_summary": "News analysis complete" if final_state.get("news_report") else "Skipped"},
+                {"step_number": 4, "step_name": "social_analysis", "status": "completed", "started_at": now, "completed_at": now, "output_summary": "Social analysis complete" if final_state.get("sentiment_report") else "Skipped"},
+                {"step_number": 5, "step_name": "fundamental_analysis", "status": "completed", "started_at": now, "completed_at": now, "output_summary": "Fundamental analysis complete" if final_state.get("fundamentals_report") else "Skipped"},
+                {"step_number": 6, "step_name": "investment_debate", "status": "completed", "started_at": now, "completed_at": now, "output_summary": invest_debate.get("judge_decision", "")[:100] if invest_debate else "Skipped"},
+                {"step_number": 7, "step_name": "trader_decision", "status": "completed", "started_at": now, "completed_at": now, "output_summary": final_state.get("trader_investment_plan", "")[:100] if final_state.get("trader_investment_plan") else "Skipped"},
+                {"step_number": 8, "step_name": "risk_debate", "status": "completed", "started_at": now, "completed_at": now, "output_summary": risk_debate.get("judge_decision", "")[:100] if risk_debate else "Skipped"},
+                {"step_number": 9, "step_name": "final_decision", "status": "completed", "started_at": now, "completed_at": now, "output_summary": final_state.get("final_trade_decision", "")[:100] if final_state.get("final_trade_decision") else "Pending"},
+            ]
+            save_pipeline_steps_bulk(trade_date, symbol, pipeline_steps)
+
+            print(f"[Frontend DB] Saved pipeline data for {symbol} on {trade_date}")
+
+        except Exception as e:
+            print(f"[Frontend DB] Warning: Could not save to frontend database: {e}")
+            # Don't fail the main process if frontend DB save fails
+
     def reflect_and_remember(self, returns_losses):
         """Reflect on decisions and update memory based on returns."""
         self.reflector.reflect_bull_researcher(
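Note the design choice in `_save_to_frontend_db`: the frontend database is an optional sink, so the entire write is wrapped in a broad `try/except` and any failure is only logged. A tiny standalone sketch of this fail-soft persistence pattern (the helper names and fake writer are hypothetical illustrations, not from the commit):

```python
def save_safely(writer, payload):
    """Attempt an optional persistence step; log and swallow any failure so
    the main trading pipeline never aborts because of the UI database."""
    try:
        writer(payload)
        return True
    except Exception as e:
        print(f"[Frontend DB] Warning: Could not save to frontend database: {e}")
        return False

def failing_writer(payload):
    raise ConnectionError("db unavailable")

ok = save_safely(failing_writer, {"symbol": "RELIANCE", "date": "2024-06-01"})
print(ok)
# → False, after printing the warning; the caller continues normally
```

The trade-off is that a misconfigured frontend DB degrades silently (only a console warning), which is acceptable here because the analysis results are still logged by `_log_state`.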