Compare commits

...

53 Commits

Author SHA1 Message Date
hemangjoshi37a 572ef6c367 Address PR review feedback from Gemini Code Assist
Fixes:
- Remove duplicate get_running_analyses function (critical)
- Fix N+1 query in get_pipeline_summary_for_date with batch queries (high)
- Add thread-safety warning comment for running_analyses dict (high)
- Remove package-lock.json from .gitignore and track it (high)
- Config param in memory.py kept for backward compatibility (documented)
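The N+1 fix listed above can be sketched as follows. This is a minimal illustration of the batching pattern, not the project's actual code: the table name, columns, and function names here are hypothetical.

```python
import sqlite3

def summaries_naive(conn, tickers, date):
    # N+1 pattern: one query per ticker (what the fix removed)
    return {
        t: conn.execute(
            "SELECT status FROM analyses WHERE ticker=? AND date=?", (t, date)
        ).fetchone()
        for t in tickers
    }

def summaries_batched(conn, tickers, date):
    # Single batched query using an IN (...) clause
    placeholders = ",".join("?" * len(tickers))
    rows = conn.execute(
        f"SELECT ticker, status FROM analyses "
        f"WHERE date=? AND ticker IN ({placeholders})",
        (date, *tickers),
    ).fetchall()
    return {ticker: (status,) for ticker, status in rows}
```

Both functions return the same mapping; the batched version issues one query regardless of how many tickers the pipeline summary covers.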

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 08:24:35 +11:00
hemangjoshi37a 7e4700626e Add Nifty50 AI Frontend documentation with screenshots to main README
- Added comprehensive frontend section with 10 feature screenshots
- Documented all key features: Dashboard, Settings, Pipeline, Debates, History
- Included Quick Start guide and tech stack information
- Added project structure overview

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 08:10:21 +11:00
hemangjoshi37a b6ea4bd939 add all 2026-02-01 08:05:09 +11:00
hemangjoshi37a 24d61e673a Add Settings UI, Analysis Pipeline visualization, and comprehensive documentation
Features:
- Settings panel with LLM provider selection (Claude Subscription / Anthropic API)
- API key management with secure browser localStorage
- Model selection for Deep Think (Opus) and Quick Think (Sonnet/Haiku)
- Configurable max debate rounds (1-5)
- Full analysis pipeline visualization with 9-step progress tracking
- Agent reports display (Market, News, Social, Fundamentals analysts)
- Investment debate viewer (Bull vs Bear with Research Manager decision)
- Risk debate viewer (Aggressive vs Conservative vs Neutral)
- Data sources tracking panel
- Dark mode support throughout
- Bulk "Analyze All" functionality for all 50 stocks

Backend:
- Added analysis config parameters to API endpoints
- Support for provider/model selection in analysis requests
- Indian market data integration improvements

Documentation:
- Comprehensive README with 10 feature screenshots
- API endpoint documentation
- Project structure guide
- Getting started instructions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 08:01:53 +11:00
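The analysis config parameters this commit adds to the API (provider, model selection, debate rounds 1-5) could be validated server-side along these lines. This is a hedged sketch with hypothetical field and model names; the real request schema may differ.

```python
from dataclasses import dataclass

# Hypothetical provider identifiers matching the two options in the Settings UI
VALID_PROVIDERS = {"claude_subscription", "anthropic_api"}

@dataclass
class AnalysisConfig:
    provider: str = "claude_subscription"
    deep_think_model: str = "opus"      # placeholder model names
    quick_think_model: str = "sonnet"
    max_debate_rounds: int = 1

    def __post_init__(self):
        if self.provider not in VALID_PROVIDERS:
            raise ValueError(f"unknown provider: {self.provider!r}")
        if not 1 <= self.max_debate_rounds <= 5:
            raise ValueError("max_debate_rounds must be between 1 and 5")
```

Rejecting out-of-range values at construction time keeps the pipeline endpoints from ever seeing a malformed config.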
hemangjoshi37a cf1f89adf7 ok 2026-02-01 06:55:15 +11:00
Hemang Joshi df916f1c1a add 2026-01-31 18:44:53 +05:30
hemangjoshi37a 254c6104bb Add Nifty50 AI Trading Dashboard frontend and Indian market support
- Add React + Vite + Tailwind CSS frontend for Nifty50 recommendations
- Add FastAPI backend for serving stock recommendations
- Add Indian market data sources (jugaad_data, markets API)
- Add Claude MAX LLM integration
- Add Nifty50 stock recommender modules
- Update dataflows for Indian market support
- Fix various utility and configuration updates

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 19:41:01 +11:00
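The Nifty50 support added here is later imported in `cli/main.py` as `is_nifty_50_stock` and `NIFTY_50_STOCKS` from `tradingagents.dataflows.markets`. A minimal sketch of what such a helper might look like (abbreviated stock map; the real table covers all 50 constituents):

```python
# Abbreviated map of NSE symbol -> company name (illustrative subset only)
NIFTY_50_STOCKS = {
    "RELIANCE": "Reliance Industries",
    "TCS": "Tata Consultancy Services",
}

def is_nifty_50_stock(ticker: str) -> bool:
    # Accept both bare NSE symbols and yfinance-style ".NS" suffixed tickers
    return ticker.upper().replace(".NS", "") in NIFTY_50_STOCKS
```

Normalizing the `.NS` suffix lets the same check work for both the jugaad-data and yfinance ticker conventions.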
Edward Sun 13b826a31d
Merge pull request #245 from TauricResearch/feat/tooloptim
Y Finance Tools Optimizations
2025-10-09 00:34:10 -07:00
Edward Sun b2ef960da7 updated readme 2025-10-09 00:32:04 -07:00
Edward Sun a5dcc7da45 update readme 2025-10-06 20:33:12 -07:00
Edward Sun 7bb2941b07 optimized yfin fetching to be much faster 2025-10-06 19:58:01 -07:00
Yijia Xiao 32be17c606
Merge pull request #235 from luohy15/data_vendor
Add Alpha Vantage API Integration and Refactor Data Provider Architecture
2025-10-05 16:01:30 -07:00
Edward Sun c07dcf026b added fallbacks for tools 2025-10-03 22:40:09 -07:00
luohy15 d23fb539e9 minor fix 2025-09-30 13:27:48 +08:00
luohy15 b01051b9f4 Switch default data vendor
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-30 12:43:27 +08:00
luohy15 8fdbbcca3d alpha vantage api key url 2025-09-29 18:22:31 +08:00
luohy15 86bc0e793f minor fix 2025-09-27 00:04:59 +08:00
luohy15 7fc9c28a94 Add environment variable configuration support
- Add .env.example file with API key placeholders
- Update README.md with .env file setup instructions
- Add dotenv loading in main.py for environment variables

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 23:58:51 +08:00
luohy15 7bcc2cbd8a Update configuration documentation for Alpha Vantage data vendor
Add data vendor configuration examples in README and main.py showing how to configure Alpha Vantage as the primary data provider. Update documentation to reflect the current default behavior of using Alpha Vantage for real-time market data access.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 23:52:26 +08:00
luohy15 6211b1132a Improve Alpha Vantage indicator column parsing with robust mapping
- Replace hardcoded column indices with column name lookup
- Add mapping for all supported indicators to their expected CSV column names
- Handle missing columns gracefully with descriptive error messages
- Strip whitespace from header parsing for reliability

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 23:36:36 +08:00
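The column-name lookup this commit describes can be sketched as below: a minimal illustration of replacing hardcoded indices with a header-driven mapping, using a partial indicator map and hypothetical function name.

```python
import csv
import io

# Expected CSV column per indicator (illustrative subset; the real mapping
# covers all supported indicators)
INDICATOR_COLUMNS = {"SMA": "SMA", "EMA": "EMA", "RSI": "RSI", "ATR": "ATR"}

def parse_indicator_csv(text: str, indicator: str) -> dict:
    reader = csv.reader(io.StringIO(text))
    # Strip whitespace from the header for reliability
    header = [h.strip() for h in next(reader)]
    col = INDICATOR_COLUMNS.get(indicator)
    if col is None or col not in header:
        # Fail with a descriptive message instead of an index error
        raise ValueError(f"column for {indicator!r} not found in header {header}")
    idx = header.index(col)
    return {row[0]: float(row[idx]) for row in reader if row}
```

Looking columns up by name means a reordered or extended CSV response no longer silently returns the wrong series.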
luohy15 8b04ec307f minor fix 2025-09-26 23:25:33 +08:00
luohy15 0ab323c2c6 Add Alpha Vantage API integration as primary data provider
- Replace FinnHub with Alpha Vantage API in README documentation
- Implement comprehensive Alpha Vantage modules:
  - Stock data (daily OHLCV with date filtering)
  - Technical indicators (SMA, EMA, MACD, RSI, Bollinger Bands, ATR)
  - Fundamental data (overview, balance sheet, cashflow, income statement)
  - News and sentiment data with insider transactions
- Update news analyst tools to use ticker-based news search
- Integrate Alpha Vantage vendor methods into interface routing
- Maintain backward compatibility with existing vendor system

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 22:57:50 +08:00
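The "daily OHLCV with date filtering" piece of this integration could look roughly like the following. `TIME_SERIES_DAILY` and the `"Time Series (Daily)"` response key are documented Alpha Vantage conventions; the function names and split between fetching and filtering are this sketch's own, and the network call is not exercised here.

```python
import json
import urllib.parse
import urllib.request

def fetch_daily(symbol: str, api_key: str) -> dict:
    # Documented Alpha Vantage endpoint for daily OHLCV data
    params = urllib.parse.urlencode(
        {"function": "TIME_SERIES_DAILY", "symbol": symbol, "apikey": api_key}
    )
    with urllib.request.urlopen(f"https://www.alphavantage.co/query?{params}") as resp:
        return json.load(resp)

def filter_by_date(payload: dict, start: str, end: str) -> dict:
    # Dates are ISO-formatted strings, so lexicographic comparison is chronological
    series = payload.get("Time Series (Daily)", {})
    return {d: v for d, v in series.items() if start <= d <= end}
```

Filtering client-side keeps the vendor call simple while letting the agent tools request arbitrary date windows.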
luohy15 a6734d71bc WIP 2025-09-26 16:17:50 +08:00
Yijia Xiao a438acdbbd
Merge pull request #89 from Mirza-Samad-Ahmed-Baig/fixes
Enhancement: agent reflection, logging improvement
2025-07-03 10:15:39 -04:00
Yijia Xiao c73e374e7c
Update main.py 2025-07-03 10:14:06 -04:00
mirza-samad-ahmed-baig f704828f89 Fix: Prevent infinite loops, enable reflection, and improve logging 2025-07-03 17:43:40 +05:00
Edward Sun fda4f664e8
Merge pull request #49 from Zhongyi-Lu/a
Exclude `.env` from Git.
2025-07-01 09:17:46 -07:00
Yijia Xiao 718df34932
Merge pull request #29 from ZeroAct/save_results
Save results
2025-06-26 00:28:30 -04:00
Max Wong 43aa9c5d09
Local Ollama (#53)
- Fix typo 'Start' 'End'
- Add llama3.1 selection
- Use 'quick_think_llm' model instead of hard-coding GPT
2025-06-26 00:27:01 -04:00
Yijia Xiao 26c5ba5a78
Revert "Docker support and Ollama support (#47)" (#57)
This reverts commit 78ea029a0b.
2025-06-26 00:07:58 -04:00
Geeta Chauhan 78ea029a0b
Docker support and Ollama support (#47)
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER
2025-06-25 23:57:05 -04:00
Huijae Lee ee3d499894
Merge branch 'TauricResearch:main' into save_results 2025-06-25 08:43:19 +09:00
Yijia Xiao 7abff0f354
Merge pull request #46 from AtharvSabde/patch-2
Updated requirements.txt based on latest commit
2025-06-23 20:40:58 -04:00
Yijia Xiao b575bd0941
Merge pull request #52 from TauricResearch/dev
Merge dev into main. Add support for Anthropic and OpenRouter.
2025-06-23 20:38:14 -04:00
Zhongyi Lu b8f712b170 Exclude `.env` from Git 2025-06-21 23:29:26 -07:00
Edward Sun 52284ce13c fixed anthropic support. Anthropic has different format of response when it has tool calls. Explicit handling added 2025-06-21 12:51:34 -07:00
Atharv Sabde 11804f88ff
Updated requirements.txt based on latest commit
PULL REQUEST: Add support for other backends, such as OpenRouter and Ollama

It had two requirements missing; added those.
2025-06-20 15:58:22 +05:30
Yijia Xiao 1e86e74314
Merge pull request #40 from RealMyth21/main
Updated README.md: Swap Trader and Management order.
2025-06-19 15:10:36 -04:00
Yijia Xiao c2f897fc67
Merge pull request #43 from AtharvSabde/patch-1
fundamentals_analyst.py (spelling mistake in instruction: Makrdown -> Markdown)
2025-06-19 15:05:08 -04:00
Yijia Xiao ed32081f57
Merge pull request #44 from TauricResearch/dev
Merge dev into main branch
2025-06-19 15:00:07 -04:00
Atharv Sabde 2af7ef3d79
fundamentals_analyst.py(spelling mistake.markdown) 2025-06-19 21:48:16 +05:30
Mithil Srungarapu 383deb72aa
Updated README.md
The diagrams were switched, so I fixed it.
2025-06-18 19:08:10 -07:00
Edward Sun 7eaf4d995f update clear msg bc anthropic needs at least 1 msg in chat call 2025-06-15 23:14:47 -07:00
Edward Sun da84ef43aa main works, cli bugs 2025-06-15 22:20:59 -07:00
Edward Sun 90b23e72f5
Merge pull request #25 from maxer137/main
Add support for other backends, such as OpenRouter and Ollama
2025-06-15 16:06:20 -07:00
ZeroAct 417b09712c refactor 2025-06-12 13:53:28 +09:00
saksham0161 570644d939
Fix ticker hardcoding in prompt (#28) 2025-06-11 19:43:39 -07:00
ZeroAct 9647359246 save reports & logs under results_dir 2025-06-12 11:25:07 +09:00
maxer137 99789f9cd1 Add support for other backends, such as OpenRouter and Ollama
This aims to offer alternative OpenAI-compatible APIs, letting people
experiment with running the application locally.
2025-06-11 14:19:25 +02:00
neo a879868396
docs: add links to other language versions of README (#13)
Added language selection links to the README for easier access to translated versions: German, Spanish, French, Japanese, Korean, Portuguese, Russian, and Chinese.
2025-06-09 15:51:06 -07:00
Yijia-Xiao 0013415378 Add star history 2025-06-09 15:14:41 -07:00
Edward Sun 0fdfd35867
Fix default python usage config code 2025-06-08 13:16:10 -07:00
Edward Sun e994e56c23
Remove EODHD from readme 2025-06-07 15:04:43 -07:00
185 changed files with 26874 additions and 1474 deletions

.env.example (new file, 2 lines)

@@ -0,0 +1,2 @@
ALPHA_VANTAGE_API_KEY=alpha_vantage_api_key_placeholder
OPENAI_API_KEY=openai_api_key_placeholder

.gitignore (vendored, 11 changed lines)

@@ -1,8 +1,17 @@
.venv
results
env/
__pycache__/
.DS_Store
*.csv
src/
/src/
eval_results/
eval_data/
*.egg-info/
.env
# Node.js
node_modules/
# Frontend dev artifacts
.frontend-dev/

(62 binary image files added: frontend screenshots and other assets; previews not shown)
.python-version (new file, 1 line)

@@ -0,0 +1 @@
3.10

README.md (192 changed lines)

@@ -11,6 +11,18 @@
<a href="https://github.com/TauricResearch/" target="_blank"><img alt="Community" src="https://img.shields.io/badge/Join_GitHub_Community-TauricResearch-14C290?logo=discourse"/></a>
</div>
<div align="center">
<!-- Keep these links. Translations will automatically update with the README. -->
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=de">Deutsch</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=es">Español</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=fr">français</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ja">日本語</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ko">한국어</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=pt">Português</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ru">Русский</a> |
<a href="https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=zh">中文</a>
</div>
---
# TradingAgents: Multi-Agents LLM Financial Trading Framework
@@ -19,6 +31,16 @@
>
> So we decided to fully open-source the framework. Looking forward to building impactful projects with you!
<div align="center">
<a href="https://www.star-history.com/#TauricResearch/TradingAgents&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&type=Date" />
<img alt="TradingAgents Star History" src="https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&type=Date" style="width: 80%; height: auto;" />
</picture>
</a>
</div>
<div align="center">
🚀 [TradingAgents](#tradingagents-framework) | ⚡ [Installation & CLI](#installation-and-cli) | 🎬 [Demo](https://www.youtube.com/watch?v=90gr5lwjIho) | 📦 [Package Usage](#tradingagents-package) | 🤝 [Contributing](#contributing) | 📄 [Citation](#citation)
@@ -58,7 +80,7 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
- Composes reports from the analysts and researchers to make informed trading decisions. It determines the timing and magnitude of trades based on comprehensive market insights.
<p align="center">
<img src="assets/risk.png" width="70%" style="display: inline-block; margin: 0 2%;">
<img src="assets/trader.png" width="70%" style="display: inline-block; margin: 0 2%;">
</p>
### Risk Management and Portfolio Manager
@@ -66,7 +88,7 @@ Our framework decomposes complex trading tasks into specialized roles. This ensu
- The Portfolio Manager approves/rejects the transaction proposal. If approved, the order will be sent to the simulated exchange and executed.
<p align="center">
<img src="assets/trader.png" width="70%" style="display: inline-block; margin: 0 2%;">
<img src="assets/risk.png" width="70%" style="display: inline-block; margin: 0 2%;">
</p>
## Installation and CLI
@@ -92,16 +114,21 @@ pip install -r requirements.txt
### Required APIs
You will also need the FinnHub API and EODHD API for financial data. All of our code is implemented with the free tier.
```bash
export FINNHUB_API_KEY=$YOUR_FINNHUB_API_KEY
```
You will need the OpenAI API for all the agents, and [Alpha Vantage API](https://www.alphavantage.co/support/#api-key) for fundamental and news data (default configuration).
You will need the OpenAI API for all the agents.
```bash
export OPENAI_API_KEY=$YOUR_OPENAI_API_KEY
export ALPHA_VANTAGE_API_KEY=$YOUR_ALPHA_VANTAGE_API_KEY
```
Alternatively, you can create a `.env` file in the project root with your API keys (see `.env.example` for reference):
```bash
cp .env.example .env
# Edit .env with your actual API keys
```
**Note:** We are happy to partner with Alpha Vantage to provide robust API support for TradingAgents. You can get a free Alpha Vantage API key [here](https://www.alphavantage.co/support/#api-key); TradingAgents-sourced requests also have increased rate limits of 60 requests per minute with no daily limits. Typically this quota is sufficient for performing complex tasks with TradingAgents, thanks to Alpha Vantage's open-source support program. If you prefer to use OpenAI for these data sources instead, you can modify the data vendor settings in `tradingagents/default_config.py`.
### CLI Usage
You can also try out the CLI directly by running:
@@ -124,6 +151,143 @@ An interface will appear showing results as they load, letting you track the age
<img src="assets/cli/cli_transaction.png" width="100%" style="display: inline-block; margin: 0 2%;">
</p>
---
## 🌐 Nifty50 AI Trading Dashboard (Web Frontend)
A modern, feature-rich web dashboard for TradingAgents, specifically built for **Indian Nifty 50 stocks**. This dashboard provides a complete visual interface for AI-powered stock analysis with full transparency into the multi-agent decision process.
### 🚀 Quick Start
```bash
# Start the backend server
cd frontend/backend
pip install -r requirements.txt
python server.py # Runs on http://localhost:8001
# Start the frontend (in a new terminal)
cd frontend
npm install
npm run dev # Runs on http://localhost:5173
```
### ✨ Key Features
#### Dashboard - AI Recommendations at a Glance
View all 50 Nifty stocks with AI recommendations, top picks, stocks to avoid, and one-click bulk analysis.
<p align="center">
<img src="frontend/docs/screenshots/01-dashboard.png" width="100%" style="display: inline-block;">
</p>
#### 🌙 Dark Mode Support
Full dark mode with automatic system theme detection for comfortable viewing.
<p align="center">
<img src="frontend/docs/screenshots/08-dashboard-dark-mode.png" width="100%" style="display: inline-block;">
</p>
#### ⚙️ Configurable Settings Panel
Configure your AI analysis directly from the browser:
- **LLM Provider**: Claude Subscription or Anthropic API
- **Model Selection**: Choose Deep Think (Opus) and Quick Think (Sonnet/Haiku) models
- **API Key Management**: Securely stored in browser localStorage
- **Debate Rounds**: Adjust thoroughness (1-5 rounds)
<p align="center">
<img src="frontend/docs/screenshots/02-settings-modal.png" width="60%" style="display: inline-block;">
</p>
#### 📊 Stock Detail View
Detailed analysis for each stock with interactive price charts, recommendation history, and AI analysis summaries.
<p align="center">
<img src="frontend/docs/screenshots/03-stock-detail-overview.png" width="100%" style="display: inline-block;">
</p>
#### 🔬 Analysis Pipeline Visualization
See exactly how the AI reached its decision with a 9-step pipeline showing:
- Data collection progress
- Individual agent reports (Market, News, Social Media, Fundamentals)
- Real-time status tracking
<p align="center">
<img src="frontend/docs/screenshots/04-analysis-pipeline.png" width="100%" style="display: inline-block;">
</p>
#### 💬 Investment Debates (Bull vs Bear)
Watch AI agents debate investment decisions with full transparency:
- **Bull Analyst**: Makes the case for buying
- **Bear Analyst**: Presents risks and concerns
- **Research Manager**: Weighs both sides and decides
<p align="center">
<img src="frontend/docs/screenshots/05-debates-tab.png" width="100%" style="display: inline-block;">
</p>
<details>
<summary><b>📜 View Full Debate Example (Click to expand)</b></summary>
<p align="center">
<img src="frontend/docs/screenshots/06-investment-debate-expanded.png" width="100%" style="display: inline-block;">
</p>
</details>
#### 📈 Historical Analysis & Backtesting
Track AI performance over time with comprehensive analytics:
- Prediction accuracy metrics (Buy/Sell/Hold)
- Risk metrics (Sharpe ratio, max drawdown, win rate)
- Portfolio simulator with customizable starting amounts
- AI Strategy vs Nifty50 Index comparison
<p align="center">
<img src="frontend/docs/screenshots/10-history-page.png" width="100%" style="display: inline-block;">
</p>
#### 📚 How It Works
Educational content explaining the multi-agent AI system and decision process.
<p align="center">
<img src="frontend/docs/screenshots/09-how-it-works.png" width="100%" style="display: inline-block;">
</p>
### 🛠️ Frontend Tech Stack
| Technology | Purpose |
|------------|---------|
| React 18 + TypeScript | Core framework |
| Vite | Build tool & dev server |
| Tailwind CSS | Styling with dark mode |
| Recharts | Interactive charts |
| Lucide React | Icons |
| FastAPI (Python) | Backend API |
| SQLite | Data persistence |
### 📁 Frontend Project Structure
```
frontend/
├── src/
│ ├── components/
│ │ ├── pipeline/ # Pipeline visualization
│ │ ├── SettingsModal.tsx # Settings UI
│ │ └── Header.tsx
│ ├── contexts/
│ │ └── SettingsContext.tsx
│ ├── pages/
│ │ ├── Dashboard.tsx
│ │ ├── StockDetail.tsx
│ │ ├── History.tsx
│ │ └── About.tsx
│ └── services/
│ └── api.ts
├── backend/
│ ├── server.py
│ └── database.py
└── docs/screenshots/
```
---
## TradingAgents Package
### Implementation Details
@@ -136,8 +300,9 @@ To use TradingAgents inside your code, you can import the `tradingagents` module
```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
ta = TradingAgentsGraph(debug=True, config=config)
ta = TradingAgentsGraph(debug=True, config=DEFAULT_CONFIG.copy())
# forward propagate
_, decision = ta.propagate("NVDA", "2024-05-10")
@@ -155,7 +320,14 @@ config = DEFAULT_CONFIG.copy()
config["deep_think_llm"] = "gpt-4.1-nano" # Use a different model
config["quick_think_llm"] = "gpt-4.1-nano" # Use a different model
config["max_debate_rounds"] = 1 # Adjust debate rounds
config["online_tools"] = True # Use online tools or cached data
# Configure data vendors (default uses yfinance and Alpha Vantage)
config["data_vendors"] = {
"core_stock_apis": "yfinance", # Options: yfinance, alpha_vantage, local
"technical_indicators": "yfinance", # Options: yfinance, alpha_vantage, local
"fundamental_data": "alpha_vantage", # Options: openai, alpha_vantage, local
"news_data": "alpha_vantage", # Options: openai, alpha_vantage, google, local
}
# Initialize with custom config
ta = TradingAgentsGraph(debug=True, config=config)
@@ -165,7 +337,7 @@ _, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)
```
> For `online_tools`, we recommend enabling them for experimentation, as they provide access to real-time data. The agents' offline tools rely on cached data from our **Tauric TradingDB**, a curated dataset we use for backtesting. We're currently in the process of refining this dataset, and we plan to release it soon alongside our upcoming projects. Stay tuned!
> The default configuration uses yfinance for stock price and technical data, and Alpha Vantage for fundamental and news data. For production use or if you encounter rate limits, consider upgrading to [Alpha Vantage Premium](https://www.alphavantage.co/premium/) for more stable and reliable data access. For offline experimentation, there's a local data vendor option that uses our **Tauric TradingDB**, a curated dataset for backtesting, though this is still in development. We're currently refining this dataset and plan to release it soon alongside our upcoming projects. Stay tuned!
You can view the full list of configurations in `tradingagents/default_config.py`.

cli/main.py

@@ -1,7 +1,13 @@
from typing import Optional
import datetime
import typer
from pathlib import Path
from functools import wraps
from rich.console import Console
from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()
from rich.panel import Panel
from rich.spinner import Spinner
from rich.live import Live
@@ -20,6 +26,7 @@ from rich.rule import Rule
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.dataflows.markets import is_nifty_50_stock, NIFTY_50_STOCKS
from cli.models import AnalystType
from cli.utils import *
@@ -97,7 +104,7 @@ class MessageBuffer:
if content is not None:
latest_section = section
latest_content = content
if latest_section and latest_content:
# Format the current section for display
section_titles = {
@@ -295,10 +302,27 @@ def update_display(layout, spinner_text=None):
# Add regular messages
for timestamp, msg_type, content in message_buffer.messages:
# Convert content to string if it's not already
content_str = content
if isinstance(content, list):
# Handle list of content blocks (Anthropic format)
text_parts = []
for item in content:
if isinstance(item, dict):
if item.get('type') == 'text':
text_parts.append(item.get('text', ''))
elif item.get('type') == 'tool_use':
text_parts.append(f"[Tool: {item.get('name', 'unknown')}]")
else:
text_parts.append(str(item))
content_str = ' '.join(text_parts)
elif not isinstance(content_str, str):
content_str = str(content)
# Truncate message content if too long
if isinstance(content, str) and len(content) > 200:
content = content[:197] + "..."
all_messages.append((timestamp, msg_type, content))
if len(content_str) > 200:
content_str = content_str[:197] + "..."
all_messages.append((timestamp, msg_type, content_str))
# Sort by timestamp
all_messages.sort(key=lambda x: x[0])
@@ -406,29 +430,42 @@ def get_user_selections():
box_content += f"\n[dim]Default: {default}[/dim]"
return Panel(box_content, border_style="blue", padding=(1, 2))
# Step 1: Ticker symbol
# Step 1: Market selection
console.print(
create_question_box(
"Step 1: Ticker Symbol", "Enter the ticker symbol to analyze", "SPY"
"Step 1: Market Selection", "Select the market for your analysis"
)
)
selected_ticker = get_ticker()
selected_market = select_market()
# Step 2: Analysis date
# Show Nifty 50 stocks if Indian market is selected
if selected_market == "india_nse":
show_nifty_50_stocks()
# Step 2: Ticker symbol
console.print(
create_question_box(
"Step 2: Ticker Symbol", "Enter the ticker symbol to analyze",
"RELIANCE" if selected_market == "india_nse" else "SPY"
)
)
selected_ticker = get_ticker_with_market_hint(selected_market)
# Step 3: Analysis date
default_date = datetime.datetime.now().strftime("%Y-%m-%d")
console.print(
create_question_box(
"Step 2: Analysis Date",
"Step 3: Analysis Date",
"Enter the analysis date (YYYY-MM-DD)",
default_date,
)
)
analysis_date = get_analysis_date()
# Step 3: Select analysts
# Step 4: Select analysts
console.print(
create_question_box(
"Step 3: Analysts Team", "Select your LLM analyst agents for the analysis"
"Step 4: Analysts Team", "Select your LLM analyst agents for the analysis"
)
)
selected_analysts = select_analysts()
@@ -436,30 +473,41 @@ def get_user_selections():
f"[green]Selected analysts:[/green] {', '.join(analyst.value for analyst in selected_analysts)}"
)
# Step 4: Research depth
# Step 5: Research depth
console.print(
create_question_box(
"Step 4: Research Depth", "Select your research depth level"
"Step 5: Research Depth", "Select your research depth level"
)
)
selected_research_depth = select_research_depth()
# Step 5: Thinking agents
# Step 6: OpenAI backend
console.print(
create_question_box(
"Step 5: Thinking Agents", "Select your thinking agents for analysis"
"Step 6: LLM Provider", "Select which service to talk to"
)
)
selected_shallow_thinker = select_shallow_thinking_agent()
selected_deep_thinker = select_deep_thinking_agent()
selected_llm_provider, backend_url = select_llm_provider()
# Step 7: Thinking agents
console.print(
create_question_box(
"Step 7: Thinking Agents", "Select your thinking agents for analysis"
)
)
selected_shallow_thinker = select_shallow_thinking_agent(selected_llm_provider)
selected_deep_thinker = select_deep_thinking_agent(selected_llm_provider)
return {
"ticker": selected_ticker,
"analysis_date": analysis_date,
"analysts": selected_analysts,
"research_depth": selected_research_depth,
"llm_provider": selected_llm_provider.lower(),
"backend_url": backend_url,
"shallow_thinker": selected_shallow_thinker,
"deep_thinker": selected_deep_thinker,
"market": selected_market,
}
@@ -683,6 +731,24 @@ def update_research_team_status(status):
for agent in research_team:
message_buffer.update_agent_status(agent, status)
def extract_content_string(content):
"""Extract string content from various message formats."""
if isinstance(content, str):
return content
elif isinstance(content, list):
# Handle Anthropic's list format
text_parts = []
for item in content:
if isinstance(item, dict):
if item.get('type') == 'text':
text_parts.append(item.get('text', ''))
elif item.get('type') == 'tool_use':
text_parts.append(f"[Tool: {item.get('name', 'unknown')}]")
else:
text_parts.append(str(item))
return ' '.join(text_parts)
else:
return str(content)
def run_analysis():
# First get all user selections
@@ -694,12 +760,68 @@ def run_analysis():
config["max_risk_discuss_rounds"] = selections["research_depth"]
config["quick_think_llm"] = selections["shallow_thinker"]
config["deep_think_llm"] = selections["deep_thinker"]
config["backend_url"] = selections["backend_url"]
config["llm_provider"] = selections["llm_provider"].lower()
config["market"] = selections["market"]
# Display market info for NSE stocks
if is_nifty_50_stock(selections["ticker"]):
company_name = NIFTY_50_STOCKS.get(selections["ticker"].replace(".NS", ""), "")
console.print(f"[cyan]Analyzing NSE stock:[/cyan] {selections['ticker']} - {company_name}")
console.print("[dim]Using jugaad-data for NSE stock data, yfinance for fundamentals[/dim]")
# Initialize the graph
graph = TradingAgentsGraph(
[analyst.value for analyst in selections["analysts"]], config=config, debug=True
)
# Create result directory
results_dir = Path(config["results_dir"]) / selections["ticker"] / selections["analysis_date"]
results_dir.mkdir(parents=True, exist_ok=True)
report_dir = results_dir / "reports"
report_dir.mkdir(parents=True, exist_ok=True)
log_file = results_dir / "message_tool.log"
log_file.touch(exist_ok=True)
def save_message_decorator(obj, func_name):
func = getattr(obj, func_name)
@wraps(func)
def wrapper(*args, **kwargs):
func(*args, **kwargs)
timestamp, message_type, content = obj.messages[-1]
content = content.replace("\n", " ") # Replace newlines with spaces
with open(log_file, "a") as f:
f.write(f"{timestamp} [{message_type}] {content}\n")
return wrapper
def save_tool_call_decorator(obj, func_name):
func = getattr(obj, func_name)
@wraps(func)
def wrapper(*args, **kwargs):
func(*args, **kwargs)
timestamp, tool_name, args = obj.tool_calls[-1]
args_str = ", ".join(f"{k}={v}" for k, v in args.items())
with open(log_file, "a") as f:
f.write(f"{timestamp} [Tool Call] {tool_name}({args_str})\n")
return wrapper
def save_report_section_decorator(obj, func_name):
func = getattr(obj, func_name)
@wraps(func)
def wrapper(section_name, content):
func(section_name, content)
if section_name in obj.report_sections and obj.report_sections[section_name] is not None:
content = obj.report_sections[section_name]
if content:
file_name = f"{section_name}.md"
with open(report_dir / file_name, "w") as f:
f.write(content)
return wrapper
message_buffer.add_message = save_message_decorator(message_buffer, "add_message")
message_buffer.add_tool_call = save_tool_call_decorator(message_buffer, "add_tool_call")
message_buffer.update_report_section = save_report_section_decorator(message_buffer, "update_report_section")
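The three decorators above share one pattern: wrap a bound method with `functools.wraps`, call through to the original, then persist whatever entry it just appended. A minimal sketch of that pattern with a hypothetical `Buffer` class and an in-memory log standing in for the log file:

```python
from functools import wraps

class Buffer:
    # Hypothetical stand-in for message_buffer: stores (type, content) tuples.
    def __init__(self):
        self.messages = []

    def add_message(self, msg_type, content):
        self.messages.append((msg_type, content))

log_lines = []

def logging_decorator(obj, func_name):
    """Replace obj.<func_name> with a wrapper that also logs the newest entry."""
    func = getattr(obj, func_name)
    @wraps(func)
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)                # run the original method first
        msg_type, content = obj.messages[-1]  # then log what it just appended
        log_lines.append(f"[{msg_type}] {content}")
    return wrapper

buf = Buffer()
buf.add_message = logging_decorator(buf, "add_message")
buf.add_message("System", "hello")
print(log_lines)  # ['[System] hello']
```

Monkey-patching the instance attribute (rather than the class) keeps the logging scoped to this one analysis run, which is also how the diff wires the decorators onto `message_buffer`.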
# Now start the display layout
layout = create_layout()
@@ -708,10 +830,17 @@ def run_analysis():
update_display(layout)
# Add initial messages
message_buffer.add_message("System", f"Selected ticker: {selections['ticker']}")
ticker_info = selections['ticker']
if is_nifty_50_stock(selections['ticker']):
company_name = NIFTY_50_STOCKS.get(selections['ticker'].replace(".NS", ""), "")
ticker_info = f"{selections['ticker']} ({company_name}) [NSE]"
message_buffer.add_message("System", f"Selected ticker: {ticker_info}")
message_buffer.add_message(
"System", f"Analysis date: {selections['analysis_date']}"
)
message_buffer.add_message(
"System", f"Market: {selections['market'].upper()}"
)
message_buffer.add_message(
"System",
f"Selected analysts: {', '.join(analyst.value for analyst in selections['analysts'])}",
@@ -754,14 +883,14 @@ def run_analysis():
# Extract message content and type
if hasattr(last_message, "content"):
content = last_message.content
content = extract_content_string(last_message.content) # Use the helper function
msg_type = "Reasoning"
else:
content = str(last_message)
msg_type = "System"
# Add message to buffer
message_buffer.add_message(msg_type, content)
message_buffer.add_message(msg_type, content)
# If it's a tool call, add it to tool calls
if hasattr(last_message, "tool_calls"):


@@ -1,7 +1,13 @@
import questionary
from typing import List, Optional, Tuple, Dict
from rich.console import Console
from rich.table import Table
from rich import box
from cli.models import AnalystType
from tradingagents.dataflows.markets import NIFTY_50_STOCKS, is_nifty_50_stock
console = Console()
ANALYST_ORDER = [
("Market Analyst", AnalystType.MARKET),
@@ -122,22 +128,44 @@ def select_research_depth() -> int:
return choice
def select_shallow_thinking_agent() -> str:
def select_shallow_thinking_agent(provider) -> str:
"""Select shallow thinking llm engine using an interactive selection."""
# Define shallow thinking llm engine options with their corresponding model names
SHALLOW_AGENT_OPTIONS = [
("GPT-4o-mini - Fast and efficient for quick tasks", "gpt-4o-mini"),
("GPT-4.1-nano - Ultra-lightweight model for basic operations", "gpt-4.1-nano"),
("GPT-4.1-mini - Compact model with good performance", "gpt-4.1-mini"),
("GPT-4o - Standard model with solid capabilities", "gpt-4o"),
]
SHALLOW_AGENT_OPTIONS = {
"openai": [
("GPT-4o-mini - Fast and efficient for quick tasks", "gpt-4o-mini"),
("GPT-4.1-nano - Ultra-lightweight model for basic operations", "gpt-4.1-nano"),
("GPT-4.1-mini - Compact model with good performance", "gpt-4.1-mini"),
("GPT-4o - Standard model with solid capabilities", "gpt-4o"),
],
"anthropic": [
("Claude Haiku 3.5 - Fast inference and standard capabilities", "claude-3-5-haiku-latest"),
("Claude Sonnet 3.5 - Highly capable standard model", "claude-3-5-sonnet-latest"),
("Claude Sonnet 3.7 - Exceptional hybrid reasoning and agentic capabilities", "claude-3-7-sonnet-latest"),
("Claude Sonnet 4 - High performance and excellent reasoning", "claude-sonnet-4-0"),
],
"google": [
("Gemini 2.0 Flash-Lite - Cost efficiency and low latency", "gemini-2.0-flash-lite"),
("Gemini 2.0 Flash - Next generation features, speed, and thinking", "gemini-2.0-flash"),
("Gemini 2.5 Flash - Adaptive thinking, cost efficiency", "gemini-2.5-flash-preview-05-20"),
],
"openrouter": [
("Meta: Llama 4 Scout", "meta-llama/llama-4-scout:free"),
("Meta: Llama 3.3 8B Instruct - A lightweight and ultra-fast variant of Llama 3.3 70B", "meta-llama/llama-3.3-8b-instruct:free"),
("google/gemini-2.0-flash-exp:free - Gemini Flash 2.0 offers a significantly faster time to first token", "google/gemini-2.0-flash-exp:free"),
],
"ollama": [
("llama3.1 local", "llama3.1"),
("llama3.2 local", "llama3.2"),
]
}
choice = questionary.select(
"Select Your [Quick-Thinking LLM Engine]:",
choices=[
questionary.Choice(display, value=value)
for display, value in SHALLOW_AGENT_OPTIONS
for display, value in SHALLOW_AGENT_OPTIONS[provider.lower()]
],
instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
style=questionary.Style(
@@ -158,25 +186,48 @@ def select_shallow_thinking_agent() -> str:
return choice
def select_deep_thinking_agent() -> str:
def select_deep_thinking_agent(provider) -> str:
"""Select deep thinking llm engine using an interactive selection."""
# Define deep thinking llm engine options with their corresponding model names
DEEP_AGENT_OPTIONS = [
("GPT-4.1-nano - Ultra-lightweight model for basic operations", "gpt-4.1-nano"),
("GPT-4.1-mini - Compact model with good performance", "gpt-4.1-mini"),
("GPT-4o - Standard model with solid capabilities", "gpt-4o"),
("o4-mini - Specialized reasoning model (compact)", "o4-mini"),
("o3-mini - Advanced reasoning model (lightweight)", "o3-mini"),
("o3 - Full advanced reasoning model", "o3"),
("o1 - Premier reasoning and problem-solving model", "o1"),
]
DEEP_AGENT_OPTIONS = {
"openai": [
("GPT-4.1-nano - Ultra-lightweight model for basic operations", "gpt-4.1-nano"),
("GPT-4.1-mini - Compact model with good performance", "gpt-4.1-mini"),
("GPT-4o - Standard model with solid capabilities", "gpt-4o"),
("o4-mini - Specialized reasoning model (compact)", "o4-mini"),
("o3-mini - Advanced reasoning model (lightweight)", "o3-mini"),
("o3 - Full advanced reasoning model", "o3"),
("o1 - Premier reasoning and problem-solving model", "o1"),
],
"anthropic": [
("Claude Haiku 3.5 - Fast inference and standard capabilities", "claude-3-5-haiku-latest"),
("Claude Sonnet 3.5 - Highly capable standard model", "claude-3-5-sonnet-latest"),
("Claude Sonnet 3.7 - Exceptional hybrid reasoning and agentic capabilities", "claude-3-7-sonnet-latest"),
("Claude Sonnet 4 - High performance and excellent reasoning", "claude-sonnet-4-0"),
("Claude Opus 4 - Most powerful Anthropic model", " claude-opus-4-0"),
],
"google": [
("Gemini 2.0 Flash-Lite - Cost efficiency and low latency", "gemini-2.0-flash-lite"),
("Gemini 2.0 Flash - Next generation features, speed, and thinking", "gemini-2.0-flash"),
("Gemini 2.5 Flash - Adaptive thinking, cost efficiency", "gemini-2.5-flash-preview-05-20"),
("Gemini 2.5 Pro", "gemini-2.5-pro-preview-06-05"),
],
"openrouter": [
("DeepSeek V3 - a 685B-parameter, mixture-of-experts model", "deepseek/deepseek-chat-v3-0324:free"),
("Deepseek - latest iteration of the flagship chat model family from the DeepSeek team.", "deepseek/deepseek-chat-v3-0324:free"),
],
"ollama": [
("llama3.1 local", "llama3.1"),
("qwen3", "qwen3"),
]
}
choice = questionary.select(
"Select Your [Deep-Thinking LLM Engine]:",
choices=[
questionary.Choice(display, value=value)
for display, value in DEEP_AGENT_OPTIONS
for display, value in DEEP_AGENT_OPTIONS[provider.lower()]
],
instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
style=questionary.Style(
@@ -193,3 +244,151 @@ def select_deep_thinking_agent() -> str:
exit(1)
return choice
def select_llm_provider() -> tuple[str, str]:
"""Select the OpenAI api url using interactive selection."""
# Define OpenAI api options with their corresponding endpoints
BASE_URLS = [
("OpenAI", "https://api.openai.com/v1"),
("Anthropic", "https://api.anthropic.com/"),
("Google", "https://generativelanguage.googleapis.com/v1"),
("Openrouter", "https://openrouter.ai/api/v1"),
("Ollama", "http://localhost:11434/v1"),
]
choice = questionary.select(
"Select your LLM Provider:",
choices=[
questionary.Choice(display, value=(display, value))
for display, value in BASE_URLS
],
instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
style=questionary.Style(
[
("selected", "fg:magenta noinherit"),
("highlighted", "fg:magenta noinherit"),
("pointer", "fg:magenta noinherit"),
]
),
).ask()
if choice is None:
console.print("\n[red]no OpenAI backend selected. Exiting...[/red]")
exit(1)
display_name, url = choice
print(f"You selected: {display_name}\tURL: {url}")
return display_name, url
def select_market() -> str:
"""Select market using an interactive selection."""
MARKET_OPTIONS = [
("Auto-detect (Recommended)", "auto"),
("US Markets (NYSE, NASDAQ)", "us"),
("Indian NSE (Nifty 50)", "india_nse"),
]
choice = questionary.select(
"Select Your [Market]:",
choices=[
questionary.Choice(display, value=value)
for display, value in MARKET_OPTIONS
],
instruction="\n- Use arrow keys to navigate\n- Press Enter to select",
style=questionary.Style(
[
("selected", "fg:cyan noinherit"),
("highlighted", "fg:cyan noinherit"),
("pointer", "fg:cyan noinherit"),
]
),
).ask()
if choice is None:
console.print("\n[red]No market selected. Exiting...[/red]")
exit(1)
return choice
def display_nifty_50_stocks():
"""Display the list of Nifty 50 stocks in a formatted table."""
table = Table(
title="Nifty 50 Stocks",
box=box.ROUNDED,
show_header=True,
header_style="bold cyan",
)
table.add_column("Symbol", style="green", width=15)
table.add_column("Company Name", style="white", width=45)
# Sort stocks alphabetically
sorted_stocks = sorted(NIFTY_50_STOCKS.items())
for symbol, company_name in sorted_stocks:
table.add_row(symbol, company_name)
console.print(table)
console.print()
def show_nifty_50_stocks() -> bool:
"""Ask user if they want to see Nifty 50 stocks list."""
show = questionary.confirm(
"Would you like to see the list of Nifty 50 stocks?",
default=False,
style=questionary.Style(
[
("selected", "fg:cyan noinherit"),
("highlighted", "fg:cyan noinherit"),
]
),
).ask()
if show:
display_nifty_50_stocks()
return show
def get_ticker_with_market_hint(market: str) -> str:
"""Get ticker symbol with market-specific hints."""
if market == "india_nse":
hint = "Enter NSE symbol (e.g., RELIANCE, TCS, INFY)"
default = "RELIANCE"
elif market == "us":
hint = "Enter US ticker symbol (e.g., AAPL, GOOGL, MSFT)"
default = "SPY"
else:
hint = "Enter ticker symbol (auto-detects market)"
default = "SPY"
ticker = questionary.text(
hint + ":",
default=default,
validate=lambda x: len(x.strip()) > 0 or "Please enter a valid ticker symbol.",
style=questionary.Style(
[
("text", "fg:green"),
("highlighted", "noinherit"),
]
),
).ask()
if not ticker:
console.print("\n[red]No ticker symbol provided. Exiting...[/red]")
exit(1)
ticker = ticker.strip().upper()
# Provide feedback for NSE stocks
if is_nifty_50_stock(ticker):
company_name = NIFTY_50_STOCKS.get(ticker.replace(".NS", ""), "")
if company_name:
console.print(f"[green]Detected NSE stock:[/green] {ticker} - {company_name}")
return ticker

frontend/.gitignore vendored Normal file

@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

frontend/README.md Normal file

@@ -0,0 +1,210 @@
# Nifty50 AI Trading Dashboard
A modern, feature-rich frontend for the TradingAgents multi-agent AI stock analysis system. This dashboard provides real-time AI-powered recommendations for all 50 stocks in the Nifty 50 index, with full visibility into the analysis pipeline, agent reports, and debate processes.
## Features Overview
### Dashboard - Main View
The main dashboard displays AI recommendations for all 50 Nifty stocks with:
- **Summary Statistics**: Quick view of Buy/Hold/Sell distribution
- **Top Picks**: Highlighted stocks with the strongest buy signals
- **Stocks to Avoid**: High-confidence sell recommendations
- **Analyze All**: One-click bulk analysis of all stocks
- **Filter & Search**: Filter by recommendation type or search by symbol
![Dashboard](docs/screenshots/01-dashboard.png)
### Dark Mode Support
Full dark mode support with automatic system theme detection:
![Dashboard Dark Mode](docs/screenshots/08-dashboard-dark-mode.png)
### Settings Panel
Configure the AI analysis system directly from the browser:
- **LLM Provider Selection**: Choose between Claude Subscription and Anthropic API
- **API Key Management**: Securely store API keys in browser localStorage
- **Model Selection**: Configure Deep Think (Opus) and Quick Think (Sonnet/Haiku) models
- **Analysis Settings**: Adjust max debate rounds for thoroughness vs speed
![Settings Modal](docs/screenshots/02-settings-modal.png)
### Stock Detail View
Detailed analysis view for individual stocks with:
- **Price Chart**: Interactive price history with buy/sell/hold signal markers
- **Recommendation Details**: Decision, confidence level, and risk assessment
- **Recommendation History**: Historical AI decisions for the stock
- **AI Analysis Summary**: Expandable detailed analysis sections
![Stock Detail Overview](docs/screenshots/03-stock-detail-overview.png)
### Analysis Pipeline Visualization
See exactly how the AI reached its decision with the full analysis pipeline:
- **9-Step Pipeline**: Track progress through data collection, analysis, debates, and final decision
- **Agent Reports**: View individual reports from Market, News, Social Media, and Fundamentals analysts
- **Real-time Status**: See which steps are completed, running, or pending
![Analysis Pipeline](docs/screenshots/04-analysis-pipeline.png)
### Investment Debates
The AI uses a debate system where Bull and Bear analysts argue their cases:
- **Bull vs Bear**: Opposing viewpoints with detailed arguments
- **Research Manager Decision**: Final judgment weighing both sides
- **Full Debate History**: Complete transcript of the debate rounds
![Debates Tab](docs/screenshots/05-debates-tab.png)
#### Expanded Debate View
Full debate content with Bull and Bear arguments:
![Investment Debate Expanded](docs/screenshots/06-investment-debate-expanded.png)
### Data Sources Tracking
View all raw data sources used for analysis:
- **Source Types**: Market data, news, fundamentals, social media
- **Fetch Status**: Success/failure indicators for each data source
- **Data Preview**: Expandable view of fetched data
![Data Sources Tab](docs/screenshots/07-data-sources-tab.png)
### How It Works Page
Educational content explaining the multi-agent AI system:
- **Multi-Agent Architecture**: Overview of the specialized AI agents
- **Analysis Process**: Step-by-step breakdown of the pipeline
- **Agent Profiles**: Details about each analyst type
- **Debate Process**: Explanation of how consensus is reached
![How It Works](docs/screenshots/09-how-it-works.png)
### Historical Analysis & Backtesting
Track AI performance over time with comprehensive analytics:
- **Prediction Accuracy**: Overall and per-recommendation-type accuracy
- **Accuracy Trend**: Visualize accuracy over time
- **Risk Metrics**: Sharpe ratio, max drawdown, win rate
- **Portfolio Simulator**: Test different investment amounts
- **AI vs Nifty50**: Compare AI strategy performance against the index
- **Return Distribution**: Histogram of next-day returns
![History Page](docs/screenshots/10-history-page.png)
## Tech Stack
- **Frontend**: React 18 + TypeScript + Vite
- **Styling**: Tailwind CSS with dark mode support
- **Charts**: Recharts for interactive visualizations
- **Icons**: Lucide React
- **State Management**: React Context API
- **Backend**: FastAPI (Python) with SQLite database
## Getting Started
### Prerequisites
- Node.js 18+
- Python 3.10+
- npm or yarn
### Installation
1. **Install frontend dependencies:**
```bash
cd frontend
npm install
```
2. **Install backend dependencies:**
```bash
cd frontend/backend
pip install -r requirements.txt
```
### Running the Application
1. **Start the backend server:**
```bash
cd frontend/backend
python server.py
```
The backend runs on `http://localhost:8001`
2. **Start the frontend development server:**
```bash
cd frontend
npm run dev
```
The frontend runs on `http://localhost:5173`
## Project Structure
```
frontend/
├── src/
│ ├── components/
│ │ ├── pipeline/ # Pipeline visualization components
│ │ │ ├── PipelineOverview.tsx
│ │ │ ├── AgentReportCard.tsx
│ │ │ ├── DebateViewer.tsx
│ │ │ ├── RiskDebateViewer.tsx
│ │ │ └── DataSourcesPanel.tsx
│ │ ├── Header.tsx
│ │ ├── SettingsModal.tsx
│ │ └── ...
│ ├── contexts/
│ │ └── SettingsContext.tsx # Settings state management
│ ├── pages/
│ │ ├── Dashboard.tsx
│ │ ├── StockDetail.tsx
│ │ ├── History.tsx
│ │ └── About.tsx
│ ├── services/
│ │ └── api.ts # API client
│ ├── types/
│ │ └── pipeline.ts # TypeScript types for pipeline data
│ └── App.tsx
├── backend/
│ ├── server.py # FastAPI server
│ ├── database.py # SQLite database operations
│ └── recommendations.db # SQLite database
└── docs/
└── screenshots/ # Feature screenshots
```
## API Endpoints
### Recommendations
- `GET /recommendations/{date}` - Get all recommendations for a date
- `GET /recommendations/{date}/{symbol}` - Get recommendation for a specific stock
- `POST /recommendations` - Save new recommendations
### Pipeline Data
- `GET /recommendations/{date}/{symbol}/pipeline` - Get full pipeline data
- `GET /recommendations/{date}/{symbol}/agents` - Get agent reports
- `GET /recommendations/{date}/{symbol}/debates` - Get debate history
- `GET /recommendations/{date}/{symbol}/data-sources` - Get data source logs
### Analysis
- `POST /analyze/{symbol}` - Run analysis for a single stock
- `POST /analyze-bulk` - Run analysis for multiple stocks
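As a quick illustration, the pipeline-data endpoints above differ only in their final path segment, so a client can build them from one helper. A hedged sketch (the `pipeline_url` helper is hypothetical, not part of the shipped API client; the base URL is the default from the Quick Start):

```python
BASE_URL = "http://localhost:8001"  # default backend address from the Quick Start

def pipeline_url(date: str, symbol: str, resource: str = "pipeline") -> str:
    """Build one of the pipeline-data endpoint URLs listed above."""
    assert resource in {"pipeline", "agents", "debates", "data-sources"}
    return f"{BASE_URL}/recommendations/{date}/{symbol}/{resource}"

print(pipeline_url("2026-02-01", "RELIANCE"))
# http://localhost:8001/recommendations/2026-02-01/RELIANCE/pipeline
print(pipeline_url("2026-02-01", "TCS", "debates"))
# http://localhost:8001/recommendations/2026-02-01/TCS/debates
```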
## Configuration
Settings are stored in browser localStorage and include:
- `deepThinkModel`: Model for complex analysis (opus/sonnet/haiku)
- `quickThinkModel`: Model for fast operations (opus/sonnet/haiku)
- `provider`: LLM provider (claude_subscription/anthropic_api)
- `anthropicApiKey`: API key for Anthropic API provider
- `maxDebateRounds`: Number of debate rounds (1-5)
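Because these values come from browser localStorage, the backend cannot trust their types or ranges. A minimal sketch of server-side normalization, assuming hypothetical defaults (only the documented `1-5` round range and the two provider names come from this README):

```python
# Hedged sketch: validate settings received from localStorage.
# The default values here are assumptions, not the shipped defaults.
VALID_PROVIDERS = {"claude_subscription", "anthropic_api"}

def normalize_settings(raw: dict) -> dict:
    settings = {
        "provider": raw.get("provider", "claude_subscription"),
        "deepThinkModel": raw.get("deepThinkModel", "opus"),
        "quickThinkModel": raw.get("quickThinkModel", "haiku"),
        "maxDebateRounds": int(raw.get("maxDebateRounds", 2)),
    }
    if settings["provider"] not in VALID_PROVIDERS:
        raise ValueError(f"unknown provider: {settings['provider']}")
    # Clamp debate rounds to the documented 1-5 range.
    settings["maxDebateRounds"] = max(1, min(5, settings["maxDebateRounds"]))
    return settings

print(normalize_settings({"maxDebateRounds": 9}))
```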
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run tests and linting
5. Submit a pull request
## License
This project is part of the TradingAgents research project.
## Disclaimer
AI-generated recommendations are for educational and informational purposes only. These do not constitute financial advice. Always conduct your own research and consult with a qualified financial advisor before making investment decisions.


@@ -0,0 +1,702 @@
"""SQLite database module for storing stock recommendations."""
import sqlite3
import json
from pathlib import Path
from datetime import datetime
from typing import Optional
DB_PATH = Path(__file__).parent / "recommendations.db"
def get_connection():
"""Get SQLite database connection."""
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
return conn
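Setting `row_factory = sqlite3.Row` is what lets every query helper below index rows by column name instead of position. A minimal in-memory demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows now support row["column"] access
conn.execute("CREATE TABLE t (symbol TEXT, decision TEXT)")
conn.execute("INSERT INTO t VALUES ('TCS', 'BUY')")
row = conn.execute("SELECT * FROM t").fetchone()
print(row["symbol"], row["decision"])  # TCS BUY
conn.close()
```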
def init_db():
"""Initialize the database with required tables."""
conn = get_connection()
cursor = conn.cursor()
# Create recommendations table
cursor.execute("""
CREATE TABLE IF NOT EXISTS daily_recommendations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
date TEXT UNIQUE NOT NULL,
summary_total INTEGER,
summary_buy INTEGER,
summary_sell INTEGER,
summary_hold INTEGER,
top_picks TEXT,
stocks_to_avoid TEXT,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
)
""")
# Create stock analysis table
cursor.execute("""
CREATE TABLE IF NOT EXISTS stock_analysis (
id INTEGER PRIMARY KEY AUTOINCREMENT,
date TEXT NOT NULL,
symbol TEXT NOT NULL,
company_name TEXT,
decision TEXT,
confidence TEXT,
risk TEXT,
raw_analysis TEXT,
created_at TEXT DEFAULT CURRENT_TIMESTAMP,
UNIQUE(date, symbol)
)
""")
# Create index for faster queries
cursor.execute("""
CREATE INDEX IF NOT EXISTS idx_stock_analysis_date ON stock_analysis(date)
""")
cursor.execute("""
CREATE INDEX IF NOT EXISTS idx_stock_analysis_symbol ON stock_analysis(symbol)
""")
# Create agent_reports table (stores each analyst's detailed report)
cursor.execute("""
CREATE TABLE IF NOT EXISTS agent_reports (
id INTEGER PRIMARY KEY AUTOINCREMENT,
date TEXT NOT NULL,
symbol TEXT NOT NULL,
agent_type TEXT NOT NULL,
report_content TEXT,
data_sources_used TEXT,
created_at TEXT DEFAULT CURRENT_TIMESTAMP,
UNIQUE(date, symbol, agent_type)
)
""")
# Create debate_history table (stores investment and risk debates)
cursor.execute("""
CREATE TABLE IF NOT EXISTS debate_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
date TEXT NOT NULL,
symbol TEXT NOT NULL,
debate_type TEXT NOT NULL,
bull_arguments TEXT,
bear_arguments TEXT,
risky_arguments TEXT,
safe_arguments TEXT,
neutral_arguments TEXT,
judge_decision TEXT,
full_history TEXT,
created_at TEXT DEFAULT CURRENT_TIMESTAMP,
UNIQUE(date, symbol, debate_type)
)
""")
# Create pipeline_steps table (stores step-by-step execution log)
cursor.execute("""
CREATE TABLE IF NOT EXISTS pipeline_steps (
id INTEGER PRIMARY KEY AUTOINCREMENT,
date TEXT NOT NULL,
symbol TEXT NOT NULL,
step_number INTEGER,
step_name TEXT,
status TEXT,
started_at TEXT,
completed_at TEXT,
duration_ms INTEGER,
output_summary TEXT,
UNIQUE(date, symbol, step_number)
)
""")
# Create data_source_logs table (stores what raw data was fetched)
cursor.execute("""
CREATE TABLE IF NOT EXISTS data_source_logs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
date TEXT NOT NULL,
symbol TEXT NOT NULL,
source_type TEXT,
source_name TEXT,
data_fetched TEXT,
fetch_timestamp TEXT,
success INTEGER DEFAULT 1,
error_message TEXT
)
""")
# Create indexes for new tables
cursor.execute("""
CREATE INDEX IF NOT EXISTS idx_agent_reports_date_symbol ON agent_reports(date, symbol)
""")
cursor.execute("""
CREATE INDEX IF NOT EXISTS idx_debate_history_date_symbol ON debate_history(date, symbol)
""")
cursor.execute("""
CREATE INDEX IF NOT EXISTS idx_pipeline_steps_date_symbol ON pipeline_steps(date, symbol)
""")
cursor.execute("""
CREATE INDEX IF NOT EXISTS idx_data_source_logs_date_symbol ON data_source_logs(date, symbol)
""")
conn.commit()
conn.close()
def save_recommendation(date: str, analysis_data: dict, summary: dict,
top_picks: list, stocks_to_avoid: list):
"""Save a daily recommendation to the database."""
conn = get_connection()
cursor = conn.cursor()
try:
# Insert or replace daily recommendation
cursor.execute("""
INSERT OR REPLACE INTO daily_recommendations
(date, summary_total, summary_buy, summary_sell, summary_hold, top_picks, stocks_to_avoid)
VALUES (?, ?, ?, ?, ?, ?, ?)
""", (
date,
summary.get('total', 0),
summary.get('buy', 0),
summary.get('sell', 0),
summary.get('hold', 0),
json.dumps(top_picks),
json.dumps(stocks_to_avoid)
))
# Insert stock analysis for each stock
for symbol, analysis in analysis_data.items():
cursor.execute("""
INSERT OR REPLACE INTO stock_analysis
(date, symbol, company_name, decision, confidence, risk, raw_analysis)
VALUES (?, ?, ?, ?, ?, ?, ?)
""", (
date,
symbol,
analysis.get('company_name', ''),
analysis.get('decision'),
analysis.get('confidence'),
analysis.get('risk'),
analysis.get('raw_analysis', '')
))
conn.commit()
finally:
conn.close()
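`INSERT OR REPLACE` combined with the `UNIQUE(date)` constraint is what makes re-running an analysis for the same date idempotent: the existing row is replaced rather than duplicated or rejected. An in-memory sketch of that upsert behavior on a cut-down version of the table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE daily (
        date TEXT UNIQUE NOT NULL,
        summary_buy INTEGER
    )
""")
conn.execute("INSERT OR REPLACE INTO daily (date, summary_buy) VALUES (?, ?)",
             ("2026-02-01", 10))
# Re-running the same date replaces the row instead of raising IntegrityError.
conn.execute("INSERT OR REPLACE INTO daily (date, summary_buy) VALUES (?, ?)",
             ("2026-02-01", 12))
rows = conn.execute("SELECT date, summary_buy FROM daily").fetchall()
print(rows)  # [('2026-02-01', 12)]
conn.close()
```

Note that `INSERT OR REPLACE` deletes and re-inserts the conflicting row, so any columns not supplied (like `created_at` here) are reset to their defaults rather than preserved.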
def get_recommendation_by_date(date: str) -> Optional[dict]:
"""Get recommendation for a specific date."""
conn = get_connection()
cursor = conn.cursor()
try:
# Get daily summary
cursor.execute("""
SELECT * FROM daily_recommendations WHERE date = ?
""", (date,))
row = cursor.fetchone()
if not row:
return None
# Get stock analysis for this date
cursor.execute("""
SELECT * FROM stock_analysis WHERE date = ?
""", (date,))
analysis_rows = cursor.fetchall()
analysis = {}
for a in analysis_rows:
analysis[a['symbol']] = {
'symbol': a['symbol'],
'company_name': a['company_name'],
'decision': a['decision'],
'confidence': a['confidence'],
'risk': a['risk'],
'raw_analysis': a['raw_analysis']
}
return {
'date': row['date'],
'analysis': analysis,
'summary': {
'total': row['summary_total'],
'buy': row['summary_buy'],
'sell': row['summary_sell'],
'hold': row['summary_hold']
},
'top_picks': json.loads(row['top_picks']) if row['top_picks'] else [],
'stocks_to_avoid': json.loads(row['stocks_to_avoid']) if row['stocks_to_avoid'] else []
}
finally:
conn.close()
def get_latest_recommendation() -> Optional[dict]:
"""Get the most recent recommendation."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
SELECT date FROM daily_recommendations ORDER BY date DESC LIMIT 1
""")
row = cursor.fetchone()
if not row:
return None
return get_recommendation_by_date(row['date'])
finally:
conn.close()
def get_all_dates() -> list:
"""Get all available dates."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
SELECT date FROM daily_recommendations ORDER BY date DESC
""")
return [row['date'] for row in cursor.fetchall()]
finally:
conn.close()
def get_stock_history(symbol: str) -> list:
"""Get historical recommendations for a specific stock."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
SELECT date, decision, confidence, risk
FROM stock_analysis
WHERE symbol = ?
ORDER BY date DESC
""", (symbol,))
return [
{
'date': row['date'],
'decision': row['decision'],
'confidence': row['confidence'],
'risk': row['risk']
}
for row in cursor.fetchall()
]
finally:
conn.close()
def get_all_recommendations() -> list:
"""Get all daily recommendations."""
dates = get_all_dates()
return [get_recommendation_by_date(date) for date in dates]
# ============== Pipeline Data Functions ==============
def save_agent_report(date: str, symbol: str, agent_type: str,
report_content: str, data_sources_used: Optional[list] = None):
"""Save an individual agent's report."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
INSERT OR REPLACE INTO agent_reports
(date, symbol, agent_type, report_content, data_sources_used)
VALUES (?, ?, ?, ?, ?)
""", (
date, symbol, agent_type, report_content,
json.dumps(data_sources_used) if data_sources_used else '[]'
))
conn.commit()
finally:
conn.close()
def save_agent_reports_bulk(date: str, symbol: str, reports: dict):
"""Save all agent reports for a stock at once.
Args:
date: Date string (YYYY-MM-DD)
symbol: Stock symbol
reports: Dict with keys 'market', 'news', 'social_media', 'fundamentals'
"""
conn = get_connection()
cursor = conn.cursor()
try:
for agent_type, report_data in reports.items():
if isinstance(report_data, str):
report_content = report_data
data_sources = []
else:
report_content = report_data.get('content', '')
data_sources = report_data.get('data_sources', [])
cursor.execute("""
INSERT OR REPLACE INTO agent_reports
(date, symbol, agent_type, report_content, data_sources_used)
VALUES (?, ?, ?, ?, ?)
""", (date, symbol, agent_type, report_content, json.dumps(data_sources)))
conn.commit()
finally:
conn.close()
def get_agent_reports(date: str, symbol: str) -> dict:
"""Get all agent reports for a stock on a date."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
SELECT agent_type, report_content, data_sources_used, created_at
FROM agent_reports
WHERE date = ? AND symbol = ?
""", (date, symbol))
reports = {}
for row in cursor.fetchall():
reports[row['agent_type']] = {
'agent_type': row['agent_type'],
'report_content': row['report_content'],
'data_sources_used': json.loads(row['data_sources_used']) if row['data_sources_used'] else [],
'created_at': row['created_at']
}
return reports
finally:
conn.close()
def save_debate_history(date: str, symbol: str, debate_type: str,
bull_arguments: Optional[str] = None, bear_arguments: Optional[str] = None,
risky_arguments: Optional[str] = None, safe_arguments: Optional[str] = None,
neutral_arguments: Optional[str] = None, judge_decision: Optional[str] = None,
full_history: Optional[str] = None):
"""Save debate history for investment or risk debate."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
INSERT OR REPLACE INTO debate_history
(date, symbol, debate_type, bull_arguments, bear_arguments,
risky_arguments, safe_arguments, neutral_arguments,
judge_decision, full_history)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
date, symbol, debate_type,
bull_arguments, bear_arguments,
risky_arguments, safe_arguments, neutral_arguments,
judge_decision, full_history
))
conn.commit()
finally:
conn.close()
def get_debate_history(date: str, symbol: str) -> dict:
"""Get all debate history for a stock on a date."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
SELECT * FROM debate_history
WHERE date = ? AND symbol = ?
""", (date, symbol))
debates = {}
for row in cursor.fetchall():
debates[row['debate_type']] = {
'debate_type': row['debate_type'],
'bull_arguments': row['bull_arguments'],
'bear_arguments': row['bear_arguments'],
'risky_arguments': row['risky_arguments'],
'safe_arguments': row['safe_arguments'],
'neutral_arguments': row['neutral_arguments'],
'judge_decision': row['judge_decision'],
'full_history': row['full_history'],
'created_at': row['created_at']
}
return debates
finally:
conn.close()
def save_pipeline_step(date: str, symbol: str, step_number: int, step_name: str,
status: str, started_at: Optional[str] = None, completed_at: Optional[str] = None,
duration_ms: Optional[int] = None, output_summary: Optional[str] = None):
"""Save a pipeline step status."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
INSERT OR REPLACE INTO pipeline_steps
(date, symbol, step_number, step_name, status,
started_at, completed_at, duration_ms, output_summary)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
date, symbol, step_number, step_name, status,
started_at, completed_at, duration_ms, output_summary
))
conn.commit()
finally:
conn.close()
def save_pipeline_steps_bulk(date: str, symbol: str, steps: list):
"""Save all pipeline steps at once.
Args:
date: Date string
symbol: Stock symbol
steps: List of step dicts with step_number, step_name, status, etc.
"""
conn = get_connection()
cursor = conn.cursor()
try:
for step in steps:
cursor.execute("""
INSERT OR REPLACE INTO pipeline_steps
(date, symbol, step_number, step_name, status,
started_at, completed_at, duration_ms, output_summary)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
date, symbol,
step.get('step_number'),
step.get('step_name'),
step.get('status'),
step.get('started_at'),
step.get('completed_at'),
step.get('duration_ms'),
step.get('output_summary')
))
conn.commit()
finally:
conn.close()
def get_pipeline_steps(date: str, symbol: str) -> list:
"""Get all pipeline steps for a stock on a date."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
SELECT * FROM pipeline_steps
WHERE date = ? AND symbol = ?
ORDER BY step_number
""", (date, symbol))
return [
{
'step_number': row['step_number'],
'step_name': row['step_name'],
'status': row['status'],
'started_at': row['started_at'],
'completed_at': row['completed_at'],
'duration_ms': row['duration_ms'],
'output_summary': row['output_summary']
}
for row in cursor.fetchall()
]
finally:
conn.close()
def save_data_source_log(date: str, symbol: str, source_type: str,
source_name: str, data_fetched: dict = None,
fetch_timestamp: str = None, success: bool = True,
error_message: str = None):
"""Log a data source fetch."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
INSERT INTO data_source_logs
(date, symbol, source_type, source_name, data_fetched,
fetch_timestamp, success, error_message)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""", (
date, symbol, source_type, source_name,
json.dumps(data_fetched) if data_fetched else None,
fetch_timestamp or datetime.now().isoformat(),
1 if success else 0,
error_message
))
conn.commit()
finally:
conn.close()
def save_data_source_logs_bulk(date: str, symbol: str, logs: list):
"""Save multiple data source logs at once."""
conn = get_connection()
cursor = conn.cursor()
try:
for log in logs:
cursor.execute("""
INSERT INTO data_source_logs
(date, symbol, source_type, source_name, data_fetched,
fetch_timestamp, success, error_message)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""", (
date, symbol,
log.get('source_type'),
log.get('source_name'),
json.dumps(log.get('data_fetched')) if log.get('data_fetched') else None,
log.get('fetch_timestamp') or datetime.now().isoformat(),
1 if log.get('success', True) else 0,
log.get('error_message')
))
conn.commit()
finally:
conn.close()
def get_data_source_logs(date: str, symbol: str) -> list:
"""Get all data source logs for a stock on a date."""
conn = get_connection()
cursor = conn.cursor()
try:
cursor.execute("""
SELECT * FROM data_source_logs
WHERE date = ? AND symbol = ?
ORDER BY fetch_timestamp
""", (date, symbol))
return [
{
'source_type': row['source_type'],
'source_name': row['source_name'],
'data_fetched': json.loads(row['data_fetched']) if row['data_fetched'] else None,
'fetch_timestamp': row['fetch_timestamp'],
'success': bool(row['success']),
'error_message': row['error_message']
}
for row in cursor.fetchall()
]
finally:
conn.close()
def get_full_pipeline_data(date: str, symbol: str) -> dict:
"""Get complete pipeline data for a stock on a date."""
return {
'date': date,
'symbol': symbol,
'agent_reports': get_agent_reports(date, symbol),
'debates': get_debate_history(date, symbol),
'pipeline_steps': get_pipeline_steps(date, symbol),
'data_sources': get_data_source_logs(date, symbol)
}
def save_full_pipeline_data(date: str, symbol: str, pipeline_data: dict):
"""Save complete pipeline data for a stock.
Args:
date: Date string
symbol: Stock symbol
pipeline_data: Dict containing agent_reports, debates, pipeline_steps, data_sources
"""
if 'agent_reports' in pipeline_data:
save_agent_reports_bulk(date, symbol, pipeline_data['agent_reports'])
if 'investment_debate' in pipeline_data:
debate = pipeline_data['investment_debate']
save_debate_history(
date, symbol, 'investment',
bull_arguments=debate.get('bull_history'),
bear_arguments=debate.get('bear_history'),
judge_decision=debate.get('judge_decision'),
full_history=debate.get('history')
)
if 'risk_debate' in pipeline_data:
debate = pipeline_data['risk_debate']
save_debate_history(
date, symbol, 'risk',
risky_arguments=debate.get('risky_history'),
safe_arguments=debate.get('safe_history'),
neutral_arguments=debate.get('neutral_history'),
judge_decision=debate.get('judge_decision'),
full_history=debate.get('history')
)
if 'pipeline_steps' in pipeline_data:
save_pipeline_steps_bulk(date, symbol, pipeline_data['pipeline_steps'])
if 'data_sources' in pipeline_data:
save_data_source_logs_bulk(date, symbol, pipeline_data['data_sources'])
def get_pipeline_summary_for_date(date: str) -> list:
"""Get pipeline summary for all stocks on a date."""
conn = get_connection()
cursor = conn.cursor()
try:
# Get all symbols for this date
cursor.execute("""
SELECT DISTINCT symbol FROM stock_analysis WHERE date = ?
""", (date,))
symbols = [row['symbol'] for row in cursor.fetchall()]
# Batch fetch all pipeline steps for the date (avoids N+1)
cursor.execute("""
SELECT symbol, step_name, status FROM pipeline_steps
WHERE date = ?
ORDER BY symbol, step_number
""", (date,))
all_steps = cursor.fetchall()
steps_by_symbol = {}
for row in all_steps:
if row['symbol'] not in steps_by_symbol:
steps_by_symbol[row['symbol']] = []
steps_by_symbol[row['symbol']].append({'step_name': row['step_name'], 'status': row['status']})
# Batch fetch agent report counts (avoids N+1)
cursor.execute("""
SELECT symbol, COUNT(*) as count FROM agent_reports
WHERE date = ?
GROUP BY symbol
""", (date,))
agent_counts = {row['symbol']: row['count'] for row in cursor.fetchall()}
# Batch fetch debates existence (avoids N+1)
cursor.execute("""
SELECT DISTINCT symbol FROM debate_history WHERE date = ?
""", (date,))
symbols_with_debates = {row['symbol'] for row in cursor.fetchall()}
summaries = []
for symbol in symbols:
summaries.append({
'symbol': symbol,
'pipeline_steps': steps_by_symbol.get(symbol, []),
'agent_reports_count': agent_counts.get(symbol, 0),
'has_debates': symbol in symbols_with_debates
})
return summaries
finally:
conn.close()
# Initialize database on module import
init_db()
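The step writers above lean on SQLite's `INSERT OR REPLACE` to make saves idempotent, which only works if `pipeline_steps` carries a uniqueness constraint on `(date, symbol, step_number)` (implied by the bulk writer, not shown in this chunk). A minimal self-contained sketch of that upsert pattern against a throwaway in-memory table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("""
    CREATE TABLE pipeline_steps (
        date TEXT, symbol TEXT, step_number INTEGER,
        step_name TEXT, status TEXT,
        PRIMARY KEY (date, symbol, step_number)
    )
""")

def save_step(date, symbol, number, name, status):
    # Same shape as save_pipeline_step: re-running with the same
    # (date, symbol, step_number) replaces the old row instead of
    # accumulating duplicates.
    conn.execute(
        "INSERT OR REPLACE INTO pipeline_steps VALUES (?, ?, ?, ?, ?)",
        (date, symbol, number, name, status),
    )
    conn.commit()

save_step("2025-01-30", "RELIANCE", 1, "market_analysis", "running")
save_step("2025-01-30", "RELIANCE", 1, "market_analysis", "completed")
rows = conn.execute(
    "SELECT status FROM pipeline_steps WHERE date = ? AND symbol = ?",
    ("2025-01-30", "RELIANCE"),
).fetchall()
print(len(rows), rows[0]["status"])  # 1 completed
```

Without the primary key, `INSERT OR REPLACE` degrades to a plain insert and re-running an analysis would duplicate every step row.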

Binary file not shown.


@@ -0,0 +1,3 @@
fastapi>=0.109.0
uvicorn>=0.27.0
pydantic>=2.0.0


@@ -0,0 +1,135 @@
"""Seed the database with sample data from the Jan 30, 2025 analysis."""
import database as db
# Sample data from the Jan 30, 2025 analysis
SAMPLE_DATA = {
"date": "2025-01-30",
"analysis": {
"RELIANCE": {"symbol": "RELIANCE", "company_name": "Reliance Industries Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"TCS": {"symbol": "TCS", "company_name": "Tata Consultancy Services Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"HDFCBANK": {"symbol": "HDFCBANK", "company_name": "HDFC Bank Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"INFY": {"symbol": "INFY", "company_name": "Infosys Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"ICICIBANK": {"symbol": "ICICIBANK", "company_name": "ICICI Bank Ltd", "decision": "BUY", "confidence": "MEDIUM", "risk": "MEDIUM"},
"HINDUNILVR": {"symbol": "HINDUNILVR", "company_name": "Hindustan Unilever Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"ITC": {"symbol": "ITC", "company_name": "ITC Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"SBIN": {"symbol": "SBIN", "company_name": "State Bank of India", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"BHARTIARTL": {"symbol": "BHARTIARTL", "company_name": "Bharti Airtel Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"KOTAKBANK": {"symbol": "KOTAKBANK", "company_name": "Kotak Mahindra Bank Ltd", "decision": "BUY", "confidence": "MEDIUM", "risk": "MEDIUM"},
"LT": {"symbol": "LT", "company_name": "Larsen & Toubro Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"AXISBANK": {"symbol": "AXISBANK", "company_name": "Axis Bank Ltd", "decision": "SELL", "confidence": "HIGH", "risk": "HIGH"},
"ASIANPAINT": {"symbol": "ASIANPAINT", "company_name": "Asian Paints Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"MARUTI": {"symbol": "MARUTI", "company_name": "Maruti Suzuki India Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"HCLTECH": {"symbol": "HCLTECH", "company_name": "HCL Technologies Ltd", "decision": "SELL", "confidence": "MEDIUM", "risk": "HIGH"},
"SUNPHARMA": {"symbol": "SUNPHARMA", "company_name": "Sun Pharmaceutical Industries Ltd", "decision": "SELL", "confidence": "MEDIUM", "risk": "MEDIUM"},
"TITAN": {"symbol": "TITAN", "company_name": "Titan Company Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"BAJFINANCE": {"symbol": "BAJFINANCE", "company_name": "Bajaj Finance Ltd", "decision": "BUY", "confidence": "HIGH", "risk": "MEDIUM"},
"WIPRO": {"symbol": "WIPRO", "company_name": "Wipro Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"ULTRACEMCO": {"symbol": "ULTRACEMCO", "company_name": "UltraTech Cement Ltd", "decision": "BUY", "confidence": "MEDIUM", "risk": "MEDIUM"},
"NESTLEIND": {"symbol": "NESTLEIND", "company_name": "Nestle India Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"NTPC": {"symbol": "NTPC", "company_name": "NTPC Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"POWERGRID": {"symbol": "POWERGRID", "company_name": "Power Grid Corporation of India Ltd", "decision": "SELL", "confidence": "MEDIUM", "risk": "MEDIUM"},
"M&M": {"symbol": "M&M", "company_name": "Mahindra & Mahindra Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"TATAMOTORS": {"symbol": "TATAMOTORS", "company_name": "Tata Motors Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"ONGC": {"symbol": "ONGC", "company_name": "Oil & Natural Gas Corporation Ltd", "decision": "SELL", "confidence": "MEDIUM", "risk": "HIGH"},
"JSWSTEEL": {"symbol": "JSWSTEEL", "company_name": "JSW Steel Ltd", "decision": "BUY", "confidence": "MEDIUM", "risk": "MEDIUM"},
"TATASTEEL": {"symbol": "TATASTEEL", "company_name": "Tata Steel Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"ADANIENT": {"symbol": "ADANIENT", "company_name": "Adani Enterprises Ltd", "decision": "HOLD", "confidence": "LOW", "risk": "HIGH"},
"ADANIPORTS": {"symbol": "ADANIPORTS", "company_name": "Adani Ports and SEZ Ltd", "decision": "SELL", "confidence": "MEDIUM", "risk": "HIGH"},
"COALINDIA": {"symbol": "COALINDIA", "company_name": "Coal India Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"BAJAJFINSV": {"symbol": "BAJAJFINSV", "company_name": "Bajaj Finserv Ltd", "decision": "BUY", "confidence": "HIGH", "risk": "MEDIUM"},
"TECHM": {"symbol": "TECHM", "company_name": "Tech Mahindra Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"HDFCLIFE": {"symbol": "HDFCLIFE", "company_name": "HDFC Life Insurance Company Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"SBILIFE": {"symbol": "SBILIFE", "company_name": "SBI Life Insurance Company Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"GRASIM": {"symbol": "GRASIM", "company_name": "Grasim Industries Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"DIVISLAB": {"symbol": "DIVISLAB", "company_name": "Divi's Laboratories Ltd", "decision": "SELL", "confidence": "MEDIUM", "risk": "MEDIUM"},
"DRREDDY": {"symbol": "DRREDDY", "company_name": "Dr. Reddy's Laboratories Ltd", "decision": "SELL", "confidence": "HIGH", "risk": "HIGH"},
"CIPLA": {"symbol": "CIPLA", "company_name": "Cipla Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"BRITANNIA": {"symbol": "BRITANNIA", "company_name": "Britannia Industries Ltd", "decision": "BUY", "confidence": "MEDIUM", "risk": "LOW"},
"EICHERMOT": {"symbol": "EICHERMOT", "company_name": "Eicher Motors Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"APOLLOHOSP": {"symbol": "APOLLOHOSP", "company_name": "Apollo Hospitals Enterprise Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"INDUSINDBK": {"symbol": "INDUSINDBK", "company_name": "IndusInd Bank Ltd", "decision": "SELL", "confidence": "HIGH", "risk": "HIGH"},
"HEROMOTOCO": {"symbol": "HEROMOTOCO", "company_name": "Hero MotoCorp Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"TATACONSUM": {"symbol": "TATACONSUM", "company_name": "Tata Consumer Products Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"BPCL": {"symbol": "BPCL", "company_name": "Bharat Petroleum Corporation Ltd", "decision": "SELL", "confidence": "MEDIUM", "risk": "MEDIUM"},
"UPL": {"symbol": "UPL", "company_name": "UPL Ltd", "decision": "HOLD", "confidence": "LOW", "risk": "HIGH"},
"HINDALCO": {"symbol": "HINDALCO", "company_name": "Hindalco Industries Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"BAJAJ-AUTO": {"symbol": "BAJAJ-AUTO", "company_name": "Bajaj Auto Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
"LTIM": {"symbol": "LTIM", "company_name": "LTIMindtree Ltd", "decision": "HOLD", "confidence": "MEDIUM", "risk": "MEDIUM"},
},
"summary": {
"total": 50,
"buy": 7,
"sell": 10,
"hold": 33,
},
"top_picks": [
{
"rank": 1,
"symbol": "BAJFINANCE",
"company_name": "Bajaj Finance Ltd",
"decision": "BUY",
"reason": "13.7% gain over 30 days (Rs.678 to Rs.771), strongest bullish momentum with robust upward trend.",
"risk_level": "MEDIUM",
},
{
"rank": 2,
"symbol": "BAJAJFINSV",
"company_name": "Bajaj Finserv Ltd",
"decision": "BUY",
"reason": "14% gain in one month (Rs.1,567 to Rs.1,789) demonstrates clear bullish momentum with sector-wide tailwinds.",
"risk_level": "MEDIUM",
},
{
"rank": 3,
"symbol": "KOTAKBANK",
"company_name": "Kotak Mahindra Bank Ltd",
"decision": "BUY",
"reason": "Significant breakout on January 20th with 9.2% gain on exceptionally high volume (66.6M shares).",
"risk_level": "MEDIUM",
},
],
"stocks_to_avoid": [
{
"symbol": "DRREDDY",
"company_name": "Dr. Reddy's Laboratories Ltd",
"reason": "HIGH CONFIDENCE SELL with 14.9% decline in one month. Severe downtrend with high risk.",
},
{
"symbol": "AXISBANK",
"company_name": "Axis Bank Ltd",
"reason": "HIGH CONFIDENCE SELL with 10.5% sustained decline. Clear and persistent downtrend.",
},
{
"symbol": "HCLTECH",
"company_name": "HCL Technologies Ltd",
"reason": "SELL with 9.4% drop from recent highs. High risk rating with continued selling pressure.",
},
{
"symbol": "ADANIPORTS",
"company_name": "Adani Ports and SEZ Ltd",
"reason": "SELL with 12% monthly decline and consistently lower lows. High risk profile.",
},
],
}
def seed_database():
"""Seed the database with sample data."""
print("Seeding database...")
db.save_recommendation(
date=SAMPLE_DATA["date"],
analysis_data=SAMPLE_DATA["analysis"],
summary=SAMPLE_DATA["summary"],
top_picks=SAMPLE_DATA["top_picks"],
stocks_to_avoid=SAMPLE_DATA["stocks_to_avoid"],
)
print(f"Saved recommendation for {SAMPLE_DATA['date']}")
print(f" - {len(SAMPLE_DATA['analysis'])} stocks analyzed")
print(f" - Summary: {SAMPLE_DATA['summary']['buy']} BUY, {SAMPLE_DATA['summary']['sell']} SELL, {SAMPLE_DATA['summary']['hold']} HOLD")
print("Database seeded successfully!")
if __name__ == "__main__":
seed_database()
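The `summary` block in `SAMPLE_DATA` is maintained by hand alongside the per-stock decisions, so the two can drift apart. One way to derive the counts instead of hardcoding them (a sketch over a reduced, hypothetical slice of the data, not the full 50-stock dict):

```python
from collections import Counter

# Hypothetical subset of SAMPLE_DATA["analysis"]; the real seed
# script carries all 50 Nifty stocks.
analysis = {
    "ICICIBANK": {"decision": "BUY"},
    "AXISBANK": {"decision": "SELL"},
    "RELIANCE": {"decision": "HOLD"},
    "TCS": {"decision": "HOLD"},
}

counts = Counter(stock["decision"] for stock in analysis.values())
summary = {
    "total": len(analysis),
    "buy": counts.get("BUY", 0),
    "sell": counts.get("SELL", 0),
    "hold": counts.get("HOLD", 0),
}
print(summary)  # {'total': 4, 'buy': 1, 'sell': 1, 'hold': 2}
```

For the sample data above the hand-written totals (7 BUY, 10 SELL, 33 HOLD out of 50) do match the dict, but deriving them keeps that invariant by construction.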

592
frontend/backend/server.py Normal file

@@ -0,0 +1,592 @@
"""FastAPI server for Nifty50 AI recommendations."""
from fastapi import FastAPI, HTTPException, BackgroundTasks
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from typing import Optional
import database as db
import sys
import os
from pathlib import Path
from datetime import datetime
import threading
# Add the project root to sys.path so the trading agents package can be imported
PROJECT_ROOT = Path(__file__).parent.parent.parent
sys.path.insert(0, str(PROJECT_ROOT))
# Track running analyses
# NOTE: This is not thread-safe for production multi-worker deployments.
# For production, use Redis or a database-backed job queue instead.
running_analyses = {} # {symbol: {"status": "running", "started_at": datetime, "progress": str}}
app = FastAPI(
title="Nifty50 AI API",
description="API for Nifty 50 stock recommendations",
version="1.0.0"
)
# Enable CORS for frontend
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # In production, replace with specific origins
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
class StockAnalysis(BaseModel):
symbol: str
company_name: str
decision: Optional[str] = None
confidence: Optional[str] = None
risk: Optional[str] = None
raw_analysis: Optional[str] = None
class TopPick(BaseModel):
rank: int
symbol: str
company_name: str
decision: str
reason: str
risk_level: str
class StockToAvoid(BaseModel):
symbol: str
company_name: str
reason: str
class Summary(BaseModel):
total: int
buy: int
sell: int
hold: int
class DailyRecommendation(BaseModel):
date: str
analysis: dict[str, StockAnalysis]
summary: Summary
top_picks: list[TopPick]
stocks_to_avoid: list[StockToAvoid]
class SaveRecommendationRequest(BaseModel):
date: str
analysis: dict
summary: dict
top_picks: list
stocks_to_avoid: list
# ============== Pipeline Data Models ==============
class AgentReport(BaseModel):
agent_type: str
report_content: str
data_sources_used: Optional[list] = []
created_at: Optional[str] = None
class DebateHistory(BaseModel):
debate_type: str
bull_arguments: Optional[str] = None
bear_arguments: Optional[str] = None
risky_arguments: Optional[str] = None
safe_arguments: Optional[str] = None
neutral_arguments: Optional[str] = None
judge_decision: Optional[str] = None
full_history: Optional[str] = None
class PipelineStep(BaseModel):
step_number: int
step_name: str
status: str
started_at: Optional[str] = None
completed_at: Optional[str] = None
duration_ms: Optional[int] = None
output_summary: Optional[str] = None
class DataSourceLog(BaseModel):
source_type: str
source_name: str
data_fetched: Optional[dict] = None
fetch_timestamp: Optional[str] = None
success: bool = True
error_message: Optional[str] = None
class SavePipelineDataRequest(BaseModel):
date: str
symbol: str
agent_reports: Optional[dict] = None
investment_debate: Optional[dict] = None
risk_debate: Optional[dict] = None
pipeline_steps: Optional[list] = None
data_sources: Optional[list] = None
class AnalysisConfig(BaseModel):
deep_think_model: Optional[str] = "opus"
quick_think_model: Optional[str] = "sonnet"
provider: Optional[str] = "claude_subscription" # claude_subscription or anthropic_api
api_key: Optional[str] = None
max_debate_rounds: Optional[int] = 1
class RunAnalysisRequest(BaseModel):
symbol: str
date: Optional[str] = None # Defaults to today if not provided
config: Optional[AnalysisConfig] = None
def run_analysis_task(symbol: str, date: str, analysis_config: dict = None):
"""Background task to run trading analysis for a stock."""
global running_analyses
# Default config values
if analysis_config is None:
analysis_config = {}
deep_think_model = analysis_config.get("deep_think_model", "opus")
quick_think_model = analysis_config.get("quick_think_model", "sonnet")
provider = analysis_config.get("provider", "claude_subscription")
api_key = analysis_config.get("api_key")
max_debate_rounds = analysis_config.get("max_debate_rounds", 1)
try:
running_analyses[symbol] = {
"status": "initializing",
"started_at": datetime.now().isoformat(),
"progress": "Loading trading agents..."
}
# Import trading agents
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
running_analyses[symbol]["progress"] = "Initializing analysis pipeline..."
# Create config from user settings
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "anthropic"  # Use Claude for all LLM calls
config["deep_think_llm"] = deep_think_model
config["quick_think_llm"] = quick_think_model
config["max_debate_rounds"] = max_debate_rounds
# If the API provider is selected and a key was supplied, export it.
# NOTE: this mutates process-wide environment state; concurrent analyses
# configured with different keys would overwrite each other.
if provider == "anthropic_api" and api_key:
os.environ["ANTHROPIC_API_KEY"] = api_key
running_analyses[symbol]["status"] = "running"
running_analyses[symbol]["progress"] = f"Running market analysis (model: {deep_think_model})..."
# Initialize and run
ta = TradingAgentsGraph(debug=False, config=config)
running_analyses[symbol]["progress"] = f"Analyzing {symbol}..."
final_state, decision = ta.propagate(symbol, date)
running_analyses[symbol] = {
"status": "completed",
"completed_at": datetime.now().isoformat(),
"progress": f"Analysis complete: {decision}",
"decision": decision
}
except Exception as e:
error_msg = str(e) if str(e) else f"{type(e).__name__}: No details provided"
running_analyses[symbol] = {
"status": "error",
"error": error_msg,
"progress": f"Error: {error_msg[:100]}"
}
import traceback
print(f"Analysis error for {symbol}: {type(e).__name__}: {error_msg}")
traceback.print_exc()
@app.get("/")
async def root():
"""API root endpoint."""
return {
"name": "Nifty50 AI API",
"version": "2.0.0",
"endpoints": {
"GET /recommendations": "Get all recommendations",
"GET /recommendations/latest": "Get latest recommendation",
"GET /recommendations/{date}": "Get recommendation by date",
"GET /recommendations/{date}/{symbol}/pipeline": "Get full pipeline data for a stock",
"GET /recommendations/{date}/{symbol}/agents": "Get agent reports for a stock",
"GET /recommendations/{date}/{symbol}/debates": "Get debate history for a stock",
"GET /recommendations/{date}/{symbol}/data-sources": "Get data source logs for a stock",
"GET /recommendations/{date}/pipeline-summary": "Get pipeline summary for all stocks on a date",
"GET /stocks/{symbol}/history": "Get stock history",
"GET /dates": "Get all available dates",
"POST /recommendations": "Save a new recommendation",
"POST /pipeline": "Save pipeline data for a stock"
}
}
@app.get("/recommendations")
async def get_all_recommendations():
"""Get all daily recommendations."""
recommendations = db.get_all_recommendations()
return {"recommendations": recommendations, "count": len(recommendations)}
@app.get("/recommendations/latest")
async def get_latest_recommendation():
"""Get the most recent recommendation."""
recommendation = db.get_latest_recommendation()
if not recommendation:
raise HTTPException(status_code=404, detail="No recommendations found")
return recommendation
@app.get("/recommendations/{date}")
async def get_recommendation_by_date(date: str):
"""Get recommendation for a specific date (format: YYYY-MM-DD)."""
recommendation = db.get_recommendation_by_date(date)
if not recommendation:
raise HTTPException(status_code=404, detail=f"No recommendation found for {date}")
return recommendation
@app.get("/stocks/{symbol}/history")
async def get_stock_history(symbol: str):
"""Get historical recommendations for a specific stock."""
history = db.get_stock_history(symbol.upper())
return {"symbol": symbol.upper(), "history": history, "count": len(history)}
@app.get("/dates")
async def get_available_dates():
"""Get all dates with recommendations."""
dates = db.get_all_dates()
return {"dates": dates, "count": len(dates)}
@app.post("/recommendations")
async def save_recommendation(request: SaveRecommendationRequest):
"""Save a new daily recommendation."""
try:
db.save_recommendation(
date=request.date,
analysis_data=request.analysis,
summary=request.summary,
top_picks=request.top_picks,
stocks_to_avoid=request.stocks_to_avoid
)
return {"message": f"Recommendation for {request.date} saved successfully"}
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.get("/health")
async def health_check():
"""Health check endpoint."""
return {"status": "healthy", "database": "connected"}
# ============== Pipeline Data Endpoints ==============
@app.get("/recommendations/{date}/{symbol}/pipeline")
async def get_pipeline_data(date: str, symbol: str):
"""Get full pipeline data for a stock on a specific date."""
pipeline_data = db.get_full_pipeline_data(date, symbol.upper())
# Check if we have any data
has_data = (
pipeline_data.get('agent_reports') or
pipeline_data.get('debates') or
pipeline_data.get('pipeline_steps') or
pipeline_data.get('data_sources')
)
if not has_data:
# Return an empty structure when nothing has been recorded yet
return {
"date": date,
"symbol": symbol.upper(),
"agent_reports": {},
"debates": {},
"pipeline_steps": [],
"data_sources": [],
"status": "no_data"
}
return {**pipeline_data, "status": "complete"}
@app.get("/recommendations/{date}/{symbol}/agents")
async def get_agent_reports(date: str, symbol: str):
"""Get agent reports for a stock on a specific date."""
reports = db.get_agent_reports(date, symbol.upper())
return {
"date": date,
"symbol": symbol.upper(),
"reports": reports,
"count": len(reports)
}
@app.get("/recommendations/{date}/{symbol}/debates")
async def get_debate_history(date: str, symbol: str):
"""Get debate history for a stock on a specific date."""
debates = db.get_debate_history(date, symbol.upper())
return {
"date": date,
"symbol": symbol.upper(),
"debates": debates
}
@app.get("/recommendations/{date}/{symbol}/data-sources")
async def get_data_sources(date: str, symbol: str):
"""Get data source logs for a stock on a specific date."""
logs = db.get_data_source_logs(date, symbol.upper())
return {
"date": date,
"symbol": symbol.upper(),
"data_sources": logs,
"count": len(logs)
}
@app.get("/recommendations/{date}/pipeline-summary")
async def get_pipeline_summary(date: str):
"""Get pipeline summary for all stocks on a specific date."""
summary = db.get_pipeline_summary_for_date(date)
return {
"date": date,
"stocks": summary,
"count": len(summary)
}
@app.post("/pipeline")
async def save_pipeline_data(request: SavePipelineDataRequest):
"""Save pipeline data for a stock."""
try:
db.save_full_pipeline_data(
date=request.date,
symbol=request.symbol.upper(),
pipeline_data={
'agent_reports': request.agent_reports,
'investment_debate': request.investment_debate,
'risk_debate': request.risk_debate,
'pipeline_steps': request.pipeline_steps,
'data_sources': request.data_sources
}
)
return {"message": f"Pipeline data for {request.symbol} on {request.date} saved successfully"}
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
# ============== Analysis Endpoints ==============
# Track bulk analysis state
bulk_analysis_state = {
"status": "idle", # idle, running, completed, error
"total": 0,
"completed": 0,
"failed": 0,
"current_symbol": None,
"started_at": None,
"completed_at": None,
"results": {}
}
# List of Nifty 50 stocks
NIFTY_50_SYMBOLS = [
"RELIANCE", "TCS", "HDFCBANK", "INFY", "ICICIBANK", "HINDUNILVR", "ITC", "SBIN",
"BHARTIARTL", "KOTAKBANK", "LT", "AXISBANK", "ASIANPAINT", "MARUTI", "HCLTECH",
"SUNPHARMA", "TITAN", "BAJFINANCE", "WIPRO", "ULTRACEMCO", "NESTLEIND", "NTPC",
"POWERGRID", "M&M", "TATAMOTORS", "ONGC", "JSWSTEEL", "TATASTEEL", "ADANIENT",
"ADANIPORTS", "COALINDIA", "BAJAJFINSV", "TECHM", "HDFCLIFE", "SBILIFE", "GRASIM",
"DIVISLAB", "DRREDDY", "CIPLA", "BRITANNIA", "EICHERMOT", "APOLLOHOSP", "INDUSINDBK",
"HEROMOTOCO", "TATACONSUM", "BPCL", "UPL", "HINDALCO", "BAJAJ-AUTO", "LTIM"
]
class BulkAnalysisRequest(BaseModel):
deep_think_model: Optional[str] = "opus"
quick_think_model: Optional[str] = "sonnet"
provider: Optional[str] = "claude_subscription"
api_key: Optional[str] = None
max_debate_rounds: Optional[int] = 1
@app.post("/analyze/all")
async def run_bulk_analysis(request: Optional[BulkAnalysisRequest] = None, date: Optional[str] = None):
"""Trigger analysis for all Nifty 50 stocks. Runs in background."""
global bulk_analysis_state
# Check if bulk analysis is already running
if bulk_analysis_state.get("status") == "running":
return {
"message": "Bulk analysis already running",
"status": bulk_analysis_state
}
# Use today's date if not provided
if not date:
date = datetime.now().strftime("%Y-%m-%d")
# Build analysis config from request
analysis_config = {}
if request:
analysis_config = {
"deep_think_model": request.deep_think_model,
"quick_think_model": request.quick_think_model,
"provider": request.provider,
"api_key": request.api_key,
"max_debate_rounds": request.max_debate_rounds
}
# Start bulk analysis in background thread
def run_bulk():
global bulk_analysis_state
bulk_analysis_state = {
"status": "running",
"total": len(NIFTY_50_SYMBOLS),
"completed": 0,
"failed": 0,
"current_symbol": None,
"started_at": datetime.now().isoformat(),
"completed_at": None,
"results": {}
}
for symbol in NIFTY_50_SYMBOLS:
try:
bulk_analysis_state["current_symbol"] = symbol
run_analysis_task(symbol, date, analysis_config)
# run_analysis_task executes synchronously in this thread, so its status
# is already terminal here; the loop below is a defensive guard in case
# the task is ever made asynchronous.
import time
while symbol in running_analyses and running_analyses[symbol].get("status") in ("initializing", "running"):
time.sleep(2)
if symbol in running_analyses:
status = running_analyses[symbol].get("status", "unknown")
bulk_analysis_state["results"][symbol] = status
if status == "completed":
bulk_analysis_state["completed"] += 1
else:
bulk_analysis_state["failed"] += 1
else:
bulk_analysis_state["results"][symbol] = "unknown"
bulk_analysis_state["failed"] += 1
except Exception as e:
bulk_analysis_state["results"][symbol] = f"error: {str(e)}"
bulk_analysis_state["failed"] += 1
bulk_analysis_state["status"] = "completed"
bulk_analysis_state["current_symbol"] = None
bulk_analysis_state["completed_at"] = datetime.now().isoformat()
thread = threading.Thread(target=run_bulk)
thread.start()
return {
"message": "Bulk analysis started for all Nifty 50 stocks",
"date": date,
"total_stocks": len(NIFTY_50_SYMBOLS),
"status": "started"
}
@app.get("/analyze/all/status")
async def get_bulk_analysis_status():
"""Get the status of bulk analysis."""
return bulk_analysis_state
@app.get("/analyze/running")
async def get_running_analyses():
"""Get all currently running analyses."""
running = {k: v for k, v in running_analyses.items() if v.get("status") == "running"}
return {
"running": running,
"count": len(running)
}
class SingleAnalysisRequest(BaseModel):
deep_think_model: Optional[str] = "opus"
quick_think_model: Optional[str] = "sonnet"
provider: Optional[str] = "claude_subscription"
api_key: Optional[str] = None
max_debate_rounds: Optional[int] = 1
@app.post("/analyze/{symbol}")
async def run_analysis(symbol: str, background_tasks: BackgroundTasks, request: Optional[SingleAnalysisRequest] = None, date: Optional[str] = None):
"""Trigger analysis for a stock. Runs in background."""
symbol = symbol.upper()
# Check if analysis is already running
if symbol in running_analyses and running_analyses[symbol].get("status") == "running":
return {
"message": f"Analysis already running for {symbol}",
"status": running_analyses[symbol]
}
# Use today's date if not provided
if not date:
date = datetime.now().strftime("%Y-%m-%d")
# Build analysis config from request
analysis_config = {}
if request:
analysis_config = {
"deep_think_model": request.deep_think_model,
"quick_think_model": request.quick_think_model,
"provider": request.provider,
"api_key": request.api_key,
"max_debate_rounds": request.max_debate_rounds
}
# Start analysis in background thread
thread = threading.Thread(target=run_analysis_task, args=(symbol, date, analysis_config))
thread.start()
return {
"message": f"Analysis started for {symbol}",
"symbol": symbol,
"date": date,
"status": "started"
}
@app.get("/analyze/{symbol}/status")
async def get_analysis_status(symbol: str):
"""Get the status of a running or completed analysis."""
symbol = symbol.upper()
if symbol not in running_analyses:
return {
"symbol": symbol,
"status": "not_started",
"message": "No analysis has been run for this stock"
}
return {
"symbol": symbol,
**running_analyses[symbol]
}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8001)
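A client driving the `POST /analyze/{symbol}` then `GET /analyze/{symbol}/status` flow has to poll until the status leaves the in-flight states. A transport-agnostic sketch of that loop: `fetch_status` stands in for an HTTP GET against the server above, and `stub_responses` is made-up data illustrating one possible status sequence.

```python
import time

def poll_until_done(fetch_status, symbol, interval=0.0, max_polls=100):
    """Poll fetch_status(symbol) until the status is terminal.

    Terminal states mirror run_analysis_task ('completed', 'error')
    plus the 'not_started' payload the status endpoint returns.
    """
    for _ in range(max_polls):
        payload = fetch_status(symbol)
        if payload.get("status") in ("completed", "error", "not_started"):
            return payload
        time.sleep(interval)
    raise TimeoutError(f"analysis for {symbol} did not finish")

# Stubbed sequence of responses, as the endpoint might emit them.
stub_responses = iter([
    {"symbol": "TCS", "status": "initializing"},
    {"symbol": "TCS", "status": "running"},
    {"symbol": "TCS", "status": "completed", "decision": "HOLD"},
])
result = poll_until_done(lambda s: next(stub_responses), "TCS")
print(result["status"], result["decision"])  # completed HOLD
```

In a real client, `fetch_status` would wrap an HTTP GET to `/analyze/{symbol}/status` with a few seconds between polls, matching the 2-second cadence the bulk runner uses internally.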

(10 binary screenshot images added, not shown. Sizes: 321 KiB, 81 KiB, 149 KiB, 171 KiB, 148 KiB, 512 KiB, 63 KiB, 319 KiB, 400 KiB, 226 KiB.)

23
frontend/eslint.config.js Normal file

@@ -0,0 +1,23 @@
import js from '@eslint/js'
import globals from 'globals'
import reactHooks from 'eslint-plugin-react-hooks'
import reactRefresh from 'eslint-plugin-react-refresh'
import tseslint from 'typescript-eslint'
import { defineConfig, globalIgnores } from 'eslint/config'
export default defineConfig([
globalIgnores(['dist']),
{
files: ['**/*.{ts,tsx}'],
extends: [
js.configs.recommended,
tseslint.configs.recommended,
reactHooks.configs.flat.recommended,
reactRefresh.configs.vite,
],
languageOptions: {
ecmaVersion: 2020,
globals: globals.browser,
},
},
])

frontend/index.html (new file)
@@ -0,0 +1,86 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<!-- Primary Meta Tags -->
<title>Nifty50 AI - Daily Stock Recommendations for Indian Markets</title>
<meta name="title" content="Nifty50 AI - Daily Stock Recommendations for Indian Markets" />
<meta name="description" content="AI-powered daily stock recommendations for all Nifty 50 stocks. Get actionable buy, sell, and hold signals based on technical analysis, fundamentals, and news sentiment." />
<meta name="keywords" content="Nifty 50, stock recommendations, AI stock analysis, Indian stock market, NSE, BSE, trading signals, buy sell hold, stock market India" />
<meta name="author" content="Nifty50 AI" />
<meta name="robots" content="index, follow" />
<!-- Favicon -->
<link rel="icon" type="image/svg+xml" href="/favicon.svg" />
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png" />
<!-- Open Graph / Facebook -->
<meta property="og:type" content="website" />
<meta property="og:url" content="https://nifty50ai.com/" />
<meta property="og:title" content="Nifty50 AI - Daily Stock Recommendations for Indian Markets" />
<meta property="og:description" content="AI-powered daily stock recommendations for all Nifty 50 stocks. Get actionable buy, sell, and hold signals." />
<meta property="og:image" content="https://nifty50ai.com/og-image.png" />
<meta property="og:locale" content="en_IN" />
<meta property="og:site_name" content="Nifty50 AI" />
<!-- Twitter -->
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:url" content="https://nifty50ai.com/" />
<meta name="twitter:title" content="Nifty50 AI - Daily Stock Recommendations for Indian Markets" />
<meta name="twitter:description" content="AI-powered daily stock recommendations for all Nifty 50 stocks. Get actionable buy, sell, and hold signals." />
<meta name="twitter:image" content="https://nifty50ai.com/og-image.png" />
<!-- Theme Color -->
<meta name="theme-color" content="#0284c7" />
<meta name="msapplication-TileColor" content="#0284c7" />
<!-- Canonical URL -->
<link rel="canonical" href="https://nifty50ai.com/" />
<!-- Google Fonts -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&family=Lexend:wght@400;500;600;700&display=swap" rel="stylesheet">
<!-- Structured Data (JSON-LD) -->
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "WebSite",
"name": "Nifty50 AI",
"description": "AI-powered daily stock recommendations for all Nifty 50 stocks",
"url": "https://nifty50ai.com/",
"potentialAction": {
"@type": "SearchAction",
"target": "https://nifty50ai.com/stock/{search_term_string}",
"query-input": "required name=search_term_string"
}
}
</script>
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Organization",
"name": "Nifty50 AI",
"url": "https://nifty50ai.com/",
"logo": "https://nifty50ai.com/logo.png",
"description": "AI-powered stock analysis and recommendations for Indian markets"
}
</script>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
<!-- Noscript fallback -->
<noscript>
<div style="padding: 20px; text-align: center; font-family: system-ui, sans-serif;">
<h1>Nifty50 AI - Stock Recommendations</h1>
<p>Please enable JavaScript to view this website.</p>
</div>
</noscript>
</body>
</html>

frontend/package-lock.json (generated, new file, 5433 lines)
File diff suppressed because it is too large.

frontend/package.json (new file)
@@ -0,0 +1,41 @@
{
"name": "frontend",
"private": true,
"version": "0.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "tsc -b && vite build",
"lint": "eslint .",
"preview": "vite preview"
},
"dependencies": {
"@tailwindcss/postcss": "^4.1.18",
"date-fns": "^4.1.0",
"lucide-react": "^0.563.0",
"react": "^19.2.0",
"react-dom": "^19.2.0",
"react-router-dom": "^7.13.0",
"recharts": "^3.7.0"
},
"devDependencies": {
"@eslint/js": "^9.39.1",
"@tailwindcss/vite": "^4.1.18",
"@types/node": "^24.10.1",
"@types/react": "^19.2.5",
"@types/react-dom": "^19.2.3",
"@vitejs/plugin-react": "^5.1.1",
"autoprefixer": "^10.4.24",
"eslint": "^9.39.1",
"eslint-plugin-react-hooks": "^7.0.1",
"eslint-plugin-react-refresh": "^0.4.24",
"globals": "^16.5.0",
"playwright": "^1.58.1",
"postcss": "^8.5.6",
"puppeteer": "^24.36.1",
"tailwindcss": "^4.1.18",
"typescript": "~5.9.3",
"typescript-eslint": "^8.46.4",
"vite": "^7.2.4"
}
}

@@ -0,0 +1,6 @@
export default {
plugins: {
'@tailwindcss/postcss': {},
autoprefixer: {},
},
}

@@ -0,0 +1,11 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64">
<defs>
<linearGradient id="bg" x1="0%" y1="0%" x2="100%" y2="100%">
<stop offset="0%" style="stop-color:#0ea5e9"/>
<stop offset="100%" style="stop-color:#0369a1"/>
</linearGradient>
</defs>
<rect width="64" height="64" rx="14" fill="url(#bg)"/>
<path d="M16 44 L26 28 L36 36 L48 20" stroke="white" stroke-width="4" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
<circle cx="48" cy="20" r="4" fill="#22c55e"/>
</svg>


frontend/public/vite.svg (new file)
@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="31.88" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 257"><defs><linearGradient id="IconifyId1813088fe1fbc01fb466" x1="-.828%" x2="57.636%" y1="7.652%" y2="78.411%"><stop offset="0%" stop-color="#41D1FF"></stop><stop offset="100%" stop-color="#BD34FE"></stop></linearGradient><linearGradient id="IconifyId1813088fe1fbc01fb467" x1="43.376%" x2="50.316%" y1="2.242%" y2="89.03%"><stop offset="0%" stop-color="#FFEA83"></stop><stop offset="8.333%" stop-color="#FFDD35"></stop><stop offset="100%" stop-color="#FFA800"></stop></linearGradient></defs><path fill="url(#IconifyId1813088fe1fbc01fb466)" d="M255.153 37.938L134.897 252.976c-2.483 4.44-8.862 4.466-11.382.048L.875 37.958c-2.746-4.814 1.371-10.646 6.827-9.67l120.385 21.517a6.537 6.537 0 0 0 2.322-.004l117.867-21.483c5.438-.991 9.574 4.796 6.877 9.62Z"></path><path fill="url(#IconifyId1813088fe1fbc01fb467)" d="M185.432.063L96.44 17.501a3.268 3.268 0 0 0-2.634 3.014l-5.474 92.456a3.268 3.268 0 0 0 3.997 3.378l24.777-5.718c2.318-.535 4.413 1.507 3.936 3.838l-7.361 36.047c-.495 2.426 1.782 4.5 4.151 3.78l15.304-4.649c2.372-.72 4.652 1.36 4.15 3.788l-11.698 56.621c-.732 3.542 3.979 5.473 5.943 2.437l1.313-2.028l72.516-144.72c1.215-2.423-.88-5.186-3.54-4.672l-25.505 4.922c-2.396.462-4.435-1.77-3.759-4.114l16.646-57.705c.677-2.35-1.37-4.583-3.769-4.113Z"></path></svg>


frontend/src/App.css (new file)
@@ -0,0 +1,42 @@
#root {
max-width: 1280px;
margin: 0 auto;
padding: 2rem;
text-align: center;
}
.logo {
height: 6em;
padding: 1.5em;
will-change: filter;
transition: filter 300ms;
}
.logo:hover {
filter: drop-shadow(0 0 2em #646cffaa);
}
.logo.react:hover {
filter: drop-shadow(0 0 2em #61dafbaa);
}
@keyframes logo-spin {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}
@media (prefers-reduced-motion: no-preference) {
a:nth-of-type(2) .logo {
animation: logo-spin infinite 20s linear;
}
}
.card {
padding: 2em;
}
.read-the-docs {
color: #888;
}

frontend/src/App.tsx (new file)
@@ -0,0 +1,34 @@
import { Routes, Route } from 'react-router-dom';
import { ThemeProvider } from './contexts/ThemeContext';
import { SettingsProvider } from './contexts/SettingsContext';
import Header from './components/Header';
import Footer from './components/Footer';
import SettingsModal from './components/SettingsModal';
import Dashboard from './pages/Dashboard';
import History from './pages/History';
import StockDetail from './pages/StockDetail';
import About from './pages/About';
function App() {
return (
<ThemeProvider>
<SettingsProvider>
<div className="min-h-screen flex flex-col bg-gray-50 dark:bg-slate-900 transition-colors">
<Header />
<main className="flex-1 max-w-7xl mx-auto w-full px-3 sm:px-4 lg:px-6 py-4">
<Routes>
<Route path="/" element={<Dashboard />} />
<Route path="/history" element={<History />} />
<Route path="/stock/:symbol" element={<StockDetail />} />
<Route path="/about" element={<About />} />
</Routes>
</main>
<Footer />
<SettingsModal />
</div>
</SettingsProvider>
</ThemeProvider>
);
}
export default App;

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="35.93" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 228"><path fill="#00D8FF" d="M210.483 73.824a171.49 171.49 0 0 0-8.24-2.597c.465-1.9.893-3.777 1.273-5.621c6.238-30.281 2.16-54.676-11.769-62.708c-13.355-7.7-35.196.329-57.254 19.526a171.23 171.23 0 0 0-6.375 5.848a155.866 155.866 0 0 0-4.241-3.917C100.759 3.829 77.587-4.822 63.673 3.233C50.33 10.957 46.379 33.89 51.995 62.588a170.974 170.974 0 0 0 1.892 8.48c-3.28.932-6.445 1.924-9.474 2.98C17.309 83.498 0 98.307 0 113.668c0 15.865 18.582 31.778 46.812 41.427a145.52 145.52 0 0 0 6.921 2.165a167.467 167.467 0 0 0-2.01 9.138c-5.354 28.2-1.173 50.591 12.134 58.266c13.744 7.926 36.812-.22 59.273-19.855a145.567 145.567 0 0 0 5.342-4.923a168.064 168.064 0 0 0 6.92 6.314c21.758 18.722 43.246 26.282 56.54 18.586c13.731-7.949 18.194-32.003 12.4-61.268a145.016 145.016 0 0 0-1.535-6.842c1.62-.48 3.21-.974 4.76-1.488c29.348-9.723 48.443-25.443 48.443-41.52c0-15.417-17.868-30.326-45.517-39.844Zm-6.365 70.984c-1.4.463-2.836.91-4.3 1.345c-3.24-10.257-7.612-21.163-12.963-32.432c5.106-11 9.31-21.767 12.459-31.957c2.619.758 5.16 1.557 7.61 2.4c23.69 8.156 38.14 20.213 38.14 29.504c0 9.896-15.606 22.743-40.946 31.14Zm-10.514 20.834c2.562 12.94 2.927 24.64 1.23 33.787c-1.524 8.219-4.59 13.698-8.382 15.893c-8.067 4.67-25.32-1.4-43.927-17.412a156.726 156.726 0 0 1-6.437-5.87c7.214-7.889 14.423-17.06 21.459-27.246c12.376-1.098 24.068-2.894 34.671-5.345a134.17 134.17 0 0 1 1.386 6.193ZM87.276 214.515c-7.882 2.783-14.16 2.863-17.955.675c-8.075-4.657-11.432-22.636-6.853-46.752a156.923 156.923 0 0 1 1.869-8.499c10.486 2.32 22.093 3.988 34.498 4.994c7.084 9.967 14.501 19.128 21.976 27.15a134.668 134.668 0 0 1-4.877 4.492c-9.933 8.682-19.886 14.842-28.658 17.94ZM50.35 144.747c-12.483-4.267-22.792-9.812-29.858-15.863c-6.35-5.437-9.555-10.836-9.555-15.216c0-9.322 
13.897-21.212 37.076-29.293c2.813-.98 5.757-1.905 8.812-2.773c3.204 10.42 7.406 21.315 12.477 32.332c-5.137 11.18-9.399 22.249-12.634 32.792a134.718 134.718 0 0 1-6.318-1.979Zm12.378-84.26c-4.811-24.587-1.616-43.134 6.425-47.789c8.564-4.958 27.502 2.111 47.463 19.835a144.318 144.318 0 0 1 3.841 3.545c-7.438 7.987-14.787 17.08-21.808 26.988c-12.04 1.116-23.565 2.908-34.161 5.309a160.342 160.342 0 0 1-1.76-7.887Zm110.427 27.268a347.8 347.8 0 0 0-7.785-12.803c8.168 1.033 15.994 2.404 23.343 4.08c-2.206 7.072-4.956 14.465-8.193 22.045a381.151 381.151 0 0 0-7.365-13.322Zm-45.032-43.861c5.044 5.465 10.096 11.566 15.065 18.186a322.04 322.04 0 0 0-30.257-.006c4.974-6.559 10.069-12.652 15.192-18.18ZM82.802 87.83a323.167 323.167 0 0 0-7.227 13.238c-3.184-7.553-5.909-14.98-8.134-22.152c7.304-1.634 15.093-2.97 23.209-3.984a321.524 321.524 0 0 0-7.848 12.897Zm8.081 65.352c-8.385-.936-16.291-2.203-23.593-3.793c2.26-7.3 5.045-14.885 8.298-22.6a321.187 321.187 0 0 0 7.257 13.246c2.594 4.48 5.28 8.868 8.038 13.147Zm37.542 31.03c-5.184-5.592-10.354-11.779-15.403-18.433c4.902.192 9.899.29 14.978.29c5.218 0 10.376-.117 15.453-.343c-4.985 6.774-10.018 12.97-15.028 18.486Zm52.198-57.817c3.422 7.8 6.306 15.345 8.596 22.52c-7.422 1.694-15.436 3.058-23.88 4.071a382.417 382.417 0 0 0 7.859-13.026a347.403 347.403 0 0 0 7.425-13.565Zm-16.898 8.101a358.557 358.557 0 0 1-12.281 19.815a329.4 329.4 0 0 1-23.444.823c-7.967 0-15.716-.248-23.178-.732a310.202 310.202 0 0 1-12.513-19.846h.001a307.41 307.41 0 0 1-10.923-20.627a310.278 310.278 0 0 1 10.89-20.637l-.001.001a307.318 307.318 0 0 1 12.413-19.761c7.613-.576 15.42-.876 23.31-.876H128c7.926 0 15.743.303 23.354.883a329.357 329.357 0 0 1 12.335 19.695a358.489 358.489 0 0 1 11.036 20.54a329.472 329.472 0 0 1-11 20.722Zm22.56-122.124c8.572 4.944 11.906 24.881 6.52 51.026c-.344 1.668-.73 3.367-1.15 5.09c-10.622-2.452-22.155-4.275-34.23-5.408c-7.034-10.017-14.323-19.124-21.64-27.008a160.789 160.789 0 0 1 5.888-5.4c18.9-16.447 36.564-22.941 
44.612-18.3ZM128 90.808c12.625 0 22.86 10.235 22.86 22.86s-10.235 22.86-22.86 22.86s-22.86-10.235-22.86-22.86s10.235-22.86 22.86-22.86Z"></path></svg>


@@ -0,0 +1,152 @@
import { useState } from 'react';
import { Brain, ChevronDown, ChevronUp, TrendingUp, BarChart2, MessageSquare, AlertTriangle, Target } from 'lucide-react';
import type { Decision } from '../types';
interface AIAnalysisPanelProps {
analysis: string;
decision?: Decision | null;
defaultExpanded?: boolean;
}
interface Section {
title: string;
content: string;
icon: typeof Brain;
}
function parseAnalysis(analysis: string): Section[] {
const sections: Section[] = [];
const iconMap: Record<string, typeof Brain> = {
'Summary': Target,
'Technical Analysis': BarChart2,
'Fundamental Analysis': TrendingUp,
'Sentiment': MessageSquare,
'Risks': AlertTriangle,
};
// Split by markdown headers (##)
const parts = analysis.split(/^## /gm).filter(Boolean);
for (const part of parts) {
const lines = part.trim().split('\n');
const title = lines[0].trim();
const content = lines.slice(1).join('\n').trim();
if (title && content) {
sections.push({
title,
content,
icon: iconMap[title] || Brain,
});
}
}
// If no sections found, treat the whole thing as a summary
if (sections.length === 0 && analysis.trim()) {
sections.push({
title: 'Analysis',
content: analysis.trim(),
icon: Brain,
});
}
return sections;
}
function AnalysisSection({ section, defaultOpen = true }: { section: Section; defaultOpen?: boolean }) {
const [isOpen, setIsOpen] = useState(defaultOpen);
const Icon = section.icon;
return (
<div className="border-b border-gray-100 dark:border-slate-700 last:border-0">
<button
onClick={() => setIsOpen(!isOpen)}
className="w-full flex items-center justify-between px-4 py-2.5 text-left hover:bg-gray-50 dark:hover:bg-slate-700/50 transition-colors"
>
<div className="flex items-center gap-2">
<Icon className="w-4 h-4 text-nifty-600 dark:text-nifty-400" />
<span className="font-medium text-sm text-gray-900 dark:text-gray-100">{section.title}</span>
</div>
{isOpen ? (
<ChevronUp className="w-4 h-4 text-gray-400" />
) : (
<ChevronDown className="w-4 h-4 text-gray-400" />
)}
</button>
{isOpen && (
<div className="px-4 pb-3 text-sm text-gray-600 dark:text-gray-300 whitespace-pre-wrap leading-relaxed">
{section.content.split('\n').map((line, i) => {
// Handle bullet points
if (line.trim().startsWith('- ')) {
return (
<div key={i} className="flex gap-2 mt-1">
<span className="text-nifty-500">•</span>
<span>{line.trim().substring(2)}</span>
</div>
);
}
return <p key={i} className={line.trim() ? 'mt-1' : 'mt-2'}>{line}</p>;
})}
</div>
)}
</div>
);
}
export default function AIAnalysisPanel({
analysis,
decision,
defaultExpanded = false,
}: AIAnalysisPanelProps) {
const [isExpanded, setIsExpanded] = useState(defaultExpanded);
const sections = parseAnalysis(analysis);
const decisionGradient = {
BUY: 'from-green-500 to-emerald-600',
SELL: 'from-red-500 to-rose-600',
HOLD: 'from-amber-500 to-orange-600',
};
const gradient = decision ? decisionGradient[decision] : 'from-nifty-500 to-nifty-700';
return (
<section className="card overflow-hidden">
{/* Header with gradient */}
<button
onClick={() => setIsExpanded(!isExpanded)}
className={`w-full bg-gradient-to-r ${gradient} p-3 text-white flex items-center justify-between`}
>
<div className="flex items-center gap-2">
<Brain className="w-5 h-5" />
<span className="font-semibold text-sm">AI Analysis</span>
<span className="text-xs bg-white/20 px-2 py-0.5 rounded-full">
{sections.length} sections
</span>
</div>
<div className="flex items-center gap-2">
<span className="text-xs text-white/80">
{isExpanded ? 'Click to collapse' : 'Click to expand'}
</span>
{isExpanded ? (
<ChevronUp className="w-4 h-4" />
) : (
<ChevronDown className="w-4 h-4" />
)}
</div>
</button>
{/* Content */}
{isExpanded && (
<div className="bg-white dark:bg-slate-800">
{sections.map((section, index) => (
<AnalysisSection
key={index}
section={section}
defaultOpen={index === 0}
/>
))}
</div>
)}
</section>
);
}
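The `parseAnalysis` splitting logic above (break the analysis text on `## ` markdown headers, fall back to a single "Analysis" section) can be sketched in Python. This is an illustrative re-statement of the TypeScript function, not part of the PR:

```python
import re

def parse_analysis(analysis: str) -> list[tuple[str, str]]:
    """Split markdown on '## ' headers into (title, content) sections."""
    sections = []
    for part in filter(None, re.split(r"^## ", analysis, flags=re.M)):
        lines = part.strip().split("\n")
        title, content = lines[0].strip(), "\n".join(lines[1:]).strip()
        if title and content:
            sections.append((title, content))
    # No headers found: treat the whole text as one summary section.
    if not sections and analysis.strip():
        sections.append(("Analysis", analysis.strip()))
    return sections
```

As in the component, a headerless string yields one catch-all section rather than an empty panel.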

@@ -0,0 +1,72 @@
import { Check, X, Minus } from 'lucide-react';
interface AccuracyBadgeProps {
correct: boolean | null;
returnPercent: number;
size?: 'small' | 'default';
}
export default function AccuracyBadge({
correct,
returnPercent,
size = 'default',
}: AccuracyBadgeProps) {
const isPositiveReturn = returnPercent >= 0;
const sizeClasses = size === 'small' ? 'text-xs px-1.5 py-0.5 gap-1' : 'text-sm px-2 py-1 gap-1.5';
const iconSize = size === 'small' ? 'w-3 h-3' : 'w-3.5 h-3.5';
if (correct === null) {
return (
<span className={`inline-flex items-center rounded-full font-medium bg-gray-100 dark:bg-slate-700 text-gray-500 dark:text-gray-400 ${sizeClasses}`}>
<Minus className={iconSize} />
<span>Pending</span>
</span>
);
}
if (correct) {
return (
<span className={`inline-flex items-center rounded-full font-medium bg-green-100 dark:bg-green-900/30 text-green-700 dark:text-green-400 ${sizeClasses}`}>
<Check className={iconSize} />
<span className={isPositiveReturn ? '' : 'text-green-600 dark:text-green-400'}>
{isPositiveReturn ? '+' : ''}{returnPercent.toFixed(1)}%
</span>
</span>
);
}
return (
<span className={`inline-flex items-center rounded-full font-medium bg-red-100 dark:bg-red-900/30 text-red-700 dark:text-red-400 ${sizeClasses}`}>
<X className={iconSize} />
<span>
{isPositiveReturn ? '+' : ''}{returnPercent.toFixed(1)}%
</span>
</span>
);
}
interface AccuracyRateProps {
rate: number;
label?: string;
size?: 'small' | 'default';
}
export function AccuracyRate({ rate, label = 'Accuracy', size = 'default' }: AccuracyRateProps) {
const percentage = rate * 100;
const isGood = percentage >= 60;
const isModerate = percentage >= 40 && percentage < 60;
const sizeClasses = size === 'small' ? 'text-xs' : 'text-sm';
const colorClass = isGood
? 'text-green-600 dark:text-green-400'
: isModerate
? 'text-amber-600 dark:text-amber-400'
: 'text-red-600 dark:text-red-400';
return (
<div className={`flex items-center gap-1.5 ${sizeClasses}`}>
<span className="text-gray-500 dark:text-gray-400">{label}:</span>
<span className={`font-semibold ${colorClass}`}>{percentage.toFixed(0)}%</span>
</div>
);
}

@@ -0,0 +1,177 @@
import { X, HelpCircle, TrendingUp, TrendingDown, Minus, CheckCircle } from 'lucide-react';
import type { AccuracyMetrics } from '../types';
interface AccuracyExplainModalProps {
isOpen: boolean;
onClose: () => void;
metrics: AccuracyMetrics;
}
export default function AccuracyExplainModal({ isOpen, onClose, metrics }: AccuracyExplainModalProps) {
if (!isOpen) return null;
const buyCorrect = Math.round(metrics.buy_accuracy * metrics.total_predictions * 0.14); // ~7 buy signals
const buyTotal = Math.round(metrics.total_predictions * 0.14);
const sellCorrect = Math.round(metrics.sell_accuracy * metrics.total_predictions * 0.2); // ~10 sell signals
const sellTotal = Math.round(metrics.total_predictions * 0.2);
const holdCorrect = Math.round(metrics.hold_accuracy * metrics.total_predictions * 0.66); // ~33 hold signals
const holdTotal = Math.round(metrics.total_predictions * 0.66);
return (
<div className="fixed inset-0 z-50 flex items-center justify-center p-4">
{/* Backdrop */}
<div
className="absolute inset-0 bg-black/50 backdrop-blur-sm"
onClick={onClose}
/>
{/* Modal */}
<div className="relative bg-white dark:bg-slate-800 rounded-xl shadow-xl max-w-lg w-full max-h-[90vh] overflow-y-auto">
{/* Header */}
<div className="sticky top-0 flex items-center justify-between p-4 border-b border-gray-100 dark:border-slate-700 bg-white dark:bg-slate-800">
<div className="flex items-center gap-2">
<HelpCircle className="w-5 h-5 text-nifty-600 dark:text-nifty-400" />
<h2 className="text-lg font-semibold text-gray-900 dark:text-gray-100">
How Accuracy is Calculated
</h2>
</div>
<button
onClick={onClose}
className="p-1.5 rounded-lg hover:bg-gray-100 dark:hover:bg-slate-700 transition-colors"
>
<X className="w-5 h-5 text-gray-500 dark:text-gray-400" />
</button>
</div>
{/* Content */}
<div className="p-4 space-y-5">
{/* Overview */}
<div className="p-4 rounded-lg bg-nifty-50 dark:bg-nifty-900/20 border border-nifty-100 dark:border-nifty-800">
<h3 className="font-semibold text-gray-900 dark:text-gray-100 mb-2">Overall Accuracy</h3>
<div className="text-3xl font-bold text-nifty-600 dark:text-nifty-400 mb-1">
{(metrics.success_rate * 100).toFixed(1)}%
</div>
<p className="text-sm text-gray-600 dark:text-gray-400">
{metrics.correct_predictions} correct out of {metrics.total_predictions} predictions
</p>
</div>
{/* Formula */}
<div>
<h3 className="font-semibold text-gray-900 dark:text-gray-100 mb-2">Calculation Method</h3>
<div className="p-3 rounded-lg bg-gray-50 dark:bg-slate-700/50 font-mono text-sm">
<p className="text-gray-700 dark:text-gray-300">
Accuracy = (Correct Predictions / Total Predictions) × 100
</p>
<p className="text-gray-500 dark:text-gray-400 mt-2 text-xs">
= ({metrics.correct_predictions} / {metrics.total_predictions}) × 100 = {(metrics.success_rate * 100).toFixed(1)}%
</p>
</div>
</div>
{/* Decision Type Breakdown */}
<div>
<h3 className="font-semibold text-gray-900 dark:text-gray-100 mb-3">Breakdown by Decision Type</h3>
<div className="space-y-3">
{/* BUY */}
<div className="p-3 rounded-lg bg-green-50 dark:bg-green-900/20 border border-green-100 dark:border-green-800">
<div className="flex items-center justify-between mb-2">
<div className="flex items-center gap-2">
<TrendingUp className="w-4 h-4 text-green-600 dark:text-green-400" />
<span className="font-medium text-green-800 dark:text-green-300">BUY Predictions</span>
</div>
<span className="text-lg font-bold text-green-600 dark:text-green-400">
{(metrics.buy_accuracy * 100).toFixed(0)}%
</span>
</div>
<p className="text-xs text-green-700 dark:text-green-400">
A BUY prediction is correct if the stock price <strong>increased</strong> after the recommendation
</p>
<div className="flex items-center gap-2 mt-2 text-xs text-green-600 dark:text-green-500">
<CheckCircle className="w-3 h-3" />
<span>~{buyCorrect} correct / {buyTotal} total BUY signals</span>
</div>
</div>
{/* SELL */}
<div className="p-3 rounded-lg bg-red-50 dark:bg-red-900/20 border border-red-100 dark:border-red-800">
<div className="flex items-center justify-between mb-2">
<div className="flex items-center gap-2">
<TrendingDown className="w-4 h-4 text-red-600 dark:text-red-400" />
<span className="font-medium text-red-800 dark:text-red-300">SELL Predictions</span>
</div>
<span className="text-lg font-bold text-red-600 dark:text-red-400">
{(metrics.sell_accuracy * 100).toFixed(0)}%
</span>
</div>
<p className="text-xs text-red-700 dark:text-red-400">
A SELL prediction is correct if the stock price <strong>decreased</strong> after the recommendation
</p>
<div className="flex items-center gap-2 mt-2 text-xs text-red-600 dark:text-red-500">
<CheckCircle className="w-3 h-3" />
<span>~{sellCorrect} correct / {sellTotal} total SELL signals</span>
</div>
</div>
{/* HOLD */}
<div className="p-3 rounded-lg bg-amber-50 dark:bg-amber-900/20 border border-amber-100 dark:border-amber-800">
<div className="flex items-center justify-between mb-2">
<div className="flex items-center gap-2">
<Minus className="w-4 h-4 text-amber-600 dark:text-amber-400" />
<span className="font-medium text-amber-800 dark:text-amber-300">HOLD Predictions</span>
</div>
<span className="text-lg font-bold text-amber-600 dark:text-amber-400">
{(metrics.hold_accuracy * 100).toFixed(0)}%
</span>
</div>
<p className="text-xs text-amber-700 dark:text-amber-400">
A HOLD prediction is correct if the stock price stayed <strong>relatively stable</strong> (±2% range)
</p>
<div className="flex items-center gap-2 mt-2 text-xs text-amber-600 dark:text-amber-500">
<CheckCircle className="w-3 h-3" />
<span>~{holdCorrect} correct / {holdTotal} total HOLD signals</span>
</div>
</div>
</div>
</div>
{/* Timeframe */}
<div>
<h3 className="font-semibold text-gray-900 dark:text-gray-100 mb-2">Evaluation Timeframe</h3>
<div className="p-3 rounded-lg bg-gray-50 dark:bg-slate-700/50">
<ul className="text-sm text-gray-600 dark:text-gray-400 space-y-1">
<li className="flex items-start gap-2">
<span className="text-nifty-600 dark:text-nifty-400">•</span>
<span><strong>1-week return:</strong> Short-term price movement validation</span>
</li>
<li className="flex items-start gap-2">
<span className="text-nifty-600 dark:text-nifty-400">•</span>
<span><strong>1-month return:</strong> Primary accuracy metric (shown in results)</span>
</li>
</ul>
</div>
</div>
{/* Disclaimer */}
<div className="p-3 rounded-lg bg-gray-100 dark:bg-slate-700/30 border border-gray-200 dark:border-slate-600">
<p className="text-xs text-gray-500 dark:text-gray-400">
<strong>Note:</strong> Past performance does not guarantee future results.
Accuracy metrics are based on historical data and are for educational purposes only.
Market conditions can change rapidly and predictions may not hold in future periods.
</p>
</div>
</div>
{/* Footer */}
<div className="sticky bottom-0 p-4 border-t border-gray-100 dark:border-slate-700 bg-white dark:bg-slate-800">
<button
onClick={onClose}
className="w-full btn-primary"
>
Got it
</button>
</div>
</div>
</div>
);
}
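The formula the modal renders — Accuracy = (Correct Predictions / Total Predictions) × 100 — is trivial but worth pinning down, including the zero-predictions edge case the JSX does not reach. A minimal sketch (function name is ours, mirroring the `AccuracyMetrics` fields):

```python
def success_rate_percent(correct_predictions: int, total_predictions: int) -> float:
    """Overall accuracy as shown in the modal's Calculation Method box."""
    if total_predictions == 0:
        # Avoid division by zero before any predictions have been evaluated.
        return 0.0
    return correct_predictions / total_predictions * 100
```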

@@ -0,0 +1,92 @@
import { LineChart, Line, XAxis, YAxis, CartesianGrid, Tooltip, ResponsiveContainer, Legend } from 'recharts';
import { getAccuracyTrend } from '../data/recommendations';
interface AccuracyTrendChartProps {
height?: number;
className?: string;
}
export default function AccuracyTrendChart({ height = 200, className = '' }: AccuracyTrendChartProps) {
const data = getAccuracyTrend();
if (data.length === 0) {
return (
<div className={`flex items-center justify-center text-gray-400 ${className}`} style={{ height }}>
No accuracy data available
</div>
);
}
// Format dates for display
const formattedData = data.map(d => ({
...d,
displayDate: new Date(d.date).toLocaleDateString('en-IN', { month: 'short', day: 'numeric' }),
}));
return (
<div className={className} style={{ height }}>
<ResponsiveContainer width="100%" height="100%">
<LineChart data={formattedData} margin={{ top: 5, right: 10, bottom: 5, left: 0 }}>
<CartesianGrid strokeDasharray="3 3" className="stroke-gray-200 dark:stroke-slate-700" />
<XAxis
dataKey="displayDate"
tick={{ fontSize: 11 }}
className="text-gray-500 dark:text-gray-400"
/>
<YAxis
domain={[0, 100]}
tick={{ fontSize: 11 }}
tickFormatter={(v) => `${v}%`}
className="text-gray-500 dark:text-gray-400"
/>
<Tooltip
contentStyle={{
backgroundColor: 'var(--tooltip-bg, #fff)',
border: '1px solid var(--tooltip-border, #e5e7eb)',
borderRadius: '8px',
fontSize: '12px',
}}
formatter={(value) => [`${value}%`, '']}
labelFormatter={(label) => `Date: ${label}`}
/>
<Legend
wrapperStyle={{ fontSize: '11px' }}
formatter={(value) => value.charAt(0).toUpperCase() + value.slice(1)}
/>
<Line
type="monotone"
dataKey="overall"
stroke="#0ea5e9"
strokeWidth={2}
dot={{ fill: '#0ea5e9', r: 3 }}
activeDot={{ r: 5 }}
/>
<Line
type="monotone"
dataKey="buy"
stroke="#22c55e"
strokeWidth={1.5}
dot={{ fill: '#22c55e', r: 2 }}
strokeDasharray="5 5"
/>
<Line
type="monotone"
dataKey="sell"
stroke="#ef4444"
strokeWidth={1.5}
dot={{ fill: '#ef4444', r: 2 }}
strokeDasharray="5 5"
/>
<Line
type="monotone"
dataKey="hold"
stroke="#f59e0b"
strokeWidth={1.5}
dot={{ fill: '#f59e0b', r: 2 }}
strokeDasharray="5 5"
/>
</LineChart>
</ResponsiveContainer>
</div>
);
}

@@ -0,0 +1,64 @@
import { AreaChart, Area, ResponsiveContainer, YAxis } from 'recharts';
import type { PricePoint } from '../types';
interface BackgroundSparklineProps {
data: PricePoint[];
trend: 'up' | 'down' | 'flat';
className?: string;
}
export default function BackgroundSparkline({
data,
trend,
className = '',
}: BackgroundSparklineProps) {
if (!data || data.length < 2) {
return null;
}
// Normalize data to percentage change from first point
const basePrice = data[0].price;
const normalizedData = data.map(point => ({
...point,
normalizedPrice: ((point.price - basePrice) / basePrice) * 100,
}));
// Calculate min/max for domain padding
const prices = normalizedData.map(d => d.normalizedPrice);
const minPrice = Math.min(...prices);
const maxPrice = Math.max(...prices);
const padding = Math.max(Math.abs(maxPrice - minPrice) * 0.2, 1);
// Colors based on trend
const colors = {
up: { stroke: '#22c55e', fill: '#22c55e' },
down: { stroke: '#ef4444', fill: '#ef4444' },
flat: { stroke: '#94a3b8', fill: '#94a3b8' },
};
const { stroke, fill } = colors[trend];
return (
<div className={`w-full h-full ${className}`} style={{ filter: 'blur(1px)' }}>
<ResponsiveContainer width="100%" height="100%">
<AreaChart data={normalizedData} margin={{ top: 0, right: 0, bottom: 0, left: 0 }}>
<YAxis domain={[minPrice - padding, maxPrice + padding]} hide />
<defs>
<linearGradient id={`gradient-${trend}`} x1="0" y1="0" x2="0" y2="1">
<stop offset="0%" stopColor={fill} stopOpacity={0.4} />
<stop offset="100%" stopColor={fill} stopOpacity={0.05} />
</linearGradient>
</defs>
<Area
type="monotone"
dataKey="normalizedPrice"
stroke={stroke}
strokeWidth={1}
fill={`url(#gradient-${trend})`}
isAnimationActive={false}
/>
</AreaChart>
</ResponsiveContainer>
</div>
);
}

Some files were not shown because too many files have changed in this diff.