Added comprehensive tests and simplified Ollama Docker setup

This commit is contained in:
chauhang 2025-06-23 20:32:31 -07:00
parent 780dcabb66
commit a4725b575b
14 changed files with 1207 additions and 196 deletions

View File

@@ -1,6 +1,8 @@
# Git specific
.git
.gitignore
.gitattributes
*.git
# Python specific
__pycache__/
@@ -11,23 +13,113 @@ __pycache__/
.Python
env/
venv/
.venv/
.pytest_cache/
.coverage
.coverage.*
htmlcov/
.tox/
.mypy_cache/
.dmypy.json
dmypy.json
# Environment files
.env
.env.*
!.env.example
# IDE/Editor specific
.vscode/
.idea/
*.swp
*.swo
*.sublime-*
.spyderproject
.spyproject
# Ollama cache directory (if it somehow ends up in build context)
# Model cache directories (can be large)
.ollama/
ollama_data/
.cache/
.local/
# Docker specific files (should not be part of the image content itself)
Dockerfile
docker-compose.yml
# Documentation and non-essential files
*.md
!README.md
docs/
assets/
*.png
*.jpg
*.jpeg
*.gif
*.svg
!assets/TauricResearch.png
# Build artifacts and logs
build/
dist/
*.log
logs/
*.tmp
*.temp
# Test files (uncomment if you don't want tests in production image)
# tests/test_*.py
# test_*.py
# *_test.py
# Docker and deployment files
Dockerfile*
docker-compose*.yml
.dockerignore
build*.sh
deploy*.sh
k8s/
helm/
# Development tools
.devcontainer/
.github/
.gitlab-ci.yml
.travis.yml
.circleci/
Makefile
# Data files (can be large)
data/
*.csv
*.json
*.xlsx
*.db
*.sqlite
# Temporary and backup files
*.bak
*.backup
*.orig
*.rej
~*
.#*
\#*#
# OS specific
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
Desktop.ini
# Node.js (if any frontend assets)
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Lock files (include them for reproducible builds)
# Uncomment if you want to exclude them
# uv.lock
# poetry.lock
# Pipfile.lock

View File

@@ -1,6 +1,6 @@
# This is an example .env file for the Trading Agent project.
# Copy this file to .env and fill in your API keys and environment configurations.
# "NOTE: When using for `docker` command do not use quotes around the values, otherwiser environment variables will not be set."
# "NOTE: When using for `docker` command do not use quotes around the values, otherwise environment variables will not be set."
# API Keys
# Set your OpenAI API key, for OpenAI, Ollama or other OpenAI-compatible models
@@ -15,16 +15,16 @@ LLM_PROVIDER=openai
LLM_BACKEND_URL=https://api.openai.com/v1
# Uncomment for LLM Configuration for loacl ollama
# Uncomment for LLM Configuration for local ollama
#LLM_PROVIDER=ollama
## For Ollama running in the same container, /v1 added for OpenAI compatibility
#LLM_BACKEND_URL=http://localhost:11434/v1
# Set name of the Deep think model for the main
#LLM_DEEP_THINK_MODEL=qwen3:0.6b
## Setname of the quick think model for the main
#LLM_QUICK_THINK_MODEL=qwen3:0.6b
# Set name of the Deep think model
LLM_DEEP_THINK_MODEL=llama3.2
## Set name of the quick think model
LLM_QUICK_THINK_MODEL=qwen3
# Set the name of the embedding model
#LLM_EMBEDDING_MODEL=nomic-embed-text
LLM_EMBEDDING_MODEL=nomic-embed-text
# Agent Configuration
# Maximum number of debate rounds for the agent to engage in choose from 1, 3, 5

1
.gitignore vendored
View File

@@ -7,6 +7,7 @@ eval_results/
eval_data/
*.egg-info/
.ollama/
ollama_data/
.local/
.cache/
.pytest_cache/

View File

@@ -7,28 +7,62 @@ The recommended method is using `docker-compose`, which handles the entire stack
## Prerequisites
Before you begin, ensure you have the following installed:
* [**Docker**](https://docs.docker.com/get-docker/)
* [**Docker Compose**](https://docs.docker.com/compose/install/) (usually included with Docker Desktop)
- [**Docker**](https://docs.docker.com/get-docker/)
- [**Docker Compose**](https://docs.docker.com/compose/install/) (usually included with Docker Desktop)
## 🤔 Which Option Should I Choose?
| Feature | OpenAI | Local Ollama |
| ------------------------- | ------------------------- | ----------------------------- |
| **Setup Time** | 2-5 minutes | 15-30 minutes |
| **Cost** | ~$0.01-0.05 per query | Free after setup |
| **Quality** | GPT-4o (excellent) | Depends on model |
| **Privacy** | Data sent to OpenAI | Fully private |
| **Internet Required** | Yes | No (after setup) |
| **Hardware Requirements** | None | 4GB+ RAM recommended |
| **Model Downloads** | None | Depends on model |
| **Best For** | Quick testing, production | Privacy-focused, cost control |
**💡 Recommendation**: Start with OpenAI for quick testing, then switch to Ollama for production if privacy/cost is important.
## ⚡ Quickstart
For those familiar with Docker, here are the essential steps:
### Option A: Using OpenAI (Recommended for beginners)
```bash
# 1. Clone the repository
git clone https://github.com/AppliedAIMuse/TradingAgents.git
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
# 2. Create the environment file
# 2. Create and configure environment file
cp .env.example .env
# Edit .env: Set LLM_PROVIDER=openai and add your OPENAI_API_KEY
# 3. Edit .env and set your API Keys or pick local LLM settings to run locally
# 3. Build and run with OpenAI
docker compose --profile openai build
docker compose --profile openai run -it app-openai
```
# 4. Build the app
docker-compose build
### Option B: Using Local Ollama (Free but requires more setup)
```bash
# 1. Clone the repository
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
# 2. Create environment file
cp .env.example .env
# Edit .env: Set LLM_PROVIDER=ollama
# 3. Start Ollama service
docker compose --profile ollama up -d --build
# 4. Initialize models (first time only)
./init-ollama.sh
# 5. Run the command-line app
docker-compose run -it app
docker compose --profile ollama run -it app-ollama
```
## Step-by-Step Instructions
@@ -48,75 +82,213 @@ The application is configured using an environment file. Create your own `.env`
cp .env.example .env
```
Next, open the `.env` file and customize the settings. The most important variables are `LLM_PROVIDER` and `OPENAI_API_KEY`.
#### Option A: OpenAI Configuration (Recommended)
* **To use the local Ollama server:**
```env
LLM_PROVIDER="ollama"
```
* **To use external provider like OpenAI:**
```env
LLM_PROVIDER="openai"
OPENAI_API_KEY="your-api-key-here"
```
> **Note:** If you use an external provider, the Ollama service will not start, saving system resources.
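Because each service is gated behind a Docker Compose profile, you can confirm what a given profile will start before running anything. A quick check (service names as defined in this commit's `docker-compose.yml`):

```bash
# Lists only the services enabled by the selected profile
docker compose --profile openai config --services   # expect: app-openai
docker compose --profile ollama config --services   # expect: ollama, app-ollama
```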
Edit your `.env` file and set:
### Step 3: Run with `docker-compose` (Recommended)
```env
# LLM Provider Configuration
LLM_PROVIDER=openai
LLM_BACKEND_URL=https://api.openai.com/v1
This is the simplest way to run the entire application.
# API Keys
OPENAI_API_KEY=your-actual-openai-api-key-here
FINNHUB_API_KEY=your-finnhub-api-key-here
#### Build and Start the Containers
The following command will build the Docker image, download the required LLM models (if using Ollama), and start the application.
```bash
# Use --build the first time or when you change dependencies
docker-compose build
# On subsequent runs, you can run directly
docker-compose run -it app
# Agent Configuration
MAX_DEBATE_ROUNDS=1
ONLINE_TOOLS=True
```
The first time you run this, it may take several minutes to download the base image and the LLM models. Subsequent builds will be much faster thanks to Docker's caching.
**Benefits of OpenAI:**
- ✅ No local setup required
- ✅ Higher quality responses (GPT-4o)
- ✅ Faster startup (no model downloads)
- ✅ No GPU/CPU requirements
- ❌ Requires API costs ($0.01-0.05 per query)
#### Option B: Local Ollama Configuration (Free)
Edit your `.env` file and set:
```env
# LLM Provider Configuration
LLM_PROVIDER=ollama
LLM_BACKEND_URL=http://ollama:11434/v1
# Local Models
LLM_DEEP_THINK_MODEL=llama3.2
LLM_QUICK_THINK_MODEL=qwen3
LLM_EMBEDDING_MODEL=nomic-embed-text
# API Keys (still need Finnhub for market data)
FINNHUB_API_KEY=your-finnhub-api-key-here
# Agent Configuration
MAX_DEBATE_ROUNDS=1
ONLINE_TOOLS=True
```
**Benefits of Ollama:**
- ✅ Completely free after setup
- ✅ Data privacy (runs locally)
- ✅ Works offline
- ❌ Requires initial setup and model downloads
- ❌ Slower responses than cloud APIs
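Switching providers later is just an `.env` edit plus the matching profile. A minimal sketch (GNU `sed` assumed; on macOS use `sed -i ''`):

```bash
# Point the app at the local Ollama server instead of OpenAI
# (also adjust LLM_BACKEND_URL in .env, per the Ollama example above)
sed -i 's/^LLM_PROVIDER=.*/LLM_PROVIDER=ollama/' .env
docker compose --profile ollama up -d --build
```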
### Step 3: Run with Docker Compose
Choose the appropriate method based on your LLM provider configuration:
#### Option A: Running with OpenAI
### Running on GPU machines
For running on GPU machines, uncomment the GPU `deploy` resources section in `docker-compose.yml` and run the commands.
```bash
docker-compose run -it app
# Build the app container
docker compose --profile openai build
# Test OpenAI connection (optional)
docker compose --profile openai run --rm app-openai python tests/test_openai_connection.py
# Run the trading agents
docker compose --profile openai run -it app-openai
```
**No additional services needed** - the app connects directly to OpenAI's API.
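Both providers are ultimately reached through the same OpenAI-compatible client, which is why only `LLM_PROVIDER` and `LLM_BACKEND_URL` differ between the two setups. A minimal sketch (environment variable names as in `.env.example`; the prompt and fallback model name are placeholders):

```python
import os
from openai import OpenAI

# One client for both backends: only the base URL and model names change.
# Ollama's /v1 endpoint ignores the API key, so any non-empty value works there.
client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY", "ollama"),
    base_url=os.environ.get("LLM_BACKEND_URL", "https://api.openai.com/v1"),
)
response = client.chat.completions.create(
    model=os.environ.get("LLM_QUICK_THINK_MODEL", "gpt-4o-mini"),
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=10,
)
print(response.choices[0].message.content)
```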
#### Option B: Running with Ollama (CPU)
```bash
# Start the Ollama service
docker compose --profile ollama up -d --build
# Initialize Ollama models (first time only)
# Linux/macOS:
./init-ollama.sh
# Windows Command Prompt:
init-ollama.bat
# Test Ollama connection (optional)
docker compose --profile ollama exec app-ollama python tests/test_ollama_connection.py
# Run the trading agents
docker compose --profile ollama run -it app-ollama
```
#### Option C: Running with Ollama (GPU)
First, uncomment the GPU configuration in docker-compose.yml:
```yaml
# deploy:
#   resources:
#     reservations:
#       devices:
#         - capabilities: ["gpu"]
```
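Uncommented (and indented as in the compose file), the section reads:

```yaml
deploy:
  resources:
    reservations:
      devices:
        - capabilities: ["gpu"]
```

Note that GPU scheduling also requires the NVIDIA Container Toolkit on the host.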
Then run:
```bash
# Start with GPU support
docker compose --profile ollama up -d --build
# Initialize Ollama models (first time only)
# Linux/macOS:
./init-ollama.sh
# Windows Command Prompt:
init-ollama.bat
# Run the trading agents
docker compose --profile ollama run -it app-ollama
```
#### View Logs
To view the application logs in real-time, you can run:
```bash
docker-compose logs -f
docker compose --profile ollama logs -f
```
#### Stop the Containers
To stop and remove the containers, press `Ctrl + C` in the terminal where `docker-compose run` is running, or run the following command from another terminal:
To stop and remove the containers:
```bash
docker-compose down
docker compose --profile ollama down
```
### Step 4: Verify Your Setup (Optional)
### Step 4: Verify the Ollama Setup (Optional)
#### For OpenAI Setup:
If you are using `LLM_PROVIDER="ollama"`, you can verify that the Ollama server is running correctly and has the necessary models.
Run the verification script inside the running container:
```bash
docker-compose exec app python test_ollama_connection.py
# Test OpenAI API connection
docker compose --profile openai run --rm app-openai python tests/test_openai_connection.py
# Run a quick trading analysis test
docker compose --profile openai run --rm app-openai python tests/test_setup.py
# Run all tests automatically
docker compose --profile openai run --rm app-openai python tests/run_tests.py
```
### Step 5: Run Ollama server commands (Optional)
#### For Ollama Setup:
If you are using `LLM_PROVIDER="ollama"`, you can run any of the Ollama server commands, such as listing all available models, using:
```bash
docker-compose exec app ollama list
# Test Ollama connection
docker compose --profile ollama exec app-ollama python tests/test_ollama_connection.py
# Run a quick trading analysis test
docker compose --profile ollama exec app-ollama python tests/test_setup.py
# Run all tests automatically
docker compose --profile ollama exec app-ollama python tests/run_tests.py
```
### Step 5: Model Management (Optional)
#### View and Manage Models
```bash
# List all available models
docker compose --profile ollama exec ollama ollama list
# Check model cache size
du -sh ./ollama_data
# Pull additional models (cached locally)
docker compose --profile ollama exec ollama ollama pull llama3.2
# Remove a model (frees up cache space)
docker compose --profile ollama exec ollama ollama rm model-name
```
#### Model Cache Benefits
- **Persistence**: Models downloaded once are reused across container restarts
- **Speed**: Subsequent startups are much faster (seconds vs minutes)
- **Bandwidth**: No need to re-download multi-GB models
- **Offline**: Once cached, models work without internet connection
#### Troubleshooting Cache Issues
```bash
# If models seem corrupted, clear cache and re-initialize
docker compose --profile ollama down
rm -rf ./ollama_data
docker compose --profile ollama up -d
# Linux/macOS:
./init-ollama.sh
# Windows Command Prompt:
init-ollama.bat
```
✅ **Expected Output:**
```
Testing Ollama connection:
Backend URL: http://localhost:11434/v1
@@ -135,40 +307,46 @@ Testing Ollama connection:
If you prefer not to use `docker-compose`, you can build and run the container manually.
**1. Build the Docker Image:**
```bash
docker build -t trading-agents .
```
**2. Test local Ollama setup (Optional):**
Make sure you have a `.env` file configured as described in Step 2. If you are using `LLM_PROVIDER="ollama"`, you can verify that the Ollama server is running correctly and has the necessary models.
```bash
docker run -it --env-file .env trading-agents python test_ollama_connection.py
```
The command above picks up environment settings from the `.env` file. Alternatively, you can pass values directly:
```bash
docker run -it \
    -e LLM_PROVIDER="ollama" \
    -e LLM_BACKEND_URL="http://localhost:11434/v1" \
    -e LLM_DEEP_THINK_MODEL="qwen3:0.6b" \
    -e LLM_EMBEDDING_MODEL="nomic-embed-text" \
    trading-agents \
    python test_ollama_connection.py
```
To prevent re-downloading Ollama models, mount a folder from your host and run:
```bash
docker run -it \
    -e LLM_PROVIDER="ollama" \
    -e LLM_BACKEND_URL="http://localhost:11434/v1" \
    -e LLM_DEEP_THINK_MODEL="qwen3:0.6b" \
    -e LLM_EMBEDDING_MODEL="nomic-embed-text" \
    -v ./ollama_cache:/app/.ollama \
    trading-agents \
    python test_ollama_connection.py
docker run -it --env-file .env trading-agents python tests/test_ollama_connection.py
```
The command above picks up environment settings from the `.env` file. Alternatively, you can pass values directly:
```bash
docker run -it \
    -e LLM_PROVIDER="ollama" \
    -e LLM_BACKEND_URL="http://localhost:11434/v1" \
    -e LLM_DEEP_THINK_MODEL="qwen3:0.6b" \
    -e LLM_EMBEDDING_MODEL="nomic-embed-text" \
    trading-agents \
    python tests/test_ollama_connection.py
```
To prevent re-downloading Ollama models, mount a folder from your host and run:
```bash
docker run -it \
    -e LLM_PROVIDER="ollama" \
    -e LLM_BACKEND_URL="http://localhost:11434/v1" \
    -e LLM_DEEP_THINK_MODEL="qwen3:0.6b" \
    -e LLM_EMBEDDING_MODEL="nomic-embed-text" \
    -v ./ollama_cache:/app/.ollama \
    trading-agents \
    python tests/test_ollama_connection.py
```
**3. Run the Docker Container:**
Make sure you have a `.env` file configured as described in Step 2.
```bash
docker run --rm -it \
--env-file .env \
@@ -179,7 +357,8 @@ docker run --rm -it \
```
**4. Run on GPU machine:**
To run on a GPU machine, pass the `--gpus=all` flag to the `docker run` command:
```bash
docker run --rm -it \
--gpus=all \
@@ -192,20 +371,72 @@ docker run --rm -it \
## Configuration Details
### Test Suite Organization
All test scripts are organized in the `tests/` directory:
```
tests/
├── __init__.py # Python package initialization
├── run_tests.py # Automated test runner
├── test_openai_connection.py # OpenAI API connectivity tests
├── test_ollama_connection.py # Ollama connectivity tests
└── test_setup.py # General setup and configuration tests
```
**Automated Testing:**
```bash
# Run all tests automatically (detects provider) - from project root
python tests/run_tests.py
# Run specific test - from project root
python tests/test_openai_connection.py
python tests/test_ollama_connection.py
python tests/test_setup.py
```
**⚠️ Important**: When running tests locally (outside Docker), always run from the **project root directory**, not from inside the `tests/` folder. The Docker commands automatically handle this.
### Live Reloading
The project directory is mounted into the container at `/app`. This means any changes you make to the source code on your local machine are reflected instantly in the running container, without needing to rebuild the image.
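This works because the project root is bind-mounted over `/app`, as in the `app-ollama` service of this commit's `docker-compose.yml`:

```yaml
volumes:
  - .:/app           # bind mount: local edits appear in the container immediately
  - ./data:/app/data
```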
### Persistent Data
### Persistent Data & Model Caching
The following volumes are used to persist data between container runs:
* `./data`: Stores any data generated by or used by the application.
* `.ollama`: Caches the Ollama models, so they don't need to be re-downloaded every time you restart the container.
- **`./data`**: Stores application data, trading reports, and cached market data
- **`./ollama_data`**: Caches downloaded Ollama models (typically 1-4GB per model)
#### Model Cache Management
The Ollama models are automatically cached in `./ollama_data/` on your host machine:
- **First run**: Models are downloaded automatically (may take 5-15 minutes depending on internet speed)
- **Subsequent runs**: Models are reused from cache, startup is much faster
- **Cache location**: `./ollama_data/` directory in your project folder
- **Cache size**: Typically 2-6GB total for the required models
```bash
# Check cache size
du -sh ./ollama_data
# Clean cache if needed (will require re-downloading models)
rm -rf ./ollama_data
# List cached models
docker compose --profile ollama exec ollama ollama list
```
### GPU troubleshooting
If the model runs very slowly on a GPU machine, make sure you have the latest GPU drivers installed and that the GPU is working with Docker. For example, you can check NVIDIA GPUs by running:
```bash
docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
# or
nvidia-smi
```

View File

@@ -16,10 +16,6 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    git \
    && apt-get clean
# Install Ollama in builder stage with cache mount for downloads
RUN --mount=type=cache,target=/tmp/ollama-cache \
    curl -fsSL https://ollama.com/install.sh | sh
# Create virtual environment
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
@@ -36,8 +32,7 @@ FROM python:3.9-slim-bookworm AS runtime
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100 \
    OLLAMA_HOST=0.0.0.0
    PIP_DEFAULT_TIMEOUT=100
# Install runtime dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
@@ -47,9 +42,6 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    git \
    && apt-get clean
# Copy Ollama from builder stage instead of installing again
COPY --from=builder /usr/local/bin/ollama /usr/local/bin/ollama
# Copy virtual environment from builder stage
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
@@ -60,10 +52,6 @@ RUN groupadd -r appuser && useradd -r -g appuser -s /bin/bash -d /app appuser
# Create app directory
WORKDIR /app
# Copy the entrypoint script
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
# Copy the application code
COPY . .
@@ -73,11 +61,5 @@ RUN chown -R appuser:appuser /app
# Switch to non-root user
USER appuser
# Set the entrypoint
ENTRYPOINT ["/bin/sh", "-c", "if [ \"$LLM_PROVIDER\" = \"ollama\" ]; then ./docker-entrypoint.sh; else exec \"$@\"; fi", "--"]
# Default command (can be overridden, e.g., by pytest command in CI)
CMD ["python", "-m", "cli.main"]
EXPOSE 11434

View File

@@ -1,38 +1,74 @@
version: '3.8' # Specify a version
version: "3.8"

services:
  app:
    build: . # Use the Dockerfile in the current directory
  # Ollama service for local LLM
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    network_mode: host
    volumes:
      - .:/app # Mount current directory to /app in container for live code changes
      - ./.ollama:/app/.ollama # Cache Ollama models
      - ./data:/app/data # Mount data directory for data files
    env_file:
      - .env # Load environment variables from files.env
    #environment:
    #  - LLM_PROVIDER=ollama
    #  - LLM_BACKEND_URL=http://localhost:11434/v1
    #  - LLM_DEEP_THINK_MODEL=qwen3:0.6b
    #  - LLM_QUICK_THINK_MODEL=qwen3:0.6b
    #  - LLM_EMBEDDING_MODEL=nomic-embed-text
    #  - MAX_DEBATE_ROUNDS=1
    #  - ONLINE_TOOLS=False
    # The default command in the Dockerfile is `python main.py`.
    # For running tests with compose, one can use:
    # docker-compose run --rm app pytest tests/test_main.py
    # Or, we can set a default command here to run tests:
    #command: python test_ollama_connection.py # Uncomment to run a specific test script
    #command: python -m cli.main # Uncomment to run cli interface
    #command: python -m main # uncomment to run the main application
    tty: true # Keep the container running
    stdin_open: true # Keep stdin open for interactive mode
      - ./ollama_data:/root/.ollama
    # Uncomment for GPU support
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - capabilities: ["gpu"]
    profiles:
      - ollama
    # Uncomment the following lines to enable GPU support
    # For more information, refer to the Docker documentation: https://docs.docker.com/compose/how-tos/gpu-support/
    #deploy:
    #  resources:
    #    reservations:
    #      devices:
    #        - capabilities: ["gpu"]
    ports:
      - "11434:11434" # Expose port 11434 for Ollama

  # App container for Ollama setup
  app-ollama:
    build:
      context: .
    container_name: trading-agents-ollama
    network_mode: host
    volumes:
      - .:/app
      - ./data:/app/data
    env_file:
      - .env
    environment:
      - LLM_BACKEND_URL=http://localhost:11434/v1
      - LLM_PROVIDER=ollama
    depends_on:
      - ollama
    tty: true
    stdin_open: true
    profiles:
      - ollama

  # App container for OpenAI setup (no Ollama dependency)
  app-openai:
    build:
      context: .
    container_name: trading-agents-openai
    network_mode: host
    volumes:
      - .:/app
      - ./data:/app/data
    env_file:
      - .env
    environment:
      - LLM_PROVIDER=openai
      - LLM_BACKEND_URL=https://api.openai.com/v1
    tty: true
    stdin_open: true
    profiles:
      - openai

  # Generic app container (uses .env settings as-is)
  app:
    build:
      context: .
    container_name: trading-agents
    network_mode: host
    volumes:
      - .:/app
      - ./data:/app/data
    env_file:
      - .env
    tty: true
    stdin_open: true
    profiles:
      - default

View File

@@ -1,56 +0,0 @@
#!/bin/bash
set -e # Exit immediately if a command exits with a non-zero status.

# Start Ollama serve in the background
echo "Starting Ollama service..."
ollama serve &
OLLAMA_PID=$!

# Wait for Ollama to be ready by checking the API endpoint
echo "Waiting for Ollama to be ready..."
max_attempts=30
attempt=0
while [ $attempt -lt $max_attempts ]; do
    if curl -s http://localhost:11434/api/tags > /dev/null 2>&1; then
        echo "Ollama is ready!"
        break
    fi
    echo "Waiting for Ollama... (attempt $((attempt + 1))/$max_attempts)"
    sleep 2
    attempt=$((attempt + 1))
done

if [ $attempt -eq $max_attempts ]; then
    echo "Error: Ollama failed to start within the expected time"
    exit 1
fi

# Pull the required model. Use LLM_DEEP_THINK_MODEL, default to qwen3:0.6b if not set.
MODEL_TO_PULL=${LLM_DEEP_THINK_MODEL:-"qwen3:0.6b"}
echo "Pulling Ollama model: $MODEL_TO_PULL..."
ollama pull "$MODEL_TO_PULL"
echo "Model $MODEL_TO_PULL pulled."

echo "Pulling embeddings model..."
ollama pull nomic-embed-text
echo "Embeddings model pulled."

# List models to verify the pull
echo "Listing available models..."
ollama list # List models for verification

# Test the connection before running the main application
# TODO: run based on flag for testing
echo "Testing Ollama connection..."
python test_ollama_connection.py
if [ $? -ne 0 ]; then
    echo "Error: Ollama connection test failed"
    exit 1
fi

echo "Ollama setup complete. Executing command: $@"
# Execute the CMD or the command passed to docker run
exec python -m cli.main "$@"

# Optional: clean up Ollama server on exit (might be complex with exec)
# trap "echo 'Stopping Ollama service...'; kill $OLLAMA_PID; exit 0" SIGINT SIGTERM

97
init-ollama.bat Normal file
View File

@@ -0,0 +1,97 @@
@echo off
setlocal enabledelayedexpansion

echo 🚀 Initializing Ollama models...

REM Define required models
set DEEP_THINK_MODEL=qwen3:0.6b
set EMBEDDING_MODEL=nomic-embed-text

REM Wait for Ollama to be ready
echo ⏳ Waiting for Ollama service to start...
set max_attempts=30
set attempt=0

:wait_loop
if %attempt% geq %max_attempts% goto timeout_error
docker compose --profile ollama exec ollama ollama list >nul 2>&1
if %errorlevel% equ 0 (
    echo ✅ Ollama is ready!
    goto ollama_ready
)
set /a attempt=%attempt%+1
echo Waiting for Ollama... (attempt %attempt%/%max_attempts%)
timeout /t 2 /nobreak >nul
goto wait_loop

:timeout_error
echo ❌ Error: Ollama failed to start within the expected time
exit /b 1

:ollama_ready
REM Check cache directory
if exist ".\ollama_data" (
    echo 📁 Found existing ollama_data cache directory
    for /f "tokens=*" %%a in ('dir ".\ollama_data" /s /-c ^| find "bytes"') do (
        echo Cache directory exists
    )
) else (
    echo 📁 Creating ollama_data cache directory...
    mkdir ".\ollama_data"
)

REM Get list of currently available models
echo 🔍 Checking for existing models...
docker compose --profile ollama exec ollama ollama list > temp_models.txt 2>nul
if %errorlevel% neq 0 (
    echo > temp_models.txt
)

REM Check if deep thinking model exists
findstr /c:"%DEEP_THINK_MODEL%" temp_models.txt >nul
if %errorlevel% equ 0 (
    echo ✅ Deep thinking model '%DEEP_THINK_MODEL%' already available
) else (
    echo 📥 Pulling deep thinking model: %DEEP_THINK_MODEL%...
    docker compose --profile ollama exec ollama ollama pull %DEEP_THINK_MODEL%
    if %errorlevel% equ 0 (
        echo ✅ Model %DEEP_THINK_MODEL% pulled successfully
    ) else (
        echo ❌ Failed to pull model %DEEP_THINK_MODEL%
        goto cleanup
    )
)

REM Check if embedding model exists
findstr /c:"%EMBEDDING_MODEL%" temp_models.txt >nul
if %errorlevel% equ 0 (
    echo ✅ Embedding model '%EMBEDDING_MODEL%' already available
) else (
    echo 📥 Pulling embedding model: %EMBEDDING_MODEL%...
    docker compose --profile ollama exec ollama ollama pull %EMBEDDING_MODEL%
    if %errorlevel% equ 0 (
        echo ✅ Model %EMBEDDING_MODEL% pulled successfully
    ) else (
        echo ❌ Failed to pull model %EMBEDDING_MODEL%
        goto cleanup
    )
)

REM List all available models
echo 📋 Available models:
docker compose --profile ollama exec ollama ollama list

REM Show cache info
if exist ".\ollama_data" (
    echo 💾 Model cache directory: .\ollama_data
)

echo 🎉 Ollama initialization complete!
echo 💡 Tip: Models are cached in .\ollama_data and will be reused on subsequent runs

:cleanup
if exist temp_models.txt del temp_models.txt
exit /b 0

78
init-ollama.sh Normal file
View File

@@ -0,0 +1,78 @@
#!/bin/bash
set -e

echo "🚀 Initializing Ollama models..."

# Define required models
DEEP_THINK_MODEL="qwen3:0.6b"
EMBEDDING_MODEL="nomic-embed-text"

# Wait for Ollama to be ready
echo "⏳ Waiting for Ollama service to start..."
max_attempts=30
attempt=0
while [ $attempt -lt $max_attempts ]; do
    if docker compose --profile ollama exec ollama ollama list > /dev/null 2>&1; then
        echo "✅ Ollama is ready!"
        break
    fi
    echo " Waiting for Ollama... (attempt $((attempt + 1))/$max_attempts)"
    sleep 2
    attempt=$((attempt + 1))
done

if [ $attempt -eq $max_attempts ]; then
    echo "❌ Error: Ollama failed to start within the expected time"
    exit 1
fi

# Check cache directory
if [ -d "./ollama_data" ]; then
    echo "📁 Found existing ollama_data cache directory"
    cache_size=$(du -sh ./ollama_data 2>/dev/null | cut -f1 || echo "0")
    echo " Cache size: $cache_size"
else
    echo "📁 Creating ollama_data cache directory..."
    mkdir -p ./ollama_data
fi

# Get list of currently available models
echo "🔍 Checking for existing models..."
available_models=$(docker compose --profile ollama exec ollama ollama list 2>/dev/null | tail -n +2 | awk '{print $1}' || echo "")

# Function to check if model exists
model_exists() {
    local model_name="$1"
    echo "$available_models" | grep -q "^$model_name"
}

# Pull deep thinking model if not present
if model_exists "$DEEP_THINK_MODEL"; then
    echo "✅ Deep thinking model '$DEEP_THINK_MODEL' already available"
else
    echo "📥 Pulling deep thinking model: $DEEP_THINK_MODEL..."
    docker compose --profile ollama exec ollama ollama pull "$DEEP_THINK_MODEL"
    echo "✅ Model $DEEP_THINK_MODEL pulled successfully"
fi

# Pull embedding model if not present
if model_exists "$EMBEDDING_MODEL"; then
    echo "✅ Embedding model '$EMBEDDING_MODEL' already available"
else
    echo "📥 Pulling embedding model: $EMBEDDING_MODEL..."
    docker compose --profile ollama exec ollama ollama pull "$EMBEDDING_MODEL"
    echo "✅ Model $EMBEDDING_MODEL pulled successfully"
fi

# List all available models
echo "📋 Available models:"
docker compose --profile ollama exec ollama ollama list

# Show cache info
if [ -d "./ollama_data" ]; then
    cache_size=$(du -sh ./ollama_data 2>/dev/null | cut -f1 || echo "unknown")
    echo "💾 Model cache size: $cache_size (stored in ./ollama_data)"
fi

echo "🎉 Ollama initialization complete!"
echo "💡 Tip: Models are cached in ./ollama_data and will be reused on subsequent runs"

185
tests/README.md Normal file
View File

@@ -0,0 +1,185 @@
# TradingAgents Test Suite
This directory contains all test scripts for validating the TradingAgents setup and configuration.
## Test Scripts
### 🧪 `run_tests.py` - Automated Test Runner
**Purpose**: Automatically detects your LLM provider and runs appropriate tests.
**Usage**:
```bash
# Run all tests (auto-detects provider from LLM_PROVIDER env var)
# Always run from project root, not from tests/ directory
python tests/run_tests.py
# In Docker
docker compose --profile openai run --rm app-openai python tests/run_tests.py
docker compose --profile ollama exec app-ollama python tests/run_tests.py
```
**Important**: Always run the test runner from the **project root directory**, not from inside the `tests/` directory. The runner automatically handles path resolution and changes to the correct working directory.
**Features**:
- Auto-detects LLM provider from environment
- Runs provider-specific tests only
- Provides comprehensive test summary
- Handles timeouts and error reporting
---
### 🔌 `test_openai_connection.py` - OpenAI API Tests
**Purpose**: Validates OpenAI API connectivity and functionality.
**Tests**:
- ✅ API key validation
- ✅ Chat completion (using `gpt-4o-mini`)
- ✅ Embeddings (using `text-embedding-3-small`)
- ✅ Configuration validation
**Usage**:
```bash
# From project root
python tests/test_openai_connection.py
# In Docker
docker compose --profile openai run --rm app-openai python tests/test_openai_connection.py
```
**Requirements**:
- `OPENAI_API_KEY` environment variable
- `LLM_PROVIDER=openai`
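For a one-off local run without a `.env` file, the variables can be exported inline (a sketch; the key value is a placeholder):

```bash
export LLM_PROVIDER=openai
export OPENAI_API_KEY="sk-your-key-here"   # placeholder, use your real key
python tests/test_openai_connection.py
```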
---
### 🦙 `test_ollama_connection.py` - Ollama Connectivity Tests
**Purpose**: Validates Ollama server connectivity and model availability.
**Tests**:
- ✅ Ollama API accessibility
- ✅ Model availability (`qwen3:0.6b`, `nomic-embed-text`)
- ✅ OpenAI-compatible API functionality
- ✅ Chat completion and embeddings
**Usage**:
```bash
# From project root
python tests/test_ollama_connection.py
# In Docker
docker compose --profile ollama exec app-ollama python tests/test_ollama_connection.py
```
**Requirements**:
- Ollama server running
- Required models downloaded
- `LLM_PROVIDER=ollama`
---
### ⚙️ `test_setup.py` - General Setup Validation
**Purpose**: Validates basic TradingAgents setup and configuration.
**Tests**:
- ✅ Python package imports
- ✅ Configuration loading
- ✅ TradingAgentsGraph initialization
- ✅ Data access capabilities
**Usage**:
```bash
# From project root
python tests/test_setup.py
# In Docker
docker compose --profile openai run --rm app-openai python tests/test_setup.py
docker compose --profile ollama exec app-ollama python tests/test_setup.py
```
**Requirements**:
- TradingAgents dependencies installed
- Basic environment configuration
---
## Test Results Interpretation
### ✅ Success Indicators
- All tests pass
- API connections established
- Models available and responding
- Configuration properly loaded
### ❌ Common Issues
**OpenAI Tests Failing**:
- Check `OPENAI_API_KEY` is set correctly
- Verify API key has sufficient quota
- Ensure internet connectivity
**Ollama Tests Failing**:
- Verify Ollama service is running
- Check if models are downloaded (`./init-ollama.sh`)
- Confirm `ollama list` shows required models
**Setup Tests Failing**:
- Check Python dependencies are installed
- Verify environment variables are set
- Ensure `.env` file is properly configured
---
## Quick Testing Commands
**⚠️ Important**: Always run these commands from the **project root directory** (not from inside `tests/`):
```bash
# Test everything automatically (from project root)
python tests/run_tests.py
# Test specific provider (from project root)
LLM_PROVIDER=openai python tests/run_tests.py
LLM_PROVIDER=ollama python tests/run_tests.py
# Test individual components (from project root)
python tests/test_openai_connection.py
python tests/test_ollama_connection.py
python tests/test_setup.py
```
**Why from project root?**
- Tests need to import the `tradingagents` package
- The `tradingagents` package is located in the project root
- Running from `tests/` directory would cause import errors
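To illustrate (expected behavior, not verbatim output):

```bash
cd tests && python test_setup.py
# ModuleNotFoundError: No module named 'tradingagents'

cd .. && python tests/test_setup.py
# imports resolve because the package directory is in the working path
```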
---
## Adding New Tests
To add new tests:
1. Create new test script in `tests/` directory
2. Follow the naming convention: `test_<component>.py`
3. Include proper error handling and status reporting
4. Update `run_tests.py` if automatic detection is needed
5. Document the test in this README
**Test Script Template**:
```python
#!/usr/bin/env python3
"""Test script for <component>"""

def test_component():
    """Test <component> functionality."""
    try:
        # Test implementation
        print("✅ Test passed")
        return True
    except Exception as e:
        print(f"❌ Test failed: {e}")
        return False

if __name__ == "__main__":
    success = test_component()
    exit(0 if success else 1)
```

101
tests/run_tests.py Normal file
View File

@@ -0,0 +1,101 @@
#!/usr/bin/env python3
"""
Test runner script for TradingAgents
This script automatically detects the LLM provider and runs appropriate tests.
"""
import os
import sys
import subprocess


def get_llm_provider():
    """Get the configured LLM provider from environment."""
    return os.environ.get("LLM_PROVIDER", "").lower()


def run_test_script(script_name):
    """Run a test script and return success status."""
    try:
        print(f"🧪 Running {script_name}...")
        result = subprocess.run([sys.executable, script_name],
                                capture_output=True, text=True, timeout=120)
        if result.returncode == 0:
            print(f"{script_name} passed")
            if result.stdout:
                print(f" Output: {result.stdout.strip()}")
            return True
        else:
            print(f"{script_name} failed")
            if result.stderr:
                print(f" Error: {result.stderr.strip()}")
            return False
    except subprocess.TimeoutExpired:
        print(f"{script_name} timed out")
        return False
    except Exception as e:
        print(f"💥 {script_name} crashed: {e}")
        return False


def main():
    """Main test runner function."""
    print("🚀 TradingAgents Test Runner")
    print("=" * 50)

    # Get project root directory (parent of tests directory)
    tests_dir = os.path.dirname(os.path.abspath(__file__))
    project_root = os.path.dirname(tests_dir)
    os.chdir(project_root)

    provider = get_llm_provider()
    print(f"📋 Detected LLM Provider: {provider or 'not set'}")

    tests_run = []
    tests_passed = []

    # Always run setup tests
    if run_test_script("tests/test_setup.py"):
        tests_passed.append("tests/test_setup.py")
    tests_run.append("tests/test_setup.py")

    # Run provider-specific tests
    if provider == "openai":
        print("\n🔍 Running OpenAI-specific tests...")
        if run_test_script("tests/test_openai_connection.py"):
            tests_passed.append("tests/test_openai_connection.py")
        tests_run.append("tests/test_openai_connection.py")
    elif provider == "ollama":
        print("\n🔍 Running Ollama-specific tests...")
        if run_test_script("tests/test_ollama_connection.py"):
            tests_passed.append("tests/test_ollama_connection.py")
        tests_run.append("tests/test_ollama_connection.py")
    else:
        print(f"\n⚠️ Unknown or unset LLM provider: '{provider}'")
        print(" Running all connectivity tests...")
        for test_script in ["tests/test_openai_connection.py", "tests/test_ollama_connection.py"]:
            if run_test_script(test_script):
                tests_passed.append(test_script)
            tests_run.append(test_script)

    # Summary
    print("\n" + "=" * 50)
    print(f"📊 Test Results: {len(tests_passed)}/{len(tests_run)} tests passed")
    for test in tests_run:
        status = "✅ PASS" if test in tests_passed else "❌ FAIL"
        print(f" {test}: {status}")

    if len(tests_passed) == len(tests_run):
        print("\n🎉 All tests passed! TradingAgents is ready to use.")
        return 0
    else:
        print(f"\n⚠️ {len(tests_run) - len(tests_passed)} test(s) failed. Check configuration.")
        return 1


if __name__ == "__main__":
    exit_code = main()
    sys.exit(exit_code)
View File

@@ -65,7 +65,7 @@ def test_ollama_connection():
    # Test 4: Check if the embedding model is available
    try:
        response = requests.get(f"{api_url}/api/tags", timeout=10)
        response = requests.get(f"{backend_url.replace('/v1', '')}/api/tags", timeout=10)
        models = response.json().get("models", [])
        model_names = [m.get("name") for m in models if m.get("name")]

View File

@@ -0,0 +1,142 @@
#!/usr/bin/env python3
"""
Test script to verify OpenAI API connection is working.
"""
import os
import sys
from openai import OpenAI


def test_openai_connection():
    """Test if OpenAI API is accessible and responding."""
    # Get configuration from environment
    api_key = os.environ.get("OPENAI_API_KEY")
    backend_url = os.environ.get("LLM_BACKEND_URL", "https://api.openai.com/v1")
    provider = os.environ.get("LLM_PROVIDER", "openai")

    print(f"Testing OpenAI API connection:")
    print(f" Provider: {provider}")
    print(f" Backend URL: {backend_url}")
    print(f" API Key: {'✅ Set' if api_key and api_key != '<your-openai-key>' else '❌ Not set or using placeholder'}")

    if not api_key or api_key == "<your-openai-key>":
        print("❌ OPENAI_API_KEY is not set or still using placeholder value")
        print(" Please set your OpenAI API key in the .env file")
        return False

    # Test 1: Initialize OpenAI client
    try:
        client = OpenAI(
            api_key=api_key,
            base_url=backend_url
        )
        print("✅ OpenAI client initialized successfully")
    except Exception as e:
        print(f"❌ Failed to initialize OpenAI client: {e}")
        return False

    # Test 2: Test chat completion with a simple query
    try:
        print("🧪 Testing chat completion...")
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # Use the most cost-effective model for testing
            messages=[
                {"role": "user", "content": "Hello! Please respond with exactly: 'OpenAI API test successful'"}
            ],
            max_tokens=50,
            temperature=0
        )
        if response.choices and response.choices[0].message.content:
            content = response.choices[0].message.content.strip()
            print(f"✅ Chat completion successful")
            print(f" Model: {response.model}")
            print(f" Response: {content}")
            print(f" Tokens used: {response.usage.total_tokens if response.usage else 'unknown'}")
        else:
            print("❌ Chat completion returned empty response")
            return False
    except Exception as e:
        print(f"❌ Chat completion test failed: {e}")
        if "insufficient_quota" in str(e).lower():
            print(" 💡 This might be a quota/billing issue. Check your OpenAI account.")
        elif "invalid_api_key" in str(e).lower():
            print(" 💡 Invalid API key. Please check your OPENAI_API_KEY.")
        return False

    # Test 3: Test embeddings (optional, for completeness)
    try:
        print("🧪 Testing embeddings...")
        response = client.embeddings.create(
            model="text-embedding-3-small",  # Cost-effective embedding model
            input="This is a test sentence for embeddings."
        )
        if response.data and len(response.data) > 0 and response.data[0].embedding:
            embedding = response.data[0].embedding
            print(f"✅ Embeddings successful")
            print(f" Model: {response.model}")
            print(f" Embedding dimension: {len(embedding)}")
            print(f" Tokens used: {response.usage.total_tokens if response.usage else 'unknown'}")
        else:
            print("❌ Embeddings returned empty response")
            return False
    except Exception as e:
        print(f"❌ Embeddings test failed: {e}")
        print(" ⚠️ Embeddings test failed but chat completion worked. This is usually fine for basic usage.")
        # Don't return False here as embeddings might not be critical for all use cases

    return True


def test_config_validation():
    """Validate the configuration is properly set for OpenAI."""
    provider = os.environ.get("LLM_PROVIDER", "").lower()
    backend_url = os.environ.get("LLM_BACKEND_URL", "")

    print("\n🔧 Configuration validation:")
    if provider != "openai":
        print(f"⚠️ LLM_PROVIDER is '{provider}', expected 'openai'")
        print(" The app might still work if the provider supports OpenAI-compatible API")
    else:
        print("✅ LLM_PROVIDER correctly set to 'openai'")

    if "openai.com" in backend_url:
        print("✅ Using official OpenAI API endpoint")
    elif backend_url:
        print(f" Using custom endpoint: {backend_url}")
        print(" Make sure this endpoint is OpenAI-compatible")
    else:
        print("⚠️ LLM_BACKEND_URL not set, using default")

    # Check for common environment issues
    finnhub_key = os.environ.get("FINNHUB_API_KEY")
    if not finnhub_key or finnhub_key == "<your_finnhub_api_key_here>":
        print("⚠️ FINNHUB_API_KEY not set - financial data fetching may not work")
    else:
        print("✅ FINNHUB_API_KEY is set")

    return True


if __name__ == "__main__":
    print("🧪 OpenAI API Connection Test\n")
    config_ok = test_config_validation()
    api_ok = test_openai_connection()

    print(f"\n📊 Test Results:")
    print(f" Configuration: {'✅ OK' if config_ok else '❌ Issues'}")
    print(f" API Connection: {'✅ OK' if api_ok else '❌ Failed'}")

    if config_ok and api_ok:
        print("\n🎉 All tests passed! OpenAI API is ready for TradingAgents.")
        print("💡 You can now run the trading agents with OpenAI as the LLM provider.")
    else:
        print("\n💥 Some tests failed. Please check your configuration and API key.")
        print("💡 Make sure OPENAI_API_KEY is set correctly in your .env file.")

    sys.exit(0 if (config_ok and api_ok) else 1)

122
tests/test_setup.py Normal file
View File

@@ -0,0 +1,122 @@
#!/usr/bin/env python3
"""
Test script to verify the complete TradingAgents setup works end-to-end.
"""
import os
import sys
from datetime import datetime, timedelta


def test_basic_setup():
    """Test basic imports and configuration"""
    try:
        from tradingagents.graph.trading_graph import TradingAgentsGraph
        from tradingagents.default_config import DEFAULT_CONFIG
        print("✅ Basic imports successful")
        return True
    except Exception as e:
        print(f"❌ Basic import failed: {e}")
        return False


def test_config():
    """Test configuration loading"""
    try:
        from tradingagents.default_config import DEFAULT_CONFIG

        # Check required environment variables
        required_vars = ['LLM_PROVIDER', 'OPENAI_API_KEY', 'FINNHUB_API_KEY']
        missing_vars = []
        for var in required_vars:
            if not os.environ.get(var):
                missing_vars.append(var)

        if missing_vars:
            print(f"⚠️ Missing environment variables: {missing_vars}")
            print(" This may cause issues with data fetching or LLM calls")
        else:
            print("✅ Required environment variables set")

        print(f"✅ Configuration loaded successfully")
        print(f" LLM Provider: {os.environ.get('LLM_PROVIDER', 'not set')}")
        print(f" OPENAI API KEY: {'set' if os.environ.get('OPENAI_API_KEY') else 'not set'}")  # report presence only; never print the key itself
        print(f" Backend URL: {os.environ.get('LLM_BACKEND_URL', 'not set')}")
        return True
    except Exception as e:
        print(f"❌ Configuration test failed: {e}")
        return False


def test_trading_graph_init():
    """Test TradingAgentsGraph initialization"""
    try:
        from tradingagents.graph.trading_graph import TradingAgentsGraph
        from tradingagents.default_config import DEFAULT_CONFIG

        # Create a minimal config for testing
        config = DEFAULT_CONFIG.copy()
        config["online_tools"] = False  # Use cached data for testing
        config["max_debate_rounds"] = 1  # Minimize API calls

        ta = TradingAgentsGraph(debug=True, config=config)
        print("✅ TradingAgentsGraph initialized successfully")
        return True
    except Exception as e:
        print(f"❌ TradingAgentsGraph initialization failed: {e}")
        return False


def test_data_access():
    """Test if we can access basic data"""
    try:
        from tradingagents.dataflows.yfin_utils import get_stock_data

        # Test with a simple stock query
        test_date = (datetime.now() - timedelta(days=30)).strftime('%Y-%m-%d')

        # This should work even without API keys if using cached data
        data = get_stock_data("AAPL", test_date)
        if data:
            print("✅ Data access test successful")
            return True
        else:
            print("⚠️ Data access returned empty results (may be expected with cached data)")
            return True
    except Exception as e:
        print(f"❌ Data access test failed: {e}")
        return False


def run_all_tests():
    """Run all tests"""
    print("🧪 Running TradingAgents setup tests...\n")

    tests = [
        ("Basic Setup", test_basic_setup),
        ("Configuration", test_config),
        ("TradingGraph Init", test_trading_graph_init),
        ("Data Access", test_data_access),
    ]

    passed = 0
    total = len(tests)

    for test_name, test_func in tests:
        print(f"Running {test_name} test...")
        try:
            if test_func():
                passed += 1
            print()
        except Exception as e:
            print(f"{test_name} test crashed: {e}\n")

    print(f"📊 Test Results: {passed}/{total} tests passed")
    if passed == total:
        print("🎉 All tests passed! TradingAgents setup is working correctly.")
        return True
    else:
        print("⚠️ Some tests failed. Check the output above for details.")
        return False


if __name__ == "__main__":
    success = run_all_tests()
    sys.exit(0 if success else 1)