Docker support with tests for local ollama

This commit is contained in:
chauhang 2025-06-20 21:33:00 -07:00
parent 1e86e74314
commit 5e2fc25dfe
12 changed files with 442 additions and 15 deletions

32
.dockerignore Normal file
View File

@@ -0,0 +1,32 @@
# Git specific
.git
.gitignore
# Python specific
__pycache__/
*.pyc
*.pyo
*.pyd
*.egg-info/
.Python
env/
venv/
.env
# IDE/Editor specific
.vscode/
.idea/
*.swp
*.swo
# Ollama cache directory (if it somehow ends up in build context)
.ollama_cache/
# Docker specific files (should not be part of the image content itself)
Dockerfile
docker-compose.yml
.dockerignore
# OS specific
.DS_Store
Thumbs.db

14
.env.example Normal file
View File

@@ -0,0 +1,14 @@
# LLM Configuration
LLM_PROVIDER="ollama"
LLM_BACKEND_URL="http://localhost:11434/v1" # For Ollama running in the same container, /v1 added for OpenAI compatibility
LLM_DEEP_THINK_MODEL="qwen3:0.6b"
LLM_QUICK_THINK_MODEL="qwen3:0.6b"
OPENAI_API_KEY="ollama-key" # Optional, if you want to use OpenAI models or ollama models with OpenAI API compatibility
# Agent Configuration
MAX_DEBATE_ROUNDS="1"
ONLINE_TOOLS="False" # Set to True if you want to enable tools that access the internet
# Note: For local Docker Compose when Ollama runs on the host machine (not in container),
# you might use LLM_BACKEND_URL="http://host.docker.internal:11434/v1"
# The current docker-compose setup runs Ollama inside the app service.

1
.gitattributes vendored Normal file
View File

@@ -0,0 +1 @@
docker-entrypoint.sh text eol=lf

94
Docker-readme.md Normal file
View File

@@ -0,0 +1,94 @@
# Local run with Docker or docker-compose
## Environment Configuration with .env
This project uses a `.env` file to manage environment-specific configurations for local development, especially when using Docker Compose. This allows you to customize settings without modifying version-controlled files like `docker-compose.yml`.
### Setup
1. **Create your local `.env` file:**
Copy the example configuration to a new `.env` file:
```bash
cp .env.example .env
```
2. **Customize your `.env` file:**
Open the `.env` file in a text editor and modify the variables as needed for your local setup. For example, you might want to change LLM models or API keys (if applicable in the future).
### How it Works with Docker Compose
When you run `docker-compose up` or `docker-compose run`, Docker Compose automatically looks for a `.env` file in the project root directory (where `docker-compose.yml` is located) and loads the environment variables defined in it. These variables are then passed into the container environment for the `app` service.
The `.env` file itself is ignored by Git (as specified in `.gitignore`), so your local configurations will not be committed to the repository.
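To sanity-check that Compose is actually picking up your `.env` values, something along these lines works (a minimal sketch: `--entrypoint env` skips the Ollama startup script, and the grep pattern assumes the variable prefixes used in `.env.example`):
```bash
# Print the resolved Compose configuration (the .env file is read from the project root)
docker-compose config

# Bypass the Ollama entrypoint and print the LLM-related variables as the app service sees them
docker-compose run --rm --entrypoint env app | grep -E '^(LLM_|MAX_DEBATE|ONLINE_)'
```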
## Running with Docker
This project supports running within a Docker container, which ensures a consistent environment for development and testing.
### Prerequisites
- Docker installed and running on your system.
### Build the Docker Image
Navigate to the root directory of the project (where the `Dockerfile` is located) and run:
```bash
docker build -t tradingagents .
```
### Test local Ollama setup
To test Ollama connectivity and the local model:
```bash
docker run --rm \
-e LLM_PROVIDER="ollama" \
-e LLM_BACKEND_URL="http://localhost:11434/v1" \
-e LLM_DEEP_THINK_MODEL="qwen3:0.6b" \
-e LLM_QUICK_THINK_MODEL="qwen3:0.6b" \
-e MAX_DEBATE_ROUNDS="1" \
-e ONLINE_TOOLS="False" \
tradingagents \
python test_ollama_connection.py
```
**Note on Ollama for Local Docker:**
The `LLM_BACKEND_URL` is set to `http://localhost:11434/v1`. The entrypoint script starts Ollama inside the container, so `localhost` here refers to the container itself; the `/v1` suffix is appended for OpenAI API compatibility. If you instead want to use an Ollama instance running on your host machine, set `LLM_BACKEND_URL` to `http://host.docker.internal:11434/v1`.
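If you want to check the OpenAI-compatible endpoint by hand before running the Python test, a plain `curl` call along these lines should work (a sketch, assuming Ollama is reachable at `localhost:11434` and `qwen3:0.6b` has already been pulled):
```bash
# Ask the OpenAI-compatible chat completions endpoint for a short reply
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen3:0.6b",
        "messages": [{"role": "user", "content": "Say: test successful"}]
      }'
```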
### Run the Main Application
To run the `main.py` script (default command for the Docker image):
```bash
docker run --rm \
 -e LLM_PROVIDER="ollama" \
 -e LLM_BACKEND_URL="http://localhost:11434/v1" \
 tradingagents
```
Add any other environment variables `main.py` needs (model names, `MAX_DEBATE_ROUNDS`, `ONLINE_TOOLS`, etc.) with additional `-e` flags, and adjust the values for your local setup.
### Using Docker Compose
For a more streamlined local development experience, you can use Docker Compose. The `docker-compose.yml` file in the project root is configured to use the existing `Dockerfile`.
**Build and Run:**
The `docker-compose.yml` does not set its own `command`, so by default the `app` service runs the Dockerfile's `CMD` (`python main.py`). Uncomment the `command` line in `docker-compose.yml` if you prefer `docker-compose up` to run a test script instead.
```bash
docker-compose up --build
```
This command builds the image (if it is not already built or if changes are detected) and starts the `app` service with its default command. Note that `docker-compose up` does not remove the container when it exits; use `docker-compose down` to clean up, or run one-off containers with `--rm`:
```bash
docker-compose run --rm app # Uses the default command (the Dockerfile's CMD, python main.py)
```
To explicitly run the Ollama connection test (or the test suite with `pytest tests/test_main.py`):
```bash
docker-compose run --rm app python test_ollama_connection.py
```
**Run the Main Application:**
To run the `main.py` script, pass the command explicitly:
```bash
docker-compose run --rm app python main.py
```
Or, you can set the `command` in `docker-compose.yml` if you primarily want `docker-compose up` to run a specific script instead of the main application.
**Environment Variables:**
The necessary environment variables (`LLM_PROVIDER`, `LLM_BACKEND_URL`, model names, etc.) are loaded into the `app` service from your `.env` file via the `env_file` entry in `docker-compose.yml`. Ollama is started by the entrypoint script within the same container, so `LLM_BACKEND_URL` is set to `http://localhost:11434/v1`.
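To override a single variable for a one-off run without editing `.env`, you can pass it with `-e` (a sketch; the values below are only examples, and the model must be one the entrypoint can pull):
```bash
# Per-run overrides take precedence over the values loaded from .env
docker-compose run --rm \
  -e MAX_DEBATE_ROUNDS="2" \
  -e LLM_QUICK_THINK_MODEL="qwen3:0.6b" \
  app python main.py
```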
**Live Code Reloading:**
The current directory is mounted as a volume into the container at `/app`. This means changes you make to your local code will be reflected inside the container, which is useful for development. You might need to rebuild the image with `docker-compose build` or `docker-compose up --build` if you change dependencies in `requirements.txt` or modify the `Dockerfile` itself.
**Ollama Model Caching:**
To prevent re-downloading Ollama models, `docker-compose.yml` now mounts `./.ollama` on your host to `/app/.ollama` in the container. Models pulled by Ollama will be stored in `./.ollama/models` locally and persist across runs. Ensure this directory is in your `.gitignore`. If Docker has permission issues creating this directory, you might need to create it manually (`mkdir .ollama`).
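If the cache directory does not exist yet, creating it yourself and ignoring it in Git avoids both the permission issue and accidental commits (a minimal sketch, assuming you manage ignores through `.gitignore`):
```bash
# Create the local Ollama model cache and add it to .gitignore if it is not already listed
mkdir -p .ollama
grep -qxF '.ollama/' .gitignore || echo '.ollama/' >> .gitignore
```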

82
Dockerfile Normal file
View File

@@ -0,0 +1,82 @@
# syntax=docker/dockerfile:1.4
# Build stage for dependencies
FROM python:3.9-slim-bookworm AS builder
# Set environment variables for build
ENV PYTHONDONTWRITEBYTECODE=1 \
PIP_DISABLE_PIP_VERSION_CHECK=on \
PIP_DEFAULT_TIMEOUT=100
# Install build dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y --no-install-recommends \
curl \
git \
&& apt-get clean
# Install Ollama in builder stage with cache mount for downloads
RUN --mount=type=cache,target=/tmp/ollama-cache \
curl -fsSL https://ollama.com/install.sh | sh
# Create virtual environment
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --no-cache-dir -r requirements.txt
# Runtime stage
FROM python:3.9-slim-bookworm AS runtime
# Set environment variables
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_DISABLE_PIP_VERSION_CHECK=on \
PIP_DEFAULT_TIMEOUT=100 \
OLLAMA_HOST=0.0.0.0
# Install runtime dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y --no-install-recommends \
curl \
git \
&& apt-get clean
# Copy Ollama from builder stage instead of installing again
COPY --from=builder /usr/local/bin/ollama /usr/local/bin/ollama
# Copy virtual environment from builder stage
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Create a non-root user and group
RUN groupadd -r appuser && useradd -r -g appuser -s /bin/bash -d /app appuser
# Create app directory
WORKDIR /app
# Copy the entrypoint script
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
# Copy the application code
COPY . .
# Change ownership of the app directory to the non-root user
RUN chown -R appuser:appuser /app
# Switch to non-root user
USER appuser
# Set the entrypoint
ENTRYPOINT ["docker-entrypoint.sh"]
# Default command (can be overridden, e.g., by pytest command in CI)
CMD ["python", "main.py"]
EXPOSE 11434

README.md
View File

@@ -192,6 +192,10 @@ print(decision)
You can view the full list of configurations in `tradingagents/default_config.py`.
## Docker usage and local ollama tests ##
See [Docker Readme](./Docker-readme.md) for details.
## Contributing
We welcome contributions from the community! Whether it's fixing a bug, improving documentation, or suggesting a new feature, your input helps make this project better. If you are interested in this line of research, please consider joining our open-source financial AI research community [Tauric Research](https://tauric.ai/).

22
build.sh Normal file
View File

@@ -0,0 +1,22 @@
#!/bin/bash
# Enable BuildKit for faster builds
export DOCKER_BUILDKIT=1
# Build the image with BuildKit optimizations
echo "Building with BuildKit optimizations..."
docker build \
--progress=plain \
--tag tradingagents:latest \
.
echo "Build completed!"
echo ""
echo "To run the container:"
echo "docker run -it tradingagents2:latest"
echo ""
echo "To test Ollama connection first:"
echo "docker run --rm -it tradingagents:latest python test_ollama_connection.py"
echo ""
echo "To build with additional BuildKit features:"
echo "docker build --build-arg BUILDKIT_INLINE_CACHE=1 --tag tradingagents:latest ."

27
docker-compose.yml Normal file
View File

@@ -0,0 +1,27 @@
version: '3.8' # Specify a version
services:
app:
build: . # Use the Dockerfile in the current directory
volumes:
- .:/app # Mount current directory to /app in container for live code changes
- ./.ollama:/app/.ollama # Cache Ollama models
#environment:
# - LLM_PROVIDER=ollama
# - LLM_BACKEND_URL=http://localhost:11434/v1
# - LLM_DEEP_THINK_MODEL=qwen3:0.6b
# - LLM_QUICK_THINK_MODEL=qwen3:0.6b
# - MAX_DEBATE_ROUNDS=1
# - ONLINE_TOOLS=False
# The default command in the Dockerfile is `python main.py`.
# For running tests with compose, one can use:
# docker-compose run --rm app pytest tests/test_main.py
# Or, we can set a default command here to run tests:
env_file:
- .env # Load environment variables from the .env file
#command: python test_ollama_connection.py # Uncomment to run a specific test script
# If you want `docker-compose up` to run tests and then exit, this command is fine.
# If you want `docker-compose up` to run main.py, change command or remove it to use Dockerfile's CMD.
# For more flexibility, users can override the command when using `docker-compose run`.
ports:
- "11434:11434" # Expose port 11434 for Ollama

49
docker-entrypoint.sh Normal file
View File

@@ -0,0 +1,49 @@
#!/bin/bash
set -e # Exit immediately if a command exits with a non-zero status.
# Start Ollama serve in the background
echo "Starting Ollama service..."
ollama serve &
OLLAMA_PID=$!
# Wait for Ollama to be ready by checking the API endpoint
echo "Waiting for Ollama to be ready..."
max_attempts=30
attempt=0
while [ $attempt -lt $max_attempts ]; do
if curl -s http://localhost:11434/api/tags > /dev/null 2>&1; then
echo "Ollama is ready!"
break
fi
echo "Waiting for Ollama... (attempt $((attempt + 1))/$max_attempts)"
sleep 2
attempt=$((attempt + 1))
done
if [ $attempt -eq $max_attempts ]; then
echo "Error: Ollama failed to start within the expected time"
exit 1
fi
# Pull the required model. Use LLM_DEEP_THINK_MODEL, default to qwen3:0.6b if not set.
MODEL_TO_PULL=${LLM_DEEP_THINK_MODEL:-"qwen3:0.6b"}
echo "Pulling Ollama model: $MODEL_TO_PULL..."
ollama pull "$MODEL_TO_PULL"
echo "Model $MODEL_TO_PULL pulled."
ollama list # List models for verification
# Test the connection before running the main application
# TODO: run based on flag for testing
echo "Testing Ollama connection..."
python test_ollama_connection.py
if [ $? -ne 0 ]; then
echo "Error: Ollama connection test failed"
exit 1
fi
echo "Ollama setup complete. Executing command: $@"
# Execute the CMD or the command passed to docker run
exec "$@"
# Optional: clean up Ollama server on exit (might be complex with exec)
# trap "echo 'Stopping Ollama service...'; kill $OLLAMA_PID; exit 0" SIGINT SIGTERM

55
main.py
View File

@@ -1,21 +1,46 @@
import os
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
from dotenv import load_dotenv
# Create a custom config
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "google" # Use a different model
config["backend_url"] = "https://generativelanguage.googleapis.com/v1" # Use a different backend
config["deep_think_llm"] = "gemini-2.0-flash" # Use a different model
config["quick_think_llm"] = "gemini-2.0-flash" # Use a different model
config["max_debate_rounds"] = 1 # Increase debate rounds
config["online_tools"] = True # Increase debate rounds
def run_analysis(config_overrides=None):
"""
Initializes and runs a trading cycle with configurable overrides.
"""
load_dotenv() # Load .env file variables
# Initialize with custom config
ta = TradingAgentsGraph(debug=True, config=config)
config = DEFAULT_CONFIG.copy()
# forward propagate
_, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)
# Override with environment variables if set
config["llm_provider"] = os.environ.get("LLM_PROVIDER", config.get("llm_provider", "google"))
config["backend_url"] = os.environ.get("LLM_BACKEND_URL", config.get("backend_url", "https://generativelanguage.googleapis.com/v1"))
config["deep_think_llm"] = os.environ.get("LLM_DEEP_THINK_MODEL", config.get("deep_think_llm", "gemini-2.0-flash"))
config["quick_think_llm"] = os.environ.get("LLM_QUICK_THINK_MODEL", config.get("quick_think_llm", "gemini-2.0-flash"))
config["max_debate_rounds"] = int(os.environ.get("MAX_DEBATE_ROUNDS", config.get("max_debate_rounds", 1)))
config["online_tools"] = os.environ.get("ONLINE_TOOLS", str(config.get("online_tools", True))).lower() == 'true'
# Memorize mistakes and reflect
# ta.reflect_and_remember(1000) # parameter is the position returns
# Apply overrides from function argument
if config_overrides:
config.update(config_overrides)
print("Using configuration:")
for key, value in config.items():
print(f"{key}: {value}")
# Initialize with the final config
ta = TradingAgentsGraph(debug=True, config=config)
# Forward propagate
_, decision = ta.propagate("NVDA", "2024-05-10")
return decision
if __name__ == "__main__":
# Example of running the trading analysis
# You can override specific configurations here if needed, e.g.:
# decision = run_analysis(config_overrides={"max_debate_rounds": 2})
decision = run_analysis()
print(decision)
# Memorize mistakes and reflect
# ta.reflect_and_remember(1000) # parameter is the position returns

requirements.txt
View File

@@ -1,6 +1,8 @@
typing-extensions
langchain-openai
langchain-experimental
langchain_anthropic
langchain_google_genai
pandas
yfinance
praw
@@ -22,3 +24,7 @@ redis
chainlit
rich
questionary
ollama
pytest
python-dotenv

71
test_ollama_connection.py Normal file
View File

@@ -0,0 +1,71 @@
#!/usr/bin/env python3
"""
Simple test script to verify Ollama connection is working.
"""
import os
import requests
import time
from openai import OpenAI
def test_ollama_connection():
"""Test if Ollama is accessible and responding."""
# Get configuration from environment
backend_url = os.environ.get("LLM_BACKEND_URL", "http://localhost:11434/v1")
model = os.environ.get("LLM_DEEP_THINK_MODEL", "qwen3:0.6b")
print(f"Testing Ollama connection:")
print(f" Backend URL: {backend_url}")
print(f" Model: {model}")
# Test 1: Check if Ollama API is responding
try:
response = requests.get(f"{backend_url.replace('/v1', '')}/api/tags", timeout=10)
if response.status_code == 200:
print("✅ Ollama API is responding")
else:
print(f"❌ Ollama API returned status code: {response.status_code}")
return False
except Exception as e:
print(f"❌ Failed to connect to Ollama API: {e}")
return False
# Test 2: Check if the model is available
try:
response = requests.get(f"{backend_url.replace('/v1', '')}/api/tags", timeout=10)
models = response.json().get("models", [])
model_names = [m.get("name", "") for m in models]
if model in model_names:
print(f"✅ Model '{model}' is available")
else:
print(f"❌ Model '{model}' not found. Available models: {model_names}")
return False
except Exception as e:
print(f"❌ Failed to check model availability: {e}")
return False
# Test 3: Test OpenAI-compatible API
try:
client = OpenAI(base_url=backend_url, api_key="dummy")
response = client.chat.completions.create(
model=model,
messages=[{"role": "user", "content": "Hello, say 'test successful'"}],
max_tokens=50
)
print("✅ OpenAI-compatible API is working")
print(f" Response: {response.choices[0].message.content}")
return True
except Exception as e:
print(f"❌ OpenAI-compatible API test failed: {e}")
return False
if __name__ == "__main__":
success = test_ollama_connection()
if success:
print("\n🎉 All tests passed! Ollama is ready.")
exit(0)
else:
print("\n💥 Tests failed! Check Ollama configuration.")
exit(1)