Update environment file for proper use with the docker command

This commit is contained in:
chauhang 2025-06-21 12:28:10 -07:00
parent 1bd4cd8ca0
commit 3154dfc23b
4 changed files with 72 additions and 26 deletions

View File

@ -1,25 +1,36 @@
# This is an example .env file for the Trading Agent project.
# Copy this file to .env and fill in your API keys and environment configurations.
"NOTE: When using for `docker` command do not use quotes around the values, otherwiser environment variables will not be set."
# API Keys
OPENAI_API_KEY="<your-openai-key>" # Replace with your OpenAI API key, for OpenAI, Ollama or other OpenAI-compatible models
FINNHUB_API_KEY="<your_finnhub_api_key_here>" # Replace with your Finnhub API key
# Set your OpenAI API key, for OpenAI, Ollama or other OpenAI-compatible models
OPENAI_API_KEY=<your-openai-key>
# Set your Finnhub API key
FINNHUB_API_KEY=<your_finnhub_api_key_here>
# LLM Configuration for OpenAI
LLM_PROVIDER="openai" # Set to one of: openai, anthropic, google, openrouter or ollama,
LLM_BACKEND_URL="https://api.openai.com/v1" # API URL
# Set LLM_PROVIDER to one of: openai, anthropic, google, openrouter, or ollama
LLM_PROVIDER=openai
# Set the API URL for the LLM backend
LLM_BACKEND_URL=https://api.openai.com/v1
# Uncomment the following lines for a local Ollama LLM configuration
#LLM_PROVIDER="ollama" # Set to one of: openai, anthropic, google, openrouter or ollama,
#LLM_BACKEND_URL="http://localhost:11434/v1" # For Ollama running in the same container, /v1 added for OpenAI compatibility
#LLM_DEEP_THINK_MODEL="qwen3:0.6b" # name of the Deep think model for the main
#LLM_QUICK_THINK_MODEL="qwen3:0.6b" # name of the quick think model for the main
#LLM_EMBEDDING_MODEL="nomic-embed-text" # name of the embedding model
#LLM_PROVIDER=ollama
# For Ollama running in the same container; /v1 is added for OpenAI compatibility
#LLM_BACKEND_URL=http://localhost:11434/v1
# Set the name of the deep-think model
#LLM_DEEP_THINK_MODEL=qwen3:0.6b
# Set the name of the quick-think model
#LLM_QUICK_THINK_MODEL=qwen3:0.6b
# Set the name of the embedding model
#LLM_EMBEDDING_MODEL=nomic-embed-text
# Agent Configuration
MAX_DEBATE_ROUNDS="1" # Maximum number of debate rounds for the agent to engage in choose from 1, 3, 5
ONLINE_TOOLS="True" # Set to False if you want to disable tools that access the internet
# Maximum number of debate rounds for the agent to engage in; choose 1, 3, or 5
MAX_DEBATE_ROUNDS=1
# Set to False if you want to disable tools that access the internet
ONLINE_TOOLS=True

View File

@ -11,13 +11,16 @@ This project uses a `.env` file to manage environment-specific configurations fo
cp .env.example .env
```
2. **Customize your `.env` file:**
Open the `.env` file in a text editor and modify the variables as needed for your local setup. For example, you might want to change LLM models or API keys (if applicable in the future).
Open the `.env` file in a text editor and modify the variables as needed for your local setup. For example, you might want to change LLM models or API keys.
### How it Works with Docker Compose
When you run `docker-compose up` or `docker-compose run`, Docker Compose automatically looks for a `.env` file in the project root directory (where `docker-compose.yml` is located) and loads the environment variables defined in it. These variables are then passed into the container environment for the `app` service.
The `.env` file itself is ignored by Git (as specified in `.gitignore`), so your local configurations will not be committed to the repository.
For a streamlined experience, it is recommended to use Docker Compose, as it simplifies setup (e.g., it handles mounting the directories used for caching). Skip to the [Using Docker Compose](#using-docker-compose) section for this.
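To quickly confirm that Docker Compose picked up a value from `.env`, one option is a throwaway run against the `app` service (a minimal sketch; it assumes the image builds and Python is available in the container, as elsewhere in this guide):
```bash
# Print one variable from inside the container to verify it was loaded from .env
docker-compose run --rm app python -c "import os; print(os.getenv('LLM_PROVIDER'))"
```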
## Running with Docker
This project supports running within a Docker container, which ensures a consistent environment for development and testing.
@ -34,27 +37,49 @@ docker build -t tradingagents .
### Test local ollama setup
To test Ollama connectivity and the local model:
```bash
docker run --rm \
docker run -it --env-file .env tradingagents python test_ollama_connection.py
```
This picks up the environment settings from the `.env` file. Alternatively, you can pass values directly using:
```bash
docker run -it \
-e LLM_PROVIDER="ollama" \
-e LLM_BACKEND_URL="http://localhost:11434/v1" \
-e LLM_DEEP_THINK_MODEL="qwen3:0.6b" \
-e LLM_QUICK_THINK_MODEL="qwen3:0.6b" \
-e MAX_DEBATE_ROUNDS="1" \
-e ONLINE_TOOLS="False" \
-e LLM_EMBEDDING_MODEL="nomic-embed-text" \
tradingagents \
python test_ollama_connection.py
```
**Note on Ollama for Local Docker:**
The `LLM_BACKEND_URL` is set to `http://localhost:11434/v1`. This assumes you have Ollama running on your host machine and accessible at port 11434. '/v1' is added to url at the end for OpenAI api compatibility.
To prevent re-downloading Ollama models, mount a folder from your host and run:
```bash
docker run -it \
-e LLM_PROVIDER="ollama" \
-e LLM_BACKEND_URL="http://localhost:11434/v1" \
-e LLM_DEEP_THINK_MODEL="qwen3:0.6b" \
-e LLM_EMBEDDING_MODEL="nomic-embed-text" \
-v ./ollama_cache:/app/.ollama \
tradingagents \
python test_ollama_connection.py
```
**Notes on Ollama for Local Docker:**
When `LLM_PROVIDER` is set to `ollama`, the Ollama server is started automatically inside the Docker container. `LLM_BACKEND_URL` is set to `http://localhost:11434/v1`, which points at that in-container server on its default port 11434; `/v1` is added at the end of the URL for OpenAI API compatibility.
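To sanity-check that the OpenAI-compatible endpoint is up, one option (a sketch, assuming `curl` is available where the Ollama server is reachable):
```bash
# Lists the models exposed through Ollama's OpenAI-compatible API
curl http://localhost:11434/v1/models
```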
### Run the Main Application
To run the `main.py` script:
```bash
docker run --rm \
docker run -it --env-file .env tradingagents python -m main
```
or pass the necessary environment variables for `main.py` directly:
```bash
docker run -it \
-e LLM_PROVIDER="ollama" \
-e LLM_BACKEND_URL="http://localhost:11434/v1" \
-e LLM_DEEP_THINK_MODEL="qwen3:0.6b" \
-e LLM_QUICK_THINK_MODEL="qwen3:0.6b" \
-e LLM_EMBEDDING_MODEL="nomic-embed-text" \
-e MAX_DEBATE_ROUNDS="1" \
-e ONLINE_TOOLS="False" \
-v ./ollama_cache:/app/.ollama \
tradingagents python -m main
```
Adjust environment variables as needed for your local setup.
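For example, since `docker run` gives `-e` flags precedence over `--env-file`, a single setting can be overridden per run while keeping the rest of the `.env` values (a sketch):
```bash
# Use .env for everything, but raise the debate rounds for this run only
docker run -it --env-file .env -e MAX_DEBATE_ROUNDS=3 tradingagents python -m main
```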
@ -62,17 +87,21 @@ Adjust environment variables as needed for your local setup.
### Run the TradingAgents CLI
To run the CLI interface (the default in the container):
```bash
docker run --it \
docker run -it --env-file .env tradingagents
```
or pass the necessary environment variables directly:
```bash
docker run -it \
-e LLM_PROVIDER="ollama" \
-e LLM_BACKEND_URL="http://localhost:11434/v1" \
-v ./ollama_cache:/app/.ollama \
tradingagents python -m cli.main
```
Adjust environment variables as needed for your local setup.
### Using Docker Compose
For a more streamlined local development experience, you can use Docker Compose. The `docker-compose.yml` file in the project root is configured to use the existing `Dockerfile`.
For a more streamlined local development experience, it is recommended to use Docker Compose. The `docker-compose.yml` file in the project root is configured to use the existing `Dockerfile`.
**Build and Run Tests:**
@ -105,10 +134,14 @@ docker-compose run --it app python -m cli.main
```
**Environment Variables:**
The necessary environment variables (like `LLM_PROVIDER`, `LLM_BACKEND_URL`, model names, etc.) are pre-configured in the `docker-compose.yml` for the `app` service. Ollama is started by the entrypoint script within the same container, so `LLM_BACKEND_URL` is set to `http://localhost:11434/v1`.
The necessary environment variables (like `LLM_PROVIDER`, `LLM_BACKEND_URL`, model names, etc.) are configured in the `docker-compose.yml` for the `app` service. When `LLM_PROVIDER` is set to `ollama`, Ollama is started by the entrypoint script within the same container, and `LLM_BACKEND_URL` is set to `http://localhost:11434/v1`.
When using an environment file with the `docker` command, do not put quotes around the values or add trailing comments after a value; otherwise Docker will not pick up the values correctly.
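For instance (an illustrative sketch; `docker run --env-file` reads each line verbatim as `KEY=VALUE`):
```bash
# Works with both docker and Docker Compose
LLM_PROVIDER=ollama
# Breaks with `docker run --env-file`: the quotes and the trailing
# comment become part of the value
LLM_PROVIDER="ollama"  # local provider
```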
**Live Code Reloading:**
The current directory is mounted as a volume into the container at `/app`. This means changes you make to your local code will be reflected inside the container, which is useful for development. You might need to rebuild the image with `docker-compose build` or `docker-compose up --build` if you change dependencies in `requirements.txt` or modify the `Dockerfile` itself.
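A typical cycle after a dependency change might look like (sketch):
```bash
# Rebuild the image and restart the app service in one step
docker-compose up --build
```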
**Ollama Model Caching:**
To prevent re-downloading Ollama models, `docker-compose.yml` now mounts `./.ollama` on your host to `/app/.ollama` in the container. Models pulled by Ollama will be stored in `./.ollama/models` locally and persist across runs. Ensure this directory is in your `.gitignore`. If Docker has permission issues creating this directory, you might need to create it manually (`mkdir .ollama`).
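A small sketch of that manual setup:
```bash
# Create the cache directory up front and keep it out of version control
mkdir -p .ollama
echo ".ollama/" >> .gitignore
```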

View File

@ -74,9 +74,10 @@ RUN chown -R appuser:appuser /app
USER appuser
# Set the entrypoint
ENTRYPOINT ["docker-entrypoint.sh"]
ENTRYPOINT ["/bin/sh", "-c", "if [ \"$LLM_PROVIDER\" = \"ollama\" ]; then ./docker-entrypoint.sh; else exec \"$@\"; fi", "--"]
# Default command (can be overridden, e.g., by pytest command in CI)
CMD ["python", "main.py"]
CMD ["python", "-m", "cli.main"]
EXPOSE 11434
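The conditional entrypoint can be exercised directly from the shell; a sketch, assuming the image was built and tagged as `tradingagents`:
```bash
# LLM_PROVIDER unset: the entrypoint simply execs the requested command
docker run --rm tradingagents python --version
# LLM_PROVIDER=ollama: docker-entrypoint.sh starts the Ollama server first
docker run --rm -e LLM_PROVIDER=ollama tradingagents
```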

View File

@ -12,6 +12,7 @@ services:
# - LLM_BACKEND_URL=http://localhost:11434/v1
# - LLM_DEEP_THINK_MODEL=qwen3:0.6b
# - LLM_QUICK_THINK_MODEL=qwen3:0.6b
# - LLM_EMBEDDING_MODEL=nomic-embed-text
# - MAX_DEBATE_ROUNDS=1
# - ONLINE_TOOLS=False
# The default command in the Dockerfile is `python -m cli.main`.
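Rather than uncommenting these lines, the same settings can also be supplied per invocation; a sketch using the `app` service:
```bash
# Override the provider and backend URL for a single docker-compose run
docker-compose run -e LLM_PROVIDER=ollama -e LLM_BACKEND_URL=http://localhost:11434/v1 app python -m cli.main
```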