Run Ollama server only when LLM_PROVIDER is set to ollama

chauhang 2025-06-21 10:44:57 -07:00
parent 83c86aee61
commit 1bd4cd8ca0
3 changed files with 30 additions and 19 deletions

View File

@ -1,15 +1,25 @@
# LLM Configuration
LLM_PROVIDER="ollama"
LLM_BACKEND_URL="http://localhost:11434/v1" # For Ollama running in the same container, /v1 added for OpenAI compatibility
LLM_DEEP_THINK_MODEL="qwen3:0.6b"
LLM_QUICK_THINK_MODEL="qwen3:0.6b"
LLM_EMBEDDING_MODEL="nomic-embed-text"
OPENAI_API_KEY="ollama-key" # Optional, if you want to use OpenAI models or ollama models with OpenAI API compatibility
# This is an example .env file for the Trading Agent project.
# Copy this file to .env and fill in your API keys and environment configurations.
# API Keys
OPENAI_API_KEY="<your-openai-key>" # Replace with your OpenAI API key, for OpenAI, Ollama or other OpenAI-compatible models
FINNHUB_API_KEY="<your_finnhub_api_key_here>" # Replace with your Finnhub API key
#LLM Configuration for OpenAI
LLM_PROVIDER="openai" # Set to one of: openai, anthropic, google, openrouter or ollama,
LLM_BACKEND_URL="https://api.openai.com/v1" # API URL
# Uncomment for LLM configuration for local Ollama
#LLM_PROVIDER="ollama" # Set to one of: openai, anthropic, google, openrouter, or ollama
#LLM_BACKEND_URL="http://localhost:11434/v1" # For Ollama running in the same container, /v1 added for OpenAI compatibility
#LLM_DEEP_THINK_MODEL="qwen3:0.6b" # Name of the deep-think model
#LLM_QUICK_THINK_MODEL="qwen3:0.6b" # Name of the quick-think model
#LLM_EMBEDDING_MODEL="nomic-embed-text" # Name of the embedding model
# Agent Configuration
MAX_DEBATE_ROUNDS="1"
ONLINE_TOOLS="False" # Set to True if you want to enable tools that access the internet
MAX_DEBATE_ROUNDS="1" # Maximum number of debate rounds for the agent to engage in choose from 1, 3, 5
ONLINE_TOOLS="True" # Set to False if you want to disable tools that access the internet
# Note: For local Docker Compose when Ollama runs on the host machine (not in container),
# you might use LLM_BACKEND_URL="http://host.docker.internal:11434/v1"
# The current docker-compose setup runs Ollama inside the app service.
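# Linux sketch (hypothetical): host.docker.internal is not defined by default on Linux,
# so start the container with --add-host=host.docker.internal:host-gateway and set:
# LLM_BACKEND_URL="http://host.docker.internal:11434/v1"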

View File

@ -49,20 +49,20 @@ The `LLM_BACKEND_URL` is set to `http://localhost:11434/v1`. This assumes you ha
### Run the Main Application
To run the `main.py` script (default command for the Docker image):
To run the `main.py` script:
```bash
docker run --rm \
-e LLM_PROVIDER="ollama" \
-e LLM_BACKEND_URL="http://localhost:11434/v1" \
# Add other necessary environment variables for main.py
tradingagents
tradingagents python -m main
```
Adjust environment variables as needed for your local setup.
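If you have already copied the example file to `.env`, you can pass the whole file instead of repeating `-e` flags; a minimal sketch (the image name `tradingagents` follows the examples above):
```bash
# Pass the entire .env file rather than individual -e flags
docker run --rm --env-file .env tradingagents python -m main
```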
### Run the TradingAgents CLI
To run the CLI interface
To run the CLI interface (the default in the container):
```bash
docker run --rm \
docker run -it \
-e LLM_PROVIDER="ollama" \
-e LLM_BACKEND_URL="http://localhost:11434/v1" \
# Add other necessary environment variables for the CLI
@ -82,7 +82,7 @@ docker-compose up --build
```
This command will build the image (if it's not already built or if changes are detected) and then run the `pytest tests/test_main.py` command. Note that containers started by `up` are not removed automatically; clean up with `docker-compose down`, or run the service as a one-off instead:
```bash
docker-compose run --rm app # This will use the default command from docker-compose.yml
docker-compose run --rm app # This will use the default command from docker-compose.yml (run is interactive by default)
```
If you want to explicitly run the tests:
```bash
@ -93,7 +93,7 @@ docker-compose run --rm app python test_ollama_connection.py
To run the `main.py` script, you can override the default command:
```bash
docker-compose run --rm app python main.py
docker-compose run --rm app python -m main
```
Or, you can modify the `command` in `docker-compose.yml` if you primarily want `docker-compose up` to run the main application.
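If you would rather not edit `docker-compose.yml`, `docker-compose run` also accepts one-off environment overrides; a sketch assuming the service is named `app` as in the examples above:
```bash
# One-off env override without touching docker-compose.yml
docker-compose run --rm -e ONLINE_TOOLS="False" app python -m main
```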

View File

@ -49,7 +49,8 @@ fi
echo "Ollama setup complete. Executing command: $@"
# Execute the CMD or the command passed to docker run
exec "$@"
exec python -m cli.main "$@"
# Optional: clean up Ollama server on exit (might be complex with exec)
# trap "echo 'Stopping Ollama service...'; kill $OLLAMA_PID; exit 0" SIGINT SIGTERM