Commit Graph

6 Commits

Surapong Kanoktipsatharporn 08bad661c3 docs: Comprehensive .env.example and configuration guide
Major updates to environment configuration:

## .env.example Updates (291 lines)
- Added 7 deployment scenarios with complete examples
- Scenario 1: OpenAI Everything (Production)
- Scenario 2: OpenRouter + OpenAI Embeddings (Cost Optimized)
- Scenario 3: All Local with Ollama (Privacy/Offline)
- Scenario 4: Anthropic + OpenAI Embeddings (High Quality)
- Scenario 5: Google Gemini + OpenAI Embeddings (Balanced)
- Scenario 6: OpenRouter + No Memory (Minimal)
- Scenario 7: Mixed Models (Advanced)

Each scenario includes:
- Complete configuration example
- Use case description
- Pros/cons analysis
- Cost estimates
- Prerequisites
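As a sketch of the format, a Scenario 2 (OpenRouter chat + OpenAI embeddings) block in .env.example might look like the following; the variable names and placeholder values here are illustrative, not copied from the committed file:

```shell
# Scenario 2: OpenRouter + OpenAI Embeddings (Cost Optimized)
# Chat completions go through OpenRouter; only embeddings hit OpenAI.
OPENROUTER_API_KEY=sk-or-placeholder
OPENAI_API_KEY=sk-placeholder
ENABLE_MEMORY=true
```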

## New Documentation
- docs/CONFIGURATION_GUIDE.md (691 lines)
  - Complete setup guide for all scenarios
  - API key acquisition instructions
  - CLI vs Module usage comparison
  - Environment variable reference
  - Troubleshooting section
  - Security best practices

## Additional Features
- API key sources and links
- Security notes and best practices
- Troubleshooting common issues
- Configuration validation checklist
- Multiple deployment patterns
- Cost optimization strategies

Makes it easy for users to:
- Choose the right setup for their needs
- Understand cost implications
- Configure mixed provider scenarios
- Troubleshoot authentication issues
- Switch between CLI and module usage
2025-10-20 15:52:02 +07:00
Surapong Kanoktipsatharporn c9d3eff62e feat: Implement comprehensive logging system
Adds a production-ready logging system with the following features:

## Core Features
- Structured logging with rich context and metadata
- Multiple log levels (DEBUG, INFO, WARNING, ERROR, CRITICAL)
- File and console output with separate handlers
- Automatic log rotation (prevents disk space issues)
- Component-specific loggers for different parts of the system
- Performance tracking with built-in metrics
- API call tracking with cost and token monitoring

## New Components
- tradingagents/utils/logging_config.py: Core logging module
  - TradingAgentsLogger: Main logger class
  - StructuredFormatter: Custom formatter with context
  - APICallLogger: Dedicated API call tracking
  - PerformanceLogger: Operation timing and metrics
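To illustrate the API-call tracking idea, here is a minimal sketch of what an `APICallLogger` might record; the class body and field names are assumptions for illustration, not the actual implementation in logging_config.py:

```python
import logging
from dataclasses import dataclass

logger = logging.getLogger("tradingagents.api")

@dataclass
class APICallTotals:
    calls: int = 0
    tokens: int = 0
    cost_usd: float = 0.0

class APICallLogger:
    """Accumulates running usage totals and emits one structured
    log line per API call (hypothetical sketch)."""

    def __init__(self) -> None:
        self.totals = APICallTotals()

    def log_call(self, provider: str, model: str,
                 tokens: int, cost_usd: float) -> None:
        # Update running totals, then log a parseable key=value line.
        self.totals.calls += 1
        self.totals.tokens += tokens
        self.totals.cost_usd += cost_usd
        logger.info(
            "api_call provider=%s model=%s tokens=%d cost_usd=%.6f",
            provider, model, tokens, cost_usd,
        )
```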

## Log Files Created (in logs/ directory)
- tradingagents.log: All application logs (10MB rotation, 5 backups)
- errors.log: Errors only (5MB rotation, 3 backups)
- api_calls.log: API call tracking
- memory.log: Memory operations
- agents.log: Agent execution
- performance.log: Performance metrics
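The rotation policy above maps directly onto the standard library's `RotatingFileHandler`; this is an illustrative reconstruction of the setup (the real logging_config.py may wire things differently):

```python
import logging
import logging.handlers
import os

def build_app_logger(log_dir: str = "logs") -> logging.Logger:
    """Sketch of the rotation policy described above:
    tradingagents.log rotates at 10 MB with 5 backups,
    errors.log at 5 MB with 3 backups."""
    os.makedirs(log_dir, exist_ok=True)
    logger = logging.getLogger("tradingagents")
    logger.setLevel(logging.DEBUG)

    main = logging.handlers.RotatingFileHandler(
        os.path.join(log_dir, "tradingagents.log"),
        maxBytes=10 * 1024 * 1024, backupCount=5)
    main.setLevel(logging.DEBUG)       # all application logs

    errors = logging.handlers.RotatingFileHandler(
        os.path.join(log_dir, "errors.log"),
        maxBytes=5 * 1024 * 1024, backupCount=3)
    errors.setLevel(logging.ERROR)     # errors only

    fmt = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
    for handler in (main, errors):
        handler.setFormatter(fmt)
        logger.addHandler(handler)
    return logger
```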

## Integration
- Updated memory.py: Full logging integration with context
  - Logs initialization, embeddings, add/get operations
  - Tracks API calls and performance
  - Provides detailed error context

- Updated trading_graph.py: Comprehensive graph logging
  - Logs initialization, propagation, reflection
  - Tracks component setup and execution
  - Performance metrics for all major operations

- Updated default_config.py: Added logging configuration
  - log_level, log_dir, log_to_console, log_to_file
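For reference, the four keys added to default_config.py might look like this; the default values shown are assumptions, not the committed ones:

```python
# Illustrative sketch of the logging keys in default_config.py
LOGGING_DEFAULTS = {
    "log_level": "INFO",       # DEBUG, INFO, WARNING, ERROR, or CRITICAL
    "log_dir": "logs",         # where rotated log files are written
    "log_to_console": True,    # mirror logs to the console handler
    "log_to_file": True,       # write the log files listed above
}
```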

## Documentation
- docs/LOGGING.md: Complete logging documentation (797 lines)
  - Quick start guide
  - Configuration examples
  - Best practices
  - API reference
  - Troubleshooting

## Benefits
- Better debugging and troubleshooting
- Production monitoring capabilities
- API cost tracking
- Performance analysis
- Audit trail for decisions
- Easier issue diagnosis

Tested and working. See docs/LOGGING.md for complete usage guide.
2025-10-20 15:47:29 +07:00
Surapong Kanoktipsatharporn 2869ab3c5f feat: Separate embedding configuration from chat model configuration
This commit implements a comprehensive solution for separating embedding
and chat model configurations, enabling flexible provider combinations
and graceful handling of embedding failures.

## Problem Statement

Previously, the TradingAgents memory system used the same backend_url for
both chat models and embeddings. This caused critical failures when:

- Using OpenRouter for chat (doesn't support OpenAI embedding endpoints)
- Using Anthropic/Google for chat (don't provide embeddings)
- The embedding endpoint returned HTML error pages instead of JSON
- Users wanted to mix providers (e.g., OpenRouter chat + OpenAI embeddings)

Error example:
  AttributeError: 'str' object has no attribute 'data'
  # Caused by: OpenRouter returned HTML page instead of embedding JSON

## Solution

Implemented three key features:

1. **Separate Embedding Client Configuration**
   - New config parameters independent of chat LLM settings
   - embedding_provider: "openai", "ollama", or "none"
   - embedding_backend_url: Separate API endpoint
   - embedding_model: Specific model to use
   - enable_memory: Boolean flag to enable/disable memory

2. **Multiple Provider Support**
   - OpenAI: Production-grade embeddings (recommended)
   - Ollama: Local embeddings for offline/development
   - None: Disable memory system entirely

3. **Graceful Fallback**
   - System continues when embeddings fail
   - Comprehensive error logging
   - Memory operations return empty results instead of crashing
   - Agents function without historical context when memory disabled
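The fallback behaviour can be sketched as follows; class and method names here are hypothetical stand-ins for the refactored memory.py, shown only to illustrate the fail-soft pattern:

```python
import logging

logger = logging.getLogger("tradingagents.memory")

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class SafeMemory:
    """Illustrative sketch: memory lookups degrade gracefully
    instead of crashing the agent run."""

    def __init__(self, embed_fn, enabled=True):
        self.embed_fn = embed_fn   # callable returning a vector; may raise
        self.enabled = enabled
        self.store = []            # list of (embedding, text) pairs

    def add(self, embedding, text):
        self.store.append((embedding, text))

    def get_memories(self, query, n_matches=2):
        if not self.enabled:
            return []   # memory disabled: agents run without historical context
        try:
            q = self.embed_fn(query)
        except Exception as exc:
            # Graceful fallback: log the failure, return empty results.
            logger.error("Embedding failed for query %r: %s", query, exc)
            return []
        # Naive nearest-neighbour ranking over stored embeddings.
        ranked = sorted(self.store, key=lambda item: -_dot(q, item[0]))
        return [text for _, text in ranked[:n_matches]]
```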

## Changes

### Core Framework
- tradingagents/default_config.py: Added 4 new embedding config params
- tradingagents/agents/utils/memory.py: Complete refactor with error handling
- tradingagents/graph/trading_graph.py: Separated embedding initialization

### CLI/User Interface
- cli/utils.py: Added select_embedding_provider() function
- cli/main.py: Added Step 7 for embedding provider selection

### Documentation (New Files)
- docs/EMBEDDING_CONFIGURATION.md: Complete usage guide (381 lines)
- docs/EMBEDDING_MIGRATION.md: Implementation details (374 lines)
- CHANGELOG_EMBEDDING.md: Release notes (225 lines)
- FEATURE_EMBEDDING_README.md: Branch overview (418 lines)

### Testing & Verification
- tests/test_embedding_config.py: Comprehensive test suite
- verify_config.py: Simple config verification script

## Example Usage

```python
# OpenRouter for chat, OpenAI for embeddings
config = {
    "llm_provider": "openrouter",
    "backend_url": "https://openrouter.ai/api/v1",
    "deep_think_llm": "deepseek/deepseek-chat-v3-0324:free",

    "embedding_provider": "openai",
    "embedding_backend_url": "https://api.openai.com/v1",
    "embedding_model": "text-embedding-3-small",
    "enable_memory": True,
}
```
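By contrast, the all-local scenario mentioned under Testing would point both sides at Ollama; the model names below are illustrative assumptions, not values prescribed by this commit:

```python
# All-local sketch: Ollama serves both chat and embeddings
config = {
    "llm_provider": "ollama",
    "backend_url": "http://localhost:11434/v1",
    "deep_think_llm": "llama3.1",

    "embedding_provider": "ollama",
    "embedding_backend_url": "http://localhost:11434/v1",
    "embedding_model": "nomic-embed-text",
    "enable_memory": True,
}
```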

## Backward Compatibility

100% Backward Compatible - No breaking changes!

Existing configurations work without modification. Smart defaults
applied when embedding settings are omitted.

## Testing

- All core files pass diagnostics with no errors
- Configuration verification script passes all checks
- Supports scenarios: OpenRouter+OpenAI, All Ollama, Disabled Memory
- Graceful fallback tested for invalid URLs and missing API keys

## Benefits

- Enables using OpenRouter/other providers for chat
- Reduces costs (can use local embeddings or disable memory)
- Improves reliability (graceful degradation on failures)
- Maintains full backward compatibility
- Comprehensive documentation and examples

Fixes: OpenRouter compatibility issues
Closes: Embedding/chat provider coupling
Implements: Graceful fallback for memory operations
2025-10-20 15:24:51 +07:00
luohy15 7fc9c28a94 Add environment variable configuration support
- Add .env.example file with API key placeholders
- Update README.md with .env file setup instructions
- Add dotenv loading in main.py for environment variables

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 23:58:51 +08:00
Yijia Xiao 26c5ba5a78 Revert "Docker support and Ollama support (#47)" (#57)
This reverts commit 78ea029a0b.
2025-06-26 00:07:58 -04:00
Geeta Chauhan 78ea029a0b Docker support and Ollama support (#47)
- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER
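A conditional launch like the one described could take roughly this shape in an entrypoint script; this is an assumed sketch, not the script from the reverted commit, and the real launch line is left commented out:

```shell
start_ollama_if_selected() {
  # Start the Ollama server only when LLM_PROVIDER selects it.
  if [ "${LLM_PROVIDER:-}" = "ollama" ]; then
    echo "starting ollama server"
    # exec ollama serve   # real launch in the container
  else
    echo "ollama not selected; skipping"
  fi
}
```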
2025-06-25 23:57:05 -04:00