- Add docs/ directory with 17 documentation files
- Architecture: multi-agent-system, data-flow, llm-integration
- API Reference: trading-graph, agents, dataflows
- Guides: adding-new-analyst, adding-llm-provider, adding-data-vendor, configuration
- Testing: README, running-tests, writing-tests
- Development: setup, contributing
- Update PROJECT.md with TESTING STRATEGY requirements
- Add test_documentation_structure.py for validation
🤖 Generated with Claude Code
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
# Testing Overview
TradingAgents uses a comprehensive testing strategy to ensure code quality and reliability.
## Testing Philosophy
Our testing approach combines:
- Unit Tests: Fast, isolated tests for individual components
- Integration Tests: Tests for component interactions
- End-to-End Tests: Full workflow validation
- Regression Tests: Prevent fixed bugs from returning
## Test Structure

```
tests/
├── unit/            # Unit tests (fast, isolated)
│   ├── test_analysts.py
│   ├── test_dataflows.py
│   └── test_utils.py
├── integration/     # Integration tests (medium speed)
│   ├── test_graph.py
│   ├── test_llm_providers.py
│   └── test_data_vendors.py
├── regression/      # Regression tests
│   └── smoke/       # Critical path tests (CI gate)
├── fixtures/        # Shared test fixtures
└── conftest.py      # pytest configuration
```
## Running Tests

### All Tests

```bash
pytest tests/
```

### Specific Test Categories

```bash
# Unit tests only
pytest tests/unit/

# Integration tests only
pytest tests/integration/

# Regression tests only
pytest tests/regression/

# Smoke tests (critical path)
pytest -m smoke
```

### With Coverage

```bash
pytest tests/ --cov=tradingagents --cov-report=html
```

### Specific Test File

```bash
pytest tests/unit/test_analysts.py -v
```

### Specific Test Function

```bash
pytest tests/unit/test_analysts.py::test_market_analyst_initialization -v
```
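The `smoke` marker selected with `-m smoke` is a custom marker, and recent pytest versions warn on unregistered markers. A minimal registration sketch, assuming markers are declared in a `pytest.ini` (the project may instead use `pyproject.toml` or `conftest.py`):

```ini
# pytest.ini (hypothetical location; the same keys work under
# [tool.pytest.ini_options] in pyproject.toml)
[pytest]
markers =
    smoke: critical-path tests used as the CI gate
    integration: tests that exercise component interactions
```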
## Test Categories
### Unit Tests

**Purpose**: Test individual functions and classes in isolation

**Characteristics**:

- Fast (<1 second per test)
- No external dependencies
- Use mocks for LLMs and data vendors
- High coverage target (90%+)

**Example**:

```python
from unittest.mock import Mock

def test_analyst_initialization():
    """Test an analyst can be initialized."""
    llm = Mock()
    tools = []
    analyst = MarketAnalyst(llm, tools)
    assert analyst.name == "market"
    assert analyst.llm == llm
```
### Integration Tests

**Purpose**: Test component interactions

**Characteristics**:

- Medium speed (1-30 seconds per test)
- May use test APIs or mocks
- Validate multi-component workflows
- Coverage target (70%+)

**Example**:

```python
def test_data_vendor_integration():
    """Test a data vendor can provide data."""
    interface = DataInterface()
    data = interface.get_stock_data("NVDA", "2024-01-01", "2024-01-10")
    assert "close" in data
    assert len(data["close"]) > 0
```
### End-to-End Tests

**Purpose**: Test complete workflows

**Characteristics**:

- Slow (30+ seconds per test)
- Use real or test LLM APIs
- Validate the full system
- Minimal count (critical paths only)

**Example**:

```python
import pytest

@pytest.mark.integration
def test_full_analysis_workflow():
    """Test a complete trading analysis."""
    ta = TradingAgentsGraph()
    state, decision = ta.propagate("NVDA", "2024-05-10")
    assert decision["action"] in ["BUY", "SELL", "HOLD"]
    assert 0.0 <= decision["confidence_score"] <= 1.0
```
## Test Fixtures

Common fixtures are defined in `tests/conftest.py`:

```python
from unittest.mock import Mock

import pytest

@pytest.fixture
def mock_llm():
    """Mock LLM for testing."""
    llm = Mock()
    llm.invoke.return_value = Mock(content="Test response")
    return llm

@pytest.fixture
def mock_data_tools():
    """Mock data access tools."""
    return {
        "get_stock_data": Mock(return_value={"close": [150, 151, 152]}),
        "get_indicators": Mock(return_value={"RSI": {"rsi": [65]}}),
    }

@pytest.fixture
def test_config():
    """Test configuration."""
    from tradingagents.default_config import DEFAULT_CONFIG

    config = DEFAULT_CONFIG.copy()
    config["max_debate_rounds"] = 1
    return config
```
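A test requests a fixture simply by naming it as a parameter. A minimal sketch of using `mock_llm` (the fixture body is repeated here so the snippet is self-contained; the test name is invented for illustration):

```python
from unittest.mock import Mock

import pytest

@pytest.fixture
def mock_llm():
    """Same canned-response mock as defined in conftest.py."""
    llm = Mock()
    llm.invoke.return_value = Mock(content="Test response")
    return llm

def test_llm_fixture_returns_canned_response(mock_llm):
    # pytest injects the fixture by matching the parameter name
    response = mock_llm.invoke("any prompt")
    assert response.content == "Test response"
```

Because fixtures are resolved by name, the test never constructs the mock itself, which keeps setup in one place.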
## Writing Tests

See the [Writing Tests Guide](writing-tests.md) for detailed patterns and examples.
## Coverage Goals

- Overall: 80%+
- Unit tests: 90%+
- Integration tests: 70%+
- Critical paths: 100%
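The overall goal can be enforced mechanically rather than checked by hand. A sketch of a coverage.py configuration that fails the run when total coverage drops below the target (the file location is an assumption; the same keys work under `[tool.coverage.report]` in `pyproject.toml`):

```ini
# .coveragerc (hypothetical)
[report]
fail_under = 80
show_missing = True
```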
## Continuous Integration

Tests run automatically on:

- Pull requests
- Pushes to the main branch
- Pre-commit hooks (optional)
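The triggers above map directly onto CI configuration. A hypothetical GitHub Actions sketch (the workflow file name, Python version, and install command are assumptions, not the project's actual setup):

```yaml
# .github/workflows/tests.yml (hypothetical)
name: tests
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -e . pytest pytest-cov
      # CI gate: fast critical-path tests first
      - run: pytest -m smoke
      - run: pytest tests/ --cov=tradingagents
```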
## Best Practices

- **Write Tests First**: Use a TDD approach when possible
- **One Assertion**: Focus each test on a single behavior
- **Clear Names**: Follow `test_<function>_<scenario>_<expected>`
- **Use Fixtures**: Apply the DRY principle to setup code
- **Mock External Calls**: Don't hit real APIs in unit tests
- **Fast Tests**: Keep unit tests under 1 second
- **Isolation**: Tests should not depend on each other
- **Documentation**: Add docstrings to complex tests
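Several of these practices can be seen together in one short test. A hypothetical sketch (the `latest_close` helper and the vendor client interface are invented for illustration, not part of TradingAgents):

```python
from unittest.mock import Mock

def latest_close(client, symbol):
    """Toy code under test: return the most recent close from a vendor client."""
    return client.get_quote(symbol)["close"][-1]

def test_latest_close_returns_last_price():
    # Mock External Calls: no real vendor API is hit
    client = Mock()
    client.get_quote.return_value = {"close": [150.0, 151.5]}

    # One Assertion: the test focuses on a single behavior,
    # with a call check to confirm the mock was used as expected
    assert latest_close(client, "NVDA") == 151.5
    client.get_quote.assert_called_once_with("NVDA")
```

Because all setup lives inside the test and only the mock is touched, the test is fast, isolated, and deterministic.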