- Introduced `TokenCallbackHandler` to track input and output token usage during LLM operations.
- Updated the `RunResult` model to include token usage data.
- Enhanced `RunsStore` to persist token usage in the database.
- Modified `RunService` to yield token usage information during event streaming.
- Implemented UI components to display token statistics in the run detail view.
- Added tests for token handling and reporting functionality.
Made-with: Cursor
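The token-tracking flow above can be sketched roughly as follows. This is a hypothetical, minimal version: the class and method names mirror the bullets but are assumptions, not the project's actual API, and a real LangChain-based handler would subclass `BaseCallbackHandler` and read usage from the LLM response metadata.

```python
from dataclasses import dataclass


@dataclass
class TokenUsage:
    """Accumulated token counts for a single run (names are assumptions)."""
    input_tokens: int = 0
    output_tokens: int = 0

    @property
    def total_tokens(self) -> int:
        return self.input_tokens + self.output_tokens


class TokenCallbackHandler:
    """Sketch of a callback that accumulates usage across all LLM calls."""

    def __init__(self) -> None:
        self.usage = TokenUsage()

    def on_llm_end(self, token_usage: dict) -> None:
        # Called once per completed LLM call with the provider's usage dict;
        # key names follow the common OpenAI-style convention.
        self.usage.input_tokens += token_usage.get("prompt_tokens", 0)
        self.usage.output_tokens += token_usage.get("completion_tokens", 0)


handler = TokenCallbackHandler()
handler.on_llm_end({"prompt_tokens": 120, "completion_tokens": 45})
handler.on_llm_end({"prompt_tokens": 80, "completion_tokens": 30})
print(handler.usage.total_tokens)  # → 275
```

The accumulated `TokenUsage` would then be attached to `RunResult`, persisted by `RunsStore`, and surfaced in the streamed events and the run detail view.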
- Added FastAPI-based API structure with routers for runs and settings management.
- Implemented endpoints for creating, listing, and retrieving run configurations.
- Introduced settings management with load and update functionality.
- Created models for run configurations and settings using Pydantic.
- Established a store for managing run states and results.
- Enhanced `.gitignore` to exclude `node_modules` and results directories.
- Added `package.json` and `package-lock.json` for frontend dependencies.
- Included initial tests for API endpoints and model validations.
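The run store and models described above could look roughly like this. It is an in-memory sketch under stated assumptions: plain dataclasses stand in for the Pydantic models, and names such as `RunConfig`, `Run`, and `RunsStore` are illustrative, not the project's confirmed API.

```python
import uuid
from dataclasses import dataclass
from typing import Optional


@dataclass
class RunConfig:
    """Stand-in for the Pydantic run-configuration model."""
    name: str
    model: str = "gpt-4o-mini"  # hypothetical default


@dataclass
class Run:
    id: str
    config: RunConfig
    status: str = "pending"
    result: Optional[dict] = None


class RunsStore:
    """In-memory stand-in for the store managing run states and results."""

    def __init__(self) -> None:
        self._runs: dict = {}

    def create(self, config: RunConfig) -> Run:
        run = Run(id=uuid.uuid4().hex, config=config)
        self._runs[run.id] = run
        return run

    def list(self) -> list:
        return list(self._runs.values())

    def get(self, run_id: str) -> Optional[Run]:
        return self._runs.get(run_id)


store = RunsStore()
run = store.create(RunConfig(name="demo"))
print(len(store.list()))  # → 1
```

The FastAPI routers for creating, listing, and retrieving runs would delegate to `create`, `list`, and `get` respectively, with the dataclasses replaced by Pydantic models for request/response validation.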
- Integrated SQLite checkpointing for durable state management during analysis.
- Updated dependencies in `pyproject.toml` and `requirements.txt` to include FastAPI and related packages.
- Enhanced `.gitignore` to exclude SQLite checkpoints and results directories.
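The SQLite checkpointing idea can be sketched with only the stdlib `sqlite3` module. Table and column names here are assumptions for illustration; the real integration may instead rely on a framework-provided checkpointer.

```python
import json
import sqlite3
from typing import Optional


class CheckpointStore:
    """Sketch of durable run-state checkpointing backed by SQLite."""

    def __init__(self, path: str = ":memory:") -> None:
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS checkpoints ("
            "run_id TEXT PRIMARY KEY, state TEXT NOT NULL)"
        )

    def save(self, run_id: str, state: dict) -> None:
        # Upsert so a restarted analysis resumes from the latest saved state.
        self.conn.execute(
            "INSERT INTO checkpoints (run_id, state) VALUES (?, ?) "
            "ON CONFLICT(run_id) DO UPDATE SET state = excluded.state",
            (run_id, json.dumps(state)),
        )
        self.conn.commit()

    def load(self, run_id: str) -> Optional[dict]:
        row = self.conn.execute(
            "SELECT state FROM checkpoints WHERE run_id = ?", (run_id,)
        ).fetchone()
        return json.loads(row[0]) if row else None


store = CheckpointStore()
store.save("run-1", {"step": 1})
store.save("run-1", {"step": 2})
print(store.load("run-1"))  # → {'step': 2}
```

Using `ON CONFLICT ... DO UPDATE` keeps exactly one row per run, so recovery only ever sees the most recent state.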
- Added support for running the CLI and Ollama server via Docker.
- Introduced tests for the local embeddings model and the standalone Docker setup.
- Enabled conditional Ollama server launch via the `LLM_PROVIDER` environment variable.
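The conditional launch could look roughly like the entrypoint sketch below. The function name and the default provider value are assumptions; the actual `ollama serve` invocation is left commented out since this is only a sketch of the decision logic.

```shell
#!/bin/sh
# Sketch of a Docker entrypoint: start Ollama only when LLM_PROVIDER
# selects it (function name and "openai" default are assumptions).

start_ollama_if_needed() {
  case "${LLM_PROVIDER:-openai}" in
    ollama)
      echo "starting ollama"
      # ollama serve &   # real launch would go here
      return 0
      ;;
    *)
      echo "skipping ollama"
      return 1
      ;;
  esac
}

LLM_PROVIDER=ollama
start_ollama_if_needed  # prints "starting ollama"
```

A remote provider thus skips the local server entirely, while the Ollama path brings it up before the CLI runs.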