
Open Multi-Agent

Build AI agent teams that decompose goals into tasks automatically. Define agents with roles and tools, describe a goal — the framework plans the task graph, schedules dependencies, and runs everything in parallel.

3 runtime dependencies. 27 source files. One runTeam() call from goal to result.


English | 中文

Why Open Multi-Agent?

  • Auto Task Decomposition — Describe a goal in plain text. A built-in coordinator agent breaks it into a task DAG with dependencies and assignees — no manual orchestration needed.
  • Multi-Agent Teams — Define agents with different roles, tools, and even different models. They collaborate through a message bus and shared memory.
  • Task DAG Scheduling — Tasks have dependencies. The framework resolves them topologically — dependent tasks wait, independent tasks run in parallel.
  • Model Agnostic — Claude, GPT, and local models (Ollama, vLLM, LM Studio) in the same team. Swap models per agent via baseURL.
  • In-Process Execution — No subprocess overhead. Everything runs in one Node.js process. Deploy to serverless, Docker, CI/CD.
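
The dependency resolution described above can be sketched in a few lines. This is an illustrative model only, not the framework's actual scheduler: each pass releases every task whose dependencies are complete, so independent tasks naturally run as a parallel wave.

```typescript
// Illustrative sketch (not the library's internals): tasks with
// dependencies are released in waves. A task becomes runnable as
// soon as every task it depends on has completed.
type Task = { id: string; deps: string[] }

export function runWaves(tasks: Task[]): string[][] {
  const done = new Set<string>()
  const pending = [...tasks]
  const waves: string[][] = []
  while (pending.length > 0) {
    // Everything whose dependencies are all satisfied runs in parallel.
    const ready = pending.filter((t) => t.deps.every((d) => done.has(d)))
    if (ready.length === 0) throw new Error('cycle or missing dependency')
    waves.push(ready.map((t) => t.id))
    for (const t of ready) {
      done.add(t.id)
      pending.splice(pending.indexOf(t), 1)
    }
  }
  return waves
}

// design has no deps; implement and document depend on design; review depends on implement.
const waves = runWaves([
  { id: 'design', deps: [] },
  { id: 'implement', deps: ['design'] },
  { id: 'document', deps: ['design'] },
  { id: 'review', deps: ['implement'] },
])
console.log(waves) // [["design"], ["implement", "document"], ["review"]]
```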

Quick Start

Requires Node.js >= 18.

```bash
npm install @jackchen_me/open-multi-agent
```

Set ANTHROPIC_API_KEY (and optionally OPENAI_API_KEY or GITHUB_TOKEN for Copilot) in your environment.

Three agents, one goal — the framework handles the rest:

```typescript
import { OpenMultiAgent } from '@jackchen_me/open-multi-agent'
import type { AgentConfig } from '@jackchen_me/open-multi-agent'

const architect: AgentConfig = {
  name: 'architect',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You design clean API contracts and file structures.',
  tools: ['file_write'],
}

const developer: AgentConfig = {
  name: 'developer',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You implement what the architect designs.',
  tools: ['bash', 'file_read', 'file_write', 'file_edit'],
}

const reviewer: AgentConfig = {
  name: 'reviewer',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You review code for correctness and clarity.',
  tools: ['file_read', 'grep'],
}

const orchestrator = new OpenMultiAgent({
  defaultModel: 'claude-sonnet-4-6',
  onProgress: (event) => console.log(event.type, event.agent ?? event.task ?? ''),
})

const team = orchestrator.createTeam('api-team', {
  name: 'api-team',
  agents: [architect, developer, reviewer],
  sharedMemory: true,
})

// Describe a goal — the framework breaks it into tasks and orchestrates execution
const result = await orchestrator.runTeam(team, 'Create a REST API for a todo list in /tmp/todo-api/')

console.log(`Success: ${result.success}`)
console.log(`Tokens: ${result.totalTokenUsage.output_tokens} output tokens`)
```

What happens under the hood:

```
agent_start coordinator
task_start architect
task_complete architect
task_start developer
task_start developer              // independent tasks run in parallel
task_complete developer
task_start reviewer               // unblocked after implementation
task_complete developer
task_complete reviewer
agent_complete coordinator        // synthesizes final result
Success: true
Tokens: 12847 output tokens
```

Three Ways to Run

| Mode | Method | When to use |
| --- | --- | --- |
| Single agent | `runAgent()` | One agent, one prompt — simplest entry point |
| Auto-orchestrated team | `runTeam()` | Give a goal, framework plans and executes |
| Explicit pipeline | `runTasks()` | You define the task graph and assignments |

Examples

All examples are runnable scripts in examples/. Run any of them with npx tsx:

```bash
npx tsx examples/01-single-agent.ts
```

| Example | What it shows |
| --- | --- |
| 01 — Single Agent | `runAgent()` one-shot, `stream()` streaming, `prompt()` multi-turn |
| 02 — Team Collaboration | `runTeam()` auto-orchestration with coordinator pattern |
| 03 — Task Pipeline | `runTasks()` explicit dependency graph (design → implement → test + review) |
| 04 — Multi-Model Team | `defineTool()` custom tools, mixed Anthropic + OpenAI providers, AgentPool |
| 05 — Copilot | GitHub Copilot as an LLM provider |
| 06 — Local Model | Ollama + Claude in one pipeline via `baseURL` (works with vLLM, LM Studio, etc.) |
| 07 — Fan-Out / Aggregate | `runParallel()` MapReduce — 3 analysts in parallel, then synthesize |

Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│  OpenMultiAgent (Orchestrator)                                  │
│                                                                 │
│  createTeam()  runTeam()  runTasks()  runAgent()  getStatus()   │
└──────────────────────┬──────────────────────────────────────────┘
                       │
            ┌──────────▼──────────┐
            │  Team               │
            │  - AgentConfig[]    │
            │  - MessageBus       │
            │  - TaskQueue        │
            │  - SharedMemory     │
            └──────────┬──────────┘
                       │
         ┌─────────────┴─────────────┐
         │                           │
┌────────▼──────────┐    ┌───────────▼───────────┐
│  AgentPool        │    │  TaskQueue            │
│  - Semaphore      │    │  - dependency graph   │
│  - runParallel()  │    │  - auto unblock       │
└────────┬──────────┘    │  - cascade failure    │
         │               └───────────────────────┘
┌────────▼──────────┐
│  Agent            │
│  - run()          │    ┌──────────────────────┐
│  - prompt()       │───►│  LLMAdapter          │
│  - stream()       │    │  - AnthropicAdapter  │
└────────┬──────────┘    │  - OpenAIAdapter     │
         │               │  - CopilotAdapter    │
         │               └──────────────────────┘
┌────────▼──────────┐
│  AgentRunner      │    ┌──────────────────────┐
│  - conversation   │───►│  ToolRegistry        │
│    loop           │    │  - defineTool()      │
│  - tool dispatch  │    │  - 5 built-in tools  │
└───────────────────┘    └──────────────────────┘
```
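
The diagram shows the AgentPool bounding concurrency with a semaphore. That is a standard pattern, sketched below under stated assumptions: the `Semaphore` and `runParallel` names mirror the diagram, but this is an illustrative reimplementation, not the library's code.

```typescript
// Minimal concurrency-limiting semaphore: at most `permits` callers
// hold a permit at once; the rest wait in FIFO order.
export class Semaphore {
  private queue: Array<() => void> = []
  constructor(private permits: number) {}

  async acquire(): Promise<void> {
    if (this.permits > 0) {
      this.permits--
      return
    }
    await new Promise<void>((resolve) => this.queue.push(resolve))
  }

  release(): void {
    const next = this.queue.shift()
    // Hand the permit directly to the next waiter, or return it to the pool.
    if (next) next()
    else this.permits++
  }
}

// Run async jobs with at most `limit` in flight at once,
// preserving the input order in the results.
export async function runParallel<T>(
  jobs: Array<() => Promise<T>>,
  limit: number,
): Promise<T[]> {
  const sem = new Semaphore(limit)
  return Promise.all(
    jobs.map(async (job) => {
      await sem.acquire()
      try {
        return await job()
      } finally {
        sem.release()
      }
    }),
  )
}
```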

Built-in Tools

| Tool | Description |
| --- | --- |
| `bash` | Execute shell commands. Returns stdout + stderr. Supports timeout and cwd. |
| `file_read` | Read file contents at an absolute path. Supports offset/limit for large files. |
| `file_write` | Write or create a file. Auto-creates parent directories. |
| `file_edit` | Edit a file by replacing an exact string match. |
| `grep` | Search file contents with regex. Uses ripgrep when available, falls back to Node.js. |
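
To make `file_edit`'s exact-match semantics concrete, here is a minimal sketch of how such an edit can be applied. Note one assumption: the uniqueness check (rejecting an `oldStr` that matches more than once) is a common safety convention, not something documented above.

```typescript
// Illustrative sketch of exact-string-replace editing: the edit only
// applies when oldStr occurs exactly once, so the result is unambiguous.
export function applyEdit(content: string, oldStr: string, newStr: string): string {
  const first = content.indexOf(oldStr)
  if (first === -1) throw new Error('old string not found')
  // Assumed safety check: refuse ambiguous edits instead of guessing.
  if (content.indexOf(oldStr, first + 1) !== -1) {
    throw new Error('old string is not unique; include more surrounding context')
  }
  return content.slice(0, first) + newStr + content.slice(first + oldStr.length)
}
```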

Contributing

Issues, feature requests, and PRs are welcome. Some areas where contributions would be especially valuable:

  • LLM Adapters — Anthropic, OpenAI, and Copilot are supported out of the box. Any OpenAI-compatible API (Ollama, vLLM, LM Studio, etc.) works via baseURL. Additional adapters for Gemini and other providers are welcome. The LLMAdapter interface requires just two methods: chat() and stream().
  • Examples — Real-world workflows and use cases.
  • Documentation — Guides, tutorials, and API docs.
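
A new adapter therefore only has to fill in two methods. The sketch below is hypothetical: the `ChatMessage` and `ChatResult` shapes are placeholder types for illustration, not the library's real signatures.

```typescript
// Placeholder types, assumed for this sketch only.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}
interface ChatResult {
  content: string
}

// Hypothetical two-method adapter interface, as described above.
interface LLMAdapter {
  chat(messages: ChatMessage[]): Promise<ChatResult>
  stream(messages: ChatMessage[]): AsyncIterable<string>
}

// A trivial echo adapter showing the shape a new provider would fill in:
// chat() returns the last message, stream() yields it word by word.
export class EchoAdapter implements LLMAdapter {
  async chat(messages: ChatMessage[]): Promise<ChatResult> {
    return { content: messages[messages.length - 1]?.content ?? '' }
  }

  async *stream(messages: ChatMessage[]): AsyncIterable<string> {
    const last = messages[messages.length - 1]?.content ?? ''
    for (const word of last.split(' ')) {
      yield word + ' '
    }
  }
}
```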


License

MIT