JackChen e0fd1860fc docs: rewrite README top fold and trim Examples section
Strategic rewrite following docs/project-evaluation-2026-04-09.md.
README.md and README_zh.md updated in lockstep.

Top fold changes:
- New tagline positioning against CrewAI and LangGraph
- Replace 11-bullet "Why" with 3 bullets (runTeam / 3 deps / multi-model)
- New Philosophy section with "we build / we don't build / tracking"
- "We don't build" limited to handoffs and checkpointing (softened);
  Cloud/Studio bullet dropped to preserve future Hosted option
- New "How is this different from X?" FAQ covering LangGraph JS, CrewAI,
  and Vercel AI SDK
- New "Used by" section with three early-stage integrations, framed
  honestly for a new project (temodar-agent, rentech-quant-platform,
  cybersecurity SOC home lab)

Examples section:
- Shrink 15-row catalog table to 4 featured entries + link to examples/
- Featured: 02 team collaboration, 06 local model, 09 structured output,
  11 trace observability
- Eliminates maintenance debt of updating the table on every new example

Refinements during alignment pass:
- Launch date corrected to 2026-04-01 (matches first commit timestamp)
- Surface Gemini @google/genai peer dep in top fold and Providers table
- Rephrase "Agent handoffs" bullet to avoid reading as single-agent framework
- Update prose example to Opus 4.6 / GPT-5.4 / local Gemma 4
- Quick Start code example shortened ~30% (developer/reviewer collapsed
  to stubs, still demonstrates multi-agent team shape)
- Remove CrewAI endorsement stats (48K stars / Andrew Ng / $18M) to keep
  comparisons technical
- Drop Star History cache-buster since growth has stabilized; bump
  contributors cache-buster to max=20 so all 8 contributors render
- Delete Author section; shrink Contributing to Examples + Documentation

Small carry-over fixes:
- Fix duplicated task_complete line in Quick Start output sample
- Add AgentPool.runParallel() note to Three Ways to Run
- Update source file count 33 → 35

Kept unchanged per scope:
- Architecture diagram, Built-in Tools, Supported Providers

Does not touch source code or package.json.
2026-04-11 14:13:53 +08:00

README.md

Open Multi-Agent

The lightweight multi-agent orchestration engine for TypeScript. Three runtime dependencies, zero config, goal to result in one runTeam() call.

CrewAI is Python. LangGraph makes you draw the graph by hand. open-multi-agent is the npm install you drop into an existing Node.js backend when you need a team of agents to work on a goal together. Nothing more, nothing less.

3 runtime dependencies · 35 source files · Deploys anywhere Node.js runs · Mentioned in Latent Space AI News (top AI engineering newsletter, 170k+ subscribers)

GitHub stars license TypeScript coverage

English | 中文

What you actually get

  • Goal to result in one call. runTeam(team, "Build a REST API") kicks off a coordinator agent that decomposes the goal into a task DAG, resolves dependencies, runs independent tasks in parallel, and synthesizes the final output. No graph to draw, no tasks to wire up.
  • TypeScript-native, three runtime dependencies. @anthropic-ai/sdk, openai, zod. That is the whole runtime. Embed in Express, Next.js, serverless functions, or CI/CD pipelines. No Python runtime, no subprocess bridge, no cloud sidecar.
  • Multi-model teams. Claude, GPT, Gemini, Grok, Copilot, or any OpenAI-compatible local model (Ollama, vLLM, LM Studio, llama.cpp) in the same team. Run the architect on Opus 4.6, the developer on GPT-5.4, the reviewer on local Gemma 4, all in one runTeam() call. Gemini ships as an optional peer dependency: npm install @google/genai to enable.

Other features (structured output, task retry, human-in-the-loop, lifecycle hooks, loop detection, observability) live below the fold and in examples/.

Philosophy: what we build, what we don't

Our goal is to be the simplest multi-agent framework for TypeScript. Simplicity does not mean closed. We believe the long-term value of a framework is the size of the network it connects to, not its feature checklist.

We build:

  • A coordinator that decomposes a goal into a task DAG.
  • A task queue that runs independent tasks in parallel and cascades failures to dependents.
  • A shared memory and message bus so agents can see each other's output.
  • Multi-model teams where each agent can use a different LLM provider.
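The cascade-failure rule above can be sketched in a few lines. This is a conceptual illustration, not the framework's internal TaskQueue; the TaskSpec shape and function name are made up for the example:

```typescript
// Conceptual sketch of dependency-cascade semantics (not the real TaskQueue).
type TaskSpec = { id: string; deps: string[] }

// Given statuses so far, mark every task whose dependency failed as failed too,
// repeating until no more statuses change (handles chains of dependents).
function cascadeFailures(
  tasks: TaskSpec[],
  status: Map<string, 'ok' | 'failed'>,
): Map<string, 'ok' | 'failed'> {
  let changed = true
  while (changed) {
    changed = false
    for (const t of tasks) {
      if (status.get(t.id) === 'failed') continue
      if (t.deps.some((d) => status.get(d) === 'failed')) {
        status.set(t.id, 'failed')
        changed = true
      }
    }
  }
  return status
}
```

So if a design task fails, an implement task depending on it, and a review task depending on implement, are both marked failed rather than run against missing input.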

We don't build:

  • Agent handoffs. In our model, each agent owns one task end-to-end; there are no mid-conversation transfers. If agent A needs to hand a live conversation to agent B, use the OpenAI Agents SDK instead.
  • State persistence / checkpointing. Not planned for now. Adding a storage backend would break the three-dependency promise, and our workflows run in seconds to minutes, not hours. If real usage shifts toward long-running workflows, we will revisit.

Tracking:

  • MCP support. Next up, see #86.
  • A2A protocol. Watching; we will move when production adoption is real.

See DECISIONS.md for the full rationale.

How is this different from X?

vs. LangGraph JS. LangGraph is declarative graph orchestration: you define nodes, edges, and conditional routing, then compile() and invoke(). open-multi-agent is goal-driven: you declare a team and a goal, a coordinator decomposes it into a task DAG at runtime. LangGraph gives you total control of topology (great for fixed production workflows). This gives you less typing and faster iteration (great for exploratory multi-agent work). LangGraph also has mature checkpointing; we do not.

vs. CrewAI. CrewAI is the mature Python choice. If your stack is Python, use CrewAI. open-multi-agent is TypeScript-native: three runtime dependencies, embeds directly in Node.js without a subprocess bridge. Roughly comparable capability on the orchestration side. Choose on language fit.

vs. Vercel AI SDK. AI SDK is the LLM call layer: a unified TypeScript client for 60+ providers with streaming, tool calls, and structured outputs. It does not orchestrate multi-agent teams. open-multi-agent sits on top when you need that. They compose: use AI SDK for single-agent work, reach for this when you need a team.

Used by

open-multi-agent is a new project (launched 2026-04-01, MIT, 5,500+ stars). The ecosystem is still forming, so the list below is short and honest:

  • temodar-agent (~50 stars). WordPress security analysis platform by Ali Sünbül. Uses our built-in tools (bash, file_*, grep) directly in its Docker runtime. Confirmed production use.
  • rentech-quant-platform. Multi-agent quant trading research platform. Five pipelines plus MCP integrations, built on top of open-multi-agent. Early signal, very new.
  • Cybersecurity SOC (home lab). A private setup running Qwen 2.5 + DeepSeek Coder entirely offline via Ollama, building an autonomous SOC pipeline on Wazuh + Proxmox. Early user, not yet public.

Using open-multi-agent in production or a side project? Open a discussion and we will list it here.

Quick Start

Requires Node.js >= 18.

npm install @jackchen_me/open-multi-agent

Set the API key for your provider. Local models via Ollama require no API key — see example 06.

  • ANTHROPIC_API_KEY
  • OPENAI_API_KEY
  • GEMINI_API_KEY
  • XAI_API_KEY (for Grok)
  • GITHUB_TOKEN (for Copilot)

Three agents, one goal — the framework handles the rest:

import { OpenMultiAgent } from '@jackchen_me/open-multi-agent'
import type { AgentConfig } from '@jackchen_me/open-multi-agent'

const architect: AgentConfig = {
  name: 'architect',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You design clean API contracts and file structures.',
  tools: ['file_write'],
}

const developer: AgentConfig = { /* same shape, tools: ['bash', 'file_read', 'file_write', 'file_edit'] */ }
const reviewer: AgentConfig = { /* same shape, tools: ['file_read', 'grep'] */ }

const orchestrator = new OpenMultiAgent({
  defaultModel: 'claude-sonnet-4-6',
  onProgress: (event) => console.log(event.type, event.agent ?? event.task ?? ''),
})

const team = orchestrator.createTeam('api-team', {
  name: 'api-team',
  agents: [architect, developer, reviewer],
  sharedMemory: true,
})

// Describe a goal — the framework breaks it into tasks and orchestrates execution
const result = await orchestrator.runTeam(team, 'Create a REST API for a todo list in /tmp/todo-api/')

console.log(`Success: ${result.success}`)
console.log(`Tokens: ${result.totalTokenUsage.output_tokens} output tokens`)

What happens under the hood:

agent_start coordinator
task_start architect
task_complete architect
task_start developer
task_start developer              // independent tasks run in parallel
task_complete developer
task_complete developer
task_start reviewer               // unblocked after implementation
task_complete reviewer
agent_complete coordinator        // synthesizes final result
Success: true
Tokens: 12847 output tokens

Three Ways to Run

Mode                    Method      When to use
Single agent            runAgent()  One agent, one prompt — simplest entry point
Auto-orchestrated team  runTeam()   Give a goal, framework plans and executes
Explicit pipeline       runTasks()  You define the task graph and assignments

For MapReduce-style fan-out without task dependencies, use AgentPool.runParallel() directly. See example 07.
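The pattern behind a semaphore-based pool is a concurrency-limited parallel map. A minimal standalone sketch of that idea (the names here are illustrative, not the library's API):

```typescript
// Minimal concurrency-limited parallel map: at most `maxConcurrent` workers at once.
async function limitedParallel<T, R>(
  items: T[],
  worker: (item: T) => Promise<R>,
  maxConcurrent: number,
): Promise<R[]> {
  const results: R[] = new Array(items.length)
  let next = 0
  // Each lane repeatedly claims the next unprocessed index until none remain.
  const lane = async (): Promise<void> => {
    while (next < items.length) {
      const i = next++
      results[i] = await worker(items[i])
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(maxConcurrent, items.length) }, lane),
  )
  return results
}
```

Results come back in input order regardless of which lane finished first, which is the behavior you generally want from MapReduce-style fan-out.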

Examples

15 runnable scripts in examples/. Start with these four:

  • 02 team collaboration
  • 06 local model
  • 09 structured output
  • 11 trace observability

Run any with npx tsx examples/02-team-collaboration.ts.

Architecture

┌─────────────────────────────────────────────────────────────────┐
│  OpenMultiAgent (Orchestrator)                                  │
│                                                                 │
│  createTeam()  runTeam()  runTasks()  runAgent()  getStatus()   │
└──────────────────────┬──────────────────────────────────────────┘
                       │
            ┌──────────▼──────────┐
            │  Team               │
            │  - AgentConfig[]    │
            │  - MessageBus       │
            │  - TaskQueue        │
            │  - SharedMemory     │
            └──────────┬──────────┘
                       │
         ┌─────────────┴─────────────┐
         │                           │
┌────────▼──────────┐    ┌───────────▼───────────┐
│  AgentPool        │    │  TaskQueue             │
│  - Semaphore      │    │  - dependency graph    │
│  - runParallel()  │    │  - auto unblock        │
└────────┬──────────┘    │  - cascade failure     │
         │               └───────────────────────┘
┌────────▼──────────┐
│  Agent            │
│  - run()          │    ┌──────────────────────┐
│  - prompt()       │───►│  LLMAdapter          │
│  - stream()       │    │  - AnthropicAdapter  │
└────────┬──────────┘    │  - OpenAIAdapter     │
         │               │  - CopilotAdapter    │
         │               │  - GeminiAdapter     │
         │               │  - GrokAdapter       │
         │               └──────────────────────┘
┌────────▼──────────┐
│  AgentRunner      │    ┌──────────────────────┐
│  - conversation   │───►│  ToolRegistry        │
│    loop           │    │  - defineTool()      │
│  - tool dispatch  │    │  - 5 built-in tools  │
└───────────────────┘    └──────────────────────┘

Built-in Tools

Tool        Description
bash        Execute shell commands. Returns stdout + stderr. Supports timeout and cwd.
file_read   Read file contents at an absolute path. Supports offset/limit for large files.
file_write  Write or create a file. Auto-creates parent directories.
file_edit   Edit a file by replacing an exact string match.
grep        Search file contents with regex. Uses ripgrep when available, falls back to Node.js.

Tool Configuration

Agents can be configured with fine-grained tool access control using presets, allowlists, and denylists.

Tool Presets

Predefined tool sets for common use cases:

const readonlyAgent: AgentConfig = {
  name: 'reader',
  model: 'claude-sonnet-4-6',
  toolPreset: 'readonly',  // file_read, grep, glob
}

const readwriteAgent: AgentConfig = {
  name: 'editor',
  model: 'claude-sonnet-4-6',
  toolPreset: 'readwrite',  // file_read, file_write, file_edit, grep, glob
}

const fullAgent: AgentConfig = {
  name: 'executor',
  model: 'claude-sonnet-4-6',
  toolPreset: 'full',  // file_read, file_write, file_edit, grep, glob, bash
}

Advanced Filtering

Combine presets with allowlists and denylists for precise control:

const customAgent: AgentConfig = {
  name: 'custom',
  model: 'claude-sonnet-4-6',
  toolPreset: 'readwrite',        // Start with: file_read, file_write, file_edit, grep, glob
  tools: ['file_read', 'grep'],   // Allowlist: intersect with preset = file_read, grep
  disallowedTools: ['grep'],      // Denylist: subtract = file_read only
}

Resolution order: preset → allowlist → denylist → framework safety rails.
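The first three stages of that resolution order are plain set operations. A sketch under the assumption that presets are simple name lists (the preset contents below come from this README; the function itself is illustrative, not the library's implementation):

```typescript
// Illustrative tool resolution: preset → allowlist (intersect) → denylist (subtract).
const PRESETS: Record<string, string[]> = {
  readonly: ['file_read', 'grep', 'glob'],
  readwrite: ['file_read', 'file_write', 'file_edit', 'grep', 'glob'],
  full: ['file_read', 'file_write', 'file_edit', 'grep', 'glob', 'bash'],
}

function resolveTools(
  preset: keyof typeof PRESETS,
  allow?: string[],
  deny?: string[],
): string[] {
  let tools = PRESETS[preset]
  if (allow) tools = tools.filter((t) => allow.includes(t)) // intersect with allowlist
  if (deny) tools = tools.filter((t) => !deny.includes(t)) // subtract denylist
  return tools
}
```

With the config from the example above, resolveTools('readwrite', ['file_read', 'grep'], ['grep']) leaves only file_read.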

Custom Tools

Tools added via agent.addTool() are always available regardless of filtering.
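The exact addTool() / defineTool() signatures are not shown here, so as a rough sketch of what a custom tool tends to look like in registries of this kind (the Tool interface below is an assumption for illustration, not this library's actual type):

```typescript
// Hypothetical custom-tool shape; check the library's defineTool() types for the real one.
interface Tool {
  name: string
  description: string
  run: (input: Record<string, unknown>) => Promise<string>
}

const wordCount: Tool = {
  name: 'word_count',
  description: 'Count whitespace-separated words in a string.',
  run: async (input) => {
    const words = String(input.text ?? '').trim().split(/\s+/).filter(Boolean)
    return String(words.length)
  },
}
```

An agent would then receive it via something like agent.addTool(wordCount), bypassing preset/allowlist/denylist filtering as described above.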

Supported Providers

Provider                   Config                        Env var            Status
Anthropic (Claude)         provider: 'anthropic'         ANTHROPIC_API_KEY  Verified
OpenAI (GPT)               provider: 'openai'            OPENAI_API_KEY     Verified
Grok (xAI)                 provider: 'grok'              XAI_API_KEY        Verified
GitHub Copilot             provider: 'copilot'           GITHUB_TOKEN       Verified
Gemini                     provider: 'gemini'            GEMINI_API_KEY     Verified
Ollama / vLLM / LM Studio  provider: 'openai' + baseURL  (none)             Verified
llama.cpp server           provider: 'openai' + baseURL  (none)             Verified

Gemini requires npm install @google/genai (optional peer dependency).

Verified local models with tool-calling: Gemma 4 (see example 08).

Any OpenAI-compatible API should work via provider: 'openai' + baseURL (DeepSeek, Groq, Mistral, Qwen, MiniMax, etc.). Grok now has first-class support via provider: 'grok'.

Local Model Tool-Calling

The framework supports tool-calling with local models served by Ollama, vLLM, LM Studio, or llama.cpp. Tool-calling is handled natively by these servers via the OpenAI-compatible API.

Verified models: Gemma 4, Llama 3.1, Qwen 3, Mistral, Phi-4. See the full list at ollama.com/search?c=tools.

Fallback extraction: If a local model returns tool calls as text instead of using the tool_calls wire format (common with thinking models or misconfigured servers), the framework automatically extracts them from the text output.
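Conceptually, fallback extraction scans the text for a tool-call-shaped JSON object. A much-simplified sketch of the idea (the framework's actual parser is more robust; this version does brace matching and ignores braces inside JSON strings only by luck):

```typescript
// Simplified fallback: find the first {"name": ..., "arguments": ...} object in free text.
function extractToolCall(text: string): { name: string; arguments: unknown } | null {
  // Try every '{' as a candidate start, match its closing brace, then attempt JSON.parse.
  for (let i = text.indexOf('{'); i >= 0; i = text.indexOf('{', i + 1)) {
    let depth = 0
    for (let j = i; j < text.length; j++) {
      if (text[j] === '{') depth++
      else if (text[j] === '}' && --depth === 0) {
        try {
          const obj = JSON.parse(text.slice(i, j + 1))
          if (obj && typeof obj.name === 'string' && 'arguments' in obj) return obj
        } catch {
          // Not valid JSON starting here; keep scanning from the next '{'.
        }
        break
      }
    }
  }
  return null
}
```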

Timeout: Local inference can be slow. Use timeoutMs on AgentConfig to prevent indefinite hangs:

const localAgent: AgentConfig = {
  name: 'local',
  model: 'llama3.1',
  provider: 'openai',
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama',
  tools: ['bash', 'file_read'],
  timeoutMs: 120_000, // abort after 2 minutes
}
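Under the hood, a per-call timeout amounts to racing the slow operation against a timer. A generic sketch of that pattern (not the framework's code):

```typescript
// Generic timeout wrapper: reject if `work` does not settle within `ms` milliseconds.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
    work.then(
      (value) => { clearTimeout(timer); resolve(value) },
      (err) => { clearTimeout(timer); reject(err) },
    )
  })
}
```

The timer is cleared on either outcome so a fast call never leaves a stray timeout behind.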

Troubleshooting:

  • Model not calling tools? Ensure it appears in Ollama's Tools category. Not all models support tool-calling.
  • Using Ollama? Update to the latest version (ollama update) — older versions have known tool-calling bugs.
  • Proxy interfering? Use no_proxy=localhost when running against local servers.

LLM Configuration Examples

const grokAgent: AgentConfig = {
  name: 'grok-agent',
  provider: 'grok',
  model: 'grok-4',
  systemPrompt: 'You are a helpful assistant.',
}

(Set the XAI_API_KEY environment variable; no baseURL is required for Grok.)

Contributing

Issues, feature requests, and PRs are welcome. Some areas where contributions would be especially valuable:

  • Examples — Real-world workflows and use cases.
  • Documentation — Guides, tutorials, and API docs.

Contributors

Star History

Star History Chart

Translations

Help translate this README — open a PR.

License

MIT