Open Multi-Agent

TypeScript framework for multi-agent orchestration. One runTeam() call takes you from goal to result — the framework decomposes the goal into tasks, resolves dependencies, and runs agents in parallel.

3 runtime dependencies · 27 source files · Deploys anywhere Node.js runs · Mentioned in Latent Space AI News


English | 中文

Why Open Multi-Agent?

  • Goal In, Result Out — runTeam(team, "Build a REST API"). A coordinator agent auto-decomposes the goal into a task DAG with dependencies and assignees, runs independent tasks in parallel, and synthesizes the final output. No manual task definitions or graph wiring required.
  • TypeScript-Native — Built for the Node.js ecosystem. npm install, import, run. No Python runtime, no subprocess bridge, no sidecar services. Embed in Express, Next.js, serverless functions, or CI/CD pipelines.
  • Auditable and Lightweight — 3 runtime dependencies (@anthropic-ai/sdk, openai, zod). 27 source files. The entire codebase is readable in an afternoon.
  • Model Agnostic — Claude, GPT, Gemma 4, and local models (Ollama, vLLM, LM Studio) in the same team. Swap models per agent via baseURL.
  • Multi-Agent Collaboration — Agents with different roles, tools, and models collaborate through a message bus and shared memory.
  • Structured Output — Add outputSchema (Zod) to any agent. Output is parsed as JSON, validated, and auto-retried once on failure. Access typed results via result.structured.
  • Task Retry — Set maxRetries on tasks for automatic retry with exponential backoff. Failed attempts accumulate token usage for accurate billing.
  • Observability — Optional onTrace callback emits structured spans for every LLM call, tool execution, task, and agent run — with timing, token usage, and a shared runId for correlation. Zero overhead when not subscribed, zero extra dependencies.
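The retry timing described in the Task Retry bullet can be sketched as a simple schedule. The function below is an illustration of exponential backoff, not the framework's internals; only the option names (maxRetries, retryDelayMs, retryBackoff) are taken from this README.

```typescript
// Hypothetical sketch of the delay schedule implied by maxRetries,
// retryDelayMs, and retryBackoff. Option names come from this README;
// the framework's actual retry loop is not shown here.
function retryDelays(maxRetries: number, retryDelayMs: number, retryBackoff: number): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    // each failed attempt waits retryDelayMs * retryBackoff^attempt before retrying
    delays.push(retryDelayMs * Math.pow(retryBackoff, attempt));
  }
  return delays;
}

// e.g. 3 retries, 1s base delay, 2x backoff:
console.log(retryDelays(3, 1000, 2)); // [ 1000, 2000, 4000 ]
```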

Quick Start

Requires Node.js >= 18.

npm install @jackchen_me/open-multi-agent

Set ANTHROPIC_API_KEY (and optionally OPENAI_API_KEY or GITHUB_TOKEN for Copilot) in your environment. Local models via Ollama require no API key — see example 06.

Three agents, one goal — the framework handles the rest:

import { OpenMultiAgent } from '@jackchen_me/open-multi-agent'
import type { AgentConfig } from '@jackchen_me/open-multi-agent'

const architect: AgentConfig = {
  name: 'architect',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You design clean API contracts and file structures.',
  tools: ['file_write'],
}

const developer: AgentConfig = {
  name: 'developer',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You implement what the architect designs.',
  tools: ['bash', 'file_read', 'file_write', 'file_edit'],
}

const reviewer: AgentConfig = {
  name: 'reviewer',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You review code for correctness and clarity.',
  tools: ['file_read', 'grep'],
}

const orchestrator = new OpenMultiAgent({
  defaultModel: 'claude-sonnet-4-6',
  onProgress: (event) => console.log(event.type, event.agent ?? event.task ?? ''),
})

const team = orchestrator.createTeam('api-team', {
  name: 'api-team',
  agents: [architect, developer, reviewer],
  sharedMemory: true,
})

// Describe a goal — the framework breaks it into tasks and orchestrates execution
const result = await orchestrator.runTeam(team, 'Create a REST API for a todo list in /tmp/todo-api/')

console.log(`Success: ${result.success}`)
console.log(`Tokens: ${result.totalTokenUsage.output_tokens} output tokens`)

What happens under the hood:

agent_start coordinator
task_start architect
task_complete architect
task_start developer
task_start developer              // independent tasks run in parallel
task_complete developer
task_start reviewer               // unblocked after implementation
task_complete developer
task_complete reviewer
agent_complete coordinator        // synthesizes final result
Success: true
Tokens: 12847 output tokens

Three Ways to Run

| Mode | Method | When to use |
| --- | --- | --- |
| Single agent | runAgent() | One agent, one prompt — simplest entry point |
| Auto-orchestrated team | runTeam() | Give a goal; the framework plans and executes |
| Explicit pipeline | runTasks() | You define the task graph and assignments |

Examples

All examples are runnable scripts in examples/. Run any of them with npx tsx:

npx tsx examples/01-single-agent.ts

| Example | What it shows |
| --- | --- |
| 01 — Single Agent | runAgent() one-shot, stream() streaming, prompt() multi-turn |
| 02 — Team Collaboration | runTeam() auto-orchestration with coordinator pattern |
| 03 — Task Pipeline | runTasks() explicit dependency graph (design → implement → test + review) |
| 04 — Multi-Model Team | defineTool() custom tools, mixed Anthropic + OpenAI providers, AgentPool |
| 05 — Copilot | GitHub Copilot as an LLM provider |
| 06 — Local Model | Ollama + Claude in one pipeline via baseURL (works with vLLM, LM Studio, etc.) |
| 07 — Fan-Out / Aggregate | runParallel() MapReduce — 3 analysts in parallel, then synthesize |
| 08 — Gemma 4 Local | runTasks() + runTeam() with local Gemma 4 via Ollama — zero API cost |
| 09 — Structured Output | outputSchema (Zod) on AgentConfig — validated JSON via result.structured |
| 10 — Task Retry | maxRetries / retryDelayMs / retryBackoff with task_retry progress events |
| 11 — Trace Observability | onTrace callback — structured spans for LLM calls, tools, tasks, and agents |
| 12 — Grok | Same as example 02 (runTeam() collaboration) with Grok (XAI_API_KEY) |

Architecture

┌─────────────────────────────────────────────────────────────────┐
│  OpenMultiAgent (Orchestrator)                                  │
│                                                                 │
│  createTeam()  runTeam()  runTasks()  runAgent()  getStatus()   │
└──────────────────────┬──────────────────────────────────────────┘
                       │
            ┌──────────▼──────────┐
            │  Team               │
            │  - AgentConfig[]    │
            │  - MessageBus       │
            │  - TaskQueue        │
            │  - SharedMemory     │
            └──────────┬──────────┘
                       │
         ┌─────────────┴─────────────┐
         │                           │
┌────────▼──────────┐    ┌───────────▼───────────┐
│  AgentPool        │    │  TaskQueue             │
│  - Semaphore      │    │  - dependency graph    │
│  - runParallel()  │    │  - auto unblock        │
└────────┬──────────┘    │  - cascade failure     │
         │               └───────────────────────┘
┌────────▼──────────┐
│  Agent            │
│  - run()          │    ┌──────────────────────┐
│  - prompt()       │───►│  LLMAdapter          │
│  - stream()       │    │  - AnthropicAdapter  │
└────────┬──────────┘    │  - OpenAIAdapter     │
         │               │  - CopilotAdapter    │
         │               └──────────────────────┘
┌────────▼──────────┐
│  AgentRunner      │    ┌──────────────────────┐
│  - conversation   │───►│  ToolRegistry        │
│    loop           │    │  - defineTool()      │
│  - tool dispatch  │    │  - 5 built-in tools  │
└───────────────────┘    └──────────────────────┘
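The TaskQueue's "auto unblock" behavior in the diagram above can be sketched independently of the framework: a task becomes runnable once every task it depends on has completed. This is an illustrative reimplementation of the idea, not the library's code, and the TaskSpec shape is an assumption.

```typescript
// Illustrative sketch of dependency-based unblocking (not the library's code):
// a task is runnable when all of its dependencies are in the completed set.
interface TaskSpec {
  id: string;
  dependsOn: string[];
}

function runnableTasks(tasks: TaskSpec[], completed: Set<string>): string[] {
  return tasks
    .filter((t) => !completed.has(t.id))
    .filter((t) => t.dependsOn.every((dep) => completed.has(dep)))
    .map((t) => t.id);
}

const tasks: TaskSpec[] = [
  { id: 'design', dependsOn: [] },
  { id: 'implement', dependsOn: ['design'] },
  { id: 'test', dependsOn: ['implement'] },
  { id: 'review', dependsOn: ['implement'] },
];

console.log(runnableTasks(tasks, new Set()));                          // [ 'design' ]
console.log(runnableTasks(tasks, new Set(['design', 'implement'])));   // [ 'test', 'review' ]
```

Note how test and review unblock together once implement completes — this is what lets the orchestrator run them in parallel.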

Built-in Tools

| Tool | Description |
| --- | --- |
| bash | Execute shell commands. Returns stdout + stderr. Supports timeout and cwd. |
| file_read | Read file contents at an absolute path. Supports offset/limit for large files. |
| file_write | Write or create a file. Auto-creates parent directories. |
| file_edit | Edit a file by replacing an exact string match. |
| grep | Search file contents with regex. Uses ripgrep when available, falls back to Node.js. |
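Custom tools can be added alongside the built-ins via defineTool() (see example 04). The sketch below shows one plausible shape for a tool — a name and description for the model plus an async handler — but the interface and signature here are assumptions for illustration, not the package's actual defineTool() API.

```typescript
// Hypothetical tool shape (the real defineTool() signature may differ;
// see example 04): a name/description for the model, and a handler the
// agent runner dispatches to when the model calls the tool.
interface ToolSketch {
  name: string;
  description: string;
  handler: (input: Record<string, unknown>) => Promise<string>;
}

const wordCount: ToolSketch = {
  name: 'word_count',
  description: 'Count the words in a string of text.',
  handler: async (input) =>
    String(String(input.text ?? '').trim().split(/\s+/).filter(Boolean).length),
};
```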

Supported Providers

| Provider | Config | Env var | Status |
| --- | --- | --- | --- |
| Anthropic (Claude) | provider: 'anthropic' | ANTHROPIC_API_KEY | Verified |
| OpenAI (GPT) | provider: 'openai' | OPENAI_API_KEY | Verified |
| Grok (xAI) | provider: 'grok' | XAI_API_KEY | Verified |
| GitHub Copilot | provider: 'copilot' | GITHUB_TOKEN | Verified |
| Ollama / vLLM / LM Studio | provider: 'openai' + baseURL | (none) | Verified |

Verified local models with tool-calling: Gemma 4 (see example 08).

Any OpenAI-compatible API should work via provider: 'openai' + baseURL (DeepSeek, Groq, Mistral, Qwen, MiniMax, etc.). Grok now has first-class support via provider: 'grok'.

LLM Configuration Examples

const grokAgent: AgentConfig = {
  name: 'grok-agent',
  provider: 'grok',
  model: 'grok-4',
  systemPrompt: 'You are a helpful assistant.',
}

(Set your XAI_API_KEY environment variable — no baseURL needed anymore.)
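A local-model configuration can be sketched along the same lines. Field names below follow this README's provider: 'openai' + baseURL convention; the model tag is a placeholder for whatever you have pulled locally, and http://localhost:11434/v1 is Ollama's default OpenAI-compatible endpoint.

```typescript
// Illustrative AgentConfig-shaped object for a local Ollama model
// (field names follow this README; the package's exact type is assumed).
const localAgent = {
  name: 'local-agent',
  provider: 'openai' as const,            // OpenAI-compatible wire format
  model: 'gemma-4',                       // placeholder tag for a locally pulled model
  baseURL: 'http://localhost:11434/v1',   // Ollama's default OpenAI-compatible endpoint
  systemPrompt: 'You answer concisely.',
};

console.log(localAgent.baseURL);
```

The same pattern applies to vLLM or LM Studio — only the baseURL and model tag change.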

Contributing

Issues, feature requests, and PRs are welcome. Some areas where contributions would be especially valuable:

  • Provider integrations — Verify and document OpenAI-compatible providers (DeepSeek, Groq, Qwen, MiniMax, etc.) via baseURL. See #25. For providers that are NOT OpenAI-compatible (e.g. Gemini), a new LLMAdapter implementation is welcome — the interface requires just two methods: chat() and stream().
  • Examples — Real-world workflows and use cases.
  • Documentation — Guides, tutorials, and API docs.
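A new adapter can start as small as the two methods named above. The sketch below guesses at plausible signatures — the chat()/stream() method names come from this README, but the message and return types are simplified stand-ins, not the package's real LLMAdapter interface.

```typescript
// Sketch of a minimal adapter implementing chat() and stream().
// Method names come from this README; parameter and return types here
// are simplified assumptions, not the package's actual interface.
type Msg = { role: 'system' | 'user' | 'assistant'; content: string };

class EchoAdapter {
  // one-shot completion: as a demo, return the last user message reversed
  async chat(messages: Msg[]): Promise<string> {
    const last = messages.filter((m) => m.role === 'user').at(-1);
    return last ? [...last.content].reverse().join('') : '';
  }

  // streaming completion: yield the reply one character at a time
  async *stream(messages: Msg[]): AsyncGenerator<string> {
    for (const ch of await this.chat(messages)) yield ch;
  }
}
```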

Author

JackChen — Ex-PM (¥100M+ revenue), now an indie builder. Follow on X for AI Agent insights.

Contributors

Star History

Star History Chart

License

MIT