Merge branch 'main' into feat.mcp-tool-integration

Commit: 167085c3a7

README.md (123)
@@ -1,8 +1,10 @@
 # Open Multi-Agent
 
-TypeScript framework for multi-agent orchestration. One `runTeam()` call from goal to result — the framework decomposes it into tasks, resolves dependencies, and runs agents in parallel.
+The lightweight multi-agent orchestration engine for TypeScript. Three runtime dependencies, zero config, goal to result in one `runTeam()` call.
 
-3 runtime dependencies · 33 source files · Deploys anywhere Node.js runs · Mentioned in [Latent Space](https://www.latent.space/p/ainews-a-quiet-april-fools) AI News
+CrewAI is Python. LangGraph makes you draw the graph by hand. `open-multi-agent` is the `npm install` you drop into an existing Node.js backend when you need a team of agents to work on a goal together. Nothing more, nothing less.
+
+3 runtime dependencies · 35 source files · Deploys anywhere Node.js runs · Mentioned in [Latent Space](https://www.latent.space/p/ainews-a-quiet-april-fools) AI News (top AI engineering newsletter, 170k+ subscribers)
 
 [](https://github.com/JackChen-me/open-multi-agent/stargazers)
 [](./LICENSE)

@@ -11,19 +13,51 @@ TypeScript framework for multi-agent orchestration. One `runTeam()` call from go
 
 **English** | [中文](./README_zh.md)
 
-## Why Open Multi-Agent?
+## What you actually get
 
-- **Goal In, Result Out** — `runTeam(team, "Build a REST API")`. A coordinator agent auto-decomposes the goal into a task DAG with dependencies and assignees, runs independent tasks in parallel, and synthesizes the final output. No manual task definitions or graph wiring required.
-- **TypeScript-Native** — Built for the Node.js ecosystem. `npm install`, import, run. No Python runtime, no subprocess bridge, no sidecar services. Embed in Express, Next.js, serverless functions, or CI/CD pipelines.
-- **Auditable and Lightweight** — 3 runtime dependencies (`@anthropic-ai/sdk`, `openai`, `zod`). 33 source files. The entire codebase is readable in an afternoon.
-- **Model Agnostic** — Claude, GPT, Gemma 4, and local models (Ollama, vLLM, LM Studio, llama.cpp server) in the same team. Swap models per agent via `baseURL`.
-- **Multi-Agent Collaboration** — Agents with different roles, tools, and models collaborate through a message bus and shared memory.
-- **Structured Output** — Add `outputSchema` (Zod) to any agent. Output is parsed as JSON, validated, and auto-retried once on failure. Access typed results via `result.structured`.
-- **Task Retry** — Set `maxRetries` on tasks for automatic retry with exponential backoff. Failed attempts accumulate token usage for accurate billing.
-- **Human-in-the-Loop** — Optional `onApproval` callback on `runTasks()`. After each batch of tasks completes, your callback decides whether to proceed or abort remaining work.
-- **Lifecycle Hooks** — `beforeRun` / `afterRun` on `AgentConfig`. Intercept the prompt before execution or post-process results after. Throw from either hook to abort.
-- **Loop Detection** — `loopDetection` on `AgentConfig` catches stuck agents repeating the same tool calls or text output. Configurable action: warn (default), terminate, or custom callback.
-- **Observability** — Optional `onTrace` callback emits structured spans for every LLM call, tool execution, task, and agent run — with timing, token usage, and a shared `runId` for correlation. Zero overhead when not subscribed, zero extra dependencies.
+- **Goal to result in one call.** `runTeam(team, "Build a REST API")` kicks off a coordinator agent that decomposes the goal into a task DAG, resolves dependencies, runs independent tasks in parallel, and synthesizes the final output. No graph to draw, no tasks to wire up.
+- **TypeScript-native, three runtime dependencies.** `@anthropic-ai/sdk`, `openai`, `zod`. That is the whole runtime. Embed in Express, Next.js, serverless functions, or CI/CD pipelines. No Python runtime, no subprocess bridge, no cloud sidecar.
+- **Multi-model teams.** Claude, GPT, Gemini, Grok, Copilot, or any OpenAI-compatible local model (Ollama, vLLM, LM Studio, llama.cpp) in the same team. Run the architect on Opus 4.6, the developer on GPT-5.4, the reviewer on local Gemma 4, all in one `runTeam()` call. Gemini ships as an optional peer dependency: `npm install @google/genai` to enable.
+
+Other features (structured output, task retry, human-in-the-loop, lifecycle hooks, loop detection, observability) live below the fold and in [`examples/`](./examples/).
+
+## Philosophy: what we build, what we don't
+
+Our goal is to be the simplest multi-agent framework for TypeScript. Simplicity does not mean closed. We believe the long-term value of a framework is the size of the network it connects to, not its feature checklist.
+
+**We build:**
+
+- A coordinator that decomposes a goal into a task DAG.
+- A task queue that runs independent tasks in parallel and cascades failures to dependents.
+- A shared memory and message bus so agents can see each other's output.
+- Multi-model teams where each agent can use a different LLM provider.
+
+**We don't build:**
+
+- **Agent handoffs.** If agent A needs to transfer mid-conversation to agent B, use the [OpenAI Agents SDK](https://github.com/openai/openai-agents-python). In our model, each agent owns one task end-to-end, with no mid-conversation transfers.
+- **State persistence / checkpointing.** Not planned for now. Adding a storage backend would break the three-dependency promise, and our workflows run in seconds to minutes, not hours. If real usage shifts toward long-running workflows, we will revisit.
+
+**Tracking:**
+
+- **MCP support.** Next up, see [#86](https://github.com/JackChen-me/open-multi-agent/issues/86).
+- **A2A protocol.** Watching, will move when production adoption is real.
+
+See [`DECISIONS.md`](./DECISIONS.md) for the full rationale.
+
+## How is this different from X?
+
+**vs. [LangGraph JS](https://github.com/langchain-ai/langgraphjs).** LangGraph is declarative graph orchestration: you define nodes, edges, and conditional routing, then `compile()` and `invoke()`. `open-multi-agent` is goal-driven: you declare a team and a goal, and a coordinator decomposes it into a task DAG at runtime. LangGraph gives you total control of topology (great for fixed production workflows). This gives you less typing and faster iteration (great for exploratory multi-agent work). LangGraph also has mature checkpointing; we do not.
+
+**vs. [CrewAI](https://github.com/crewAIInc/crewAI).** CrewAI is the mature Python choice. If your stack is Python, use CrewAI. `open-multi-agent` is TypeScript-native: three runtime dependencies, and it embeds directly in Node.js without a subprocess bridge. Roughly comparable capability on the orchestration side. Choose on language fit.
+
+**vs. [Vercel AI SDK](https://github.com/vercel/ai).** AI SDK is the LLM call layer: a unified TypeScript client for 60+ providers with streaming, tool calls, and structured outputs. It does not orchestrate multi-agent teams; `open-multi-agent` sits on top when you need that. They compose: use AI SDK for single-agent work, reach for this when you need a team.
+
+## Used by
+
+`open-multi-agent` is a new project (launched 2026-04-01, MIT, 5,500+ stars). The ecosystem is still forming, so the list below is short and honest:
+
+- **[temodar-agent](https://github.com/xeloxa/temodar-agent)** (~50 stars). WordPress security analysis platform by [Ali Sünbül](https://github.com/xeloxa). Uses our built-in tools (`bash`, `file_*`, `grep`) directly in its Docker runtime. Confirmed production use.
+- **[rentech-quant-platform](https://github.com/rookiecoderasz/rentech-quant-platform).** Multi-agent quant trading research platform. Five pipelines plus MCP integrations, built on top of `open-multi-agent`. Early signal, very new.
+- **Cybersecurity SOC (home lab).** A private setup running Qwen 2.5 + DeepSeek Coder entirely offline via Ollama, building an autonomous SOC pipeline on Wazuh + Proxmox. Early user, not yet public.
+
+Using `open-multi-agent` in production or a side project? [Open a discussion](https://github.com/JackChen-me/open-multi-agent/discussions) and we will list it here.
 
 ## Quick Start
 

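The task-queue behavior described in the hunk above (independent tasks run in parallel, failures cascade to dependents) can be sketched in plain TypeScript. This is an illustrative model only; `Task` and `runDag` are hypothetical names, not the library's API:

```typescript
type Task = {
  id: string
  deps: string[]
  run: () => Promise<string>
}

// Run a task DAG: a task starts once all its dependencies have settled;
// if any dependency did not succeed, the task is skipped, so one failure
// cascades to every transitive dependent.
async function runDag(tasks: Task[]): Promise<Map<string, string>> {
  const status = new Map<string, string>() // 'done' | 'failed' | 'skipped'
  const byId = new Map(tasks.map(t => [t.id, t]))
  const started = new Map<string, Promise<void>>()

  async function exec(id: string): Promise<void> {
    const task = byId.get(id)!
    // Await all dependencies in parallel; independent tasks overlap freely.
    await Promise.all(task.deps.map(d => started.get(d)!))
    if (task.deps.some(d => status.get(d) !== 'done')) {
      status.set(id, 'skipped') // cascade the upstream failure
      return
    }
    try {
      await task.run()
      status.set(id, 'done')
    } catch {
      status.set(id, 'failed')
    }
  }

  // Kick off every task eagerly; the dependency awaits serialize what must wait.
  for (const t of tasks) started.set(t.id, Promise.resolve().then(() => exec(t.id)))
  await Promise.all(started.values())
  return status
}
```

With a `design → implement → review` chain, a failing `implement` leaves `review` skipped rather than run against broken output.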
@@ -38,6 +72,7 @@ Set the API key for your provider. Local models via Ollama require no API key
 - `ANTHROPIC_API_KEY`
 - `OPENAI_API_KEY`
 - `GEMINI_API_KEY`
+- `XAI_API_KEY` (for Grok)
 - `GITHUB_TOKEN` (for Copilot)
 
 Three agents, one goal — the framework handles the rest:
 
@@ -53,19 +88,8 @@ const architect: AgentConfig = {
   tools: ['file_write'],
 }
 
-const developer: AgentConfig = {
-  name: 'developer',
-  model: 'claude-sonnet-4-6',
-  systemPrompt: 'You implement what the architect designs.',
-  tools: ['bash', 'file_read', 'file_write', 'file_edit'],
-}
-
-const reviewer: AgentConfig = {
-  name: 'reviewer',
-  model: 'claude-sonnet-4-6',
-  systemPrompt: 'You review code for correctness and clarity.',
-  tools: ['file_read', 'grep'],
-}
+const developer: AgentConfig = { /* same shape, tools: ['bash', 'file_read', 'file_write', 'file_edit'] */ }
+const reviewer: AgentConfig = { /* same shape, tools: ['file_read', 'grep'] */ }
 
 const orchestrator = new OpenMultiAgent({
   defaultModel: 'claude-sonnet-4-6',

@@ -94,8 +118,8 @@ task_complete architect
 task_start developer
 task_start developer       // independent tasks run in parallel
 task_complete developer
-task_start reviewer        // unblocked after implementation
 task_complete developer
+task_start reviewer        // unblocked after implementation
 task_complete reviewer
 agent_complete coordinator // synthesizes final result
 Success: true

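The structured-output contract mentioned elsewhere in this diff (output parsed as JSON, validated, auto-retried once on failure) can be sketched without the library. `structuredOnce` and its plain-predicate validator are hypothetical stand-ins for the real `outputSchema` (Zod) path:

```typescript
// Sketch of the structured-output flow: parse the model's text as JSON,
// validate it, and allow exactly one retry when parsing or validation fails.
async function structuredOnce<T>(
  callModel: () => Promise<string>,
  validate: (value: unknown) => value is T,
): Promise<T> {
  for (let attempt = 0; attempt < 2; attempt++) { // initial try + one retry
    try {
      const parsed: unknown = JSON.parse(await callModel())
      if (validate(parsed)) return parsed
    } catch {
      // parse error: fall through to the retry
    }
  }
  throw new Error('structured output failed validation after retry')
}
```

A real schema library replaces the predicate, but the retry-once shape stays the same.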
@@ -110,30 +134,18 @@ Tokens: 12847 output tokens
 | Auto-orchestrated team | `runTeam()` | Give a goal, framework plans and executes |
 | Explicit pipeline | `runTasks()` | You define the task graph and assignments |
 
+For MapReduce-style fan-out without task dependencies, use `AgentPool.runParallel()` directly. See [example 07](examples/07-fan-out-aggregate.ts).
+
 ## Examples
 
-All examples are runnable scripts in [`examples/`](./examples/). Run any of them with `npx tsx`:
+16 runnable scripts in [`examples/`](./examples/). Start with these four:
 
-```bash
-npx tsx examples/01-single-agent.ts
-```
+- [02 — Team Collaboration](examples/02-team-collaboration.ts): `runTeam()` coordinator pattern.
+- [06 — Local Model](examples/06-local-model.ts): Ollama and Claude in one pipeline via `baseURL`.
+- [09 — Structured Output](examples/09-structured-output.ts): any agent returns Zod-validated JSON.
+- [11 — Trace Observability](examples/11-trace-observability.ts): `onTrace` spans for LLM calls, tools, and tasks.
 
-| Example | What it shows |
-|---------|---------------|
-| [01 — Single Agent](examples/01-single-agent.ts) | `runAgent()` one-shot, `stream()` streaming, `prompt()` multi-turn |
-| [02 — Team Collaboration](examples/02-team-collaboration.ts) | `runTeam()` auto-orchestration with coordinator pattern |
-| [03 — Task Pipeline](examples/03-task-pipeline.ts) | `runTasks()` explicit dependency graph (design → implement → test + review) |
-| [04 — Multi-Model Team](examples/04-multi-model-team.ts) | `defineTool()` custom tools, mixed Anthropic + OpenAI providers, `AgentPool` |
-| [05 — Copilot](examples/05-copilot-test.ts) | GitHub Copilot as an LLM provider |
-| [06 — Local Model](examples/06-local-model.ts) | Ollama + Claude in one pipeline via `baseURL` (works with vLLM, LM Studio, etc.) |
-| [07 — Fan-Out / Aggregate](examples/07-fan-out-aggregate.ts) | `runParallel()` MapReduce — 3 analysts in parallel, then synthesize |
-| [08 — Gemma 4 Local](examples/08-gemma4-local.ts) | `runTasks()` + `runTeam()` with local Gemma 4 via Ollama — zero API cost |
-| [09 — Structured Output](examples/09-structured-output.ts) | `outputSchema` (Zod) on AgentConfig — validated JSON via `result.structured` |
-| [10 — Task Retry](examples/10-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` with `task_retry` progress events |
-| [11 — Trace Observability](examples/11-trace-observability.ts) | `onTrace` callback — structured spans for LLM calls, tools, tasks, and agents |
-| [12 — Grok](examples/12-grok.ts) | Same as example 02 (`runTeam()` collaboration) with Grok (`XAI_API_KEY`) |
-| [13 — Gemini](examples/13-gemini.ts) | Gemini adapter smoke test with `gemini-2.5-flash` (`GEMINI_API_KEY`) |
-| [16 — MCP GitHub Tools](examples/16-mcp-github.ts) | Connect MCP over stdio and use server tools as native `ToolDefinition`s |
+Run any with `npx tsx examples/02-team-collaboration.ts`.
 
 ## Architecture
 

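Example 10's retry knobs (`maxRetries`, `retryDelayMs`, `retryBackoff`) describe exponential backoff. A minimal sketch of that policy under those assumed semantics; `withRetry` is a hypothetical helper, not the library's API:

```typescript
// Retry `fn` up to `maxRetries` extra times, waiting retryDelayMs before the
// first retry and multiplying the delay by retryBackoff after each failure.
async function withRetry<T>(
  fn: () => Promise<T>,
  { maxRetries = 2, retryDelayMs = 1000, retryBackoff = 2 }: {
    maxRetries?: number; retryDelayMs?: number; retryBackoff?: number
  } = {},
): Promise<T> {
  let delay = retryDelayMs
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt >= maxRetries) throw err // out of retries: surface the error
      await new Promise(resolve => setTimeout(resolve, delay))
      delay *= retryBackoff // exponential backoff before the next attempt
    }
  }
}
```

With the defaults, retries wait 1000 ms, then 2000 ms, before giving up.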
@@ -272,6 +284,8 @@ Notes:
 | Ollama / vLLM / LM Studio | `provider: 'openai'` + `baseURL` | — | Verified |
 | llama.cpp server | `provider: 'openai'` + `baseURL` | — | Verified |
 
+Gemini requires `npm install @google/genai` (optional peer dependency).
+
 Verified local models with tool-calling: **Gemma 4** (see [example 08](examples/08-gemma4-local.ts)).
 
 Any OpenAI-compatible API should work via `provider: 'openai'` + `baseURL` (DeepSeek, Groq, Mistral, Qwen, MiniMax, etc.). **Grok now has first-class support** via `provider: 'grok'`.
 
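Per the provider table above, any OpenAI-compatible local server is reached with `provider: 'openai'` plus a `baseURL`. A sketch using Ollama's default OpenAI-compatible endpoint; the structural type and the model name are placeholders, not the real `AgentConfig`:

```typescript
// Minimal structural stand-in; the real AgentConfig has more fields.
type AgentConfigSketch = {
  name: string
  provider: string
  baseURL: string
  model: string
}

const localAgent: AgentConfigSketch = {
  name: 'local-analyst',
  provider: 'openai',                   // any OpenAI-compatible server
  baseURL: 'http://localhost:11434/v1', // Ollama's default endpoint
  model: 'gemma4',                      // placeholder: whatever model the server hosts
}
```

Swapping to vLLM, LM Studio, or a llama.cpp server is only a `baseURL` and `model` change.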
@@ -320,27 +334,22 @@ const grokAgent: AgentConfig = {
 
 Issues, feature requests, and PRs are welcome. Some areas where contributions would be especially valuable:
 
-- **Provider integrations** — Verify and document OpenAI-compatible providers (DeepSeek, Groq, Qwen, MiniMax, etc.) via `baseURL`. See [#25](https://github.com/JackChen-me/open-multi-agent/issues/25). For providers that are NOT OpenAI-compatible (e.g. Gemini), a new `LLMAdapter` implementation is welcome — the interface requires just two methods: `chat()` and `stream()`.
 - **Examples** — Real-world workflows and use cases.
 - **Documentation** — Guides, tutorials, and API docs.
 
-## Author
-
-> JackChen — Ex PM (¥100M+ revenue), now indie builder. Follow on [X](https://x.com/JackChen_x) for AI Agent insights.
-
 ## Contributors
 
 <a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
-  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&v=20260408" />
+  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&max=20&v=20260411" />
 </a>
 
 ## Star History
 
 <a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
   <picture>
-    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark&v=20260408" />
+    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark" />
-    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260408" />
+    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
-    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260408" />
+    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
   </picture>
 </a>

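The contributing note in this diff says a new `LLMAdapter` needs only `chat()` and `stream()`. A skeleton under that assumption; the signatures and the `EchoAdapter` are invented for illustration, not the library's actual interface:

```typescript
// Assumed two-method adapter shape; the real LLMAdapter signatures may differ.
interface LLMAdapterSketch {
  chat(messages: { role: string; content: string }[]): Promise<string>
  stream(messages: { role: string; content: string }[]): AsyncIterable<string>
}

// A trivial adapter showing the shape a new provider integration would fill in:
// chat() returns one completed response, stream() yields it incrementally.
class EchoAdapter implements LLMAdapterSketch {
  async chat(messages: { role: string; content: string }[]): Promise<string> {
    return messages[messages.length - 1]?.content ?? ''
  }
  async *stream(messages: { role: string; content: string }[]) {
    for (const word of (await this.chat(messages)).split(' ')) yield word
  }
}
```

A real adapter would call the provider's HTTP API in both methods instead of echoing.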
README_zh.md (173)
@@ -1,8 +1,10 @@
 # Open Multi-Agent
 
-TypeScript 多智能体编排框架。一次 `runTeam()` 调用从目标到结果——框架自动拆解任务、解析依赖、并行执行。
+面向 TypeScript 的轻量多智能体编排引擎。3 个运行时依赖,零配置,一次 `runTeam()` 调用从目标到结果。
 
-3 个运行时依赖 · 33 个源文件 · Node.js 能跑的地方都能部署 · 被 [Latent Space](https://www.latent.space/p/ainews-a-quiet-april-fools) AI News 提及(AI 工程领域头部 Newsletter,17 万+订阅者)
+CrewAI 是 Python。LangGraph 需要你自己画图。`open-multi-agent` 是你现有 Node.js 后端里 `npm install` 一下就能用的那一层:当你需要让一支 agent 团队围绕一个目标协作时,只提供这个,不多不少。
+
+3 个运行时依赖 · 35 个源文件 · Node.js 能跑的地方都能部署 · 被 [Latent Space](https://www.latent.space/p/ainews-a-quiet-april-fools) AI News 提及(AI 工程领域头部 Newsletter,17 万+订阅者)
 
 [](https://github.com/JackChen-me/open-multi-agent/stargazers)
 [](./LICENSE)

@@ -11,19 +13,51 @@ TypeScript 多智能体编排框架。一次 `runTeam()` 调用从目标到结
 
 [English](./README.md) | **中文**
 
-## 为什么选择 Open Multi-Agent?
+## 你真正得到的三件事
 
-- **目标进,结果出** — `runTeam(team, "构建一个 REST API")`。协调者智能体自动将目标拆解为带依赖关系的任务图,分配给对应智能体,独立任务并行执行,最终合成输出。无需手动定义任务或编排流程图。
-- **TypeScript 原生** — 为 Node.js 生态而生。`npm install` 即用,无需 Python 运行时、无子进程桥接、无额外基础设施。可嵌入 Express、Next.js、Serverless 函数或 CI/CD 流水线。
-- **可审计、极轻量** — 3 个运行时依赖(`@anthropic-ai/sdk`、`openai`、`zod`),33 个源文件。一个下午就能读完全部源码。
-- **模型无关** — Claude、GPT、Gemma 4 和本地模型(Ollama、vLLM、LM Studio、llama.cpp server)可以在同一个团队中使用。通过 `baseURL` 即可接入任何 OpenAI 兼容服务。
-- **多智能体协作** — 定义不同角色、工具和模型的智能体,通过消息总线和共享内存协作。
-- **结构化输出** — 为任意智能体添加 `outputSchema`(Zod),输出自动解析为 JSON 并校验,校验失败自动重试一次。通过 `result.structured` 获取类型化结果。
-- **任务重试** — 为任务设置 `maxRetries`,失败时自动指数退避重试。所有尝试的 token 用量累计,确保计费准确。
-- **人机协同** — `runTasks()` 支持可选的 `onApproval` 回调。每批任务完成后,由你的回调决定是否继续执行后续任务。
-- **生命周期钩子** — `AgentConfig` 上的 `beforeRun` / `afterRun`。在执行前拦截 prompt,或在执行后处理结果。从钩子中 throw 可中止运行。
-- **循环检测** — `AgentConfig` 上的 `loopDetection` 可检测智能体重复相同工具调用或文本输出的卡死循环。可配置行为:警告(默认)、终止、或自定义回调。
-- **可观测性** — 可选的 `onTrace` 回调为每次 LLM 调用、工具执行、任务和智能体运行发出结构化 span 事件——包含耗时、token 用量和共享的 `runId` 用于关联追踪。未订阅时零开销,零额外依赖。
+- **一次调用从目标到结果。** `runTeam(team, "构建一个 REST API")` 启动一个协调者 agent,把目标拆成任务 DAG,解析依赖,独立任务并行执行,最终合成输出。不需要画图,不需要手动连任务。
+- **TypeScript 原生,3 个运行时依赖。** `@anthropic-ai/sdk`、`openai`、`zod`。这就是全部运行时。可嵌入 Express、Next.js、Serverless 函数或 CI/CD 流水线。没有 Python 运行时,没有子进程桥接,没有云端 sidecar。
+- **多模型团队。** Claude、GPT、Gemini、Grok、Copilot,或任何 OpenAI 兼容的本地模型(Ollama、vLLM、LM Studio、llama.cpp)可以在同一个团队中使用。让架构师用 Opus 4.6,开发者用 GPT-5.4,评审用本地的 Gemma 4,一次 `runTeam()` 调用全部搞定。Gemini 作为 optional peer dependency 提供:使用前需 `npm install @google/genai`。
+
+其他能力(结构化输出、任务重试、人机协同、生命周期钩子、循环检测、可观测性)在下方章节和 [`examples/`](./examples/) 里。
+
+## 哲学:我们做什么,不做什么
+
+我们的目标是做 TypeScript 生态里最简单的多智能体框架。简单不等于封闭。框架的长期价值不在于功能清单的长度,而在于它连接的网络有多大。
+
+**我们做:**
+
+- 一个协调者,把目标拆成任务 DAG。
+- 一个任务队列,独立任务并行执行,失败级联到下游。
+- 共享内存和消息总线,让 agent 之间能看到彼此的输出。
+- 多模型团队,每个 agent 可以用不同的 LLM provider。
+
+**我们不做:**
+
+- **Agent Handoffs。** 如果 agent A 需要把对话中途交接给 agent B,去用 [OpenAI Agents SDK](https://github.com/openai/openai-agents-python)。在我们的模型里,每个 agent 完整负责自己的任务,不会中途交接。
+- **状态持久化 / 检查点。** 短期内不做。加存储后端会打破 3 个依赖的承诺,而且我们的工作流执行时间是秒到分钟级,不是小时级。如果真实使用场景转向长时间工作流,我们会重新评估。
+
+**正在跟踪:**
+
+- **MCP 支持。** 下一个要做的,见 [#86](https://github.com/JackChen-me/open-multi-agent/issues/86)。
+- **A2A 协议。** 观望中,等生产级采纳到位再行动。
+
+完整理由见 [`DECISIONS.md`](./DECISIONS.md)。
+
+## 和 X 有什么不同?
+
+**vs. [LangGraph JS](https://github.com/langchain-ai/langgraphjs)。** LangGraph 是声明式图编排:你定义节点、边、条件路由,然后 `compile()` + `invoke()`。`open-multi-agent` 是目标驱动:你声明团队和目标,协调者在运行时把目标拆成任务 DAG。LangGraph 给你完全的拓扑控制(适合固定的生产工作流)。这个框架代码更少、迭代更快(适合探索型多智能体协作)。LangGraph 还有成熟的检查点能力,我们没有。
+
+**vs. [CrewAI](https://github.com/crewAIInc/crewAI)。** CrewAI 是成熟的 Python 选择。如果你的技术栈是 Python,用 CrewAI。`open-multi-agent` 是 TypeScript 原生:3 个运行时依赖,直接嵌入 Node.js,不需要子进程桥接。编排能力大致相当,按语言契合度选。
+
+**vs. [Vercel AI SDK](https://github.com/vercel/ai)。** AI SDK 是 LLM 调用层:统一的 TypeScript 客户端,支持 60+ provider,带流式、tool calls、结构化输出。它不做多智能体编排,需要多 agent 时 `open-multi-agent` 叠在它之上。两者互补:单 agent 用 AI SDK,需要团队用这个。
+
+## 谁在用
+
+`open-multi-agent` 是一个新项目(2026-04-01 发布,MIT 许可,5,500+ stars)。生态还在成形,下面这份列表很短,但都真实:
+
+- **[temodar-agent](https://github.com/xeloxa/temodar-agent)**(约 50 stars)。WordPress 安全分析平台,作者 [Ali Sünbül](https://github.com/xeloxa)。在 Docker runtime 里直接使用我们的内置工具(`bash`、`file_*`、`grep`)。已确认生产环境使用。
+- **[rentech-quant-platform](https://github.com/rookiecoderasz/rentech-quant-platform)。** 多智能体量化交易研究平台,5 条管线 + MCP 集成,基于 `open-multi-agent` 构建。早期信号,项目非常新。
+- **家用服务器 Cybersecurity SOC。** 本地完全离线运行 Qwen 2.5 + DeepSeek Coder(通过 Ollama),在 Wazuh + Proxmox 上构建自主 SOC 流水线。早期用户,未公开。
+
+你在生产环境或 side project 里用 `open-multi-agent` 吗?[开一个 Discussion](https://github.com/JackChen-me/open-multi-agent/discussions),我们会把你列上来。
 
 ## 快速开始
 

@@ -54,19 +88,8 @@ const architect: AgentConfig = {
   tools: ['file_write'],
 }
 
-const developer: AgentConfig = {
-  name: 'developer',
-  model: 'claude-sonnet-4-6',
-  systemPrompt: 'You implement what the architect designs.',
-  tools: ['bash', 'file_read', 'file_write', 'file_edit'],
-}
-
-const reviewer: AgentConfig = {
-  name: 'reviewer',
-  model: 'claude-sonnet-4-6',
-  systemPrompt: 'You review code for correctness and clarity.',
-  tools: ['file_read', 'grep'],
-}
+const developer: AgentConfig = { /* 同样结构,tools: ['bash', 'file_read', 'file_write', 'file_edit'] */ }
+const reviewer: AgentConfig = { /* 同样结构,tools: ['file_read', 'grep'] */ }
 
 const orchestrator = new OpenMultiAgent({
   defaultModel: 'claude-sonnet-4-6',

@@ -82,8 +105,8 @@ const team = orchestrator.createTeam('api-team', {
 // 描述一个目标——框架将其拆解为任务并编排执行
 const result = await orchestrator.runTeam(team, 'Create a REST API for a todo list in /tmp/todo-api/')
 
-console.log(`成功: ${result.success}`)
+console.log(`Success: ${result.success}`)
-console.log(`Token 用量: ${result.totalTokenUsage.output_tokens} output tokens`)
+console.log(`Tokens: ${result.totalTokenUsage.output_tokens} output tokens`)
 ```
 
 执行过程:
 
@@ -95,8 +118,8 @@ task_complete architect
 task_start developer
 task_start developer       // 无依赖的任务并行执行
 task_complete developer
-task_start reviewer        // 实现完成后自动解锁
 task_complete developer
+task_start reviewer        // 实现完成后自动解锁
 task_complete reviewer
 agent_complete coordinator // 综合所有结果
 Success: true

@@ -111,29 +134,18 @@ Tokens: 12847 output tokens
 | 自动编排团队 | `runTeam()` | 给一个目标,框架自动规划和执行 |
 | 显式任务管线 | `runTasks()` | 你自己定义任务图和分配 |
 
+如果需要 MapReduce 风格的扇出而不涉及任务依赖,直接使用 `AgentPool.runParallel()`。参见[示例 07](examples/07-fan-out-aggregate.ts)。
+
 ## 示例
 
-所有示例都是可运行脚本,位于 [`examples/`](./examples/) 目录。使用 `npx tsx` 运行:
+[`examples/`](./examples/) 里有 16 个可运行脚本。推荐从这 4 个开始:
 
-```bash
-npx tsx examples/01-single-agent.ts
-```
+- [02 — 团队协作](examples/02-team-collaboration.ts):`runTeam()` 协调者模式。
+- [06 — 本地模型](examples/06-local-model.ts):通过 `baseURL` 把 Ollama 和 Claude 放在同一条管线。
+- [09 — 结构化输出](examples/09-structured-output.ts):任意 agent 产出 Zod 校验过的 JSON。
+- [11 — 可观测性](examples/11-trace-observability.ts):`onTrace` 回调,为 LLM 调用、工具、任务发出结构化 span。
 
-| 示例 | 展示内容 |
-|------|----------|
-| [01 — 单智能体](examples/01-single-agent.ts) | `runAgent()` 单次调用、`stream()` 流式输出、`prompt()` 多轮对话 |
-| [02 — 团队协作](examples/02-team-collaboration.ts) | `runTeam()` 自动编排 + 协调者模式 |
-| [03 — 任务流水线](examples/03-task-pipeline.ts) | `runTasks()` 显式依赖图(设计 → 实现 → 测试 + 评审) |
-| [04 — 多模型团队](examples/04-multi-model-team.ts) | `defineTool()` 自定义工具、Anthropic + OpenAI 混合、`AgentPool` |
-| [05 — Copilot](examples/05-copilot-test.ts) | GitHub Copilot 作为 LLM 提供者 |
-| [06 — 本地模型](examples/06-local-model.ts) | Ollama + Claude 混合流水线,通过 `baseURL` 接入(兼容 vLLM、LM Studio 等) |
-| [07 — 扇出聚合](examples/07-fan-out-aggregate.ts) | `runParallel()` MapReduce — 3 个分析师并行,然后综合 |
-| [08 — Gemma 4 本地](examples/08-gemma4-local.ts) | `runTasks()` + `runTeam()` 本地 Gemma 4 via Ollama — 零 API 费用 |
-| [09 — 结构化输出](examples/09-structured-output.ts) | `outputSchema`(Zod)— 校验 JSON 输出,通过 `result.structured` 获取 |
-| [10 — 任务重试](examples/10-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` + `task_retry` 进度事件 |
-| [11 — 可观测性](examples/11-trace-observability.ts) | `onTrace` 回调 — LLM 调用、工具、任务、智能体的结构化 span 事件 |
-| [12 — Grok](examples/12-grok.ts) | 同示例 02(`runTeam()` 团队协作),使用 Grok(`XAI_API_KEY`) |
-| [13 — Gemini](examples/13-gemini.ts) | Gemini 适配器测试,使用 `gemini-2.5-flash`(`GEMINI_API_KEY`) |
+用 `npx tsx examples/02-team-collaboration.ts` 运行任意一个。
 
 ## 架构
 

@@ -188,6 +200,54 @@ npx tsx examples/01-single-agent.ts
 | `file_edit` | 通过精确字符串匹配编辑文件。 |
 | `grep` | 使用正则表达式搜索文件内容。优先使用 ripgrep,回退到 Node.js 实现。 |
 
+## 工具配置
+
+可以通过预设、白名单和黑名单对 agent 的工具访问进行精细控制。
+
+### 工具预设
+
+为常见场景预定义的工具组合:
+
+```typescript
+const readonlyAgent: AgentConfig = {
+  name: 'reader',
+  model: 'claude-sonnet-4-6',
+  toolPreset: 'readonly', // file_read, grep, glob
+}
+
+const readwriteAgent: AgentConfig = {
+  name: 'editor',
+  model: 'claude-sonnet-4-6',
+  toolPreset: 'readwrite', // file_read, file_write, file_edit, grep, glob
+}
+
+const fullAgent: AgentConfig = {
+  name: 'executor',
+  model: 'claude-sonnet-4-6',
+  toolPreset: 'full', // file_read, file_write, file_edit, grep, glob, bash
+}
+```
+
+### 高级过滤
+
+将预设与白名单、黑名单组合,实现精确控制:
+
+```typescript
+const customAgent: AgentConfig = {
+  name: 'custom',
+  model: 'claude-sonnet-4-6',
+  toolPreset: 'readwrite',      // 起点:file_read, file_write, file_edit, grep, glob
+  tools: ['file_read', 'grep'], // 白名单:与预设取交集 = file_read, grep
+  disallowedTools: ['grep'],    // 黑名单:再减去 = 只剩 file_read
+}
+```
+
+**解析顺序:** preset → allowlist → denylist → 框架安全护栏。
+
+### 自定义工具
+
+通过 `agent.addTool()` 添加的工具始终可用,不受过滤规则影响。
+
 ## 支持的 Provider
 
 | Provider | 配置 | 环境变量 | 状态 |
 
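The resolution order in the hunk above (preset, then allowlist intersection, then denylist subtraction) can be sketched as one function. The preset contents mirror the comments in the diff; `resolveTools` itself is illustrative, not the library's implementation:

```typescript
// Preset contents as documented in the README's comments.
const PRESETS: Record<string, string[]> = {
  readonly: ['file_read', 'grep', 'glob'],
  readwrite: ['file_read', 'file_write', 'file_edit', 'grep', 'glob'],
  full: ['file_read', 'file_write', 'file_edit', 'grep', 'glob', 'bash'],
}

// preset → allowlist (intersection) → denylist (subtraction)
function resolveTools(
  preset: string,
  tools?: string[],
  disallowedTools?: string[],
): string[] {
  let resolved = PRESETS[preset] ?? []
  if (tools) resolved = resolved.filter(t => tools.includes(t))
  if (disallowedTools) resolved = resolved.filter(t => !disallowedTools.includes(t))
  return resolved
}
```

For the `customAgent` above: `readwrite` intersected with `['file_read', 'grep']` minus `['grep']` leaves only `file_read`.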
@@ -200,6 +260,8 @@
 | Ollama / vLLM / LM Studio | `provider: 'openai'` + `baseURL` | — | 已验证 |
 | llama.cpp server | `provider: 'openai'` + `baseURL` | — | 已验证 |
 
+Gemini 需要 `npm install @google/genai`(optional peer dependency)。
+
 已验证支持 tool-calling 的本地模型:**Gemma 4**(见[示例 08](examples/08-gemma4-local.ts))。
 
 任何 OpenAI 兼容 API 均可通过 `provider: 'openai'` + `baseURL` 接入(DeepSeek、Groq、Mistral、Qwen、MiniMax 等)。**Grok 现已原生支持**,使用 `provider: 'grok'`。
 
@@ -248,27 +310,22 @@ const grokAgent: AgentConfig = {
 
 欢迎提 Issue、功能需求和 PR。以下方向的贡献尤其有价值:
 
-- **Provider 集成** — 验证并文档化 OpenAI 兼容 Provider(DeepSeek、Groq、Qwen、MiniMax 等)通过 `baseURL` 接入。详见 [#25](https://github.com/JackChen-me/open-multi-agent/issues/25)。对于非 OpenAI 兼容的 Provider,欢迎贡献新的 `LLMAdapter` 实现——接口只需两个方法:`chat()` 和 `stream()`。
 - **示例** — 真实场景的工作流和用例。
 - **文档** — 指南、教程和 API 文档。
 
-## 作者
-
-> JackChen — 前 WPS 产品经理,现独立创业者。关注小红书[「杰克西|硅基杠杆」](https://www.xiaohongshu.com/user/profile/5a1bdc1e4eacab4aa39ea6d6),持续获取我的 AI Agent 观点和思考。
-
 ## 贡献者
 
 <a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
-  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&v=20260408" />
+  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&max=20&v=20260411" />
 </a>
 
 ## Star 趋势
 
 <a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
   <picture>
-    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark&v=20260408" />
+    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark" />
-    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260408" />
+    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
-    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260408" />
+    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
   </picture>
 </a>

||||||
|
|
@@ -4,6 +4,8 @@
 * Demonstrates how to define tasks with explicit dependency chains
 * (design → implement → test → review) using runTasks(). The TaskQueue
 * automatically blocks downstream tasks until their dependencies complete.
 * Prompt context is dependency-scoped by default: each task sees only its own
 * description plus direct dependency results (not unrelated team outputs).
 *
 * Run:
 *   npx tsx examples/03-task-pipeline.ts

@@ -116,6 +118,7 @@ const tasks: Array<{
  description: string
  assignee?: string
  dependsOn?: string[]
  memoryScope?: 'dependencies' | 'all'
}> = [
  {
    title: 'Design: URL shortener data model',

@@ -162,6 +165,9 @@ Produce a structured code review with sections:
- Verdict: SHIP or NEEDS WORK`,
    assignee: 'reviewer',
    dependsOn: ['Implement: URL shortener'], // runs in parallel with Test after Implement completes
    // Optional override: reviewers can opt into full shared memory when needed.
    // Remove this line to keep strict dependency-only context.
    memoryScope: 'all',
  },
]
@@ -1,6 +1,6 @@
{
  "name": "@jackchen_me/open-multi-agent",
  "version": "1.1.0",
  "description": "TypeScript multi-agent framework — one runTeam() call from goal to result. Auto task decomposition, parallel execution. 3 dependencies, deploys anywhere Node.js runs.",
  "files": [
    "dist",
@@ -124,8 +124,18 @@ export class SharedMemory {
   * - plan: Implement feature X using const type params
   * ```
   */
  async getSummary(filter?: { taskIds?: string[] }): Promise<string> {
    let all = await this.store.list()
    if (filter?.taskIds && filter.taskIds.length > 0) {
      const taskIds = new Set(filter.taskIds)
      all = all.filter((entry) => {
        const slashIdx = entry.key.indexOf('/')
        const localKey = slashIdx === -1 ? entry.key : entry.key.slice(slashIdx + 1)
        if (!localKey.startsWith('task:') || !localKey.endsWith(':result')) return false
        const taskId = localKey.slice('task:'.length, localKey.length - ':result'.length)
        return taskIds.has(taskId)
      })
    }
    if (all.length === 0) return ''

    // Group entries by agent name.
@@ -324,6 +324,10 @@ interface ParsedTaskSpec {
  description: string
  assignee?: string
  dependsOn?: string[]
  memoryScope?: 'dependencies' | 'all'
  maxRetries?: number
  retryDelayMs?: number
  retryBackoff?: number
}

/**

@@ -362,6 +366,10 @@ function parseTaskSpecs(raw: string): ParsedTaskSpec[] | null {
      dependsOn: Array.isArray(obj['dependsOn'])
        ? (obj['dependsOn'] as unknown[]).filter((x): x is string => typeof x === 'string')
        : undefined,
      memoryScope: obj['memoryScope'] === 'all' ? 'all' : undefined,
      maxRetries: typeof obj['maxRetries'] === 'number' ? obj['maxRetries'] : undefined,
      retryDelayMs: typeof obj['retryDelayMs'] === 'number' ? obj['retryDelayMs'] : undefined,
      retryBackoff: typeof obj['retryBackoff'] === 'number' ? obj['retryBackoff'] : undefined,
    })
  }

@@ -492,8 +500,8 @@ async function executeQueue(
        data: task,
      } satisfies OrchestratorEvent)

      // Build the prompt: task description + dependency-only context by default.
      const prompt = await buildTaskPrompt(task, team, queue)

      // Build trace context for this task's agent run
      const traceOptions: Partial<RunOptions> | undefined = config.onTrace
@@ -626,17 +634,19 @@ async function executeQueue(
 *
 * Injects:
 * - Task title and description
 * - Direct dependency task results by default (clean slate when none)
 * - Optional full shared-memory context when `task.memoryScope === 'all'`
 * - Any messages addressed to this agent from the team bus
 */
async function buildTaskPrompt(task: Task, team: Team, queue: TaskQueue): Promise<string> {
  const lines: string[] = [
    `# Task: ${task.title}`,
    '',
    task.description,
  ]

  if (task.memoryScope === 'all') {
    // Explicit opt-in for full visibility (legacy/shared-memory behavior).
    const sharedMem = team.getSharedMemoryInstance()
    if (sharedMem) {
      const summary = await sharedMem.getSummary()

@@ -644,6 +654,19 @@ async function buildTaskPrompt(task: Task, team: Team): Promise<string> {
        lines.push('', summary)
      }
    }
  } else if (task.dependsOn && task.dependsOn.length > 0) {
    // Default-deny: inject only explicit prerequisite outputs.
    const depResults: string[] = []
    for (const depId of task.dependsOn) {
      const depTask = queue.get(depId)
      if (depTask?.status === 'completed' && depTask.result) {
        depResults.push(`### ${depTask.title} (by ${depTask.assignee ?? 'unknown'})\n${depTask.result}`)
      }
    }
    if (depResults.length > 0) {
      lines.push('', '## Context from prerequisite tasks', '', ...depResults)
    }
  }

  // Inject messages from other agents addressed to this assignee
  if (task.assignee) {
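For illustration, a dependent task's prompt assembled by `buildTaskPrompt` ends up shaped like this (titles, assignee, and result text are placeholder values):

```
# Task: Second

Use first

## Context from prerequisite tasks

### First (by worker-a)
first output
```

Tasks with no dependencies and no `memoryScope: 'all'` override get only the first three lines — a clean slate.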
@@ -1071,6 +1094,7 @@ export class OpenMultiAgent {
      description: string
      assignee?: string
      dependsOn?: string[]
      memoryScope?: 'dependencies' | 'all'
      maxRetries?: number
      retryDelayMs?: number
      retryBackoff?: number

@@ -1087,6 +1111,7 @@ export class OpenMultiAgent {
      description: t.description,
      assignee: t.assignee,
      dependsOn: t.dependsOn,
      memoryScope: t.memoryScope,
      maxRetries: t.maxRetries,
      retryDelayMs: t.retryDelayMs,
      retryBackoff: t.retryBackoff,

@@ -1308,6 +1333,7 @@ export class OpenMultiAgent {
   */
  private loadSpecsIntoQueue(
    specs: ReadonlyArray<ParsedTaskSpec & {
      memoryScope?: 'dependencies' | 'all'
      maxRetries?: number
      retryDelayMs?: number
      retryBackoff?: number

@@ -1328,6 +1354,7 @@ export class OpenMultiAgent {
      assignee: spec.assignee && agentNames.has(spec.assignee)
        ? spec.assignee
        : undefined,
      memoryScope: spec.memoryScope,
      maxRetries: spec.maxRetries,
      retryDelayMs: spec.retryDelayMs,
      retryBackoff: spec.retryBackoff,
@@ -289,6 +289,11 @@ export class TaskQueue {
    return this.list().filter((t) => t.status === status)
  }

  /** Returns a task by ID, if present. */
  get(taskId: string): Task | undefined {
    return this.tasks.get(taskId)
  }

  /**
   * Returns `true` when every task in the queue has reached a terminal state
   * (`'completed'`, `'failed'`, or `'skipped'`), **or** the queue is empty.

@@ -31,6 +31,7 @@ export function createTask(input: {
  description: string
  assignee?: string
  dependsOn?: string[]
  memoryScope?: 'dependencies' | 'all'
  maxRetries?: number
  retryDelayMs?: number
  retryBackoff?: number

@@ -43,6 +44,7 @@ export function createTask(input: {
    status: 'pending' as TaskStatus,
    assignee: input.assignee,
    dependsOn: input.dependsOn ? [...input.dependsOn] : undefined,
    memoryScope: input.memoryScope,
    result: undefined,
    createdAt: now,
    updatedAt: now,
@@ -361,6 +361,12 @@ export interface Task {
  assignee?: string
  /** IDs of tasks that must complete before this one can start. */
  dependsOn?: readonly string[]
  /**
   * Controls what prior team context is injected into this task's prompt.
   * - `dependencies` (default): only direct dependency task results
   * - `all`: full shared-memory summary
   */
  readonly memoryScope?: 'dependencies' | 'all'
  result?: string
  readonly createdAt: Date
  updatedAt: Date
@@ -43,6 +43,7 @@ function createMockAdapter(responses: string[]): LLMAdapter {
 */
let mockAdapterResponses: string[] = []
let capturedChatOptions: LLMChatOptions[] = []
let capturedPrompts: string[] = []

vi.mock('../src/llm/adapter.js', () => ({
  createAdapter: async () => {

@@ -51,6 +52,12 @@ vi.mock('../src/llm/adapter.js', () => ({
      name: 'mock',
      async chat(_msgs: LLMMessage[], options: LLMChatOptions): Promise<LLMResponse> {
        capturedChatOptions.push(options)
        const lastUser = [..._msgs].reverse().find((m) => m.role === 'user')
        const prompt = (lastUser?.content ?? [])
          .filter((b): b is { type: 'text'; text: string } => b.type === 'text')
          .map((b) => b.text)
          .join('\n')
        capturedPrompts.push(prompt)
        const text = mockAdapterResponses[callIndex] ?? 'default mock response'
        callIndex++
        return {

@@ -97,6 +104,7 @@ describe('OpenMultiAgent', () => {
  beforeEach(() => {
    mockAdapterResponses = []
    capturedChatOptions = []
    capturedPrompts = []
  })

  describe('createTeam', () => {

@@ -198,6 +206,67 @@ describe('OpenMultiAgent', () => {

      expect(result.success).toBe(true)
    })

    it('uses a clean slate for tasks without dependencies', async () => {
      mockAdapterResponses = ['alpha done', 'beta done']

      const oma = new OpenMultiAgent({ defaultModel: 'mock-model' })
      const team = oma.createTeam('t', teamCfg())

      await oma.runTasks(team, [
        { title: 'Independent A', description: 'Do independent A', assignee: 'worker-a' },
        { title: 'Independent B', description: 'Do independent B', assignee: 'worker-b' },
      ])

      const workerPrompts = capturedPrompts.slice(0, 2)
      expect(workerPrompts[0]).toContain('# Task: Independent A')
      expect(workerPrompts[1]).toContain('# Task: Independent B')
      expect(workerPrompts[0]).not.toContain('## Shared Team Memory')
      expect(workerPrompts[1]).not.toContain('## Shared Team Memory')
      expect(workerPrompts[0]).not.toContain('## Context from prerequisite tasks')
      expect(workerPrompts[1]).not.toContain('## Context from prerequisite tasks')
    })

    it('injects only dependency results into dependent task prompts', async () => {
      mockAdapterResponses = ['first output', 'second output']

      const oma = new OpenMultiAgent({ defaultModel: 'mock-model' })
      const team = oma.createTeam('t', teamCfg())

      await oma.runTasks(team, [
        { title: 'First', description: 'Produce first', assignee: 'worker-a' },
        { title: 'Second', description: 'Use first', assignee: 'worker-b', dependsOn: ['First'] },
      ])

      const secondPrompt = capturedPrompts[1] ?? ''
      expect(secondPrompt).toContain('## Context from prerequisite tasks')
      expect(secondPrompt).toContain('### First (by worker-a)')
      expect(secondPrompt).toContain('first output')
      expect(secondPrompt).not.toContain('## Shared Team Memory')
    })

    it('supports memoryScope all opt-in for full shared memory visibility', async () => {
      mockAdapterResponses = ['writer output', 'reader output']

      const oma = new OpenMultiAgent({ defaultModel: 'mock-model' })
      const team = oma.createTeam('t', teamCfg())

      await oma.runTasks(team, [
        { title: 'Write', description: 'Write something', assignee: 'worker-a' },
        {
          title: 'Read all',
          description: 'Read everything',
          assignee: 'worker-b',
          memoryScope: 'all',
          dependsOn: ['Write'],
        },
      ])

      const secondPrompt = capturedPrompts[1] ?? ''
      expect(secondPrompt).toContain('## Shared Team Memory')
      expect(secondPrompt).toContain('task:')
      expect(secondPrompt).not.toContain('## Context from prerequisite tasks')
    })
  })

  describe('runTeam', () => {
@@ -107,6 +107,19 @@ describe('SharedMemory', () => {
    expect(summary).toContain('…')
  })

  it('filters summary to only requested task IDs', async () => {
    const mem = new SharedMemory()
    await mem.write('alice', 'task:t1:result', 'output 1')
    await mem.write('bob', 'task:t2:result', 'output 2')
    await mem.write('alice', 'notes', 'not a task result')

    const summary = await mem.getSummary({ taskIds: ['t2'] })
    expect(summary).toContain('### bob')
    expect(summary).toContain('task:t2:result: output 2')
    expect(summary).not.toContain('task:t1:result: output 1')
    expect(summary).not.toContain('notes: not a task result')
  })

  // -------------------------------------------------------------------------
  // listAll
  // -------------------------------------------------------------------------
@@ -27,6 +27,7 @@ describe('TaskQueue', () => {
    q.add(task('a'))
    expect(q.list()).toHaveLength(1)
    expect(q.list()[0].id).toBe('a')
    expect(q.get('a')?.title).toBe('a')
  })

  it('fires task:ready for a task with no dependencies', () => {