Compare commits

3 Commits: 4d284a1013 ... 926b88b4bd

| Author | SHA1 | Date |
|---|---|---|
| | 926b88b4bd | |
| | 06cc415ddf | |
| | fb6051146f | |

README.md · 122
@ -1,8 +1,10 @@

# Open Multi-Agent

TypeScript framework for multi-agent orchestration. One `runTeam()` call from goal to result — the framework decomposes it into tasks, resolves dependencies, and runs agents in parallel.

The lightweight multi-agent orchestration engine for TypeScript. Three runtime dependencies, zero config, goal to result in one `runTeam()` call.

3 runtime dependencies · 33 source files · Deploys anywhere Node.js runs · Mentioned in [Latent Space](https://www.latent.space/p/ainews-a-quiet-april-fools) AI News

CrewAI is Python. LangGraph makes you draw the graph by hand. `open-multi-agent` is the `npm install` you drop into an existing Node.js backend when you need a team of agents to work on a goal together. Nothing more, nothing less.

3 runtime dependencies · 35 source files · Deploys anywhere Node.js runs · Mentioned in [Latent Space](https://www.latent.space/p/ainews-a-quiet-april-fools) AI News (top AI engineering newsletter, 170k+ subscribers)

[](https://github.com/JackChen-me/open-multi-agent/stargazers)
[](./LICENSE)
@ -11,19 +13,51 @@ TypeScript framework for multi-agent orchestration. One `runTeam()` call from go

**English** | [中文](./README_zh.md)

## Why Open Multi-Agent?

## What you actually get

- **Goal In, Result Out** — `runTeam(team, "Build a REST API")`. A coordinator agent auto-decomposes the goal into a task DAG with dependencies and assignees, runs independent tasks in parallel, and synthesizes the final output. No manual task definitions or graph wiring required.
- **TypeScript-Native** — Built for the Node.js ecosystem. `npm install`, import, run. No Python runtime, no subprocess bridge, no sidecar services. Embed in Express, Next.js, serverless functions, or CI/CD pipelines.
- **Auditable and Lightweight** — 3 runtime dependencies (`@anthropic-ai/sdk`, `openai`, `zod`). 33 source files. The entire codebase is readable in an afternoon.
- **Model Agnostic** — Claude, GPT, Gemma 4, and local models (Ollama, vLLM, LM Studio, llama.cpp server) in the same team. Swap models per agent via `baseURL`.
- **Multi-Agent Collaboration** — Agents with different roles, tools, and models collaborate through a message bus and shared memory.
- **Structured Output** — Add `outputSchema` (Zod) to any agent. Output is parsed as JSON, validated, and auto-retried once on failure. Access typed results via `result.structured`.
- **Task Retry** — Set `maxRetries` on tasks for automatic retry with exponential backoff. Failed attempts accumulate token usage for accurate billing.
- **Human-in-the-Loop** — Optional `onApproval` callback on `runTasks()`. After each batch of tasks completes, your callback decides whether to proceed or abort remaining work.
- **Lifecycle Hooks** — `beforeRun` / `afterRun` on `AgentConfig`. Intercept the prompt before execution or post-process results after. Throw from either hook to abort.
- **Loop Detection** — `loopDetection` on `AgentConfig` catches stuck agents repeating the same tool calls or text output. Configurable action: warn (default), terminate, or custom callback.
- **Observability** — Optional `onTrace` callback emits structured spans for every LLM call, tool execution, task, and agent run — with timing, token usage, and a shared `runId` for correlation. Zero overhead when not subscribed, zero extra dependencies.

- **Goal to result in one call.** `runTeam(team, "Build a REST API")` kicks off a coordinator agent that decomposes the goal into a task DAG, resolves dependencies, runs independent tasks in parallel, and synthesizes the final output. No graph to draw, no tasks to wire up.
- **TypeScript-native, three runtime dependencies.** `@anthropic-ai/sdk`, `openai`, `zod`. That is the whole runtime. Embed in Express, Next.js, serverless functions, or CI/CD pipelines. No Python runtime, no subprocess bridge, no cloud sidecar.
- **Multi-model teams.** Claude, GPT, Gemini, Grok, Copilot, or any OpenAI-compatible local model (Ollama, vLLM, LM Studio, llama.cpp) in the same team. Run the architect on Opus 4.6, the developer on GPT-5.4, the reviewer on local Gemma 4, all in one `runTeam()` call. Gemini ships as an optional peer dependency: `npm install @google/genai` to enable.

Other features (structured output, task retry, human-in-the-loop, lifecycle hooks, loop detection, observability) live below the fold and in [`examples/`](./examples/).
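The structured-output contract described above (parse the model's JSON, validate, retry once on failure) can be sketched in plain TypeScript. This is an illustration, not the framework's internals: `runOnce` stands in for an LLM call, and the hand-rolled validator stands in for a Zod schema.

```typescript
// Sketch of "parse, validate, retry once" (hypothetical names, not the real internals).
type Validator<T> = (value: unknown) => T // throws when the value does not match

function runWithSchema<T>(runOnce: () => string, validate: Validator<T>, retries = 1): T {
  let lastError: unknown
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return validate(JSON.parse(runOnce())) // parse raw model output, then validate it
    } catch (err) {
      lastError = err // remember the failure and try again
    }
  }
  throw lastError
}

// Demo: the first "model call" returns garbage, the retry returns valid JSON.
let calls = 0
const result = runWithSchema(
  () => (++calls === 1 ? 'oops' : '{"tasks": 3}'),
  (v) => {
    const o = v as { tasks?: unknown }
    if (typeof o.tasks !== 'number') throw new Error('invalid shape')
    return o as { tasks: number }
  },
)
console.log(result.tasks, calls) // 3 2
```

In the real API you attach a Zod schema via `outputSchema` and read `result.structured`; the loop above is only the general shape of that behavior.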

## Philosophy: what we build, what we don't

Our goal is to be the simplest multi-agent framework for TypeScript. Simplicity does not mean closed. We believe the long-term value of a framework is the size of the network it connects to, not its feature checklist.

**We build:**

- A coordinator that decomposes a goal into a task DAG.
- A task queue that runs independent tasks in parallel and cascades failures to dependents.
- A shared memory and message bus so agents can see each other's output.
- Multi-model teams where each agent can use a different LLM provider.
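
The coordinator-plus-queue pair above reduces to one loop: run every task whose dependencies are satisfied, in parallel, until nothing is left. A dependency-free sketch of that scheduling (the `Task` shape here is hypothetical, not the framework's type):

```typescript
interface Task { id: string; deps: string[] }

// Group tasks into parallel batches: each batch holds every task whose deps are all done.
function toBatches(tasks: Task[]): string[][] {
  const done = new Set<string>()
  let pending = [...tasks]
  const batches: string[][] = []
  while (pending.length > 0) {
    const ready = pending.filter((t) => t.deps.every((d) => done.has(d)))
    if (ready.length === 0) throw new Error('dependency cycle')
    batches.push(ready.map((t) => t.id))
    for (const t of ready) done.add(t.id)
    pending = pending.filter((t) => !done.has(t.id))
  }
  return batches
}

const plan = toBatches([
  { id: 'design', deps: [] },
  { id: 'implement', deps: ['design'] },
  { id: 'test', deps: ['implement'] },
  { id: 'review', deps: ['implement'] },
])
console.log(plan) // three batches: design, then implement, then test + review together
```

A failed task simply never enters `done`, so everything depending on it stays blocked, which is the "cascades failures to dependents" behavior.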

**We don't build:**

- **Agent handoffs.** If agent A needs to transfer mid-conversation to agent B, use [OpenAI Agents SDK](https://github.com/openai/openai-agents-python). In our model, each agent owns one task end-to-end, with no mid-conversation transfers.
- **State persistence / checkpointing.** Not planned for now. Adding a storage backend would break the three-dependency promise, and our workflows run in seconds to minutes, not hours. If real usage shifts toward long-running workflows, we will revisit.

**Tracking:**

- **MCP support.** Next up, see [#86](https://github.com/JackChen-me/open-multi-agent/issues/86).
- **A2A protocol.** Watching, will move when production adoption is real.

See [`DECISIONS.md`](./DECISIONS.md) for the full rationale.

## How is this different from X?

**vs. [LangGraph JS](https://github.com/langchain-ai/langgraphjs).** LangGraph is declarative graph orchestration: you define nodes, edges, and conditional routing, then `compile()` and `invoke()`. `open-multi-agent` is goal-driven: you declare a team and a goal, and a coordinator decomposes it into a task DAG at runtime. LangGraph gives you total control of topology (great for fixed production workflows). This gives you less typing and faster iteration (great for exploratory multi-agent work). LangGraph also has mature checkpointing; we do not.

**vs. [CrewAI](https://github.com/crewAIInc/crewAI).** CrewAI is the mature Python choice. If your stack is Python, use CrewAI. `open-multi-agent` is TypeScript-native: three runtime dependencies, embeds directly in Node.js without a subprocess bridge. Roughly comparable capability on the orchestration side. Choose on language fit.

**vs. [Vercel AI SDK](https://github.com/vercel/ai).** AI SDK is the LLM call layer: a unified TypeScript client for 60+ providers with streaming, tool calls, and structured outputs. It does not orchestrate multi-agent teams. `open-multi-agent` sits on top when you need that. They compose: use AI SDK for single-agent work, reach for this when you need a team.

## Used by

`open-multi-agent` is a new project (launched 2026-04-01, MIT, 5,500+ stars). The ecosystem is still forming, so the list below is short and honest:

- **[temodar-agent](https://github.com/xeloxa/temodar-agent)** (~50 stars). WordPress security analysis platform by [Ali Sünbül](https://github.com/xeloxa). Uses our built-in tools (`bash`, `file_*`, `grep`) directly in its Docker runtime. Confirmed production use.
- **[rentech-quant-platform](https://github.com/rookiecoderasz/rentech-quant-platform).** Multi-agent quant trading research platform. Five pipelines plus MCP integrations, built on top of `open-multi-agent`. Early signal, very new.
- **Cybersecurity SOC (home lab).** A private setup running Qwen 2.5 + DeepSeek Coder entirely offline via Ollama, building an autonomous SOC pipeline on Wazuh + Proxmox. Early user, not yet public.

Using `open-multi-agent` in production or a side project? [Open a discussion](https://github.com/JackChen-me/open-multi-agent/discussions) and we will list it here.

## Quick Start

@ -38,6 +72,7 @@ Set the API key for your provider. Local models via Ollama require no API key

- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GEMINI_API_KEY`
- `XAI_API_KEY` (for Grok)
- `GITHUB_TOKEN` (for Copilot)

Three agents, one goal — the framework handles the rest:

@ -53,19 +88,8 @@ const architect: AgentConfig = {
  tools: ['file_write'],
}

const developer: AgentConfig = {
  name: 'developer',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You implement what the architect designs.',
  tools: ['bash', 'file_read', 'file_write', 'file_edit'],
}

const reviewer: AgentConfig = {
  name: 'reviewer',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You review code for correctness and clarity.',
  tools: ['file_read', 'grep'],
}

const developer: AgentConfig = { /* same shape, tools: ['bash', 'file_read', 'file_write', 'file_edit'] */ }
const reviewer: AgentConfig = { /* same shape, tools: ['file_read', 'grep'] */ }

const orchestrator = new OpenMultiAgent({
  defaultModel: 'claude-sonnet-4-6',

@ -94,8 +118,8 @@ task_complete architect
task_start developer
task_start developer // independent tasks run in parallel
task_complete developer
task_start reviewer // unblocked after implementation
task_complete developer
task_start reviewer // unblocked after implementation
task_complete reviewer
agent_complete coordinator // synthesizes final result
Success: true

@ -110,29 +134,18 @@ Tokens: 12847 output tokens

| Auto-orchestrated team | `runTeam()` | Give a goal, framework plans and executes |
| Explicit pipeline | `runTasks()` | You define the task graph and assignments |

For MapReduce-style fan-out without task dependencies, use `AgentPool.runParallel()` directly. See [example 07](examples/07-fan-out-aggregate.ts).
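
That fan-out pattern is, at its core, `Promise.all` over independent runs followed by an aggregation step. A sketch with a stub in place of a real agent call (`fanOut` and `ask` are illustrative names, not the library's API):

```typescript
// MapReduce-style fan-out: run all workers concurrently, then reduce their outputs.
async function fanOut<T, R>(
  inputs: T[],
  worker: (input: T) => Promise<R>,
  reduce: (outputs: R[]) => R,
): Promise<R> {
  const outputs = await Promise.all(inputs.map(worker)) // independent work, fully parallel
  return reduce(outputs)
}

// Stub "agent": in real use this would be one agent run per topic.
const ask = async (topic: string) => `${topic}: analyzed`

const report = await fanOut(['market', 'tech', 'risk'], ask, (parts) => parts.join(' | '))
console.log(report) // market: analyzed | tech: analyzed | risk: analyzed
```

`Promise.all` preserves input order, so the aggregation step sees results in the same order as the inputs even though completion order varies.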

## Examples

All examples are runnable scripts in [`examples/`](./examples/). Run any of them with `npx tsx`:

15 runnable scripts in [`examples/`](./examples/). Start with these four:

```bash
npx tsx examples/01-single-agent.ts
```

- [02 — Team Collaboration](examples/02-team-collaboration.ts): `runTeam()` coordinator pattern.
- [06 — Local Model](examples/06-local-model.ts): Ollama and Claude in one pipeline via `baseURL`.
- [09 — Structured Output](examples/09-structured-output.ts): any agent returns Zod-validated JSON.
- [11 — Trace Observability](examples/11-trace-observability.ts): `onTrace` spans for LLM calls, tools, and tasks.

| Example | What it shows |
|---------|---------------|
| [01 — Single Agent](examples/01-single-agent.ts) | `runAgent()` one-shot, `stream()` streaming, `prompt()` multi-turn |
| [02 — Team Collaboration](examples/02-team-collaboration.ts) | `runTeam()` auto-orchestration with coordinator pattern |
| [03 — Task Pipeline](examples/03-task-pipeline.ts) | `runTasks()` explicit dependency graph (design → implement → test + review) |
| [04 — Multi-Model Team](examples/04-multi-model-team.ts) | `defineTool()` custom tools, mixed Anthropic + OpenAI providers, `AgentPool` |
| [05 — Copilot](examples/05-copilot-test.ts) | GitHub Copilot as an LLM provider |
| [06 — Local Model](examples/06-local-model.ts) | Ollama + Claude in one pipeline via `baseURL` (works with vLLM, LM Studio, etc.) |
| [07 — Fan-Out / Aggregate](examples/07-fan-out-aggregate.ts) | `runParallel()` MapReduce — 3 analysts in parallel, then synthesize |
| [08 — Gemma 4 Local](examples/08-gemma4-local.ts) | `runTasks()` + `runTeam()` with local Gemma 4 via Ollama — zero API cost |
| [09 — Structured Output](examples/09-structured-output.ts) | `outputSchema` (Zod) on AgentConfig — validated JSON via `result.structured` |
| [10 — Task Retry](examples/10-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` with `task_retry` progress events |
| [11 — Trace Observability](examples/11-trace-observability.ts) | `onTrace` callback — structured spans for LLM calls, tools, tasks, and agents |
| [12 — Grok](examples/12-grok.ts) | Same as example 02 (`runTeam()` collaboration) with Grok (`XAI_API_KEY`) |
| [13 — Gemini](examples/13-gemini.ts) | Gemini adapter smoke test with `gemini-2.5-flash` (`GEMINI_API_KEY`) |

Run any with `npx tsx examples/02-team-collaboration.ts`.

## Architecture

@ -247,6 +260,8 @@ Tools added via `agent.addTool()` are always available regardless of filtering.

| Ollama / vLLM / LM Studio | `provider: 'openai'` + `baseURL` | — | Verified |
| llama.cpp server | `provider: 'openai'` + `baseURL` | — | Verified |

Gemini requires `npm install @google/genai` (optional peer dependency).

Verified local models with tool-calling: **Gemma 4** (see [example 08](examples/08-gemma4-local.ts)).

Any OpenAI-compatible API should work via `provider: 'openai'` + `baseURL` (DeepSeek, Groq, Mistral, Qwen, MiniMax, etc.). **Grok now has first-class support** via `provider: 'grok'`.
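
As a concrete sketch, pointing an agent at a local Ollama server only changes two fields relative to the Quick Start configs. The model name is an assumption (use whatever tag your server serves); `http://localhost:11434/v1` is Ollama's default OpenAI-compatible endpoint.

```typescript
import type { AgentConfig } from '../src/types.js'

// Any OpenAI-compatible server works the same way: keep `provider: 'openai'`
// and point `baseURL` at the server.
const localAnalyst: AgentConfig = {
  name: 'local-analyst',
  provider: 'openai',
  model: 'gemma4',                      // assumption: the model tag your local server exposes
  baseURL: 'http://localhost:11434/v1', // Ollama's default OpenAI-compatible endpoint
  systemPrompt: 'You analyze logs and answer briefly.',
  tools: ['file_read', 'grep'],
}
```

For DeepSeek, Groq, Mistral, and similar hosted services, swap `baseURL` for the provider's endpoint and set the matching API key.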

@ -295,27 +310,22 @@ const grokAgent: AgentConfig = {

Issues, feature requests, and PRs are welcome. Some areas where contributions would be especially valuable:

- **Provider integrations** — Verify and document OpenAI-compatible providers (DeepSeek, Groq, Qwen, MiniMax, etc.) via `baseURL`. See [#25](https://github.com/JackChen-me/open-multi-agent/issues/25). For providers that are NOT OpenAI-compatible (e.g. Gemini), a new `LLMAdapter` implementation is welcome — the interface requires just two methods: `chat()` and `stream()`.
- **Examples** — Real-world workflows and use cases.
- **Documentation** — Guides, tutorials, and API docs.
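
For the non-OpenAI-compatible case, the adapter surface mentioned in the first bullet is small enough to sketch. Only the `chat()` / `stream()` pair comes from the project docs; every type name below is a hypothetical stand-in for whatever the repo's `src/types` actually declares.

```typescript
// Hypothetical shapes, illustrating the two-method adapter contract.
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string }
interface ChatResult { text: string }

interface LLMAdapter {
  chat(messages: ChatMessage[]): Promise<ChatResult>
  stream(messages: ChatMessage[]): AsyncIterable<string>
}

// A trivial echo adapter, e.g. for tests or an offline dry run.
class EchoAdapter implements LLMAdapter {
  async chat(messages: ChatMessage[]): Promise<ChatResult> {
    // echo the most recent message back as the "model" reply
    return { text: messages[messages.length - 1]?.content ?? '' }
  }
  async *stream(messages: ChatMessage[]): AsyncGenerator<string> {
    yield (await this.chat(messages)).text // single-chunk stream for simplicity
  }
}

const echo = new EchoAdapter()
console.log((await echo.chat([{ role: 'user', content: 'ping' }])).text) // ping
```

A real provider adapter would map these messages onto the provider's wire format inside `chat()` and forward incremental chunks from `stream()`.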

## Author

> JackChen — Ex PM (¥100M+ revenue), now indie builder. Follow on [X](https://x.com/JackChen_x) for AI Agent insights.

## Contributors

<a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&v=20260408" />
  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&max=20&v=20260411" />
</a>

## Star History

<a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark&v=20260408" />
    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260408" />
    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260408" />
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark" />
    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
  </picture>
</a>

README_zh.md · 173
@ -1,8 +1,10 @@
|
|||
# Open Multi-Agent
|
||||
|
||||
TypeScript 多智能体编排框架。一次 `runTeam()` 调用从目标到结果——框架自动拆解任务、解析依赖、并行执行。
|
||||
面向 TypeScript 的轻量多智能体编排引擎。3 个运行时依赖,零配置,一次 `runTeam()` 调用从目标到结果。
|
||||
|
||||
3 个运行时依赖 · 33 个源文件 · Node.js 能跑的地方都能部署 · 被 [Latent Space](https://www.latent.space/p/ainews-a-quiet-april-fools) AI News 提及(AI 工程领域头部 Newsletter,17 万+订阅者)
|
||||
CrewAI 是 Python。LangGraph 需要你自己画图。`open-multi-agent` 是你现有 Node.js 后端里 `npm install` 一下就能用的那一层。当你需要让一支 agent 团队围绕一个目标协作时,只提供这个,不多不少。
|
||||
|
||||
3 个运行时依赖 · 35 个源文件 · Node.js 能跑的地方都能部署 · 被 [Latent Space](https://www.latent.space/p/ainews-a-quiet-april-fools) AI News 提及(AI 工程领域头部 Newsletter,17 万+订阅者)
|
||||
|
||||
[](https://github.com/JackChen-me/open-multi-agent/stargazers)
|
||||
[](./LICENSE)
|
||||
|
|
@ -11,19 +13,51 @@ TypeScript 多智能体编排框架。一次 `runTeam()` 调用从目标到结
|
|||
|
||||
[English](./README.md) | **中文**
|
||||
|
||||
## 为什么选择 Open Multi-Agent?
|
||||
## 你真正得到的三件事
|
||||
|
||||
- **目标进,结果出** — `runTeam(team, "构建一个 REST API")`。协调者智能体自动将目标拆解为带依赖关系的任务图,分配给对应智能体,独立任务并行执行,最终合成输出。无需手动定义任务或编排流程图。
|
||||
- **TypeScript 原生** — 为 Node.js 生态而生。`npm install` 即用,无需 Python 运行时、无子进程桥接、无额外基础设施。可嵌入 Express、Next.js、Serverless 函数或 CI/CD 流水线。
|
||||
- **可审计、极轻量** — 3 个运行时依赖(`@anthropic-ai/sdk`、`openai`、`zod`),33 个源文件。一个下午就能读完全部源码。
|
||||
- **模型无关** — Claude、GPT、Gemma 4 和本地模型(Ollama、vLLM、LM Studio、llama.cpp server)可以在同一个团队中使用。通过 `baseURL` 即可接入任何 OpenAI 兼容服务。
|
||||
- **多智能体协作** — 定义不同角色、工具和模型的智能体,通过消息总线和共享内存协作。
|
||||
- **结构化输出** — 为任意智能体添加 `outputSchema`(Zod),输出自动解析为 JSON 并校验,校验失败自动重试一次。通过 `result.structured` 获取类型化结果。
|
||||
- **任务重试** — 为任务设置 `maxRetries`,失败时自动指数退避重试。所有尝试的 token 用量累计,确保计费准确。
|
||||
- **人机协同** — `runTasks()` 支持可选的 `onApproval` 回调。每批任务完成后,由你的回调决定是否继续执行后续任务。
|
||||
- **生命周期钩子** — `AgentConfig` 上的 `beforeRun` / `afterRun`。在执行前拦截 prompt,或在执行后处理结果。从钩子中 throw 可中止运行。
|
||||
- **循环检测** — `AgentConfig` 上的 `loopDetection` 可检测智能体重复相同工具调用或文本输出的卡死循环。可配置行为:警告(默认)、终止、或自定义回调。
|
||||
- **可观测性** — 可选的 `onTrace` 回调为每次 LLM 调用、工具执行、任务和智能体运行发出结构化 span 事件——包含耗时、token 用量和共享的 `runId` 用于关联追踪。未订阅时零开销,零额外依赖。
|
||||
- **一次调用从目标到结果。** `runTeam(team, "构建一个 REST API")` 启动一个协调者 agent,把目标拆成任务 DAG,解析依赖,独立任务并行执行,最终合成输出。不需要画图,不需要手动连任务。
|
||||
- **TypeScript 原生,3 个运行时依赖。** `@anthropic-ai/sdk`、`openai`、`zod`。这就是全部运行时。可嵌入 Express、Next.js、Serverless 函数或 CI/CD 流水线。没有 Python 运行时,没有子进程桥接,没有云端 sidecar。
|
||||
- **多模型团队。** Claude、GPT、Gemini、Grok、Copilot,或任何 OpenAI 兼容的本地模型(Ollama、vLLM、LM Studio、llama.cpp)可以在同一个团队中使用。让架构师用 Opus 4.6,开发者用 GPT-5.4,评审用本地的 Gemma 4,一次 `runTeam()` 调用全部搞定。Gemini 作为 optional peer dependency 提供:使用前需 `npm install @google/genai`。
|
||||
|
||||
其他能力(结构化输出、任务重试、人机协同、生命周期钩子、循环检测、可观测性)在下方章节和 [`examples/`](./examples/) 里。
|
||||
|
||||
## 哲学:我们做什么,不做什么
|
||||
|
||||
我们的目标是做 TypeScript 生态里最简单的多智能体框架。简单不等于封闭。框架的长期价值不在于功能清单的长度,而在于它连接的网络有多大。
|
||||
|
||||
**我们做:**
|
||||
- 一个协调者,把目标拆成任务 DAG。
|
||||
- 一个任务队列,独立任务并行执行,失败级联到下游。
|
||||
- 共享内存和消息总线,让 agent 之间能看到彼此的输出。
|
||||
- 多模型团队,每个 agent 可以用不同的 LLM provider。
|
||||
|
||||
**我们不做:**
|
||||
- **Agent Handoffs。** 如果 agent A 需要把对话中途交接给 agent B,去用 [OpenAI Agents SDK](https://github.com/openai/openai-agents-python)。在我们的模型里,每个 agent 完整负责自己的任务,不会中途交接。
|
||||
- **状态持久化 / 检查点。** 短期内不做。加存储后端会打破 3 个依赖的承诺,而且我们的工作流执行时间是秒到分钟级,不是小时级。如果真实使用场景转向长时间工作流,我们会重新评估。
|
||||
|
||||
**正在跟踪:**
|
||||
- **MCP 支持。** 下一个要做的,见 [#86](https://github.com/JackChen-me/open-multi-agent/issues/86)。
|
||||
- **A2A 协议。** 观望中,等生产级采纳到位再行动。
|
||||
|
||||
完整理由见 [`DECISIONS.md`](./DECISIONS.md)。
|
||||
|
||||
## 和 X 有什么不同?
|
||||
|
||||
**vs. [LangGraph JS](https://github.com/langchain-ai/langgraphjs)。** LangGraph 是声明式图编排:你定义节点、边、条件路由,然后 `compile()` + `invoke()`。`open-multi-agent` 是目标驱动:你声明团队和目标,协调者在运行时把目标拆成任务 DAG。LangGraph 给你完全的拓扑控制(适合固定的生产工作流)。这个框架代码更少、迭代更快(适合探索型多智能体协作)。LangGraph 还有成熟的检查点能力,我们没有。
|
||||
|
||||
**vs. [CrewAI](https://github.com/crewAIInc/crewAI)。** CrewAI 是成熟的 Python 选择。如果你的技术栈是 Python,用 CrewAI。`open-multi-agent` 是 TypeScript 原生:3 个运行时依赖,直接嵌入 Node.js,不需要子进程桥接。编排能力大致相当,按语言契合度选。
|
||||
|
||||
**vs. [Vercel AI SDK](https://github.com/vercel/ai)。** AI SDK 是 LLM 调用层:统一的 TypeScript 客户端,支持 60+ provider,带流式、tool calls、结构化输出。它不做多智能体编排。`open-multi-agent` 需要多 agent 时叠在它之上。两者互补:单 agent 用 AI SDK,需要团队用这个。
|
||||
|
||||
## 谁在用
|
||||
|
||||
`open-multi-agent` 是一个新项目(2026-04-01 发布,MIT 许可,5,500+ stars)。生态还在成形,下面这份列表很短,但都真实:
|
||||
|
||||
- **[temodar-agent](https://github.com/xeloxa/temodar-agent)**(约 50 stars)。WordPress 安全分析平台,作者 [Ali Sünbül](https://github.com/xeloxa)。在 Docker runtime 里直接使用我们的内置工具(`bash`、`file_*`、`grep`)。已确认生产环境使用。
|
||||
- **[rentech-quant-platform](https://github.com/rookiecoderasz/rentech-quant-platform)。** 多智能体量化交易研究平台,5 条管线 + MCP 集成,基于 `open-multi-agent` 构建。早期信号,项目非常新。
|
||||
- **家用服务器 Cybersecurity SOC。** 本地完全离线运行 Qwen 2.5 + DeepSeek Coder(通过 Ollama),在 Wazuh + Proxmox 上构建自主 SOC 流水线。早期用户,未公开。
|
||||
|
||||
你在生产环境或 side project 里用 `open-multi-agent` 吗?[开一个 Discussion](https://github.com/JackChen-me/open-multi-agent/discussions),我们会把你列上来。
|
||||
|
||||
## 快速开始
|
||||
|
||||
|
|
@ -54,19 +88,8 @@ const architect: AgentConfig = {
|
|||
tools: ['file_write'],
|
||||
}
|
||||
|
||||
const developer: AgentConfig = {
|
||||
name: 'developer',
|
||||
model: 'claude-sonnet-4-6',
|
||||
systemPrompt: 'You implement what the architect designs.',
|
||||
tools: ['bash', 'file_read', 'file_write', 'file_edit'],
|
||||
}
|
||||
|
||||
const reviewer: AgentConfig = {
|
||||
name: 'reviewer',
|
||||
model: 'claude-sonnet-4-6',
|
||||
systemPrompt: 'You review code for correctness and clarity.',
|
||||
tools: ['file_read', 'grep'],
|
||||
}
|
||||
const developer: AgentConfig = { /* 同样结构,tools: ['bash', 'file_read', 'file_write', 'file_edit'] */ }
|
||||
const reviewer: AgentConfig = { /* 同样结构,tools: ['file_read', 'grep'] */ }
|
||||
|
||||
const orchestrator = new OpenMultiAgent({
|
||||
defaultModel: 'claude-sonnet-4-6',
|
||||
|
|
@ -82,8 +105,8 @@ const team = orchestrator.createTeam('api-team', {
|
|||
// 描述一个目标——框架将其拆解为任务并编排执行
|
||||
const result = await orchestrator.runTeam(team, 'Create a REST API for a todo list in /tmp/todo-api/')
|
||||
|
||||
console.log(`成功: ${result.success}`)
|
||||
console.log(`Token 用量: ${result.totalTokenUsage.output_tokens} output tokens`)
|
||||
console.log(`Success: ${result.success}`)
|
||||
console.log(`Tokens: ${result.totalTokenUsage.output_tokens} output tokens`)
|
||||
```
|
||||
|
||||
执行过程:
|
||||
|
|
@ -95,8 +118,8 @@ task_complete architect
|
|||
task_start developer
|
||||
task_start developer // 无依赖的任务并行执行
|
||||
task_complete developer
|
||||
task_start reviewer // 实现完成后自动解锁
|
||||
task_complete developer
|
||||
task_start reviewer // 实现完成后自动解锁
|
||||
task_complete reviewer
|
||||
agent_complete coordinator // 综合所有结果
|
||||
Success: true
|
||||
|
|
@ -111,29 +134,18 @@ Tokens: 12847 output tokens
|
|||
| 自动编排团队 | `runTeam()` | 给一个目标,框架自动规划和执行 |
|
||||
| 显式任务管线 | `runTasks()` | 你自己定义任务图和分配 |
|
||||
|
||||
如果需要 MapReduce 风格的扇出而不涉及任务依赖,直接使用 `AgentPool.runParallel()`。参见[示例 07](examples/07-fan-out-aggregate.ts)。
|
||||
|
||||
## 示例
|
||||
|
||||
所有示例都是可运行脚本,位于 [`examples/`](./examples/) 目录。使用 `npx tsx` 运行:
|
||||
[`examples/`](./examples/) 里有 15 个可运行脚本。推荐从这 4 个开始:
|
||||
|
||||
```bash
|
||||
npx tsx examples/01-single-agent.ts
|
||||
```
|
||||
- [02 — 团队协作](examples/02-team-collaboration.ts):`runTeam()` 协调者模式。
|
||||
- [06 — 本地模型](examples/06-local-model.ts):通过 `baseURL` 把 Ollama 和 Claude 放在同一条管线。
|
||||
- [09 — 结构化输出](examples/09-structured-output.ts):任意 agent 产出 Zod 校验过的 JSON。
|
||||
- [11 — 可观测性](examples/11-trace-observability.ts):`onTrace` 回调,为 LLM 调用、工具、任务发出结构化 span。
|
||||
|
||||
| 示例 | 展示内容 |
|
||||
|------|----------|
|
||||
| [01 — 单智能体](examples/01-single-agent.ts) | `runAgent()` 单次调用、`stream()` 流式输出、`prompt()` 多轮对话 |
|
||||
| [02 — 团队协作](examples/02-team-collaboration.ts) | `runTeam()` 自动编排 + 协调者模式 |
|
||||
| [03 — 任务流水线](examples/03-task-pipeline.ts) | `runTasks()` 显式依赖图(设计 → 实现 → 测试 + 评审) |
|
||||
| [04 — 多模型团队](examples/04-multi-model-team.ts) | `defineTool()` 自定义工具、Anthropic + OpenAI 混合、`AgentPool` |
|
||||
| [05 — Copilot](examples/05-copilot-test.ts) | GitHub Copilot 作为 LLM 提供者 |
|
||||
| [06 — 本地模型](examples/06-local-model.ts) | Ollama + Claude 混合流水线,通过 `baseURL` 接入(兼容 vLLM、LM Studio 等) |
|
||||
| [07 — 扇出聚合](examples/07-fan-out-aggregate.ts) | `runParallel()` MapReduce — 3 个分析师并行,然后综合 |
|
||||
| [08 — Gemma 4 本地](examples/08-gemma4-local.ts) | `runTasks()` + `runTeam()` 本地 Gemma 4 via Ollama — 零 API 费用 |
|
||||
| [09 — 结构化输出](examples/09-structured-output.ts) | `outputSchema`(Zod)— 校验 JSON 输出,通过 `result.structured` 获取 |
|
||||
| [10 — 任务重试](examples/10-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` + `task_retry` 进度事件 |
|
||||
| [11 — 可观测性](examples/11-trace-observability.ts) | `onTrace` 回调 — LLM 调用、工具、任务、智能体的结构化 span 事件 |
|
||||
| [12 — Grok](examples/12-grok.ts) | 同示例 02(`runTeam()` 团队协作),使用 Grok(`XAI_API_KEY`) |
|
||||
| [13 — Gemini](examples/13-gemini.ts) | Gemini 适配器测试,使用 `gemini-2.5-flash`(`GEMINI_API_KEY`) |
|
||||
用 `npx tsx examples/02-team-collaboration.ts` 运行任意一个。
|
||||
|
||||
## 架构
|
||||
|
||||
|
|
@ -188,6 +200,54 @@ npx tsx examples/01-single-agent.ts
|
|||
| `file_edit` | 通过精确字符串匹配编辑文件。 |
|
||||
| `grep` | 使用正则表达式搜索文件内容。优先使用 ripgrep,回退到 Node.js 实现。 |
|
||||
|
||||
## 工具配置
|
||||
|
||||
可以通过预设、白名单和黑名单对 agent 的工具访问进行精细控制。
|
||||
|
||||
### 工具预设
|
||||
|
||||
为常见场景预定义的工具组合:
|
||||
|
||||
```typescript
|
||||
const readonlyAgent: AgentConfig = {
|
||||
name: 'reader',
|
||||
model: 'claude-sonnet-4-6',
|
||||
toolPreset: 'readonly', // file_read, grep, glob
|
||||
}
|
||||
|
||||
const readwriteAgent: AgentConfig = {
|
||||
name: 'editor',
|
||||
model: 'claude-sonnet-4-6',
|
||||
toolPreset: 'readwrite', // file_read, file_write, file_edit, grep, glob
|
||||
}
|
||||
|
||||
const fullAgent: AgentConfig = {
|
||||
name: 'executor',
|
||||
model: 'claude-sonnet-4-6',
|
||||
toolPreset: 'full', // file_read, file_write, file_edit, grep, glob, bash
|
||||
}
|
||||
```
|
||||
|
||||
### 高级过滤
|
||||
|
||||
将预设与白名单、黑名单组合,实现精确控制:
|
||||
|
||||
```typescript
|
||||
const customAgent: AgentConfig = {
|
||||
name: 'custom',
|
||||
model: 'claude-sonnet-4-6',
|
||||
toolPreset: 'readwrite', // 起点:file_read, file_write, file_edit, grep, glob
|
||||
tools: ['file_read', 'grep'], // 白名单:与预设取交集 = file_read, grep
|
||||
disallowedTools: ['grep'], // 黑名单:再减去 = 只剩 file_read
|
||||
}
|
||||
```
|
||||
|
||||
**解析顺序:** preset → allowlist → denylist → 框架安全护栏。
|
||||
|
||||
### 自定义工具
|
||||
|
||||
通过 `agent.addTool()` 添加的工具始终可用,不受过滤规则影响。
|
||||
|
||||
## 支持的 Provider
|
||||
|
||||
| Provider | 配置 | 环境变量 | 状态 |
|
||||
|
|
@ -200,6 +260,8 @@ npx tsx examples/01-single-agent.ts
|
|||
| Ollama / vLLM / LM Studio | `provider: 'openai'` + `baseURL` | — | 已验证 |
|
||||
| llama.cpp server | `provider: 'openai'` + `baseURL` | — | 已验证 |
|
||||
|
||||
Gemini 需要 `npm install @google/genai`(optional peer dependency)。
|
||||
|
||||
已验证支持 tool-calling 的本地模型:**Gemma 4**(见[示例 08](examples/08-gemma4-local.ts))。
|
||||
|
||||
任何 OpenAI 兼容 API 均可通过 `provider: 'openai'` + `baseURL` 接入(DeepSeek、Groq、Mistral、Qwen、MiniMax 等)。**Grok 现已原生支持**,使用 `provider: 'grok'`。
|
||||
|
|
@ -248,27 +310,22 @@ const grokAgent: AgentConfig = {
|
|||
|
||||
欢迎提 Issue、功能需求和 PR。以下方向的贡献尤其有价值:
|
||||
|
||||
- **Provider 集成** — 验证并文档化 OpenAI 兼容 Provider(DeepSeek、Groq、Qwen、MiniMax 等)通过 `baseURL` 接入。详见 [#25](https://github.com/JackChen-me/open-multi-agent/issues/25)。对于非 OpenAI 兼容的 Provider,欢迎贡献新的 `LLMAdapter` 实现——接口只需两个方法:`chat()` 和 `stream()`。
|
||||
- **示例** — 真实场景的工作流和用例。
|
||||
- **文档** — 指南、教程和 API 文档。
|
||||
|
||||
## 作者
|
||||
|
||||
> JackChen — 前 WPS 产品经理,现独立创业者。关注小红书[「杰克西|硅基杠杆」](https://www.xiaohongshu.com/user/profile/5a1bdc1e4eacab4aa39ea6d6),持续获取我的 AI Agent 观点和思考。
|
||||
|
||||
## 贡献者
|
||||
|
||||
<a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
|
||||
<img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&v=20260408" />
|
||||
<img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&max=20&v=20260411" />
|
||||
</a>
|
||||
|
||||
## Star 趋势
|
||||
|
||||
<a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
|
||||
<picture>
|
||||
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark&v=20260408" />
|
||||
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260408" />
|
||||
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260408" />
|
||||
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark" />
|
||||
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
|
||||
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
|
||||
</picture>
|
||||
</a>

examples/16-agent-handoff.ts (new file)

@ -0,0 +1,64 @@
/**
 * Example 16 — Synchronous agent handoff via `delegate_to_agent`
 *
 * During `runTeam` / `runTasks`, pool agents register the built-in
 * `delegate_to_agent` tool so one specialist can run a sub-prompt on another
 * roster agent and read the answer in the same conversation turn.
 *
 * Whitelist `delegate_to_agent` in `tools` when you want the model to see it;
 * standalone `runAgent()` does not register this tool by default.
 *
 * Run:
 *   npx tsx examples/16-agent-handoff.ts
 *
 * Prerequisites:
 *   ANTHROPIC_API_KEY
 */

import { OpenMultiAgent } from '../src/index.js'
import type { AgentConfig } from '../src/types.js'

const researcher: AgentConfig = {
  name: 'researcher',
  model: 'claude-sonnet-4-6',
  provider: 'anthropic',
  systemPrompt:
    'You answer factual questions briefly. When the user asks for a second opinion ' +
    'from the analyst, use delegate_to_agent to ask the analyst agent, then summarize both views.',
  tools: ['delegate_to_agent'],
  maxTurns: 6,
}

const analyst: AgentConfig = {
  name: 'analyst',
  model: 'claude-sonnet-4-6',
  provider: 'anthropic',
  systemPrompt: 'You give short, skeptical analysis of claims. Push back when evidence is weak.',
  tools: [],
  maxTurns: 4,
}

async function main(): Promise<void> {
  const orchestrator = new OpenMultiAgent({ maxConcurrency: 2 })
  const team = orchestrator.createTeam('handoff-demo', {
    name: 'handoff-demo',
    agents: [researcher, analyst],
    sharedMemory: true,
  })

  const goal =
    'In one paragraph: state a simple fact about photosynthesis. ' +
    'Then ask the analyst (via delegate_to_agent) for a one-sentence critique of overstated claims in popular science. ' +
    'Merge both into a final short answer.'

  const result = await orchestrator.runTeam(team, goal)
  console.log('Success:', result.success)
  for (const [name, ar] of result.agentResults) {
    console.log(`\n--- ${name} ---\n${ar.output.slice(0, 2000)}`)
  }
}

main().catch((err) => {
  console.error(err)
  process.exit(1)
})

@@ -731,6 +731,34 @@
       "dev": true,
       "license": "BSD-3-Clause"
     },
+    "node_modules/@rollup/rollup-android-arm-eabi": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.60.1.tgz",
+      "integrity": "sha512-d6FinEBLdIiK+1uACUttJKfgZREXrF0Qc2SmLII7W2AD8FfiZ9Wjd+rD/iRuf5s5dWrr1GgwXCvPqOuDquOowA==",
+      "cpu": [
+        "arm"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "android"
+      ]
+    },
+    "node_modules/@rollup/rollup-android-arm64": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.60.1.tgz",
+      "integrity": "sha512-YjG/EwIDvvYI1YvYbHvDz/BYHtkY4ygUIXHnTdLhG+hKIQFBiosfWiACWortsKPKU/+dUwQQCKQM3qrDe8c9BA==",
+      "cpu": [
+        "arm64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "android"
+      ]
+    },
     "node_modules/@rollup/rollup-darwin-arm64": {
       "version": "4.60.1",
       "resolved": "https://registry.npmmirror.com/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.60.1.tgz",
@@ -745,6 +773,314 @@
         "darwin"
       ]
     },
+    "node_modules/@rollup/rollup-darwin-x64": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.60.1.tgz",
+      "integrity": "sha512-haZ7hJ1JT4e9hqkoT9R/19XW2QKqjfJVv+i5AGg57S+nLk9lQnJ1F/eZloRO3o9Scy9CM3wQ9l+dkXtcBgN5Ew==",
+      "cpu": [
+        "x64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "darwin"
+      ]
+    },
+    "node_modules/@rollup/rollup-freebsd-arm64": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.60.1.tgz",
+      "integrity": "sha512-czw90wpQq3ZsAVBlinZjAYTKduOjTywlG7fEeWKUA7oCmpA8xdTkxZZlwNJKWqILlq0wehoZcJYfBvOyhPTQ6w==",
+      "cpu": [
+        "arm64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "freebsd"
+      ]
+    },
+    "node_modules/@rollup/rollup-freebsd-x64": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.60.1.tgz",
+      "integrity": "sha512-KVB2rqsxTHuBtfOeySEyzEOB7ltlB/ux38iu2rBQzkjbwRVlkhAGIEDiiYnO2kFOkJp+Z7pUXKyrRRFuFUKt+g==",
+      "cpu": [
+        "x64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "freebsd"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-arm-gnueabihf": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.60.1.tgz",
+      "integrity": "sha512-L+34Qqil+v5uC0zEubW7uByo78WOCIrBvci69E7sFASRl0X7b/MB6Cqd1lky/CtcSVTydWa2WZwFuWexjS5o6g==",
+      "cpu": [
+        "arm"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-arm-musleabihf": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.60.1.tgz",
+      "integrity": "sha512-n83O8rt4v34hgFzlkb1ycniJh7IR5RCIqt6mz1VRJD6pmhRi0CXdmfnLu9dIUS6buzh60IvACM842Ffb3xd6Gg==",
+      "cpu": [
+        "arm"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-arm64-gnu": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.60.1.tgz",
+      "integrity": "sha512-Nql7sTeAzhTAja3QXeAI48+/+GjBJ+QmAH13snn0AJSNL50JsDqotyudHyMbO2RbJkskbMbFJfIJKWA6R1LCJQ==",
+      "cpu": [
+        "arm64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-arm64-musl": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.60.1.tgz",
+      "integrity": "sha512-+pUymDhd0ys9GcKZPPWlFiZ67sTWV5UU6zOJat02M1+PiuSGDziyRuI/pPue3hoUwm2uGfxdL+trT6Z9rxnlMA==",
+      "cpu": [
+        "arm64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-loong64-gnu": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-gnu/-/rollup-linux-loong64-gnu-4.60.1.tgz",
+      "integrity": "sha512-VSvgvQeIcsEvY4bKDHEDWcpW4Yw7BtlKG1GUT4FzBUlEKQK0rWHYBqQt6Fm2taXS+1bXvJT6kICu5ZwqKCnvlQ==",
+      "cpu": [
+        "loong64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-loong64-musl": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-musl/-/rollup-linux-loong64-musl-4.60.1.tgz",
+      "integrity": "sha512-4LqhUomJqwe641gsPp6xLfhqWMbQV04KtPp7/dIp0nzPxAkNY1AbwL5W0MQpcalLYk07vaW9Kp1PBhdpZYYcEw==",
+      "cpu": [
+        "loong64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-ppc64-gnu": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.60.1.tgz",
+      "integrity": "sha512-tLQQ9aPvkBxOc/EUT6j3pyeMD6Hb8QF2BTBnCQWP/uu1lhc9AIrIjKnLYMEroIz/JvtGYgI9dF3AxHZNaEH0rw==",
+      "cpu": [
+        "ppc64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-ppc64-musl": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-musl/-/rollup-linux-ppc64-musl-4.60.1.tgz",
+      "integrity": "sha512-RMxFhJwc9fSXP6PqmAz4cbv3kAyvD1etJFjTx4ONqFP9DkTkXsAMU4v3Vyc5BgzC+anz7nS/9tp4obsKfqkDHg==",
+      "cpu": [
+        "ppc64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-riscv64-gnu": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.60.1.tgz",
+      "integrity": "sha512-QKgFl+Yc1eEk6MmOBfRHYF6lTxiiiV3/z/BRrbSiW2I7AFTXoBFvdMEyglohPj//2mZS4hDOqeB0H1ACh3sBbg==",
+      "cpu": [
+        "riscv64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-riscv64-musl": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.60.1.tgz",
+      "integrity": "sha512-RAjXjP/8c6ZtzatZcA1RaQr6O1TRhzC+adn8YZDnChliZHviqIjmvFwHcxi4JKPSDAt6Uhf/7vqcBzQJy0PDJg==",
+      "cpu": [
+        "riscv64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-s390x-gnu": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.60.1.tgz",
+      "integrity": "sha512-wcuocpaOlaL1COBYiA89O6yfjlp3RwKDeTIA0hM7OpmhR1Bjo9j31G1uQVpDlTvwxGn2nQs65fBFL5UFd76FcQ==",
+      "cpu": [
+        "s390x"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-x64-gnu": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.60.1.tgz",
+      "integrity": "sha512-77PpsFQUCOiZR9+LQEFg9GClyfkNXj1MP6wRnzYs0EeWbPcHs02AXu4xuUbM1zhwn3wqaizle3AEYg5aeoohhg==",
+      "cpu": [
+        "x64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-linux-x64-musl": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.60.1.tgz",
+      "integrity": "sha512-5cIATbk5vynAjqqmyBjlciMJl1+R/CwX9oLk/EyiFXDWd95KpHdrOJT//rnUl4cUcskrd0jCCw3wpZnhIHdD9w==",
+      "cpu": [
+        "x64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "linux"
+      ]
+    },
+    "node_modules/@rollup/rollup-openbsd-x64": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-openbsd-x64/-/rollup-openbsd-x64-4.60.1.tgz",
+      "integrity": "sha512-cl0w09WsCi17mcmWqqglez9Gk8isgeWvoUZ3WiJFYSR3zjBQc2J5/ihSjpl+VLjPqjQ/1hJRcqBfLjssREQILw==",
+      "cpu": [
+        "x64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "openbsd"
+      ]
+    },
+    "node_modules/@rollup/rollup-openharmony-arm64": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-openharmony-arm64/-/rollup-openharmony-arm64-4.60.1.tgz",
+      "integrity": "sha512-4Cv23ZrONRbNtbZa37mLSueXUCtN7MXccChtKpUnQNgF010rjrjfHx3QxkS2PI7LqGT5xXyYs1a7LbzAwT0iCA==",
+      "cpu": [
+        "arm64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "openharmony"
+      ]
+    },
+    "node_modules/@rollup/rollup-win32-arm64-msvc": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.60.1.tgz",
+      "integrity": "sha512-i1okWYkA4FJICtr7KpYzFpRTHgy5jdDbZiWfvny21iIKky5YExiDXP+zbXzm3dUcFpkEeYNHgQ5fuG236JPq0g==",
+      "cpu": [
+        "arm64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "win32"
+      ]
+    },
+    "node_modules/@rollup/rollup-win32-ia32-msvc": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.60.1.tgz",
+      "integrity": "sha512-u09m3CuwLzShA0EYKMNiFgcjjzwqtUMLmuCJLeZWjjOYA3IT2Di09KaxGBTP9xVztWyIWjVdsB2E9goMjZvTQg==",
+      "cpu": [
+        "ia32"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "win32"
+      ]
+    },
+    "node_modules/@rollup/rollup-win32-x64-gnu": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-gnu/-/rollup-win32-x64-gnu-4.60.1.tgz",
+      "integrity": "sha512-k+600V9Zl1CM7eZxJgMyTUzmrmhB/0XZnF4pRypKAlAgxmedUA+1v9R+XOFv56W4SlHEzfeMtzujLJD22Uz5zg==",
+      "cpu": [
+        "x64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "win32"
+      ]
+    },
+    "node_modules/@rollup/rollup-win32-x64-msvc": {
+      "version": "4.60.1",
+      "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.60.1.tgz",
+      "integrity": "sha512-lWMnixq/QzxyhTV6NjQJ4SFo1J6PvOX8vUx5Wb4bBPsEb+8xZ89Bz6kOXpfXj9ak9AHTQVQzlgzBEc1SyM27xQ==",
+      "cpu": [
+        "x64"
+      ],
+      "dev": true,
+      "license": "MIT",
+      "optional": true,
+      "os": [
+        "win32"
+      ]
+    },
     "node_modules/@types/estree": {
       "version": "1.0.8",
       "resolved": "https://registry.npmmirror.com/@types/estree/-/estree-1.0.8.tgz",
@@ -3064,6 +3400,7 @@
       "integrity": "sha512-o5a9xKjbtuhY6Bi5S3+HvbRERmouabWbyUcpXXUA1u+GNUKoROi9byOJ8M0nHbHYHkYICiMlqxkg1KkYmm25Sw==",
       "dev": true,
       "license": "MIT",
+      "peer": true,
       "dependencies": {
         "esbuild": "^0.21.3",
         "postcss": "^8.4.43",
@@ -3147,6 +3484,7 @@
       "integrity": "sha512-MSmPM9REYqDGBI8439mA4mWhV5sKmDlBKWIYbA3lRb2PTHACE0mgKwA8yQ2xq9vxDTuk4iPrECBAEW2aoFXY0Q==",
       "dev": true,
       "license": "MIT",
+      "peer": true,
       "dependencies": {
         "@vitest/expect": "2.1.9",
         "@vitest/mocker": "2.1.9",
@@ -3369,6 +3707,7 @@
       "integrity": "sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA==",
       "devOptional": true,
       "license": "MIT",
+      "peer": true,
       "engines": {
         "node": ">=10.0.0"
       },
@@ -3390,6 +3729,7 @@
       "resolved": "https://registry.npmmirror.com/zod/-/zod-3.25.76.tgz",
       "integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==",
       "license": "MIT",
+      "peer": true,
       "funding": {
         "url": "https://github.com/sponsors/colinhacks"
       }
@@ -77,6 +77,16 @@ export class AgentPool {
     this.semaphore = new Semaphore(maxConcurrency)
   }
 
+  /**
+   * Pool semaphore slots not currently held (`maxConcurrency - active`).
+   * Used to avoid deadlocks when a nested `run()` would wait forever for a slot
+   * held by the parent run. Best-effort only if multiple nested runs start in
+   * parallel after the same synchronous check.
+   */
+  get availableRunSlots(): number {
+    return this.maxConcurrency - this.semaphore.active
+  }
+
   // -------------------------------------------------------------------------
   // Registry operations
   // -------------------------------------------------------------------------
@@ -23,6 +23,7 @@ import type {
   StreamEvent,
   ToolResult,
   ToolUseContext,
+  TeamInfo,
   LLMAdapter,
   LLMChatOptions,
   TraceEvent,
@@ -125,6 +126,11 @@ export interface RunOptions {
    * {@link RunnerOptions.abortSignal}. Useful for per-run timeouts.
    */
   readonly abortSignal?: AbortSignal
+  /**
+   * Team context for built-in tools such as `delegate_to_agent`.
+   * Injected by the orchestrator during `runTeam` / `runTasks` pool runs.
+   */
+  readonly team?: TeamInfo
 }
 
 /** The aggregated result returned when a full run completes. */
@@ -495,7 +501,7 @@ export class AgentRunner {
       // Parallel execution is critical for multi-tool responses where the
       // tools are independent (e.g. reading several files at once).
       // ------------------------------------------------------------------
-      const toolContext: ToolUseContext = this.buildToolContext()
+      const toolContext: ToolUseContext = this.buildToolContext(options)
 
       const executionPromises = toolUseBlocks.map(async (block): Promise<{
         resultBlock: ToolResultBlock
@@ -630,14 +636,15 @@
    * Build the {@link ToolUseContext} passed to every tool execution.
    * Identifies this runner as the invoking agent.
    */
-  private buildToolContext(): ToolUseContext {
+  private buildToolContext(options: RunOptions = {}): ToolUseContext {
     return {
       agent: {
         name: this.options.agentName ?? 'runner',
         role: this.options.agentRole ?? 'assistant',
         model: this.options.model,
       },
-      abortSignal: this.options.abortSignal,
+      abortSignal: options.abortSignal ?? this.options.abortSignal,
+      ...(options.team !== undefined ? { team: options.team } : {}),
     }
   }
 }
@@ -94,12 +94,15 @@ export type { ToolExecutorOptions, BatchToolCall } from './tool/executor.js'
 export {
   registerBuiltInTools,
   BUILT_IN_TOOLS,
+  ALL_BUILT_IN_TOOLS_WITH_DELEGATE,
   bashTool,
+  delegateToAgentTool,
   fileReadTool,
   fileWriteTool,
   fileEditTool,
   grepTool,
 } from './tool/built-in/index.js'
+export type { RegisterBuiltInToolsOptions } from './tool/built-in/index.js'
 
 // ---------------------------------------------------------------------------
 // LLM adapters
@@ -144,6 +147,7 @@ export type {
   ToolUseContext,
   AgentInfo,
   TeamInfo,
+  DelegationPoolView,
 
   // Agent
   AgentConfig,
@@ -50,6 +50,7 @@ import type {
   Task,
   TaskStatus,
   TeamConfig,
+  TeamInfo,
   TeamRunResult,
   TokenUsage,
 } from '../types.js'
@@ -73,6 +74,7 @@ import { extractKeywords, keywordScore } from '../utils/keywords.js'
 
 const ZERO_USAGE: TokenUsage = { input_tokens: 0, output_tokens: 0 }
 const DEFAULT_MAX_CONCURRENCY = 5
+const DEFAULT_MAX_DELEGATION_DEPTH = 3
 const DEFAULT_MODEL = 'claude-opus-4-6'
 
 // ---------------------------------------------------------------------------
@@ -207,11 +209,14 @@ function resolveTokenBudget(primary?: number, fallback?: number): number | undefined
 
 /**
  * Build a minimal {@link Agent} with its own fresh registry/executor.
  * Registers all built-in tools so coordinator/worker agents can use them.
+ * Pool workers pass `includeDelegateTool` so `delegate_to_agent` is available during `runTeam` / `runTasks`.
  */
-function buildAgent(config: AgentConfig): Agent {
+function buildAgent(
+  config: AgentConfig,
+  toolRegistration?: { readonly includeDelegateTool?: boolean },
+): Agent {
   const registry = new ToolRegistry()
-  registerBuiltInTools(registry)
+  registerBuiltInTools(registry, toolRegistration)
   const executor = new ToolExecutor(registry)
   return new Agent(config, registry, executor)
 }
@@ -402,6 +407,54 @@ interface RunContext {
   budgetExceededReason?: string
 }
 
+/**
+ * Build {@link TeamInfo} for tool context, including nested `runDelegatedAgent`
+ * that respects pool capacity to avoid semaphore deadlocks.
+ */
+function buildTaskAgentTeamInfo(
+  ctx: RunContext,
+  taskId: string,
+  traceBase: Partial<RunOptions>,
+  delegationDepth: number,
+): TeamInfo {
+  const sharedMem = ctx.team.getSharedMemoryInstance()
+  const maxDepth = ctx.config.maxDelegationDepth
+  const agentNames = ctx.team.getAgents().map((a) => a.name)
+
+  const runDelegatedAgent = async (targetAgent: string, prompt: string): Promise<AgentRunResult> => {
+    const pool = ctx.pool
+    if (pool.availableRunSlots < 1) {
+      return {
+        success: false,
+        output:
+          'Agent pool has no free concurrency slot for a delegated run (would deadlock). ' +
+          'Increase maxConcurrency or reduce parallel delegation.',
+        messages: [],
+        tokenUsage: ZERO_USAGE,
+        toolCalls: [],
+      }
+    }
+    const nestedTeam = buildTaskAgentTeamInfo(ctx, taskId, traceBase, delegationDepth + 1)
+    const childOpts: Partial<RunOptions> = {
+      ...traceBase,
+      traceAgent: targetAgent,
+      taskId,
+      team: nestedTeam,
+    }
+    return pool.run(targetAgent, prompt, childOpts)
+  }
+
+  return {
+    name: ctx.team.name,
+    agents: agentNames,
+    ...(sharedMem ? { sharedMemory: sharedMem.getStore() } : {}),
+    delegationDepth,
+    maxDelegationDepth: maxDepth,
+    delegationPool: ctx.pool,
+    runDelegatedAgent,
+  }
+}
+
 /**
  * Execute all tasks in `queue` using agents in `pool`, respecting dependencies
  * and running independent tasks in parallel.
@@ -503,16 +556,28 @@ async function executeQueue(
     // Build the prompt: task description + dependency-only context by default.
     const prompt = await buildTaskPrompt(task, team, queue)
 
-    // Build trace context for this task's agent run
-    const traceOptions: Partial<RunOptions> | undefined = config.onTrace
-      ? { onTrace: config.onTrace, runId: ctx.runId ?? '', taskId: task.id, traceAgent: assignee, abortSignal: ctx.abortSignal }
-      : ctx.abortSignal ? { abortSignal: ctx.abortSignal } : undefined
+    // Trace + abort + team tool context (delegate_to_agent)
+    const traceBase: Partial<RunOptions> = {
+      ...(config.onTrace
+        ? {
+            onTrace: config.onTrace,
+            runId: ctx.runId ?? '',
+            taskId: task.id,
+            traceAgent: assignee,
+          }
+        : {}),
+      ...(ctx.abortSignal ? { abortSignal: ctx.abortSignal } : {}),
+    }
+    const runOptions: Partial<RunOptions> = {
+      ...traceBase,
+      team: buildTaskAgentTeamInfo(ctx, task.id, traceBase, 0),
+    }
 
     const taskStartMs = config.onTrace ? Date.now() : 0
     let retryCount = 0
 
     const result = await executeWithRetry(
-      () => pool.run(assignee, prompt, traceOptions),
+      () => pool.run(assignee, prompt, runOptions),
       task,
      (retryData) => {
        retryCount++
@@ -705,12 +770,14 @@ export class OpenMultiAgent {
    *
    * Sensible defaults:
    * - `maxConcurrency`: 5
+   * - `maxDelegationDepth`: 3
    * - `defaultModel`: `'claude-opus-4-6'`
    * - `defaultProvider`: `'anthropic'`
    */
   constructor(config: OrchestratorConfig = {}) {
     this.config = {
       maxConcurrency: config.maxConcurrency ?? DEFAULT_MAX_CONCURRENCY,
+      maxDelegationDepth: config.maxDelegationDepth ?? DEFAULT_MAX_DELEGATION_DEPTH,
       defaultModel: config.defaultModel ?? DEFAULT_MODEL,
       defaultProvider: config.defaultProvider ?? 'anthropic',
       defaultBaseURL: config.defaultBaseURL,
@@ -1403,7 +1470,7 @@ export class OpenMultiAgent {
         baseURL: config.baseURL ?? this.config.defaultBaseURL,
         apiKey: config.apiKey ?? this.config.defaultApiKey,
       }
-      pool.add(buildAgent(effective))
+      pool.add(buildAgent(effective, { includeDelegateTool: true }))
     }
     return pool
   }
@@ -0,0 +1,98 @@
+/**
+ * @fileoverview Built-in `delegate_to_agent` tool for synchronous handoff to a roster agent.
+ */
+
+import { z } from 'zod'
+import type { ToolDefinition, ToolResult, ToolUseContext } from '../../types.js'
+
+const inputSchema = z.object({
+  target_agent: z.string().min(1).describe('Name of the team agent to run the sub-task.'),
+  prompt: z.string().min(1).describe('Instructions / question for the target agent.'),
+})
+
+/**
+ * Delegates a sub-task to another agent on the team and returns that agent's final text output.
+ *
+ * Only available when the orchestrator injects {@link ToolUseContext.team} with
+ * `runDelegatedAgent` (pool-backed `runTeam` / `runTasks`). Standalone `runAgent`
+ * does not register this tool by default.
+ *
+ * Nested {@link AgentRunResult.tokenUsage} from the delegated run is not merged into
+ * the parent agent's run totals (traces may still record usage via `onTrace`).
+ */
+export const delegateToAgentTool: ToolDefinition<z.infer<typeof inputSchema>> = {
+  name: 'delegate_to_agent',
+  description:
+    'Run a sub-task on another agent from this team and return that agent\'s final answer as the tool result. ' +
+    'Use when you need a specialist teammate to produce output you will incorporate. ' +
+    'The target agent runs in a fresh conversation for this prompt only.',
+  inputSchema,
+  async execute(
+    { target_agent: targetAgent, prompt },
+    context: ToolUseContext,
+  ): Promise<ToolResult> {
+    const team = context.team
+    if (!team?.runDelegatedAgent) {
+      return {
+        data:
+          'delegate_to_agent is only available during orchestrated team runs with the delegation tool enabled. ' +
+          'Use SharedMemory or explicit tasks instead.',
+        isError: true,
+      }
+    }
+
+    const depth = team.delegationDepth ?? 0
+    const maxDepth = team.maxDelegationDepth ?? 3
+    if (depth >= maxDepth) {
+      return {
+        data: `Maximum delegation depth (${maxDepth}) reached; cannot delegate further.`,
+        isError: true,
+      }
+    }
+
+    if (targetAgent === context.agent.name) {
+      return {
+        data: 'Cannot delegate to yourself; use another team member.',
+        isError: true,
+      }
+    }
+
+    if (!team.agents.includes(targetAgent)) {
+      return {
+        data: `Unknown agent "${targetAgent}". Roster: ${team.agents.join(', ')}`,
+        isError: true,
+      }
+    }
+
+    if (team.delegationPool !== undefined && team.delegationPool.availableRunSlots < 1) {
+      return {
+        data:
+          'Agent pool has no free concurrency slot for a delegated run (nested run would block indefinitely). ' +
+          'Increase orchestrator maxConcurrency, wait for parallel work to finish, or avoid delegating while the pool is saturated.',
+        isError: true,
+      }
+    }
+
+    const result = await team.runDelegatedAgent(targetAgent, prompt)
+    // Nested run tokenUsage is not merged into the parent agent's AgentRunResult (onTrace may still show it).
+
+    if (team.sharedMemory) {
+      const suffix = `${Date.now()}-${Math.random().toString(36).slice(2, 10)}`
+      const key = `delegation:${targetAgent}:${suffix}`
+      try {
+        await team.sharedMemory.set(`${context.agent.name}/${key}`, result.output, {
+          agent: context.agent.name,
+          delegatedTo: targetAgent,
+          success: String(result.success),
+        })
+      } catch {
+        // Audit is best-effort; do not fail the tool on store errors.
+      }
+    }
+
+    return {
+      data: result.output,
+      isError: !result.success,
+    }
+  },
+}
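The depth guard in the tool above bounds recursive handoffs: each delegated run is built with `delegationDepth + 1`, and delegation is refused once the depth reaches `maxDelegationDepth`. A standalone sketch of just that recursion (assumed names, not the library API):

```typescript
interface MiniTeamCtx {
  delegationDepth: number
  maxDelegationDepth: number
}

// Each delegated run receives a context one level deeper than its parent,
// mirroring buildTaskAgentTeamInfo(ctx, ..., delegationDepth + 1) in the diff.
function tryDelegate(ctx: MiniTeamCtx, chain: string[] = []): string[] {
  if (ctx.delegationDepth >= ctx.maxDelegationDepth) {
    chain.push(`refused at depth ${ctx.delegationDepth}`)
    return chain
  }
  chain.push(`delegated at depth ${ctx.delegationDepth}`)
  return tryDelegate(
    { ...ctx, delegationDepth: ctx.delegationDepth + 1 },
    chain,
  )
}

console.log(tryDelegate({ delegationDepth: 0, maxDelegationDepth: 3 }))
```

With the default depth of 3, a chain delegates at depths 0, 1, and 2 and is refused at 3, which is why an unbounded agent-to-agent loop cannot exhaust the pool.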
@@ -8,12 +8,22 @@
 import type { ToolDefinition } from '../../types.js'
 import { ToolRegistry } from '../framework.js'
 import { bashTool } from './bash.js'
+import { delegateToAgentTool } from './delegate.js'
 import { fileEditTool } from './file-edit.js'
 import { fileReadTool } from './file-read.js'
 import { fileWriteTool } from './file-write.js'
 import { grepTool } from './grep.js'
 
-export { bashTool, fileEditTool, fileReadTool, fileWriteTool, grepTool }
+export { bashTool, delegateToAgentTool, fileEditTool, fileReadTool, fileWriteTool, grepTool }
+
+/** Options for {@link registerBuiltInTools}. */
+export interface RegisterBuiltInToolsOptions {
+  /**
+   * When true, registers `delegate_to_agent` (team orchestration handoff).
+   * Default false so standalone agents and `runAgent` do not expose a tool that always errors.
+   */
+  readonly includeDelegateTool?: boolean
+}
 
 /**
  * The ordered list of all built-in tools. Import this when you need to
@@ -31,6 +41,12 @@ export const BUILT_IN_TOOLS: ToolDefinition<any>[] = [
   grepTool,
 ]
 
+/** All built-ins including `delegate_to_agent` (for team registry setup). */
+export const ALL_BUILT_IN_TOOLS_WITH_DELEGATE: ToolDefinition<any>[] = [
+  ...BUILT_IN_TOOLS,
+  delegateToAgentTool,
+]
+
 /**
  * Register all built-in tools with the given registry.
  *
@@ -43,8 +59,14 @@
  * registerBuiltInTools(registry)
  * ```
  */
-export function registerBuiltInTools(registry: ToolRegistry): void {
+export function registerBuiltInTools(
+  registry: ToolRegistry,
+  options?: RegisterBuiltInToolsOptions,
+): void {
   for (const tool of BUILT_IN_TOOLS) {
     registry.register(tool)
   }
+  if (options?.includeDelegateTool) {
+    registry.register(delegateToAgentTool)
+  }
 }
27 src/types.ts
@@ -153,11 +153,29 @@ export interface AgentInfo {
   readonly model: string
 }
 
-/** Descriptor for a team of agents with shared memory. */
+/**
+ * Minimal pool surface used by `delegate_to_agent` to detect nested-run capacity.
+ * {@link AgentPool} satisfies this structurally via {@link AgentPool.availableRunSlots}.
+ */
+export interface DelegationPoolView {
+  readonly availableRunSlots: number
+}
+
+/** Descriptor for a team of agents (orchestrator-injected into tool context). */
 export interface TeamInfo {
   readonly name: string
   readonly agents: readonly string[]
-  readonly sharedMemory: MemoryStore
+  /** When the team has shared memory enabled; used for delegation audit writes. */
+  readonly sharedMemory?: MemoryStore
+  /** Zero-based depth of nested delegation from the root task run. */
+  readonly delegationDepth?: number
+  readonly maxDelegationDepth?: number
+  readonly delegationPool?: DelegationPoolView
+  /**
+   * Run another roster agent to completion and return its result.
+   * Only set during orchestrated pool execution (`runTeam` / `runTasks`).
+   */
+  readonly runDelegatedAgent?: (targetAgent: string, prompt: string) => Promise<AgentRunResult>
 }
 
 /** Value returned by a tool's `execute` function. */
@@ -401,6 +419,11 @@ export interface OrchestratorEvent {
 /** Top-level configuration for the orchestrator. */
 export interface OrchestratorConfig {
   readonly maxConcurrency?: number
+  /**
+   * Maximum depth of `delegate_to_agent` chains from a task run (default `3`).
+   * Depth is per nested delegated run, not per team.
+   */
+  readonly maxDelegationDepth?: number
   /** Maximum cumulative tokens (input + output) allowed per orchestrator run. */
   readonly maxTokenBudget?: number
   readonly defaultModel?: string
@@ -34,6 +34,11 @@ export class Semaphore {
     }
   }
 
+  /** Maximum concurrent holders configured for this semaphore. */
+  get limit(): number {
+    return this.max
+  }
+
   /**
    * Acquire a slot. Resolves immediately when one is free, or waits until a
    * holder calls `release()`.
@@ -291,5 +291,32 @@ describe('AgentPool', () => {
 
     expect(maxConcurrent).toBeLessThanOrEqual(2)
   })
+
+  it('availableRunSlots matches maxConcurrency when idle', () => {
+    const pool = new AgentPool(3)
+    pool.add(createMockAgent('a'))
+    expect(pool.availableRunSlots).toBe(3)
+  })
+
+  it('availableRunSlots is zero while a run holds the pool slot', async () => {
+    const pool = new AgentPool(1)
+    const agent = createMockAgent('solo')
+    pool.add(agent)
+
+    let finishRun!: (value: AgentRunResult) => void
+    const holdPromise = new Promise<AgentRunResult>((resolve) => {
+      finishRun = resolve
+    })
+    vi.mocked(agent.run).mockReturnValue(holdPromise)
+
+    const runPromise = pool.run('solo', 'hold-slot')
+    await Promise.resolve()
+    await Promise.resolve()
+    expect(pool.availableRunSlots).toBe(0)
+
+    finishRun(SUCCESS_RESULT)
+    await runPromise
+    expect(pool.availableRunSlots).toBe(1)
+  })
 })
 })
@@ -1,4 +1,4 @@
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
+import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'
 import { mkdtemp, rm, writeFile, readFile } from 'fs/promises'
 import { join } from 'path'
 import { tmpdir } from 'os'

@@ -7,9 +7,14 @@ import { fileWriteTool } from '../src/tool/built-in/file-write.js'
 import { fileEditTool } from '../src/tool/built-in/file-edit.js'
 import { bashTool } from '../src/tool/built-in/bash.js'
 import { grepTool } from '../src/tool/built-in/grep.js'
-import { registerBuiltInTools, BUILT_IN_TOOLS } from '../src/tool/built-in/index.js'
+import {
+  registerBuiltInTools,
+  BUILT_IN_TOOLS,
+  delegateToAgentTool,
+} from '../src/tool/built-in/index.js'
 import { ToolRegistry } from '../src/tool/framework.js'
-import type { ToolUseContext } from '../src/types.js'
+import { InMemoryStore } from '../src/memory/store.js'
+import type { AgentRunResult, ToolUseContext } from '../src/types.js'
 
 // ---------------------------------------------------------------------------
 // Helpers
@@ -43,6 +48,13 @@ describe('registerBuiltInTools', () => {
     expect(registry.get('file_write')).toBeDefined()
     expect(registry.get('file_edit')).toBeDefined()
     expect(registry.get('grep')).toBeDefined()
+    expect(registry.get('delegate_to_agent')).toBeUndefined()
   })
 
+  it('registers delegate_to_agent when includeDelegateTool is set', () => {
+    const registry = new ToolRegistry()
+    registerBuiltInTools(registry, { includeDelegateTool: true })
+    expect(registry.get('delegate_to_agent')).toBeDefined()
+  })
+
   it('BUILT_IN_TOOLS has correct length', () => {
@@ -391,3 +403,191 @@ describe('grep', () => {
     expect(result.data.toLowerCase()).toContain('no such file')
   })
 })
+
+// ===========================================================================
+// delegate_to_agent
+// ===========================================================================
+
+const DELEGATE_OK: AgentRunResult = {
+  success: true,
+  output: 'research done',
+  messages: [],
+  tokenUsage: { input_tokens: 1, output_tokens: 2 },
+  toolCalls: [],
+}
+
+describe('delegate_to_agent', () => {
+  it('returns delegated agent output on success', async () => {
+    const runDelegatedAgent = vi.fn().mockResolvedValue(DELEGATE_OK)
+    const ctx: ToolUseContext = {
+      agent: { name: 'alice', role: 'lead', model: 'test' },
+      team: {
+        name: 't',
+        agents: ['alice', 'bob'],
+        delegationDepth: 0,
+        maxDelegationDepth: 3,
+        delegationPool: { availableRunSlots: 2 },
+        runDelegatedAgent,
+      },
+    }
+
+    const result = await delegateToAgentTool.execute(
+      { target_agent: 'bob', prompt: 'Summarize X.' },
+      ctx,
+    )
+
+    expect(result.isError).toBe(false)
+    expect(result.data).toBe('research done')
+    expect(runDelegatedAgent).toHaveBeenCalledWith('bob', 'Summarize X.')
+  })
+
+  it('errors when delegation is not configured', async () => {
+    const ctx: ToolUseContext = {
+      agent: { name: 'alice', role: 'lead', model: 'test' },
+      team: { name: 't', agents: ['alice', 'bob'] },
+    }
+
+    const result = await delegateToAgentTool.execute(
+      { target_agent: 'bob', prompt: 'Hi' },
+      ctx,
+    )
+
+    expect(result.isError).toBe(true)
+    expect(result.data).toMatch(/only available during orchestrated team runs/i)
+  })
+
+  it('errors for unknown target agent', async () => {
+    const ctx: ToolUseContext = {
+      agent: { name: 'alice', role: 'lead', model: 'test' },
+      team: {
+        name: 't',
+        agents: ['alice', 'bob'],
+        runDelegatedAgent: vi.fn(),
+        delegationPool: { availableRunSlots: 1 },
+      },
+    }
+
+    const result = await delegateToAgentTool.execute(
+      { target_agent: 'charlie', prompt: 'Hi' },
+      ctx,
+    )
+
+    expect(result.isError).toBe(true)
+    expect(result.data).toMatch(/Unknown agent/)
+  })
+
+  it('errors on self-delegation', async () => {
+    const ctx: ToolUseContext = {
+      agent: { name: 'alice', role: 'lead', model: 'test' },
+      team: {
+        name: 't',
+        agents: ['alice', 'bob'],
+        runDelegatedAgent: vi.fn(),
+        delegationPool: { availableRunSlots: 1 },
+      },
+    }
+
+    const result = await delegateToAgentTool.execute(
+      { target_agent: 'alice', prompt: 'Hi' },
+      ctx,
+    )
+
+    expect(result.isError).toBe(true)
+    expect(result.data).toMatch(/yourself/)
+  })
+
+  it('errors when delegation depth limit is reached', async () => {
+    const ctx: ToolUseContext = {
+      agent: { name: 'alice', role: 'lead', model: 'test' },
+      team: {
+        name: 't',
+        agents: ['alice', 'bob'],
+        delegationDepth: 3,
+        maxDelegationDepth: 3,
+        runDelegatedAgent: vi.fn(),
+        delegationPool: { availableRunSlots: 1 },
+      },
+    }
+
+    const result = await delegateToAgentTool.execute(
+      { target_agent: 'bob', prompt: 'Hi' },
+      ctx,
+    )
+
+    expect(result.isError).toBe(true)
+    expect(result.data).toMatch(/Maximum delegation depth/)
+  })
+
+  it('errors fast when pool has no free slots without calling runDelegatedAgent', async () => {
+    const runDelegatedAgent = vi.fn()
+    const ctx: ToolUseContext = {
+      agent: { name: 'alice', role: 'lead', model: 'test' },
+      team: {
+        name: 't',
+        agents: ['alice', 'bob'],
+        delegationPool: { availableRunSlots: 0 },
+        runDelegatedAgent,
+      },
+    }
+
+    const result = await delegateToAgentTool.execute(
+      { target_agent: 'bob', prompt: 'Hi' },
+      ctx,
+    )
+
+    expect(result.isError).toBe(true)
+    expect(result.data).toMatch(/no free concurrency slot/i)
+    expect(runDelegatedAgent).not.toHaveBeenCalled()
+  })
+
+  it('writes unique SharedMemory audit keys for repeated delegations', async () => {
+    const store = new InMemoryStore()
+    const runDelegatedAgent = vi.fn().mockResolvedValue(DELEGATE_OK)
+    const ctx: ToolUseContext = {
+      agent: { name: 'alice', role: 'lead', model: 'test' },
+      team: {
+        name: 't',
+        agents: ['alice', 'bob'],
+        sharedMemory: store,
+        delegationPool: { availableRunSlots: 2 },
+        runDelegatedAgent,
+      },
+    }
+
+    await delegateToAgentTool.execute({ target_agent: 'bob', prompt: 'a' }, ctx)
+    await delegateToAgentTool.execute({ target_agent: 'bob', prompt: 'b' }, ctx)
+
+    const keys = (await store.list()).map((e) => e.key)
+    const delegationKeys = keys.filter((k) => k.includes('delegation:bob:'))
+    expect(delegationKeys).toHaveLength(2)
+    expect(delegationKeys[0]).not.toBe(delegationKeys[1])
+  })
+
+  it('returns isError when delegated run reports success false', async () => {
+    const runDelegatedAgent = vi.fn().mockResolvedValue({
+      success: false,
+      output: 'delegated agent failed',
+      messages: [],
+      tokenUsage: { input_tokens: 0, output_tokens: 0 },
+      toolCalls: [],
+    } satisfies AgentRunResult)
+
+    const ctx: ToolUseContext = {
+      agent: { name: 'alice', role: 'lead', model: 'test' },
+      team: {
+        name: 't',
+        agents: ['alice', 'bob'],
+        delegationPool: { availableRunSlots: 1 },
+        runDelegatedAgent,
+      },
+    }
+
+    const result = await delegateToAgentTool.execute(
+      { target_agent: 'bob', prompt: 'Hi' },
+      ctx,
+    )
+
+    expect(result.isError).toBe(true)
+    expect(result.data).toBe('delegated agent failed')
+  })
+})
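Read together, these tests pin a specific guard order for the delegation tool: missing delegation context, then unknown target, then self-delegation, then depth limit, then pool capacity — and only after all five pass is `runDelegatedAgent` invoked. A standalone sketch of that guard chain (the `DelegationTeam` shape and the exact messages are paraphrases chosen to match the tested regexes, not the repo's code):

```typescript
// Guard chain for a delegate_to_agent-style tool, in the order the tests
// above exercise. Returns an error message, or null when delegation may run.
interface DelegationTeam {
  agents: string[]
  delegationDepth?: number
  maxDelegationDepth?: number
  delegationPool?: { availableRunSlots: number }
  runDelegatedAgent?: (target: string, prompt: string) => Promise<unknown>
}

function checkDelegation(self: string, target: string, team: DelegationTeam): string | null {
  // 1. Delegation machinery is only wired up during orchestrated team runs.
  if (!team.runDelegatedAgent || !team.delegationPool) {
    return 'delegate_to_agent is only available during orchestrated team runs'
  }
  // 2. The target must be a member of this team.
  if (!team.agents.includes(target)) {
    return `Unknown agent: ${target}`
  }
  // 3. An agent may not delegate to yourself.
  if (target === self) {
    return 'You cannot delegate to yourself'
  }
  // 4. Depth is checked before spending a slot, so runaway chains fail fast.
  const depth = team.delegationDepth ?? 0
  const maxDepth = team.maxDelegationDepth ?? Infinity
  if (depth >= maxDepth) {
    return `Maximum delegation depth (${maxDepth}) reached`
  }
  // 5. Fail fast when no free concurrency slot exists, without blocking.
  if (team.delegationPool.availableRunSlots <= 0) {
    return 'no free concurrency slot available for delegation'
  }
  return null // all guards passed; safe to call runDelegatedAgent
}
```

Checking `availableRunSlots` instead of awaiting `acquire()` is what makes the "errors fast" test possible: the tool reports the error immediately rather than queueing behind running agents.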
@@ -6,6 +6,10 @@ describe('Semaphore', () => {
     expect(() => new Semaphore(0)).toThrow()
   })
 
+  it('exposes configured limit', () => {
+    expect(new Semaphore(5).limit).toBe(5)
+  })
+
   it('allows up to max concurrent holders', async () => {
     const sem = new Semaphore(2)
     let running = 0