refactor: reorganize examples by category (#125)
Examples grew to 19 flat files mixing basics, provider demos, orchestration patterns, and integrations, with two files colliding on the number 16. Reorganized into category folders so the structure scales as new providers and patterns get added.

Layout:
- examples/basics/: core execution modes (4 files)
- examples/providers/: one example per supported model provider (8 files)
- examples/patterns/: reusable orchestration patterns (6 files)
- examples/integrations/: MCP, observability, AI SDK (3 entries)
- examples/production/: placeholder for end-to-end use cases

Notable changes:
- Dropped numeric prefixes; folder + filename now signal category and intent.
- Rewrote former smoke-test scripts (copilot, gemini) into proper three-agent team examples matching the deepseek/grok/minimax/groq template. Adapter unit tests in tests/ already cover correctness, so this only improves documentation quality.
- Added examples/README.md as the categorized index plus maintenance rules for new submissions.
- Added examples/production/README.md with acceptance criteria for the new production category.
- Updated all internal `npx tsx` paths and import paths (`../src/` to `../../src/`).
- Updated README.md and README_zh.md links.
- Fixed stale `cd` paths inside examples/integrations/with-vercel-ai-sdk/README.md.
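The `../src/` to `../../src/` import update is mechanical, since every example moved exactly one directory deeper. A rough sketch of the rewrite (a hypothetical helper for illustration, not part of this commit):

```typescript
// Rewrite relative imports for a file that moved one directory deeper,
// e.g. examples/foo.ts -> examples/<category>/foo.ts.
// Hypothetical helper; the actual commit edited the files directly.
function deepenImports(source: string): string {
  // Only touch '../src/...' specifiers immediately after a quote, so
  // already-migrated '../../src/...' imports are left alone.
  return source.replace(/(['"])\.\.\/src\//g, "$1../../src/");
}

const before = "import { OpenMultiAgent } from '../src/index.js'";
console.log(deepenImports(before));
// import { OpenMultiAgent } from '../../src/index.js'
```

The quote anchor in the regex makes the rewrite idempotent: running it twice over the same file leaves `../../src/` paths unchanged.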
This commit is contained in:
parent b857c001a8
commit ffec13e915

README.md (31 lines changed)
@@ -65,7 +65,7 @@ Requires Node.js >= 18.
 npm install @jackchen_me/open-multi-agent
 ```
 
-Set the API key for your provider. Local models via Ollama require no API key — see [example 06](examples/06-local-model.ts).
+Set the API key for your provider. Local models via Ollama require no API key — see [`providers/ollama`](examples/providers/ollama.ts).
 
 - `ANTHROPIC_API_KEY`
 - `OPENAI_API_KEY`

@@ -137,23 +137,22 @@ Tokens: 12847 output tokens
 | Auto-orchestrated team | `runTeam()` | Give a goal, framework plans and executes |
 | Explicit pipeline | `runTasks()` | You define the task graph and assignments |
 
-For MapReduce-style fan-out without task dependencies, use `AgentPool.runParallel()` directly. See [example 07](examples/07-fan-out-aggregate.ts).
+For MapReduce-style fan-out without task dependencies, use `AgentPool.runParallel()` directly. See [`patterns/fan-out-aggregate`](examples/patterns/fan-out-aggregate.ts).
 
 ## Examples
 
-19 runnable scripts and 1 full-stack demo in [`examples/`](./examples/). Start with these:
+[`examples/`](./examples/) is organized by category — basics, providers, patterns, integrations, and production. See [`examples/README.md`](./examples/README.md) for the full index. Highlights:
 
-- [02 — Team Collaboration](examples/02-team-collaboration.ts): `runTeam()` coordinator pattern.
-- [06 — Local Model](examples/06-local-model.ts): Ollama and Claude in one pipeline via `baseURL`.
-- [09 — Structured Output](examples/09-structured-output.ts): any agent returns Zod-validated JSON.
-- [11 — Trace Observability](examples/11-trace-observability.ts): `onTrace` spans for LLM calls, tools, and tasks.
-- [16 — MCP (GitHub)](examples/16-mcp-github.ts): expose an MCP server's tools to an agent via `connectMCPTools()`.
-- [17 — MiniMax](examples/17-minimax.ts): three-agent team using MiniMax M2.7.
-- [18 — DeepSeek](examples/18-deepseek.ts): three-agent team using DeepSeek Chat.
-- [19 — Groq](examples/19-groq.ts): OpenAI-compatible Groq endpoint with fast free-tier models.
-- [with-vercel-ai-sdk](examples/with-vercel-ai-sdk/): Next.js app — OMA `runTeam()` + AI SDK `useChat` streaming.
+- [`basics/team-collaboration`](examples/basics/team-collaboration.ts): `runTeam()` coordinator pattern.
+- [`providers/ollama`](examples/providers/ollama.ts): Ollama and Claude in one pipeline via `baseURL`.
+- [`patterns/structured-output`](examples/patterns/structured-output.ts): any agent returns Zod-validated JSON.
+- [`patterns/agent-handoff`](examples/patterns/agent-handoff.ts): synchronous sub-agent delegation via `delegate_to_agent`.
+- [`integrations/trace-observability`](examples/integrations/trace-observability.ts): `onTrace` spans for LLM calls, tools, and tasks.
+- [`integrations/mcp-github`](examples/integrations/mcp-github.ts): expose an MCP server's tools to an agent via `connectMCPTools()`.
+- [`providers/minimax`](examples/providers/minimax.ts), [`providers/deepseek`](examples/providers/deepseek.ts), [`providers/groq`](examples/providers/groq.ts): three-agent teams on each provider.
+- [`integrations/with-vercel-ai-sdk`](examples/integrations/with-vercel-ai-sdk/): Next.js app — OMA `runTeam()` + AI SDK `useChat` streaming.
 
-Run scripts with `npx tsx examples/02-team-collaboration.ts`.
+Run scripts with `npx tsx examples/basics/team-collaboration.ts`.
 
 ## Architecture

@@ -334,7 +333,7 @@ Notes:
 - Current transport support is stdio.
 - MCP input validation is delegated to the MCP server (`inputSchema` is `z.any()`).
 
-See [example 16](examples/16-mcp-github.ts) for a full runnable setup.
+See [`integrations/mcp-github`](examples/integrations/mcp-github.ts) for a full runnable setup.
 
 ## Context Management

@@ -379,9 +378,9 @@ Pairs well with `compressToolResults` and `maxToolOutputChars` above.
 
 Gemini requires `npm install @google/genai` (optional peer dependency).
 
-Verified local models with tool-calling: **Gemma 4** (see [example 08](examples/08-gemma4-local.ts)).
+Verified local models with tool-calling: **Gemma 4** (see [`providers/gemma4-local`](examples/providers/gemma4-local.ts)).
 
-Any OpenAI-compatible API should work via `provider: 'openai'` + `baseURL` (Mistral, Qwen, Moonshot, Doubao, etc.). Groq is now verified in [example 19](examples/19-groq.ts). **Grok, MiniMax, and DeepSeek now have first-class support** via `provider: 'grok'`, `provider: 'minimax'`, and `provider: 'deepseek'`.
+Any OpenAI-compatible API should work via `provider: 'openai'` + `baseURL` (Mistral, Qwen, Moonshot, Doubao, etc.). Groq is now verified in [`providers/groq`](examples/providers/groq.ts). **Grok, MiniMax, and DeepSeek now have first-class support** via `provider: 'grok'`, `provider: 'minimax'`, and `provider: 'deepseek'`.
 
 ### Local Model Tool-Calling
README_zh.md (31 lines changed)
@@ -65,7 +65,7 @@ CrewAI 是 Python。LangGraph 需要你自己画图。`open-multi-agent` 是你
 npm install @jackchen_me/open-multi-agent
 ```
 
-根据使用的 Provider 设置对应的 API key。通过 Ollama 使用本地模型无需 API key — 参见 [example 06](examples/06-local-model.ts)。
+根据使用的 Provider 设置对应的 API key。通过 Ollama 使用本地模型无需 API key — 参见 [`providers/ollama`](examples/providers/ollama.ts)。
 
 - `ANTHROPIC_API_KEY`
 - `OPENAI_API_KEY`

@@ -137,23 +137,22 @@ Tokens: 12847 output tokens
 | 自动编排团队 | `runTeam()` | 给一个目标,框架自动规划和执行 |
 | 显式任务管线 | `runTasks()` | 你自己定义任务图和分配 |
 
-如果需要 MapReduce 风格的扇出而不涉及任务依赖,直接使用 `AgentPool.runParallel()`。参见[示例 07](examples/07-fan-out-aggregate.ts)。
+如果需要 MapReduce 风格的扇出而不涉及任务依赖,直接使用 `AgentPool.runParallel()`。参见 [`patterns/fan-out-aggregate`](examples/patterns/fan-out-aggregate.ts)。
 
 ## 示例
 
-[`examples/`](./examples/) 里有 19 个可运行脚本和 1 个完整项目。推荐从这几个开始:
+[`examples/`](./examples/) 按类别组织——basics、providers、patterns、integrations、production。完整索引见 [`examples/README.md`](./examples/README.md)。推荐起步:
 
-- [02 — 团队协作](examples/02-team-collaboration.ts):`runTeam()` 协调者模式。
-- [06 — 本地模型](examples/06-local-model.ts):通过 `baseURL` 把 Ollama 和 Claude 放在同一条管线。
-- [09 — 结构化输出](examples/09-structured-output.ts):任意 agent 产出 Zod 校验过的 JSON。
-- [11 — 可观测性](examples/11-trace-observability.ts):`onTrace` 回调,为 LLM 调用、工具、任务发出结构化 span。
-- [16 — MCP (GitHub)](examples/16-mcp-github.ts):通过 `connectMCPTools()` 把 MCP 服务器的工具暴露给 agent。
-- [17 — MiniMax](examples/17-minimax.ts):使用 MiniMax M2.7 的三智能体团队。
-- [18 — DeepSeek](examples/18-deepseek.ts):使用 DeepSeek Chat 的三智能体团队。
-- [19 — Groq](examples/19-groq.ts):OpenAI 兼容的 Groq 端点,搭配高速免费档模型。
-- [with-vercel-ai-sdk](examples/with-vercel-ai-sdk/):Next.js 应用 — OMA `runTeam()` + AI SDK `useChat` 流式输出。
+- [`basics/team-collaboration`](examples/basics/team-collaboration.ts):`runTeam()` 协调者模式。
+- [`providers/ollama`](examples/providers/ollama.ts):通过 `baseURL` 把 Ollama 和 Claude 放在同一条管线。
+- [`patterns/structured-output`](examples/patterns/structured-output.ts):任意 agent 产出 Zod 校验过的 JSON。
+- [`patterns/agent-handoff`](examples/patterns/agent-handoff.ts):`delegate_to_agent` 同步子智能体委派。
+- [`integrations/trace-observability`](examples/integrations/trace-observability.ts):`onTrace` 回调,为 LLM 调用、工具、任务发出结构化 span。
+- [`integrations/mcp-github`](examples/integrations/mcp-github.ts):通过 `connectMCPTools()` 把 MCP 服务器的工具暴露给 agent。
+- [`providers/minimax`](examples/providers/minimax.ts)、[`providers/deepseek`](examples/providers/deepseek.ts)、[`providers/groq`](examples/providers/groq.ts):各 provider 的三智能体团队。
+- [`integrations/with-vercel-ai-sdk`](examples/integrations/with-vercel-ai-sdk/):Next.js 应用 — OMA `runTeam()` + AI SDK `useChat` 流式输出。
 
-用 `npx tsx examples/02-team-collaboration.ts` 运行脚本示例。
+用 `npx tsx examples/basics/team-collaboration.ts` 运行脚本示例。
 
 ## 架构

@@ -334,7 +333,7 @@ await disconnect()
 - 当前仅支持 stdio transport。
 - MCP 的入参校验交给 MCP 服务器自身(`inputSchema` 是 `z.any()`)。
 
-完整可运行示例见 [example 16](examples/16-mcp-github.ts)。
+完整可运行示例见 [`integrations/mcp-github`](examples/integrations/mcp-github.ts)。
 
 ## 上下文管理

@@ -379,9 +378,9 @@ const agent: AgentConfig = {
 
 Gemini 需要 `npm install @google/genai`(optional peer dependency)。
 
-已验证支持 tool-calling 的本地模型:**Gemma 4**(见[示例 08](examples/08-gemma4-local.ts))。
+已验证支持 tool-calling 的本地模型:**Gemma 4**(见 [`providers/gemma4-local`](examples/providers/gemma4-local.ts))。
 
-任何 OpenAI 兼容 API 均可通过 `provider: 'openai'` + `baseURL` 接入(Mistral、Qwen、Moonshot、Doubao 等)。Groq 已在[示例 19](examples/19-groq.ts) 中验证。**Grok、MiniMax 和 DeepSeek 现已原生支持**,分别使用 `provider: 'grok'`、`provider: 'minimax'` 和 `provider: 'deepseek'`。
+任何 OpenAI 兼容 API 均可通过 `provider: 'openai'` + `baseURL` 接入(Mistral、Qwen、Moonshot、Doubao 等)。Groq 已在 [`providers/groq`](examples/providers/groq.ts) 中验证。**Grok、MiniMax 和 DeepSeek 现已原生支持**,分别使用 `provider: 'grok'`、`provider: 'minimax'` 和 `provider: 'deepseek'`。
 
 ### 本地模型 Tool-Calling
examples/05-copilot-test.ts (deleted)

@@ -1,49 +0,0 @@
-/**
- * Quick smoke test for the Copilot adapter.
- *
- * Run:
- *   npx tsx examples/05-copilot-test.ts
- *
- * If GITHUB_COPILOT_TOKEN is not set, the adapter will start an interactive
- * OAuth2 device flow — you'll be prompted to sign in via your browser.
- */
-
-import { OpenMultiAgent } from '../src/index.js'
-import type { OrchestratorEvent } from '../src/types.js'
-
-const orchestrator = new OpenMultiAgent({
-  defaultModel: 'gpt-4o',
-  defaultProvider: 'copilot',
-  onProgress: (event: OrchestratorEvent) => {
-    if (event.type === 'agent_start') {
-      console.log(`[start] agent=${event.agent}`)
-    } else if (event.type === 'agent_complete') {
-      console.log(`[complete] agent=${event.agent}`)
-    }
-  },
-})
-
-console.log('Testing Copilot adapter with gpt-4o...\n')
-
-const result = await orchestrator.runAgent(
-  {
-    name: 'assistant',
-    model: 'gpt-4o',
-    provider: 'copilot',
-    systemPrompt: 'You are a helpful assistant. Keep answers brief.',
-    maxTurns: 1,
-    maxTokens: 256,
-  },
-  'What is 2 + 2? Reply in one sentence.',
-)
-
-if (result.success) {
-  console.log('\nAgent output:')
-  console.log('─'.repeat(60))
-  console.log(result.output)
-  console.log('─'.repeat(60))
-  console.log(`\nTokens: input=${result.tokenUsage.input_tokens}, output=${result.tokenUsage.output_tokens}`)
-} else {
-  console.error('Agent failed:', result.output)
-  process.exit(1)
-}
examples/13-gemini.ts (deleted)

@@ -1,48 +0,0 @@
-/**
- * Quick smoke test for the Gemini adapter.
- *
- * Run:
- *   npx tsx examples/13-gemini.ts
- *
- * If GEMINI_API_KEY is not set, the adapter will not work.
- */
-
-import { OpenMultiAgent } from '../src/index.js'
-import type { OrchestratorEvent } from '../src/types.js'
-
-const orchestrator = new OpenMultiAgent({
-  defaultModel: 'gemini-2.5-flash',
-  defaultProvider: 'gemini',
-  onProgress: (event: OrchestratorEvent) => {
-    if (event.type === 'agent_start') {
-      console.log(`[start] agent=${event.agent}`)
-    } else if (event.type === 'agent_complete') {
-      console.log(`[complete] agent=${event.agent}`)
-    }
-  },
-})
-
-console.log('Testing Gemini adapter with gemini-2.5-flash...\n')
-
-const result = await orchestrator.runAgent(
-  {
-    name: 'assistant',
-    model: 'gemini-2.5-flash',
-    provider: 'gemini',
-    systemPrompt: 'You are a helpful assistant. Keep answers brief.',
-    maxTurns: 1,
-    maxTokens: 256,
-  },
-  'What is 2 + 2? Reply in one sentence.',
-)
-
-if (result.success) {
-  console.log('\nAgent output:')
-  console.log('─'.repeat(60))
-  console.log(result.output)
-  console.log('─'.repeat(60))
-  console.log(`\nTokens: input=${result.tokenUsage.input_tokens}, output=${result.tokenUsage.output_tokens}`)
-} else {
-  console.error('Agent failed:', result.output)
-  process.exit(1)
-}
examples/README.md (new)

@@ -0,0 +1,79 @@
+# Examples
+
+Runnable scripts demonstrating `open-multi-agent`. Organized by category — pick one that matches what you're trying to do.
+
+All scripts run with `npx tsx examples/<category>/<name>.ts` and require the corresponding API key in your environment.
+
+---
+
+## basics — start here
+
+The four core execution modes. Read these first.
+
+| Example | What it shows |
+|---------|---------------|
+| [`basics/single-agent`](basics/single-agent.ts) | One agent with bash + file tools, then streaming via the `Agent` class. |
+| [`basics/team-collaboration`](basics/team-collaboration.ts) | `runTeam()` coordinator pattern — goal in, results out. |
+| [`basics/task-pipeline`](basics/task-pipeline.ts) | `runTasks()` with explicit task DAG and dependencies. |
+| [`basics/multi-model-team`](basics/multi-model-team.ts) | Different models per agent in one team. |
+
+## providers — model & adapter examples
+
+One example per supported provider. All follow the same three-agent (architect / developer / reviewer) shape so they're easy to compare.
+
+| Example | Provider | Env var |
+|---------|----------|---------|
+| [`providers/ollama`](providers/ollama.ts) | Ollama (local) + Claude | `ANTHROPIC_API_KEY` |
+| [`providers/gemma4-local`](providers/gemma4-local.ts) | Gemma 4 via Ollama (100% local) | — |
+| [`providers/copilot`](providers/copilot.ts) | GitHub Copilot (GPT-4o + Claude) | `GITHUB_TOKEN` |
+| [`providers/grok`](providers/grok.ts) | xAI Grok | `XAI_API_KEY` |
+| [`providers/gemini`](providers/gemini.ts) | Google Gemini | `GEMINI_API_KEY` |
+| [`providers/minimax`](providers/minimax.ts) | MiniMax M2.7 | `MINIMAX_API_KEY` |
+| [`providers/deepseek`](providers/deepseek.ts) | DeepSeek Chat | `DEEPSEEK_API_KEY` |
+| [`providers/groq`](providers/groq.ts) | Groq (OpenAI-compatible) | `GROQ_API_KEY` |
+
+## patterns — orchestration patterns
+
+Reusable shapes for common multi-agent problems.
+
+| Example | Pattern |
+|---------|---------|
+| [`patterns/fan-out-aggregate`](patterns/fan-out-aggregate.ts) | MapReduce-style fan-out via `AgentPool.runParallel()`. |
+| [`patterns/structured-output`](patterns/structured-output.ts) | Zod-validated JSON output from an agent. |
+| [`patterns/task-retry`](patterns/task-retry.ts) | Per-task retry with exponential backoff. |
+| [`patterns/multi-perspective-code-review`](patterns/multi-perspective-code-review.ts) | Multiple reviewer agents in parallel, then synthesis. |
+| [`patterns/research-aggregation`](patterns/research-aggregation.ts) | Multi-source research collated by a synthesis agent. |
+| [`patterns/agent-handoff`](patterns/agent-handoff.ts) | Synchronous sub-agent delegation via `delegate_to_agent`. |
+
+## integrations — external systems
+
+Hooking the framework up to outside tooling.
+
+| Example | Integrates with |
+|---------|-----------------|
+| [`integrations/trace-observability`](integrations/trace-observability.ts) | `onTrace` spans for LLM calls, tools, and tasks. |
+| [`integrations/mcp-github`](integrations/mcp-github.ts) | An MCP server's tools exposed to an agent via `connectMCPTools()`. |
+| [`integrations/with-vercel-ai-sdk/`](integrations/with-vercel-ai-sdk/) | Next.js app — OMA `runTeam()` + AI SDK `useChat` streaming. |
+
+## production — real-world use cases
+
+End-to-end examples wired to real workflows. Higher bar than the categories above. See [`production/README.md`](production/README.md) for the acceptance criteria and how to contribute.
+
+---
+
+## Adding a new example
+
+| You're adding… | Goes in… | Filename |
+|----------------|----------|----------|
+| A new model provider | `providers/` | `<provider-name>.ts` (lowercase, hyphenated) |
+| A reusable orchestration pattern | `patterns/` | `<pattern-name>.ts` |
+| Integration with an outside system (MCP server, observability backend, framework, app) | `integrations/` | `<system>.ts` or `<system>/` for multi-file |
+| A real-world end-to-end use case | `production/` | `<use-case>/` directory with its own README |
+
+Conventions:
+
+- **No numeric prefixes.** Folders signal category; reading order is set by this README.
+- **File header docstring** with one-line title, `Run:` block, and prerequisites.
+- **Imports** should resolve as `from '../../src/index.js'` (one level deeper than the old flat layout).
+- **Match the provider template** when adding a provider: three-agent team (architect / developer / reviewer) building a small REST API. Keeps comparisons honest.
+- **Add a row** to the table in this file for the corresponding category.
examples/04-multi-model-team.ts → examples/basics/multi-model-team.ts

@@ -1,5 +1,5 @@
 /**
- * Example 04 — Multi-Model Team with Custom Tools
+ * Multi-Model Team with Custom Tools
  *
  * Demonstrates:
  * - Mixing Anthropic and OpenAI models in the same team
@@ -8,7 +8,7 @@
  * - Running a team goal that uses the custom tools
  *
  * Run:
- *   npx tsx examples/04-multi-model-team.ts
+ *   npx tsx examples/basics/multi-model-team.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY and OPENAI_API_KEY env vars must be set.
@@ -16,8 +16,8 @@
  */
 
 import { z } from 'zod'
-import { OpenMultiAgent, defineTool } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent } from '../src/types.js'
+import { OpenMultiAgent, defineTool } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'
 
 // ---------------------------------------------------------------------------
 // Custom tools — defined with defineTool() + Zod schemas
@@ -113,7 +113,7 @@ const formatCurrencyTool = defineTool({
 // directly through AgentPool rather than through the OpenMultiAgent high-level API.
 // ---------------------------------------------------------------------------
 
-import { Agent, AgentPool, ToolRegistry, ToolExecutor, registerBuiltInTools } from '../src/index.js'
+import { Agent, AgentPool, ToolRegistry, ToolExecutor, registerBuiltInTools } from '../../src/index.js'
 
 /**
  * Build an Agent with both built-in and custom tools registered.
examples/01-single-agent.ts → examples/basics/single-agent.ts

@@ -1,18 +1,18 @@
 /**
- * Example 01 — Single Agent
+ * Single Agent
  *
  * The simplest possible usage: one agent with bash and file tools, running
  * a coding task. Then shows streaming output using the Agent class directly.
  *
  * Run:
- *   npx tsx examples/01-single-agent.ts
+ *   npx tsx examples/basics/single-agent.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY env var must be set.
 */
 
-import { OpenMultiAgent, Agent, ToolRegistry, ToolExecutor, registerBuiltInTools } from '../src/index.js'
-import type { OrchestratorEvent } from '../src/types.js'
+import { OpenMultiAgent, Agent, ToolRegistry, ToolExecutor, registerBuiltInTools } from '../../src/index.js'
+import type { OrchestratorEvent } from '../../src/types.js'
 
 // ---------------------------------------------------------------------------
 // Part 1: Single agent via OpenMultiAgent (simplest path)
examples/03-task-pipeline.ts → examples/basics/task-pipeline.ts

@@ -1,5 +1,5 @@
 /**
- * Example 03 — Explicit Task Pipeline with Dependencies
+ * Explicit Task Pipeline with Dependencies
  *
  * Demonstrates how to define tasks with explicit dependency chains
  * (design → implement → test → review) using runTasks(). The TaskQueue
@@ -8,14 +8,14 @@
  * description plus direct dependency results (not unrelated team outputs).
  *
  * Run:
- *   npx tsx examples/03-task-pipeline.ts
+ *   npx tsx examples/basics/task-pipeline.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY env var must be set.
 */
 
-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent, Task } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent, Task } from '../../src/types.js'
 
 // ---------------------------------------------------------------------------
 // Agents
examples/02-team-collaboration.ts → examples/basics/team-collaboration.ts

@@ -1,19 +1,19 @@
 /**
- * Example 02 — Multi-Agent Team Collaboration
+ * Multi-Agent Team Collaboration
  *
  * Three specialised agents (architect, developer, reviewer) collaborate on a
  * shared goal. The OpenMultiAgent orchestrator breaks the goal into tasks, assigns
  * them to the right agents, and collects the results.
  *
  * Run:
- *   npx tsx examples/02-team-collaboration.ts
+ *   npx tsx examples/basics/team-collaboration.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY env var must be set.
 */
 
-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'
 
 // ---------------------------------------------------------------------------
 // Agent definitions
examples/16-mcp-github.ts → examples/integrations/mcp-github.ts

@@ -1,11 +1,11 @@
 /**
- * Example 16 — MCP GitHub Tools
+ * MCP GitHub Tools
  *
  * Connect an MCP server over stdio and register all exposed MCP tools as
  * standard open-multi-agent tools.
  *
  * Run:
- *   npx tsx examples/16-mcp-github.ts
+ *   npx tsx examples/integrations/mcp-github.ts
  *
  * Prerequisites:
  * - GEMINI_API_KEY
@@ -13,8 +13,8 @@
  * - @modelcontextprotocol/sdk installed
 */
 
-import { Agent, ToolExecutor, ToolRegistry, registerBuiltInTools } from '../src/index.js'
-import { connectMCPTools } from '../src/mcp.js'
+import { Agent, ToolExecutor, ToolRegistry, registerBuiltInTools } from '../../src/index.js'
+import { connectMCPTools } from '../../src/mcp.js'
 
 if (!process.env.GITHUB_TOKEN?.trim()) {
   console.error('Missing GITHUB_TOKEN: set a GitHub personal access token in the environment.')
examples/11-trace-observability.ts → examples/integrations/trace-observability.ts

@@ -1,5 +1,5 @@
 /**
- * Example 11 — Trace Observability
+ * Trace Observability
  *
  * Demonstrates the `onTrace` callback for lightweight observability. Every LLM
  * call, tool execution, task lifecycle, and agent run emits a structured trace
@@ -11,14 +11,14 @@
  * dashboard.
  *
  * Run:
- *   npx tsx examples/11-trace-observability.ts
+ *   npx tsx examples/integrations/trace-observability.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY env var must be set.
 */
 
-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, TraceEvent } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, TraceEvent } from '../../src/types.js'
 
 // ---------------------------------------------------------------------------
 // Agents
examples/with-vercel-ai-sdk/README.md → examples/integrations/with-vercel-ai-sdk/README.md

@@ -27,11 +27,11 @@ Chat UI (app/page.tsx) — useChat hook renders streamed response
 
 ```bash
 # 1. From repo root, install OMA dependencies
-cd ../..
+cd ../../..
 npm install
 
 # 2. Back to this example
-cd examples/with-vercel-ai-sdk
+cd examples/integrations/with-vercel-ai-sdk
 npm install
 
 # 3. Set your API key
examples/16-agent-handoff.ts → examples/patterns/agent-handoff.ts

@@ -1,5 +1,5 @@
 /**
- * Example 16 — Synchronous agent handoff via `delegate_to_agent`
+ * Synchronous agent handoff via `delegate_to_agent`
  *
  * During `runTeam` / `runTasks`, pool agents register the built-in
  * `delegate_to_agent` tool so one specialist can run a sub-prompt on another
@@ -9,14 +9,14 @@
  * standalone `runAgent()` does not register this tool by default.
  *
  * Run:
- *   npx tsx examples/16-agent-handoff.ts
+ *   npx tsx examples/patterns/agent-handoff.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY
 */
 
-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig } from '../../src/types.js'
 
 const researcher: AgentConfig = {
   name: 'researcher',
examples/07-fan-out-aggregate.ts → examples/patterns/fan-out-aggregate.ts

@@ -1,5 +1,5 @@
 /**
- * Example 07 — Fan-Out / Aggregate (MapReduce) Pattern
+ * Fan-Out / Aggregate (MapReduce) Pattern
  *
  * Demonstrates:
  * - Fan-out: send the same question to N "analyst" agents in parallel
@@ -9,14 +9,14 @@
  * - No tools needed — pure LLM reasoning to keep the focus on the pattern
  *
  * Run:
- *   npx tsx examples/07-fan-out-aggregate.ts
+ *   npx tsx examples/patterns/fan-out-aggregate.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY env var must be set.
 */
 
-import { Agent, AgentPool, ToolRegistry, ToolExecutor, registerBuiltInTools } from '../src/index.js'
-import type { AgentConfig, AgentRunResult } from '../src/types.js'
+import { Agent, AgentPool, ToolRegistry, ToolExecutor, registerBuiltInTools } from '../../src/index.js'
+import type { AgentConfig, AgentRunResult } from '../../src/types.js'
 
 // ---------------------------------------------------------------------------
 // Analysis topic
examples/14-multi-perspective-code-review.ts → examples/patterns/multi-perspective-code-review.ts

@@ -11,14 +11,14 @@
  * generator → [security-reviewer, performance-reviewer, style-reviewer] (parallel) → synthesizer
  *
  * Run:
- *   npx tsx examples/14-multi-perspective-code-review.ts
+ *   npx tsx examples/patterns/multi-perspective-code-review.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY env var must be set.
 */
 
-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'
 
 // ---------------------------------------------------------------------------
 // API spec to implement
examples/15-research-aggregation.ts → examples/patterns/research-aggregation.ts

@@ -1,5 +1,5 @@
 /**
- * Example 15 — Multi-Source Research Aggregation
+ * Multi-Source Research Aggregation
  *
  * Demonstrates runTasks() with explicit dependency chains:
  * - Parallel execution: three analyst agents research the same topic independently
@@ -14,14 +14,14 @@
  * [technical-analyst, market-analyst, community-analyst] (parallel) → synthesizer
  *
  * Run:
- *   npx tsx examples/15-research-aggregation.ts
+ *   npx tsx examples/patterns/research-aggregation.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY env var must be set.
 */
 
-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'
 
 // ---------------------------------------------------------------------------
 // Topic
examples/09-structured-output.ts → examples/patterns/structured-output.ts

@@ -1,5 +1,5 @@
 /**
- * Example 09 — Structured Output
+ * Structured Output
  *
  * Demonstrates `outputSchema` on AgentConfig. The agent's response is
  * automatically parsed as JSON and validated against a Zod schema.
@@ -8,15 +8,15 @@
  * The validated result is available via `result.structured`.
  *
  * Run:
- *   npx tsx examples/09-structured-output.ts
+ *   npx tsx examples/patterns/structured-output.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY env var must be set.
 */
 
 import { z } from 'zod'
-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig } from '../../src/types.js'
 
 // ---------------------------------------------------------------------------
 // Define a Zod schema for the expected output
examples/10-task-retry.ts → examples/patterns/task-retry.ts

@@ -1,5 +1,5 @@
 /**
- * Example 10 — Task Retry with Exponential Backoff
+ * Task Retry with Exponential Backoff
  *
  * Demonstrates `maxRetries`, `retryDelayMs`, and `retryBackoff` on task config.
  * When a task fails, the framework automatically retries with exponential
@@ -10,14 +10,14 @@
  * to retry on failure, and the second task (analysis) depends on it.
  *
  * Run:
- *   npx tsx examples/10-task-retry.ts
+ *   npx tsx examples/patterns/task-retry.ts
  *
  * Prerequisites:
  *   ANTHROPIC_API_KEY env var must be set.
 */
 
-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'
 
 // ---------------------------------------------------------------------------
 // Agents
@@ -0,0 +1,38 @@
+# Production Examples
+
+End-to-end examples that demonstrate `open-multi-agent` running on real-world use cases — not toy demos.
+
+The other example categories (`basics/`, `providers/`, `patterns/`, `integrations/`) optimize for clarity and small surface area. This directory optimizes for **showing the framework solving an actual problem**, with the operational concerns that come with it.
+
+## Acceptance criteria
+
+A submission belongs in `production/` if it meets all of:
+
+1. **Real use case.** Solves a concrete problem someone would actually pay for or use daily — not "build me a TODO API".
+2. **Error handling.** Handles LLM failures, tool failures, and partial team failures gracefully. No bare `await` chains that crash on the first error.
+3. **Documentation.** Each example lives in its own subdirectory with a `README.md` covering:
+   - What problem it solves
+   - Architecture diagram or task DAG description
+   - Required env vars / external services
+   - How to run locally
+   - Expected runtime and approximate token cost
+4. **Reproducible.** Pinned model versions; no reliance on private datasets or unpublished APIs.
+5. **Tested.** At least one test or smoke check that verifies the example still runs after framework updates.
+
+If a submission falls short on (2)–(5), it probably belongs in `patterns/` or `integrations/` instead.
+
+## Layout
+
+```
+production/
+└── <use-case>/
+    ├── README.md    # required
+    ├── index.ts     # entry point
+    ├── agents/      # AgentConfig definitions
+    ├── tools/       # custom tools, if any
+    └── tests/       # smoke test or e2e test
+```
+
+## Submitting
+
+Open a PR. In the PR description, address each of the five acceptance criteria above.
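Criterion 2 above (error handling) boils down to: no unguarded `await` around a provider or tool call. A minimal sketch of the idea, using a hypothetical `withRetry` helper that is not part of the framework, is:

```typescript
// Hypothetical helper (not a framework API) illustrating criterion 2:
// retry a flaky async step with exponential backoff instead of letting
// the first transient provider error crash the whole run.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt === maxRetries) break
      // Delay doubles each attempt: 250ms, 500ms, 1000ms, ...
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt))
    }
  }
  throw lastError
}
```

A production example would wrap each provider call this way, or lean on the framework's built-in `maxRetries` / `retryBackoff` task options shown in `patterns/task-retry.ts`.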
@@ -0,0 +1,163 @@
+/**
+ * Multi-Agent Team Collaboration with GitHub Copilot
+ *
+ * Three specialized agents (architect, developer, reviewer) collaborate via `runTeam()`
+ * to build a minimal Express.js REST API. Routes through GitHub Copilot's OpenAI-compatible
+ * endpoint, mixing GPT-4o (architect/reviewer) and Claude Sonnet (developer) in one team.
+ *
+ * Run:
+ *   npx tsx examples/providers/copilot.ts
+ *
+ * Authentication (one of):
+ *   GITHUB_COPILOT_TOKEN env var (preferred)
+ *   GITHUB_TOKEN env var (fallback)
+ *   Otherwise: an interactive OAuth2 device flow starts on first run and prompts
+ *   you to sign in via your browser. Requires an active Copilot subscription.
+ *
+ * Available models (subset):
+ *   gpt-4o — included, no premium request
+ *   claude-sonnet-4.5 — premium, 1x multiplier
+ *   claude-sonnet-4.6 — premium, 1x multiplier
+ * See src/llm/copilot.ts for the full model list and premium multipliers.
+ */
+
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'
+
+// ---------------------------------------------------------------------------
+// Agent definitions (mixing GPT-4o and Claude Sonnet, both via Copilot)
+// ---------------------------------------------------------------------------
+const architect: AgentConfig = {
+  name: 'architect',
+  model: 'gpt-4o',
+  provider: 'copilot',
+  systemPrompt: `You are a software architect with deep experience in Node.js and REST API design.
+Your job is to design clear, production-quality API contracts and file/directory structures.
+Output concise plans in markdown — no unnecessary prose.`,
+  tools: ['bash', 'file_write'],
+  maxTurns: 5,
+  temperature: 0.2,
+}
+
+const developer: AgentConfig = {
+  name: 'developer',
+  model: 'claude-sonnet-4.5',
+  provider: 'copilot',
+  systemPrompt: `You are a TypeScript/Node.js developer. You implement what the architect specifies.
+Write clean, runnable code with proper error handling. Use the tools to write files and run tests.`,
+  tools: ['bash', 'file_read', 'file_write', 'file_edit'],
+  maxTurns: 12,
+  temperature: 0.1,
+}
+
+const reviewer: AgentConfig = {
+  name: 'reviewer',
+  model: 'gpt-4o',
+  provider: 'copilot',
+  systemPrompt: `You are a senior code reviewer. Review code for correctness, security, and clarity.
+Provide a structured review with: LGTM items, suggestions, and any blocking issues.
+Read files using the tools before reviewing.`,
+  tools: ['bash', 'file_read', 'grep'],
+  maxTurns: 5,
+  temperature: 0.3,
+}
+
+// ---------------------------------------------------------------------------
+// Progress tracking
+// ---------------------------------------------------------------------------
+const startTimes = new Map<string, number>()
+
+function handleProgress(event: OrchestratorEvent): void {
+  const ts = new Date().toISOString().slice(11, 23)
+  switch (event.type) {
+    case 'agent_start':
+      startTimes.set(event.agent ?? '', Date.now())
+      console.log(`[${ts}] AGENT START → ${event.agent}`)
+      break
+    case 'agent_complete': {
+      const elapsed = Date.now() - (startTimes.get(event.agent ?? '') ?? Date.now())
+      console.log(`[${ts}] AGENT DONE ← ${event.agent} (${elapsed}ms)`)
+      break
+    }
+    case 'task_start':
+      console.log(`[${ts}] TASK START ↓ ${event.task}`)
+      break
+    case 'task_complete':
+      console.log(`[${ts}] TASK DONE ↑ ${event.task}`)
+      break
+    case 'message':
+      console.log(`[${ts}] MESSAGE • ${event.agent} → (team)`)
+      break
+    case 'error':
+      console.error(`[${ts}] ERROR ✗ agent=${event.agent} task=${event.task}`)
+      if (event.data instanceof Error) console.error(`  ${event.data.message}`)
+      break
+  }
+}
+
+// ---------------------------------------------------------------------------
+// Orchestrate
+// ---------------------------------------------------------------------------
+const orchestrator = new OpenMultiAgent({
+  defaultModel: 'gpt-4o',
+  defaultProvider: 'copilot',
+  maxConcurrency: 1,
+  onProgress: handleProgress,
+})
+
+const team = orchestrator.createTeam('api-team', {
+  name: 'api-team',
+  agents: [architect, developer, reviewer],
+  sharedMemory: true,
+  maxConcurrency: 1,
+})
+
+console.log(`Team "${team.name}" created with agents: ${team.getAgents().map(a => a.name).join(', ')}`)
+console.log('\nStarting team run...\n')
+console.log('='.repeat(60))
+
+const goal = `Create a minimal Express.js REST API in /tmp/copilot-api/ with:
+- GET /health → { status: "ok" }
+- GET /users → returns a hardcoded array of 2 user objects
+- POST /users → accepts { name, email } body, logs it, returns 201
+- Proper error handling middleware
+- The server should listen on port 3001
+- Include a package.json with the required dependencies`
+
+const result = await orchestrator.runTeam(team, goal)
+
+console.log('\n' + '='.repeat(60))
+
+// ---------------------------------------------------------------------------
+// Results
+// ---------------------------------------------------------------------------
+console.log('\nTeam run complete.')
+console.log(`Success: ${result.success}`)
+console.log(`Total tokens — input: ${result.totalTokenUsage.input_tokens}, output: ${result.totalTokenUsage.output_tokens}`)
+
+console.log('\nPer-agent results:')
+for (const [agentName, agentResult] of result.agentResults) {
+  const status = agentResult.success ? 'OK' : 'FAILED'
+  const tools = agentResult.toolCalls.length
+  console.log(`  ${agentName.padEnd(12)} [${status}] tool_calls=${tools}`)
+  if (!agentResult.success) {
+    console.log(`    Error: ${agentResult.output.slice(0, 120)}`)
+  }
+}
+
+const developerResult = result.agentResults.get('developer')
+if (developerResult?.success) {
+  console.log('\nDeveloper output (last 600 chars):')
+  console.log('─'.repeat(60))
+  const out = developerResult.output
+  console.log(out.length > 600 ? '...' + out.slice(-600) : out)
+  console.log('─'.repeat(60))
+}
+
+const reviewerResult = result.agentResults.get('reviewer')
+if (reviewerResult?.success) {
+  console.log('\nReviewer output:')
+  console.log('─'.repeat(60))
+  console.log(reviewerResult.output)
+  console.log('─'.repeat(60))
+}
@@ -1,11 +1,11 @@
 /**
- * Example 18 — Multi-Agent Team Collaboration with DeepSeek
+ * Multi-Agent Team Collaboration with DeepSeek
  *
  * Three specialized agents (architect, developer, reviewer) collaborate via `runTeam()`
  * to build a minimal Express.js REST API. Every agent uses DeepSeek's flagship model.
  *
  * Run:
- *   npx tsx examples/18-deepseek.ts
+ *   npx tsx examples/providers/deepseek.ts
  *
  * Prerequisites:
  *   DEEPSEEK_API_KEY environment variable must be set.
@@ -15,8 +15,8 @@
  *   deepseek-reasoner — DeepSeek-V3 (thinking mode, for complex reasoning)
  */

-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'

 // ---------------------------------------------------------------------------
 // Agent definitions (all using deepseek-chat)
@@ -0,0 +1,161 @@
+/**
+ * Multi-Agent Team Collaboration with Google Gemini
+ *
+ * Three specialized agents (architect, developer, reviewer) collaborate via `runTeam()`
+ * to build a minimal Express.js REST API. Every agent uses Google's Gemini models
+ * via the official `@google/genai` SDK.
+ *
+ * Run:
+ *   npx tsx examples/providers/gemini.ts
+ *
+ * Prerequisites:
+ *   GEMINI_API_KEY environment variable must be set.
+ *   `@google/genai` is an optional peer dependency — install it first:
+ *     npm install @google/genai
+ *
+ * Available models (subset):
+ *   gemini-2.5-flash — fast & cheap, good for routine coding tasks
+ *   gemini-2.5-pro — more capable, higher latency, larger context
+ * See https://ai.google.dev/gemini-api/docs/models for the full list.
+ */
+
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'
+
+// ---------------------------------------------------------------------------
+// Agent definitions (mixing pro and flash for a cost/capability balance)
+// ---------------------------------------------------------------------------
+const architect: AgentConfig = {
+  name: 'architect',
+  model: 'gemini-2.5-pro',
+  provider: 'gemini',
+  systemPrompt: `You are a software architect with deep experience in Node.js and REST API design.
+Your job is to design clear, production-quality API contracts and file/directory structures.
+Output concise plans in markdown — no unnecessary prose.`,
+  tools: ['bash', 'file_write'],
+  maxTurns: 5,
+  temperature: 0.2,
+}
+
+const developer: AgentConfig = {
+  name: 'developer',
+  model: 'gemini-2.5-flash',
+  provider: 'gemini',
+  systemPrompt: `You are a TypeScript/Node.js developer. You implement what the architect specifies.
+Write clean, runnable code with proper error handling. Use the tools to write files and run tests.`,
+  tools: ['bash', 'file_read', 'file_write', 'file_edit'],
+  maxTurns: 12,
+  temperature: 0.1,
+}
+
+const reviewer: AgentConfig = {
+  name: 'reviewer',
+  model: 'gemini-2.5-flash',
+  provider: 'gemini',
+  systemPrompt: `You are a senior code reviewer. Review code for correctness, security, and clarity.
+Provide a structured review with: LGTM items, suggestions, and any blocking issues.
+Read files using the tools before reviewing.`,
+  tools: ['bash', 'file_read', 'grep'],
+  maxTurns: 5,
+  temperature: 0.3,
+}
+
+// ---------------------------------------------------------------------------
+// Progress tracking
+// ---------------------------------------------------------------------------
+const startTimes = new Map<string, number>()
+
+function handleProgress(event: OrchestratorEvent): void {
+  const ts = new Date().toISOString().slice(11, 23)
+  switch (event.type) {
+    case 'agent_start':
+      startTimes.set(event.agent ?? '', Date.now())
+      console.log(`[${ts}] AGENT START → ${event.agent}`)
+      break
+    case 'agent_complete': {
+      const elapsed = Date.now() - (startTimes.get(event.agent ?? '') ?? Date.now())
+      console.log(`[${ts}] AGENT DONE ← ${event.agent} (${elapsed}ms)`)
+      break
+    }
+    case 'task_start':
+      console.log(`[${ts}] TASK START ↓ ${event.task}`)
+      break
+    case 'task_complete':
+      console.log(`[${ts}] TASK DONE ↑ ${event.task}`)
+      break
+    case 'message':
+      console.log(`[${ts}] MESSAGE • ${event.agent} → (team)`)
+      break
+    case 'error':
+      console.error(`[${ts}] ERROR ✗ agent=${event.agent} task=${event.task}`)
+      if (event.data instanceof Error) console.error(`  ${event.data.message}`)
+      break
+  }
+}
+
+// ---------------------------------------------------------------------------
+// Orchestrate
+// ---------------------------------------------------------------------------
+const orchestrator = new OpenMultiAgent({
+  defaultModel: 'gemini-2.5-flash',
+  defaultProvider: 'gemini',
+  maxConcurrency: 1,
+  onProgress: handleProgress,
+})
+
+const team = orchestrator.createTeam('api-team', {
+  name: 'api-team',
+  agents: [architect, developer, reviewer],
+  sharedMemory: true,
+  maxConcurrency: 1,
+})
+
+console.log(`Team "${team.name}" created with agents: ${team.getAgents().map(a => a.name).join(', ')}`)
+console.log('\nStarting team run...\n')
+console.log('='.repeat(60))
+
+const goal = `Create a minimal Express.js REST API in /tmp/gemini-api/ with:
+- GET /health → { status: "ok" }
+- GET /users → returns a hardcoded array of 2 user objects
+- POST /users → accepts { name, email } body, logs it, returns 201
+- Proper error handling middleware
+- The server should listen on port 3001
+- Include a package.json with the required dependencies`
+
+const result = await orchestrator.runTeam(team, goal)
+
+console.log('\n' + '='.repeat(60))
+
+// ---------------------------------------------------------------------------
+// Results
+// ---------------------------------------------------------------------------
+console.log('\nTeam run complete.')
+console.log(`Success: ${result.success}`)
+console.log(`Total tokens — input: ${result.totalTokenUsage.input_tokens}, output: ${result.totalTokenUsage.output_tokens}`)
+
+console.log('\nPer-agent results:')
+for (const [agentName, agentResult] of result.agentResults) {
+  const status = agentResult.success ? 'OK' : 'FAILED'
+  const tools = agentResult.toolCalls.length
+  console.log(`  ${agentName.padEnd(12)} [${status}] tool_calls=${tools}`)
+  if (!agentResult.success) {
+    console.log(`    Error: ${agentResult.output.slice(0, 120)}`)
+  }
+}
+
+const developerResult = result.agentResults.get('developer')
+if (developerResult?.success) {
+  console.log('\nDeveloper output (last 600 chars):')
+  console.log('─'.repeat(60))
+  const out = developerResult.output
+  console.log(out.length > 600 ? '...' + out.slice(-600) : out)
+  console.log('─'.repeat(60))
+}
+
+const reviewerResult = result.agentResults.get('reviewer')
+if (reviewerResult?.success) {
+  console.log('\nReviewer output:')
+  console.log('─'.repeat(60))
+  console.log(reviewerResult.output)
+  console.log('─'.repeat(60))
+}
@@ -1,5 +1,5 @@
 /**
- * Example 08 — Gemma 4 Local (100% Local, Zero API Cost)
+ * Gemma 4 Local (100% Local, Zero API Cost)
  *
  * Demonstrates both execution modes with a fully local Gemma 4 model via
  * Ollama. No cloud API keys needed — everything runs on your machine.
@@ -13,7 +13,7 @@
  * Gemma 4 e2b (5.1B params) handles both reliably.
  *
  * Run:
- *   no_proxy=localhost npx tsx examples/08-gemma4-local.ts
+ *   no_proxy=localhost npx tsx examples/providers/gemma4-local.ts
  *
  * Prerequisites:
  *   1. Ollama >= 0.20.0 installed and running: https://ollama.com
@@ -26,8 +26,8 @@
  * through the proxy.
  */

-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent, Task } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent, Task } from '../../src/types.js'

 // ---------------------------------------------------------------------------
 // Configuration — change this to match your Ollama setup
@@ -1,18 +1,18 @@
 /**
- * Example 12 — Multi-Agent Team Collaboration with Grok (xAI)
+ * Multi-Agent Team Collaboration with Grok (xAI)
  *
  * Three specialized agents (architect, developer, reviewer) collaborate via `runTeam()`
  * to build a minimal Express.js REST API. Every agent uses Grok's coding-optimized model.
  *
  * Run:
- *   npx tsx examples/12-grok.ts
+ *   npx tsx examples/providers/grok.ts
  *
  * Prerequisites:
  *   XAI_API_KEY environment variable must be set.
  */

-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'

 // ---------------------------------------------------------------------------
 // Agent definitions (all using grok-code-fast-1)
@@ -1,11 +1,11 @@
 /**
- * Example 19 — Multi-Agent Team Collaboration with Groq
+ * Multi-Agent Team Collaboration with Groq
  *
  * Three specialized agents (architect, developer, reviewer) collaborate via `runTeam()`
  * to build a minimal Express.js REST API. Every agent uses Groq via the OpenAI-compatible adapter.
  *
  * Run:
- *   npx tsx examples/19-groq.ts
+ *   npx tsx examples/providers/groq.ts
  *
  * Prerequisites:
  *   GROQ_API_KEY environment variable must be set.
@@ -15,8 +15,8 @@
  *   deepseek-r1-distill-llama-70b — Groq reasoning model
  */

-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'

 // ---------------------------------------------------------------------------
 // Agent definitions (all using Groq via the OpenAI-compatible adapter)
@@ -1,11 +1,11 @@
 /**
- * Example 17 — Multi-Agent Team Collaboration with MiniMax
+ * Multi-Agent Team Collaboration with MiniMax
  *
  * Three specialized agents (architect, developer, reviewer) collaborate via `runTeam()`
  * to build a minimal Express.js REST API. Every agent uses MiniMax's flagship model.
  *
  * Run:
- *   npx tsx examples/17-minimax.ts
+ *   npx tsx examples/providers/minimax.ts
  *
  * Prerequisites:
  *   MINIMAX_API_KEY environment variable must be set.
@@ -16,8 +16,8 @@
  *   China mainland: https://api.minimaxi.com/v1 (set MINIMAX_BASE_URL)
  */

-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent } from '../../src/types.js'

 // ---------------------------------------------------------------------------
 // Agent definitions (all using MiniMax-M2.7)
@@ -1,5 +1,5 @@
 /**
- * Example 06 — Local Model + Cloud Model Team (Ollama + Claude)
+ * Local Model + Cloud Model Team (Ollama + Claude)
  *
  * Demonstrates mixing a local model served by Ollama with a cloud model
  * (Claude) in the same task pipeline. The key technique is using
@@ -14,7 +14,7 @@
  * Just change the baseURL and model name below.
  *
  * Run:
- *   npx tsx examples/06-local-model.ts
+ *   npx tsx examples/providers/ollama.ts
  *
  * Prerequisites:
  *   1. Ollama installed and running: https://ollama.com
@@ -22,8 +22,8 @@
  *   3. ANTHROPIC_API_KEY env var must be set.
  */

-import { OpenMultiAgent } from '../src/index.js'
-import type { AgentConfig, OrchestratorEvent, Task } from '../src/types.js'
+import { OpenMultiAgent } from '../../src/index.js'
+import type { AgentConfig, OrchestratorEvent, Task } from '../../src/types.js'

 // ---------------------------------------------------------------------------
 // Agents