docs: merge Gemma 4 examples, reorder README sections
- Merge examples 08 (runTasks) and 09 (runTeam) into a single Gemma 4 example
- Renumber: structured output → 09, task retry → 10
- Move Author and Contributors sections to bottom in both READMEs
- Add Author section to English README
parent 27c0103736
commit 17546fd93e

README.md — 23 changed lines
@@ -100,12 +100,6 @@ Tokens: 12847 output tokens
| Auto-orchestrated team | `runTeam()` | Give a goal, framework plans and executes |
| Explicit pipeline | `runTasks()` | You define the task graph and assignments |

## Contributors

<a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent" />
</a>

## Examples

All examples are runnable scripts in [`examples/`](./examples/). Run any of them with `npx tsx`:
@@ -123,10 +117,9 @@ npx tsx examples/01-single-agent.ts
| [05 — Copilot](examples/05-copilot-test.ts) | GitHub Copilot as an LLM provider |
| [06 — Local Model](examples/06-local-model.ts) | Ollama + Claude in one pipeline via `baseURL` (works with vLLM, LM Studio, etc.) |
| [07 — Fan-Out / Aggregate](examples/07-fan-out-aggregate.ts) | `runParallel()` MapReduce — 3 analysts in parallel, then synthesize |
| [08 — Gemma 4 Local](examples/08-gemma4-local.ts) | Pure-local Gemma 4 agent team with tool-calling — zero API cost |
| [09 — Gemma 4 Auto-Orchestration](examples/09-gemma4-auto-orchestration.ts) | `runTeam()` with Gemma 4 as coordinator — auto task decomposition, fully local |
| [10 — Structured Output](examples/10-structured-output.ts) | `outputSchema` (Zod) on AgentConfig — validated JSON via `result.structured` |
| [11 — Task Retry](examples/11-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` with `task_retry` progress events |
| [08 — Gemma 4 Local](examples/08-gemma4-local.ts) | `runTasks()` + `runTeam()` with local Gemma 4 via Ollama — zero API cost |
| [09 — Structured Output](examples/09-structured-output.ts) | `outputSchema` (Zod) on AgentConfig — validated JSON via `result.structured` |
| [10 — Task Retry](examples/10-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` with `task_retry` progress events |

## Architecture
@@ -200,6 +193,16 @@ Issues, feature requests, and PRs are welcome. Some areas where contributions wo
- **Examples** — Real-world workflows and use cases.
- **Documentation** — Guides, tutorials, and API docs.

## Author

> JackChen — Ex PM (¥100M+ revenue), now indie builder. Follow on [X](https://x.com/JackChen_x) for AI Agent insights.

## Contributors

<a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent" />
</a>

## Star History

<a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
README_zh.md — 25 changed lines
@@ -92,10 +92,6 @@ Success: true
Tokens: 12847 output tokens
```

## Author

> JackChen — former WPS product manager, now an independent founder. Follow [「杰克西|硅基杠杆」](https://www.xiaohongshu.com/user/profile/5a1bdc1e4eacab4aa39ea6d6) on Xiaohongshu for my ongoing AI Agent insights and thinking.

## Three Run Modes

| Mode | Method | When to use |
@@ -104,12 +100,6 @@ Tokens: 12847 output tokens
| Auto-orchestrated team | `runTeam()` | Give a goal; the framework plans and executes automatically |
| Explicit task pipeline | `runTasks()` | You define the task graph and assignments yourself |

## Contributors

<a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent" />
</a>

## Examples

All examples are runnable scripts in the [`examples/`](./examples/) directory. Run them with `npx tsx`:
@@ -127,8 +117,9 @@ npx tsx examples/01-single-agent.ts
| [05 — Copilot](examples/05-copilot-test.ts) | GitHub Copilot as an LLM provider |
| [06 — Local Model](examples/06-local-model.ts) | Ollama + Claude mixed pipeline via `baseURL` (compatible with vLLM, LM Studio, etc.) |
| [07 — Fan-Out / Aggregate](examples/07-fan-out-aggregate.ts) | `runParallel()` MapReduce — 3 analysts in parallel, then synthesize |
| [08 — Gemma 4 Local](examples/08-gemma4-local.ts) | Pure-local Gemma 4 agent team + tool-calling — zero API cost |
| [09 — Gemma 4 Auto-Orchestration](examples/09-gemma4-auto-orchestration.ts) | `runTeam()` with Gemma 4 as coordinator — automatic task decomposition, fully local |
| [08 — Gemma 4 Local](examples/08-gemma4-local.ts) | `runTasks()` + `runTeam()` with local Gemma 4 via Ollama — zero API cost |
| [09 — Structured Output](examples/09-structured-output.ts) | `outputSchema` (Zod) — validated JSON output via `result.structured` |
| [10 — Task Retry](examples/10-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` + `task_retry` progress events |

## Architecture
@@ -202,6 +193,16 @@ npx tsx examples/01-single-agent.ts
- **Examples** — Real-world workflows and use cases.
- **Documentation** — Guides, tutorials, and API docs.

## Author

> JackChen — former WPS product manager, now an independent founder. Follow [「杰克西|硅基杠杆」](https://www.xiaohongshu.com/user/profile/5a1bdc1e4eacab4aa39ea6d6) on Xiaohongshu for my ongoing AI Agent insights and thinking.

## Contributors

<a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent" />
</a>

## Star History

<a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
examples/08-gemma4-local.ts

@@ -1,15 +1,16 @@
/**
 * Example 08 — Gemma 4 Local Agent Team (100% Local, Zero API Cost)
 * Example 08 — Gemma 4 Local (100% Local, Zero API Cost)
 *
 * Demonstrates a fully local multi-agent team using Google's Gemma 4 via
 * Demonstrates both execution modes with a fully local Gemma 4 model via
 * Ollama. No cloud API keys needed — everything runs on your machine.
 *
 * Two agents collaborate through a task pipeline:
 * - researcher: uses bash + file_write to gather system info and write a report
 * - summarizer: uses file_read to read the report and produce a concise summary
 * Part 1 — runTasks(): explicit task pipeline (researcher → summarizer)
 * Part 2 — runTeam(): auto-orchestration where Gemma 4 acts as coordinator,
 *          decomposes the goal into tasks, and synthesises the final result
 *
 * This pattern works with any Ollama model that supports tool-calling.
 * Gemma 4 (released 2026-04-02) has native tool-calling support.
 * This is the hardest test for a local model — runTeam() requires it to
 * produce valid JSON for task decomposition AND do tool-calling for execution.
 * Gemma 4 e2b (5.1B params) handles both reliably.
 *
 * Run:
 *   no_proxy=localhost npx tsx examples/08-gemma4-local.ts
@@ -38,46 +39,31 @@ const OLLAMA_BASE_URL = 'http://localhost:11434/v1'
const OUTPUT_DIR = '/tmp/gemma4-demo'

// ---------------------------------------------------------------------------
// Agents — both use Gemma 4 locally
// Agents
// ---------------------------------------------------------------------------

/**
 * Researcher — gathers system information using shell commands.
 */
const researcher: AgentConfig = {
  name: 'researcher',
  model: OLLAMA_MODEL,
  provider: 'openai',
  baseURL: OLLAMA_BASE_URL,
  apiKey: 'ollama', // placeholder — Ollama ignores this, but the OpenAI SDK requires a non-empty value
  systemPrompt: `You are a system researcher. Your job is to gather information
about the current machine using shell commands and write a structured report.

Use the bash tool to run commands like: uname -a, df -h, uptime, and similar
non-destructive read-only commands.
On macOS you can also use: sw_vers, sysctl -n hw.memsize.
On Linux you can also use: cat /etc/os-release, free -h.

Then use file_write to save a Markdown report to ${OUTPUT_DIR}/system-report.md.
The report should have sections: OS, Hardware, Disk, and Uptime.
Be concise — one or two lines per section is enough.`,
  systemPrompt: `You are a system researcher. Use bash to run non-destructive,
read-only commands (uname -a, sw_vers, df -h, uptime, etc.) and report results.
Use file_write to save reports when asked.`,
  tools: ['bash', 'file_write'],
  maxTurns: 8,
}

/**
 * Summarizer — reads the report and writes a one-paragraph executive summary.
 */
const summarizer: AgentConfig = {
  name: 'summarizer',
  model: OLLAMA_MODEL,
  provider: 'openai',
  baseURL: OLLAMA_BASE_URL,
  apiKey: 'ollama',
  systemPrompt: `You are a technical writer. Read the system report file provided,
then produce a concise one-paragraph executive summary (3-5 sentences).
Focus on the key highlights: what OS, how much RAM, disk status, and uptime.`,
  tools: ['file_read'],
  systemPrompt: `You are a technical writer. Read files and produce concise,
structured Markdown summaries. Use file_write to save reports when asked.`,
  tools: ['file_read', 'file_write'],
  maxTurns: 4,
}
@@ -85,23 +71,17 @@ Focus on the key highlights: what OS, how much RAM, disk status, and uptime.`,
// Progress handler
// ---------------------------------------------------------------------------

const taskTimes = new Map<string, number>()

function handleProgress(event: OrchestratorEvent): void {
  const ts = new Date().toISOString().slice(11, 23)

  switch (event.type) {
    case 'task_start': {
      taskTimes.set(event.task ?? '', Date.now())
      const task = event.data as Task | undefined
      console.log(`[${ts}] TASK START "${task?.title ?? event.task}" → ${task?.assignee ?? '?'}`)
      break
    }
    case 'task_complete': {
      const elapsed = Date.now() - (taskTimes.get(event.task ?? '') ?? Date.now())
      console.log(`[${ts}] TASK DONE "${event.task}" in ${(elapsed / 1000).toFixed(1)}s`)
    case 'task_complete':
      console.log(`[${ts}] TASK DONE "${event.task}"`)
      break
    }
    case 'agent_start':
      console.log(`[${ts}] AGENT START ${event.agent}`)
      break
|
|||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Orchestrator + Team
|
||||
// ---------------------------------------------------------------------------
|
||||
// ═══════════════════════════════════════════════════════════════════════════
|
||||
// Part 1: runTasks() — Explicit task pipeline
|
||||
// ═══════════════════════════════════════════════════════════════════════════
|
||||
|
||||
const orchestrator = new OpenMultiAgent({
|
||||
console.log('Part 1: runTasks() — Explicit Pipeline')
|
||||
console.log('='.repeat(60))
|
||||
console.log(` model → ${OLLAMA_MODEL} via Ollama`)
|
||||
console.log(` pipeline → researcher gathers info → summarizer writes summary`)
|
||||
console.log()
|
||||
|
||||
const orchestrator1 = new OpenMultiAgent({
|
||||
defaultModel: OLLAMA_MODEL,
|
||||
maxConcurrency: 1, // run agents sequentially — local model can only serve one at a time
|
||||
maxConcurrency: 1, // local model serves one request at a time
|
||||
onProgress: handleProgress,
|
||||
})
|
||||
|
||||
const team = orchestrator.createTeam('gemma4-team', {
|
||||
name: 'gemma4-team',
|
||||
const team1 = orchestrator1.createTeam('explicit', {
|
||||
name: 'explicit',
|
||||
agents: [researcher, summarizer],
|
||||
sharedMemory: true,
|
||||
})
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Task pipeline: research → summarize
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const tasks: Array<{
|
||||
title: string
|
||||
description: string
|
||||
assignee?: string
|
||||
dependsOn?: string[]
|
||||
}> = [
|
||||
const tasks = [
|
||||
{
|
||||
title: 'Gather system information',
|
||||
description: `Use bash to run system info commands (uname -a, sw_vers, sysctl, df -h, uptime).
|
||||
|
|
@@ -156,48 +133,60 @@ Produce a concise one-paragraph executive summary of the system information.`,
  },
]

// ---------------------------------------------------------------------------
// Run
// ---------------------------------------------------------------------------
const start1 = Date.now()
const result1 = await orchestrator1.runTasks(team1, tasks)

console.log('Gemma 4 Local Agent Team — Zero API Cost')
console.log('='.repeat(60))
console.log(`  model      → ${OLLAMA_MODEL} via Ollama`)
console.log(`  researcher → bash + file_write`)
console.log(`  summarizer → file_read`)
console.log(`  output dir → ${OUTPUT_DIR}`)
console.log()
console.log('Pipeline: researcher gathers info → summarizer writes summary')
console.log('='.repeat(60))
console.log(`\nSuccess: ${result1.success}  Time: ${((Date.now() - start1) / 1000).toFixed(1)}s`)
console.log(`Tokens — input: ${result1.totalTokenUsage.input_tokens}, output: ${result1.totalTokenUsage.output_tokens}`)

const start = Date.now()
const result = await orchestrator.runTasks(team, tasks)
const totalTime = Date.now() - start

// ---------------------------------------------------------------------------
// Summary
// ---------------------------------------------------------------------------

console.log('\n' + '='.repeat(60))
console.log('Pipeline complete.\n')
console.log(`Overall success: ${result.success}`)
console.log(`Total time: ${(totalTime / 1000).toFixed(1)}s`)
console.log(`Tokens — input: ${result.totalTokenUsage.input_tokens}, output: ${result.totalTokenUsage.output_tokens}`)

console.log('\nPer-agent results:')
for (const [name, r] of result.agentResults) {
  const icon = r.success ? 'OK  ' : 'FAIL'
  const tools = r.toolCalls.map(c => c.toolName).join(', ')
  console.log(`  [${icon}] ${name.padEnd(12)}  tools: ${tools || '(none)'}`)
}

// Print the summarizer's output
const summary = result.agentResults.get('summarizer')
const summary = result1.agentResults.get('summarizer')
if (summary?.success) {
  console.log('\nExecutive Summary (from local Gemma 4):')
  console.log('\nSummary (from local Gemma 4):')
  console.log('-'.repeat(60))
  console.log(summary.output)
  console.log('-'.repeat(60))
}

// ═══════════════════════════════════════════════════════════════════════════
// Part 2: runTeam() — Auto-orchestration (Gemma 4 as coordinator)
// ═══════════════════════════════════════════════════════════════════════════

console.log('\n\nPart 2: runTeam() — Auto-Orchestration')
console.log('='.repeat(60))
console.log(`  coordinator → auto-created by runTeam(), also Gemma 4`)
console.log(`  goal        → given in natural language, framework plans everything`)
console.log()

const orchestrator2 = new OpenMultiAgent({
  defaultModel: OLLAMA_MODEL,
  defaultProvider: 'openai',
  defaultBaseURL: OLLAMA_BASE_URL,
  defaultApiKey: 'ollama',
  maxConcurrency: 1,
  onProgress: handleProgress,
})

const team2 = orchestrator2.createTeam('auto', {
  name: 'auto',
  agents: [researcher, summarizer],
  sharedMemory: true,
})

const goal = `Check this machine's Node.js version, npm version, and OS info,
then write a short Markdown summary report to /tmp/gemma4-auto/report.md`

const start2 = Date.now()
const result2 = await orchestrator2.runTeam(team2, goal)

console.log(`\nSuccess: ${result2.success}  Time: ${((Date.now() - start2) / 1000).toFixed(1)}s`)
console.log(`Tokens — input: ${result2.totalTokenUsage.input_tokens}, output: ${result2.totalTokenUsage.output_tokens}`)

const coordResult = result2.agentResults.get('coordinator')
if (coordResult?.success) {
  console.log('\nFinal synthesis (from local Gemma 4 coordinator):')
  console.log('-'.repeat(60))
  console.log(coordResult.output)
  console.log('-'.repeat(60))
}

console.log('\nAll processing done locally. $0 API cost.')
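The `dependsOn` wiring that `runTasks()` relies on amounts to a topological pass over the task graph. A minimal sketch of that idea (hypothetical helper names — the framework's actual scheduler is not shown in this diff):

```typescript
// Sketch of dependsOn-driven ordering (Kahn's algorithm). SimpleTask and
// scheduleOrder are illustrative names, not the framework's real API.
interface SimpleTask {
  title: string
  dependsOn?: string[] // titles of tasks that must finish first
}

// Returns task titles so every task comes after its dependencies;
// throws if the graph contains a cycle.
function scheduleOrder(tasks: SimpleTask[]): string[] {
  const indegree = new Map<string, number>()
  const dependents = new Map<string, string[]>()
  for (const t of tasks) {
    indegree.set(t.title, t.dependsOn?.length ?? 0)
    for (const dep of t.dependsOn ?? []) {
      dependents.set(dep, [...(dependents.get(dep) ?? []), t.title])
    }
  }
  const ready = [...indegree].filter(([, d]) => d === 0).map(([t]) => t)
  const order: string[] = []
  while (ready.length > 0) {
    const title = ready.shift()!
    order.push(title)
    for (const next of dependents.get(title) ?? []) {
      const d = indegree.get(next)! - 1
      indegree.set(next, d)
      if (d === 0) ready.push(next)
    }
  }
  if (order.length !== tasks.length) throw new Error('dependency cycle')
  return order
}

console.log(scheduleOrder([
  { title: 'Summarize the report', dependsOn: ['Gather system information'] },
  { title: 'Gather system information' },
]))
// → [ 'Gather system information', 'Summarize the report' ]
```

With `maxConcurrency: 1`, executing tasks in this order is all the scheduling a local single-request model needs.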
examples/09-gemma4-auto-orchestration.ts (deleted)

@@ -1,162 +0,0 @@
/**
 * Example 09 — Gemma 4 Auto-Orchestration (runTeam, 100% Local)
 *
 * Demonstrates the framework's key feature — automatic task decomposition —
 * powered entirely by a local Gemma 4 model. No cloud API needed.
 *
 * What happens:
 *   1. A Gemma 4 "coordinator" receives the goal + agent roster
 *   2. It outputs a structured JSON task array (title, description, assignee, dependsOn)
 *   3. The framework resolves dependencies, schedules tasks, and runs agents
 *   4. The coordinator synthesises all task results into a final answer
 *
 * This is the hardest test for a local model — it must produce valid JSON
 * for task decomposition AND do tool-calling for actual task execution.
 * Gemma 4 e2b (5.1B params) handles both reliably.
 *
 * Run:
 *   no_proxy=localhost npx tsx examples/09-gemma4-auto-orchestration.ts
 *
 * Prerequisites:
 *   1. Ollama >= 0.20.0 installed and running: https://ollama.com
 *   2. Pull the model: ollama pull gemma4:e2b
 *   3. No API keys needed!
 *
 * Note: The no_proxy=localhost prefix is needed if you have an HTTP proxy
 * configured, since the OpenAI SDK would otherwise route Ollama requests
 * through the proxy.
 */

import { OpenMultiAgent } from '../src/index.js'
import type { AgentConfig, OrchestratorEvent, Task } from '../src/types.js'

// ---------------------------------------------------------------------------
// Configuration
// ---------------------------------------------------------------------------

// See available tags at https://ollama.com/library/gemma4
const OLLAMA_MODEL = 'gemma4:e2b' // or 'gemma4:e4b', 'gemma4:26b'
const OLLAMA_BASE_URL = 'http://localhost:11434/v1'

// ---------------------------------------------------------------------------
// Agents — the coordinator is created automatically by runTeam()
// ---------------------------------------------------------------------------

const researcher: AgentConfig = {
  name: 'researcher',
  model: OLLAMA_MODEL,
  provider: 'openai',
  baseURL: OLLAMA_BASE_URL,
  apiKey: 'ollama',
  systemPrompt: `You are a system researcher. Use bash to run non-destructive,
read-only commands and report the results concisely.`,
  tools: ['bash'],
  maxTurns: 4,
}

const writer: AgentConfig = {
  name: 'writer',
  model: OLLAMA_MODEL,
  provider: 'openai',
  baseURL: OLLAMA_BASE_URL,
  apiKey: 'ollama',
  systemPrompt: `You are a technical writer. Use file_write to create clear,
structured Markdown reports based on the information provided.`,
  tools: ['file_write'],
  maxTurns: 4,
}

// ---------------------------------------------------------------------------
// Progress handler
// ---------------------------------------------------------------------------

function handleProgress(event: OrchestratorEvent): void {
  const ts = new Date().toISOString().slice(11, 23)
  switch (event.type) {
    case 'task_start': {
      const task = event.data as Task | undefined
      console.log(`[${ts}] TASK START "${task?.title ?? event.task}" → ${task?.assignee ?? '?'}`)
      break
    }
    case 'task_complete':
      console.log(`[${ts}] TASK DONE "${event.task}"`)
      break
    case 'agent_start':
      console.log(`[${ts}] AGENT START ${event.agent}`)
      break
    case 'agent_complete':
      console.log(`[${ts}] AGENT DONE ${event.agent}`)
      break
    case 'error':
      console.error(`[${ts}] ERROR ${event.agent ?? ''} task=${event.task ?? '?'}`)
      break
  }
}

// ---------------------------------------------------------------------------
// Orchestrator — defaultModel is used for the coordinator agent
// ---------------------------------------------------------------------------

const orchestrator = new OpenMultiAgent({
  defaultModel: OLLAMA_MODEL,
  defaultProvider: 'openai',
  defaultBaseURL: OLLAMA_BASE_URL,
  defaultApiKey: 'ollama',
  maxConcurrency: 1, // local model serves one request at a time
  onProgress: handleProgress,
})

const team = orchestrator.createTeam('gemma4-auto', {
  name: 'gemma4-auto',
  agents: [researcher, writer],
  sharedMemory: true,
})

// ---------------------------------------------------------------------------
// Give a goal — the framework handles the rest
// ---------------------------------------------------------------------------

const goal = `Check this machine's Node.js version, npm version, and OS info,
then write a short Markdown summary report to /tmp/gemma4-auto/report.md`

console.log('Gemma 4 Auto-Orchestration — Zero API Cost')
console.log('='.repeat(60))
console.log(`  model       → ${OLLAMA_MODEL} via Ollama (all agents + coordinator)`)
console.log(`  researcher  → bash`)
console.log(`  writer      → file_write`)
console.log(`  coordinator → auto-created by runTeam()`)
console.log()
console.log(`Goal: ${goal.replace(/\n/g, ' ').trim()}`)
console.log('='.repeat(60))

const start = Date.now()
const result = await orchestrator.runTeam(team, goal)
const totalTime = Date.now() - start

// ---------------------------------------------------------------------------
// Results
// ---------------------------------------------------------------------------

console.log('\n' + '='.repeat(60))
console.log('Pipeline complete.\n')
console.log(`Overall success: ${result.success}`)
console.log(`Total time: ${(totalTime / 1000).toFixed(1)}s`)
console.log(`Tokens — input: ${result.totalTokenUsage.input_tokens}, output: ${result.totalTokenUsage.output_tokens}`)

console.log('\nPer-agent results:')
for (const [name, r] of result.agentResults) {
  const icon = r.success ? 'OK  ' : 'FAIL'
  const tools = r.toolCalls.length > 0 ? r.toolCalls.map(c => c.toolName).join(', ') : '(none)'
  console.log(`  [${icon}] ${name.padEnd(24)}  tools: ${tools}`)
}

// Print the coordinator's final synthesis
const coordResult = result.agentResults.get('coordinator')
if (coordResult?.success) {
  console.log('\nFinal synthesis (from local Gemma 4 coordinator):')
  console.log('-'.repeat(60))
  console.log(coordResult.output)
  console.log('-'.repeat(60))
}

console.log('\nAll processing done locally. $0 API cost.')
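The deleted file's docblock says the coordinator emits "a structured JSON task array (title, description, assignee, dependsOn)". That parse-and-check step can be sketched as follows — the field shape comes from the docblock, while `PlannedTask` and `parsePlan` are hypothetical names, not the framework's internals:

```typescript
// Shape of the coordinator's decomposition output, per the docblock above:
// a JSON array of { title, description, assignee, dependsOn? }.
interface PlannedTask {
  title: string
  description: string
  assignee: string
  dependsOn?: string[]
}

// Hypothetical guard: parse the model's raw text and reject anything that
// is not a well-formed task array. This is the step a small local model
// most often fails, hence "the hardest test".
function parsePlan(raw: string): PlannedTask[] {
  const data: unknown = JSON.parse(raw)
  if (!Array.isArray(data)) throw new Error('plan must be a JSON array')
  for (const t of data) {
    if (typeof t.title !== 'string' || typeof t.description !== 'string' || typeof t.assignee !== 'string') {
      throw new Error(`malformed task: ${JSON.stringify(t)}`)
    }
  }
  return data as PlannedTask[]
}

const plan = parsePlan(`[
  {"title": "Check versions", "description": "node -v, npm -v", "assignee": "researcher"},
  {"title": "Write report", "description": "Markdown summary", "assignee": "writer", "dependsOn": ["Check versions"]}
]`)
console.log(plan.length) // → 2
```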
examples/10-structured-output.ts → examples/09-structured-output.ts

@@ -1,5 +1,5 @@
/**
 * Example 10 — Structured Output
 * Example 09 — Structured Output
 *
 * Demonstrates `outputSchema` on AgentConfig. The agent's response is
 * automatically parsed as JSON and validated against a Zod schema.
@@ -8,7 +8,7 @@
 * The validated result is available via `result.structured`.
 *
 * Run:
 *   npx tsx examples/10-structured-output.ts
 *   npx tsx examples/09-structured-output.ts
 *
 * Prerequisites:
 *   ANTHROPIC_API_KEY env var must be set.
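The parse-then-validate flow this example's docblock describes can be shown without any dependency. The real example uses Zod for `outputSchema`; the schema fields and helper name below are illustrative assumptions, only the parse/validate/`structured` idea is from the docblock:

```typescript
// Dependency-free sketch of what outputSchema validation does: parse the
// agent's reply as JSON, check it against a schema, expose the typed value.
// (The real example uses Zod; ReviewSchema/toStructured are illustrative.)
type ReviewSchema = { verdict: 'approve' | 'reject'; confidence: number }

function toStructured(output: string): ReviewSchema {
  const data = JSON.parse(output)
  if (data.verdict !== 'approve' && data.verdict !== 'reject') {
    throw new Error('verdict must be "approve" or "reject"')
  }
  if (typeof data.confidence !== 'number' || data.confidence < 0 || data.confidence > 1) {
    throw new Error('confidence must be a number in [0, 1]')
  }
  return data as ReviewSchema
}

const structured = toStructured('{"verdict": "approve", "confidence": 0.92}')
console.log(structured.verdict) // → approve
```

Validation failing loudly here is the point: a schema mismatch surfaces as an error instead of malformed JSON silently flowing downstream.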
examples/11-task-retry.ts → examples/10-task-retry.ts

@@ -1,5 +1,5 @@
/**
 * Example 11 — Task Retry with Exponential Backoff
 * Example 10 — Task Retry with Exponential Backoff
 *
 * Demonstrates `maxRetries`, `retryDelayMs`, and `retryBackoff` on task config.
 * When a task fails, the framework automatically retries with exponential
@@ -10,7 +10,7 @@
 * to retry on failure, and the second task (analysis) depends on it.
 *
 * Run:
 *   npx tsx examples/10-task-retry.ts
 *
 * Prerequisites:
 *   ANTHROPIC_API_KEY env var must be set.
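Exponential backoff from the three options named in the retry example composes as delay = `retryDelayMs` × `retryBackoff`^(attempt − 1). That formula is inferred from the option names, not read from the framework's source, so treat this as a sketch:

```typescript
// Sketch of the retry delay schedule implied by maxRetries / retryDelayMs /
// retryBackoff. Assumption: delay for attempt n is retryDelayMs * retryBackoff^(n-1).
function retryDelays(maxRetries: number, retryDelayMs: number, retryBackoff: number): number[] {
  return Array.from({ length: maxRetries }, (_, attempt) => retryDelayMs * retryBackoff ** attempt)
}

console.log(retryDelays(3, 1000, 2)) // → [ 1000, 2000, 4000 ]
```

A `retryBackoff` of 1 degenerates to fixed-interval retries; values above 1 space retries out, which suits transient LLM/API failures.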