docs: add trace observability to README and example 11

- Add Observability bullet to feature lists (EN + CN)
- Add example 11 row to example tables (EN + CN)
- New examples/11-trace-observability.ts showing onTrace usage
JackChen 2026-04-03 15:27:44 +08:00
parent e696d877e7
commit 604865af13
3 changed files with 137 additions and 0 deletions


@@ -18,6 +18,7 @@ Build AI agent teams that decompose goals into tasks automatically. Define agent
- **Model Agnostic** — Claude, GPT, Gemma 4, and local models (Ollama, vLLM, LM Studio) in the same team. Swap models per agent via `baseURL`.
- **Structured Output** — Add `outputSchema` (Zod) to any agent. Output is parsed as JSON, validated, and auto-retried once on failure. Access typed results via `result.structured`.
- **Task Retry** — Set `maxRetries` on tasks for automatic retry with exponential backoff. Failed attempts accumulate token usage for accurate billing.
- **Observability** — Optional `onTrace` callback emits structured spans for every LLM call, tool execution, task, and agent run — with timing, token usage, and a shared `runId` for correlation. Zero overhead when not subscribed, zero extra dependencies.
- **In-Process Execution** — No subprocess overhead. Everything runs in one Node.js process. Deploy to serverless, Docker, CI/CD.
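As a sketch of what consuming those spans can look like: the event shapes below are simplified assumptions modeled on the fields used in example 11, not the library's full `TraceEvent` union, and `tokenTotals` is an illustrative helper, not part of the API.

```typescript
// Simplified stand-in for the library's TraceEvent union (assumed shape,
// following the fields the example's trace handler reads).
type TokenUsage = { input_tokens: number; output_tokens: number }
type TraceEvent =
  | { type: 'llm_call'; agent: string; model: string; turn: number; durationMs: number; tokens: TokenUsage }
  | { type: 'tool_call'; agent: string; tool: string; durationMs: number; isError: boolean }
  | { type: 'task'; agent: string; taskTitle: string; durationMs: number; success: boolean; retries: number }
  | { type: 'agent'; agent: string; turns: number; toolCalls: number; durationMs: number; tokens: TokenUsage }

// Roll up token usage per agent from the llm_call spans.
function tokenTotals(events: TraceEvent[]): Map<string, TokenUsage> {
  const totals = new Map<string, TokenUsage>()
  for (const e of events) {
    if (e.type !== 'llm_call') continue
    const t = totals.get(e.agent) ?? { input_tokens: 0, output_tokens: 0 }
    t.input_tokens += e.tokens.input_tokens
    t.output_tokens += e.tokens.output_tokens
    totals.set(e.agent, t)
  }
  return totals
}
```

A collector like this can push events into an array from the `onTrace` callback and report totals after the run completes.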
## Quick Start
@@ -120,6 +121,7 @@ npx tsx examples/01-single-agent.ts
| [08 — Gemma 4 Local](examples/08-gemma4-local.ts) | `runTasks()` + `runTeam()` with local Gemma 4 via Ollama — zero API cost |
| [09 — Structured Output](examples/09-structured-output.ts) | `outputSchema` (Zod) on AgentConfig — validated JSON via `result.structured` |
| [10 — Task Retry](examples/10-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` with `task_retry` progress events |
| [11 — Trace Observability](examples/11-trace-observability.ts) | `onTrace` callback — structured spans for LLM calls, tools, tasks, and agents |
## Architecture


@@ -18,6 +18,7 @@
- **模型无关** — Claude、GPT、Gemma 4 和本地模型(Ollama、vLLM、LM Studio)可以在同一个团队中使用。通过 `baseURL` 即可接入任何 OpenAI 兼容服务。
- **结构化输出** — 为任意智能体添加 `outputSchema`(Zod),输出自动解析为 JSON 并校验,校验失败自动重试一次。通过 `result.structured` 获取类型化结果。
- **任务重试** — 为任务设置 `maxRetries`,失败时自动指数退避重试。所有尝试的 token 用量累计,确保计费准确。
- **可观测性** — 可选的 `onTrace` 回调为每次 LLM 调用、工具执行、任务和智能体运行发出结构化 span 事件——包含耗时、token 用量和共享的 `runId` 用于关联追踪。未订阅时零开销,零额外依赖。
- **进程内执行** — 没有子进程开销。所有内容在一个 Node.js 进程中运行。可部署到 Serverless、Docker、CI/CD。
## 快速开始
@@ -120,6 +121,7 @@ npx tsx examples/01-single-agent.ts
| [08 — Gemma 4 本地](examples/08-gemma4-local.ts) | `runTasks()` + `runTeam()` 本地 Gemma 4 via Ollama — 零 API 费用 |
| [09 — 结构化输出](examples/09-structured-output.ts) | `outputSchema`(Zod)— 校验 JSON 输出,通过 `result.structured` 获取 |
| [10 — 任务重试](examples/10-task-retry.ts) | `maxRetries` / `retryDelayMs` / `retryBackoff` + `task_retry` 进度事件 |
| [11 — 可观测性](examples/11-trace-observability.ts) | `onTrace` 回调 — LLM 调用、工具、任务、智能体的结构化 span 事件 |
## 架构


@@ -0,0 +1,133 @@
/**
 * Example 11 — Trace Observability
*
* Demonstrates the `onTrace` callback for lightweight observability. Every LLM
* call, tool execution, task lifecycle, and agent run emits a structured trace
 * event with timing data and token usage, giving you full visibility into
* what's happening inside a multi-agent run.
*
* Trace events share a `runId` for correlation, so you can reconstruct the
* full execution timeline. Pipe them into your own logging, OpenTelemetry, or
* dashboard.
*
* Run:
* npx tsx examples/11-trace-observability.ts
*
* Prerequisites:
* ANTHROPIC_API_KEY env var must be set.
*/
import { OpenMultiAgent } from '../src/index.js'
import type { AgentConfig, TraceEvent } from '../src/types.js'
// ---------------------------------------------------------------------------
// Agents
// ---------------------------------------------------------------------------
const researcher: AgentConfig = {
name: 'researcher',
model: 'claude-sonnet-4-6',
systemPrompt: 'You are a research assistant. Provide concise, factual answers.',
maxTurns: 2,
}
const writer: AgentConfig = {
name: 'writer',
model: 'claude-sonnet-4-6',
systemPrompt: 'You are a technical writer. Summarize research into clear prose.',
maxTurns: 2,
}
// ---------------------------------------------------------------------------
// Trace handler — log every span with timing
// ---------------------------------------------------------------------------
function handleTrace(event: TraceEvent): void {
const dur = `${event.durationMs}ms`.padStart(7)
switch (event.type) {
case 'llm_call':
console.log(
` [LLM] ${dur} agent=${event.agent} model=${event.model} turn=${event.turn}` +
` tokens=${event.tokens.input_tokens}in/${event.tokens.output_tokens}out`,
)
break
case 'tool_call':
console.log(
` [TOOL] ${dur} agent=${event.agent} tool=${event.tool}` +
` error=${event.isError}`,
)
break
case 'task':
console.log(
` [TASK] ${dur} task="${event.taskTitle}" agent=${event.agent}` +
` success=${event.success} retries=${event.retries}`,
)
break
case 'agent':
console.log(
` [AGENT] ${dur} agent=${event.agent} turns=${event.turns}` +
` tools=${event.toolCalls} tokens=${event.tokens.input_tokens}in/${event.tokens.output_tokens}out`,
)
break
}
}
// ---------------------------------------------------------------------------
// Orchestrator + team
// ---------------------------------------------------------------------------
const orchestrator = new OpenMultiAgent({
defaultModel: 'claude-sonnet-4-6',
onTrace: handleTrace,
})
const team = orchestrator.createTeam('trace-demo', {
name: 'trace-demo',
agents: [researcher, writer],
sharedMemory: true,
})
// ---------------------------------------------------------------------------
// Tasks — researcher first, then writer summarizes
// ---------------------------------------------------------------------------
const tasks = [
{
title: 'Research topic',
description: 'List 5 key benefits of TypeScript for large codebases. Be concise.',
assignee: 'researcher',
},
{
title: 'Write summary',
description: 'Read the research from shared memory and write a 3-sentence summary.',
assignee: 'writer',
dependsOn: ['Research topic'],
},
]
// ---------------------------------------------------------------------------
// Run
// ---------------------------------------------------------------------------
console.log('Trace Observability Example')
console.log('='.repeat(60))
console.log('Pipeline: research → write (with full trace output)')
console.log('='.repeat(60))
console.log()
const result = await orchestrator.runTasks(team, tasks)
// ---------------------------------------------------------------------------
// Summary
// ---------------------------------------------------------------------------
console.log('\n' + '='.repeat(60))
console.log(`Overall success: ${result.success}`)
console.log(`Tokens — input: ${result.totalTokenUsage.input_tokens}, output: ${result.totalTokenUsage.output_tokens}`)
for (const [name, r] of result.agentResults) {
const icon = r.success ? 'OK ' : 'FAIL'
console.log(` [${icon}] ${name}`)
console.log(` ${r.output.slice(0, 200)}`)
}
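Since the doc comment above notes that every trace event shares a `runId`, a downstream consumer can bucket spans per run and total their durations. A minimal sketch, assuming each event carries a `runId` field — the `Span` shape here is a hypothetical reduction of the real event, and both helpers are illustrative, not part of the library:

```typescript
// Hypothetical reduced span: real trace events carry more fields.
type Span = { runId: string; type: string; durationMs: number }

// Bucket spans by their shared runId so a run's timeline can be reconstructed.
function groupByRun(events: Span[]): Map<string, Span[]> {
  const runs = new Map<string, Span[]>()
  for (const e of events) {
    const list = runs.get(e.runId) ?? []
    list.push(e) // arrival order approximates completion order within a run
    runs.set(e.runId, list)
  }
  return runs
}

// Sum of span durations for one run — an activity total, not true wall time,
// since spans can nest or overlap.
function totalDuration(spans: Span[]): number {
  return spans.reduce((sum, s) => sum + s.durationMs, 0)
}
```

The same grouping is the natural seam for exporting to OpenTelemetry or a dashboard: one trace per `runId`, one span per event.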