Compare commits

...

6 Commits

Author SHA1 Message Date
NamelessNATM ac8ad8d593 Merge fb6051146f into f1c7477a26 2026-04-09 04:00:49 +08:00
JackChen f1c7477a26 docs: fix MCP tracking issue link to #86 2026-04-09 03:29:58 +08:00
JackChen 664bed987f docs: restructure DECISIONS.md to signal openness on MCP and A2A
Split decisions into "Won't Do" (handoffs, checkpointing) and "Open to
Adoption" (MCP, A2A). Feature parity is a race that can be caught;
network effects from protocol adoption create a different kind of moat.

- MCP marked as "Next up" with optional peer dependency approach
- A2A marked as "Watching" with clear adoption trigger criteria
2026-04-09 03:16:38 +08:00
JackChen 2022882bfb
Merge pull request #85 from ibrahimkzmv/feat.customizable-coordinator
feat: make coordinator configurable (model, prompt, tools, and runtime options)
2026-04-08 18:56:12 +08:00
MrAvalonApple 0b57ffe3e9 feat: enhance CoordinatorConfig with toolPreset and disallowedTools options 2026-04-08 12:34:25 +03:00
MrAvalonApple 30369b0597 feat: add customizable coordinator options for runTeam and enhance system prompt 2026-04-07 22:11:27 +03:00
5 changed files with 278 additions and 21 deletions

View File

@@ -1,11 +1,11 @@
# Architecture Decisions
This document records deliberate "won't do" decisions for the project. These are features we evaluated and chose NOT to implement — not because they're bad ideas, but because they conflict with our positioning as the **simplest multi-agent framework**.
If you're considering a PR in any of these areas, please open a discussion first.
This document records our architectural decisions — both what we choose NOT to build, and what we're actively working toward. Our goal is to be the **simplest multi-agent framework**, but simplicity doesn't mean closed. We believe the long-term value of a framework isn't its feature checklist — it's the size of the network it connects to.
## Won't Do
These are paradigms we evaluated and deliberately chose not to implement, because they conflict with our core model.
### 1. Agent Handoffs
**What**: Agent A transfers an in-progress conversation to Agent B (like OpenAI Agents SDK `handoff()`).
@@ -20,18 +20,30 @@ If you're considering a PR in any of these areas, please open a discussion first
**Related**: Closing #20 with this rationale.
### 3. A2A Protocol (Agent-to-Agent)
## Open to Adoption
**What**: Google's open protocol for agents on different servers to discover and communicate with each other.
These are protocols we see strategic value in and are actively tracking. We're waiting for the right moment — not the right feature spec, but the right network density.
**Why not**: Too early — the spec is still evolving and adoption is minimal. Our users run agents in a single process, not across distributed services. If A2A matures and there's real demand, we can revisit. Today it would add complexity for zero practical benefit.
> **Our thesis**: Framework competition on features (DAG scheduling, shared memory, zero-dependency) is a race that can always be caught. Network competition — where the value of the framework grows with every agent published to it — creates a fundamentally different moat. MCP and A2A are the protocols that turn a framework from a build tool into a registry.
### 4. MCP Integration (Model Context Protocol)
### 3. MCP Integration (Model Context Protocol)
**What**: Anthropic's protocol for connecting LLMs to external tools and data sources.
**Why not now**: Our `defineTool()` API lets users wrap any external service as a tool in ~10 lines of code, whereas adding MCP would introduce `@modelcontextprotocol/sdk` as a new dependency plus transport-layer management, breaking our 3-dependency minimal principle. However, the MCP tool ecosystem has grown significantly — many services now ship MCP servers directly, and asking users to re-wrap each one via `defineTool()` creates unnecessary friction. **This decision may be revisited** when community demand is clear or a lightweight integration approach emerges (e.g., an optional peer dependency).
**Status**: **Next up.** MCP has crossed the adoption threshold — Cursor, Windsurf, Claude Code all ship with built-in support, and many services now provide MCP servers directly. Asking users to re-wrap each one via `defineTool()` creates unnecessary friction.
**Approach**: Optional peer dependency (`@modelcontextprotocol/sdk`). Zero impact on the core — if you don't use MCP, you don't pay for it. This preserves our minimal-dependency principle while connecting to the broader tool ecosystem.
**Tracking**: #86
### 4. A2A Protocol (Agent-to-Agent)
**What**: Google's open protocol for agents on different servers to discover and communicate with each other.
**Status**: **Watching.** The spec is still evolving and production adoption is minimal. But we recognize A2A's potential to enable the network effect we care about — if 1,000 developers publish agent services using open-multi-agent, the 1,001st developer isn't just choosing an API, they're choosing which ecosystem has the most agents they can call.
**When we'll move**: When A2A adoption reaches a tipping point where the protocol connects real, production agent services — not just demos. We'll prioritize a lightweight integration that lets agents be both consumers and providers of A2A services.
---
*Last updated: 2026-04-07*
*Last updated: 2026-04-09*

View File

@@ -169,6 +169,7 @@ export type {
// Orchestrator
OrchestratorConfig,
OrchestratorEvent,
CoordinatorConfig,
// Trace
TraceEventType,

View File

@@ -44,6 +44,7 @@
import type {
AgentConfig,
AgentRunResult,
CoordinatorConfig,
OrchestratorConfig,
OrchestratorEvent,
Task,
@@ -892,8 +893,13 @@ export class OpenMultiAgent {
* @param team - A team created via {@link createTeam} (or `new Team(...)`).
* @param goal - High-level natural-language goal for the team.
*/
async runTeam(team: Team, goal: string, options?: { abortSignal?: AbortSignal }): Promise<TeamRunResult> {
async runTeam(
team: Team,
goal: string,
options?: { abortSignal?: AbortSignal; coordinator?: CoordinatorConfig },
): Promise<TeamRunResult> {
const agentConfigs = team.getAgents()
const coordinatorOverrides = options?.coordinator
// ------------------------------------------------------------------
// Short-circuit: skip coordinator for simple, single-action goals.
@@ -966,12 +972,19 @@ export class OpenMultiAgent {
// ------------------------------------------------------------------
const coordinatorConfig: AgentConfig = {
name: 'coordinator',
model: this.config.defaultModel,
provider: this.config.defaultProvider,
baseURL: this.config.defaultBaseURL,
apiKey: this.config.defaultApiKey,
systemPrompt: this.buildCoordinatorSystemPrompt(agentConfigs),
maxTurns: 3,
model: coordinatorOverrides?.model ?? this.config.defaultModel,
provider: coordinatorOverrides?.provider ?? this.config.defaultProvider,
baseURL: coordinatorOverrides?.baseURL ?? this.config.defaultBaseURL,
apiKey: coordinatorOverrides?.apiKey ?? this.config.defaultApiKey,
systemPrompt: this.buildCoordinatorPrompt(agentConfigs, coordinatorOverrides),
maxTurns: coordinatorOverrides?.maxTurns ?? 3,
maxTokens: coordinatorOverrides?.maxTokens,
temperature: coordinatorOverrides?.temperature,
toolPreset: coordinatorOverrides?.toolPreset,
tools: coordinatorOverrides?.tools,
disallowedTools: coordinatorOverrides?.disallowedTools,
loopDetection: coordinatorOverrides?.loopDetection,
timeoutMs: coordinatorOverrides?.timeoutMs,
}
const decompositionPrompt = this.buildDecompositionPrompt(goal, agentConfigs)
@@ -1216,6 +1229,47 @@ export class OpenMultiAgent {
/** Build the system prompt given to the coordinator agent. */
private buildCoordinatorSystemPrompt(agents: AgentConfig[]): string {
return [
'You are a task coordinator responsible for decomposing high-level goals',
'into concrete, actionable tasks and assigning them to the right team members.',
'',
this.buildCoordinatorRosterSection(agents),
'',
this.buildCoordinatorOutputFormatSection(),
'',
this.buildCoordinatorSynthesisSection(),
].join('\n')
}
/** Build coordinator system prompt with optional caller overrides. */
private buildCoordinatorPrompt(agents: AgentConfig[], config?: CoordinatorConfig): string {
if (config?.systemPrompt) {
return [
config.systemPrompt,
'',
this.buildCoordinatorRosterSection(agents),
'',
this.buildCoordinatorOutputFormatSection(),
'',
this.buildCoordinatorSynthesisSection(),
].join('\n')
}
const base = this.buildCoordinatorSystemPrompt(agents)
if (!config?.instructions) {
return base
}
return [
base,
'',
'## Additional Instructions',
config.instructions,
].join('\n')
}
/** Build the coordinator team roster section. */
private buildCoordinatorRosterSection(agents: AgentConfig[]): string {
const roster = agents
.map(
(a) =>
@@ -1224,12 +1278,14 @@ export class OpenMultiAgent {
.join('\n')
return [
'You are a task coordinator responsible for decomposing high-level goals',
'into concrete, actionable tasks and assigning them to the right team members.',
'',
'## Team Roster',
roster,
'',
].join('\n')
}
/** Build the coordinator JSON output-format section. */
private buildCoordinatorOutputFormatSection(): string {
return [
'## Output Format',
'When asked to decompose a goal, respond ONLY with a JSON array of task objects.',
'Each task must have:',
@ -1240,7 +1296,12 @@ export class OpenMultiAgent {
'',
'Wrap the JSON in a ```json code fence.',
'Do not include any text outside the code fence.',
'',
].join('\n')
}
/** Build the coordinator synthesis guidance section. */
private buildCoordinatorSynthesisSection(): string {
return [
'## When synthesising results',
'You will be given completed task outputs and asked to synthesise a final answer.',
'Write a clear, comprehensive response that addresses the original goal.',

View File

@@ -445,6 +445,43 @@ export interface OrchestratorConfig {
readonly onApproval?: (completedTasks: readonly Task[], nextTasks: readonly Task[]) => Promise<boolean>
}
/**
* Optional overrides for the temporary coordinator agent created by `runTeam`.
*
* All fields are optional. Unset fields fall back to orchestrator defaults
* (or coordinator built-in defaults where applicable).
*/
export interface CoordinatorConfig {
/** Coordinator model. Defaults to `OrchestratorConfig.defaultModel`. */
readonly model?: string
readonly provider?: 'anthropic' | 'copilot' | 'grok' | 'openai' | 'gemini'
readonly baseURL?: string
readonly apiKey?: string
/**
* Full system prompt override. When set, this replaces the default
* coordinator preamble and decomposition guidance.
*
* Team roster, output format, and synthesis sections are still appended.
*/
readonly systemPrompt?: string
/**
* Additional instructions appended to the default coordinator prompt.
* Ignored when `systemPrompt` is provided.
*/
readonly instructions?: string
readonly maxTurns?: number
readonly maxTokens?: number
readonly temperature?: number
/** Predefined tool preset for common coordinator use cases. */
readonly toolPreset?: 'readonly' | 'readwrite' | 'full'
/** Tool names available to the coordinator. */
readonly tools?: readonly string[]
/** Tool names explicitly denied to the coordinator. */
readonly disallowedTools?: readonly string[]
readonly loopDetection?: LoopDetectionConfig
readonly timeoutMs?: number
}
// ---------------------------------------------------------------------------
// Trace events — lightweight observability spans
// ---------------------------------------------------------------------------

View File

@@ -42,6 +42,7 @@ function createMockAdapter(responses: string[]): LLMAdapter {
* We need to do this at the module level because Agent calls createAdapter internally.
*/
let mockAdapterResponses: string[] = []
let capturedChatOptions: LLMChatOptions[] = []
vi.mock('../src/llm/adapter.js', () => ({
createAdapter: async () => {
@@ -49,6 +50,7 @@ vi.mock('../src/llm/adapter.js', () => ({
return {
name: 'mock',
async chat(_msgs: LLMMessage[], options: LLMChatOptions): Promise<LLMResponse> {
capturedChatOptions.push(options)
const text = mockAdapterResponses[callIndex] ?? 'default mock response'
callIndex++
return {
@@ -94,6 +96,7 @@ function teamCfg(agents?: AgentConfig[]): TeamConfig {
describe('OpenMultiAgent', () => {
beforeEach(() => {
mockAdapterResponses = []
capturedChatOptions = []
})
describe('createTeam', () => {
@@ -237,6 +240,149 @@ describe('OpenMultiAgent', () => {
expect(result.success).toBe(true)
})
it('supports coordinator model override without affecting workers', async () => {
mockAdapterResponses = [
'```json\n[{"title": "Research", "description": "Research", "assignee": "worker-a"}]\n```',
'worker output',
'final synthesis',
]
const oma = new OpenMultiAgent({
defaultModel: 'expensive-model',
defaultProvider: 'openai',
})
const team = oma.createTeam('t', teamCfg([
{ ...agentConfig('worker-a'), model: 'worker-model' },
]))
const result = await oma.runTeam(team, 'First research the topic, then synthesize findings', {
coordinator: { model: 'cheap-model' },
})
expect(result.success).toBe(true)
expect(capturedChatOptions.length).toBe(3)
expect(capturedChatOptions[0]?.model).toBe('cheap-model')
expect(capturedChatOptions[1]?.model).toBe('worker-model')
expect(capturedChatOptions[2]?.model).toBe('cheap-model')
})
it('appends coordinator.instructions to the default system prompt', async () => {
mockAdapterResponses = [
'```json\n[{"title": "Plan", "description": "Plan", "assignee": "worker-a"}]\n```',
'done',
'final',
]
const oma = new OpenMultiAgent({
defaultModel: 'mock-model',
defaultProvider: 'openai',
})
const team = oma.createTeam('t', teamCfg([
{ ...agentConfig('worker-a'), model: 'worker-model' },
]))
await oma.runTeam(team, 'First implement, then verify', {
coordinator: {
instructions: 'Always create a testing task after implementation tasks.',
},
})
const coordinatorPrompt = capturedChatOptions[0]?.systemPrompt ?? ''
expect(coordinatorPrompt).toContain('You are a task coordinator responsible')
expect(coordinatorPrompt).toContain('## Additional Instructions')
expect(coordinatorPrompt).toContain('Always create a testing task after implementation tasks.')
})
it('uses coordinator.systemPrompt override while still appending required sections', async () => {
mockAdapterResponses = [
'```json\n[{"title": "Plan", "description": "Plan", "assignee": "worker-a"}]\n```',
'done',
'final',
]
const oma = new OpenMultiAgent({
defaultModel: 'mock-model',
defaultProvider: 'openai',
})
const team = oma.createTeam('t', teamCfg([
{ ...agentConfig('worker-a'), model: 'worker-model' },
]))
await oma.runTeam(team, 'First implement, then verify', {
coordinator: {
systemPrompt: 'You are a custom coordinator for monorepo planning.',
},
})
const coordinatorPrompt = capturedChatOptions[0]?.systemPrompt ?? ''
expect(coordinatorPrompt).toContain('You are a custom coordinator for monorepo planning.')
expect(coordinatorPrompt).toContain('## Team Roster')
expect(coordinatorPrompt).toContain('## Output Format')
expect(coordinatorPrompt).toContain('## When synthesising results')
expect(coordinatorPrompt).not.toContain('You are a task coordinator responsible')
})
it('applies advanced coordinator options (maxTokens, temperature, tools, disallowedTools)', async () => {
mockAdapterResponses = [
'```json\n[{"title": "Inspect", "description": "Inspect", "assignee": "worker-a"}]\n```',
'worker output',
'final synthesis',
]
const oma = new OpenMultiAgent({
defaultModel: 'mock-model',
defaultProvider: 'openai',
})
const team = oma.createTeam('t', teamCfg([
{ ...agentConfig('worker-a'), model: 'worker-model' },
]))
await oma.runTeam(team, 'First inspect project, then produce output', {
coordinator: {
maxTurns: 5,
maxTokens: 1234,
temperature: 0,
tools: ['file_read', 'grep'],
disallowedTools: ['grep'],
timeoutMs: 1500,
loopDetection: { maxRepetitions: 2, loopDetectionWindow: 3 },
},
})
expect(capturedChatOptions[0]?.maxTokens).toBe(1234)
expect(capturedChatOptions[0]?.temperature).toBe(0)
expect(capturedChatOptions[0]?.tools).toBeDefined()
expect(capturedChatOptions[0]?.tools?.map((t) => t.name)).toContain('file_read')
expect(capturedChatOptions[0]?.tools?.map((t) => t.name)).not.toContain('grep')
})
it('supports coordinator.toolPreset and intersects with tools allowlist', async () => {
mockAdapterResponses = [
'```json\n[{"title": "Inspect", "description": "Inspect", "assignee": "worker-a"}]\n```',
'worker output',
'final synthesis',
]
const oma = new OpenMultiAgent({
defaultModel: 'mock-model',
defaultProvider: 'openai',
})
const team = oma.createTeam('t', teamCfg([
{ ...agentConfig('worker-a'), model: 'worker-model' },
]))
await oma.runTeam(team, 'First inspect project, then produce output', {
coordinator: {
toolPreset: 'readonly',
tools: ['file_read', 'bash'],
},
})
const coordinatorToolNames = capturedChatOptions[0]?.tools?.map((t) => t.name) ?? []
expect(coordinatorToolNames).toContain('file_read')
expect(coordinatorToolNames).not.toContain('bash')
})
})
describe('config defaults', () => {