Compare commits


1 commit

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| NamelessNATM | d74262f37b | Merge fb6051146f into 03dc897929 | 2026-04-08 06:02:59 +00:00 |
5 changed files with 23 additions and 280 deletions

View File

@@ -1,11 +1,11 @@
# Architecture Decisions
This document records our architectural decisions: both what we choose NOT to build and what we're actively working toward. Our goal is to be the **simplest multi-agent framework**, but simplicity doesn't mean staying closed. We believe the long-term value of a framework isn't its feature checklist; it's the size of the network it connects to.
This document records deliberate "won't do" decisions for the project. These are features we evaluated and chose NOT to implement — not because they're bad ideas, but because they conflict with our positioning as the **simplest multi-agent framework**.
If you're considering a PR in any of these areas, please open a discussion first.
## Won't Do
These are paradigms we evaluated and deliberately chose not to implement, because they conflict with our core model.
### 1. Agent Handoffs
**What**: Agent A transfers an in-progress conversation to Agent B (like OpenAI Agents SDK `handoff()`).
@@ -20,30 +20,18 @@ These are paradigms we evaluated and deliberately chose not to implement, becaus
**Related**: Closing #20 with this rationale.
## Open to Adoption
These are protocols we see strategic value in and are actively tracking. We're waiting for the right moment — not the right feature spec, but the right network density.
> **Our thesis**: Framework competition on features (DAG scheduling, shared memory, zero-dependency) is a race in which any lead can be copied. Network competition, where the value of the framework grows with every agent published to it, creates a fundamentally different moat. MCP and A2A are the protocols that turn a framework from a build tool into a registry.
### 3. MCP Integration (Model Context Protocol)
**What**: Anthropic's protocol for connecting LLMs to external tools and data sources.
**Status**: **Next up.** MCP has crossed the adoption threshold — Cursor, Windsurf, Claude Code all ship with built-in support, and many services now provide MCP servers directly. Asking users to re-wrap each one via `defineTool()` creates unnecessary friction.
**Approach**: Optional peer dependency (`@modelcontextprotocol/sdk`). Zero impact on the core — if you don't use MCP, you don't pay for it. This preserves our minimal-dependency principle while connecting to the broader tool ecosystem.
**Tracking**: #86
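The "zero impact on the core" approach described above can be sketched as a lazy dynamic import with a graceful fallback. Note that `loadOptionalModule` is a hypothetical helper name for illustration, not part of the project's actual API:

```typescript
// Hedged sketch: loading an optional peer dependency lazily so that users
// who never enable the feature pay no install or startup cost.
// `loadOptionalModule` is an illustrative helper, not the project's API.
async function loadOptionalModule<T>(specifier: string): Promise<T | undefined> {
  try {
    // The import only executes when the MCP feature is actually used.
    return (await import(specifier)) as T
  } catch {
    // Peer dependency not installed: degrade gracefully instead of crashing.
    return undefined
  }
}

// Callers would then guard on the result, e.g.:
//   const mcp = await loadOptionalModule('@modelcontextprotocol/sdk')
//   if (!mcp) throw new Error('Install @modelcontextprotocol/sdk to use MCP tools')
```

This keeps `@modelcontextprotocol/sdk` out of the hard dependency graph while still letting MCP-enabled code paths resolve it at runtime.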
### 4. A2A Protocol (Agent-to-Agent)
### 3. A2A Protocol (Agent-to-Agent)
**What**: Google's open protocol for agents on different servers to discover and communicate with each other.
**Status**: **Watching.** The spec is still evolving and production adoption is minimal. But we recognize A2A's potential to enable the network effect we care about — if 1,000 developers publish agent services using open-multi-agent, the 1,001st developer isn't just choosing an API, they're choosing which ecosystem has the most agents they can call.
**Why not**: Too early — the spec is still evolving and adoption is minimal. Our users run agents in a single process, not across distributed services. If A2A matures and there's real demand, we can revisit. Today it would add complexity for zero practical benefit.
**When we'll move**: When A2A adoption reaches a tipping point where the protocol connects real, production agent services — not just demos. We'll prioritize a lightweight integration that lets agents be both consumers and providers of A2A services.
### 4. MCP Integration (Model Context Protocol)
**What**: Anthropic's protocol for connecting LLMs to external tools and data sources.
**Why not now**: Our `defineTool()` API lets users wrap any external service as a tool in ~10 lines of code. Adding MCP would introduce `@modelcontextprotocol/sdk` as a new dependency plus transport-layer management, breaking our 3-dependency minimal principle. However, the MCP tool ecosystem has grown significantly: many services now ship MCP servers directly, and asking users to re-wrap each one via `defineTool()` creates unnecessary friction. **This decision may be revisited** when community demand is clear or a lightweight integration approach emerges (e.g., an optional peer dependency).
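The "~10 lines" claim above can be sketched as follows. Both `ToolDef` and `defineTool` here are local stand-ins with an assumed shape, since the real open-multi-agent signature isn't shown in this diff, and `api.example.com` is a placeholder endpoint:

```typescript
// Hedged sketch: wrapping an external HTTP service as a tool.
// `ToolDef` and `defineTool` are stand-ins with an assumed shape;
// the real open-multi-agent API may differ.
interface ToolDef {
  name: string
  description: string
  run: (input: Record<string, unknown>) => Promise<string>
}

function defineTool(def: ToolDef): ToolDef {
  // Stand-in: the real API presumably validates and registers the tool.
  return def
}

const weatherTool = defineTool({
  name: 'get_weather',
  description: 'Fetch the current weather for a city',
  run: async (input) => {
    const city = encodeURIComponent(String(input.city ?? ''))
    // api.example.com is a placeholder endpoint for illustration only.
    const res = await fetch(`https://api.example.com/weather?city=${city}`)
    return res.text()
  },
})
```

The friction the section describes is exactly this boilerplate repeated once per external service, which an MCP client would replace with a single connection.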
---
*Last updated: 2026-04-09*
*Last updated: 2026-04-07*

View File

@@ -169,7 +169,6 @@ export type {
// Orchestrator
OrchestratorConfig,
OrchestratorEvent,
CoordinatorConfig,
// Trace
TraceEventType,

View File

@@ -44,7 +44,6 @@
import type {
AgentConfig,
AgentRunResult,
CoordinatorConfig,
OrchestratorConfig,
OrchestratorEvent,
Task,
@@ -893,13 +892,8 @@ export class OpenMultiAgent {
* @param team - A team created via {@link createTeam} (or `new Team(...)`).
* @param goal - High-level natural-language goal for the team.
*/
async runTeam(
team: Team,
goal: string,
options?: { abortSignal?: AbortSignal; coordinator?: CoordinatorConfig },
): Promise<TeamRunResult> {
async runTeam(team: Team, goal: string, options?: { abortSignal?: AbortSignal }): Promise<TeamRunResult> {
const agentConfigs = team.getAgents()
const coordinatorOverrides = options?.coordinator
// ------------------------------------------------------------------
// Short-circuit: skip coordinator for simple, single-action goals.
@@ -972,19 +966,12 @@ export class OpenMultiAgent {
// ------------------------------------------------------------------
const coordinatorConfig: AgentConfig = {
name: 'coordinator',
model: coordinatorOverrides?.model ?? this.config.defaultModel,
provider: coordinatorOverrides?.provider ?? this.config.defaultProvider,
baseURL: coordinatorOverrides?.baseURL ?? this.config.defaultBaseURL,
apiKey: coordinatorOverrides?.apiKey ?? this.config.defaultApiKey,
systemPrompt: this.buildCoordinatorPrompt(agentConfigs, coordinatorOverrides),
maxTurns: coordinatorOverrides?.maxTurns ?? 3,
maxTokens: coordinatorOverrides?.maxTokens,
temperature: coordinatorOverrides?.temperature,
toolPreset: coordinatorOverrides?.toolPreset,
tools: coordinatorOverrides?.tools,
disallowedTools: coordinatorOverrides?.disallowedTools,
loopDetection: coordinatorOverrides?.loopDetection,
timeoutMs: coordinatorOverrides?.timeoutMs,
model: this.config.defaultModel,
provider: this.config.defaultProvider,
baseURL: this.config.defaultBaseURL,
apiKey: this.config.defaultApiKey,
systemPrompt: this.buildCoordinatorSystemPrompt(agentConfigs),
maxTurns: 3,
}
const decompositionPrompt = this.buildDecompositionPrompt(goal, agentConfigs)
@@ -1229,47 +1216,6 @@ export class OpenMultiAgent {
/** Build the system prompt given to the coordinator agent. */
private buildCoordinatorSystemPrompt(agents: AgentConfig[]): string {
return [
'You are a task coordinator responsible for decomposing high-level goals',
'into concrete, actionable tasks and assigning them to the right team members.',
'',
this.buildCoordinatorRosterSection(agents),
'',
this.buildCoordinatorOutputFormatSection(),
'',
this.buildCoordinatorSynthesisSection(),
].join('\n')
}
/** Build coordinator system prompt with optional caller overrides. */
private buildCoordinatorPrompt(agents: AgentConfig[], config?: CoordinatorConfig): string {
if (config?.systemPrompt) {
return [
config.systemPrompt,
'',
this.buildCoordinatorRosterSection(agents),
'',
this.buildCoordinatorOutputFormatSection(),
'',
this.buildCoordinatorSynthesisSection(),
].join('\n')
}
const base = this.buildCoordinatorSystemPrompt(agents)
if (!config?.instructions) {
return base
}
return [
base,
'',
'## Additional Instructions',
config.instructions,
].join('\n')
}
/** Build the coordinator team roster section. */
private buildCoordinatorRosterSection(agents: AgentConfig[]): string {
const roster = agents
.map(
(a) =>
@@ -1278,14 +1224,12 @@ export class OpenMultiAgent {
.join('\n')
return [
'You are a task coordinator responsible for decomposing high-level goals',
'into concrete, actionable tasks and assigning them to the right team members.',
'',
'## Team Roster',
roster,
].join('\n')
}
/** Build the coordinator JSON output-format section. */
private buildCoordinatorOutputFormatSection(): string {
return [
'',
'## Output Format',
'When asked to decompose a goal, respond ONLY with a JSON array of task objects.',
'Each task must have:',
@@ -1296,12 +1240,7 @@ export class OpenMultiAgent {
'',
'Wrap the JSON in a ```json code fence.',
'Do not include any text outside the code fence.',
].join('\n')
}
/** Build the coordinator synthesis guidance section. */
private buildCoordinatorSynthesisSection(): string {
return [
'',
'## When synthesising results',
'You will be given completed task outputs and asked to synthesise a final answer.',
'Write a clear, comprehensive response that addresses the original goal.',
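The output contract assembled by the prompt sections above implies a coordinator response like the following (field names taken from the mock responses in the test file later in this diff):

```json
[
  {
    "title": "Research",
    "description": "Research the topic",
    "assignee": "worker-a"
  }
]
```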

View File

@@ -445,43 +445,6 @@ export interface OrchestratorConfig {
readonly onApproval?: (completedTasks: readonly Task[], nextTasks: readonly Task[]) => Promise<boolean>
}
/**
* Optional overrides for the temporary coordinator agent created by `runTeam`.
*
* All fields are optional. Unset fields fall back to orchestrator defaults
* (or coordinator built-in defaults where applicable).
*/
export interface CoordinatorConfig {
/** Coordinator model. Defaults to `OrchestratorConfig.defaultModel`. */
readonly model?: string
readonly provider?: 'anthropic' | 'copilot' | 'grok' | 'openai' | 'gemini'
readonly baseURL?: string
readonly apiKey?: string
/**
* Full system prompt override. When set, this replaces the default
* coordinator preamble and decomposition guidance.
*
* Team roster, output format, and synthesis sections are still appended.
*/
readonly systemPrompt?: string
/**
* Additional instructions appended to the default coordinator prompt.
* Ignored when `systemPrompt` is provided.
*/
readonly instructions?: string
readonly maxTurns?: number
readonly maxTokens?: number
readonly temperature?: number
/** Predefined tool preset for common coordinator use cases. */
readonly toolPreset?: 'readonly' | 'readwrite' | 'full'
/** Tool names available to the coordinator. */
readonly tools?: readonly string[]
/** Tool names explicitly denied to the coordinator. */
readonly disallowedTools?: readonly string[]
readonly loopDetection?: LoopDetectionConfig
readonly timeoutMs?: number
}
// ---------------------------------------------------------------------------
// Trace events — lightweight observability spans
// ---------------------------------------------------------------------------

View File

@@ -42,7 +42,6 @@ function createMockAdapter(responses: string[]): LLMAdapter {
* We need to do this at the module level because Agent calls createAdapter internally.
*/
let mockAdapterResponses: string[] = []
let capturedChatOptions: LLMChatOptions[] = []
vi.mock('../src/llm/adapter.js', () => ({
createAdapter: async () => {
@@ -50,7 +49,6 @@ vi.mock('../src/llm/adapter.js', () => ({
return {
name: 'mock',
async chat(_msgs: LLMMessage[], options: LLMChatOptions): Promise<LLMResponse> {
capturedChatOptions.push(options)
const text = mockAdapterResponses[callIndex] ?? 'default mock response'
callIndex++
return {
@@ -96,7 +94,6 @@ function teamCfg(agents?: AgentConfig[]): TeamConfig {
describe('OpenMultiAgent', () => {
beforeEach(() => {
mockAdapterResponses = []
capturedChatOptions = []
})
describe('createTeam', () => {
@@ -240,149 +237,6 @@ describe('OpenMultiAgent', () => {
expect(result.success).toBe(true)
})
it('supports coordinator model override without affecting workers', async () => {
mockAdapterResponses = [
'```json\n[{"title": "Research", "description": "Research", "assignee": "worker-a"}]\n```',
'worker output',
'final synthesis',
]
const oma = new OpenMultiAgent({
defaultModel: 'expensive-model',
defaultProvider: 'openai',
})
const team = oma.createTeam('t', teamCfg([
{ ...agentConfig('worker-a'), model: 'worker-model' },
]))
const result = await oma.runTeam(team, 'First research the topic, then synthesize findings', {
coordinator: { model: 'cheap-model' },
})
expect(result.success).toBe(true)
expect(capturedChatOptions.length).toBe(3)
expect(capturedChatOptions[0]?.model).toBe('cheap-model')
expect(capturedChatOptions[1]?.model).toBe('worker-model')
expect(capturedChatOptions[2]?.model).toBe('cheap-model')
})
it('appends coordinator.instructions to the default system prompt', async () => {
mockAdapterResponses = [
'```json\n[{"title": "Plan", "description": "Plan", "assignee": "worker-a"}]\n```',
'done',
'final',
]
const oma = new OpenMultiAgent({
defaultModel: 'mock-model',
defaultProvider: 'openai',
})
const team = oma.createTeam('t', teamCfg([
{ ...agentConfig('worker-a'), model: 'worker-model' },
]))
await oma.runTeam(team, 'First implement, then verify', {
coordinator: {
instructions: 'Always create a testing task after implementation tasks.',
},
})
const coordinatorPrompt = capturedChatOptions[0]?.systemPrompt ?? ''
expect(coordinatorPrompt).toContain('You are a task coordinator responsible')
expect(coordinatorPrompt).toContain('## Additional Instructions')
expect(coordinatorPrompt).toContain('Always create a testing task after implementation tasks.')
})
it('uses coordinator.systemPrompt override while still appending required sections', async () => {
mockAdapterResponses = [
'```json\n[{"title": "Plan", "description": "Plan", "assignee": "worker-a"}]\n```',
'done',
'final',
]
const oma = new OpenMultiAgent({
defaultModel: 'mock-model',
defaultProvider: 'openai',
})
const team = oma.createTeam('t', teamCfg([
{ ...agentConfig('worker-a'), model: 'worker-model' },
]))
await oma.runTeam(team, 'First implement, then verify', {
coordinator: {
systemPrompt: 'You are a custom coordinator for monorepo planning.',
},
})
const coordinatorPrompt = capturedChatOptions[0]?.systemPrompt ?? ''
expect(coordinatorPrompt).toContain('You are a custom coordinator for monorepo planning.')
expect(coordinatorPrompt).toContain('## Team Roster')
expect(coordinatorPrompt).toContain('## Output Format')
expect(coordinatorPrompt).toContain('## When synthesising results')
expect(coordinatorPrompt).not.toContain('You are a task coordinator responsible')
})
it('applies advanced coordinator options (maxTokens, temperature, tools, disallowedTools)', async () => {
mockAdapterResponses = [
'```json\n[{"title": "Inspect", "description": "Inspect", "assignee": "worker-a"}]\n```',
'worker output',
'final synthesis',
]
const oma = new OpenMultiAgent({
defaultModel: 'mock-model',
defaultProvider: 'openai',
})
const team = oma.createTeam('t', teamCfg([
{ ...agentConfig('worker-a'), model: 'worker-model' },
]))
await oma.runTeam(team, 'First inspect project, then produce output', {
coordinator: {
maxTurns: 5,
maxTokens: 1234,
temperature: 0,
tools: ['file_read', 'grep'],
disallowedTools: ['grep'],
timeoutMs: 1500,
loopDetection: { maxRepetitions: 2, loopDetectionWindow: 3 },
},
})
expect(capturedChatOptions[0]?.maxTokens).toBe(1234)
expect(capturedChatOptions[0]?.temperature).toBe(0)
expect(capturedChatOptions[0]?.tools).toBeDefined()
expect(capturedChatOptions[0]?.tools?.map((t) => t.name)).toContain('file_read')
expect(capturedChatOptions[0]?.tools?.map((t) => t.name)).not.toContain('grep')
})
it('supports coordinator.toolPreset and intersects with tools allowlist', async () => {
mockAdapterResponses = [
'```json\n[{"title": "Inspect", "description": "Inspect", "assignee": "worker-a"}]\n```',
'worker output',
'final synthesis',
]
const oma = new OpenMultiAgent({
defaultModel: 'mock-model',
defaultProvider: 'openai',
})
const team = oma.createTeam('t', teamCfg([
{ ...agentConfig('worker-a'), model: 'worker-model' },
]))
await oma.runTeam(team, 'First inspect project, then produce output', {
coordinator: {
toolPreset: 'readonly',
tools: ['file_read', 'bash'],
},
})
const coordinatorToolNames = capturedChatOptions[0]?.tools?.map((t) => t.name) ?? []
expect(coordinatorToolNames).toContain('file_read')
expect(coordinatorToolNames).not.toContain('bash')
})
})
describe('config defaults', () => {