# Examples

Runnable scripts demonstrating open-multi-agent. Organized by category — pick one that matches what you're trying to do.

All scripts run with `npx tsx examples/<category>/<name>.ts` and require the corresponding API key in your environment.
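
For example, to run the single-agent basics script against Anthropic (the key value below is a placeholder, not a real credential):

```shell
# Set the provider's API key (placeholder value), then run the script.
export ANTHROPIC_API_KEY="sk-ant-..."
npx tsx examples/basics/single-agent.ts
```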


## basics — start here

The four core execution modes. Read these first.

| Example | What it shows |
| --- | --- |
| `basics/single-agent` | One agent with bash + file tools, then streaming via the `Agent` class. |
| `basics/team-collaboration` | `runTeam()` coordinator pattern — goal in, results out. |
| `basics/task-pipeline` | `runTasks()` with an explicit task DAG and dependencies. |
| `basics/multi-model-team` | Different models per agent in one team. |
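
The idea behind `runTasks()` — start each task once all of its dependencies have finished — can be sketched in plain TypeScript. The names and shapes below are illustrative only, not the library's actual API:

```typescript
// Minimal task-DAG runner: each task starts once all of its dependencies
// have resolved. Illustrative sketch — not the open-multi-agent API.
type Task = { id: string; deps: string[]; run: () => Promise<string> };

async function runDag(tasks: Task[]): Promise<Map<string, string>> {
  const started = new Map<string, Promise<string>>();

  const start = (t: Task): Promise<string> => {
    const existing = started.get(t.id);
    if (existing) return existing; // each task runs exactly once
    const p = Promise.all(
      t.deps.map((d) => start(tasks.find((x) => x.id === d)!)),
    ).then(() => t.run()); // run only after every dependency settles
    started.set(t.id, p);
    return p;
  };

  await Promise.all(tasks.map(start));
  const results = new Map<string, string>();
  for (const [id, p] of started) results.set(id, await p);
  return results;
}
```

Independent tasks run concurrently; only the dependency edges impose ordering, which is the same property the real pipeline example demonstrates.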

## providers — model & adapter examples

One example per supported provider. All follow the same three-agent (architect / developer / reviewer) shape so they're easy to compare.

| Example | Provider | Env var |
| --- | --- | --- |
| `providers/ollama` | Ollama (local) + Claude | `ANTHROPIC_API_KEY` |
| `providers/gemma4-local` | Gemma 4 via Ollama (100% local) | (none) |
| `providers/copilot` | GitHub Copilot (GPT-4o + Claude) | `GITHUB_TOKEN` |
| `providers/azure-openai` | Azure OpenAI | `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT` (+ optional `AZURE_OPENAI_API_VERSION`, `AZURE_OPENAI_DEPLOYMENT`) |
| `providers/grok` | xAI Grok | `XAI_API_KEY` |
| `providers/gemini` | Google Gemini | `GEMINI_API_KEY` |
| `providers/minimax` | MiniMax M2.7 | `MINIMAX_API_KEY` |
| `providers/deepseek` | DeepSeek Chat | `DEEPSEEK_API_KEY` |
| `providers/groq` | Groq (OpenAI-compatible) | `GROQ_API_KEY` |

## patterns — orchestration patterns

Reusable shapes for common multi-agent problems.

| Example | Pattern |
| --- | --- |
| `patterns/fan-out-aggregate` | MapReduce-style fan-out via `AgentPool.runParallel()`. |
| `patterns/structured-output` | Zod-validated JSON output from an agent. |
| `patterns/task-retry` | Per-task retry with exponential backoff. |
| `patterns/multi-perspective-code-review` | Multiple reviewer agents in parallel, then synthesis. |
| `patterns/research-aggregation` | Multi-source research collated by a synthesis agent. |
| `patterns/agent-handoff` | Synchronous sub-agent delegation via `delegate_to_agent`. |
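
At its core, the fan-out / aggregate shape is N workers in parallel plus one reduce step. A plain-TypeScript sketch of the idea — not the `AgentPool` API; the `worker` function stands in for an agent call:

```typescript
// Fan out one worker call per input chunk, run them in parallel,
// then aggregate the partial results into a single answer.
// Conceptual stand-in for AgentPool.runParallel(), not the real API.
async function fanOutAggregate(
  chunks: string[],
  worker: (chunk: string) => Promise<string>,
  aggregate: (parts: string[]) => string,
): Promise<string> {
  const parts = await Promise.all(chunks.map(worker)); // parallel fan-out
  return aggregate(parts); // reduce step
}
```

Swapping the `worker` stand-in for a real agent invocation gives the shape used by `patterns/fan-out-aggregate` and the meeting-summarizer recipe.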

## cookbook — use-case recipes

End-to-end examples framed around a concrete problem (meeting summarization, translation QA, competitive monitoring, etc.) rather than a single orchestration primitive. Lighter bar than `production/`: no tests or pinned model versions required. Good entry point if you want to see how the patterns compose on a real task.

| Example | Problem solved |
| --- | --- |
| `cookbook/meeting-summarizer` | Fan-out post-processing of a transcript into summary, structured action items, and sentiment. |

## integrations — external systems

Hooking the framework up to external tools and services.

| Example | Integrates with |
| --- | --- |
| `integrations/trace-observability` | `onTrace` spans for LLM calls, tools, and tasks. |
| `integrations/mcp-github` | An MCP server's tools exposed to an agent via `connectMCPTools()`. |
| `integrations/with-vercel-ai-sdk/` | Next.js app — OMA `runTeam()` + AI SDK `useChat` streaming. |

## production — real-world use cases

End-to-end examples wired to real workflows. Higher bar than the categories above. See `production/README.md` for the acceptance criteria and how to contribute.


## Adding a new example

| You're adding… | Goes in… | Filename |
| --- | --- | --- |
| A new model provider | `providers/` | `<provider-name>.ts` (lowercase, hyphenated) |
| A reusable orchestration pattern | `patterns/` | `<pattern-name>.ts` |
| A use-case-driven example (problem-first, uses one or more patterns) | `cookbook/` | `<use-case>.ts` |
| Integration with an outside system (MCP server, observability backend, framework, app) | `integrations/` | `<system>.ts`, or `<system>/` for multi-file |
| A real-world, production-grade end-to-end use case | `production/` | `<use-case>/` directory with its own README |

Conventions:

- No numeric prefixes. Folders signal category; reading order is set by this README.
- File header docstring with a one-line title, a `Run:` block, and prerequisites.
- Imports should resolve as `from '../../src/index.js'` (one level deeper than the old flat layout).
- Match the provider template when adding a provider: a three-agent team (architect / developer / reviewer) building a small REST API. This keeps comparisons honest.
- Add a row to the corresponding category's table in this README.
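
As a sketch, the header convention for a new example might look like this (the title, path, and key shown are illustrative):

```typescript
/**
 * Single agent — one agent with bash + file tools.
 *
 * Run:
 *   npx tsx examples/basics/single-agent.ts
 *
 * Prerequisites: ANTHROPIC_API_KEY set in your environment.
 */
```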