Examples grew to 19 flat files mixing basics, provider demos, orchestration patterns, and integrations, with two files colliding on the number 16. Reorganized into category folders so the structure scales as new providers and patterns get added.

Layout:

- `examples/basics/`: core execution modes (4 files)
- `examples/providers/`: one example per supported model provider (8 files)
- `examples/patterns/`: reusable orchestration patterns (6 files)
- `examples/integrations/`: MCP, observability, AI SDK (3 entries)
- `examples/production/`: placeholder for end-to-end use cases

Notable changes:

- Dropped numeric prefixes; folder and filename now signal category and intent.
- Rewrote the former smoke-test scripts (copilot, gemini) into proper three-agent team examples matching the deepseek/grok/minimax/groq template. Adapter unit tests in tests/ already cover correctness, so this only improves documentation quality.
- Added examples/README.md as the categorized index, plus maintenance rules for new submissions.
- Added examples/production/README.md with acceptance criteria for the new production category.
- Updated all internal `npx tsx` paths and import paths (`../src/` to `../../src/`).
- Updated README.md and README_zh.md links.
- Fixed stale `cd` paths inside examples/integrations/with-vercel-ai-sdk/README.md.
# Production Examples
End-to-end examples that demonstrate open-multi-agent running on real-world use cases — not toy demos.
The other example categories (basics/, providers/, patterns/, integrations/) optimize for clarity and small surface area. This directory optimizes for showing the framework solving an actual problem, with the operational concerns that come with it.
## Acceptance criteria
A submission belongs in production/ if it meets all of:
1. Real use case. Solves a concrete problem someone would actually pay for or use daily — not "build me a TODO API".
2. Error handling. Handles LLM failures, tool failures, and partial team failures gracefully. No bare `await` chains that crash on the first error.
3. Documentation. Each example lives in its own subdirectory with a `README.md` covering:
   - What problem it solves
   - Architecture diagram or task DAG description
   - Required env vars / external services
   - How to run locally
   - Expected runtime and approximate token cost
4. Reproducible. Pinned model versions; no reliance on private datasets or unpublished APIs.
5. Tested. At least one test or smoke check that verifies the example still runs after framework updates.
If a submission falls short on (2)–(5), it probably belongs in patterns/ or integrations/ instead.
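The "no bare `await` chains" part of the error-handling criterion can be sketched with `Promise.allSettled`, which collects partial team results instead of aborting the whole run on the first rejection. This is a generic sketch: `runAgent`, `runTeam`, and the result shape are illustrative names, not the framework's actual API.

```typescript
// Hypothetical three-agent run: any task may reject (LLM or tool failure).
type AgentResult = { agent: string; ok: boolean; output: string };

async function runAgent(name: string, fail = false): Promise<string> {
  if (fail) throw new Error(`${name}: provider timeout`);
  return `${name} done`;
}

// Collect every outcome instead of letting the first rejection crash the run.
async function runTeam(
  specs: Array<{ name: string; fail?: boolean }>,
): Promise<AgentResult[]> {
  const settled = await Promise.allSettled(
    specs.map((s) => runAgent(s.name, s.fail)),
  );
  return settled.map((r, i) => ({
    agent: specs[i].name,
    ok: r.status === "fulfilled",
    output: r.status === "fulfilled" ? r.value : String(r.reason),
  }));
}

// Usage: one failing agent no longer aborts the team; its error is recorded.
const results = await runTeam([
  { name: "planner" },
  { name: "coder", fail: true },
  { name: "reviewer" },
]);
```

A production example would additionally decide per task whether a failure is retryable, substitutable by another agent, or fatal to the run.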
## Layout
```
production/
└── <use-case>/
    ├── README.md   # required
    ├── index.ts    # entry point
    ├── agents/     # AgentConfig definitions
    ├── tools/      # custom tools, if any
    └── tests/      # smoke test or e2e test
```
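A `tests/` smoke check can start as cheaply as validating required env vars before any model call, so CI fails fast with a clear message after framework updates. The helper and variable names below are hypothetical examples, not part of the framework.

```typescript
// Hypothetical smoke-check helper: report which required env vars are
// absent or blank, so the example fails before spending any tokens.
function missingEnv(
  required: string[],
  env: Record<string, string | undefined> = process.env,
): string[] {
  return required.filter((key) => !env[key] || env[key]!.trim() === "");
}

// Simulated environment for illustration: one key set, one missing.
const missing = missingEnv(["PROVIDER_API_KEY", "SEARCH_API_KEY"], {
  PROVIDER_API_KEY: "test-key",
});

if (missing.length > 0) {
  console.error(`Missing env vars: ${missing.join(", ")}`);
}
```

A fuller smoke test would then run the example's entry point against pinned model versions and assert that it completes.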
## Submitting
Open a PR. In the PR description, address each of the five acceptance criteria above.