docs: restructure Ecosystem section, add Engram integration (#151)
* docs: restructure Ecosystem section, add Engram integration

  Merge Used by and Integrations into one ## Ecosystem section with three tiers (In production, Integrations free, Featured Partner $3,000/yr). Drop Philosophy section (covered in DECISIONS.md). Add Engram to free tier with maintainer's original tagline verbatim. Mirror in README_zh.md.

* chore: add repository, homepage, and bugs npm metadata

  Standard fields so npm and GitHub surface source links.

* docs: README consistency pass

  - Architecture diagram: add AzureOpenAIAdapter (missed when #143 landed).
  - Quick Start: promote CLI (oma) to ### subheading for discoverability.
  - Ecosystem: drop hardcoded "5,500+ stars"; shield badge is the live source.
  - Intro: bump "Opus 4.6" example to "Opus 4.7" (prose only; code defaults unchanged, to revisit separately).
  - Supported providers: drop the "Gemma 4" sentence that contradicted the 5-model list below it.
  - Stats images: bump cache-buster to 20260423 for contrib.rocks and star-history so GitHub camo refetches.

  Mirror in README_zh.md.
This commit is contained in:
parent e7e17ee8d7
commit 6e8016df22

README.md (74 changed lines)
@@ -17,29 +17,10 @@ CrewAI is Python. LangGraph makes you draw the graph by hand. `open-multi-agent`
 - **Goal to result in one call.** `runTeam(team, "Build a REST API")` kicks off a coordinator agent that decomposes the goal into a task DAG, resolves dependencies, runs independent tasks in parallel, and synthesizes the final output. No graph to draw, no tasks to wire up.
 - **TypeScript-native, three runtime dependencies.** `@anthropic-ai/sdk`, `openai`, `zod`. That is the whole runtime. Embed in Express, Next.js, serverless functions, or CI/CD pipelines. No Python runtime, no subprocess bridge, no cloud sidecar.
-- **Multi-model teams.** Claude, GPT, Gemini, Grok, MiniMax, DeepSeek, Copilot, or any OpenAI-compatible local model (Ollama, vLLM, LM Studio, llama.cpp) in the same team. Run the architect on Opus 4.6, the developer on GPT-5.4, the reviewer on local Gemma 4, all in one `runTeam()` call. Gemini ships as an optional peer dependency: `npm install @google/genai` to enable.
+- **Multi-model teams.** Claude, GPT, Gemini, Grok, MiniMax, DeepSeek, Copilot, or any OpenAI-compatible local model (Ollama, vLLM, LM Studio, llama.cpp) in the same team. Run the architect on Opus 4.7, the developer on GPT-5.4, the reviewer on local Gemma 4, all in one `runTeam()` call. Gemini ships as an optional peer dependency: `npm install @google/genai` to enable.
 
 Other features (MCP integration, context strategies, structured output, task retry, human-in-the-loop, lifecycle hooks, loop detection, observability) live below the fold and in [`examples/`](./examples/).
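The goal-to-DAG flow the first bullet describes (decompose, resolve dependencies, run independent tasks in parallel, cascade failures) can be sketched as a minimal, self-contained TypeScript runner. `Task` and `runDag` are illustrative names for this sketch, not the library's actual API:

```typescript
type Task = { id: string; deps: string[]; run: () => Promise<string> };

// Execute a task DAG: each task waits on its dependencies, independent
// tasks start concurrently, and a failed dependency rejects its
// dependents too (Promise.all propagates the rejection down the chain).
async function runDag(tasks: Task[]): Promise<Map<string, string>> {
  const results = new Map<string, string>();
  const started = new Map<string, Promise<void>>();
  const exec = (t: Task): Promise<void> => {
    if (!started.has(t.id)) {
      started.set(
        t.id,
        Promise.all(t.deps.map((d) => exec(tasks.find((x) => x.id === d)!)))
          .then(() => t.run())
          .then((out) => { results.set(t.id, out); })
      );
    }
    return started.get(t.id)!;
  };
  await Promise.all(tasks.map(exec)); // kick off every root in parallel
  return results;
}

const team: Task[] = [
  { id: "design", deps: [], run: async () => "API spec" },
  { id: "docs", deps: [], run: async () => "README draft" }, // parallel with design
  { id: "build", deps: ["design"], run: async () => "endpoints" },
  { id: "review", deps: ["build", "docs"], run: async () => "approved" },
];

const out = await runDag(team);
console.log(out.get("review"));
```

In the real library the coordinator LLM emits the task list at runtime; here it is hard-coded to show only the scheduling semantics.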
 
-## Philosophy: what we build, what we don't
-
-Our goal is to be the simplest multi-agent framework for TypeScript. Simplicity does not mean closed. We believe the long-term value of a framework is the size of the network it connects to, not its feature checklist.
-
-**We build:**
-
-- A coordinator that decomposes a goal into a task DAG.
-- A task queue that runs independent tasks in parallel and cascades failures to dependents.
-- A shared memory and message bus so agents can see each other's output.
-- Multi-model teams where each agent can use a different LLM provider.
-
-**We don't build:**
-
-- **Agent handoffs.** If agent A needs to transfer mid-conversation to agent B, use [OpenAI Agents SDK](https://github.com/openai/openai-agents-python). In our model, each agent owns one task end-to-end, with no mid-conversation transfers.
-- **State persistence / checkpointing.** Not planned for now. Adding a storage backend would break the three-dependency promise, and our workflows run in seconds to minutes, not hours. If real usage shifts toward long-running workflows, we will revisit.
-
-**Tracking:**
-
-- **A2A protocol.** Watching, will move when production adoption is real.
-
-See [`DECISIONS.md`](./DECISIONS.md) for the full rationale.
 
 ## How is this different from X?
 
 **vs. [LangGraph JS](https://github.com/langchain-ai/langgraphjs).** LangGraph is declarative graph orchestration: you define nodes, edges, and conditional routing, then `compile()` and `invoke()`. `open-multi-agent` is goal-driven: you declare a team and a goal, and a coordinator decomposes it into a task DAG at runtime. LangGraph gives you total control of topology (great for fixed production workflows). This gives you less typing and faster iteration (great for exploratory multi-agent work). LangGraph also has mature checkpointing; we do not.

@@ -48,15 +29,29 @@ See [`DECISIONS.md`](./DECISIONS.md) for the full rationale.
 **vs. [Vercel AI SDK](https://github.com/vercel/ai).** AI SDK is the LLM call layer: a unified TypeScript client for 60+ providers with streaming, tool calls, and structured outputs. It does not orchestrate multi-agent teams. `open-multi-agent` sits on top when you need that. They compose: use AI SDK for single-agent work, reach for this when you need a team.
 
-## Used by
+## Ecosystem
 
-`open-multi-agent` is a new project (launched 2026-04-01, MIT, 5,500+ stars). The ecosystem is still forming, so the list below is short and honest:
+`open-multi-agent` is a new project (launched 2026-04-01, MIT). The ecosystem is still forming, so the lists below are short and honest.
 
+### In production
+
 - **[temodar-agent](https://github.com/xeloxa/temodar-agent)** (~50 stars). WordPress security analysis platform by [Ali Sünbül](https://github.com/xeloxa). Uses our built-in tools (`bash`, `file_*`, `grep`) directly in its Docker runtime. Confirmed production use.
 - **Cybersecurity SOC (home lab).** A private setup running Qwen 2.5 + DeepSeek Coder entirely offline via Ollama, building an autonomous SOC pipeline on Wazuh + Proxmox. Early user, not yet public.
 
 Using `open-multi-agent` in production or a side project? [Open a discussion](https://github.com/JackChen-me/open-multi-agent/discussions) and we will list it here.
 
+### Integrations (free)
+
+- **[Engram](https://www.engram-memory.com)** — "Git for AI memory." Syncs knowledge across agents instantly and flags conflicts. ([repo](https://github.com/Agentscreator/engram-memory))
+
+Built an integration? [Open a discussion](https://github.com/JackChen-me/open-multi-agent/discussions) to get listed.
+
+### Featured Partner ($3,000 / year)
+
+12 months of prominent placement: logo, 100-word description, and a maintainer endorsement quote. For products or platforms already integrated with `open-multi-agent`.
+
+[Inquire about Featured Partner](https://github.com/JackChen-me/open-multi-agent/issues/new?title=Featured+Partner+Inquiry&labels=featured-partner-inquiry)
+
 ## Quick Start
 
 Requires Node.js >= 18.

@@ -77,7 +72,9 @@ Set the API key for your provider. Local models via Ollama require no API key. S
 - `DEEPSEEK_API_KEY` (for DeepSeek)
 - `GITHUB_TOKEN` (for Copilot)
 
-**CLI (`oma`).** For shell and CI, the package exposes a JSON-first binary. See [docs/cli.md](./docs/cli.md) for `oma run`, `oma task`, `oma provider`, exit codes, and file formats.
+### CLI (`oma`)
+
+For shell and CI, the package exposes a JSON-first binary. See [docs/cli.md](./docs/cli.md) for `oma run`, `oma task`, `oma provider`, exit codes, and file formats.
 
 Three agents, one goal. The framework handles the rest:
@@ -181,16 +178,17 @@ Run scripts with `npx tsx examples/basics/team-collaboration.ts`.
 │ └───────────────────────┘
 ┌────────▼──────────┐
 │ Agent │
-│ - run() │ ┌──────────────────────┐
-│ - prompt() │───►│ LLMAdapter │
-│ - stream() │ │ - AnthropicAdapter │
-└────────┬──────────┘ │ - OpenAIAdapter │
-│ │ - CopilotAdapter │
-│ │ - GeminiAdapter │
-│ │ - GrokAdapter │
-│ │ - MiniMaxAdapter │
-│ │ - DeepSeekAdapter │
-│ └──────────────────────┘
+│ - run() │ ┌────────────────────────┐
+│ - prompt() │───►│ LLMAdapter │
+│ - stream() │ │ - AnthropicAdapter │
+└────────┬──────────┘ │ - OpenAIAdapter │
+│ │ - AzureOpenAIAdapter │
+│ │ - CopilotAdapter │
+│ │ - GeminiAdapter │
+│ │ - GrokAdapter │
+│ │ - MiniMaxAdapter │
+│ │ - DeepSeekAdapter │
+│ └────────────────────────┘
 ┌────────▼──────────┐
 │ AgentRunner │ ┌──────────────────────┐
 │ - conversation │───►│ ToolRegistry │
@@ -379,8 +377,6 @@ Pairs well with `compressToolResults` and `maxToolOutputChars` above.
 
 Gemini requires `npm install @google/genai` (optional peer dependency).
 
-Verified local models with tool-calling: **Gemma 4** (see [`providers/gemma4-local`](examples/providers/gemma4-local.ts)).
-
 Any OpenAI-compatible API should work via `provider: 'openai'` + `baseURL` (Mistral, Qwen, Moonshot, Doubao, etc.). Groq is now verified in [`providers/groq`](examples/providers/groq.ts). **Grok, MiniMax, and DeepSeek now have first-class support** via `provider: 'grok'`, `provider: 'minimax'`, and `provider: 'deepseek'`.
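The `provider: 'openai'` + `baseURL` route above can be sketched against a local Ollama server. This is illustrative only: the field names follow the `AgentConfig` pattern used in this README's examples, and may differ from the package's actual types:

```typescript
// Hypothetical shape, modeled on the README's AgentConfig examples;
// verify field names against the package's exported types.
interface AgentConfig {
  name: string;
  provider: "openai" | "grok" | "minimax" | "deepseek" | string;
  model: string;
  baseURL?: string; // only needed for OpenAI-compatible third parties
}

const localReviewer: AgentConfig = {
  name: "reviewer",
  provider: "openai",                   // speak the OpenAI wire format
  model: "qwen2.5:14b",                 // whatever model the local server hosts
  baseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible endpoint
};

console.log(localReviewer.baseURL);
```

First-class providers (`grok`, `minimax`, `deepseek`) would omit `baseURL` entirely.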
 
 ### Local Model Tool-Calling
 
@@ -460,16 +456,16 @@ Issues, feature requests, and PRs are welcome. Some areas where contributions wo
 ## Contributors
 
 <a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
-<img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&max=20&v=20260419" />
+<img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&max=20&v=20260423" />
 </a>
 
 ## Star History
 
 <a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
 <picture>
-<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark" />
-<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
-<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
+<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark&v=20260423" />
+<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260423" />
+<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260423" />
 </picture>
 </a>
README_zh.md (69 changed lines)

@@ -17,26 +17,10 @@ CrewAI 是 Python。LangGraph 要你自己画图。`open-multi-agent` 是你现
 - `runTeam(team, "构建一个 REST API")` 下去,协调者 agent 会把目标拆成任务 DAG,独立任务并行跑,再把结果合起来。不用画图,不用手动连依赖。
 - 运行时依赖就三个:`@anthropic-ai/sdk`、`openai`、`zod`。能直接塞进 Express、Next.js、Serverless 或 CI/CD,不起 Python 进程,也不跑云端 sidecar。
-- 同一个团队里的 agent 能挂不同模型:架构师用 Opus 4.6、开发用 GPT-5.4、评审跑本地 Gemma 4 都行。支持 Claude、GPT、Gemini、Grok、MiniMax、DeepSeek、Copilot,以及 OpenAI 兼容的本地模型(Ollama、vLLM、LM Studio、llama.cpp)。用 Gemini 要额外装 `@google/genai`。
+- 同一个团队里的 agent 能挂不同模型:架构师用 Opus 4.7、开发用 GPT-5.4、评审跑本地 Gemma 4 都行。支持 Claude、GPT、Gemini、Grok、MiniMax、DeepSeek、Copilot,以及 OpenAI 兼容的本地模型(Ollama、vLLM、LM Studio、llama.cpp)。用 Gemini 要额外装 `@google/genai`。
 
 还有 MCP、上下文策略、结构化输出、任务重试、human-in-the-loop、生命周期 hook、循环检测、可观测性等,下面章节或 [`examples/`](./examples/) 里都有。
 
-## 做什么,不做什么
-
-**做的事:**
-- 一个协调者,把目标拆成任务 DAG。
-- 一个任务队列,独立任务并行跑,失败级联到下游。
-- 共享内存和消息总线,让 agent 之间能看到彼此的输出。
-- 多模型团队,每个 agent 可以挂不同的 LLM provider。
-
-**不做的事:**
-- **Agent handoffs**:agent A 对话中途把控制权交给 agent B 这种模式不做。要这个用 [OpenAI Agents SDK](https://github.com/openai/openai-agents-python)。我们这边一个 agent 从头到尾负责一个任务。
-- **状态持久化 / 检查点**:暂时不做。加存储后端会破坏 3 个依赖的承诺,而且我们的工作流是秒到分钟级,不是小时级。真有长时间工作流的需求再说。
-
-A2A 协议在跟踪,观望中,等有人真用再跟。
-
-完整理由见 [`DECISIONS.md`](./DECISIONS.md)。
 
 ## 和其他框架怎么选
 
 如果你在看 [LangGraph JS](https://github.com/langchain-ai/langgraphjs):它是声明式图编排,自己定义节点、边、路由,`compile()` + `invoke()`。`open-multi-agent` 反过来,目标驱动:给一个团队和一个目标,协调者在运行时拆 DAG。想完全控拓扑、流程定下来的用 LangGraph;想写得少、迭代快、还在探索的选这个。LangGraph 有成熟 checkpoint,我们没做。
@@ -45,15 +29,29 @@ Python 栈直接用 [CrewAI](https://github.com/crewAIInc/crewAI) 就行,编
 
 和 [Vercel AI SDK](https://github.com/vercel/ai) 不冲突。AI SDK 是 LLM 调用层,统一的 TypeScript 客户端,60+ provider,带流式、tool call、结构化输出,但不做多智能体编排。要多 agent,把 `open-multi-agent` 叠在 AI SDK 上面就行。单 agent 用 AI SDK,多 agent 用这个。
 
-## 谁在用
+## 生态
 
-项目 2026-04-01 发布,目前 5,500+ stars,MIT 协议。目前能确认在用的:
+项目 2026-04-01 发布,MIT 协议。生态还在成型,下面的列表不长,但都是真的。
 
+### 生产环境在用
+
 - **[temodar-agent](https://github.com/xeloxa/temodar-agent)**(约 50 stars)。WordPress 安全分析平台,作者 [Ali Sünbül](https://github.com/xeloxa)。在 Docker runtime 里直接用我们的内置工具(`bash`、`file_*`、`grep`)。已确认生产环境使用。
 - **家用服务器 Cybersecurity SOC。** 本地完全离线跑 Qwen 2.5 + DeepSeek Coder(通过 Ollama),在 Wazuh + Proxmox 上搭自主 SOC 流水线。早期用户,未公开。
 
 如果你在生产或 side project 里用了 `open-multi-agent`,[请开个 Discussion](https://github.com/JackChen-me/open-multi-agent/discussions),我加上来。
 
+### 集成(免费)
+
+- **[Engram](https://www.engram-memory.com)** — "Git for AI memory." Syncs knowledge across agents instantly and flags conflicts. ([repo](https://github.com/Agentscreator/engram-memory))
+
+做了 `open-multi-agent` 集成?[开个 Discussion](https://github.com/JackChen-me/open-multi-agent/discussions),我加上来。
+
+### Featured Partner($3,000 / 年)
+
+12 个月显眼位置:logo、100 字介绍、maintainer 背书 quote。面向已经集成 `open-multi-agent` 的产品或平台。
+
+[咨询 Featured Partner](https://github.com/JackChen-me/open-multi-agent/issues/new?title=Featured+Partner+Inquiry&labels=featured-partner-inquiry)
+
 ## 快速开始
 
 需要 Node.js >= 18。
@@ -74,6 +72,8 @@ npm install @jackchen_me/open-multi-agent
 - `DEEPSEEK_API_KEY`(DeepSeek)
 - `GITHUB_TOKEN`(Copilot)
 
+### CLI(`oma`)
+
+包里还自带一个叫 `oma` 的命令行工具,给 shell 和 CI 场景用,输出都是 JSON。`oma run`、`oma task`、`oma provider`、退出码、文件格式都在 [docs/cli.md](./docs/cli.md) 里。
 
 下面用三个 agent 协作做一个 REST API:
@@ -178,16 +178,17 @@ Tokens: 12847 output tokens
 │ └───────────────────────┘
 ┌────────▼──────────┐
 │ Agent │
-│ - run() │ ┌──────────────────────┐
-│ - prompt() │───►│ LLMAdapter │
-│ - stream() │ │ - AnthropicAdapter │
-└────────┬──────────┘ │ - OpenAIAdapter │
-│ │ - CopilotAdapter │
-│ │ - GeminiAdapter │
-│ │ - GrokAdapter │
-│ │ - MiniMaxAdapter │
-│ │ - DeepSeekAdapter │
-│ └──────────────────────┘
+│ - run() │ ┌────────────────────────┐
+│ - prompt() │───►│ LLMAdapter │
+│ - stream() │ │ - AnthropicAdapter │
+└────────┬──────────┘ │ - OpenAIAdapter │
+│ │ - AzureOpenAIAdapter │
+│ │ - CopilotAdapter │
+│ │ - GeminiAdapter │
+│ │ - GrokAdapter │
+│ │ - MiniMaxAdapter │
+│ │ - DeepSeekAdapter │
+│ └────────────────────────┘
 ┌────────▼──────────┐
 │ AgentRunner │ ┌──────────────────────┐
 │ - conversation │───►│ ToolRegistry │
@@ -374,8 +375,6 @@ const agent: AgentConfig = {
 
 Gemini 需要 `npm install @google/genai`(optional peer dependency)。
 
-已验证支持 tool-calling 的本地模型:**Gemma 4**(见 [`providers/gemma4-local`](examples/providers/gemma4-local.ts))。
-
 OpenAI 兼容的 API 都能用 `provider: 'openai'` + `baseURL` 接(Mistral、Qwen、Moonshot、Doubao 等)。Groq 在 [`providers/groq`](examples/providers/groq.ts) 里验证过。Grok、MiniMax、DeepSeek 直接用 `provider: 'grok'`、`provider: 'minimax'`、`provider: 'deepseek'`,不用配 `baseURL`。
 
 ### 本地模型 Tool-Calling
 
@@ -455,16 +454,16 @@ Issue、feature request、PR 都欢迎。特别想要:
 ## 贡献者
 
 <a href="https://github.com/JackChen-me/open-multi-agent/graphs/contributors">
-<img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&max=20&v=20260419" />
+<img src="https://contrib.rocks/image?repo=JackChen-me/open-multi-agent&max=20&v=20260423" />
 </a>
 
 ## Star 趋势
 
 <a href="https://star-history.com/#JackChen-me/open-multi-agent&Date">
 <picture>
-<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark" />
-<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
-<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date" />
+<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&theme=dark&v=20260423" />
+<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260423" />
+<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=JackChen-me/open-multi-agent&type=Date&v=20260423" />
 </picture>
 </a>

package.json

@@ -48,6 +48,14 @@
 ],
 "author": "",
 "license": "MIT",
+"repository": {
+  "type": "git",
+  "url": "git+https://github.com/JackChen-me/open-multi-agent.git"
+},
+"homepage": "https://github.com/JackChen-me/open-multi-agent#readme",
+"bugs": {
+  "url": "https://github.com/JackChen-me/open-multi-agent/issues"
+},
 "engines": {
 "node": ">=18.0.0"
 },