docs: Add OpenCode CLI configuration getting-started guide

This commit is contained in:
tukuaiai 2026-01-10 21:43:46 +08:00
parent cfdc8bf800
commit 3ba2cc22a2
15 changed files with 895 additions and 7 deletions

View File

@ -113,6 +113,7 @@
1. [01-Network Environment Configuration](./i18n/zh/documents/01-入门指南/01-网络环境配置.md) - Configure network access
2. [02-Development Environment Setup](./i18n/zh/documents/01-入门指南/02-开发环境搭建.md) - Copy the prompt to an AI and let the AI guide you through environment setup
3. [03-IDE Configuration](./i18n/zh/documents/01-入门指南/03-IDE配置.md) - Configure the VS Code editor
4. [04-OpenCode CLI Configuration](./i18n/zh/documents/01-入门指南/04-OpenCode-CLI配置.md) - Free AI CLI tool supporting GLM-4.7, MiniMax M2.1, and other models
---

View File

@ -0,0 +1,11 @@
Applying Dialectical Thesis-Antithesis-Synthesis to Vibe Coding: I treat each coding session as a round of "triadic progression"
Thesis (Current State): First let the model quickly provide the "smoothest implementation" based on intuition, with only one goal: get the main path running as soon as possible
Antithesis (Audit & Tuning): Immediately take the "critic" perspective and challenge it: list failure modes/edge cases/performance and security concerns, and ground the challenges with tests, types, lint, benchmarks
Synthesis (Correction Based on Review): Combine speed with constraints: refactor interfaces, converge dependencies, complete tests and documentation, forming a more stable starting point for the next round
Practice Mantra: Write smoothly first → Then challenge → Then converge
Vibe is responsible for generating possibilities; thesis-antithesis-synthesis is responsible for turning possibilities into engineering certainty

View File

@ -0,0 +1,107 @@
### Phenomenological Reduction (Suspension of Assumptions) for Vibe Coding
**Core Purpose**
Strip "what I think the requirement is" from the conversation, leaving only observable, reproducible, and verifiable facts and experience structures, allowing the model to produce usable code with fewer assumptions.
---
## 1) Key Methods (Understanding in Engineering Context)
* **Epoché (Suspension)**: Temporarily withhold any "causal explanations/business inferences/best practice preferences."
Only record: what happened, what is expected, what are the constraints.
* **Reduction**: Reduce the problem to the minimal structure of "given input → process → output."
Don't discuss architecture, patterns, or tech stack elegance first.
* **Intentionality**: Clarify "who this feature is for, in what context, to achieve what experience."
Not "make a login," but "users can complete login within 2 seconds even on weak networks and get clear feedback."
---
## 2) Applicable Scenarios
* Requirement descriptions full of abstract words: fast, stable, "like product X," intelligent, smooth.
* The model starts "bringing its own assumptions": filling in product logic, arbitrarily selecting frameworks, adding complexity on its own.
* Hard-to-reproduce bugs: intermittent, environment-dependent, unclear input boundaries.
---
## 3) Operating Procedure (Can Follow Directly)
### A. Clear Away Explanations First, Keep Only Phenomena
Describe using four elements:
1. **Phenomenon**: Actual result (including errors/screenshots/log fragments).
2. **Intent**: Desired result (observable criteria).
3. **Context**: Environment and preconditions (version, platform, network, permissions, data scale).
4. **Boundaries**: What not to do/not to assume (don't change interface, don't introduce new dependencies, don't change database structure, etc.).
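The four elements above can be captured as a small structured record so a card exists before any generation starts. A minimal sketch in Python — the `PhenomenonCard` name, fields, and sample values are illustrative, not part of any tool:

```python
from dataclasses import dataclass, field

@dataclass
class PhenomenonCard:
    """Four-element description: observable facts only, no causal speculation."""
    phenomenon: str                                  # actual result (errors, log fragments)
    intent: str                                      # desired, observable result
    context: dict = field(default_factory=dict)      # version, platform, network, data scale
    boundaries: list = field(default_factory=list)   # things NOT to do or assume

# Hypothetical example card for a login issue
card = PhenomenonCard(
    phenomenon="POST /login returns 504 after ~30 s on mobile networks",
    intent="login completes within 2 s and shows clear feedback on failure",
    context={"platform": "Android 14", "network": "3G, 300 ms RTT"},
    boundaries=["don't change the API contract", "no new dependencies"],
)
```

Pasting such a card verbatim at the top of a session gives the model facts and constraints without inviting it to guess causes.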
### B. Produce "Minimal Reproducible Example" (MRE)
* Minimal input sample (shortest JSON/smallest table/smallest request)
* Minimal code snippet (remove unrelated modules)
* Clear reproduction steps (1, 2, 3)
* Expected vs. Actual (comparison table)
### C. Reduce "Abstract Words" to Testable Metrics
* "Fast" → P95 latency < X, cold start < Y, throughput >= Z
* "Stable" → Error rate < 0.1%, retry strategy, circuit breaker conditions
* "User-friendly" → Interaction feedback, error messages, undo/recovery capability
---
## 4) Prompt Templates for Models (Can Copy Directly)
**Template 1: Reduce Problem (No Speculation)**
```
Please first do "phenomenological reduction": don't speculate on causes, don't introduce extra features.
Based only on the information I provide, output:
1) Phenomenon (observable facts)
2) Intent (observable result I want)
3) Context (environment/constraints)
4) Undetermined items (minimum information that must be clarified or I need to provide)
5) Minimal reproducible steps (MRE)
Then provide the minimal fix solution and corresponding tests.
```
**Template 2: Abstract Requirements to Testable Specs**
```
Apply "suspension of assumptions" to the following requirements: remove all abstract words, convert to verifiable specs:
- Clear input/output
- Clear success/failure criteria
- Clear performance/resource metrics (if needed)
- Clear what NOT to do
Finally provide acceptance test case list.
Requirements: <paste>
```
---
## 5) Concrete Implementation in Vibe Coding (Building Habits)
* **Write "phenomenon card" before each work session** (2 minutes): phenomenon/intent/context/boundaries.
* **Have the model restate first**: require it to only restate facts and gaps, no solutions allowed.
* **Then enter generation**: solutions must be tied to "observable acceptance" and "falsifiable tests."
---
## 6) Common Pitfalls and Countermeasures
* **Pitfall: Treating explanations as facts** ("Might be caused by cache")
Countermeasure: Move "might" to "hypothesis list," each hypothesis with verification steps.
* **Pitfall: Requirements piled with adjectives**
Countermeasure: Force conversion to metrics and test cases; no writing code if not "testable."
* **Pitfall: Model self-selecting tech stack**
Countermeasure: Lock in boundaries: language/framework/dependencies/interfaces cannot change.
---
## 7) One-Sentence Mantra (Easy to Put in Toolbox Card)
**First suspend explanations, then fix phenomena; first write acceptance criteria, then let model write implementation.**

View File

@ -95,5 +95,12 @@ In the paradigm of Vibe Coding, we are no longer just "typists" but "architects
* **Reflective Equilibrium**: Iteratively calibrating specific judgments and general principles for systemic consistency.
* **Conceptual Engineering**: Actively engineering and optimizing conceptual tools to serve Vibe Coding practices.
---
## Detailed Method Guides
- [Phenomenological Reduction](./Phenomenological%20Reduction.md) - Suspension of assumptions for clear requirements
- [Dialectics](./Dialectics.md) - Thesis-Antithesis-Synthesis iterative development
---
*Note: This content evolves continuously as the supreme ideological directive of the Vibe Coding CN project.*

View File

@ -1,15 +1,13 @@
# 📖 Documents
# 📚 Documents
> Documentation library for Vibe Coding methodology, guides, and resources
> Vibe Coding knowledge system, organized by learning path
---
## 📁 Directory Structure
## 🗺️ Directory Structure
```
documents/
├── -01-philosophy-and-methodology/ # Supreme ideological directive
├── 00-fundamentals/ # Core concepts & principles
├── -01-philosophy-and-methodology/ # Supreme ideological directive, underlying logic
├── 00-fundamentals/ # Core concepts, glue coding, methodology
│ ├── Glue Coding.md
│ ├── Language Layer Elements.md
│ ├── Common Pitfalls.md
@ -44,6 +42,26 @@ documents/
└── Recommended Programming Books.md
```
## 🚀 Quick Navigation
| Directory | Description | Target Audience |
|:----------|:------------|:----------------|
| [-01-philosophy-and-methodology](./-01-philosophy-and-methodology/) | Ideological principles, epistemological tools | Architects & advanced developers |
| [00-fundamentals](./00-fundamentals/) | Glue coding, core concepts | Understanding fundamentals |
| [01-getting-started](./01-getting-started/) | Environment setup, from zero | Beginners |
| [02-methodology](./02-methodology/) | Tool tutorials, development experience | Improving efficiency |
| [03-practice](./03-practice/) | Project experience, case reviews | Hands-on practice |
| [04-resources](./04-resources/) | Templates, tools, external links | Reference lookup |
## 📖 Recommended Learning Path
1. **Philosophy** → [-01-philosophy-and-methodology](./-01-philosophy-and-methodology/README.md)
2. **Concepts** → [Glue Coding](./00-fundamentals/Glue%20Coding.md)
3. **Getting Started** → [Vibe Coding Philosophy](./01-getting-started/00-Vibe%20Coding%20Philosophy.md)
4. **Setup** → [Development Environment Setup](./01-getting-started/02-Development%20Environment%20Setup.md)
5. **Tools** → [tmux Shortcut Cheatsheet](./02-methodology/tmux%20Shortcut%20Cheatsheet.md)
6. **Practice** → [Practical Examples](./03-practice/)
---
## 🗂️ Categories
@ -51,6 +69,8 @@ documents/
### -01-philosophy-and-methodology
Supreme ideological directive and epistemological tools:
- **Philosophy & Methodology** - The underlying protocol of Vibe Coding
- **Phenomenological Reduction** - Suspension of assumptions for clear requirements
- **Dialectics** - Thesis-Antithesis-Synthesis iterative development
### 00-fundamentals
Core concepts and methodology:

View File

@ -0,0 +1,25 @@
# Workflow Collection
Directory for various automation workflows.
## Directory Structure
```
workflow/
├── auto-dev-loop/ # Fully automated development loop workflow (5-step Agent)
├── canvas-dev/ # Canvas whiteboard-driven development workflow
└── README.md
```
## Available Workflows
| Workflow | Description |
|----------|-------------|
| [auto-dev-loop](./auto-dev-loop/) | 5-step AI Agent closed-loop development process based on state machine + hooks |
| [canvas-dev](./canvas-dev/) | Canvas whiteboard-driven development workflow (AI Chief Architect) |
## Adding New Workflows
1. Create a subdirectory under this directory
2. Include necessary configuration files and documentation
3. Update this README

View File

@ -0,0 +1,26 @@
# CHANGELOG
## 2025-12-25T05:45:00+08:00 - Implemented workflow_engine MVP
- Key changes: Created `workflow_engine/` directory, implemented file event hooks + state machine scheduler
- Files/modules involved:
- `workflow_engine/runner.py` - State machine scheduler, supports start/dispatch/status commands
- `workflow_engine/hook_runner.sh` - inotify file watching hook
- `workflow_engine/state/current_step.json` - State file
- `workflow_engine/README.md` - Usage documentation
- Verification method and results: `python runner.py start` successfully executed step1→step5 full flow, artifacts saved to artifacts/
- Remaining issues and next steps: Integrate actual LLM calls to replace MOCK; add CI integration examples
## 2025-12-25T04:58:27+08:00 - Workflow Auto-loop Solution Analysis
- Key changes: Researched the five prompts under `workflow_steps`, analyzed closed-loop and master control requirements, output an implementable state machine/hook-style orchestrator design (no code changes).
- Files/modules involved: `step1_requirements.jsonl`, `step2_execution_plan.jsonl`, `step3_implementation.jsonl`, `step4_verification.jsonl`, `step5_controller.jsonl` (read only).
- Verification method and results: Analytical output, no code execution, TODO.
- Remaining issues and next steps: Implement orchestrator MVP; calibrate JSONL with PARE v3.0 structure; add persistent state and task queue for master control loop.
## 2025-12-25T05:04:00+08:00 - Moved workflow-orchestrator Skill Directory
- Key changes: Migrated `i18n/zh/skills/01-AI工具/workflow-orchestrator` to `prompt_jsonl/workflow_steps/` directory.
- Files/modules involved: `workflow-orchestrator/SKILL.md`, `workflow-orchestrator/AGENTS.md`, `workflow-orchestrator/references/index.md`, `workflow-orchestrator/CHANGELOG.md`.
- Verification method and results: Command line `mv` followed by directory structure check, files intact.
- Remaining issues and next steps: Add `workflow_engine` scripts in new location and align with skill documentation.

View File

@ -0,0 +1,93 @@
# Fully Automated Development Loop Workflow
A 5-step AI Agent workflow system based on **state machine + file hooks**.
## Directory Structure
```
workflow/
├── .kiro/agents/workflow.json # Kiro Agent configuration
├── workflow_engine/ # State machine scheduling engine
│ ├── runner.py # Core scheduler
│ ├── hook_runner.sh # File watching hook
│ ├── state/ # State files
│ └── artifacts/ # Artifacts directory
├── workflow-orchestrator/ # Orchestration skill documentation
├── step1_requirements.jsonl # Requirements locking Agent
├── step2_execution_plan.jsonl # Plan orchestration Agent
├── step3_implementation.jsonl # Implementation changes Agent
├── step4_verification.jsonl # Verification & release Agent
├── step5_controller.jsonl # Master control & loop Agent
└── CHANGELOG.md
```
## Quick Start
### Method 1: Using Kiro CLI
```bash
# Navigate to workflow directory
cd ~/projects/vibe-coding-cn/i18n/en/workflow
# Start with workflow agent
kiro-cli chat --agent workflow
```
### Method 2: Manual Execution
```bash
cd ~/projects/vibe-coding-cn/i18n/en/workflow
# Start workflow
python3 workflow_engine/runner.py start
# Check status
python3 workflow_engine/runner.py status
```
### Method 3: Auto Mode (Hook Watching)
```bash
# Terminal 1: Start file watching
./workflow_engine/hook_runner.sh
# Terminal 2: Trigger workflow
python3 workflow_engine/runner.py start
```
## Workflow Process
```
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Step1 │───▶│ Step2 │───▶│ Step3 │───▶│ Step4 │───▶│ Step5 │
│ Input │ │ Plan │ │ Impl │ │ Verify │ │ Control │
└─────────┘ └─────────┘ └─────────┘ └─────────┘ └────┬────┘
▲ │
│ Failure rollback │
└────────────────────────────────────────────┘
```
## Core Mechanisms
| Mechanism | Description |
|-----------|-------------|
| State-driven | `state/current_step.json` as the single scheduling entry point |
| File Hook | `inotifywait` watches state changes and triggers automatically |
| Loop Control | Step5 decides rollback or completion based on verification results |
| Circuit Breaker | Maximum 3 retries per task |
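The state-driven and circuit-breaker mechanisms in the table can be sketched together. The exact schema of `current_step.json` and the rollback-to-step-3 policy are assumptions for illustration; the real `runner.py` may differ:

```python
import json
from pathlib import Path

MAX_RETRIES = 3  # circuit breaker: at most 3 retries per task

def dispatch(state_file: Path) -> str:
    """Read the state file, advance or roll back, and write state back."""
    state = json.loads(state_file.read_text())
    step, retries = state["step"], state.get("retries", 0)
    if state.get("verified", True):
        state.update(step=min(step + 1, 5), retries=0)   # advance to next step
    elif retries < MAX_RETRIES:
        state.update(step=3, retries=retries + 1)        # roll back to implementation
    else:
        state["status"] = "failed"                       # circuit breaker trips
    state_file.write_text(json.dumps(state))
    return state.get("status", f"step{state['step']}")

# Usage: simulate a verification failure at step 4
f = Path("current_step.json")
f.write_text(json.dumps({"step": 4, "verified": False, "retries": 0}))
print(dispatch(f))  # rolls back: step 3, retries incremented to 1
```

Because the state file is the single scheduling entry point, the hook (`inotifywait`) only needs to re-run `dispatch` whenever the file changes.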
## Kiro Integration
Agent configuration is located at `.kiro/agents/workflow.json`, including:
- **hooks**: Agent lifecycle hooks
- `agentSpawn`: Read state on startup
- `stop`: Check state when conversation ends
- **resources**: Auto-load prompt files into context
- **toolsSettings**: Pre-authorize file operations and command execution
## Next Steps
- [ ] Integrate actual LLM calls (replace MOCK in runner.py)
- [ ] Add CI/CD integration examples
- [ ] Support parallel task processing

View File

@ -0,0 +1,59 @@
# 🎨 Canvas Whiteboard-Driven Development Workflow
> Graphics are first-class citizens; code is the serialized form of the whiteboard
## Core Philosophy
```
Traditional Development: Code → Verbal Communication → Mental Architecture → Code Chaos
Canvas Approach: Code ⇄ Whiteboard ⇄ AI ⇄ Human (Whiteboard as Single Source of Truth)
```
| Pain Point | Solution |
|:-----------|:---------|
| 🤖 AI can't understand project structure | ✅ AI reads whiteboard JSON directly, instantly grasps architecture |
| 🧠 Humans can't remember complex dependencies | ✅ Clear connections, ripple effects visible at a glance |
| 💬 Team collaboration relies on verbal explanation | ✅ Point at the whiteboard, newcomers understand in 5 minutes |
## File Structure
```
canvas-dev/
├── README.md # This file - Workflow overview
├── workflow.md # Complete workflow steps (linear process)
├── prompts/
│ ├── 01-architecture-analysis.md # Prompt for generating whiteboard from code
│ ├── 02-whiteboard-driven-coding.md # Prompt for generating code from whiteboard
│ └── 03-whiteboard-sync-check.md # Validate whiteboard-code consistency
├── templates/
│ ├── project.canvas # Obsidian Canvas project template
│ └── module.canvas # Single module whiteboard template
└── examples/
└── demo-project.canvas # Example project whiteboard
```
## Quick Start
### 1. Prepare Tools
- [Obsidian](https://obsidian.md/) - Free open-source whiteboard tool
- AI assistant (Claude/GPT-4, must support reading Canvas JSON)
### 2. Generate Project Architecture Whiteboard
```bash
# Provide project code path to AI, use architecture analysis prompt
# AI automatically generates .canvas file
```
### 3. Drive Development with Whiteboard
- Draw new modules and dependency relationships on the whiteboard
- Export whiteboard JSON and send to AI
- AI generates/modifies code based on the whiteboard
## Related Documentation
- [Canvas Whiteboard-Driven Development Guide](../../documents/02-methodology/Graphical AI Collaboration - Canvas Whiteboard-Driven Development.md)
- [Whiteboard-Driven Development System Prompt](../../prompts/01-system-prompts/AGENTS.md/12/AGENTS.md)
- [Glue Coding](../../documents/00-fundamentals/Glue Coding.md)

View File

@ -0,0 +1,85 @@
# 01-Architecture Analysis Prompt
> Automatically generate Obsidian Canvas architecture whiteboard from existing code
## Use Cases
- Taking over a new project, quickly understand architecture
- Create visual documentation for existing projects
- Prepare for Code Review or technical presentations
## Prompt
```markdown
You are a code architecture analysis expert. Please analyze the following project structure and generate an architecture whiteboard in Obsidian Canvas format.
## Input
Project path: {PROJECT_PATH}
Analysis granularity: {GRANULARITY} (file/class/service)
## Output Requirements
Generate a .canvas file conforming to Obsidian Canvas JSON format, including:
1. **Nodes**:
- Each module/file/class as a node
- Node contains: id, type, x, y, width, height, text
- Layout by functional zones (e.g., API layer on left, data layer on right)
2. **Edges**:
- Represent dependency/call relationships between modules
- Contains: id, fromNode, toNode, fromSide, toSide, label
- Label indicates relationship type (call/inheritance/dependency/data flow)
3. **Groups**:
- Group by functional domain (e.g., user module, payment module)
- Use colors to distinguish different layers
## Canvas JSON Structure Example
```json
{
"nodes": [
{
"id": "node1",
"type": "text",
"x": 0,
"y": 0,
"width": 200,
"height": 100,
"text": "# UserService\n- createUser()\n- getUser()"
}
],
"edges": [
{
"id": "edge1",
"fromNode": "node1",
"toNode": "node2",
"fromSide": "right",
"toSide": "left",
"label": "calls"
}
]
}
```
## Analysis Steps
1. Scan project directory structure
2. Identify entry files and core modules
3. Analyze import/require statements to extract dependency relationships
4. Identify database operations, API calls, external services
5. Layout node positions by call hierarchy
6. Generate complete .canvas JSON
```
## Usage Example
```
Please analyze the /home/user/my-project project and generate a file-level architecture whiteboard.
Focus on:
- API routes and handler functions
- Database models and operations
- External service calls
```
## Output File
The generated `.canvas` file can be directly opened and edited in Obsidian.

View File

@ -0,0 +1,88 @@
# 02-Whiteboard-Driven Coding Prompt
> Generate/modify code based on Canvas whiteboard architecture diagram
## Use Cases
- New feature development: Draw whiteboard first, then generate code
- Architecture refactoring: Modify whiteboard connections, AI syncs code refactoring
- Module splitting: Split nodes on whiteboard, AI generates new files
## Prompt
```markdown
You are an expert at generating code from architecture whiteboards. Please generate corresponding code implementation based on the following Obsidian Canvas whiteboard JSON.
## Input
Canvas JSON:
```json
{CANVAS_JSON}
```
Tech stack: {TECH_STACK}
Target directory: {TARGET_DIR}
## Parsing Rules
1. **Node → File/Class**
- Title in node text → filename/classname
- List items in node text → methods/functions
- Node color/group → module affiliation
2. **Edge → Dependency Relationship**
- fromNode → toNode = import/call relationship
- Edge label determines relationship type:
- "calls" → function call
- "extends" → class extends
- "depends" → import
- "data flow" → parameter passing
3. **Group → Directory Structure**
- Nodes in the same group go in the same directory
- Group name → directory name
## Output Requirements
1. Generate complete file structure
2. Each file contains:
- Correct import statements (based on edges)
- Class/function definitions (based on node content)
- Call relationship implementation (based on edge direction)
3. Add necessary type annotations and comments
4. Follow tech stack best practices
## Output Format
```
File: {file_path}
```{language}
{code_content}
```
```
## Usage Example
```
Generate Python FastAPI project code based on the following whiteboard:
{paste .canvas file content}
Tech stack: Python 3.11 + FastAPI + SQLAlchemy
Target directory: /home/user/my-api
```
## Incremental Update Mode
When whiteboard is modified, use the following prompt:
```markdown
Whiteboard has been updated, please compare old and new versions, only modify changed parts:
Old whiteboard: {OLD_CANVAS_JSON}
New whiteboard: {NEW_CANVAS_JSON}
Output:
1. Files to add
2. Files to modify (output only diff)
3. Files to delete
```
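The parsing rules in the prompt can themselves be sketched programmatically — useful for sanity-checking a whiteboard before sending it to the model. Node and edge field names follow the Obsidian Canvas JSON shape shown earlier; the planning logic (only `depends` edges become imports) is an illustrative simplification:

```python
import json

def plan_from_canvas(canvas: dict) -> dict:
    """Map text nodes to file names and 'depends' edges to import lists."""
    # Node title (first line of text, stripped of '# ') -> file/class name
    nodes = {n["id"]: n["text"].split("\n")[0].strip("# ")
             for n in canvas["nodes"] if n.get("type") == "text"}
    plan = {name: {"imports": []} for name in nodes.values()}
    for e in canvas.get("edges", []):
        # fromNode -> toNode with label "depends" becomes an import
        if e.get("label") == "depends":
            plan[nodes[e["fromNode"]]]["imports"].append(nodes[e["toNode"]])
    return plan

canvas = json.loads("""{
  "nodes": [
    {"id": "n1", "type": "text", "x": 0, "y": 0, "width": 200, "height": 100,
     "text": "# UserService\\n- createUser()"},
    {"id": "n2", "type": "text", "x": 300, "y": 0, "width": 200, "height": 100,
     "text": "# UserRepo\\n- save()"}
  ],
  "edges": [
    {"id": "e1", "fromNode": "n1", "toNode": "n2",
     "fromSide": "right", "toSide": "left", "label": "depends"}
  ]
}""")
print(plan_from_canvas(canvas))  # {'UserService': {'imports': ['UserRepo']}, 'UserRepo': {'imports': []}}
```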

View File

@ -0,0 +1,147 @@
# 03-Whiteboard Sync Check Prompt
> Validate consistency between whiteboard and actual code
## Use Cases
- Check if whiteboard needs updating before PR/MR merge
- Periodic audit of architecture documentation accuracy
- Discover implicit dependencies in code
## Prompt
```markdown
You are a code and architecture consistency checking expert. Please compare the following whiteboard and code to find inconsistencies.
## Input
Canvas whiteboard JSON:
```json
{CANVAS_JSON}
```
Project code path: {PROJECT_PATH}
## Check Items
1. **Node Completeness**
- Do all nodes in the whiteboard have corresponding code files/classes?
- Are there important modules in code not recorded in whiteboard?
2. **Edge Accuracy**
- Do whiteboard edges reflect real import/call relationships?
- Are there dependencies in code not marked in whiteboard?
3. **Group Correctness**
- Is whiteboard grouping consistent with directory structure?
- Are there abnormal cross-group dependencies?
## Output Format
### 🔴 Severe Inconsistencies (Must Fix)
| Type | Whiteboard | Code | Suggestion |
|:-----|:-----------|:-----|:-----------|
| Missing node | - | UserService.py | Add to whiteboard |
| Wrong edge | A→B | A doesn't call B | Remove edge |
### 🟡 Minor Inconsistencies (Recommend Fix)
| Type | Whiteboard | Code | Suggestion |
|:-----|:-----------|:-----|:-----------|
| Naming inconsistency | user_service | UserService | Unify naming |
### 🟢 Good Consistency
- Node coverage: {X}%
- Edge accuracy: {Y}%
### 📋 Fix Suggestions
1. {specific fix step}
2. {specific fix step}
```
## Automation Script (Optional)
```python
#!/usr/bin/env python3
"""
canvas_sync_check.py - Whiteboard and code consistency check script
Usage: python canvas_sync_check.py project.canvas /path/to/project
"""
import json
import ast
import os
from pathlib import Path
def load_canvas(canvas_path):
with open(canvas_path) as f:
return json.load(f)
def extract_imports(py_file):
"""Extract import relationships from Python file"""
with open(py_file) as f:
tree = ast.parse(f.read())
imports = []
for node in ast.walk(tree):
if isinstance(node, ast.Import):
for alias in node.names:
imports.append(alias.name)
elif isinstance(node, ast.ImportFrom):
if node.module:
imports.append(node.module)
return imports
def check_consistency(canvas, project_path):
    """Compare whiteboard nodes with actual files"""
    # Only text-type nodes carry a 'text' field; file/link/group nodes do not
    canvas_nodes = {n['text'].split('\n')[0].strip('# ')
                    for n in canvas.get('nodes', [])
                    if n.get('type') == 'text' and n.get('text')}
    actual_files = {p.stem for p in Path(project_path).rglob('*.py')}
    missing_in_canvas = actual_files - canvas_nodes
    missing_in_code = canvas_nodes - actual_files
    # Guard against empty projects to avoid ZeroDivisionError
    coverage = (len(canvas_nodes & actual_files) / len(actual_files) * 100
                if actual_files else 0.0)
    return {
        'missing_in_canvas': missing_in_canvas,
        'missing_in_code': missing_in_code,
        'coverage': coverage
    }
if __name__ == '__main__':
import sys
if len(sys.argv) != 3:
print("Usage: python canvas_sync_check.py <canvas_file> <project_path>")
sys.exit(1)
canvas = load_canvas(sys.argv[1])
result = check_consistency(canvas, sys.argv[2])
print(f"Coverage: {result['coverage']:.1f}%")
if result['missing_in_canvas']:
print(f"Missing in whiteboard: {result['missing_in_canvas']}")
if result['missing_in_code']:
print(f"Missing in code: {result['missing_in_code']}")
```
## CI/CD Integration
```yaml
# .github/workflows/canvas-check.yml
name: Canvas Sync Check
on:
pull_request:
paths:
- '**.py'
- '**.canvas'
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check canvas consistency
run: python scripts/canvas_sync_check.py docs/architecture.canvas src/
```

View File

@ -0,0 +1,31 @@
🚀 Canvas-Driven Development Method - Complete Workflow
1. Understand Core Philosophy: Canvas whiteboard as single source of truth, code is its serialized form; graphical language superior to text description; humans responsible for architecture design, AI responsible for code implementation
2. Prepare Tool Environment: Install Obsidian (free open-source whiteboard tool); Configure AI assistant (Claude/GPT-4, must support reading Canvas JSON format); Prepare target project codebase
3. Generate Initial Architecture Whiteboard: Provide project code path to AI; Use architecture analysis prompt to have AI scan project structure; AI automatically generates .canvas file containing module nodes and dependency connections
4. Open .canvas File in Obsidian: Import generated architecture whiteboard; Check auto-identified modules, files, API call relationships; Verify key dependency connections are accurate
5. Manually Optimize Whiteboard Architecture: Drag and adjust module positions for clear layout; Add implicit dependency connections AI missed; Add annotation nodes to mark key design decisions; Remove redundant or incorrect connections
6. Establish Code-Whiteboard Sync Mechanism: [Assumption: automation tools exist] Configure code change monitoring script; Set whiteboard auto-update rules (new file → new node, new import → new connection); Or manual maintenance: update corresponding whiteboard area after each code change
7. Use Whiteboard to Drive AI Programming (New Feature Development): Draw new module boxes and expected call relationships on whiteboard; Export whiteboard JSON and send to AI; Instruction: "Implement concrete code according to this architecture diagram"; AI generates files and function calls based on node names and connection directions
8. Use Whiteboard to Drive Code Refactoring (Architecture Adjustment): Delete/reconnect dependency lines between modules on whiteboard; Mark large modules to be split (e.g., payment_service split into payment_processor and payment_validator); Send modified whiteboard to AI: "Refactor code according to new architecture, list files to modify"
9. Use Whiteboard for Code Review: View whiteboard global architecture before review; Identify abnormal connections (e.g., frontend directly connecting to database, circular dependencies); Mark problem points on whiteboard; During discussion, point to whiteboard: "This call chain shouldn't exist"
10. Use Whiteboard to Accelerate Team Collaboration: Newcomers first view whiteboard for 1 minute to understand the big picture; Draw change scope on whiteboard during requirement review; Project whiteboard during technical planning meetings instead of code; Convert whiteboard annotations to development tasks after meeting
11. Maintain Whiteboard-Code Consistency: Check if whiteboard needs updating before each PR/MR merge; Periodically run auto-validation script: compare whiteboard JSON with actual code dependencies; When inconsistencies found, prioritize fixing whiteboard (because whiteboard is source of truth)
12. Extended Use Cases: Auto-generate whiteboard when taking over legacy projects for quick understanding; Mark hot paths on whiteboard during performance optimization; Check sensitive data flow on whiteboard during security audits; Draw service call topology on whiteboard during API design
13. [Gap Clarification] Specify your project type to optimize workflow: A) Monolith (single process, multiple modules) B) Microservices architecture (multiple services, RPC communication) C) Frontend-backend separation (frontend framework + backend API)? Default assumption A to continue
14. [Gap Clarification] Choose whiteboard granularity level: A) File level (each code file as one node) B) Class/function level (each class as one node) C) Service level (only show large modules)? Recommended: A for beginners, C for complex projects
15. Continuously Iterate Workflow: Weekly review if whiteboard reflects real architecture; Collect team feedback to optimize node naming and layout rules; Explore whiteboard integration with CI/CD (e.g., PR triggers whiteboard diff check); Share best practice cases to team knowledge base

View File

@ -0,0 +1,187 @@
# OpenCode CLI Configuration
> Free AI coding assistant, supports 75+ models, no credit card required
OpenCode is an open-source AI coding agent available as a terminal app, a desktop app, and IDE extensions. Free models can be used without an account.
Official site: [opencode.ai](https://opencode.ai/)
---
## Installation
```bash
# One-line install (recommended)
curl -fsSL https://opencode.ai/install | bash
# Or via npm
npm install -g opencode-ai
# Or via Homebrew (macOS/Linux)
brew install anomalyco/tap/opencode
# Windows - Scoop
scoop bucket add extras && scoop install extras/opencode
# Windows - Chocolatey
choco install opencode
```
---
## Free Model Configuration
OpenCode supports multiple free model providers; no payment is required.
### Method 1: Z.AI (recommended): GLM-4.7
1. Visit the [Z.AI API console](https://z.ai/manage-apikey/apikey-list), register, and create an API Key
2. Run the `/connect` command and search for **Z.AI**
3. Enter the API Key
4. Run `/models` and select **GLM-4.7**
```bash
opencode
# Once inside, enter:
/connect
# Select Z.AI and enter your API Key
/models
# Select GLM-4.7
```
### Method 2: MiniMax: M2.1
1. Visit the [MiniMax API console](https://platform.minimax.io/login), register, and create an API Key
2. Run `/connect` and search for **MiniMax**
3. Enter the API Key
4. Run `/models` and select **M2.1**
### Method 3: Hugging Face: Multiple Free Models
1. Visit [Hugging Face settings](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) and create a Token
2. Run `/connect` and search for **Hugging Face**
3. Enter the Token
4. Run `/models` and select **Kimi-K2-Instruct** or **GLM-4.6**
### Method 4: Local Models: Ollama
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a model
ollama pull llama2
```
Configure in `opencode.json`:
```json
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"ollama": {
"npm": "@ai-sdk/openai-compatible",
"name": "Ollama (local)",
"options": {
"baseURL": "http://localhost:11434/v1"
},
"models": {
"llama2": {
"name": "Llama 2"
}
}
}
}
}
```
---
## Core Commands
| Command | Function |
|:---|:---|
| `/models` | Switch models |
| `/connect` | Add an API Key |
| `/init` | Initialize the project (generates AGENTS.md) |
| `/undo` | Undo the last change |
| `/redo` | Redo |
| `/share` | Share a conversation link |
| `Tab` | Toggle Plan mode (plan only, no execution) |
---
## Let the AI Handle All Configuration Tasks
The core mindset of OpenCode: **hand every configuration task to the AI**.
### Example: Install an MCP server
```
Install the filesystem MCP server for me and configure it in opencode
```
### Example: Deploy a GitHub open-source project
```
Clone the https://github.com/xxx/yyy project, read the README, and complete all dependency installation and environment configuration for me
```
### Example: Configure Skills
```
Read the project structure and create an appropriate AGENTS.md rules file for this project
```
### Example: Configure environment variables
```
Check which environment variables the project needs, create a .env file template for me, and explain the purpose of each variable
```
### Example: Install dependencies
```
Analyze package.json / requirements.txt, install all dependencies, and resolve version conflicts
```
---
## Recommended Workflow
1. **Enter the project directory**
```bash
cd /path/to/project
opencode
```
2. **Initialize the project**
```
/init
```
3. **Switch to a free model**
```
/models
# Select GLM-4.7 or MiniMax M2.1
```
4. **Start working**
- First press `Tab` to switch to Plan mode and let the AI plan
- After confirming the plan, let the AI execute
---
## Configuration File Locations
- Global config: `~/.config/opencode/opencode.json`
- Project config: `./opencode.json` (project root)
- Credentials: `~/.local/share/opencode/auth.json`
---
## Related Resources
- [OpenCode official documentation](https://opencode.ai/docs/)
- [GitHub repository](https://github.com/opencode-ai/opencode)
- [Models.dev - model catalog](https://models.dev)