diff --git a/README.md b/README.md index 21d7bcc..2a9bdae 100644 --- a/README.md +++ b/README.md @@ -113,6 +113,7 @@ 1. [01-网络环境配置](./i18n/zh/documents/01-入门指南/01-网络环境配置.md) - 配置网络访问 2. [02-开发环境搭建](./i18n/zh/documents/01-入门指南/02-开发环境搭建.md) - 复制提示词给 AI,让 AI 指导你搭建环境 3. [03-IDE配置](./i18n/zh/documents/01-入门指南/03-IDE配置.md) - 配置 VS Code 编辑器 +4. [04-OpenCode-CLI配置](./i18n/zh/documents/01-入门指南/04-OpenCode-CLI配置.md) - 免费 AI CLI 工具,支持 GLM-4.7/MiniMax M2.1 等模型 --- diff --git a/i18n/en/documents/-01-philosophy-and-methodology/Dialectics.md b/i18n/en/documents/-01-philosophy-and-methodology/Dialectics.md new file mode 100644 index 0000000..79eb472 --- /dev/null +++ b/i18n/en/documents/-01-philosophy-and-methodology/Dialectics.md @@ -0,0 +1,11 @@ +Applying Dialectical Thesis-Antithesis-Synthesis to Vibe Coding: I treat each coding session as a round of "triadic progression" + +Thesis (Current State): First let the model quickly provide the "smoothest implementation" based on intuition, with only one goal: get the main path running as soon as possible + +Antithesis (Audit & Tuning): Immediately take the "critic" perspective and challenge it: list failure modes/edge cases/performance and security concerns, and ground the challenges with tests, types, lint, benchmarks + +Synthesis (Correction Based on Review): Combine speed with constraints: refactor interfaces, converge dependencies, complete tests and documentation, forming a more stable starting point for the next round + +Practice Mantra: Write smoothly first → Then challenge → Then converge + +Vibe is responsible for generating possibilities, thesis-antithesis-synthesis is responsible for turning possibilities into engineering certainties diff --git a/i18n/en/documents/-01-philosophy-and-methodology/Phenomenological Reduction.md b/i18n/en/documents/-01-philosophy-and-methodology/Phenomenological Reduction.md new file mode 100644 index 0000000..ab93cec --- /dev/null +++ 
b/i18n/en/documents/-01-philosophy-and-methodology/Phenomenological Reduction.md @@ -0,0 +1,107 @@ +### Phenomenological Reduction (Suspension of Assumptions) for Vibe Coding + +**Core Purpose** +Strip "what I think the requirement is" from the conversation, leaving only observable, reproducible, and verifiable facts and experience structures, allowing the model to produce usable code with fewer assumptions. + +--- + +## 1) Key Methods (Understanding in Engineering Context) + +* **Epoché (Suspension)**: Temporarily withhold any "causal explanations/business inferences/best practice preferences." Only record: what happened, what is expected, what are the constraints. + +* **Reduction**: Reduce the problem to the minimal structure of "given input → process → output." Don't discuss architecture, patterns, or tech stack elegance first. + +* **Intentionality**: Clarify "who this feature is for, in what context, to achieve what experience." Not "make a login," but "users can complete login within 2 seconds even on weak networks and get clear feedback." + +--- + +## 2) Applicable Scenarios + +* Requirements descriptions full of abstract words: fast, stable, "like X," intelligent, smooth. + +* Model starts "bringing its own assumptions": filling in product logic, randomly selecting frameworks, adding complexity on its own. +* Hard-to-reproduce bugs: intermittent, environment-related, unclear input boundaries. + +--- + +## 3) Operating Procedure (Can Follow Directly) + +### A. First Clear Away Explanations, Keep Only Phenomena + +Describe using four elements: + +1. **Phenomenon**: Actual result (including errors/screenshots/log fragments). +2. **Intent**: Desired result (observable criteria). +3. **Context**: Environment and preconditions (version, platform, network, permissions, data scale). +4. **Boundaries**: What not to do/not to assume (don't change interface, don't introduce new dependencies, don't change database structure, etc.). + +### B. 
Produce "Minimal Reproducible Example" (MRE) + +* Minimal input sample (shortest JSON/smallest table/smallest request) +* Minimal code snippet (remove unrelated modules) +* Clear reproduction steps (1, 2, 3) +* Expected vs. Actual (comparison table) + +### C. Reduce "Abstract Words" to Testable Metrics + +* "Fast" → P95 latency < X, cold start < Y, throughput >= Z +* "Stable" → Error rate < 0.1%, retry strategy, circuit breaker conditions +* "User-friendly" → Interaction feedback, error messages, undo/recovery capability + +--- + +## 4) Prompt Templates for Models (Can Copy Directly) + +**Template 1: Reduce Problem (No Speculation)** + +``` +Please first do "phenomenological reduction": don't speculate on causes, don't introduce extra features. +Based only on the information I provide, output: +1) Phenomenon (observable facts) +2) Intent (observable result I want) +3) Context (environment/constraints) +4) Undetermined items (minimum information that must be clarified or I need to provide) +5) Minimal reproducible steps (MRE) +Then provide the minimal fix solution and corresponding tests. +``` + +**Template 2: Abstract Requirements to Testable Specs** + +``` +Apply "suspension of assumptions" to the following requirements: remove all abstract words, convert to verifiable specs: +- Clear input/output +- Clear success/failure criteria +- Clear performance/resource metrics (if needed) +- Clear what NOT to do +Finally provide acceptance test case list. +Requirements: +``` + +--- + +## 5) Concrete Implementation in Vibe Coding (Building Habits) + +* **Write "phenomenon card" before each work session** (2 minutes): phenomenon/intent/context/boundaries. +* **Have the model restate first**: require it to only restate facts and gaps, no solutions allowed. +* **Then enter generation**: solutions must be tied to "observable acceptance" and "falsifiable tests." 
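As one way to build the habit, the "phenomenon card" can be kept as a small structured record rather than free-form notes, so each session starts from facts instead of explanations. A minimal Python sketch — the class and field names here are illustrative, not part of any project convention:

```python
from dataclasses import dataclass, field

@dataclass
class PhenomenonCard:
    """Facts-only briefing written in ~2 minutes before each work session."""
    phenomenon: str  # observable result: error text, log fragment, screenshot ref
    intent: str      # desired result, stated as observable criteria
    context: dict = field(default_factory=dict)     # version, platform, network, data scale
    boundaries: list = field(default_factory=list)  # what NOT to do / NOT to assume

card = PhenomenonCard(
    phenomenon="POST /login returns 504 after ~30 s on throttled connections",
    intent="login completes within 2 s on weak networks, with clear feedback",
    context={"backend": "v2.3.1", "network": "3G throttle profile"},
    boundaries=["do not change the interface", "do not add dependencies"],
)

# An empty boundaries list is exactly where the model starts assuming.
assert card.boundaries, "every card should lock in at least one boundary"
```

Asking the model to restate only `phenomenon`, `context`, and the remaining gaps — before it proposes any solution — is the "restate first" rule above made mechanical.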
+ +--- + +## 6) Common Pitfalls and Countermeasures + +* **Pitfall: Treating explanations as facts** ("Might be caused by cache") + Countermeasure: Move every "might" into a "hypothesis list," and pair each hypothesis with its own verification steps. + +* **Pitfall: Requirements piled with adjectives** + Countermeasure: Force conversion to metrics and test cases; write no code until the requirement is testable. + +* **Pitfall: Model self-selecting tech stack** + Countermeasure: Lock in boundaries: language/framework/dependencies/interfaces cannot change. + +--- + +## 7) One-Sentence Mantra (Fits on a Toolbox Card) + +**First suspend explanations, then fix phenomena; first write acceptance criteria, then let the model write the implementation.** diff --git a/i18n/en/documents/-01-philosophy-and-methodology/README.md b/i18n/en/documents/-01-philosophy-and-methodology/README.md index 2fec1c5..f69e0e5 100644 --- a/i18n/en/documents/-01-philosophy-and-methodology/README.md +++ b/i18n/en/documents/-01-philosophy-and-methodology/README.md @@ -95,5 +95,12 @@ In the paradigm of Vibe Coding, we are no longer just "typists" but "architects * **Reflective Equilibrium**: Iteratively calibrating specific judgments and general principles for systemic consistency. * **Conceptual Engineering**: Actively engineering and optimizing conceptual tools to serve Vibe Coding practices. 
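The first countermeasure in the pitfalls above — parking every "might" in a hypothesis list until its verification step has run — can likewise be made concrete. A minimal sketch; the class and example hypotheses are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    """A 'might be caused by ...' moved out of the facts and into a checklist."""
    claim: str                     # suspected cause, stated so it can be refuted
    verification: str              # concrete step that confirms or rules it out
    result: Optional[bool] = None  # None until the step has actually been run

hypotheses = [
    Hypothesis("might be caused by cache", "reproduce with the cache disabled"),
    Hypothesis("might be a cold-start race", "reproduce after a warm restart"),
]

# Unverified hypotheses must not drive fixes; they are questions, not facts.
unverified = [h for h in hypotheses if h.result is None]
assert len(unverified) == len(hypotheses)
```

Keeping the list next to the phenomenon record makes "might" visibly distinct from "is," which is the whole point of the suspension.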
+--- + +## Detailed Method Guides + +- [Phenomenological Reduction](./Phenomenological%20Reduction.md) - Suspension of assumptions for clear requirements +- [Dialectics](./Dialectics.md) - Thesis-Antithesis-Synthesis iterative development + --- *Note: This content evolves continuously as the supreme ideological directive of the Vibe Coding CN project.* diff --git a/i18n/en/documents/README.md b/i18n/en/documents/README.md index 7e1a071..c0d817a 100644 --- a/i18n/en/documents/README.md +++ b/i18n/en/documents/README.md @@ -1,15 +1,13 @@ -# 📖 Documents +# 📚 Documents -> Documentation library for Vibe Coding methodology, guides, and resources +> Vibe Coding knowledge system, organized by learning path ---- - -## 📁 Directory Structure +## 🗺️ Directory Structure ``` documents/ -├── -01-philosophy-and-methodology/ # Supreme ideological directive -├── 00-fundamentals/ # Core concepts & principles +├── -01-philosophy-and-methodology/ # Supreme ideological directive, underlying logic +├── 00-fundamentals/ # Core concepts, glue coding, methodology │ ├── Glue Coding.md │ ├── Language Layer Elements.md │ ├── Common Pitfalls.md @@ -44,6 +42,26 @@ documents/ └── Recommended Programming Books.md ``` +## 🚀 Quick Navigation + +| Directory | Description | Best For | +|:----------|:------------|:---------| +| [-01-philosophy-and-methodology](./-01-philosophy-and-methodology/) | Ideological principles, epistemological tools | Architects & advanced developers | +| [00-fundamentals](./00-fundamentals/) | Glue coding, core concepts | Understanding fundamentals | +| [01-getting-started](./01-getting-started/) | Environment setup, from zero | Beginners | +| [02-methodology](./02-methodology/) | Tool tutorials, development experience | Improving efficiency | +| [03-practice](./03-practice/) | Project experience, case reviews | Hands-on practice | +| [04-resources](./04-resources/) | Templates, tools, external links | Reference lookup | + +## 📖 Recommended Learning Path + +1. 
**Philosophy** → [-01-philosophy-and-methodology](./-01-philosophy-and-methodology/README.md) +2. **Concepts** → [Glue Coding](./00-fundamentals/Glue%20Coding.md) +3. **Getting Started** → [Vibe Coding Philosophy](./01-getting-started/00-Vibe%20Coding%20Philosophy.md) +4. **Setup** → [Development Environment Setup](./01-getting-started/02-Development%20Environment%20Setup.md) +5. **Tools** → [tmux Shortcut Cheatsheet](./02-methodology/tmux%20Shortcut%20Cheatsheet.md) +6. **Practice** → [Practical Examples](./03-practice/) + --- ## 🗂️ Categories @@ -51,6 +69,8 @@ documents/ ### -01-philosophy-and-methodology Supreme ideological directive and epistemological tools: - **Philosophy & Methodology** - The underlying protocol of Vibe Coding +- **Phenomenological Reduction** - Suspension of assumptions for clear requirements +- **Dialectics** - Thesis-Antithesis-Synthesis iterative development ### 00-fundamentals Core concepts and methodology: diff --git a/i18n/en/workflow/README.md b/i18n/en/workflow/README.md new file mode 100644 index 0000000..9916da2 --- /dev/null +++ b/i18n/en/workflow/README.md @@ -0,0 +1,25 @@ +# Workflow Collection + +Directory for various automation workflows. + +## Directory Structure + +``` +workflow/ +├── auto-dev-loop/ # Fully automated development loop workflow (5-step Agent) +├── canvas-dev/ # Canvas whiteboard-driven development workflow +└── README.md +``` + +## Available Workflows + +| Workflow | Description | +|----------|-------------| +| [auto-dev-loop](./auto-dev-loop/) | 5-step AI Agent closed-loop development process based on state machine + hooks | +| [canvas-dev](./canvas-dev/) | Canvas whiteboard-driven development workflow (AI Chief Architect) | + +## Adding New Workflows + +1. Create a subdirectory under this directory +2. Include necessary configuration files and documentation +3. 
Update this README diff --git a/i18n/en/workflow/auto-dev-loop/CHANGELOG.md b/i18n/en/workflow/auto-dev-loop/CHANGELOG.md new file mode 100644 index 0000000..9ac2a3e --- /dev/null +++ b/i18n/en/workflow/auto-dev-loop/CHANGELOG.md @@ -0,0 +1,26 @@ +# CHANGELOG + +## 2025-12-25T05:45:00+08:00 - Implemented workflow_engine MVP + +- Key changes: Created `workflow_engine/` directory, implemented file event hooks + state machine scheduler +- Files/modules involved: + - `workflow_engine/runner.py` - State machine scheduler, supports start/dispatch/status commands + - `workflow_engine/hook_runner.sh` - inotify file watching hook + - `workflow_engine/state/current_step.json` - State file + - `workflow_engine/README.md` - Usage documentation +- Verification method and results: `python runner.py start` successfully executed step1→step5 full flow, artifacts saved to artifacts/ +- Remaining issues and next steps: Integrate actual LLM calls to replace MOCK; add CI integration examples + +## 2025-12-25T04:58:27+08:00 - Workflow Auto-loop Solution Analysis + +- Key changes: Researched the five prompts under `workflow_steps`, analyzed closed-loop and master control requirements, output an implementable state machine/hook-style orchestrator design (no code changes). +- Files/modules involved: `step1_requirements.jsonl`, `step2_execution_plan.jsonl`, `step3_implementation.jsonl`, `step4_verification.jsonl`, `step5_controller.jsonl` (read only). +- Verification method and results: Analytical output, no code execution, TODO. +- Remaining issues and next steps: Implement orchestrator MVP; calibrate JSONL with PARE v3.0 structure; add persistent state and task queue for master control loop. + +## 2025-12-25T05:04:00+08:00 - Moved workflow-orchestrator Skill Directory + +- Key changes: Migrated `i18n/zh/skills/01-AI工具/workflow-orchestrator` to `prompt_jsonl/workflow_steps/` directory. 
+- Files/modules involved: `workflow-orchestrator/SKILL.md`, `workflow-orchestrator/AGENTS.md`, `workflow-orchestrator/references/index.md`, `workflow-orchestrator/CHANGELOG.md`. +- Verification method and results: Command line `mv` followed by directory structure check, files intact. +- Remaining issues and next steps: Add `workflow_engine` scripts in new location and align with skill documentation. diff --git a/i18n/en/workflow/auto-dev-loop/README.md b/i18n/en/workflow/auto-dev-loop/README.md new file mode 100644 index 0000000..83877ab --- /dev/null +++ b/i18n/en/workflow/auto-dev-loop/README.md @@ -0,0 +1,93 @@ +# Fully Automated Development Loop Workflow + +A 5-step AI Agent workflow system based on **state machine + file hooks**. + +## Directory Structure + +``` +workflow/ +├── .kiro/agents/workflow.json # Kiro Agent configuration +├── workflow_engine/ # State machine scheduling engine +│ ├── runner.py # Core scheduler +│ ├── hook_runner.sh # File watching hook +│ ├── state/ # State files +│ └── artifacts/ # Artifacts directory +├── workflow-orchestrator/ # Orchestration skill documentation +├── step1_requirements.jsonl # Requirements locking Agent +├── step2_execution_plan.jsonl # Plan orchestration Agent +├── step3_implementation.jsonl # Implementation changes Agent +├── step4_verification.jsonl # Verification & release Agent +├── step5_controller.jsonl # Master control & loop Agent +└── CHANGELOG.md +``` + +## Quick Start + +### Method 1: Using Kiro CLI + +```bash +# Navigate to workflow directory +cd ~/projects/vibe-coding-cn/i18n/en/workflow + +# Start with workflow agent +kiro-cli chat --agent workflow +``` + +### Method 2: Manual Execution + +```bash +cd ~/projects/vibe-coding-cn/i18n/en/workflow + +# Start workflow +python3 workflow_engine/runner.py start + +# Check status +python3 workflow_engine/runner.py status +``` + +### Method 3: Auto Mode (Hook Watching) + +```bash +# Terminal 1: Start file watching +./workflow_engine/hook_runner.sh + +# 
Terminal 2: Trigger workflow +python3 workflow_engine/runner.py start +``` + +## Workflow Process + +``` +┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ +│ Step1 │───▶│ Step2 │───▶│ Step3 │───▶│ Step4 │───▶│ Step5 │ +│ Input │ │ Plan │ │ Impl │ │ Verify │ │ Control │ +└─────────┘ └─────────┘ └─────────┘ └─────────┘ └────┬────┘ + ▲ │ + │ Failure rollback │ + └────────────────────────────────────────────┘ +``` + +## Core Mechanisms + +| Mechanism | Description | +|-----------|-------------| +| State-driven | `state/current_step.json` as the single scheduling entry point | +| File Hook | `inotifywait` watches state changes and triggers automatically | +| Loop Control | Step5 decides rollback or completion based on verification results | +| Circuit Breaker | Maximum 3 retries per task | + +## Kiro Integration + +Agent configuration is located at `.kiro/agents/workflow.json`, including: + +- **hooks**: Agent lifecycle hooks + - `agentSpawn`: Read state on startup + - `stop`: Check state when conversation ends +- **resources**: Auto-load prompt files into context +- **toolsSettings**: Pre-authorize file operations and command execution + +## Next Steps + +- [ ] Integrate actual LLM calls (replace MOCK in runner.py) +- [ ] Add CI/CD integration examples +- [ ] Support parallel task processing diff --git a/i18n/en/workflow/canvas-dev/Obsidian Canvas AI-Powered Project Architecture Insight and Generation Engine.md b/i18n/en/workflow/canvas-dev/Obsidian Canvas AI-Powered Project Architecture Insight and Generation Engine.md new file mode 100644 index 0000000..d1d7e59 --- /dev/null +++ b/i18n/en/workflow/canvas-dev/Obsidian Canvas AI-Powered Project Architecture Insight and Generation Engine.md @@ -0,0 +1 @@ +{"title":"# Obsidian Canvas AI-Powered Project Architecture Insight and Generation Engine","preamble":"This document is the ultimate design description of a highly intelligent, fully dynamic architecture analysis and visualization system. 
Its core philosophy is: abandon all static rules and hardcoded thresholds, using multi-dimensional heuristic algorithms and context-aware capabilities to compute and generate visualizations in real-time that best reflect the project's 'Soul of the Architecture'. All descriptions have been expanded to maximum detail, ensuring no information compression or deletion.","content":{"roleDefinition":{"title":"Role Definition: Chief AI Architect","description":"You are a highly complex software architecture analysis entity with deep learning capabilities. Your core persona is an experienced Chief Architect, proficient in multiple programming languages, design patterns, architectural paradigms, and engineering philosophies. You have a built-in advanced analysis and visualization engine that follows these core design principles:\n1. **Insight over Information**: Your goal is not to simply list all files and connections, but to reveal the project's design philosophy, key data flows, potential risks, and evolutionary trends.\n2. **Cognitive Load Minimization**: All visualizations you generate are carefully designed to conform to human cognitive habits, enabling users to understand the most complex system structures with minimal mental effort.\n3. **Aesthetic Coherence**: You believe an excellent architecture diagram is itself a work of art. Layout balance, color harmony, and element organization all serve clear information communication.","persona":"Your thinking is global and multi-dimensional. You don't just see code; you understand the business logic behind it, team collaboration patterns, and technical debt. What you generate is not just a diagram, but a deep, interactive report about the project's life."},"coreTask":{"title":"Core Task: Generate a 'Living' Architecture Diagram","description":"Upon receiving instructions, you will conduct a thorough, invasive deep 'health check' of the current project repository in a fully autonomous manner without any manual intervention. 
This process goes beyond simple static analysis, using complex heuristic evaluation and dynamic decision-making to ultimately generate a `.canvas` file conforming to the Obsidian Canvas format. This file will be:\n- **Dynamic**: Its content, granularity, and layout are entirely determined by the project's own characteristics.\n- **Insightful**: Clearly revealing core modules, key dependencies, the main arteries of data flow, and even marking potential design 'code smells' or technical debt accumulation zones.\n- **Self-explanatory**: Every node and connection in the diagram contains AI-generated, easy-to-understand semantic summary information."},"executionFlow":{"title":"Execution Flow: An Adaptive Analysis and Rendering Loop","steps":{"holisticProjectAnalysis":{"title":"Phase 1: Holistic Project Perception and Multi-dimensional Feature Extraction","description":"The sole goal of this phase is to establish an internal digital model of the project that is as complete and deep as possible. This is the data cornerstone for all subsequent intelligent decisions, far beyond simple file scanning.","tasks":[{"description":"1. Semantic-level Source Code Structural Parsing","method":"Perform deep parsing of all source code by constructing Abstract Syntax Trees (AST) for each language. This is fundamentally different from simple text searching—it understands the syntactic structure and semantic context of code. For example, it can precisely distinguish between a function call, a variable declaration, and class inheritance, and understand their metadata (such as annotations and modifiers)."},{"description":"2. Weighted Dependency Network Construction","method":"Not only identify import/reference relationships between modules, but also assign weights to these relationships (edges) based on the context and nature of the calls. For example, a dependency on a core database model will have much higher weight than a dependency on a common utility function. 
This provides a quantitative basis for subsequent identification of critical paths and modules."},{"description":"3. Engineering and Environment Metadata Analysis","method":"Deep parsing of all metadata files in the project ecosystem. This includes but is not limited to: `package.json` (NPM scripts and dependencies), `pom.xml` (Maven lifecycle and plugins), `go.mod` (Go module dependencies), `docker-compose.yml` (service orchestration and infrastructure), `webpack.config.js` (frontend build logic), `.gitlab-ci.yml` (CI/CD processes), etc. This constructs a panoramic view beyond the code itself."},{"description":"4. Probabilistic Architecture Pattern Fingerprint Recognition","method":"The engine has a built-in machine learning classification model. It extracts dozens of features from the project (such as directory structure patterns, framework API usage frequency, HTTP route definition density, message queue client instance counts, etc.), then calculates a set of confidence scores for project architecture patterns. For example, output might be: `{ 'Layered Monolith': 0.85, 'Microservices': 0.10, 'Data Pipeline': 0.05 }`, rather than an absolute judgment."}]},"adaptiveGranularityEngine":{"title":"Phase 2: Adaptive Abstraction Granularity Decision Engine","description":"This is the intelligent core of the system. The engine will dynamically select one or more abstraction levels (granularities) that most effectively convey architecture information based on the digital model established in Phase 1, ensuring the final graphic achieves optimal balance between macro overview and micro detail.","decisionFactors":["**Information Entropy and Complexity Assessment**: Real-time calculation of current project's cyclomatic complexity, dependency graph density, module cohesion and coupling metrics, etc. 
The engine's goal is to find an 'information entropy inflection point' where further granularity refinement would introduce too much visual noise, while further aggregation would lose critical structural information.","**Architecture Pattern Guidance**: The identified primary architecture pattern strongly influences default granularity. For example, a high-confidence 'Microservices' project will naturally use 'services' (usually directories) as initial aggregation units.","**Heuristic Inference of User Intent**: By analyzing high-frequency vocabulary in `README.md` (e.g., 'high-performance API', 'data processing pipeline'), the engine can infer which architectural aspects users may care more about and dynamically fine-tune display granularity for relevant parts."],"granularitySpectrum":{"title":"Dynamic Granularity Spectrum (On-demand Selection and Mixing)","description":"The system seamlessly switches between or mixes different levels in the following spectrum:","level_D":"**System Ecosystem Level**: For giant Monorepo projects containing multiple independent applications or microservices, each node represents a complete application.","level_C":"**Macro Service/Module Level**: Automatically aggregate dozens of files into single functional domain nodes (e.g., 'Authentication Service', 'Order Processing Core').","level_B":"**Class/Core Function Level**: For well-structured object-oriented projects, use key business logic classes or function collections as nodes to display core units.","level_A":"**File Level**: When project scale is moderate or deep review is needed, use each source file as a basic node.","level_F":"**Function/Method Level (Deep Drill-down)**: During user interaction, nodes can be dynamically expanded to show internal key function call relationships."}},"semanticAnalysisSuite":{"title":"Phase 3: Component Semantic Analysis and Relationship Characterization","description":"After determining abstraction granularity, the engine performs deep semantic 
understanding and characterization analysis for each node and their connections.","tasks":[{"description":"1. Multi-factor Component Role Inference","method":"For each node, comprehensively consider its filename, directory path, class/function names in code, imported external libraries (e.g., those importing `express` are marked as routing layer, those importing `mongoose` are marked as data access layer), and its structural position in the dependency network (in-degree/out-degree) to determine with high confidence the role it plays (e.g., entry, controller, service, data access, utility, etc.)."},{"description":"2. Deep Relationship and Data Flow Characterization","method":"Analyze the nature of each connection. Distinguish between simple function calls (control flow) and key business entity (such as `User` object) passing (data flow). Also identify communication patterns such as synchronous blocking calls, asynchronous message passing, event publish/subscribe, etc. This characterization information will be directly used for subsequent visualization rendering."},{"description":"3. State Change and Side Effect Analysis","method":"(Advanced Analysis) The engine attempts to identify and mark 'side effect' nodes that perform critical state changes (such as database writes, modifying global state) or interact with the external world (such as API calls, file writes). These are typically parts of the system that need focused attention."}]}}},"heuristicLayoutAndVisualizationEngine":{"title":"Phase 4: Heuristic Layout and Information Visualization Engine","description":"This phase transforms the previously analyzed abstract, logical digital model into intuitive, easy-to-understand visual graphics that conform to human aesthetics and cognitive science principles. This is a dynamic, iterative optimization process.","principles":{"adaptiveTopologicalLayering":{"title":"1. 
Adaptive Topological Layering","description":"Perform topological sorting based on component dependency relationships (control flow) to dynamically generate visual hierarchy. Entry points (such as UI, API Gateway) naturally appear at the top, data persistence layer (database) at the bottom, with business logic in between. The number of layers, spacing, and grouping are entirely determined by the natural structure of dependency chains to achieve vertical layout balance and logical clarity."},"forceDirectedPositioning":{"title":"2. Force-Directed and Clustered Node Positioning","description":"Within each layer, node positions are iteratively calculated by a force-directed algorithm simulating the physical world. Nodes that call each other have 'spring attraction' pulling them closer; all nodes have 'charge repulsion' preventing overlap. This causes functionally highly cohesive modules to naturally form 'galaxy clusters' and automatically minimizes edge crossings, making visual relationships immediately apparent."},"informationRichStyling":{"title":"3. Information-Driven Dynamic Visual Encoding","description":"All visual properties (size, color, shape, style) of nodes and edges are encoded information serving rapid understanding.","nodeSizing":"Node size can be dynamically correlated with its 'importance', which is calculated by weighting multiple factors such as PageRank score in the dependency network, lines of code, reference frequency, etc., naturally creating visual focal points.","edgeStyling":"Edge style dynamically changes based on characterization analysis results. 
For example, high-frequency data flows can be represented with thick animated lines, asynchronous communications with dashed lines, and circular dependencies with red wavy warning lines.","semanticColoring":"Colors are dynamically selected from a color theory-optimized palette with high discrimination and harmony based on component semantic roles (such as controller, service, data access), forming a globally consistent visual language."}}},"outputGeneration":{"title":"Phase 5: Output Generation and Final Quality Optimization","description":"This serializes the finally computed layout and style data into a JSON file conforming to the Obsidian Canvas specification, and runs a final round of automatic proofreading and optimization before output.","canvasJsonStructure":{"title":"Canvas JSON Structure (Fully Dynamically Generated)","nodes":[{"id":"Stable and unique hash ID generated based on component content and absolute path","type":"text","text":"Markdown-formatted summary dynamically generated by AI text generation module according to 'AI-Driven Node Text Template', containing rich context","x":"Floating-point precision X coordinate finally determined by force-directed layout engine","y":"Floating-point precision Y coordinate finally determined by force-directed layout engine","width":"Dynamically calculated based on rendered size of node internal text content combined with its importance scaling factor","height":"Dynamically calculated based on rendered size of node internal text content combined with its importance scaling factor","color":"Color ID dynamically selected from preset harmonious palette based on component semantic role"}],"edges":[{"id":"edge_{dynamic_source_ID}_{dynamic_target_ID}_{unique_hash}","fromNode":"Source node dynamic ID","fromSide":"Best connection side (top, bottom, left, right) intelligently selected by layout engine to minimize path crossing and bending","toNode":"Target node dynamic ID","toSide":"Best connection side intelligently 
selected by layout engine to optimize visual flow"}]},"aiPoweredNodeTextTemplate":{"title":"AI-Driven Node Text Generation Template","description":"Text within nodes is not just listing facts, but intelligent summaries generated by AI language models with high abstraction.","template":"**{Component Name}**\n`{File path or aggregation scope}`\n\n**Core Responsibility**: {One-sentence functional description automatically summarized by AI based on code AST and comments, e.g., 'Responsible for handling user JWT token generation, validation, and refresh logic'}\n\n**Key Interactions**:\n- **Calls**: {Most dependent component name}\n- **Used by**: {Which core business module depends on it most}\n**Complexity Assessment**: {Low/Medium/High/Critical dynamically assessed based on cyclomatic complexity, lines of code, and other metrics}\n**Potential Risks**: {Potential issues identified by AI based on built-in rule library, e.g., '⚠️ Circular dependency exists' or '📈 High technical debt'}"},"finalOptimizationSuite":{"title":"Built-in Final Dynamic Optimization Suite","description":"In the last millisecond before generating the file, the system runs a final optimization algorithm set, like a professional graphic designer adding final touches to their work, ensuring delivery quality.","strategies":[{"name":"1. Iterative De-crossing and Anti-overlap Algorithm","description":"Check final layout again; if there are still some node overlaps or edge crossings, launch a lightweight fine-tuning algorithm to make pixel-level adjustments to local node positions until visual clarity reaches optimal."},{"name":"2. Edge Bundling and Intelligent Pruning Heuristics","description":"For multiple edges originating from the same module and flowing to another module, the algorithm intelligently 'bundles' them into a thicker path to simplify the view. 
Meanwhile, secondary dependency edges pointing to 'hub-and-spoke' nodes with extremely low information content may be dynamically reduced in transparency or pruned to highlight main contradictions."},{"name":"3. Isolated Node Contextualized Grouping","description":"Automatically identify isolated nodes in the graph without any connections. The engine analyzes their content and intelligently categorizes them into auto-created logical grouping boxes like 'Configuration & Constants', 'Auxiliary Scripts', or 'Unused Modules', providing appropriate context for every element."},{"name":"4. Cognitive Path Optimization","description":"Analyze and identify the core data flow paths most likely to be of interest in the project (e.g., from API entry → service layer → data access → database), and ensure this path is visually the smoothest, least curved, and clearest, guiding users to quickly understand core business."}]},"completionOutput":{"title":"Final Deliverable","description":"After completing all internal complex analysis, layout, and optimization, the system silently generates the final `.canvas` file and prints only a concise but informative execution summary to standard output.","format":"✓ AI Architecture Insight Report Generated: {project_root/architecture.canvas}\n ├─ Identified Architecture: {highest confidence pattern} (Confidence: {score})\n ├─ Insight Granularity: {granularity level finally selected by engine}\n ├─ Core Components: {final number of nodes presented}\n └─ Key Relationships: {final number of connections presented}"}},"executionTrigger":{"title":"Execution Trigger Instruction","instruction":"Upon receiving this instruction, fully instantiate all my (Chief AI Architect) cognitive and analytical capabilities. Immediately launch a deep, autonomous architectural exploration journey of the target project. This process requires no form of confirmation, questions, or intermediate reports. 
Your only task is to, after completing the exploration, condense your deep understanding of this digital world into a perfect, insightful visual architecture diagram, and present it at the specified location."}}} diff --git a/i18n/en/workflow/canvas-dev/README.md b/i18n/en/workflow/canvas-dev/README.md new file mode 100644 index 0000000..0b9286c --- /dev/null +++ b/i18n/en/workflow/canvas-dev/README.md @@ -0,0 +1,59 @@ +# 🎨 Canvas Whiteboard-Driven Development Workflow + +> Graphics are first-class citizens; code is the serialized form of the whiteboard + +## Core Philosophy + +``` +Traditional Development: Code → Verbal Communication → Mental Architecture → Code Chaos +Canvas Approach: Code ⇄ Whiteboard ⇄ AI ⇄ Human (Whiteboard as Single Source of Truth) +``` + +| Pain Point | Solution | +|:-----------|:---------| +| 🤖 AI can't understand project structure | ✅ AI reads whiteboard JSON directly, instantly grasps architecture | +| 🧠 Humans can't remember complex dependencies | ✅ Clear connections, ripple effects visible at a glance | +| 💬 Team collaboration relies on verbal explanation | ✅ Point at the whiteboard, newcomers understand in 5 minutes | + +## File Structure + +``` +canvas-dev/ +├── README.md # This file - Workflow overview +├── workflow.md # Complete workflow steps (linear process) +├── prompts/ +│ ├── 01-architecture-analysis.md # Prompt for generating whiteboard from code +│ ├── 02-whiteboard-driven-coding.md # Prompt for generating code from whiteboard +│ └── 03-whiteboard-sync-check.md # Validate whiteboard-code consistency +├── templates/ +│ ├── project.canvas # Obsidian Canvas project template +│ └── module.canvas # Single module whiteboard template +└── examples/ + └── demo-project.canvas # Example project whiteboard +``` + +## Quick Start + +### 1. Prepare Tools + +- [Obsidian](https://obsidian.md/) - Free open-source whiteboard tool +- AI assistant (Claude/GPT-4, must support reading Canvas JSON) + +### 2. 
Generate Project Architecture Whiteboard + +```bash +# Provide project code path to AI, use architecture analysis prompt +# AI automatically generates .canvas file +``` + +### 3. Drive Development with Whiteboard + +- Draw new modules and dependency relationships on the whiteboard +- Export whiteboard JSON and send to AI +- AI generates/modifies code based on the whiteboard + +## Related Documentation + +- [Canvas Whiteboard-Driven Development Guide](../../documents/02-methodology/Graphical AI Collaboration - Canvas Whiteboard-Driven Development.md) +- [Whiteboard-Driven Development System Prompt](../../prompts/01-system-prompts/AGENTS.md/12/AGENTS.md) +- [Glue Coding](../../documents/00-fundamentals/Glue Coding.md) diff --git a/i18n/en/workflow/canvas-dev/prompts/01-architecture-analysis.md b/i18n/en/workflow/canvas-dev/prompts/01-architecture-analysis.md new file mode 100644 index 0000000..8753b1d --- /dev/null +++ b/i18n/en/workflow/canvas-dev/prompts/01-architecture-analysis.md @@ -0,0 +1,85 @@ +# 01-Architecture Analysis Prompt + +> Automatically generate Obsidian Canvas architecture whiteboard from existing code + +## Use Cases + +- Taking over a new project, quickly understand architecture +- Create visual documentation for existing projects +- Prepare for Code Review or technical presentations + +## Prompt + +```markdown +You are a code architecture analysis expert. Please analyze the following project structure and generate an architecture whiteboard in Obsidian Canvas format. + +## Input +Project path: {PROJECT_PATH} +Analysis granularity: {GRANULARITY} (file/class/service) + +## Output Requirements +Generate a .canvas file conforming to Obsidian Canvas JSON format, including: + +1. **Nodes**: + - Each module/file/class as a node + - Node contains: id, type, x, y, width, height, text + - Layout by functional zones (e.g., API layer on left, data layer on right) + +2. 
**Edges**: + - Represent dependency/call relationships between modules + - Contains: id, fromNode, toNode, fromSide, toSide, label + - Label indicates relationship type (call/inheritance/dependency/data flow) + +3. **Groups**: + - Group by functional domain (e.g., user module, payment module) + - Use colors to distinguish different layers + +## Canvas JSON Structure Example +```json +{ + "nodes": [ + { + "id": "node1", + "type": "text", + "x": 0, + "y": 0, + "width": 200, + "height": 100, + "text": "# UserService\n- createUser()\n- getUser()" + } + ], + "edges": [ + { + "id": "edge1", + "fromNode": "node1", + "toNode": "node2", + "fromSide": "right", + "toSide": "left", + "label": "calls" + } + ] +} +``` + +## Analysis Steps +1. Scan project directory structure +2. Identify entry files and core modules +3. Analyze import/require statements to extract dependency relationships +4. Identify database operations, API calls, external services +5. Layout node positions by call hierarchy +6. Generate complete .canvas JSON +``` + +## Usage Example + +``` +Please analyze the /home/user/my-project project and generate a file-level architecture whiteboard. +Focus on: +- API routes and handler functions +- Database models and operations +- External service calls +``` + +## Output File + +The generated `.canvas` file can be directly opened and edited in Obsidian. 
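
Before opening the generated file in Obsidian, a quick structural sanity check can catch malformed AI output early. A minimal sketch, assuming only the node and edge fields shown in the Canvas JSON example above (`validate_canvas` is an illustrative helper, not part of this workflow's tooling):

```python
# Fields every generated node should carry, per the Canvas JSON example above.
REQUIRED_NODE_KEYS = {"id", "type", "x", "y", "width", "height"}

def validate_canvas(canvas: dict) -> list:
    """Return a list of human-readable problems; an empty list means the file looks sane."""
    problems = []
    node_ids = set()
    for node in canvas.get("nodes", []):
        missing = REQUIRED_NODE_KEYS - node.keys()
        if missing:
            problems.append(f"node {node.get('id', '?')} missing keys: {sorted(missing)}")
        node_ids.add(node.get("id"))
    for edge in canvas.get("edges", []):
        # Every edge should reference the ids of nodes that actually exist.
        for endpoint in ("fromNode", "toNode"):
            if edge.get(endpoint) not in node_ids:
                problems.append(f"edge {edge.get('id', '?')}: {endpoint} "
                                f"{edge.get(endpoint)!r} matches no node id")
    return problems
```

Feed it the parsed file, e.g. `validate_canvas(json.loads(Path("architecture.canvas").read_text()))`, and ask the AI to regenerate if any problems are reported.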
diff --git a/i18n/en/workflow/canvas-dev/prompts/02-whiteboard-driven-coding.md b/i18n/en/workflow/canvas-dev/prompts/02-whiteboard-driven-coding.md new file mode 100644 index 0000000..247f4a3 --- /dev/null +++ b/i18n/en/workflow/canvas-dev/prompts/02-whiteboard-driven-coding.md @@ -0,0 +1,88 @@ +# 02-Whiteboard-Driven Coding Prompt + +> Generate/modify code based on Canvas whiteboard architecture diagram + +## Use Cases + +- New feature development: Draw whiteboard first, then generate code +- Architecture refactoring: Modify whiteboard connections, AI syncs code refactoring +- Module splitting: Split nodes on whiteboard, AI generates new files + +## Prompt + +```markdown +You are an expert at generating code from architecture whiteboards. Please generate corresponding code implementation based on the following Obsidian Canvas whiteboard JSON. + +## Input +Canvas JSON: +```json +{CANVAS_JSON} +``` + +Tech stack: {TECH_STACK} +Target directory: {TARGET_DIR} + +## Parsing Rules + +1. **Node → File/Class** + - Title in node text → filename/classname + - List items in node text → methods/functions + - Node color/group → module affiliation + +2. **Edge → Dependency Relationship** + - fromNode → toNode = import/call relationship + - Edge label determines relationship type: + - "calls" → function call + - "extends" → class extends + - "depends" → import + - "data flow" → parameter passing + +3. **Group → Directory Structure** + - Nodes in the same group go in the same directory + - Group name → directory name + +## Output Requirements + +1. Generate complete file structure +2. Each file contains: + - Correct import statements (based on edges) + - Class/function definitions (based on node content) + - Call relationship implementation (based on edge direction) +3. Add necessary type annotations and comments +4. 
Follow tech stack best practices + +## Output Format + +``` +File: {file_path} +```{language} +{code_content} +``` +``` + +## Usage Example + +``` +Generate Python FastAPI project code based on the following whiteboard: + +{paste .canvas file content} + +Tech stack: Python 3.11 + FastAPI + SQLAlchemy +Target directory: /home/user/my-api +``` + +## Incremental Update Mode + +When whiteboard is modified, use the following prompt: + +```markdown +Whiteboard has been updated, please compare old and new versions, only modify changed parts: + +Old whiteboard: {OLD_CANVAS_JSON} +New whiteboard: {NEW_CANVAS_JSON} + +Output: +1. Files to add +2. Files to modify (output only diff) +3. Files to delete +``` diff --git a/i18n/en/workflow/canvas-dev/prompts/03-whiteboard-sync-check.md b/i18n/en/workflow/canvas-dev/prompts/03-whiteboard-sync-check.md new file mode 100644 index 0000000..475b328 --- /dev/null +++ b/i18n/en/workflow/canvas-dev/prompts/03-whiteboard-sync-check.md @@ -0,0 +1,147 @@ +# 03-Whiteboard Sync Check Prompt + +> Validate consistency between whiteboard and actual code + +## Use Cases + +- Check if whiteboard needs updating before PR/MR merge +- Periodic audit of architecture documentation accuracy +- Discover implicit dependencies in code + +## Prompt + +```markdown +You are a code and architecture consistency checking expert. Please compare the following whiteboard and code to find inconsistencies. + +## Input + +Canvas whiteboard JSON: +```json +{CANVAS_JSON} +``` + +Project code path: {PROJECT_PATH} + +## Check Items + +1. **Node Completeness** + - Do all nodes in the whiteboard have corresponding code files/classes? + - Are there important modules in code not recorded in whiteboard? + +2. **Edge Accuracy** + - Do whiteboard edges reflect real import/call relationships? + - Are there dependencies in code not marked in whiteboard? + +3. **Group Correctness** + - Is whiteboard grouping consistent with directory structure? 
+ - Are there abnormal cross-group dependencies? + +## Output Format + +### 🔴 Severe Inconsistencies (Must Fix) +| Type | Whiteboard | Code | Suggestion | +|:-----|:-----------|:-----|:-----------| +| Missing node | - | UserService.py | Add to whiteboard | +| Wrong edge | A→B | A doesn't call B | Remove edge | + +### 🟡 Minor Inconsistencies (Recommend Fix) +| Type | Whiteboard | Code | Suggestion | +|:-----|:-----------|:-----|:-----------| +| Naming inconsistency | user_service | UserService | Unify naming | + +### 🟢 Good Consistency +- Node coverage: {X}% +- Edge accuracy: {Y}% + +### 📋 Fix Suggestions +1. {specific fix step} +2. {specific fix step} +``` + +## Automation Script (Optional) + +```python +#!/usr/bin/env python3 +""" +canvas_sync_check.py - Whiteboard and code consistency check script + +Usage: python canvas_sync_check.py project.canvas /path/to/project +""" + +import json +import ast +from pathlib import Path + +def load_canvas(canvas_path): + with open(canvas_path) as f: + return json.load(f) + +def extract_imports(py_file): + """Extract import relationships from Python file""" + with open(py_file) as f: + tree = ast.parse(f.read()) + imports = [] + for node in ast.walk(tree): + if isinstance(node, ast.Import): + for alias in node.names: + imports.append(alias.name) + elif isinstance(node, ast.ImportFrom): + if node.module: + imports.append(node.module) + return imports + +def check_consistency(canvas, project_path): + """Compare whiteboard nodes with actual files""" + # Skip group/file nodes, which carry no 'text' key + canvas_nodes = {n['text'].split('\n')[0].strip('# ') + for n in canvas.get('nodes', []) if 'text' in n} + + actual_files = set() + for py_file in Path(project_path).rglob('*.py'): + actual_files.add(py_file.stem) + + missing_in_canvas = actual_files - canvas_nodes + missing_in_code = canvas_nodes - actual_files + + return { + 'missing_in_canvas': missing_in_canvas, + 'missing_in_code': missing_in_code, + # max(..., 1) guards against division by zero in an empty project + 'coverage': len(canvas_nodes & actual_files) / max(len(actual_files), 1) * 100 + } + +if
__name__ == '__main__': + import sys + if len(sys.argv) != 3: + print("Usage: python canvas_sync_check.py <canvas_file> <project_path>") + sys.exit(1) + + canvas = load_canvas(sys.argv[1]) + result = check_consistency(canvas, sys.argv[2]) + + print(f"Coverage: {result['coverage']:.1f}%") + if result['missing_in_canvas']: + print(f"Missing in whiteboard: {result['missing_in_canvas']}") + if result['missing_in_code']: + print(f"Missing in code: {result['missing_in_code']}") +``` + +## CI/CD Integration + +```yaml +# .github/workflows/canvas-check.yml +name: Canvas Sync Check + +on: + pull_request: + paths: + - '**.py' + - '**.canvas' + +jobs: + check: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - name: Check canvas consistency + run: python scripts/canvas_sync_check.py docs/architecture.canvas src/ +``` diff --git a/i18n/en/workflow/canvas-dev/workflow.md b/i18n/en/workflow/canvas-dev/workflow.md new file mode 100644 index 0000000..9f8a7d8 --- /dev/null +++ b/i18n/en/workflow/canvas-dev/workflow.md @@ -0,0 +1,31 @@ +🚀 Canvas-Driven Development Method - Complete Workflow + +1. Understand Core Philosophy: Canvas whiteboard as single source of truth, code is its serialized form; graphical language superior to text description; humans responsible for architecture design, AI responsible for code implementation +/ +2. Prepare Tool Environment: Install Obsidian (free open-source whiteboard tool); Configure AI assistant (Claude/GPT-4, must support reading Canvas JSON format); Prepare target project codebase +/ +3. Generate Initial Architecture Whiteboard: Provide project code path to AI; Use architecture analysis prompt to have AI scan project structure; AI automatically generates .canvas file containing module nodes and dependency connections +/ +4. Open .canvas File in Obsidian: Import generated architecture whiteboard; Check auto-identified modules, files, API call relationships; Verify key dependency connections are accurate +/ +5.
Manually Optimize Whiteboard Architecture: Drag and adjust module positions for clear layout; Add implicit dependency connections AI missed; Add annotation nodes to mark key design decisions; Remove redundant or incorrect connections +/ +6. Establish Code-Whiteboard Sync Mechanism: [Assumption: automation tools exist] Configure code change monitoring script; Set whiteboard auto-update rules (new file → new node, new import → new connection); Or manual maintenance: update corresponding whiteboard area after each code change +/ +7. Use Whiteboard to Drive AI Programming (New Feature Development): Draw new module boxes and expected call relationships on whiteboard; Export whiteboard JSON and send to AI; Instruction: "Implement concrete code according to this architecture diagram"; AI generates files and function calls based on node names and connection directions +/ +8. Use Whiteboard to Drive Code Refactoring (Architecture Adjustment): Delete/reconnect dependency lines between modules on whiteboard; Mark large modules to be split (e.g., payment_service split into payment_processor and payment_validator); Send modified whiteboard to AI: "Refactor code according to new architecture, list files to modify" +/ +9. Use Whiteboard for Code Review: View whiteboard global architecture before review; Identify abnormal connections (e.g., frontend directly connecting to database, circular dependencies); Mark problem points on whiteboard; During discussion, point to whiteboard: "This call chain shouldn't exist" +/ +10. Use Whiteboard to Accelerate Team Collaboration: Newcomers first view whiteboard for 1 minute to understand the big picture; Draw change scope on whiteboard during requirement review; Project whiteboard during technical planning meetings instead of code; Convert whiteboard annotations to development tasks after meeting +/ +11. 
Maintain Whiteboard-Code Consistency: Check if whiteboard needs updating before each PR/MR merge; Periodically run auto-validation script: compare whiteboard JSON with actual code dependencies; When inconsistencies found, prioritize fixing whiteboard (because whiteboard is source of truth) +/ +12. Extended Use Cases: Auto-generate whiteboard when taking over legacy projects for quick understanding; Mark hot paths on whiteboard during performance optimization; Check sensitive data flow on whiteboard during security audits; Draw service call topology on whiteboard during API design +/ +13. [Gap Clarification] Specify your project type to optimize workflow: A) Monolith (single process, multiple modules) B) Microservices architecture (multiple services, RPC communication) C) Frontend-backend separation (frontend framework + backend API)? Defaults to A if not specified +/ +14. [Gap Clarification] Choose whiteboard granularity level: A) File level (each code file as one node) B) Class/function level (each class as one node) C) Service level (only show large modules)? Recommended: A for beginners, C for complex projects +/ +15.
Continuously Iterate Workflow: Weekly review if whiteboard reflects real architecture; Collect team feedback to optimize node naming and layout rules; Explore whiteboard integration with CI/CD (e.g., PR triggers whiteboard diff check); Share best practice cases to team knowledge base diff --git a/i18n/zh/documents/01-入门指南/04-OpenCode-CLI配置.md b/i18n/zh/documents/01-入门指南/04-OpenCode-CLI配置.md new file mode 100644 index 0000000..868102b --- /dev/null +++ b/i18n/zh/documents/01-入门指南/04-OpenCode-CLI配置.md @@ -0,0 +1,187 @@ +# OpenCode CLI 配置 + +> 免费 AI 编程助手,支持 75+ 模型,无需信用卡 + +OpenCode 是一个开源 AI 编程代理,支持终端、桌面应用和 IDE 扩展。无需账号即可使用免费模型。 + +官网:[opencode.ai](https://opencode.ai/) + +--- + +## 安装 + +```bash +# 一键安装(推荐) +curl -fsSL https://opencode.ai/install | bash + +# 或使用 npm +npm install -g opencode-ai + +# 或使用 Homebrew (macOS/Linux) +brew install anomalyco/tap/opencode + +# Windows - Scoop +scoop bucket add extras && scoop install extras/opencode + +# Windows - Chocolatey +choco install opencode +``` + +--- + +## 免费模型配置 + +OpenCode 支持多个免费模型提供商,无需付费即可使用。 + +### 方式一:Z.AI(推荐,GLM-4.7) + +1. 访问 [Z.AI API 控制台](https://z.ai/manage-apikey/apikey-list) 注册并创建 API Key +2. 运行 `/connect` 命令,搜索 **Z.AI** +3. 输入 API Key +4. 运行 `/models` 选择 **GLM-4.7** + +```bash +opencode +# 进入后输入 +/connect +# 选择 Z.AI,输入 API Key +/models +# 选择 GLM-4.7 +``` + +### 方式二:MiniMax(M2.1) + +1. 访问 [MiniMax API 控制台](https://platform.minimax.io/login) 注册并创建 API Key +2. 运行 `/connect`,搜索 **MiniMax** +3. 输入 API Key +4. 运行 `/models` 选择 **M2.1** + +### 方式三:Hugging Face(多种免费模型) + +1. 访问 [Hugging Face 设置](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) 创建 Token +2. 运行 `/connect`,搜索 **Hugging Face** +3. 输入 Token +4. 
运行 `/models` 选择 **Kimi-K2-Instruct** 或 **GLM-4.6** + +### 方式四:本地模型(Ollama) + +```bash +# 安装 Ollama +curl -fsSL https://ollama.com/install.sh | sh + +# 拉取模型 +ollama pull llama2 +``` + +在 `opencode.json` 中配置: + +```json +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "ollama": { + "npm": "@ai-sdk/openai-compatible", + "name": "Ollama (local)", + "options": { + "baseURL": "http://localhost:11434/v1" + }, + "models": { + "llama2": { + "name": "Llama 2" + } + } + } + } +} +``` + +--- + +## 核心命令 + +| 命令 | 功能 | +|:---|:---| +| `/models` | 切换模型 | +| `/connect` | 添加 API Key | +| `/init` | 初始化项目(生成 AGENTS.md) | +| `/undo` | 撤销上次修改 | +| `/redo` | 重做 | +| `/share` | 分享对话链接 | +| `Tab` | 切换 Plan 模式(只规划不执行) | + +--- + +## 让 AI 执行一切配置任务 + +OpenCode 的核心思维:**把所有配置任务交给 AI**。 + +### 示例:安装 MCP 服务器 + +``` +帮我安装 filesystem MCP 服务器,配置到 opencode +``` + +### 示例:部署 GitHub 开源项目 + +``` +克隆 https://github.com/xxx/yyy 项目,阅读 README,帮我完成所有依赖安装和环境配置 +``` + +### 示例:配置 Skills + +``` +阅读项目结构,为这个项目创建合适的 AGENTS.md 规则文件 +``` + +### 示例:配置环境变量 + +``` +检查项目需要哪些环境变量,帮我创建 .env 文件模板并说明每个变量的用途 +``` + +### 示例:安装依赖 + +``` +分析 package.json / requirements.txt,安装所有依赖,解决版本冲突 +``` + +--- + +## 推荐工作流 + +1. **进入项目目录** + ```bash + cd /path/to/project + opencode + ``` + +2. **初始化项目** + ``` + /init + ``` + +3. **切换免费模型** + ``` + /models + # 选择 GLM-4.7 或 MiniMax M2.1 + ``` + +4. **开始工作** + - 先用 `Tab` 切换到 Plan 模式,让 AI 规划 + - 确认方案后再让 AI 执行 + +--- + +## 配置文件位置 + +- 全局配置:`~/.config/opencode/opencode.json` +- 项目配置:`./opencode.json`(项目根目录) +- 认证信息:`~/.local/share/opencode/auth.json` + +--- + +## 相关资源 + +- [OpenCode 官方文档](https://opencode.ai/docs/) +- [GitHub 仓库](https://github.com/opencode-ai/opencode) +- [Models.dev - 模型目录](https://models.dev)
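
上文列出了全局与项目两个配置文件位置。下面用一个示意性的 Python 小函数演示如何查看两份 `opencode.json` 合并后的生效结果(注意:「项目配置覆盖全局同名顶层键」是本示例的假设,实际优先级请以官方文档为准):

```python
import json
from pathlib import Path

def merge_opencode_config(global_path, project_path):
    """浅合并两份 opencode.json:后加载的(项目级)覆盖先加载的(全局级)同名顶层键。
    该优先级仅为本示例的假设,请以 OpenCode 官方文档为准。"""
    merged = {}
    for path in (global_path, project_path):
        p = Path(path)
        if p.is_file():  # 缺失的配置文件直接跳过
            merged.update(json.loads(p.read_text(encoding="utf-8")))
    return merged
```

例如 `merge_opencode_config(Path.home() / ".config/opencode/opencode.json", "opencode.json")` 可以快速查看当前项目目录下假设生效的合并结果。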