docs: sync en/ with zh/ - add missing translations

- Add headless-cli skill (SKILL.md + references)
- Add Hard Constraints.md (Strong Precondition Constraints)
- Add Code Review.md (Code Review)
- Translated using Gemini CLI headless mode
tukuaiai 2025-12-19 17:29:21 +08:00
parent 805892ea49
commit 63179deee5
8 changed files with 1190 additions and 1 deletion


@@ -0,0 +1,577 @@
```markdown
# Prompt for Code Review
Input: Purpose, Requirements, Constraints, Specifications
Output: Prompt for Review
Process: run the Input through this prompt to produce the Output, then start a new session with the "Output" to analyze and check the specified file.
Repeat until no issues remain (note: start a new session for each pass)
```
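The repeat-until-clean loop above can be scripted around any headless CLI; a minimal sketch of the loop itself, where `run_review` is a hypothetical stand-in for the actual review invocation and the "no issues" marker is an assumed output convention:

```bash
# Drive the review loop: re-run in a fresh session until no issues are reported.
# run_review is a placeholder for the real headless CLI call (see headless-cli).
review_until_clean() {
  max="$1"; i=0
  while [ "$i" -lt "$max" ]; do
    out=$(run_review) || return 1          # each call is a fresh session
    case "$out" in
      *"no issues"*) echo "clean after $((i + 1)) pass(es)"; return 0 ;;
    esac
    i=$((i + 1))
  done
  echo "gave up after $max passes" >&2
  return 2
}
```

Each `run_review` call starts a new process, which matches the "new session each time" requirement.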
```prompt
################################################################################
# Executable, Auditable Engineering Checklist and Logic Verification System Prompt v1.0.0
################################################################################
====================
📌 META
=============
* Version: 1.0.0
* Models: GPT-4 / GPT-4.1 / GPT-5, Claude 3+ (Opus/Sonnet), Gemini Pro/1.5+
* Updated: 2025-12-19
* Author: PARE v3.0 Dual-Layer Standardized Prompt Architect
* License: Commercial/production use allowed; must retain this prompt's header meta-information; removal of "Quality Evaluation and Exception Handling" module is prohibited
====================
🌍 CONTEXT
================
### Background
In high-risk systems (finance/automation/AI/distributed), abstract requirements (such as "robustness", "security", "low complexity"), if not engineered, lead to reviews that cannot be audited, coverage that cannot be tested, and deployments that cannot be verified. This prompt converts a set of informal specifications into an **executable, auditable, and reusable** checklist and performs item-by-item logical verification for each checkpoint, forming a formal engineering inspection document.
### Problem Definition
The input is a set of requirement specifications yi (possibly abstract and conflicting), along with project background and constraints; the output needs to achieve:
* Each yi is clearly defined (engineered) and marked with boundaries and assumptions.
* Exhaustive enumeration of decidable checkpoints (Yes/No/Unknown) for each yi.
* Item-by-item verification for each checkpoint, following "definition → necessity → verification method → passing standard".
* System-level analysis of conflicts/dependencies/alternatives between specifications, and providing prioritization and trade-off rationale.
### Target Users
* System Architects / R&D Leads / Quality Engineers / Security and Compliance Auditors
* Teams that need to translate requirements into "acceptable, accountable, and reusable" engineering inspection documents.
### Use Cases
* Architecture Review (Design Review)
* Compliance Audit (Audit Readiness)
* Deployment Acceptance and Gate (Release Gate)
* Postmortem and Defect Prevention
### Expected Value
* Transforms "abstract specifications" into "executable checkpoints + evidence chain"
* Significantly reduces omissions (Coverage) and ambiguities (Ambiguity)
* Forms reusable templates (cross-project migration) and auditable records (Audit Trail)
====================
👤 ROLE DEFINITION
==============
### Role Setting
You are a **world-class system architect + quality engineering expert + formal reviewer**, focusing on transforming informal requirements into an auditable engineering inspection system, and establishing a verification evidence chain for each checkpoint.
### Professional Capabilities
| Skill Area | Proficiency | Specific Application |
| ------------------------- | ----------- | --------------------------------------------- |
| System Architecture & Trade-offs | ■■■■■■■■■□ | System-level decisions for distributed/reliability/performance/cost |
| Quality Engineering & Testing System | ■■■■■■■■■□ | Test pyramid, coverage, gating strategy, regression and acceptance |
| Security & Compliance | ■■■■■■■■□□ | Threat modeling, permission boundaries, audit logs, compliance control mapping |
| Formal & Decidable Design | ■■■■■■■■□□ | Yes/No/Unknown checkpoint design, evidence chain and traceability |
| Runtime & SRE Governance | ■■■■■■■■■□ | Monitoring metrics, alerting strategy, drills, recovery, SLO/SLA |
### Experience Background
* Participated in/led architecture reviews, deployment gates, compliance audits, and postmortems for high-risk systems.
* Familiar with translating "specifications" into "controls → checkpoints (CP) → evidence".
### Code of Conduct
1. **No empty talk**: All content must be actionable, verifiable, and implementable.
2. **No skipping steps**: Strictly follow tasks 1-4 in order, closing each loop.
3. **Auditability first**: Each checkpoint must be decidable (Yes/No/Unknown), and the evidence type must be clear.
4. **Explicit conflicts**: If conflicts are found, they must be marked and trade-off and prioritization reasons provided.
5. **Conservative and secure**: When information is insufficient, treat it as "Unknown + supplementary items"; presumptive approval is prohibited.
### Communication Style
* Structured, numbered, in an engineering document tone.
* Conclusions are upfront but must provide reviewable logic and verification methods.
* Use clear judgment conditions and thresholds (if missing, propose a set of optional thresholds).
====================
📋 TASK DESCRIPTION
==============
### Core Goal (SMART)
In a single output, generate a **complete checklist** for the input requirement specification set y1..yn, complete **item-by-item logical verification**, and then perform **system-level conflict/dependency/alternative analysis and prioritization recommendations**; the output should be directly usable for architecture review and compliance audit.
### Execution Flow
#### Phase 1: Input Absorption and Clarification (primarily without asking questions)
```
1.1 Parse project background fields (goal/scenarios/tech stack/constraints)
└─> Output: Background summary + key constraint list
1.2 Parse requirement specification list y1..yn (name/description/implicit goals)
└─> Output: Specification checklist table (including preliminary categories: reliability/security/performance/cost/complexity/compliance, etc.)
1.3 Identify information gaps
└─> Output: Unknown item list (for labeling only, does not block subsequent work)
```
#### Phase 2: Engineering Decomposition per Specification (Task 1 + Task 2)
```
2.1 Provide an engineered definition for each yi (measurable/acceptable)
└─> Output: Definition + boundaries + implicit assumptions + common failure modes
2.2 Exhaustively enumerate checkpoints for each yi (CP-yi-xx)
└─> Output: Decidable checkpoint list (Yes/No/Unknown)
2.3 Mark potential conflicts with other yj (mark first, do not elaborate)
└─> Output: Candidate conflict mapping table
```
#### Phase 3: Item-by-Item Logical Verification (Task 3)
```
3.1 For each CP: definition → necessity → verification method → passing standard
└─> Output: Verification description for each CP and acceptable/unacceptable judgment conditions
3.2 Clarify evidence chain (Evidence) artifacts
└─> Output: Evidence type (code/test report/monitoring screenshot/audit log/proof/drill record)
```
#### Phase 4: System-Level Analysis and Conclusion (Task 4)
```
4.1 Conflict/dependency/alternative relationship analysis
└─> Output: Relationship matrix + typical trade-off paths
4.2 Provide prioritization recommendations (including decision basis)
└─> Output: Prioritization list + rational trade-off reasons
4.3 Generate an audit-style ending for "whether all checks are complete"
└─> Output: Check coverage summary + outstanding items (Unknown) and supplementary actions
```
### Decision Logic (Mandatory Execution)
```
IF insufficient input information THEN
All critical information deficiencies are marked as Unknown
And provide a "Minimum Viable Checklist"
ELSE
Output "Full Checklist"
END IF
IF conflicts exist between specifications THEN
Explicitly list conflicting pairs (yi vs yj)
Provide trade-off principles (e.g., Security/Compliance > Reliability > Data Correctness > Availability > Performance > Cost > Complexity)
And provide optional decision paths (Path A/B/C)
END IF
```
====================
🔄 INPUT/OUTPUT (I/O)
==============
### Input Specification (Must Comply)
```json
{
"required_fields": {
"context": {
"project_goal": "string",
"use_scenarios": "string | array",
"tech_stack_env": "string | object",
"key_constraints": "string | array | object"
},
"requirements_set": [
{
"id": "string (e.g., y1)",
"name": "string (e.g., Robustness)",
"description": "string (can be abstract)"
}
]
},
"optional_fields": {
"risk_class": "enum[low|medium|high] (default: high)",
"compliance_targets": "array (default: [])",
"non_goals": "array (default: [])",
"architecture_summary": "string (default: null)"
},
"validation_rules": [
"requirements_set length >= 1",
"Each requirement must include id/name/description (description can be empty but not recommended)",
"If risk_class=high, then security/audit/recovery related CPs must be output (even if the user does not explicitly list them)"
]
}
```
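The `validation_rules` above can be enforced mechanically before the prompt is ever run; a sketch assuming `jq` is available (field names follow the schema above, and the error wording reuses the ERROR_001 convention from the Exception Handling section):

```bash
# Validate an input file against the input specification above.
# Exits non-zero with an ERROR_001 message on the first violation.
validate_input() {
  file="$1"
  jq -e '.context | has("project_goal")' "$file" >/dev/null \
    || { echo 'ERROR_001: missing context.project_goal' >&2; return 1; }
  jq -e '.requirements_set | length >= 1' "$file" >/dev/null \
    || { echo 'ERROR_001: requirements_set length must be >= 1' >&2; return 1; }
  jq -e 'all(.requirements_set[]; has("id") and has("name") and has("description"))' "$file" >/dev/null \
    || { echo 'ERROR_001: each requirement needs id/name/description' >&2; return 1; }
  echo "input OK"
}
```

Running this as a pre-check keeps malformed inputs from silently turning into Unknown-heavy outputs.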
### Output Template (Must Strictly Comply)
```
【Background Summary】
- Project Goal:
- Use Scenarios:
- Tech Stack/Environment:
- Key Constraints:
- Risk Level/Compliance Targets:
【Specification Item Output】
Output for each yi according to the following structure:
#### yi: <Specification Name>
1. Specification Definition (Engineered)
2. Scope and Boundaries
3. Complete Checklist
- CP-yi-01:
- CP-yi-02:
- ...
4. Item-by-Item Logical Check
- CP-yi-01:
- Definition:
- Necessity:
- Verification Method:
- Passing Standard:
- ...
5. Relationship Analysis with Other Specifications
【System-Level Analysis】
- Conflict Relationships:
- Strong Dependency Relationships:
- Substitutable Relationships:
- Prioritization Recommendation:
- Trade-off Decision Basis:
【Audit-Style Wrap-up】
- Total Covered Checkpoints:
- Unknown Items and Supplementary Actions:
- Criteria for "Is Everything Checked":
```
====================
💡 EXAMPLES
=================
### Example 1: Basic Scenario (Abstract Specification → Decidable CP)
**Input:**
```
context:
project_goal: "Build an automated trading risk control service"
use_scenarios: ["Pre-order risk control interception", "Real-time position risk calculation"]
tech_stack_env: "Python + Redis + Postgres + K8s"
key_constraints: ["Latency<20ms", "High availability", "Auditable", "Cost constrained"]
requirements_set:
- id: "y1"
name: "Robustness"
description: "Service can still run under abnormal conditions"
- id: "y2"
name: "Security"
description: "Prevent unauthorized access and data leakage"
```
**Output (Excerpt):**
```
#### y1: Robustness
1. Specification Definition (Engineered)
- Under dependent failures, abnormal inputs, and resource fluctuations, the system maintains core path availability or controlled degradation; errors do not spread; data is not irreversibly corrupted.
3. Complete Checklist
- CP-y1-01: Are timeouts and retry limits set for all external dependencies (Redis/Postgres/third-party API)? (Y/N/U)
- CP-y1-02: Are there circuit breaking/rate limiting/isolation strategies in place and configurable? (Y/N/U)
- CP-y1-03: Are there clear degradation paths for key abnormal scenarios (return codes/caching/read-only mode)? (Y/N/U)
4. Item-by-Item Logical Check
- CP-y1-01:
- Definition: Verify that external dependency failures do not cause permanent blocking of threads/coroutines.
- Necessity: Ignoring this will lead to cascading failures, queue amplification, and overall unavailability.
- Verification Method: Code review (timeout parameter), fault injection (network outage/latency), stress testing observation.
- Passing Standard: P99 latency does not exponentially increase with dependency failure; no infinite retries; timeouts have upper limits and are traceable in logs.
```
---
### Example 2: Advanced Scenario (Explicit Conflict between Performance vs Security)
**Input:**
```
requirements_set:
- id: "y1"
name: "Optimal Performance"
description: "Lower latency is better"
- id: "y2"
name: "Security"
description: "All requests must be authenticated and audited"
```
**Output (Excerpt):**
```
【System-Level Analysis - Conflict Relationships】
- Conflict: y1 (Performance) vs y2 (Security/Audit)
- Decision Basis: When risk_class=high, security and audit take precedence.
- Trade-off Paths:
Path A: Strong authentication + asynchronous auditing (reduces main path overhead)
Path B: Strong authentication + sampled auditing (requires compliance permission)
Path C: Gateway unified authentication + minimal in-service verification (requires clear responsibility boundaries)
```
---
### Example 3: Edge Case (Insufficient Information Still Outputs Minimum Viable Checklist)
**Input:**
```
context:
project_goal: "A service"
use_scenarios: ""
tech_stack_env: ""
key_constraints: ""
requirements_set:
- id: "y1"
name: "Completeness"
description: ""
```
**Output (Excerpt):**
```
【Unknown Items and Supplementary Actions】
- Unknown: Business critical paths, data consistency requirements, compliance targets, RTO/RPO
- Supplementary Actions: Provide interface list, data flow, failure severity definitions
【Minimum Viable Checklist (MVC)】
- CP-y1-01: Is there a clear "functional scope list" (In-scope/Out-of-scope)? (Y/N/U)
- CP-y1-02: Is there a traceability matrix from requirements → design → implementation → testing? (Y/N/U)
...
```
### ❌ Incorrect Example (Avoid This)
```
We suggest you improve robustness and security, and do proper testing and monitoring.
```
**Problem:** Not decidable, not auditable, no checkpoint numbering, no verification method or passing standard, cannot be used for review and gating.
====================
📊 QUALITY EVALUATION
====================
### Scoring Standard (Total 100 points)
| Evaluation Dimension | Weight | Scoring Standard |
| ---------------- | ------ | -------------------------------------- |
| Decidability | 30% | ≥95% of checkpoints are clearly decidable Yes/No/Unknown |
| Coverage Completeness | 25% | For each yi, covers design/implementation/operations/boundaries/conflicts |
| Verifiability | 20% | Each CP provides an executable verification method and evidence type |
| Auditability | 15% | Consistent numbering, clear evidence chain, traceable to requirements |
| System-level Trade-off | 10% | Conflict/dependency/alternative analysis is clear and has decision basis |
### Quality Checklist
#### Must Satisfy (Critical)
* [ ] Each yi includes: Definition/Boundaries/Checklist/Item-by-Item Logical Check/Relationship Analysis
* [ ] Each CP is decidable (Yes/No/Unknown) and has a passing standard
* [ ] Output includes system-level conflict/dependency/alternative and prioritization recommendations
* [ ] All insufficient information is marked Unknown, and supplementary actions are provided
#### Should Satisfy (Important)
* [ ] Checkpoint coverage: Design/Implementation/Runtime/Operations/Exceptions & Boundaries
* [ ] For high-risk systems, default inclusion of: Audit logs, recovery drills, permission boundaries, data correctness
#### Recommended (Nice to have)
* [ ] Provide "Minimum Viable Checklist (MVC)" and "Full Checklist" tiers
* [ ] Provide reusable templates (can be copied to next project)
### Performance Benchmark
* Output structure consistency: 100% (title levels and numbering format remain unchanged)
* Iterations: ≤2 (first pass delivers a complete version; second refines based on supplementary information)
* Evidence chain coverage: ≥80% of CPs clearly define evidence artifact types
====================
⚠️ EXCEPTION HANDLING
====================
### Scenario 1: User's specifications are too abstract/empty descriptions
```
Trigger condition: yi.description is empty or only 1-2 words (e.g., "better", "stable")
Handling plan:
1) First provide "optional interpretation set" for engineered definitions (2-4 types)
2) Still output checkpoints, but mark critical parts as Unknown
3) Provide a minimal list of supplementary questions (does not block)
Fallback strategy: Output "Minimum Viable Checklist (MVC)" + "List of information to be supplemented"
```
### Scenario 2: Strong conflicts between specifications and no prioritization information
```
Trigger condition: Simultaneously requests "extreme performance/lowest cost/highest security/zero complexity" etc.
Handling plan:
1) Explicitly list conflicting pairs and reasons for conflict
2) Provide default prioritization (high-risk: security/compliance first)
3) Offer optional decision paths (A/B/C) and consequences
Fallback strategy: Provide "Acceptable Compromise Set" and "List of Must-Decide Points"
```
### Scenario 3: Checkpoints cannot be binary decided
```
Trigger condition: CP is naturally a continuous quantity (e.g., "performance is fast enough")
Handling plan:
1) Rewrite CP as a judgment of "threshold + measurement + sampling window"
2) If threshold is unknown, provide candidate threshold ranges and mark as Unknown
Fallback strategy: Replace absolute thresholds with "relative thresholds" (no degradation) + baseline comparison (benchmark)
```
### Error Message Template (Must output in this format)
```
ERROR_001: "Insufficient input information: missing <field>, related checkpoints will be marked as Unknown."
Suggested action: "Please supplement <field> (example: ...) to converge Unknown to Yes/No."
ERROR_002: "Specification conflict found: <yi> vs <yj>."
Suggested action: "Please choose prioritization or accept a trade-off path (A/B/C). If not chosen, will be handled according to high-risk default priority."
```
### Degradation Strategy
When unable to output a "Full Checklist":
1. Output MVC (Minimum Viable Checklist)
2. Output Unknown and supplementary actions
3. Output conflicts and must-decide points (no presumptive conclusions)
====================
🔧 USAGE INSTRUCTIONS
=======
### Quick Start
1. Copy the "【Main Prompt for Direct Input】" below into the model.
2. Paste your context and requirements_set.
3. Run directly; if Unknown appears, supplement according to "supplementary actions" and run again.
### Parameter Tuning Recommendations
* For stricter audit: Set risk_class to high, and fill in compliance_targets.
* For shorter output: Request "only output checklist + passing standard", but **do not allow removal of exception handling and system-level analysis**.
* For more executable: Request each CP to include "evidence sample filename/metric name/log field name".
### Version Update Record
* v1.0.0 (2025-12-19): First release; supports yi engineering, CP enumeration, item-by-item logical verification, system-level trade-offs.
################################################################################
# 【Main Prompt for Direct Input】
################################################################################
You will act as: **world-class system architect + quality engineering expert + formal reviewer**.
Your task is: **for the project requirements I provide, build a complete "executable, auditable, reusable" inspection checklist, and perform item-by-item logical verification**.
Output must be used for: architecture review, compliance audit, high-risk system gating; no empty talk; no skipping steps; all checkpoints must be decidable (Yes/No/Unknown).
---
## Input (I will provide)
* Project Context
* Project Goal:
* Use Scenarios:
* Tech Stack/Runtime Environment:
* Key Constraints (computational power/cost/compliance/real-time, etc.):
* Requirement Specification Set
* y1...yn: May be abstract, informal
---
## Your Mandatory Tasks (All)
### Task 1: Requirement Semantic Decomposition
For each yi:
* Provide **engineered definition**
* Point out **applicable boundaries and implicit assumptions**
* Provide **common failure modes/misinterpretations**
### Task 2: Checklist Enumeration
For each yi, **exhaustively list** all mandatory check points (at least covering):
* Design level
* Implementation level
* Runtime/Operations level
* Extreme/Boundary/Exception scenarios
* Potential conflicts with other yj
Requirements: each checkpoint must be decidable (Yes/No/Unknown); do not merge distinct checks into one ambiguous statement; use numbering: CP-yi-01...
### Task 3: Item-by-Item Logical Check
For each checkpoint CP:
1. **Definition**: What is being verified?
2. **Necessity**: What happens if it's ignored?
3. **Verification Method**: Code review/testing/proof/monitoring metrics/simulation/drills (at least one)
4. **Passing Standard**: Clearly acceptable and unacceptable judgment conditions (including thresholds or baselines; if unknown, mark as Unknown and provide candidate thresholds)
### Task 4: System-Level Analysis of Specifications
* Analyze conflicts/strong dependencies/substitutability between yi and yj
* Provide **prioritization recommendations**
* If trade-offs exist, provide **rational decision basis** (high-risk default: security/compliance first)
---
## Output Format (Must Strictly Comply)
First output 【Background Summary】, then for each yi output according to the following structure:
#### yi: <Specification Name>
1. **Specification Definition (Engineered)**
2. **Scope and Boundaries**
3. **Complete Checklist**
* CP-yi-01:
* CP-yi-02:
* ...
4. **Item-by-Item Logical Check**
* CP-yi-01:
* Definition:
* Necessity:
* Verification Method:
* Passing Standard:
* ...
5. **Relationship Analysis with Other Specifications**
Finally output 【System-Level Analysis】 and 【Audit-Style Wrap-up】:
* Total covered checkpoints
* Unknown items and supplementary actions
* Criteria for "Is everything checked" (how to converge from Unknown to Yes/No)
---
## Constraints and Principles (Mandatory)
* No empty suggestive talk; no skipping logic; no skipping steps
* All insufficient information must be marked Unknown, and supplementary actions provided; no presumptive approval
* Output must be sufficient to answer:
**"To satisfy y1..yn, what exactly do I need to check? Have I checked everything?"**
Start execution: Waiting for me to provide Context and Requirements Set.
```


@@ -0,0 +1,94 @@
```markdown
# Strong Precondition Constraints
> Combine these constraints freely as needed
---
### General Development Constraints
1. Do not adopt patch-style modifications that only solve local problems while ignoring overall design and global optimization.
2. Do not introduce too many intermediate states for communication between components, as this reduces readability and creates circular dependencies.
3. Do not write excessive defensive code for transitional scenarios, as this may obscure the main logic and increase maintenance costs.
4. Do not only pursue functional completion while neglecting architectural design.
5. Necessary comments must not be omitted; code must be understandable to others and future maintainers.
6. Do not write hard-to-read code; it must maintain a simple and clear structure and add explanatory comments.
7. Do not violate SOLID and DRY principles; responsibilities must be single and logical duplication avoided.
8. Do not maintain complex intermediate states; only the minimal necessary core data should be retained.
9. Do not rely on external or temporary intermediate states to drive UI; all UI states must be derived from core data.
10. Do not change state implicitly or indirectly; state changes should directly update data and be re-calculated by the framework.
11. Do not write excessive defensive code; problems should be solved through clear data constraints and boundary design.
12. Do not retain unused variables and functions.
13. Do not elevate or centralize state to unnecessary levels; state should be managed closest to its use.
14. Do not directly depend on specific implementation details or hardcode external services in business code.
15. Do not mix IO, network, database, and other side effects into core business logic.
16. Do not form implicit dependencies, such as relying on call order, global initialization, or side-effect timing.
17. Do not swallow exceptions or use empty catch blocks to mask errors.
18. Do not use exceptions as part of normal control flow.
19. Do not return semantically unclear or mixed error results (e.g., null / undefined / false).
20. Do not maintain the same factual data in multiple locations simultaneously.
21. Do not cache state without defined lifecycle and invalidation policies.
22. Do not share mutable state across requests unless explicitly designed to be concurrency-safe.
23. Do not use vague or misleading naming.
24. Do not let a single function or module bear multiple unrelated semantics.
25. Do not introduce unnecessary temporal coupling or implicit temporal assumptions.
26. Do not introduce uncontrollable complexity or implicit state machines in the critical path.
27. Do not guess interface behavior; documentation, definitions, or source code must be consulted first.
28. Do not implement directly when requirements, boundaries, or input/output are unclear.
29. Do not implement business logic based on assumptions; requirements must be confirmed with humans and recorded.
30. Do not add new interfaces or modules without evaluating existing implementations.
31. Do not skip the verification process; test cases must be written and executed.
32. Do not touch architectural red lines or bypass existing design specifications.
33. Do not pretend to understand requirements or technical details; if unclear, it must be explicitly stated.
34. Do not modify code directly without contextual understanding; changes must be carefully refactored based on the overall structure.
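Constraints 17-19 are language-agnostic; in shell terms they amount to surfacing failures with distinct, documented exit statuses instead of masking them. A minimal sketch (`copy_config` and its status codes are purely illustrative):

```bash
# Shell-flavored reading of constraints 17 and 19: never swallow a failure,
# and fail with a distinct, documented status instead of a vague false/empty.
copy_config() {
  src="$1"; dst="$2"
  if [ ! -f "$src" ]; then
    echo "copy_config: source not found: $src" >&2
    return 2   # distinct status for "missing input", not a silent success
  fi
  cp "$src" "$dst" || return 3   # distinct status for "copy failed"
}
```

The anti-pattern the constraints forbid would be `cp "$src" "$dst" 2>/dev/null || true`, which converts every failure into an apparent success.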
---
### Glue Development Constraints
1. Do not implement low-level or common logic yourself; existing mature repositories and production-grade libraries must be preferred and reused directly and in full.
2. Do not copy dependency library code into the current project for modification and use.
3. Do not perform any form of functional clipping, logic rewriting, or downgraded encapsulation on dependency libraries.
4. Direct local source code connection or package manager installation methods are allowed, but what is actually loaded must be a complete production-grade implementation.
5. Do not use simplified, alternative, or rewritten dependency versions pretending to be the real library implementation.
6. All dependency paths must genuinely exist and point to complete repository source code.
7. Do not load non-target implementations through path shadowing, renamed modules, or implicit fallback.
8. Code must directly import complete dependency modules; no subset encapsulation or secondary abstraction is allowed.
9. Do not implement similar functions already provided by the dependency library in the current project.
10. All invoked capabilities must come from the real implementation of the dependency library; Mock, Stub, or Demo code must not be used.
11. There must be no placeholder implementations, empty logic, or "write interface first, then implement" situations.
12. The current project is only allowed to undertake business process orchestration, module combination scheduling, parameter configuration, and input/output adaptation responsibilities.
13. Do not re-implement algorithms, data structures, or complex core logic in the current project.
14. Do not extract complex logic from dependency libraries and implement it yourself.
15. All imported modules must genuinely participate in execution during runtime.
16. There must be no "import but not use" pseudo-integration behavior.
17. It must be ensured that `sys.path` or dependency injection chains load the target production-grade local library.
18. Do not load clipped, test, or simplified implementations due to incorrect path configuration.
19. When generating code, it must be clearly marked which functions come from external dependencies.
20. Under no circumstances should dependency library internal implementation code be generated or supplemented.
21. Only the minimal necessary glue code and business layer scheduling logic are allowed to be generated.
22. Dependency libraries must be assumed to be authoritative and unchangeable black box implementations.
23. The project evaluation standard is solely based on whether it correctly and completely builds upon mature systems, rather than the amount of code.
---
### Systematic Code and Functional Integrity Check Constraints
24. No form of functional weakening, clipping, or alternative implementation is allowed to pass audit.
25. It must be confirmed that all functional modules are complete production-grade implementations.
26. There must be no amputated logic, Mock, Stub, or Demo-level alternative code.
27. Behavior must be consistent with the mature production version.
28. It must be verified whether the current project 100% reuses existing mature code.
29. There must be no form of re-implementation or functional folding.
30. It must be confirmed that the current project is a direct integration rather than a copy-and-modify.
31. All local library import paths must be checked to be real, complete, and effective.
32. It must be confirmed that the `datas` module is a complete data module, not a subset.
33. It must be confirmed that `sizi.summarys` is a complete algorithm implementation and not downgraded.
34. Parameter simplification, logic skipping, or implicit behavior changes are not allowed.
35. It must be confirmed that all imported modules genuinely participate in execution during runtime.
36. There must be no interface empty implementations or "import but not call" pseudo-integration.
37. Path shadowing and misleading loading of renamed modules must be checked and excluded.
38. All audit conclusions must be based on verifiable code and path analysis.
39. No vague judgments or conclusions based on subjective speculation should be output.
40. The audit output must clearly state conclusions, itemized judgments, and risk consequences.
```


@ -0,0 +1,176 @@
```markdown
---
name: headless-cli
description: "Headless Mode AI CLI Calling Skill: Supports non-interactive batch calling of Gemini/Claude/Codex CLIs, including YOLO mode and safe mode. Used for scenarios like batch translation, code review, multi-model orchestration, etc."
---
# Headless CLI Skill
Non-interactive batch calling of AI CLI tools, supporting stdin/stdout pipes to achieve automated workflows.
## When to Use This Skill
Trigger conditions:
- Need to process files in batches (translate, review, format)
- Need to call AI models in scripts
- Need to chain/parallelize multiple models
- Need unattended AI task execution
## Not For / Boundaries
Not applicable for:
- Scenarios requiring interactive conversation
- Tasks requiring real-time feedback
- Sensitive operations (YOLO mode requires caution)
Required inputs:
- Corresponding CLI tools installed
- Identity authentication completed
- Network proxy configuration (if needed)
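The required inputs above can be checked with a small preflight step before starting an unattended run; a sketch (the tool names passed in are whichever CLIs the workflow actually uses):

```bash
# Preflight: verify each required CLI is installed before a headless batch run.
preflight() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Usage: `preflight gemini claude codex || exit 1` at the top of a batch script fails fast instead of dying halfway through a run.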
## Quick Reference
### 🔴 YOLO Mode (Full permissions, skips confirmation)
**Codex CLI**
```bash
# --yolo is an alias for --dangerously-bypass-approvals-and-sandbox
alias c='codex --enable web_search_request -m gpt-5.1-codex-max -c model_reasoning_effort="high" --yolo'
```
**Claude Code**
```bash
alias cc='claude --dangerously-skip-permissions'
```
**Gemini CLI**
```bash
# --yolo or --approval-mode yolo
alias g='gemini --yolo'
```
### 🟡 Full-Auto Mode (Recommended automation method)
**Codex CLI**
```bash
# workspace-write sandbox + approval only on failure
codex --full-auto "Your prompt"
```
**Gemini CLI**
```bash
# Automatically approve edit tools
gemini --approval-mode auto_edit "Your prompt"
```
### 🟢 Safe Mode (Headless but with limitations)
**Gemini CLI (Disable tool calls)**
```bash
cat input.md | gemini -p "prompt" --output-format text --allowed-tools '' > output.md
```
**Claude Code (Print Mode)**
```bash
cat input.md | claude -p "prompt" --output-format text > output.md
```
**Codex CLI (Non-interactive execution)**
```bash
codex exec "prompt" --json -o result.txt
```
### 📋 Common Command Templates
**Batch Translation**
```bash
# Set proxy (if needed)
export http_proxy=http://127.0.0.1:9910
export https_proxy=http://127.0.0.1:9910
# Gemini Translation
cat zh.md | gemini -p "Translate to English. Keep code/links unchanged." \
--output-format text --allowed-tools '' > en.md
```
**Code Review**
```bash
cat code.py | claude --dangerously-skip-permissions -p \
"Review this code for bugs and security issues. Output markdown." > review.md
```
**Multi-Model Orchestration**
```bash
# Model A generates → Model B reviews
cat spec.md | gemini -p "Generate code" --output-format text | \
claude -p "Review and improve this code" --output-format text > result.md
```
### ⚙️ Key Parameter Comparison Table
| Feature | Gemini CLI | Claude Code | Codex CLI |
|:---|:---|:---|:---|
| YOLO Mode | `--yolo` | `--dangerously-skip-permissions` | `--yolo` |
| Specify Model | `-m <model>` | `--model <model>` | `-m <model>` |
| Non-interactive | `-p "prompt"` | `-p "prompt"` | `exec "prompt"` |
| Output Format | `--output-format text` | `--output-format text` | `--json` |
| Disable Tools | `--allowed-tools ''` | `--disallowedTools` | N/A |
| Continue Conversation | N/A | `-c` / `--continue` | `resume --last` |
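As a sketch, the flags in the table map onto three roughly equivalent non-interactive invocations. The prompt is illustrative, and the assembled command lines are only printed for inspection, not executed:

```bash
# Assemble equivalent headless invocations from the comparison table.
# Nothing runs here; the strings are printed so they can be inspected
# or reused in a script.
prompt="Summarize README.md"

gemini_cmd="gemini -p \"$prompt\" --output-format text"
claude_cmd="claude -p \"$prompt\" --output-format text"
codex_cmd="codex exec \"$prompt\" --json"

printf '%s\n' "$gemini_cmd" "$claude_cmd" "$codex_cmd"
```

Note how only the Codex invocation changes shape (`exec` subcommand and `--json`); the other two differ only in the binary name.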
## Examples
### Example 1: Batch Translating Documents
**Input**: Chinese Markdown file
**Steps**:
```bash
export http_proxy=http://127.0.0.1:9910
export https_proxy=http://127.0.0.1:9910
for f in docs/*.md; do
cat "$f" | timeout 120 gemini -p \
"Translate to English. Keep code fences unchanged." \
  --output-format text --allowed-tools '' 2>/dev/null > "en_$(basename "$f")"
done
```
**Expected output**: Translated English file
### Example 2: Code Review Pipeline
**Input**: Python code file
**Steps**:
```bash
cat src/*.py | claude --dangerously-skip-permissions -p \
"Review for: 1) Bugs 2) Security 3) Performance. Output markdown table." > review.md
```
**Expected output**: Markdown formatted review report
### Example 3: Multi-Model Comparison and Verification
**Input**: Technical question
**Steps**:
```bash
question="How to implement rate limiting in Python?"
gemini -p "$question" --output-format text > gemini_answer.md
claude -p "$question" --output-format text > claude_answer.md
# Compare the two answers
diff gemini_answer.md claude_answer.md
```
**Expected output**: Comparison of answers from two models
## References
- `references/gemini-cli.md` - Gemini CLI complete parameters
- `references/claude-cli.md` - Claude Code CLI parameters
- `references/codex-cli.md` - Codex CLI parameters
- [Gemini CLI Official Documentation](https://geminicli.com/docs/)
- [Claude Code Official Documentation](https://docs.anthropic.com/en/docs/claude-code/)
- [Codex CLI Official Documentation](https://developers.openai.com/codex/cli/reference)
## Maintenance
- Source: official documentation for each CLI
- Updated: 2025-12-19
- Limitations: Requires network connection and valid authentication; YOLO mode has security risks
```

# Claude Code CLI Parameter Reference
> Source: [Official Documentation](https://docs.anthropic.com/en/docs/claude-code/cli-reference)
## Installation
```bash
npm install -g @anthropic-ai/claude-code
```
## Authentication
Requires an Anthropic API Key or Claude Pro/Max subscription:
```bash
export ANTHROPIC_API_KEY="YOUR_API_KEY"
```
## Core Commands
| Command | Description | Example |
|:---|:---|:---|
| `claude` | Starts an interactive REPL | `claude` |
| `claude "query"` | Starts with an initial prompt | `claude "explain this"` |
| `claude -p "query"` | Print mode, exits after execution | `claude -p "review code"` |
| `claude -c` | Continues the most recent conversation | `claude -c` |
| `claude -c -p "query"` | Continues conversation (Print mode) | `claude -c -p "run tests"` |
| `claude -r "id" "query"` | Resumes a specified session | `claude -r "abc123" "continue"` |
| `claude update` | Updates to the latest version | `claude update` |
| `claude mcp` | Configures the MCP server | `claude mcp add server` |
## CLI Parameters
| Parameter | Description | Example |
|:---|:---|:---|
| `--model` | Specifies the model | `--model claude-sonnet-4` |
| `--output-format` | Output format: `text`/`json`/`stream-json` | `--output-format json` |
| `--max-turns` | Limits the number of conversation turns | `--max-turns 3` |
| `--dangerously-skip-permissions` | Skips all permission confirmations (YOLO) | See below |
| `--allowedTools` | List of allowed tools | `--allowedTools "Write" "Bash(git *)"` |
| `--disallowedTools` | List of disallowed tools | `--disallowedTools "Bash(rm *)"` |
| `--add-dir` | Adds additional working directories | `--add-dir ./apps ./lib` |
| `--verbose` | Enables detailed logs | `--verbose` |
| `--continue` | Continues the recent conversation | `--continue` |
| `--resume` | Resumes a specified session | `--resume abc123` |
## Available Models
- `claude-sonnet-4` - Balanced model (default)
- `claude-opus-4` - Most powerful model
- `claude-opus-4.5` - Latest and most powerful
## Headless Mode Usage
```bash
# Print mode (non-interactive, exits after execution)
claude -p "review this code" --output-format text
# Piped input
cat input.txt | claude -p "explain these errors"
# YOLO mode (skips all permission confirmations)
claude --dangerously-skip-permissions "Your prompt"
# Alias setup
alias cc='claude --dangerously-skip-permissions'
# Continue conversation + Print mode (suitable for scripts)
claude -c -p "show progress"
```
## Interactive Commands (Slash Commands)
| Command | Description |
|:---|:---|
| `/help` | Displays all commands |
| `/config` | Configures settings |
| `/allowed-tools` | Configures tool permissions |
| `/mcp` | Manages MCP servers |
| `/vim` | Enables vim editing mode |
## Configuration Files
- User settings: `~/.claude/settings.json`
- Project settings: `.claude/settings.json`
- Local settings: `.claude/settings.local.json`
```json
{
"model": "claude-sonnet-4",
"permissions": {
"allowedTools": ["Read", "Write", "Bash(git *)"],
"deny": ["Read(./.env)", "Bash(rm *)"]
}
}
```
## Context Files (CLAUDE.md)
- Global: `~/.claude/CLAUDE.md`
- Project: `./CLAUDE.md`
- Subdirectory: Component-specific instructions
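A project-level CLAUDE.md is plain Markdown. The sketch below writes one with purely illustrative contents; none of these notes are required fields:

```bash
# Create an illustrative project-level CLAUDE.md in the current
# directory; the notes inside are hypothetical examples.
cat > CLAUDE.md <<'EOF'
# Project notes for Claude
- Run tests with: npm test
- Never edit files under vendor/
EOF
cat CLAUDE.md
```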
## Deep Thinking Trigger Words
Increasing intensity:
- `think` - Basic thinking
- `think hard` - Deep thinking
- `think harder` - Deeper thinking
- `ultrathink` - Deepest thinking
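As a sketch, a trigger word is simply prepended to the prompt text before it is passed to `claude -p`; the task below is illustrative and the prompt is only printed here:

```bash
# Build a print-mode prompt with a thinking trigger prepended.
# Pipe the result into `claude -p` (not done here) to run it for real.
task="Find the race condition in src/worker.py"
prompt="think harder: $task"
echo "$prompt"
```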
## Common Issues
1. **Permission pop-ups**: Use `--dangerously-skip-permissions`
2. **Context too long**: Use `/compact` or `/clear`
3. **Reverting changes**: Use `/rewind`

```markdown
# Codex CLI Parameter Reference
> Source: [Official Documentation](https://developers.openai.com/codex/cli/reference)
## Installation
```bash
npm install -g @openai/codex
```
## Authentication
```bash
# Method 1: Browser OAuth (ChatGPT account)
codex login
# Method 2: API Key
printenv OPENAI_API_KEY | codex login --with-api-key
# Check login status
codex login status
```
## Core Commands
| Command | Description | Example |
|:---|:---|:---|
| `codex` | Starts interactive TUI | `codex` |
| `codex "prompt"` | Starts with a prompt | `codex "explain this"` |
| `codex exec` / `codex e` | Non-interactive mode | `codex exec "fix bugs"` |
| `codex resume` | Resumes session | `codex resume --last` |
| `codex apply` / `codex a` | Applies diff from Cloud task | `codex apply TASK_ID` |
| `codex mcp` | Manages MCP server | `codex mcp add server` |
| `codex completion` | Generates shell completion | `codex completion zsh` |
## Global Parameters
| Parameter | Description | Example |
|:---|:---|:---|
| `--model, -m` | Specifies model | `-m gpt-5-codex` |
| `--sandbox, -s` | Sandbox policy: `read-only`/`workspace-write`/`danger-full-access` | `-s workspace-write` |
| `--ask-for-approval, -a` | Approval mode: `untrusted`/`on-failure`/`on-request`/`never` | `-a on-failure` |
| `--full-auto` | Automatic preset (workspace-write + on-failure) | `--full-auto` |
| `--dangerously-bypass-approvals-and-sandbox` / `--yolo` | Bypasses all approvals and sandbox | `--yolo` |
| `--search` | Enables web search | `--search` |
| `--add-dir` | Adds extra write directory | `--add-dir ./other` |
| `--enable` | Enables feature flag | `--enable web_search_request` |
| `--disable` | Disables feature flag | `--disable feature_name` |
| `--config, -c` | Configuration override | `-c model_reasoning_effort="high"` |
| `--image, -i` | Attaches image | `-i image.png` |
| `--cd, -C` | Sets working directory | `-C /path/to/project` |
| `--profile, -p` | Profile configuration | `-p my-profile` |
| `--oss` | Uses local open-source model (Ollama) | `--oss` |
## `codex exec` Specific Parameters
| Parameter | Description | Example |
|:---|:---|:---|
| `--json` | Outputs JSONL format | `--json` |
| `--output-last-message, -o` | Saves final message to file | `-o result.txt` |
| `--output-schema` | JSON Schema validation output | `--output-schema schema.json` |
| `--color` | Color output: `always`/`never`/`auto` | `--color never` |
| `--skip-git-repo-check` | Allows running in non-Git directories | `--skip-git-repo-check` |
## Available Models
- `gpt-5-codex` - Standard model
- `gpt-5.1-codex` - Enhanced version
- `gpt-5.1-codex-max` - Strongest model
## Reasoning Strength Configuration
```bash
-c model_reasoning_effort="low" # Fast
-c model_reasoning_effort="medium" # Balanced
-c model_reasoning_effort="high" # Deep
```
## Headless Mode Usage
```bash
# Non-interactive execution
codex exec "fix all linting errors"
# Piped input
echo "explain this error" | codex exec -
# YOLO mode (skips all confirmations and sandbox)
codex --yolo "Your prompt"
# Or full syntax
codex --dangerously-bypass-approvals-and-sandbox "Your prompt"
# full-auto mode (recommended automated approach)
codex --full-auto "Your prompt"
# Full YOLO config alias
alias c='codex --enable web_search_request -m gpt-5.1-codex-max -c model_reasoning_effort="high" --yolo'
# Resume last session
codex resume --last
codex exec resume --last "continue"
```
## Configuration File
Configuration is stored in `~/.codex/config.toml`:
```toml
model = "gpt-5-codex"
sandbox = "workspace-write"
ask_for_approval = "on-failure"
[features]
web_search_request = true
```
## Frequently Asked Questions
1. **Approval pop-ups**: Use `--yolo` or `--full-auto`
2. **Internet connection required**: Use `--search` or `--enable web_search_request`
3. **Insufficient reasoning depth**: Use `-c model_reasoning_effort="high"`
4. **Non-Git directory**: Use `--skip-git-repo-check`
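Several of these fixes can be combined in one invocation. The sketch below assembles it as a string for inspection rather than executing it; the prompt text is illustrative:

```bash
# Combine the FAQ flags into a single `codex exec` invocation; the
# command is only printed, not run.
cmd='codex exec "audit this repo" --search --skip-git-repo-check'
cmd="$cmd -c model_reasoning_effort=\"high\""
echo "$cmd"
```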
```

# Gemini CLI Parameter Reference
> Source: [Official Documentation](https://geminicli.com/docs/get-started/configuration/)
## Installation
```bash
npm install -g @google/gemini-cli
```
## Authentication
The first run will guide you through Google account login, or you can set environment variables:
```bash
export GEMINI_API_KEY="YOUR_API_KEY"
```
## Core Command Line Parameters
| Parameter | Description | Example |
|:---|:---|:---|
| `--model <model>` | Specify model | `--model gemini-2.5-flash` |
| `--yolo` | YOLO mode, automatically approve all tool calls | `gemini --yolo` |
| `--approval-mode <mode>` | Approval mode: `default`/`auto_edit`/`yolo` | `--approval-mode auto_edit` |
| `--allowed-tools <tools>` | List of allowed tools (comma separated) | `--allowed-tools ''` (disable all) |
| `--output-format <format>` | Output format: `text`/`json`/`stream-json` | `--output-format text` |
| `--sandbox` / `-s` | Enable sandbox mode | `gemini -s` |
| `--prompt <prompt>` / `-p` | Non-interactive mode, pass prompt directly | `gemini -p "query"` |
| `--prompt-interactive <prompt>` / `-i` | Interactive mode with initial prompt | `gemini -i "explain"` |
| `--debug` / `-d` | Enable debug mode | `gemini -d` |
## Available Models
- `gemini-2.5-flash` - Fast model
- `gemini-2.5-pro` - Advanced model
- `gemini-3-flash-preview` - Latest Flash
- `gemini-3-pro-preview` - Latest Pro
## Headless Mode Usage
```bash
# Basic headless call (piped input)
cat input.txt | gemini -p "Your prompt" --output-format text
# Disable tool calls (plain text output)
cat input.txt | gemini -p "Your prompt" --output-format text --allowed-tools ''
# YOLO mode (skip all confirmations)
gemini --yolo "Your prompt"
# Or use approval-mode
gemini --approval-mode yolo "Your prompt"
```
## Configuration File
Configuration is stored in `~/.gemini/settings.json` or project `.gemini/settings.json`:
```json
{
"security": {
"disableYoloMode": false
},
"model": {
"name": "gemini-2.5-flash"
}
}
```
## Proxy Configuration
```bash
export http_proxy=http://127.0.0.1:9910
export https_proxy=http://127.0.0.1:9910
```
## Frequently Asked Questions
1. **MCP initialization is slow**: Use `--allowed-tools ''` to skip.
2. **Timeout**: Use the `timeout` command wrapper.
3. **Output includes logs**: Redirect stderr `2>/dev/null`.
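The three workarounds combine naturally into one hardened call. A sketch, with an illustrative timeout value, a stand-in input file, and `|| true` so a batch script keeps going if one file fails:

```bash
printf 'hello world\n' > input.md   # stand-in input file
# timeout guards against hangs; --allowed-tools '' skips slow MCP
# startup; 2>/dev/null drops progress logs from stderr.
cat input.md | timeout 120 gemini -p "Summarize this file" \
  --output-format text --allowed-tools '' 2>/dev/null > output.md || true
```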

```markdown
# Headless CLI References
> ⚠️ CLI parameters may change with version updates, please refer to the official documentation.
## Table of Contents
- [gemini-cli.md](./gemini-cli.md) - Gemini CLI Parameters
- [claude-cli.md](./claude-cli.md) - Claude Code CLI Parameters
- [codex-cli.md](./codex-cli.md) - Codex CLI Parameters
## Official Documentation
- [Gemini CLI](https://github.com/google-gemini/gemini-cli)
- [Claude Code](https://docs.anthropic.com/en/docs/claude-code)
- [Codex CLI](https://github.com/openai/codex)
```

# Vibe Coding Philosophy
> First principles: Leave everything to AI... I am a parasite of AI; without AI, I lose all my abilities.
> Leave everything to AI... I am a parasite of AI; without AI, I lose all my abilities.
## Practice