Writing — AI Agent Writing Skill

v1.0.0

About This Skill

Perfect for language agents needing advanced content critique and improvement capabilities. The AI agent writing skill uses AI to automate the writing and editing of long-form content.

Features

Iterative critique and improvement of long-form content
Parallel evaluation with user approval
Automated writing and editing
Improved writing efficiency and quality
Support for multiple file formats

Gerstep
Updated: 3/23/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review

Review Score: 10/11
Quality Score: 50
Canonical Locale: en
Detected Body Locale: en


Core Value

Empowers agents to iteratively refine long-form content with parallel judges that evaluate sections against acceptance criteria, streamlining content creation from Markdown files and vision documents. It uses natural language processing and machine learning to provide data-driven feedback, ensuring sharp, concise, and practical writing.

Applicable Agent Types

Perfect for Language Agents needing advanced content critique and improvement capabilities.

Key Capabilities · Writing

Automating content review and editing for blog posts and articles
Generating data-driven feedback for research papers and essays
Refining technical writing, such as guidebooks and documentation, for clarity and completeness

Limitations and Requirements

  • Requires user approval for all changes
  • Limited to Markdown files (.md) and optional vision documents
  • Needs access to source files for cross-referencing

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.



FAQ and Installation Steps

The questions and steps below match the page's structured data, making the page content easier for search engines to understand.

FAQ

What is Writing?

Perfect for language agents needing advanced content critique and improvement capabilities. The AI agent writing skill uses AI to automate the writing and editing of long-form content.

How do I install Writing?

Run the command: npx killer-skills add Gerstep/cybos/Writing. It supports 19+ IDEs/agents, including Cursor, Windsurf, VS Code, and Claude Code.

What scenarios is Writing suited for?

Typical scenarios include: automating content review and editing for blog posts and articles; generating data-driven feedback for research papers and essays; and refining technical writing, such as guidebooks and documentation, for clarity and completeness.

Which IDEs or agents does Writing support?

The skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. The Killer-Skills CLI installs it with a single command in any of them.

What are Writing's limitations?

It requires user approval for all changes; it is limited to Markdown files (.md) and optional vision documents; and it needs access to source files for cross-referencing.

Installation Steps

  1. Open a terminal

    Open a terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add Gerstep/cybos/Writing. The CLI detects your IDE or AI agent automatically and completes the configuration.

  3. Start using the skill

    Writing is now enabled and can be invoked immediately in the current project.

Reference-Only Mode

This page remains available as an installation and lookup reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review conclusions above first, then decide whether to continue to the upstream repository notes.

Imported Repository Instructions

The section below is supporting source material from the upstream repository. Use the Killer-Skills review above as the primary decision layer.

Supporting Evidence

Writing

Use the AI agent writing skill for iterative critique and improvement of long-form content, improving writing efficiency and quality.

SKILL.md

Writing Skill

Iterative critique and improvement of long-form content. Parallel judges evaluate sections against acceptance criteria. User approves all changes.

Architecture

USER: /cyber-writing <content-file> [--vision <file>] [--sources <files...>]
    │
    ▼
PHASE 1: INGEST                           (main session)
    Read content + vision + sources
    Parse into sections (## headings)
    User picks scope (all / specific sections)
    Write /tmp/writing-brief.md
    │
    ▼
PHASE 2: CRITIQUE                          (5 parallel subagents per section)
    ┌──────────┬──────────┬──────────┬──────────┬──────────┐
    │PRACTICAL │  DATA    │ LANGUAGE │SUBSTANCE │COMPLETE- │
    │  JUDGE   │FRESHNESS │  JUDGE   │  JUDGE   │NESS JUDGE│
    │          │  JUDGE   │          │          │          │
    │"Would a  │"Late-2025│"Sharp?   │"Actual   │"What's   │
    │CTO act   │or 2026   │Concise?  │insight   │missing?  │
    │on this?" │data?"    │No fluff?"│or common │What's    │
    │          │          │          │wisdom?"  │excess?"  │
    └────┬─────┴────┬─────┴────┬─────┴────┬─────┴────┬─────┘
         └──────────┴──────────┴──────────┴──────────┘
                               │
                    Each returns findings as task result
    │
    ▼
PHASE 3: SYNTHESIS                         (main session)
    Deduplicate + consolidate judge outputs into:
    ├── PROBLEMS  (what's wrong, with evidence + severity)
    ├── PROPOSALS (specific rewrites / additions / cuts)
    └── MISSING   (gaps that need new content)
    │
    ▼
PHASE 4: PRESENT                           (main session, interactive)
    Per proposal: Accept / Modify / I'll write it / Skip / Iterate
    │
    ├── Accept/Modify → PHASE 5
    ├── I'll write it → wait for user
    ├── Skip → next proposal
    └── Iterate → back to PHASE 2 for this section
    │
    ▼
PHASE 5: APPLY                             (main session)
    Edit file with accepted changes
    │
    ▼
PHASE 6: NEXT SECTION                      (loop)
    Repeat Phase 2-5 for next section in scope
    After all sections → summary of changes

Invocation

Conversational or via command:

/cyber-writing <content-file> [--vision <file>] [--sources <file1> <file2> ...]

Arguments:

  • content-file: Path to the content being critiqued (required)
  • --vision: Vision/goal document for the project (optional but recommended)
  • --sources: Source files the content was built from (optional, for cross-referencing)

Examples:

/cyber-writing ~/path/to/guidebook.md --vision ~/path/to/VISION.md
/cyber-writing ~/path/to/essay.md --sources ~/path/to/research.md ~/path/to/notes.md

If arguments are missing, ask the user for the content file path and whether there's a vision doc.


Phase 1: Ingest

  1. Read content file — the document being critiqued
  2. Read vision doc (if provided) — the project's goal, audience, acceptance criteria
  3. Read source files (if provided) — materials the content was built from
  4. Parse sections — split by ## headings. Each ## heading = one section
  5. Present section list to user with line counts
  6. User picks scope:
    • all — critique every section sequentially
    • Section numbers — e.g., "1.1, 1.3, 2.4"
    • Range — e.g., "Part 2" or "sections 2.1-2.6"
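Step 4's section parsing can be sketched in a few lines of Python; `parse_sections` is an illustrative helper under the assumption that every `##` heading opens a new section, not part of the skill itself:

```python
import re

def parse_sections(markdown_text):
    """Split a Markdown document into sections, one per '##' heading.

    Returns (heading, line_count) pairs so the section list can be
    presented to the user with sizes, as Phase 1 describes.
    """
    sections = []          # list of (title, [body lines])
    current = None
    for line in markdown_text.splitlines():
        if re.match(r"^##\s+", line):   # a '##' heading starts a new section
            current = (line.lstrip("#").strip(), [])
            sections.append(current)
        elif current is not None:
            current[1].append(line)
    return [(title, len(body)) for title, body in sections]

doc = "# Guide\n\n## Intro\nOne line.\n\n## Details\nLine a.\nLine b.\n"
print(parse_sections(doc))   # [('Intro', 2), ('Details', 2)]
```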

Write Handoff Brief

Write /tmp/writing-brief.md containing:

```markdown
# Writing Critique Brief

## Project
[1-2 sentences: what this content is, who it's for]

## Vision
[Summary of vision doc, or "No vision doc provided"]

## Acceptance Criteria
- Practical: reader can act on this immediately
- Data: claims use late-2025 or 2026 data with named sources
- Language: sharp, concise, no marketing fluff, no LLM-isms
- Substance: genuine insight, not restated common knowledge
- Complete: important angles covered, nothing excess

## Source Files
[List of source file paths, or "No source files provided"]

## Full Document TOC
[Table of contents with section headings and line numbers]
```

Phase 2: Critique (Parallel Subagents)

For each section in scope, dispatch 5 judge subagents simultaneously via the Task tool.
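The actual fan-out runs through Claude Code's Task tool, but the pattern itself can be sketched with a thread pool; `dispatch_judge` here is a hypothetical stand-in for the subagent call, not the real API:

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch_judge(role, key_question, section_text):
    # Placeholder: a real judge is a subagent prompted from prompts/judge.md.
    return {"role": role, "question": key_question,
            "findings": f"{role} reviewed {len(section_text)} chars"}

JUDGES = [
    ("Practical Judge", "Would a CTO act on this?"),
    ("Data Freshness Judge", "Late-2025 or 2026 data?"),
    ("Language Judge", "Sharp? Concise? No fluff?"),
    ("Substance Judge", "Actual insight or common wisdom?"),
    ("Completeness Judge", "What's missing? What's excess?"),
]

def critique_section(section_text):
    # All 5 judges run simultaneously; each returns findings as its task result.
    with ThreadPoolExecutor(max_workers=5) as pool:
        futures = [pool.submit(dispatch_judge, role, q, section_text)
                   for role, q in JUDGES]
        return [f.result() for f in futures]

results = critique_section("Some section text.")
print(len(results))  # 5
```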

Judge Dispatch

Each judge:

  • subagent_type: general-purpose
  • model: sonnet (fast, cheaper, sufficient for critique)
  • Prompt: Built from template at prompts/judge.md with role-specific parameters from the dispatch table below
  • Context provided in prompt: The section text (inline), the brief contents (inline), path to full document, paths to source files

Dispatch Configuration

| # | Judge Role | Key Question | Criteria |
|---|------------|--------------|----------|
| 1 | Practical Judge | "Would a CTO, COO, or engineering lead change their behavior after reading this section?" | Actionable advice with specific steps. Named tools, frameworks, or approaches. Clear "do this, not that" guidance. Concrete examples, not abstract principles. |
| 2 | Data Freshness Judge | "Is every claim backed by current (late-2025 or 2026) data from a named source?" | Every statistic has a named company/study and year. No "research shows" without citation. Data points are from 2025-2026 where possible. Pre-2024 data flagged as potentially outdated. Source annotations (`<!-- Source: ... -->`) match the claims. |
| 3 | Language Judge | "Is every sentence sharp, concise, and free of bullshit?" | No LLM-isms (see anti-pattern list). No marketing fluff or hollow declaratives. No hedging ("it could be argued", "one might say"). No filler transitions ("Moreover", "Furthermore"). Every sentence earns its place. Read-aloud test: sounds like a person talking, not a press release. |
| 4 | Substance Judge | "Is there actual insight here or just restated common knowledge?" | Original framing or analysis, not Wikipedia summaries. Specific mechanisms explained (HOW something works, not just THAT it exists). Counterarguments or nuances acknowledged. Would an expert learn something, or just nod along? |
| 5 | Completeness Judge | "What important angles are missing? What's said too much?" | Key counterarguments or risks not addressed. Important real-world examples missing. Sections that repeat content from other sections. Paragraphs that could be cut without losing information. Cross-references to other sections that should exist. |

Anti-Pattern List (provided to Language Judge)

Instant-fail words/phrases:

  • delve, leverage, tapestry, landscape, paradigm, synergy, holistic
  • revolutionary, transformative, game-changing, cutting-edge, groundbreaking
  • "It's important to note that", "It's worth mentioning"
  • "In today's rapidly evolving", "In an era of"
  • "This isn't X. It's Y." (hollow inversion pattern)
  • "But here's the thing", "Here's the turn"
  • "The implications are profound", "The opportunity is clear"
  • Moreover, Furthermore, Additionally, Consequently
  • "Let's delve into", "Let's explore", "Let's unpack"

Structural anti-patterns:

  • Paragraphs longer than 4 sentences
  • Vague attribution ("they say", "experts agree", "studies show")
  • Dramatic mic-drop endings ("This isn't the future. It's the present.")
  • Unnecessary qualifiers ("very", "really", "extremely", "incredibly")
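A mechanical first pass over the instant-fail list is easy to sketch; the phrase list below is abbreviated for illustration, while the Language Judge applies the full list plus judgment:

```python
INSTANT_FAIL = [
    "delve", "leverage", "tapestry", "paradigm", "synergy",
    "it's important to note that", "in today's rapidly evolving",
    "moreover", "furthermore",
]

def flag_antipatterns(text):
    """Return the instant-fail phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in INSTANT_FAIL if phrase in lowered]

sample = "Moreover, let's leverage this paradigm."
print(flag_antipatterns(sample))  # ['leverage', 'paradigm', 'moreover']
```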

Phase 3: Synthesis

After all 5 judges return for a section:

  1. Read all judge outputs
  2. Deduplicate — multiple judges flagging the same issue consolidates into one finding with combined evidence
  3. Classify each finding:
    • PROBLEM: Something wrong with existing text (with severity: Critical / Important / Minor)
    • PROPOSAL: A specific rewrite, addition, or cut (with the exact new text or description of change)
    • MISSING: A gap that needs new content (with description of what should be added)
  4. Rank by severity: Critical first, then Important, then Minor
  5. Write synthesis to /tmp/writing-synthesis-{section-id}.md
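The deduplicate-and-rank steps above can be sketched as follows; the finding shape (issue/judge/severity keys) is an assumption made for this sketch, not the skill's actual data format:

```python
SEVERITY_ORDER = {"Critical": 0, "Important": 1, "Minor": 2}

def synthesize(judge_findings):
    """Merge findings flagged by multiple judges, then rank by severity."""
    merged = {}
    for f in judge_findings:
        key = f["issue"]
        if key in merged:
            merged[key]["judges"].append(f["judge"])   # consolidate evidence
        else:
            merged[key] = {"issue": key, "judges": [f["judge"]],
                           "severity": f["severity"]}
    return sorted(merged.values(),
                  key=lambda m: SEVERITY_ORDER[m["severity"]])

findings = [
    {"issue": "vague claim in para 2", "judge": "Practical", "severity": "Important"},
    {"issue": "vague claim in para 2", "judge": "Substance", "severity": "Important"},
    {"issue": "uncited statistic", "judge": "Data Freshness", "severity": "Critical"},
]
result = synthesize(findings)
print([r["issue"] for r in result])  # ['uncited statistic', 'vague claim in para 2']
```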

Synthesis Format

```markdown
## Section [X.X]: [Title]

### Critical
1. **[Problem]** — [Judge(s)]
   Evidence: [quote from text]
   Proposal: [specific change]

### Important
1. **[Problem]** — [Judge(s)]
   Evidence: [quote from text]
   Proposal: [specific change]

### Minor
1. **[Problem]** — [Judge(s)]
   Proposal: [specific change]

### Missing
1. **[Gap description]** — [Judge(s)]
   Suggestion: [what to add and where]
```

Phase 4: Present to User

Present the synthesis section-by-section. For each finding, offer choices via AskUserQuestion:

For PROBLEMS with PROPOSALS:

PROBLEM: [description with evidence]
PROPOSAL: [specific change]

Options:
1. Accept — apply this edit
2. Modify — "good direction but..." (user refines)
3. I'll write it — user will edit this themselves
4. Skip — leave as-is

For MISSING items:

MISSING: [gap description]
SUGGESTION: [what to add]

Options:
1. Draft it — generate the missing content for my review
2. I'll write it — user will add this themselves
3. Skip — not needed

After all findings for a section:

Options:
1. Move to next section
2. Re-critique this section (after user's manual edits)
3. Done — stop here

Interaction Rules

  • No blanket batch-apply — present critical/important findings one at a time; minor findings may be batched
  • Show diffs — when proposing changes, show the before/after clearly
  • Accept user rewrites — if user provides their own text, use it exactly
  • Track changes — maintain a running list of all accepted changes for the session summary
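The "Show diffs" rule maps directly onto a unified diff; a minimal sketch using Python's standard difflib, where `show_diff` is an illustrative helper:

```python
import difflib

def show_diff(before, after):
    """Render a unified before/after diff for a proposed edit."""
    diff = difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="before", tofile="after", lineterm="",
    )
    return "\n".join(diff)

print(show_diff("It could be argued that tooling matters.",
                "Pick one CI tool and enforce it."))
```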

Phase 5: Apply

For accepted changes:

  1. Use the Edit tool to apply the change to the content file
  2. Confirm the edit was applied
  3. Move to next finding

For "Draft it" on missing items:

  1. Generate the content in the response
  2. Show it to the user for approval
  3. If approved, use Edit or Write to insert it at the appropriate location

Phase 6: Loop + Summary

After completing all sections in scope:

```markdown
## Session Summary

**Sections reviewed**: [list]
**Changes applied**: [count]
**Items skipped**: [count]
**Items for user to write**: [count]

### Changes Made
1. Section X.X: [brief description of change]
2. Section X.X: [brief description of change]
...

### Still Open
1. Section X.X: [user said they'd write this]
2. Section X.X: [gap identified, skipped]
```

Key Principles

  1. Never edit without user approval — propose, never impose
  2. Judges are brutally honest — no praise filler, just problems and proposals
  3. Every critique has evidence — quote the specific text that's problematic
  4. Proposals are concrete — "rewrite X as Y", never "could be improved"
  5. Section-by-section — one section fully resolved before moving to next
  6. Full context to subagents — each judge gets vision doc, full document TOC, and the section text
  7. Model efficiency — judges run on sonnet (fast, sufficient for critique)

Files

| File | Purpose |
|------|---------|
| SKILL.md | This file — architecture and workflow |
| prompts/judge.md | Subagent prompt template for all 5 judges |

Handoff Files

| File | Written By | Read By |
|------|------------|---------|
| /tmp/writing-brief.md | Main (Phase 1) | All judges |
| /tmp/writing-synthesis-{section}.md | Main (Phase 3) | Main (Phase 4), preserved for context |

Commands

| Command | Description |
|---------|-------------|
| /cyber-writing | Main entry point — critique and improve content |
