Writing — AI Agent Writing Skill

v1.0.0

About This Skill

Perfect for Language Agents needing advanced content critique and improvement capabilities. The AI agent writing skill uses AI to automate the drafting and editing of long-form content.

Features

Iterative critique and improvement of long-form content
Parallel evaluation with user approval
Automated drafting and editing
Improved writing efficiency and quality
Support for multiple file formats

# Core Topics

Gerstep
Updated: 3/23/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review

Review Score: 10/11
Quality Score: 50
Canonical Locale: en
Detected Body Locale: en


Why Use This Skill

Empowers agents to iteratively refine long-form content: parallel judges evaluate each section against acceptance criteria, streamlining content creation with Markdown files and vision documents. It uses natural language processing and machine learning to provide data-driven feedback, keeping writing sharp, concise, and practical.

Recommendation

Perfect for Language Agents needing advanced content critique and improvement capabilities.

Feasible Use Cases for Writing

Automating content review and editing for blog posts and articles
Generating data-driven feedback for research papers and essays
Refining technical writing, such as guidebooks and documentation, for clarity and completeness

Security and Limitations

  • Requires user approval for all changes
  • Limited to Markdown files (.md) and optional vision documents
  • Needs access to source files for cross-referencing

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at surfacing repository instructions. It should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

Experience this agent in a zero-setup browser environment powered by WebContainers. No installation required.

FAQ & Installation Steps


Frequently Asked Questions

What is Writing?

Perfect for Language Agents needing advanced content critique and improvement capabilities. The AI agent writing skill uses AI to automate the drafting and editing of long-form content.

How do I install Writing?

Run the command: npx killer-skills add Gerstep/cybos/Writing. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for Writing?

Key use cases include: Automating content review and editing for blog posts and articles, Generating data-driven feedback for research papers and essays, Refining technical writing, such as guidebooks and documentation, for clarity and completeness.

Which IDEs are compatible with Writing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for Writing?

Requires user approval for all changes. Limited to Markdown files (.md) and optional vision documents. Needs access to source files for cross-referencing.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add Gerstep/cybos/Writing. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use Writing immediately in the current project.

Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

Writing

Use the AI agent writing skill to iteratively critique and improve long-form content, raising the efficiency and quality of your writing.

SKILL.md
Supporting Evidence

Writing Skill

Iterative critique and improvement of long-form content. Parallel judges evaluate sections against acceptance criteria. User approves all changes.

Architecture

USER: /cyber-writing <content-file> [--vision <file>] [--sources <files...>]
    │
    ▼
PHASE 1: INGEST                           (main session)
    Read content + vision + sources
    Parse into sections (## headings)
    User picks scope (all / specific sections)
    Write /tmp/writing-brief.md
    │
    ▼
PHASE 2: CRITIQUE                          (5 parallel subagents per section)
    ┌──────────┬──────────┬──────────┬──────────┬──────────┐
    │PRACTICAL │  DATA    │ LANGUAGE │SUBSTANCE │COMPLETE- │
    │  JUDGE   │FRESHNESS │  JUDGE   │  JUDGE   │NESS JUDGE│
    │          │  JUDGE   │          │          │          │
    │"Would a  │"Late-2025│"Sharp?   │"Actual   │"What's   │
    │CTO act   │or 2026   │Concise?  │insight   │missing?  │
    │on this?" │data?"    │No fluff?"│or common │What's    │
    │          │          │          │wisdom?"  │excess?"  │
    └────┬─────┴────┬─────┴────┬─────┴────┬─────┴────┬─────┘
         └──────────┴──────────┴──────────┴──────────┘
                               │
                    Each returns findings as task result
    │
    ▼
PHASE 3: SYNTHESIS                         (main session)
    Deduplicate + consolidate judge outputs into:
    ├── PROBLEMS  (what's wrong, with evidence + severity)
    ├── PROPOSALS (specific rewrites / additions / cuts)
    └── MISSING   (gaps that need new content)
    │
    ▼
PHASE 4: PRESENT                           (main session, interactive)
    Per proposal: Accept / Modify / I'll write it / Skip / Iterate
    │
    ├── Accept/Modify → PHASE 5
    ├── I'll write it → wait for user
    ├── Skip → next proposal
    └── Iterate → back to PHASE 2 for this section
    │
    ▼
PHASE 5: APPLY                             (main session)
    Edit file with accepted changes
    │
    ▼
PHASE 6: NEXT SECTION                      (loop)
    Repeat Phase 2-5 for next section in scope
    After all sections → summary of changes

Invocation

Conversational or via command:

/cyber-writing <content-file> [--vision <file>] [--sources <file1> <file2> ...]

Arguments:

  • content-file: Path to the content being critiqued (required)
  • --vision: Vision/goal document for the project (optional but recommended)
  • --sources: Source files the content was built from (optional, for cross-referencing)

Examples:

/cyber-writing ~/path/to/guidebook.md --vision ~/path/to/VISION.md
/cyber-writing ~/path/to/essay.md --sources ~/path/to/research.md ~/path/to/notes.md

If arguments are missing, ask the user for the content file path and whether there's a vision doc.
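
For illustration, the argument schema maps onto a standard CLI parser. This Python sketch is a hypothetical equivalent of the slash-command interface, not the skill's actual implementation:

```python
import argparse

# Hypothetical stand-in for the /cyber-writing argument schema.
parser = argparse.ArgumentParser(prog="/cyber-writing")
parser.add_argument("content_file", help="Path to the content being critiqued (required)")
parser.add_argument("--vision", help="Vision/goal document for the project (optional)")
parser.add_argument("--sources", nargs="*", default=[],
                    help="Source files the content was built from (optional)")

args = parser.parse_args(["guidebook.md", "--vision", "VISION.md",
                          "--sources", "research.md", "notes.md"])
print(args.content_file, args.vision, args.sources)
```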


Phase 1: Ingest

  1. Read content file — the document being critiqued
  2. Read vision doc (if provided) — the project's goal, audience, acceptance criteria
  3. Read source files (if provided) — materials the content was built from
  4. Parse sections — split by ## headings. Each ## heading = one section
  5. Present section list to user with line counts
  6. User picks scope:
    • all — critique every section sequentially
    • Section numbers — e.g., "1.1, 1.3, 2.4"
    • Range — e.g., "Part 2" or "sections 2.1-2.6"
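
Step 4's split-by-heading rule can be sketched in Python. This is a minimal, hypothetical helper; the real skill performs this parsing inside the agent session:

```python
import re

def parse_sections(markdown_text):
    """Split a Markdown document at '## ' headings.

    Returns (title, line_count) pairs, mirroring the section list
    with line counts that Phase 1 presents to the user.
    """
    sections = []
    title, lines = None, []
    for line in markdown_text.splitlines():
        if re.match(r"^## ", line):
            if title is not None:
                sections.append((title, len(lines)))
            title, lines = line[3:].strip(), []
        elif title is not None:
            lines.append(line)
    if title is not None:
        sections.append((title, len(lines)))
    return sections

doc = "# Guide\n\n## Intro\nOne line.\n\n## Details\nA\nB\n"
print(parse_sections(doc))  # [('Intro', 2), ('Details', 2)]
```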

Write Handoff Brief

Write /tmp/writing-brief.md containing:

```markdown
# Writing Critique Brief

## Project
[1-2 sentences: what this content is, who it's for]

## Vision
[Summary of vision doc, or "No vision doc provided"]

## Acceptance Criteria
- Practical: reader can act on this immediately
- Data: claims use late-2025 or 2026 data with named sources
- Language: sharp, concise, no marketing fluff, no LLM-isms
- Substance: genuine insight, not restated common knowledge
- Complete: important angles covered, nothing excess

## Source Files
[List of source file paths, or "No source files provided"]

## Full Document TOC
[Table of contents with section headings and line numbers]
```

Phase 2: Critique (Parallel Subagents)

For each section in scope, dispatch 5 judge subagents simultaneously via the Task tool.

Judge Dispatch

Each judge:

  • subagent_type: general-purpose
  • model: sonnet (fast, cheaper, sufficient for critique)
  • Prompt: Built from template at prompts/judge.md with role-specific parameters from the dispatch table below
  • Context provided in prompt: The section text (inline), the brief contents (inline), path to full document, paths to source files
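
For intuition, the five-way fan-out resembles a thread-pool dispatch. This sketch assumes a placeholder run_judge function; the actual skill dispatches subagents through the Task tool:

```python
from concurrent.futures import ThreadPoolExecutor

JUDGES = ["practical", "data_freshness", "language", "substance", "completeness"]

def run_judge(role, section_text, brief):
    # Placeholder for the real Task-tool dispatch: each judge receives
    # the section and brief inline and returns its findings.
    return {"role": role, "findings": [f"{role}: reviewed {len(section_text)} chars"]}

def critique_section(section_text, brief):
    # Dispatch all five judges at once, mirroring the parallel fan-out.
    with ThreadPoolExecutor(max_workers=len(JUDGES)) as pool:
        futures = [pool.submit(run_judge, role, section_text, brief)
                   for role in JUDGES]
        return [f.result() for f in futures]

results = critique_section("Some section text.", "brief contents")
print(len(results))  # 5
```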

Dispatch Configuration

| # | Judge Role | Key Question | Criteria |
|---|------------|--------------|----------|
| 1 | Practical Judge | "Would a CTO, COO, or engineering lead change their behavior after reading this section?" | Actionable advice with specific steps. Named tools, frameworks, or approaches. Clear "do this, not that" guidance. Concrete examples, not abstract principles. |
| 2 | Data Freshness Judge | "Is every claim backed by current (late-2025 or 2026) data from a named source?" | Every statistic has a named company/study and year. No "research shows" without citation. Data points are from 2025-2026 where possible. Pre-2024 data flagged as potentially outdated. Source annotations (`<!-- Source: ... -->`) match the claims. |
| 3 | Language Judge | "Is every sentence sharp, concise, and free of bullshit?" | No LLM-isms (see anti-pattern list). No marketing fluff or hollow declaratives. No hedging ("it could be argued", "one might say"). No filler transitions ("Moreover", "Furthermore"). Every sentence earns its place. Read-aloud test: sounds like a person talking, not a press release. |
| 4 | Substance Judge | "Is there actual insight here or just restated common knowledge?" | Original framing or analysis, not Wikipedia summaries. Specific mechanisms explained (HOW something works, not just THAT it exists). Counterarguments or nuances acknowledged. Would an expert learn something, or just nod along? |
| 5 | Completeness Judge | "What important angles are missing? What's said too much?" | Key counterarguments or risks not addressed. Important real-world examples missing. Sections that repeat content from other sections. Paragraphs that could be cut without losing information. Cross-references to other sections that should exist. |

Anti-Pattern List (provided to Language Judge)

Instant-fail words/phrases:

  • delve, leverage, tapestry, landscape, paradigm, synergy, holistic
  • revolutionary, transformative, game-changing, cutting-edge, groundbreaking
  • "It's important to note that", "It's worth mentioning"
  • "In today's rapidly evolving", "In an era of"
  • "This isn't X. It's Y." (hollow inversion pattern)
  • "But here's the thing", "Here's the turn"
  • "The implications are profound", "The opportunity is clear"
  • Moreover, Furthermore, Additionally, Consequently
  • "Let's delve into", "Let's explore", "Let's unpack"

Structural anti-patterns:

  • Paragraphs longer than 4 sentences
  • Vague attribution ("they say", "experts agree", "studies show")
  • Dramatic mic-drop endings ("This isn't the future. It's the present.")
  • Unnecessary qualifiers ("very", "really", "extremely", "incredibly")
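
A crude automated pre-check for the instant-fail list can be written as simple substring matching. This is an illustrative sketch with an abbreviated phrase list, not part of the skill itself:

```python
# Abbreviated sample of the instant-fail list above.
INSTANT_FAIL = [
    "delve", "leverage", "tapestry", "game-changing",
    "it's important to note that", "in today's rapidly evolving",
    "moreover", "furthermore",
]

def flag_antipatterns(text):
    """Return every instant-fail phrase found in text, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in INSTANT_FAIL if phrase in lowered]

sentence = "Moreover, we leverage a rich tapestry of tools."
print(flag_antipatterns(sentence))  # ['leverage', 'tapestry', 'moreover']
```

Plain substring matching will occasionally flag a phrase inside a longer word; the Language Judge's prompt-based review avoids that, which is one reason the skill uses a judge rather than a lint rule.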

Phase 3: Synthesis

After all 5 judges return for a section:

  1. Read all judge outputs
  2. Deduplicate — multiple judges flagging the same issue consolidates into one finding with combined evidence
  3. Classify each finding:
    • PROBLEM: Something wrong with existing text (with severity: Critical / Important / Minor)
    • PROPOSAL: A specific rewrite, addition, or cut (with the exact new text or description of change)
    • MISSING: A gap that needs new content (with description of what should be added)
  4. Rank by severity: Critical first, then Important, then Minor
  5. Write synthesis to /tmp/writing-synthesis-{section-id}.md
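
Step 2's consolidation can be sketched as grouping findings by issue. The data shape here is hypothetical; the real synthesis is written by the agent in prose:

```python
def deduplicate(findings):
    """Merge findings that flag the same issue, combining judges and evidence."""
    merged = {}
    for f in findings:
        entry = merged.setdefault(
            f["issue"], {"issue": f["issue"], "judges": [], "evidence": []}
        )
        entry["judges"].append(f["judge"])
        entry["evidence"].append(f["evidence"])
    return list(merged.values())

raw = [
    {"issue": "vague claim", "judge": "Practical", "evidence": "line 12"},
    {"issue": "vague claim", "judge": "Substance", "evidence": "line 12"},
    {"issue": "stale data", "judge": "Data Freshness", "evidence": "line 30"},
]
consolidated = deduplicate(raw)
print(len(consolidated))  # 2
```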

Synthesis Format

```markdown
## Section [X.X]: [Title]

### Critical
1. **[Problem]** — [Judge(s)]
   Evidence: [quote from text]
   Proposal: [specific change]

### Important
1. **[Problem]** — [Judge(s)]
   Evidence: [quote from text]
   Proposal: [specific change]

### Minor
1. **[Problem]** — [Judge(s)]
   Proposal: [specific change]

### Missing
1. **[Gap description]** — [Judge(s)]
   Suggestion: [what to add and where]
```

Phase 4: Present to User

Present the synthesis section-by-section. For each finding, offer choices via AskUserQuestion:

For PROBLEMS with PROPOSALS:

PROBLEM: [description with evidence]
PROPOSAL: [specific change]

Options:
1. Accept — apply this edit
2. Modify — "good direction but..." (user refines)
3. I'll write it — user will edit this themselves
4. Skip — leave as-is

For MISSING items:

MISSING: [gap description]
SUGGESTION: [what to add]

Options:
1. Draft it — generate the missing content for my review
2. I'll write it — user will add this themselves
3. Skip — not needed

After all findings for a section:

Options:
1. Move to next section
2. Re-critique this section (after user's manual edits)
3. Done — stop here

Interaction Rules

  • Don't batch-apply blindly — present critical and important findings one at a time; minor findings may be batched
  • Show diffs — when proposing changes, show the before/after clearly
  • Accept user rewrites — if user provides their own text, use it exactly
  • Track changes — maintain a running list of all accepted changes for the session summary
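
The "Show diffs" rule maps naturally onto a unified diff. A minimal sketch using Python's difflib (illustrative only; the skill presents diffs conversationally):

```python
import difflib

def render_diff(before, after, path="section.md"):
    """Render a proposed edit as a unified before/after diff."""
    return "".join(difflib.unified_diff(
        before.splitlines(keepends=True),
        after.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    ))

before = "The implications are profound.\n"
after = "Teams that adopt this cut review time in half.\n"
print(render_diff(before, after))
```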

Phase 5: Apply

For accepted changes:

  1. Use the Edit tool to apply the change to the content file
  2. Confirm the edit was applied
  3. Move to next finding

For "Draft it" on missing items:

  1. Generate the content in the response
  2. Show it to the user for approval
  3. If approved, use Edit or Write to insert it at the appropriate location

Phase 6: Loop + Summary

After completing all sections in scope:

```markdown
## Session Summary

**Sections reviewed**: [list]
**Changes applied**: [count]
**Items skipped**: [count]
**Items for user to write**: [count]

### Changes Made
1. Section X.X: [brief description of change]
2. Section X.X: [brief description of change]
...

### Still Open
1. Section X.X: [user said they'd write this]
2. Section X.X: [gap identified, skipped]
```

Key Principles

  1. Never edit without user approval — propose, never impose
  2. Judges are brutally honest — no praise filler, just problems and proposals
  3. Every critique has evidence — quote the specific text that's problematic
  4. Proposals are concrete — "rewrite X as Y", never "could be improved"
  5. Section-by-section — one section fully resolved before moving to next
  6. Full context to subagents — each judge gets vision doc, full document TOC, and the section text
  7. Model efficiency — judges run on sonnet (fast, sufficient for critique)

Files

| File | Purpose |
|------|---------|
| SKILL.md | This file — architecture and workflow |
| prompts/judge.md | Subagent prompt template for all 5 judges |

Handoff Files

| File | Written By | Read By |
|------|------------|---------|
| /tmp/writing-brief.md | Main (Phase 1) | All judges |
| /tmp/writing-synthesis-{section}.md | Main (Phase 3) | Main (Phase 4), preserved for context |

Commands

| Command | Description |
|---------|-------------|
| /cyber-writing | Main entry point — critique and improve content |

Related Skills

Looking for an alternative to Writing or another community skill for your workflow? Explore these related open-source skills.

View All

  • openclaw-release-maintainer (openclaw): Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f): Generates customizable widget plugins for the prompts.chat feed system
  • flags (vercel): The React framework
  • pr-review (pytorch): Tensors and dynamic neural networks in Python with strong GPU acceleration