research-review (community IDE skill)

v1.0.0

About this Skill

Iterative review of research documents using the Rule of 5. Ideal for research agents that need thorough, repeatable document review and analysis.

charly-vibes
Updated: 3/20/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 7/11

This page remains useful for teams, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Locale and body language aligned
Review Score: 7/11
Quality Score: 34
Canonical Locale: en
Detected Body Locale: en


Core Value

Empowers agents to perform thorough research document reviews using iterative refinement until convergence, leveraging the Rule of 5 for accuracy and sources, and streamlining the review process with automated passes.

Ideal Agent Persona

Perfect for Research Agents needing iterative document review and analysis using the Rule of 5.

Capabilities Granted for research-review

Automating research document review using the Rule of 5
Generating iterative refinement reports for research papers
Streamlining the review process for AI-driven development

! Prerequisites & Limits

  • Requires research document path or list of available documents
  • Limited to 5 iterative passes

Why this page is reference-only

  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should do more than surface repository instructions: it should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move on to workflow rollout.

Labs Demo

Browser Sandbox Environment


Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is research-review?

research-review provides iterative review of research documents using the Rule of 5. It is aimed at research agents that need thorough, repeatable document review and analysis.

How do I install research-review?

Run the command: npx killer-skills add charly-vibes/wai/research-review. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for research-review?

Key use cases include: Automating research document review using the Rule of 5, Generating iterative refinement reports for research papers, Streamlining the review process for AI-driven development.

Which IDEs are compatible with research-review?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for research-review?

Requires research document path or list of available documents. Limited to 5 iterative passes.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add charly-vibes/wai/research-review. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use research-review immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

research-review

Install research-review, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Supporting Evidence

Iterative Research Review (Rule of 5)

Perform thorough research document review using the Rule of 5 - iterative refinement until convergence.

Setup

If research document path provided: Read the document completely

If no path: Ask for the research document path or list available research documents

Process

Perform 5 passes, each focusing on different aspects. After each pass (starting with pass 2), check for convergence.

PASS 1 - Accuracy & Sources

Focus on:

  • Claims backed by evidence
  • Source credibility and recency
  • Correct interpretation of sources
  • Factual accuracy of technical details
  • Version/date relevance (is information outdated?)
  • Code references are correct (file:line exist and match claim)

Output format:

PASS 1: Accuracy & Sources

Issues Found:

[ACC-001] [CRITICAL|HIGH|MEDIUM|LOW] - Section/Paragraph
Description: [What's inaccurate or unsourced]
Evidence: [Why this is problematic]
Recommendation: [How to fix with specific guidance]

[ACC-002] ...

What to look for:

  • "This works by..." without code reference
  • Claims about codebase without verification
  • Outdated information (library versions, deprecated APIs)
  • Misinterpretation of source code
  • Assumptions presented as facts
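One mechanical part of this pass, confirming that a cited file:line reference actually exists, can be sketched in a few lines. This is an illustrative helper, not part of the skill itself; the function name and `repo_root` parameter are assumptions:

```python
# Hypothetical helper for Pass 1: verify that a "path:line" reference
# cited in a research document points at a real file with enough lines.
from pathlib import Path

def code_reference_exists(ref: str, repo_root: str = ".") -> bool:
    """Return True if a reference like 'src/app.py:42' names an existing
    file that has at least that many lines."""
    path_part, _, line_part = ref.rpartition(":")
    if not path_part or not line_part.isdigit():
        return False  # malformed reference, e.g. missing ":line"
    path = Path(repo_root) / path_part
    if not path.is_file():
        return False
    with path.open(encoding="utf-8", errors="replace") as fh:
        return sum(1 for _ in fh) >= int(line_part)
```

Note that this only checks existence; whether the cited line actually supports the claim still requires reading it, which is the substantive work of the pass.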

PASS 2 - Completeness & Scope

Focus on:

  • Missing important topics or considerations
  • Unanswered questions that should be addressed
  • Gaps in the analysis
  • Scope creep (irrelevant tangents)
  • Depth appropriate for the topic
  • All research questions answered

Prefix: COMP-001, COMP-002, etc.

What to look for:

  • Research question asked but not answered
  • Obvious related topics not explored
  • Shallow treatment of complex topics
  • Too much detail on tangential topics
  • "Further research needed" without follow-through

PASS 3 - Clarity & Structure

Focus on:

  • Logical flow and organization
  • Clear definitions of terms
  • Appropriate headings and sections
  • Readability for target audience
  • Jargon explained or avoided
  • Consistent terminology

Prefix: CLAR-001, CLAR-002, etc.

What to look for:

  • Jumping between topics without transitions
  • Technical terms used without definition
  • Conclusions before supporting evidence
  • Redundant sections
  • Confusing or ambiguous language

PASS 4 - Actionability & Conclusions

Focus on:

  • Clear takeaways and recommendations
  • Conclusions supported by the research
  • Practical applicability to the project
  • Trade-offs clearly articulated
  • Next steps identified
  • Decision-making guidance provided

Prefix: ACT-001, ACT-002, etc.

What to look for:

  • Research without recommendations
  • Conclusions that don't follow from findings
  • "Interesting but..." without actionable insight
  • Missing implementation guidance
  • No clear "what should we do?"

PASS 5 - Integration & Context

Focus on:

  • Alignment with existing research
  • Connections to specs and requirements
  • Relevance to current project goals
  • Contradictions with established decisions
  • Impact on existing plans
  • References to related work

Prefix: INT-001, INT-002, etc.

What to look for:

  • Contradicts previous research without acknowledgment
  • Ignores existing patterns in codebase
  • Doesn't reference related specs or docs
  • Recommendations conflict with project direction
  • Missing cross-references

Convergence Check

After each pass (starting with pass 2), report:

Convergence Check After Pass [N]:

1. New CRITICAL issues: [count]
2. Total new issues this pass: [count]
3. Total new issues previous pass: [count]
4. Estimated false positive rate: [percentage]

Status: [CONVERGED | ITERATE | NEEDS_HUMAN]

Convergence criteria:

  • CONVERGED: No new CRITICAL, <10% new issues vs previous pass, <20% false positives
  • ITERATE: Continue to next pass
  • NEEDS_HUMAN: Found blocking issues requiring human judgment

If CONVERGED before Pass 5: Stop and report final findings.
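Expressed as code, the convergence criteria read roughly like this. A sketch only: the thresholds mirror the list above, while the function name, signature, and the handling of a zero-count previous pass are assumptions:

```python
# Illustrative encoding of the convergence criteria:
# CONVERGED = no new CRITICAL, <10% new issues vs previous pass,
# and <20% estimated false positives; NEEDS_HUMAN overrides both.
def convergence_status(new_critical: int,
                       new_issues: int,
                       prev_new_issues: int,
                       false_positive_rate: float,
                       blocking_needs_human: bool = False) -> str:
    if blocking_needs_human:
        return "NEEDS_HUMAN"
    if prev_new_issues > 0:
        new_ratio = new_issues / prev_new_issues
    else:
        # Assumed convention: no baseline means converged only if
        # this pass also found nothing new.
        new_ratio = 0.0 if new_issues == 0 else 1.0
    if new_critical == 0 and new_ratio < 0.10 and false_positive_rate < 0.20:
        return "CONVERGED"
    return "ITERATE"
```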

Final Report

After convergence or completing all passes:

## Research Review Final Report

**Research:** [path/to/research.md]

### Summary

Total Issues by Severity:
- CRITICAL: [count] - Must fix before using research
- HIGH: [count] - Should fix before using research
- MEDIUM: [count] - Consider addressing
- LOW: [count] - Nice to have

Convergence: Pass [N]

### Top 3 Most Critical Findings

1. [ACC-001] [Description] - Section [N]
   Impact: [Why this matters]
   Fix: [What to do]

2. [COMP-003] [Description] - Section [N]
   Impact: [Why this matters]
   Fix: [What to do]

3. [ACT-002] [Description] - Conclusions
   Impact: [Why this matters]
   Fix: [What to do]

### Recommended Revisions

1. [Action 1 - specific and actionable]
2. [Action 2 - specific and actionable]
3. [Action 3 - specific and actionable]

### Verdict

[READY | NEEDS_REVISION | NEEDS_MORE_RESEARCH]

**Rationale:** [1-2 sentences explaining the verdict]

### Research Quality Assessment

- **Accuracy**: [Excellent|Good|Fair|Poor] - [brief comment]
- **Completeness**: [Excellent|Good|Fair|Poor] - [brief comment]
- **Actionability**: [Excellent|Good|Fair|Poor] - [brief comment]
- **Clarity**: [Excellent|Good|Fair|Poor] - [brief comment]
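The Summary's severity counts can be tallied mechanically from the issue lines the passes emit. This sketch assumes issues appear on lines shaped like the Pass 1 output format, `[ID] [SEVERITY] - Section`; the regex and function name are illustrative:

```python
import re
from collections import Counter

# Match issue lines such as "[ACC-001] [CRITICAL] - Section 2",
# capturing the issue ID and its severity.
ISSUE_RE = re.compile(r"\[([A-Z]+-\d+)\]\s+\[(CRITICAL|HIGH|MEDIUM|LOW)\]")

def severity_counts(report_text: str) -> Counter:
    """Count issues per severity across all passes in a review transcript."""
    return Counter(m.group(2) for m in ISSUE_RE.finditer(report_text))
```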

Rules

  1. Be specific - Reference sections/paragraphs, provide file:line for code claims
  2. Verify claims - Actually check code references and factual statements
  3. Validate actionability - Research should drive decisions, not just inform
  4. Prioritize correctly:
    • CRITICAL: Factually wrong or misleading
    • HIGH: Significant gaps or unclear conclusions
    • MEDIUM: Could be clearer or more complete
    • LOW: Minor improvements
  5. If converged before pass 5 - Stop and report, don't continue needlessly

Related Skills

Looking for an alternative to research-review or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw): Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f): Generate customizable widget plugins for the prompts.chat feed system
  • flags (vercel): The React Framework
  • pr-review (pytorch): Tensors and Dynamic neural networks in Python with strong GPU acceleration