Killer-Skills

sop-code-review — a comprehensive code review workflow for AI agents

v1.0.0
GitHub

About this Skill

sop-code-review is a code review workflow that uses parallel automated testing and specialized reviewers to ensure high-quality code, following a four-phase process: automated checks, specialized reviews, integration review, and final approval. It is well suited to development agents that need a structured review workflow with automated checks and integration reviews.

Features

Runs automated tests in parallel via JavaScript
Employs a coordinator pattern with a 'star' topology for the review swarm
Includes a 30-minute automated-checks phase for quick quality gates
Supports specialized reviews for different quality aspects (quality, security, performance, architecture, documentation)
Conducts a 1-hour integration review assessing end-to-end impact
Ends with a 30-minute final-approval phase for verified code

Author: DNYoussef
Updated: 3/6/2026

Quality Score

60 — Excellent (Top 5%), based on code quality & docs
Installation

Universal install (auto-detects Cursor, Windsurf, and VS Code):

> npx killer-skills add DNYoussef/ai-chrome-extension/sop-code-review

Agent Capability Analysis

The sop-code-review MCP server by DNYoussef is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Perfect for Development Agents needing structured code review workflows with automated checks and integration reviews.

Core Value

Empowers agents to perform comprehensive code reviews: specialized reviewers cover distinct quality aspects, automated tests run in parallel via JavaScript, and a 'star' coordinator topology ties the review swarm together.

Capabilities Granted for sop-code-review MCP Server

Automating code quality checks with parallel automated testing
Conducting specialized reviews for distinct quality aspects
Performing integration reviews for cohesive codebase validation

Prerequisites & Limits

  • Requires 4 hours for complete workflow execution
  • Dependent on JavaScript for automated testing scripts
  • Limited to specific quality aspects defined by specialized reviewers
Project files: SKILL.md (12.4 KB), .cursorrules (1.2 KB), package.json (240 B)

SKILL.md

SOP: Code Review Workflow

Comprehensive code review using specialized reviewers for different quality aspects.

Timeline: 4 Hours

Phases:

  1. Automated Checks (30 min)
  2. Specialized Reviews (2 hours)
  3. Integration Review (1 hour)
  4. Final Approval (30 min)

Phase 1: Automated Checks (30 minutes)

Quick Quality Checks

Parallel Automated Testing:

```javascript
// Initialize review swarm
await mcp__ruv-swarm__swarm_init({
  topology: 'star',       // Coordinator pattern for reviews
  maxAgents: 6,
  strategy: 'specialized'
});

// Run all automated checks in parallel
const [lint, tests, coverage, build] = await Promise.all([
  Task("Linter", `
Run linting checks:
- ESLint for JavaScript/TypeScript
- Pylint for Python
- RuboCop for Ruby
- Check for code style violations

Store results: code-review/${prId}/lint-results
`, "reviewer"),

  Task("Test Runner", `
Run test suite:
- Unit tests
- Integration tests
- E2E tests (if applicable)
- All tests must pass

Store results: code-review/${prId}/test-results
`, "tester"),

  Task("Coverage Analyzer", `
Check code coverage:
- Overall coverage > 80%
- New code coverage > 90%
- No critical paths uncovered

Generate coverage report
Store: code-review/${prId}/coverage-report
`, "reviewer"),

  Task("Build Validator", `
Validate build:
- Clean build (no warnings)
- Type checking passes
- No broken dependencies
- Bundle size within limits

Store build results: code-review/${prId}/build-status
`, "reviewer")
]);

// If any automated check fails, stop and request fixes
if (hasFailures([lint, tests, coverage, build])) {
  await Task("Review Coordinator", `
Automated checks failed. Request fixes from author:
${summarizeFailures([lint, tests, coverage, build])}

Store feedback: code-review/${prId}/automated-feedback
`, "pr-manager");
  return; // Stop review until fixed
}
```

Deliverables:

  • All automated checks passing
  • Test results documented
  • Coverage report generated

Phase 2: Specialized Reviews (2 hours)

Parallel Expert Reviews

Sequential coordination of parallel reviews:

```javascript
// Spawn specialized reviewers in parallel
const [codeQuality, security, performance, architecture, docs] = await Promise.all([
  Task("Code Quality Reviewer", `
Review for code quality:

**Readability**:
- Clear, descriptive names (variables, functions, classes)
- Appropriate function/method length (< 50 lines)
- Logical code organization
- Minimal cognitive complexity

**Maintainability**:
- DRY principle (no code duplication)
- SOLID principles followed
- Clear separation of concerns
- Proper error handling

**Best Practices**:
- Following language idioms
- Proper use of design patterns
- Appropriate comments (why, not what)
- No code smells (magic numbers, long parameter lists)

Store review: code-review/${prId}/quality-review
Rating: 1-5 stars
`, "code-analyzer"),

  Task("Security Reviewer", `
Review for security issues:

**Authentication & Authorization**:
- Proper authentication checks
- Correct authorization rules
- No privilege escalation risks
- Secure session management

**Data Security**:
- Input validation (prevent injection attacks)
- Output encoding (prevent XSS)
- Sensitive data encryption
- No hardcoded secrets or credentials

**Common Vulnerabilities** (OWASP Top 10):
- SQL Injection prevention
- XSS prevention
- CSRF protection
- Secure dependencies (no known vulnerabilities)

Store review: code-review/${prId}/security-review
Severity: Critical/High/Medium/Low for each finding
`, "security-manager"),

  Task("Performance Reviewer", `
Review for performance issues:

**Algorithmic Efficiency**:
- Appropriate time complexity (no unnecessary O(n²))
- Efficient data structures chosen
- No unnecessary iterations
- Lazy loading where appropriate

**Resource Usage**:
- No memory leaks
- Proper cleanup (connections, files, timers)
- Efficient database queries (avoid N+1)
- Batch operations where possible

**Optimization Opportunities**:
- Caching potential
- Parallelization opportunities
- Database index needs
- API call optimization

Store review: code-review/${prId}/performance-review
Impact: High/Medium/Low for each finding
`, "perf-analyzer"),

  Task("Architecture Reviewer", `
Review for architectural consistency:

**Design Patterns**:
- Follows established patterns in codebase
- Appropriate abstraction level
- Proper dependency injection
- Clean architecture principles

**Integration**:
- Fits well with existing code
- No unexpected side effects
- Backward compatibility maintained
- API contracts respected

**Scalability**:
- Design supports future growth
- No hardcoded limits
- Stateless where possible
- Horizontally scalable

Store review: code-review/${prId}/architecture-review
Concerns: Blocker/Major/Minor for each finding
`, "system-architect"),

  Task("Documentation Reviewer", `
Review documentation:

**Code Documentation**:
- Public APIs documented (JSDoc/docstring)
- Complex logic explained
- Non-obvious behavior noted
- Examples provided where helpful

**External Documentation**:
- README updated (if needed)
- API docs updated (if API changed)
- Migration guide (if breaking changes)
- Changelog updated

**Tests as Documentation**:
- Test names are descriptive
- Test coverage demonstrates usage
- Edge cases documented in tests

Store review: code-review/${prId}/docs-review
Completeness: 0-100%
`, "api-docs")
]);

// Aggregate all reviews
await Task("Review Aggregator", `
Aggregate specialized reviews:
- Quality: ${codeQuality}
- Security: ${security}
- Performance: ${performance}
- Architecture: ${architecture}
- Documentation: ${docs}

Identify:
- Blocking issues (must fix before merge)
- High-priority suggestions
- Nice-to-have improvements

Generate summary
Store: code-review/${prId}/aggregated-review
`, "reviewer");
```
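The aggregator's three buckets (blocking, high-priority, nice-to-have) amount to a severity mapping. A hedged sketch, assuming findings carry the severity labels the specialized reviewers use (`Critical`/`Blocker`, `High`/`Major`, everything else lower); the function name and finding shape are hypothetical:

```javascript
// Hypothetical bucketing of aggregated findings by severity label.
// The finding shape { source, severity, note } is an assumption.
function bucketFindings(findings) {
  const buckets = { blocking: [], highPriority: [], niceToHave: [] };
  for (const f of findings) {
    if (f.severity === "Critical" || f.severity === "Blocker") {
      buckets.blocking.push(f);        // must fix before merge
    } else if (f.severity === "High" || f.severity === "Major") {
      buckets.highPriority.push(f);    // strong suggestions
    } else {
      buckets.niceToHave.push(f);      // optional improvements
    }
  }
  return buckets;
}

// Example findings from three reviewers
const findings = [
  { source: "security", severity: "Critical", note: "Unvalidated input in login handler" },
  { source: "performance", severity: "High", note: "N+1 query in order listing" },
  { source: "docs", severity: "Low", note: "Missing JSDoc on exported helper" },
];
const buckets = bucketFindings(findings);
```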

Deliverables:

  • 5 specialized reviews completed
  • Issues categorized by severity
  • Aggregated review summary

Phase 3: Integration Review (1 hour)

End-to-End Impact Assessment

Sequential Analysis:

```javascript
// Step 1: Integration Testing
await Task("Integration Tester", `
Test integration with existing system:
- Does this change break any existing functionality?
- Are all integration tests passing?
- Does it play well with related modules?
- Any unexpected side effects?

Run integration test suite
Store results: code-review/${prId}/integration-tests
`, "tester");

// Step 2: Deployment Impact
await Task("DevOps Reviewer", `
Assess deployment impact:
- Infrastructure changes needed?
- Database migrations required?
- Configuration updates needed?
- Backward compatibility maintained?
- Rollback plan clear?

Store assessment: code-review/${prId}/deployment-impact
`, "cicd-engineer");

// Step 3: User Impact
await Task("Product Reviewer", `
Assess user impact:
- Does this change improve user experience?
- Are there any user-facing changes?
- Is UX/UI consistent with design system?
- Are analytics/tracking updated?

Store assessment: code-review/${prId}/user-impact
`, "planner");

// Step 4: Risk Assessment
await Task("Risk Analyzer", `
Overall risk assessment:
- What's the blast radius of this change?
- What's the worst-case failure scenario?
- Do we have rollback procedures?
- Should this be feature-flagged?
- Monitoring and alerting adequate?

Store risk assessment: code-review/${prId}/risk-analysis
Recommendation: Approve/Conditional/Reject
`, "reviewer");
```

Deliverables:

  • Integration test results
  • Deployment impact assessment
  • User impact assessment
  • Risk analysis

Phase 4: Final Approval (30 minutes)

Review Summary & Decision

Sequential Finalization:

```javascript
// Step 1: Generate Final Summary
await Task("Review Coordinator", `
Generate final review summary:

**Automated Checks**: ✅ All passing
**Quality Review**: ${qualityScore}/5
**Security Review**: ${securityIssues} issues (${criticalCount} critical)
**Performance Review**: ${perfIssues} issues (${highImpactCount} high-impact)
**Architecture Review**: ${archConcerns} concerns (${blockerCount} blockers)
**Documentation Review**: ${docsCompleteness}% complete
**Integration Tests**: ${integrationStatus}
**Deployment Impact**: ${deploymentImpact}
**User Impact**: ${userImpact}
**Risk Level**: ${riskLevel}

**Blocking Issues**:
${listBlockingIssues()}

**Recommendations**:
${generateRecommendations()}

**Overall Decision**: ${decision} (Approve/Request Changes/Reject)

Store final summary: code-review/${prId}/final-summary
`, "pr-manager");

// Step 2: Author Notification
await Task("Notification Agent", `
Notify PR author:
- Review complete
- Summary of findings
- Action items (if any)
- Next steps

Send notification
Store: code-review/${prId}/author-notification
`, "pr-manager");

// Step 3: Decision Actions
if (decision === 'Approve') {
  await Task("Merge Coordinator", `
Approved for merge:
- Add "approved" label
- Update PR status
- Queue for merge (if auto-merge enabled)
- Notify relevant teams

Store: code-review/${prId}/merge-approval
`, "pr-manager");
} else if (decision === 'Request Changes') {
  await Task("Feedback Coordinator", `
Request changes:
- Create detailed feedback comment
- Label as "changes-requested"
- Assign back to author
- Schedule follow-up review

Store: code-review/${prId}/change-request
`, "pr-manager");
} else {
  await Task("Rejection Handler", `
Reject PR:
- Create detailed explanation
- Suggest alternative approaches
- Label as "rejected"
- Close PR (or request fundamental rework)

Store: code-review/${prId}/rejection
`, "pr-manager");
}
```
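The summary template interpolates a `decision` variable the SOP never derives explicitly. One possible mechanical derivation from the blocker and critical-issue counts collected in Phase 2 — the function name and inputs are hypothetical, and a real coordinator would weigh more signals than these:

```javascript
// Hypothetical decision rule; a real coordinator would consider more inputs
// (risk level, deployment impact, documentation completeness, ...).
function deriveDecision({ blockerCount, criticalCount, fundamentalRework = false }) {
  if (fundamentalRework) return "Reject";                            // design itself is unsound
  if (blockerCount > 0 || criticalCount > 0) return "Request Changes"; // fixable issues block merge
  return "Approve";                                                  // all gates clear
}

const decision = deriveDecision({ blockerCount: 0, criticalCount: 0 });
```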

Deliverables:

  • Final review summary
  • Author notification
  • Decision and next steps

Success Criteria

Review Quality

  • Coverage: All aspects reviewed (quality, security, performance, architecture, docs)
  • Consistency: Reviews follow established guidelines
  • Actionability: All feedback is specific and actionable
  • Timeliness: Reviews completed within 4 hours

Code Quality Gates

  • Automated Tests: 100% passing
  • Code Coverage: > 80% overall, > 90% for new code
  • Linting: 0 violations
  • Security: 0 critical issues, 0 high-severity issues
  • Performance: No high-impact performance regressions
  • Documentation: 100% of public APIs documented
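The two coverage gates above are simple threshold checks and can be enforced mechanically; a minimal sketch, where the report field names (`overall`, `newCode`) are assumptions:

```javascript
// Gate check for the coverage thresholds in this SOP:
// overall coverage must exceed 80%, new-code coverage must exceed 90%.
// Field names are hypothetical; adapt to your coverage tool's report shape.
function coverageGatePasses(report) {
  return report.overall > 80 && report.newCode > 90;
}
```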

Process Metrics

  • Review Turnaround: < 4 hours (business hours)
  • Author Satisfaction: > 4/5 (feedback is helpful)
  • Defect Escape Rate: < 1% (issues found in production that should have been caught)

Review Guidelines

What Reviewers Should Focus On

DO Review:

  • Logic correctness
  • Edge case handling
  • Error handling robustness
  • Security vulnerabilities
  • Performance implications
  • Code clarity and maintainability
  • Test coverage and quality
  • API design and contracts
  • Documentation completeness

DON'T Nitpick:

  • Personal style preferences (use automated linting)
  • Minor variable naming (unless truly confusing)
  • Trivial formatting (use automated formatting)
  • Subjective "better" ways (unless significantly better)

Giving Feedback

Effective Feedback:

  • ✅ "This function has O(n²) complexity. Consider using a hash map for O(n)."
  • ✅ "This input isn't validated. Add validation to prevent SQL injection."
  • ✅ "This error isn't logged. Add error logging for debugging."
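To make the first example concrete: intersecting two arrays with nested scans is O(n·m), while a Set membership test brings it to O(n + m). Both functions below are illustrative only, not part of the SOP:

```javascript
// O(n·m): `includes` rescans `b` for every element of `a`
function intersectSlow(a, b) {
  return a.filter((x) => b.includes(x));
}

// O(n + m): build a Set once, then each membership test is O(1) on average
function intersectFast(a, b) {
  const set = new Set(b);
  return a.filter((x) => set.has(x));
}
```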

Ineffective Feedback:

  • ❌ "I don't like this."
  • ❌ "This could be better."
  • ❌ "Change this." (without explanation)

Tone:

  • Be respectful and constructive
  • Assume good intent
  • Ask questions rather than make demands
  • Suggest, don't dictate (unless security/critical issue)

Agent Coordination Summary

  • Total Agents Used: 12-15
  • Execution Pattern: Star topology (coordinator with specialists)
  • Timeline: 4 hours
  • Memory Namespaces: code-review/{pr-id}/*

Key Agents:

  1. reviewer - Lint, build, coordination
  2. tester - Test execution, integration testing
  3. code-analyzer - Code quality review
  4. security-manager - Security review
  5. perf-analyzer - Performance review
  6. system-architect - Architecture review
  7. api-docs - Documentation review
  8. cicd-engineer - Deployment impact
  9. planner - Product/user impact
  10. pr-manager - Review coordination, notifications

Usage

```javascript
// Invoke this SOP skill for a PR
Skill("sop-code-review")

// Or execute with specific PR
Task("Code Review Orchestrator", `
Execute comprehensive code review for PR #${prNumber}
Repository: ${repoName}
Author: ${authorName}
Changes: ${changesSummary}
`, "pr-manager")
```

  • Status: Production-ready SOP
  • Complexity: Medium (12-15 agents, 4 hours)
  • Pattern: Star topology with specialized reviewers
