adversarial

Tags: ai-orchestration, multi-agent-ralph-loop, community, ide-skills, bats-testing, claude-code, code-quality, codex-cli, dynamic-contexts

v1.0.0

About This Skill

Autonomous orchestration framework for Claude Code with MemPalace-inspired memory (4-layer stack, 818-token wake-up), parallel-first Agent Teams (6 teammates), Aristotle First Principles methodology, and 4-stage quality gates. 925+ tests, 22 active hooks, automatic learning pipeline.

# Core Topics

alfredolopez80 · Updated: 4/9/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 3/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

Quality floor passed for review

Review Score: 3/11
Quality Score: 70
Canonical Locale: en
Detected Body Locale: en



Suitable Agent Types

Suitable for operator workflows that need explicit guardrails before installation and execution.

Key Capabilities · adversarial

Usage Limits and Requirements

Why this page is reference-only

  • The current locale does not satisfy the locale-governance contract.
  • The page lacks a strong recommendation layer.
  • The page lacks concrete use-case guidance.
  • The page lacks explicit limitations or caution signals.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Next Steps After the Review

Decide on an action first, then continue to the upstream repository material.

Killer-Skills' primary value should not stop at "opening the repository README for you". It should first help you decide whether this skill is worth installing, whether you should go back to a trusted collection for re-verification, and whether it is ready to be adopted into your workflow.


FAQ and Installation Steps

The questions and steps below mirror the page's structured data, making the content easier for search engines to understand.

FAQ

What is adversarial?

Autonomous orchestration framework for Claude Code with MemPalace-inspired memory (4-layer stack, 818-token wake-up), parallel-first Agent Teams (6 teammates), Aristotle First Principles methodology, and 4-stage quality gates. 925+ tests, 22 active hooks, automatic learning pipeline.

How do I install adversarial?

Run: npx killer-skills add alfredolopez80/multi-agent-ralph-loop/adversarial. 19+ IDEs/Agents are supported, including Cursor, Windsurf, VS Code, and Claude Code.

Which IDEs or Agents does adversarial support?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. It can be installed universally with a single Killer-Skills CLI command.

Installation Steps

  1. Open a terminal

    Open a terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add alfredolopez80/multi-agent-ralph-loop/adversarial. The CLI detects your IDE or AI Agent automatically and completes the configuration.

  3. Start using the skill

    adversarial is now enabled and can be invoked immediately in the current project.

Reference-Page Mode

This page can still be used as an installation and lookup reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review conclusions above first, then decide whether to continue with the upstream repository material.

Upstream Repository Material


Upstream Source

adversarial

Install adversarial, an AI Agent Skill for AI agent workflows and automation. One-click installation with support for Claude Code, Cursor, and Windsurf.

SKILL.md

Adversarial Code Analyzer

Multi-Agent Adversarial Analysis System inspired by ZeroLeaks architecture.

v2.88 Key Changes (MODEL-AGNOSTIC)

  • Model-agnostic: Uses model configured in ~/.claude/settings.json or CLI/env vars
  • No flags required: Works with the configured default model
  • Flexible: Works with GLM-5, Claude, Minimax, or any configured model
  • Settings-driven: Model selection via ANTHROPIC_DEFAULT_*_MODEL env vars

Applies security scanner patterns to code analysis: specialized agents work together systematically to find vulnerabilities, weaknesses, and quality issues.

Architecture

Based on ZeroLeaks multi-agent system adapted for code analysis:

```
             ORCHESTRATOR (Engine)
                    |
    +---------------+---------------+
    |               |               |
STRATEGIST      ATTACKER        EVALUATOR
    |               |               |
    +-------+-------+-------+-------+
                    |
                MUTATOR
```

Agent Roles

| Agent | Role | Focus |
|---|---|---|
| Engine | Orchestrates the analysis, manages exploration tree | Coordination |
| Strategist | Selects analysis strategies based on codebase profile | Strategy |
| Attacker | Generates attack vectors / test cases | Offense |
| Evaluator | Analyzes responses for vulnerabilities | Assessment |
| Mutator | Creates variations of test cases | Variation |
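To make the division of labor concrete, here is a minimal sketch of one analysis turn wiring the four worker roles together. The role names come from the table above; the function signatures, types, and stub implementations are illustrative assumptions, not the skill's actual API.

```typescript
// Hypothetical shapes for a single orchestrator turn.
type TestCase = { technique: string; payload: string };
type Verdict = { testCase: TestCase; score: number }; // 0..1, higher = stronger finding

function runTurn(
  strategist: (profile: string) => string,    // picks a strategy id
  attacker: (strategy: string) => TestCase[], // generates attack vectors
  evaluator: (tc: TestCase) => number,        // scores each response
  mutator: (tc: TestCase) => TestCase[],      // creates variations
): Verdict[] {
  const strategy = strategist("web-api");
  const cases = attacker(strategy);
  // The mutator expands each case into variations before evaluation.
  const expanded = cases.flatMap((tc) => [tc, ...mutator(tc)]);
  return expanded.map((tc) => ({ testCase: tc, score: evaluator(tc) }));
}

// Minimal stub wiring to show the data flow.
const verdicts = runTurn(
  () => "recon_behavioral",
  (s) => [{ technique: "structure_probe", payload: s }],
  (tc) => (tc.technique === "structure_probe" ? 0.4 : 0.1),
  (tc) => [{ ...tc, technique: tc.technique + "_variant" }],
);
```

The Engine would repeat such turns, feeding evaluator scores back into the Strategist's next choice.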

Agent Teams Integration (v2.88)

Optimal Scenario: Integrated (Agent Teams + Custom Subagents)

Adversarial analysis uses Agent Teams coordination with specialized ralph-* agents for multi-vector attack simulation.

Why Scenario C for Adversarial

  • Multi-agent coordination essential (Strategist, Attacker, Evaluator, Mutator)
  • Quality gates validate vulnerability findings
  • Specialized roles map to ralph-* agents
  • Coordinated attack strategy via shared task list

Subagent Roles

| Subagent | Role in Adversarial Analysis |
|---|---|
| ralph-reviewer | Striker - Identifies vulnerabilities |
| ralph-researcher | Strategist - Maps attack surface |
| ralph-coder | Evaluator - Creates test cases |

Parallel Attack Analysis

When Agent Teams is active:

  1. Team Lead orchestrates multi-vector attack analysis
  2. ralph-reviewer identifies security weaknesses in parallel
  3. ralph-researcher maps codebase attack surface
  4. ralph-coder generates proof-of-concept tests

Agent Teams Workflow

  • Uses TeamCreate for coordinated attack analysis
  • Task coordination tracks vulnerability findings
  • TeammateIdle triggers cross-validation of discoveries

Aristotle Integration (v3.0)

Before adversarial analysis begins, apply Aristotle Phase 1 (Assumption Autopsy):

  • What security assumptions are we inheriting from the framework?
  • Are we testing the right attack surface, or the obvious one?
  • What would an attacker assume about our defenses?

The Irreducible Truths (Phase 2) become the invariants that adversarial testing validates.

Usage

```bash
/adversarial src/auth/
/adversarial --target security src/api/
/adversarial --depth 5 --branches 4 src/
```

Analysis Phases

Follows ZeroLeaks phased methodology:

1. RECONNAISSANCE   -> Understand codebase structure, dependencies
2. PROFILING        -> Build defense profile (patterns, safeguards)
3. SOFT_PROBE       -> Gentle analysis attempts
4. ESCALATION       -> Increase analysis intensity
5. EXPLOITATION     -> Active vulnerability search
6. PERSISTENCE      -> Verify findings persist across scenarios
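One plausible way to read this phased methodology is as a ratchet that advances only when the current phase stops yielding new findings. The phase names are taken from the list above; the advancement rule itself is an assumption for illustration.

```typescript
// Phase names from the ZeroLeaks-style methodology above.
const PHASES = [
  "RECONNAISSANCE", "PROFILING", "SOFT_PROBE",
  "ESCALATION", "EXPLOITATION", "PERSISTENCE",
] as const;
type Phase = (typeof PHASES)[number];

// Assumed rule: stay in a phase while it still produces findings,
// otherwise move one step forward; PERSISTENCE is terminal.
function nextPhase(current: Phase, newFindingsThisTurn: number): Phase {
  const i = PHASES.indexOf(current);
  if (newFindingsThisTurn > 0 || i === PHASES.length - 1) return current;
  return PHASES[i + 1];
}
```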

Analysis Categories

| Category | Description | Examples |
|---|---|---|
| direct | Straightforward vulnerability checks | SQL injection, XSS |
| encoding | Encoding/decoding issues | Base64, Unicode, escaping |
| persona | Identity/permission bypasses | Privilege escalation |
| social | Trust boundary violations | SSRF, CSRF |
| technical | Technical implementation issues | Race conditions, memory |
| crescendo | Multi-step escalation paths | Chained vulnerabilities |
| many_shot | Pattern-based detection | Repeated anti-patterns |
| cot_hijack | Logic flow manipulation | Business logic flaws |
| policy_puppetry | Configuration exploitation | Misconfigurations |
| context_overflow | Resource exhaustion | DoS, memory leaks |
| reasoning_exploit | Algorithm weaknesses | Cryptographic issues |

Configuration

```yaml
adversarial_config:
  max_turns: 25            # Maximum analysis iterations
  max_tree_depth: 5        # How deep to explore each vector
  branching_factor: 4      # Parallel exploration paths
  pruning_threshold: 0.3   # Score below which to abandon path

  enable_crescendo: true   # Multi-turn escalation
  enable_many_shot: true   # Pattern-based detection
  enable_best_of_n: true   # Generate variations
  best_of_n_count: 5       # Variations per test
```
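With these defaults, the unpruned exploration tree for a single vector is bounded by the sum of branching_factor^d over each depth d. A quick budget check (my arithmetic, not part of the skill's code; field names adapted to camelCase):

```typescript
interface AdversarialConfig {
  maxTurns: number;
  maxTreeDepth: number;
  branchingFactor: number;
  pruningThreshold: number;
}

// Worst-case node count if nothing is pruned: b^1 + b^2 + ... + b^depth.
function worstCaseNodes(cfg: AdversarialConfig): number {
  let total = 0;
  for (let d = 1; d <= cfg.maxTreeDepth; d++) total += cfg.branchingFactor ** d;
  return total;
}

// A path is abandoned when its score falls below the pruning threshold.
function shouldPrune(cfg: AdversarialConfig, pathScore: number): boolean {
  return pathScore < cfg.pruningThreshold;
}

const cfg: AdversarialConfig = {
  maxTurns: 25, maxTreeDepth: 5, branchingFactor: 4, pruningThreshold: 0.3,
};
```

At depth 5 with branching factor 4 that is 1364 potential nodes per vector, which is why the pruning threshold matters.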

Strategies

1. Behavioral Reconnaissance (Priority: 100)

```yaml
id: recon_behavioral
applicable_when:
  turn_range: [1, 3]
  leak_status: ["none"]
attack_sequence:
  - category: direct
    weight: 0.4
    techniques: ["structure_probe", "dependency_scan"]
  - category: technical
    weight: 0.3
    techniques: ["config_analysis", "boundary_test"]
```

2. Credential/Secret Scanning (Priority: 95)

```yaml
id: credential_hunt
applicable_when:
  defense_level: ["none", "weak"]
attack_sequence:
  - category: direct
    weight: 0.5
    techniques: ["secret_scan", "env_probe"]
  - category: encoding
    weight: 0.3
    techniques: ["base64_secrets", "obfuscated_creds"]
```

3. Trust Boundary Analysis (Priority: 90)

```yaml
id: trust_boundary
applicable_when:
  defense_level: ["weak", "moderate"]
attack_sequence:
  - category: crescendo
    weight: 0.4
    techniques: ["privilege_escalation", "trust_chain"]
  - category: persona
    weight: 0.3
    techniques: ["identity_bypass", "role_confusion"]
```

4. Input Validation Bypass (Priority: 85)

```yaml
id: input_bypass
applicable_when:
  defense_level: ["moderate", "strong"]
  failed_categories: ["direct"]
attack_sequence:
  - category: encoding
    weight: 0.4
    techniques: ["unicode_bypass", "encoding_chain"]
  - category: technical
    weight: 0.3
    techniques: ["format_injection", "boundary_overflow"]
```

5. Advanced Composite (Priority: 80)

```yaml
id: advanced_composite
applicable_when:
  defense_level: ["strong", "hardened"]
  failed_categories: ["direct", "encoding", "persona"]
attack_sequence:
  - category: cot_hijack
    weight: 0.25
    techniques: ["logic_flow_manipulation"]
  - category: crescendo
    weight: 0.25
    techniques: ["multi_step_chain"]
  - category: reasoning_exploit
    weight: 0.25
    techniques: ["algorithm_weakness"]
```
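The five strategies above read as a priority-ordered rule table: choose the highest-priority strategy whose applicable_when clause matches the current state. A sketch under assumed matching semantics (the interface and helper are hypothetical, and only the defense-level and failed-categories conditions are modeled):

```typescript
interface Strategy {
  id: string;
  priority: number;
  applicableWhen: { defenseLevel?: string[]; failedCategories?: string[] };
}

function selectStrategy(
  strategies: Strategy[],
  state: { defenseLevel: string; failedCategories: string[] },
): Strategy | undefined {
  return strategies
    .filter((s) => {
      const w = s.applicableWhen;
      if (w.defenseLevel && !w.defenseLevel.includes(state.defenseLevel)) return false;
      // Assumed: every listed failed category must already have been tried.
      if (w.failedCategories &&
          !w.failedCategories.every((c) => state.failedCategories.includes(c))) return false;
      return true;
    })
    .sort((a, b) => b.priority - a.priority)[0];
}

// Three of the five strategies, transcribed from the YAML above.
const strategies: Strategy[] = [
  { id: "credential_hunt", priority: 95,
    applicableWhen: { defenseLevel: ["none", "weak"] } },
  { id: "trust_boundary", priority: 90,
    applicableWhen: { defenseLevel: ["weak", "moderate"] } },
  { id: "input_bypass", priority: 85,
    applicableWhen: { defenseLevel: ["moderate", "strong"], failedCategories: ["direct"] } },
];
```

Under this reading, a moderate defense with a failed direct attack still selects trust_boundary, because it matches and outranks input_bypass.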

Defense Profile Output

```typescript
interface DefenseProfile {
  level: "none" | "weak" | "moderate" | "strong" | "hardened";
  confidence: number;
  observedBehaviors: string[];
  guardrails: {
    type: string;
    strength: number;
    bypassed: boolean;
    bypassMethod?: string;
  }[];
  weaknesses: {
    category: AttackCategory;
    description: string;
    exploitability: number;
  }[];
  safePatterns: string[];
  responsePatterns: {
    pattern: string;
    frequency: number;
    defenseIndicator: boolean;
  }[];
}
```

Finding Classification

Severity Levels

| Status | Severity | Description |
|---|---|---|
| complete | CRITICAL | Full vulnerability exposed |
| substantial | CRITICAL | Major security issue |
| fragment | HIGH | Partial vulnerability |
| hint | MEDIUM | Potential issue indicated |
| none | LOW | No vulnerability found |
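The table above transcribes directly into a mapping function; this is a sketch of that transcription, with the type names invented here:

```typescript
type LeakStatus = "complete" | "substantial" | "fragment" | "hint" | "none";
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";

// Direct transcription of the classification table above.
function severityFor(status: LeakStatus): Severity {
  switch (status) {
    case "complete":
    case "substantial": return "CRITICAL";
    case "fragment": return "HIGH";
    case "hint": return "MEDIUM";
    case "none": return "LOW";
  }
}
```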

Finding Output

```typescript
interface Finding {
  id: string;
  turn: number;
  timestamp: number;
  extractedContent: string;
  contentType: "vulnerability" | "weakness" | "smell" | "risk" | "unknown";
  technique: string;
  category: AttackCategory;
  confidence: "high" | "medium" | "low";
  evidence: string;
  severity: "critical" | "high" | "medium" | "low";
  verified: boolean;
  recommendation: string;
}
```
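For orientation, a value conforming to this interface might look like the following. The interface (and an AttackCategory type) is repeated so the snippet is self-contained, and every field value is invented for illustration:

```typescript
type AttackCategory = "direct" | "encoding" | "persona" | "social" | "technical"
  | "crescendo" | "many_shot" | "cot_hijack" | "policy_puppetry"
  | "context_overflow" | "reasoning_exploit";

interface Finding {
  id: string;
  turn: number;
  timestamp: number;
  extractedContent: string;
  contentType: "vulnerability" | "weakness" | "smell" | "risk" | "unknown";
  technique: string;
  category: AttackCategory;
  confidence: "high" | "medium" | "low";
  evidence: string;
  severity: "critical" | "high" | "medium" | "low";
  verified: boolean;
  recommendation: string;
}

// Hypothetical example finding; all values are made up.
const example: Finding = {
  id: "F-001",
  turn: 7,
  timestamp: 1700000000000,
  extractedContent: "Unparameterized SQL in login handler",
  contentType: "vulnerability",
  technique: "structure_probe",
  category: "direct",
  confidence: "high",
  evidence: "query built via string concatenation with user input",
  severity: "critical",
  verified: true,
  recommendation: "Use parameterized queries / prepared statements",
};
```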

Integration with Ralph Loop

```yaml
# Adversarial analysis as part of validation
Step 7: VALIDATE
  └── 7a. QUALITY-AUDITOR (standard)
  └── 7b. GATES (standard)
  └── 7c. ADVERSARIAL-CODE (this skill)  <- Invoke for complexity >= 7
  └── 7d. ADVERSARIAL-PLAN (standard)
```

Invocation

IMPORTANT: Use available security agents instead of non-existent adversarial-code-analyzer.

```yaml
Task:
  subagent_type: "security-auditor"
  model: "opus"
  prompt: |
    TARGET_PATH: src/auth/
    ANALYSIS_TYPE: security
    CONFIG:
      max_turns: 25
      enable_crescendo: true
      enable_best_of_n: true

    Perform comprehensive security audit on the target codebase.
```

Alternative for Cross-Validation:

```bash
# Use codex-cli for second opinion
/codex-cli analyze security --target src/auth/

# Or use gemini-cli for alternative analysis
/gemini-cli search security vulnerabilities in src/auth/
```

Output Format

```json
{
  "scan_result": {
    "overall_vulnerability": "medium",
    "overall_score": 65,
    "leak_status": "fragment",
    "findings": [...],
    "defense_profile": {...},
    "recommendations": [...],
    "summary": "Analysis identified 3 potential vulnerabilities..."
  },
  "analysis_tree": {
    "nodes_explored": 47,
    "max_depth_reached": 4,
    "successful_paths": 3
  },
  "strategies_used": [
    "recon_behavioral",
    "credential_hunt",
    "trust_boundary"
  ]
}
```

CLI Commands

IMPORTANT: Use available skills and tools for adversarial analysis:

```bash
# Use security-auditor agent (available)
Task subagent_type=security-auditor model=opus "Perform comprehensive security audit of src/auth/"

# Use codex-cli for cross-validation (available)
/codex-cli analyze security --target src/auth/

# Use gemini-cli for alternative analysis (available)
/gemini-cli search "security vulnerabilities SQL injection XSS" --count 10

# Manual grep-based security scanning
grep -r "eval\|exec\|system\|innerHTML" src/
grep -r "SELECT.*WHERE.*\+" src/   # SQL injection patterns
grep -r "md5\|sha1" src/           # Weak hashing
```

Best Practices

  1. Start with Reconnaissance: Always profile before attacking
  2. Adapt to Defenses: Each response teaches about the codebase
  3. Layer Techniques: Combine multiple vectors for hardened code
  4. Verify Findings: Always validate discoveries before reporting
  5. Document Patterns: Track successful techniques for future use

Attribution

Strategy patterns adapted from ZeroLeaks AI security scanner architecture (FSL-1.1-Apache-2.0).

Action Reporting (v2.93.0)

This skill automatically generates complete reports for traceability:

Automatic Report

When this skill completes, the following are generated automatically:

  1. In the Claude conversation: visible results
  2. In the repository: docs/actions/adversarial/{timestamp}.md
  3. JSON metadata: .claude/metadata/actions/adversarial/{timestamp}.json

Report Contents

Each report includes:

  • Summary: description of the executed task
  • Execution Details: duration, iterations, files modified
  • Results: errors found, recommendations
  • Next Steps: suggested follow-up actions

Viewing Previous Reports

```bash
# List all reports from this skill
ls -lt docs/actions/adversarial/

# View the most recent report
cat $(ls -t docs/actions/adversarial/*.md | head -1)

# Find failed reports
grep -l "Status: FAILED" docs/actions/adversarial/*.md
```

Manual Generation (Optional)

```bash
source .claude/lib/action-report-lib.sh
start_action_report "adversarial" "Task description"
# ... execution ...
complete_action_report "success" "Summary" "Recommendations"
```

System References
