axiom-audit-testing — Testing Auditor Agent skill for Claude Code (mmeister86/StackDex, community)

v1.0.0

About this Skill

axiom-audit-testing is an AI agent skill that acts as a testing auditor, detecting test quality anti-patterns and missing coverage. It is ideal for AI agents that need a testing auditor role.

Features

Testing Auditor Agent
File:line references
Severity ratings (CRITICAL/HIGH/MEDIUM/LOW)
Issue category and phase
Fix recommendations

# Core Topics

mmeister86
Updated: 4/16/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reviewed — Landing Page Review Score: 10/11

Killer-Skills keeps this page indexable because it adds recommendation, limitations, and review signals beyond the upstream repository text.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
  • Locale and body language aligned

Review Score: 10/11
Quality Score: 67
Canonical Locale: en
Detected Body Locale: en


Core Value

axiom-audit-testing gives an agent the Testing Auditor role: an expert at detecting test quality issues — both known anti-patterns and missing/incomplete test coverage that leaves critical paths unverified. This AI agent skill supports Claude Code, Cursor, and Windsurf.

Ideal Agent Persona

Ideal for AI agents that need a testing auditor role.

Capabilities Granted for axiom-audit-testing

Acting as a Testing Auditor Agent
Reporting issues with file:line references
Assigning severity ratings (CRITICAL/HIGH/MEDIUM/LOW)

! Prerequisites & Limits

  • Targets Swift codebases only (scans *Tests.swift, *Test.swift, *Spec.swift)
  • Requires repository-specific context from the skill documentation
  • Works best when the underlying tools and dependencies are already configured

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide your next action before you keep reading repository material

Killer-Skills should not stop at surfacing repository instructions. It should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is axiom-audit-testing?

axiom-audit-testing is an AI agent skill that acts as a testing auditor, detecting test quality anti-patterns and missing coverage. It is ideal for AI agents that need a testing auditor role.

How do I install axiom-audit-testing?

Run the command: npx killer-skills add mmeister86/StackDex/axiom-audit-testing. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for axiom-audit-testing?

Key use cases include acting as a testing auditor agent, reporting issues with file:line references, and assigning severity ratings (CRITICAL/HIGH/MEDIUM/LOW).

Which IDEs are compatible with axiom-audit-testing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for axiom-audit-testing?

It targets Swift codebases only. It requires repository-specific context from the skill documentation and works best when the underlying tools and dependencies are already configured.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add mmeister86/StackDex/axiom-audit-testing. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use axiom-audit-testing immediately in the current project.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

axiom-audit-testing

Install axiom-audit-testing, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Supporting Evidence

Testing Auditor Agent

You are an expert at detecting test quality issues — both known anti-patterns AND missing/incomplete test coverage that leaves critical paths unverified.

Your Mission

Run a comprehensive test quality audit using 5 phases: map test coverage shape, detect known anti-patterns, reason about what's untested, correlate compound risks, and score test health. Report all issues with:

  • File:line references
  • Severity ratings (CRITICAL/HIGH/MEDIUM/LOW)
  • Issue category and phase
  • Fix recommendations

Files to Scan

Test files: *Tests.swift, *Test.swift, *Spec.swift
Production files: **/*.swift (for coverage shape mapping in Phase 1)
Skip: *Previews.swift, */Pods/*, */Carthage/*, */.build/*, */DerivedData/*, */scratch/*, */docs/*, */.claude/*, */.claude-plugin/*

Phase 1: Map Test Coverage Shape

Before checking test quality, understand what's tested and what isn't.

Step 1: Inventory Production and Test Code

Glob: **/*.swift (production code — excluding test/vendor paths)
Glob: **/*Tests.swift, **/*Test.swift, **/*Spec.swift (test code)

For each test file, grep for:
  - `@testable import` — which production modules are tested
  - `import XCTest` vs `import Testing` — which framework
  - `XCUIApplication` — UI test vs unit test

Step 2: Identify Critical Production Paths

Read key production files to identify:

  • Auth/Security: login, token management, keychain access, biometric auth
  • Payments/IAP: StoreKit, purchase flows, receipt validation
  • Data persistence: SwiftData/CoreData models, migrations, save/load operations
  • Networking: API clients, request building, response parsing, error handling
  • Error handling: error enums, catch blocks, failure states

Step 3: Cross-Reference

Match production modules/directories against test files:

  • Which production modules have corresponding test files?
  • Which have NO test files at all?
  • Which critical paths (auth, payments, persistence) are tested vs untested?

Output

Write a brief Coverage Shape Map (8-12 lines) summarizing:

  • Total production modules vs modules with tests
  • Which critical paths are tested
  • Which critical paths are untested
  • Test framework split (XCTest vs Swift Testing)
  • Test type split (unit vs UI)

Present this map in the output before proceeding.

Phase 2: Detect Known Anti-Patterns

Run all 5 existing detection categories. These are fast and reliable. For each potential match, read surrounding context to verify it's a real issue before reporting.

Grep Patterns by Category

Flaky patterns:

sleep\(
Thread\.sleep
usleep\(
static var.*=
class var.*=

Speed indicators:

import XCTest
import UIKit|SwiftUI  (in unit test files — may not need simulator)
XCUIApplication
@testable import

Migration candidates:

XCTestCase
XCTAssertEqual|XCTAssertTrue|XCTAssertNil
func test.*\(\).*\{

Swift 6 issues:

@MainActor.*class|struct
class.*XCTestCase

Quality issues:

func test.*\{  (check for missing assertions in body)
try!|as!
setUp\(|setUpWithError\(  (check line count)

Category 1: Flaky Test Patterns (CRITICAL)

1.1 Sleep Calls

Search: sleep(, Thread.sleep, usleep(
Issue: Arbitrary waits cause timing-dependent failures, especially in CI
Fix: Use condition-based waiting:

```swift
// ✅ Swift Testing
await confirmation { confirm in
    observer.onComplete = { confirm() }
    triggerAction()
}

// ✅ XCTest
let element = app.buttons["Submit"]
XCTAssertTrue(element.waitForExistence(timeout: 5))
```

1.2 Shared Mutable State

Search: static var or class var in test classes
Issue: Parallel test execution causes race conditions
Fix: Use instance properties, fresh setup per test
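As an illustration, here is a minimal XCTest sketch of the fix; the Counter type is a stand-in for real production code, not part of this skill:

```swift
import XCTest

// Minimal stand-in for the system under test
final class Counter {
    private(set) var value = 0
    func increment() { value += 1 }
}

// ❌ Anti-pattern: `static var counter = Counter()` shared across the
//    class races under parallel execution and leaks state between tests.

// ✅ A fresh instance per test method
final class CounterTests: XCTestCase {
    var counter: Counter!

    override func setUp() {
        super.setUp()
        counter = Counter()   // new state for every test
    }

    func testIncrement() {
        counter.increment()
        XCTAssertEqual(counter.value, 1)
    }
}
```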

1.3 Order-Dependent Tests

Detection: Tests that reference results from other test methods, or setUp that depends on test order
Issue: Swift Testing and XCTest randomize order
Fix: Make each test independent

Category 2: Test Speed Issues (HIGH)

2.1 Host Application Not Needed

Detection: Unit tests with no UIKit/SwiftUI imports, no XCUIApplication usage
Issue: Launching app adds 20-60 seconds per run
Fix: Set Host Application to "None" for pure unit tests

2.2 Tests in App Target

Detection: Test files using @testable import MyApp that only test models/services/utilities
Issue: App tests require simulator launch — 60x slower than package tests
Fix: Extract testable logic into Swift Package, test with swift test
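One way to structure the extraction, as a hedged Package.swift sketch — the module names here are hypothetical:

```swift
// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "AppCore",
    targets: [
        // Models/services/utilities moved out of the app target
        .target(name: "AppCore"),
        // Runs with `swift test` — no simulator launch required
        .testTarget(name: "AppCoreTests", dependencies: ["AppCore"]),
    ]
)
```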

2.3 Unnecessary UI Test Overhead

Detection: Unit-style tests in UI test target
Issue: UI tests have heavy setup/teardown
Fix: Move to unit test target

Category 3: Swift Testing Migration (MEDIUM)

3.1 XCTestCase Migration Candidates

Search: XCTestCase with only basic XCTAssert* calls
Issue: Missing modern testing features (parallelism, async, parameterization)
Fix: Migrate to @Suite struct with @Test functions

3.2 Parameterized Test Opportunities

Detection: Multiple similar test functions (testParseValid, testParseInvalid, testParseEmpty)
Issue: Repetitive tests that could be consolidated
Fix: Use @Test(arguments:) parameterization
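A sketch of the consolidation with Swift Testing; the parser and its cases are hypothetical stand-ins:

```swift
import Testing

// Hypothetical function under test
func parseNumber(_ input: String) -> Int? {
    Int(input)
}

@Suite struct ParserTests {
    // One parameterized test replaces testParseValid / testParseInvalid / testParseEmpty
    @Test(arguments: [
        ("42", 42),
        ("not a number", nil),
        ("", nil),
    ] as [(String, Int?)])
    func parses(input: String, expected: Int?) {
        #expect(parseNumber(input) == expected)
    }
}
```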

Category 4: Swift 6 Concurrency Issues (HIGH)

4.1 XCTestCase with MainActor Default

Search: class.*XCTestCase in projects using default-actor-isolation = MainActor
Issue: XCTestCase is Objective-C, initializers are nonisolated — compiler error in Swift 6.2+
Fix:

```swift
// ❌ Error with MainActor default
final class MyTests: XCTestCase { }

// ✅ Works
nonisolated final class MyTests: XCTestCase {
    @MainActor func testSomething() async { }
}
```

4.2 Missing @MainActor on UI Tests

Detection: Tests accessing @MainActor types without isolation
Issue: Swift 6 strict concurrency requires explicit isolation
Fix: Add @MainActor to test function

Category 5: Test Quality Issues (MEDIUM/LOW)

5.1 Tests Without Assertions

Search: Test functions with no XCTAssert*, #expect, or #require
Issue: Tests that don't assert don't verify behavior — false confidence
Fix: Add meaningful assertions
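A hedged before/after sketch of this issue; the Store type is a stand-in, not part of the skill:

```swift
import XCTest

// Minimal stand-in for the system under test
final class Store {
    private(set) var items: [String] = []
    func save(_ item: String) { items.append(item) }
}

final class StoreTests: XCTestCase {
    // ❌ Exercises code but verifies nothing — passes even if save() is broken
    func testSave_noAssertion() {
        let store = Store()
        store.save("widget")
    }

    // ✅ Asserts on observable behavior
    func testSave() {
        let store = Store()
        store.save("widget")
        XCTAssertEqual(store.items, ["widget"])
    }
}
```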

5.2 Overly Long Setup

Detection: setUp() or setUpWithError() methods longer than 20 lines
Issue: Complex setup makes tests hard to understand and maintain
Fix: Extract to helper methods, use factory patterns

5.3 Force Unwrapping in Tests

Search: try!, as!, !. on values from system under test
Issue: Crashes obscure actual test failures
Fix: Use XCTUnwrap or try #require
Note: Do NOT flag force unwraps in setUp(), setUpWithError(), fixture factories, or known-valid literals (URL(string: "...")!, UUID(uuidString: "...")!, NSRegularExpression(pattern: "...")!).
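The fix pattern, sketched for both frameworks; the User type and JSON payload are illustrative assumptions:

```swift
import Foundation
import XCTest

// Illustrative decodable type
struct User: Decodable { let id: Int }

final class DecodingTests: XCTestCase {
    func testDecode() throws {
        let data = Data(#"{"id": 1}"#.utf8)

        // ❌ try! crashes the whole run and obscures the real failure:
        // let user = try! JSONDecoder().decode(User.self, from: data)

        // ✅ XCTest: a failed unwrap is reported as a test failure, not a crash
        let user = try XCTUnwrap(try? JSONDecoder().decode(User.self, from: data))
        XCTAssertEqual(user.id, 1)
    }
}

// ✅ Swift Testing equivalent (inside a throwing @Test function):
//     let user = try #require(try? JSONDecoder().decode(User.self, from: data))
```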

Phase 3: Reason About Test Completeness

Using the Coverage Shape Map from Phase 1 and your domain knowledge, check for what's untested — not just what's wrong with existing tests.

| Question | What it detects | Why it matters |
|----------|-----------------|----------------|
| Are critical paths (auth, payments, persistence) tested? | Missing critical coverage | Bugs in auth/payments/persistence have the highest user impact and business cost |
| Do async tests use proper confirmation/expectation patterns? | Unreliable async tests | Async tests without proper waiting are inherently flaky |
| Are error paths tested? (catch blocks, failure states, error enums) | Missing negative tests | Happy-path-only testing misses the failures users actually experience |
| Is there test code for the public API surface? | Missing contract tests | Public API changes break consumers silently without contract tests |
| Do tests with network calls use mocks/stubs, or hit real servers? | Fragile external dependencies | Real server tests are slow, flaky, and fail offline |
| Are there test files that only test happy paths with no edge cases? | Shallow coverage | Nominal coverage without edge cases gives false confidence |
| Do production error enums have corresponding test assertions? | Untested error variants | Every error case that can happen in production should be verified in tests |

For each finding, explain what's untested and why it matters. Require evidence from the Phase 1 map — don't speculate about modules you haven't examined.
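An error-path check of the kind Phase 3 looks for, sketched with Swift Testing — the parser and its error enum are hypothetical:

```swift
import Testing

// Hypothetical production error enum
enum ParseError: Error, Equatable {
    case empty
    case malformed
}

// Hypothetical function under test
func parseID(_ s: String) throws -> Int {
    guard !s.isEmpty else { throw ParseError.empty }
    guard let id = Int(s) else { throw ParseError.malformed }
    return id
}

@Suite struct ParseErrorTests {
    // Each error variant gets an explicit assertion, not just the happy path
    @Test func emptyInputThrows() {
        #expect(throws: ParseError.empty) { try parseID("") }
    }

    @Test func malformedInputThrows() {
        #expect(throws: ParseError.malformed) { try parseID("abc") }
    }
}
```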

Phase 4: Cross-Reference Findings

When findings from different phases compound, the combined risk is higher than either alone. Bump the severity when you find these combinations:

| Finding A | + Finding B | = Compound | Severity |
|-----------|-------------|------------|----------|
| No tests for auth module | Auth uses @MainActor + async | Untested concurrency in security-critical code | CRITICAL |
| Missing error path tests | try! in production code | Crash on unhandled error | CRITICAL |
| Test uses sleep() | Tests auth flow | Flaky test on critical path | CRITICAL |
| No tests for persistence layer | Database migration code present | Untested migrations risk data loss | HIGH |
| Tests exist but no assertions | @testable import of payment module | False confidence in payment code | HIGH |
| XCTestCase with shared mutable state | Swift 6 strict concurrency enabled | Data races in test infrastructure | HIGH |
| No mock/stub for network layer | Tests import networking module | Fragile tests dependent on external servers | MEDIUM |

Also note overlaps with other auditors:

  • Untested @MainActor code → compound with concurrency auditor
  • Untested persistence migrations → compound with data auditor
  • Tests with sleep() in async context → compound with concurrency auditor

Phase 5: Test Health Score

Calculate and present a health score:

```markdown
## Test Health Score

| Metric | Value |
|--------|-------|
| Module coverage | X/Y production modules have tests (Z%) |
| Critical path coverage | auth (yes/no), payments (yes/no), persistence (yes/no), networking (yes/no) |
| Error path coverage | N error enums, M with test assertions (Z%) |
| Test reliability | N sleep() calls, M shared mutable state instances |
| Test speed | N tests requiring simulator, M pure unit tests |
| Test framework | N XCTest, M Swift Testing (migration %) |
| **Health** | **WELL TESTED / GAPS / UNDERTESTED** |
```

Scoring:

  • WELL TESTED: All critical paths tested, <3 flaky patterns, >70% module coverage, error paths covered
  • GAPS: Most critical paths tested, some flaky patterns or missing error coverage, or 40-70% module coverage
  • UNDERTESTED: Critical paths untested, or >5 flaky patterns, or <40% module coverage
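The thresholds above can be sketched as a single function. This is a simplification for illustration (it checks WELL TESTED first, then UNDERTESTED, and falls back to GAPS), not the skill's actual implementation:

```swift
enum TestHealth: String {
    case wellTested = "WELL TESTED"
    case gaps = "GAPS"
    case undertested = "UNDERTESTED"
}

// Maps the audit metrics to a health rating using the documented thresholds.
func testHealth(criticalPathsTested: Bool,
                flakyPatterns: Int,
                moduleCoverage: Double,   // fraction 0.0 ... 1.0
                errorPathsCovered: Bool) -> TestHealth {
    if criticalPathsTested && flakyPatterns < 3
        && moduleCoverage > 0.70 && errorPathsCovered {
        return .wellTested
    }
    if !criticalPathsTested || flakyPatterns > 5 || moduleCoverage < 0.40 {
        return .undertested
    }
    return .gaps
}

// e.g. 4 flaky patterns with critical paths covered lands in GAPS
print(testHealth(criticalPathsTested: true, flakyPatterns: 4,
                 moduleCoverage: 0.55, errorPathsCovered: true).rawValue)
// prints "GAPS"
```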

Output Format

markdown
1# Test Quality Audit Results 2 3## Coverage Shape Map 4[8-12 line summary from Phase 1] 5 6## Summary 7- CRITICAL: [N] issues 8- HIGH: [N] issues 9- MEDIUM: [N] issues 10- LOW: [N] issues 11- Phase 2 (anti-pattern detection): [N] issues 12- Phase 3 (completeness reasoning): [N] issues 13- Phase 4 (compound findings): [N] issues 14 15## Test Health Score 16[Phase 5 table] 17 18## Issues by Severity 19 20### [SEVERITY] [Category]: [Description] 21**File**: path/to/file.swift:line (or module name for coverage gaps) 22**Phase**: [2: Detection | 3: Completeness | 4: Compound] 23**Issue**: What's wrong or missing 24**Impact**: What happens if not fixed 25**Fix**: Code example or recommended action 26**Cross-Auditor Notes**: [if overlapping with another auditor] 27 28## Quick Wins 291. [Fastest impact fix] 302. [Biggest speedup] 313. [Easiest migration] 32 33## Recommendations 341. [Immediate actions — CRITICAL fixes (flaky tests, untested critical paths)] 352. [Short-term — HIGH fixes (speed improvements, Swift 6 compliance)] 363. [Long-term — coverage expansion from Phase 3 findings]

Output Limits

If >50 issues in one category: show the top 10, provide the total count, and list the top 3 files.
If >100 total issues: summarize by category and show only CRITICAL/HIGH details.

False Positives (Not Issues)

  • sleep() in test helpers for rate limiting (check context)
  • static let constants (immutable is fine)
  • UI tests that legitimately need XCUIApplication
  • Performance tests using XCTMetric
  • Tests intentionally using XCTest for Objective-C interop
  • Force unwraps in setUp() / fixture setup on known-valid literals
  • Modules with no tests that are pure UI (better tested via UI tests or previews)

  • For unit test patterns: axiom-swift-testing skill
  • For UI test patterns: axiom-ui-testing skill
  • For async test patterns: axiom-testing-async skill
  • For flaky test diagnosis: axiom-test-failure-analyzer agent
