perl-testing — Perl testing skill for Claude Code (Test2::V0, Test::More, TDD, prove)

Verified · v1.0.0 · GitHub

About this Skill

perl-testing is a skill that enables developers to write effective tests for their Perl applications using industry-standard testing frameworks and methodologies. It is ideal for code-review agents that need advanced Perl testing capabilities with Test2::V0 and Test::More.

Features

Writing tests using Test2::V0
Creating test suites with Test::More
Implementing TDD methodology
Using prove runner for test execution
Mocking dependencies for isolated testing
Measuring test coverage with Devel::Cover

Core Topics

affaan-m
108.5k · 14167
Updated: 3/26/2026

Agent Capability Analysis

The perl-testing skill by affaan-m is an open-source, official AI agent skill for Claude Code and other IDE workflows. It helps agents execute tasks with better context, repeatability, and domain-specific guidance, and is optimized for Claude Code, Perl testing, and Test2::V0.

Ideal Agent Persona

Perfect for Code Review Agents needing advanced Perl testing capabilities with Test2::V0 and Test::More.

Core Value

Empowers agents to implement Test-Driven Development methodology using prove runner, mocking, and coverage analysis with Devel::Cover, ensuring robust Perl applications with high test coverage.

Capabilities Granted for perl-testing

Designing comprehensive test suites for Perl modules and applications
Migrating tests from Test::More to Test2::V0 for improved testing efficiency
Debugging failing Perl tests using TDD workflow and test coverage analysis

Prerequisites & Limits

  • Requires Perl environment with Test2::V0 and Test::More installed
  • Limited to Perl applications and modules
SKILL.md

Perl Testing Patterns

Comprehensive testing strategies for Perl applications using Test2::V0, Test::More, prove, and TDD methodology.

When to Activate

  • Writing new Perl code (follow TDD: red, green, refactor)
  • Designing test suites for Perl modules or applications
  • Reviewing Perl test coverage
  • Setting up Perl testing infrastructure
  • Migrating tests from Test::More to Test2::V0
  • Debugging failing Perl tests

TDD Workflow

Always follow the RED-GREEN-REFACTOR cycle.

```perl
# Step 1: RED — Write a failing test
# t/unit/calculator.t
use v5.36;
use Test2::V0;

use lib 'lib';
use Calculator;

subtest 'addition' => sub {
    my $calc = Calculator->new;
    is($calc->add(2, 3), 5, 'adds two numbers');
    is($calc->add(-1, 1), 0, 'handles negatives');
};

done_testing;

# Step 2: GREEN — Write minimal implementation
# lib/Calculator.pm
package Calculator;
use v5.36;
use Moo;

# Note: avoid $a/$b as parameter names; they are reserved for sort()
sub add($self, $x, $y) {
    return $x + $y;
}

1;

# Step 3: REFACTOR — Improve while tests stay green
# Run: prove -lv t/unit/calculator.t
```

Test::More Fundamentals

The standard Perl testing module — widely used and shipped with core Perl.

Basic Assertions

```perl
use v5.36;
use Test::More;

# Plan upfront or use done_testing
# plan tests => 5;  # Fixed plan (optional)

# Equality
is($result, 42, 'returns correct value');
isnt($result, 0, 'not zero');

# Boolean
ok($user->is_active, 'user is active');
ok(!$user->is_banned, 'user is not banned');

# Deep comparison
is_deeply(
    $got,
    { name => 'Alice', roles => ['admin'] },
    'returns expected structure'
);

# Pattern matching
like($error, qr/not found/i, 'error mentions not found');
unlike($output, qr/password/, 'output hides password');

# Type check
isa_ok($obj, 'MyApp::User');
can_ok($obj, 'save', 'delete');

done_testing;
```

SKIP and TODO

```perl
use v5.36;
use Test::More;

# Skip tests conditionally
SKIP: {
    skip 'No database configured', 2 unless $ENV{TEST_DB};

    my $db = connect_db();
    ok($db->ping, 'database is reachable');
    is($db->version, '15', 'correct PostgreSQL version');
}

# Mark expected failures
TODO: {
    local $TODO = 'Caching not yet implemented';
    is($cache->get('key'), 'value', 'cache returns value');
}

done_testing;
```

Test2::V0 Modern Framework

Test2::V0 is the modern replacement for Test::More — richer assertions, better diagnostics, and extensible.

Why Test2?

  • Superior deep comparison with hash/array builders
  • Better diagnostic output on failures
  • Subtests with cleaner scoping
  • Extensible via Test2::Tools::* plugins
  • Backward-compatible with Test::More tests
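
Because Test2::V0 understands the same basic assertion vocabulary, migrating a Test::More file is often mechanical. A minimal before/after sketch (note that Test2's is() performs deep comparison, so is_deeply() collapses into is()):

```perl
# Before: Test::More
use Test::More;
is($count, 3, 'count is three');
is_deeply($got, { name => 'Alice' }, 'structure matches');
like($err, qr/not found/, 'error matches');
done_testing;

# After: Test2::V0 (is() now compares deeply, so is_deeply() goes away)
use Test2::V0;
is($count, 3, 'count is three');
is($got, { name => 'Alice' }, 'structure matches');
like($err, qr/not found/, 'error matches');
done_testing;
```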

Deep Comparison with Builders

```perl
use v5.36;
use Test2::V0;

# Hash builder — check partial structure
is(
    $user->to_hash,
    hash {
        field name  => 'Alice';
        field email => match(qr/\@example\.com$/);
        field age   => validator(sub { $_ >= 18 });
        # Ignore other fields
        etc();
    },
    'user has expected fields'
);

# Array builder
is(
    $result,
    array {
        item 'first';
        item match(qr/^second/);
        end();  # verify no extra items
    },
    'result matches expected list'
);

# Bag — order-independent comparison
is(
    $tags,
    bag {
        item 'perl';
        item 'testing';
        item 'tdd';
    },
    'has all required tags regardless of order'
);

done_testing;
```

Subtests

```perl
use v5.36;
use Test2::V0;

subtest 'User creation' => sub {
    my $user = User->new(name => 'Alice', email => 'alice@example.com');
    ok($user, 'user object created');
    is($user->name, 'Alice', 'name is set');
    is($user->email, 'alice@example.com', 'email is set');
};

subtest 'User validation' => sub {
    my $warnings = warns {
        User->new(name => '', email => 'bad');
    };
    ok($warnings, 'warns on invalid data');
};

done_testing;
```

Exception Testing with Test2

```perl
use v5.36;
use Test2::V0;

# Test that code dies
like(
    dies { divide(10, 0) },
    qr/Division by zero/,
    'dies on division by zero'
);

# Test that code lives
ok(lives { divide(10, 2) }, 'division succeeds') or note($@);

# Combined pattern
subtest 'error handling' => sub {
    ok(lives { parse_config('valid.json') }, 'valid config parses');
    like(
        dies { parse_config('missing.json') },
        qr/Cannot open/,
        'missing file dies with message'
    );
};

done_testing;
```

Test Organization and prove

Directory Structure

```text
t/
├── 00-load.t            # Verify modules compile
├── 01-basic.t           # Core functionality
├── unit/
│   ├── config.t         # Unit tests by module
│   ├── user.t
│   └── util.t
├── integration/
│   ├── database.t
│   └── api.t
├── lib/
│   └── TestHelper.pm    # Shared test utilities
└── fixtures/
    ├── config.json      # Test data files
    └── users.csv
```

prove Commands

```bash
# Run all tests
prove -l t/

# Verbose output
prove -lv t/

# Run specific test
prove -lv t/unit/user.t

# Recursive search
prove -lr t/

# Parallel execution (8 jobs)
prove -lr -j8 t/

# Run only failing tests from last run
prove -l --state=failed t/

# Colored output with timer
prove -l --color --timer t/

# JUnit XML output for CI (via TAP::Formatter::JUnit)
prove -l --formatter TAP::Formatter::JUnit t/ > results.xml
```

.proverc Configuration

```text
-l
--color
--timer
-r
-j4
--state=save
```

Fixtures and Setup/Teardown

Subtest Isolation

```perl
use v5.36;
use Test2::V0;
use File::Temp qw(tempdir);
use Path::Tiny;

subtest 'file processing' => sub {
    # Setup
    my $dir  = tempdir(CLEANUP => 1);
    my $file = path($dir, 'input.txt');
    $file->spew_utf8("line1\nline2\nline3\n");

    # Test
    my $result = process_file("$file");
    is($result->{line_count}, 3, 'counts lines');

    # Teardown happens automatically (CLEANUP => 1)
};

done_testing;
```

Shared Test Helpers

Place reusable helpers in t/lib/TestHelper.pm and load with use lib 't/lib'. Export factory functions like create_test_db(), create_temp_dir(), and fixture_path() via Exporter.
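
A minimal sketch of such a helper module, assuming the factory names suggested above (adjust to your project's needs):

```perl
# t/lib/TestHelper.pm
package TestHelper;
use v5.36;
use Exporter 'import';
use File::Temp qw(tempdir);
use File::Spec;

our @EXPORT_OK = qw(create_temp_dir fixture_path);

# Temporary directory, removed automatically at process exit
sub create_temp_dir { return tempdir(CLEANUP => 1) }

# Path to a data file under t/fixtures/
sub fixture_path($name) {
    return File::Spec->catfile('t', 'fixtures', $name);
}

1;
```

A test file then loads it with use lib 't/lib'; use TestHelper qw(create_temp_dir fixture_path);.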

Mocking

Test::MockModule

```perl
use v5.36;
use Test2::V0;
use Test::MockModule;

subtest 'mock external API' => sub {
    my $mock = Test::MockModule->new('MyApp::API');

    # Good: Mock returns controlled data
    $mock->mock(fetch_user => sub ($self, $id) {
        return { id => $id, name => 'Mock User', email => 'mock@test.com' };
    });

    my $api  = MyApp::API->new;
    my $user = $api->fetch_user(42);
    is($user->{name}, 'Mock User', 'returns mocked user');

    # Verify call count
    my $call_count = 0;
    $mock->mock(fetch_user => sub { $call_count++; return {} });
    $api->fetch_user(1);
    $api->fetch_user(2);
    is($call_count, 2, 'fetch_user called twice');

    # Mock is automatically restored when $mock goes out of scope
};

# Bad: Monkey-patching without restoration
# *MyApp::API::fetch_user = sub { ... };  # NEVER — leaks across tests

done_testing;
```

For lightweight mock objects, use Test::MockObject to create injectable test doubles with ->mock() and verify calls with ->called_ok().
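
A brief sketch of that pattern (the send method and its payload are hypothetical):

```perl
use v5.36;
use Test2::V0;
use Test::MockObject;

my $notifier = Test::MockObject->new;
$notifier->mock(send => sub ($self, $message) { return 1 });

# Inject $notifier wherever a real notifier object is expected
ok($notifier->send('hello'), 'mocked send reports success');

# Test::MockObject records calls, so they can be verified afterwards
$notifier->called_ok('send');

done_testing;
```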

Coverage with Devel::Cover

Running Coverage

```bash
# Basic coverage report
cover -test

# Or step by step
perl -MDevel::Cover -Ilib t/unit/user.t
cover

# HTML report
cover -report html
open cover_db/coverage.html

# Specific thresholds
cover -test -report text | grep 'Total'

# CI-friendly: fail under threshold
cover -test && cover -report text -select '^lib/' \
  | perl -ne 'if (/Total.*?(\d+\.\d+)/) { exit 1 if $1 < 80 }'
```

Integration Testing

Use in-memory SQLite for database tests, and mock HTTP::Tiny for API tests.

```perl
use v5.36;
use Test2::V0;
use DBI;

subtest 'database integration' => sub {
    my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '', {
        RaiseError => 1,
    });
    $dbh->do('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');

    $dbh->prepare('INSERT INTO users (name) VALUES (?)')->execute('Alice');
    my $row = $dbh->selectrow_hashref('SELECT * FROM users WHERE name = ?', undef, 'Alice');
    is($row->{name}, 'Alice', 'inserted and retrieved user');
};

done_testing;
```
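
The HTTP half of that advice can reuse the Test::MockModule technique shown earlier. Here is a sketch that stubs HTTP::Tiny->get to return a canned response (the URL and payload are illustrative):

```perl
use v5.36;
use Test2::V0;
use Test::MockModule;
use HTTP::Tiny;

subtest 'API integration without the network' => sub {
    my $http = Test::MockModule->new('HTTP::Tiny');
    $http->mock(get => sub ($self, $url) {
        # Shape matches HTTP::Tiny's documented response hashref
        return { success => 1, status => 200, content => '{"name":"Alice"}' };
    });

    my $res = HTTP::Tiny->new->get('https://api.example.com/users/1');
    ok($res->{success}, 'mocked request succeeds');
    like($res->{content}, qr/Alice/, 'returns the canned payload');
};

done_testing;
```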

Best Practices

DO

  • Follow TDD: Write tests before implementation (red-green-refactor)
  • Use Test2::V0: Modern assertions, better diagnostics
  • Use subtests: Group related assertions, isolate state
  • Mock external dependencies: Network, database, file system
  • Use prove -l: Always include lib/ in @INC
  • Name tests clearly: 'user login with invalid password fails'
  • Test edge cases: Empty strings, undef, zero, boundary values
  • Aim for 80%+ coverage: Focus on business logic paths
  • Keep tests fast: Mock I/O, use in-memory databases

DON'T

  • Don't test implementation: Test behavior and output, not internals
  • Don't share state between subtests: Each subtest should be independent
  • Don't skip done_testing: Ensures all planned tests ran
  • Don't over-mock: Mock boundaries only, not the code under test
  • Don't use Test::More for new projects: Prefer Test2::V0
  • Don't ignore test failures: All tests must pass before merge
  • Don't test CPAN modules: Trust libraries to work correctly
  • Don't write brittle tests: Avoid over-specific string matching

Quick Reference

| Task | Command / Pattern |
| --- | --- |
| Run all tests | `prove -lr t/` |
| Run one test verbose | `prove -lv t/unit/user.t` |
| Parallel test run | `prove -lr -j8 t/` |
| Coverage report | `cover -test && cover -report html` |
| Test equality | `is($got, $expected, 'label')` |
| Deep comparison | `is($got, hash { field k => 'v'; etc() }, 'label')` |
| Test exception | `like(dies { ... }, qr/msg/, 'label')` |
| Test no exception | `ok(lives { ... }, 'label')` |
| Mock a method | `Test::MockModule->new('Pkg')->mock(m => sub { ... })` |
| Skip tests | `SKIP: { skip 'reason', $count unless $cond; ... }` |
| TODO tests | `TODO: { local $TODO = 'reason'; ... }` |

Common Pitfalls

Forgetting done_testing

```perl
# Bad: Test file runs but doesn't verify all tests executed
use Test2::V0;
is(1, 1, 'works');
# Missing done_testing — silent bugs if test code is skipped

# Good: Always end with done_testing
use Test2::V0;
is(1, 1, 'works');
done_testing;
```

Missing -l Flag

```bash
# Bad: Modules in lib/ not found
prove t/unit/user.t
# Can't locate MyApp/User.pm in @INC

# Good: Include lib/ in @INC
prove -l t/unit/user.t
```

Over-Mocking

Mock the dependency, not the code under test. If your test only verifies that a mock returns what you told it to, it tests nothing.
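
A sketch of the anti-pattern, reusing the MyApp::API example from the mocking section (the report generator in the comment is hypothetical):

```perl
# Bad: circular test, since the assertion merely restates the stub
my $mock = Test::MockModule->new('MyApp::API');
$mock->mock(fetch_user => sub { return { name => 'Mock User' } });
is(MyApp::API->new->fetch_user(1)->{name}, 'Mock User', 'proves nothing');

# Good: stub the boundary, then assert on real code that consumes it,
# e.g. a report generator built on top of fetch_user:
# is(MyApp::Report->new(api => MyApp::API->new)->title_for(1),
#    'Report for Mock User', 'report uses fetched name');
```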

Test Pollution

Use my variables inside subtests — never our — to prevent state leaking between tests.
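
A minimal illustration, with a hypothetical User class:

```perl
# Bad: package-level state shared across subtests
our $user;
subtest 'create user' => sub {
    $user = User->new(name => 'Alice');
    ok($user, 'created');
};
subtest 'rename user' => sub {
    # Breaks if the previous subtest is skipped, reordered, or removed
    is($user->rename('Bob'), 'Bob', 'renamed');
};

# Good: each subtest builds exactly the state it needs
subtest 'rename user' => sub {
    my $user = User->new(name => 'Alice');
    is($user->rename('Bob'), 'Bob', 'renamed');
};
```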

Remember: Tests are your safety net. Keep them fast, focused, and independent. Use Test2::V0 for new projects, prove for running, and Devel::Cover for accountability.

FAQ & Installation Steps


Frequently Asked Questions

What is perl-testing?

perl-testing is a skill that enables developers to write effective tests for Perl applications using industry-standard testing frameworks (Test2::V0, Test::More) and TDD methodology. It is particularly suited to code-review agents.

How do I install perl-testing?

Run the command: npx killer-skills add affaan-m/everything-claude-code/perl-testing. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for perl-testing?

Key use cases include designing comprehensive test suites for Perl modules and applications, migrating tests from Test::More to Test2::V0, and debugging failing Perl tests using the TDD workflow and test coverage analysis.

Which IDEs are compatible with perl-testing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for perl-testing?

Requires Perl environment with Test2::V0 and Test::More installed. Limited to Perl applications and modules.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add affaan-m/everything-claude-code/perl-testing. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use perl-testing immediately in the current project.

Related Skills

Looking for an alternative to perl-testing or another official skill for your workflow? Explore these related open-source skills.

  • flags (facebook) — Use when you need to check feature flag states, compare channels, or debug why a feature behaves differently across release channels.
  • extract-errors (facebook) — A React error-handling skill that automates extracting and assigning error codes, ensuring accurate and up-to-date error messages in React applications.
  • fix (facebook) — A code-optimization skill that automates formatting and linting using yarn prettier and linc.
  • flow (facebook) — Use when you need to run Flow type checking, or when seeing Flow type errors in React code.