characterization-testing — AI agent skill for Claude Code

v1.0.0

About this Skill

Ideal for refactoring agents that need to ensure legacy-code stability through characterization tests. It creates characterization tests to capture existing behavior before refactoring. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Features

Characterization Testing
What is Characterization Testing?
Creates characterization tests to capture existing behavior before refactoring
Use when writing char tests, creating golden tests, preparing for refactoring, or preserving existing behavior
This ensures that refactoring does not introduce regressions

3balljugglerYu

Updated: 3/9/2026

Skill Overview

Start with fit, limitations, and setup before diving into the repository.


Why use this skill

Enables agents to capture the current behavior of existing code using Golden Master tests, ensuring that no regressions are introduced during refactoring and providing a safety net for legacy code with unclear specifications through characterization tests.

Best for

Ideal for refactoring agents that need to ensure legacy-code stability through characterization tests.

Practical Use Cases for characterization-testing

Create characterization tests for legacy code
Ensure that refactoring does not introduce regressions
Capture the current behavior of existing code before major updates

! Safety and Limitations

  • Requires access to the existing codebase
  • Limited to testing current behavior, not correctness
  • May not be effective for code with rapidly changing specifications

About The Source

The section below comes from the upstream repository. Use it as supporting material alongside the fit, use-case, and installation summary on this page.


FAQ and Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is characterization-testing?

It is ideal for refactoring agents that need to ensure legacy-code stability through characterization tests. It creates characterization tests to capture existing behavior before refactoring, and supports Claude Code, Cursor, and Windsurf workflows.

How do I install characterization-testing?

Run the command: npx killer-skills add 3balljugglerYu/ai_coordinate/characterization-testing. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for characterization-testing?

The main use cases include: creating characterization tests for legacy code, ensuring that refactoring does not introduce regressions, and capturing the current behavior of existing code before major updates.

Which IDEs are compatible with characterization-testing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for a unified installation.

Does characterization-testing have limitations?

It requires access to the existing codebase, is limited to testing current behavior (not correctness), and may not be effective for code with rapidly changing specifications.

How to install this skill

  1. Open the terminal

    Open the terminal or command line in the project directory.

  2. Run the installation command

    Run: npx killer-skills add 3balljugglerYu/ai_coordinate/characterization-testing. The CLI will automatically detect your IDE or agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use characterization-testing immediately in the current project.

! Source Notes

This page remains useful for installation and source reference. Before using it, review the fit, limitations, and upstream repository notes above.

Upstream Repository Material

The section below comes from the upstream repository. Use it as supporting material alongside the fit, use-case, and installation summary on this page.

Upstream Source

characterization-testing

SKILL.md

Characterization Testing

You are helping the user create characterization tests that capture the current behavior of existing code before refactoring. This ensures that refactoring does not introduce regressions.

What is Characterization Testing?

Characterization tests (also known as "Golden Master Tests" or "Snapshot Tests") record the current behavior of code, not necessarily the "correct" behavior. For legacy code with unclear specifications, the current behavior becomes the specification.

Reference: See docs/TEST_PLAN.md sections 4.3 and 8.3 for detailed patterns.
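The golden-master idea can be sketched outside any particular framework. The following is a minimal, hypothetical Python illustration (the skill itself generates Dart/ApprovalTests code; `legacy_price` and the file name are invented for the example): run the code, serialize what it did, and diff against an approved snapshot file.

```python
# Minimal golden-master loop: run the code, serialize the behavior,
# and diff against an approved file. The first run records the snapshot;
# later runs fail if the observed behavior drifts.
from pathlib import Path

def legacy_price(qty: int) -> float:
    # Stand-in for legacy code whose current behavior we want to freeze.
    return qty * 9.99 if qty < 10 else qty * 8.99

def characterize(approved: Path) -> str:
    # Record current behavior for a fixed set of inputs.
    received = "\n".join(
        f"legacy_price({q}) = {legacy_price(q)}" for q in (0, 1, 9, 10, 100)
    )
    if not approved.exists():
        approved.write_text(received)  # first run: the snapshot becomes the spec
        return "approved"
    return "ok" if approved.read_text() == received else "DIFF - behavior changed"

print(characterize(Path("legacy_price.approved.txt")))  # first run prints: approved
```

Note that the snapshot records what the code does, not what it should do; an incorrect result is still captured and protected against accidental change.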

Workflow

Step 1: Accept Target

Accept the class name as argument:

  • /char-test AuthViewModel - for ViewModel/Repository/Service
  • /char-test EventPage --widget - for Widget with golden tests

Step 2: Read and Analyze Target Class

  1. Read the target file from the lib/ directory
  2. List all public methods and their signatures
  3. Identify state properties (for ViewModels)
  4. Note dependencies (repositories, services, external APIs)
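The inventory step above can be mechanized. The real skill reads Dart source files, but as a language-agnostic analogy, this Python sketch (with a hypothetical `AuthViewModel`) enumerates a class's public methods and signatures using the standard `inspect` module:

```python
# List a class's public methods and their signatures - the inventory that
# Step 2 builds before choosing input patterns for each method.
import inspect

class AuthViewModel:
    # Hypothetical target class for illustration only.
    def sign_in(self, email: str, password: str) -> bool: ...
    def sign_out(self) -> None: ...
    def _refresh_token(self) -> None: ...  # private: excluded below

public = {
    name: str(inspect.signature(fn))
    for name, fn in inspect.getmembers(AuthViewModel, inspect.isfunction)
    if not name.startswith("_")
}
for name, sig in sorted(public.items()):
    print(f"{name}{sig}")
```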

Step 3: Identify Input Patterns

For each public method, identify test scenarios:

Pattern    Description               Example
Normal     Valid inputs              signIn(valid_email, valid_password)
Error      Invalid/malformed inputs  signIn("", "")
Boundary   Edge cases                signIn(max_length_email, min_password)
Null       Nullable parameters       fetchData(null)
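The patterns above can be driven generically: feed each pattern's inputs to the method and record the outcome, treating exceptions as behavior too. This hedged Python sketch (with a hypothetical `sign_in` standing in for a real target method) mirrors the try/record loop the generated test uses:

```python
# Run one method through the Normal/Error/Boundary/Null patterns and
# record the outcome (return value or exception) as one line per pattern.
def sign_in(email, password):
    # Hypothetical target method for illustration only.
    if email is None or password is None:
        raise TypeError("null argument")
    if "@" not in email or not password:
        raise ValueError("invalid credentials")
    return f"session:{email}"

patterns = {
    "normal":   ("user@example.com", "s3cret"),
    "error":    ("", ""),
    "boundary": ("a" * 254 + "@x.io", "1"),
    "null":     (None, None),
}

results = []
for name, args in patterns.items():
    try:
        results.append(f"sign_in({name}): {sign_in(*args)}")
    except Exception as e:  # exceptions are recorded behavior, not failures
        results.append(f"sign_in({name}): threw {type(e).__name__}")

print("\n".join(results))
```

The joined `results` string is exactly the kind of text snapshot that gets approved and diffed on later runs.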

Step 4: Generate Test Code

For ViewModel/Repository/Service (ApprovalTests)

Generate test file at: test/characterization/{feature}/{class}_char_test.dart

dart
import 'package:approval_tests/approval_tests.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:mockito/annotations.dart';
import 'package:mockito/mockito.dart';

// Import target class
import 'package:live_view/ui/{feature}/{class}.dart';

// Import mocks
import '../../../mocks/mock_locator.dart';

@GenerateMocks([/* dependencies */])
import '{class}_char_test.mocks.dart';

@Tags(['characterization'])
void main() {
  group('Characterization: {ClassName}', () {
    late {ClassName} target;
    late MockIClock mockClock;
    late MockIDioClient mockDioClient;

    setUp(() async {
      mockClock = MockIClock();
      mockDioClient = MockIDioClient();

      // Fix time for deterministic tests
      when(mockClock.now()).thenReturn(DateTime(2025, 1, 30, 12, 0, 0));

      // Return recorded responses
      when(mockDioClient.get(any)).thenAnswer((_) async =>
          Response(data: recordedApiResponse, statusCode: 200));

      await setupMockLocatorForTest(
        clock: mockClock,
        dioClient: mockDioClient,
      );

      target = {ClassName}();
    });

    tearDown(() async {
      await locator.reset();
    });

    test('CHAR-{PREFIX}-001: {methodName} states snapshot', () async {
      final results = <String>[];

      // Pattern 1: Normal case
      try {
        final result = await target.{methodName}(/* normal inputs */);
        results.add('{methodName}(normal): $result, state=${target.state}');
      } catch (e) {
        results.add('{methodName}(normal): threw $e');
      }

      // Pattern 2: Error case
      try {
        final result = await target.{methodName}(/* error inputs */);
        results.add('{methodName}(error): $result');
      } catch (e) {
        results.add('{methodName}(error): threw $e');
      }

      // Pattern 3: Boundary case
      try {
        final result = await target.{methodName}(/* boundary inputs */);
        results.add('{methodName}(boundary): $result');
      } catch (e) {
        results.add('{methodName}(boundary): threw $e');
      }

      // Compare with approved snapshot
      Approvals.verify(results.join('\n'));
    });

    test('CHAR-{PREFIX}-002: {methodName} response snapshot', () async {
      final result = await target.{methodName}(/* inputs */);

      final snapshot = {
        'result': result?.toJson(),
        'state': {
          'property1': target.state.property1,
          'property2': target.state.property2,
        },
      };

      Approvals.verifyAsJson(snapshot);
    });
  });
}

For Widget (Golden Tests)

Generate test file at: test/characterization/widgets/{widget}_char_test.dart

dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

// Import target widget
import 'package:live_view/ui/{feature}/{widget}.dart';

// Import test helpers
import '../../helpers/test_app.dart';

@Tags(['characterization', 'golden'])
void main() {
  group('Characterization: {WidgetName}', () {
    testWidgets('CHAR-WIDGET-001: {WidgetName} loading state', (tester) async {
      await tester.pumpWidget(
        TestApp(child: {WidgetName}()),
      );

      await expectLater(
        find.byType({WidgetName}),
        matchesGoldenFile('goldens/{widget}_loading.png'),
      );
    });

    testWidgets('CHAR-WIDGET-002: {WidgetName} with data', (tester) async {
      await tester.pumpWidget(
        TestApp(
          overrides: [{provider}.overrideWith((ref) => mock{State})],
          child: {WidgetName}(),
        ),
      );
      await tester.pumpAndSettle();

      await expectLater(
        find.byType({WidgetName}),
        matchesGoldenFile('goldens/{widget}_loaded.png'),
      );
    });

    testWidgets('CHAR-WIDGET-003: {WidgetName} empty state', (tester) async {
      await tester.pumpWidget(
        TestApp(
          overrides: [{provider}.overrideWith((ref) => emptyMock)],
          child: {WidgetName}(),
        ),
      );
      await tester.pumpAndSettle();

      await expectLater(
        find.byType({WidgetName}),
        matchesGoldenFile('goldens/{widget}_empty.png'),
      );
    });

    testWidgets('CHAR-WIDGET-004: {WidgetName} error state', (tester) async {
      await tester.pumpWidget(
        TestApp(
          overrides: [{provider}.overrideWith((ref) => errorMock)],
          child: {WidgetName}(),
        ),
      );
      await tester.pumpAndSettle();

      await expectLater(
        find.byType({WidgetName}),
        matchesGoldenFile('goldens/{widget}_error.png'),
      );
    });
  });
}

Step 5: Output Location

Type            Output Path
ViewModel       test/characterization/{feature}/{class}_char_test.dart
Repository      test/characterization/domain/repository/{class}_char_test.dart
Service         test/characterization/service/{class}_char_test.dart
Widget          test/characterization/widgets/{widget}_char_test.dart
Golden files    test/characterization/goldens/{widget}_{state}.png
Approval files  test/characterization/{feature}/{class}_char_test.{ID}.approved.txt

Step 6: Provide Next Steps

After generating the test file, instruct the user:

For ApprovalTests (ViewModel/Repository/Service):

bash
# 1. Run the test (first run generates .received files)
flutter test test/characterization/{feature}/

# 2. Review the .received files
# 3. If correct, rename to .approved
mv test/characterization/{feature}/{class}_char_test.CHAR-{PREFIX}-001.received.txt \
   test/characterization/{feature}/{class}_char_test.CHAR-{PREFIX}-001.approved.txt

# 4. Commit the .approved files
git add test/characterization/{feature}/*.approved.txt

For Golden Tests (Widget):

bash
# 1. Generate golden files
flutter test --update-goldens test/characterization/widgets/

# 2. Review generated PNGs
ls test/characterization/goldens/

# 3. Commit golden files
git add test/characterization/goldens/*.png

Step 7: Update Progress Tracker

IMPORTANT: After creating the characterization test file, you MUST update docs/test-progress.yaml:

  1. Find the target class entry in the appropriate tier (tier1, tier2, tier3)
  2. Update the following fields:
    • status: Change from pending to char_test_created
    • char_test: Set to the test file path

Example update:

yaml
# Before
LiveViewRepository:
  status: pending
  char_test: null

# After
LiveViewRepository:
  status: char_test_created
  char_test: test/characterization/domain/repository/live_view_repository_char_test.dart

This step is mandatory. Do not skip it.

Deterministic Testing Techniques

Characterization tests must produce the same output for the same input. Handle non-deterministic elements:

Element               Solution
DateTime.now()        Mock with IClock interface
Random values         Seed fixed or mock
Network responses     Mock with IDioClient
File paths            Use relative or test directories
Platform differences  Run on CI environment (ubuntu-latest)
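The clock row in the table above is the most common case. As a language-agnostic sketch of the same idea as the `IClock` mock (the class and function names here are invented for illustration), inject the clock instead of reading the system time directly:

```python
# Inject the clock rather than calling datetime.now() directly, so a test
# can pin time and the characterization snapshot stays byte-for-byte stable.
from datetime import datetime

class SystemClock:
    def now(self) -> datetime:
        return datetime.now()

class FixedClock:
    def __init__(self, instant: datetime):
        self._instant = instant

    def now(self) -> datetime:
        return self._instant

def greeting(clock) -> str:
    # Time-dependent behavior under test.
    return f"report generated at {clock.now():%Y-%m-%d %H:%M}"

# Production uses SystemClock(); tests pin the time for deterministic output.
print(greeting(FixedClock(datetime(2025, 1, 30, 12, 0))))
# prints: report generated at 2025-01-30 12:00
```

The same injection pattern applies to the other rows: seeded random sources and recorded network responses are just more clocks to pin.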

Post-Refactoring Verification

After completing the refactoring (e.g., /interface-create):

bash
# Run characterization tests
flutter test test/characterization/{feature}/

# Expected results:
# - No diff = Refactoring successful
# - Diff found = Behavior changed, review and either:
#   - Fix the refactoring
#   - Approve as intentional change

Checklist

Before completing characterization test generation:

  • All public methods identified
  • Normal, error, boundary patterns covered
  • Non-deterministic elements mocked
  • Test file created at correct path
  • Import statements correct
  • Mock generation annotations added
  • Tags added (@Tags(['characterization']))
  • User instructed on next steps (generate/approve snapshots)
  • docs/test-progress.yaml updated (status: char_test_created, char_test: path)


Related Skills

Looking for an alternative to characterization-testing or another community skill for your workflow? Explore these related open-source skills.


openclaw-release-maintainer (openclaw)

Use this skill for release and publish-time workflows. It covers ai, assistant, and crustacean workflows. Supports Claude Code, Cursor, and Windsurf.

widget-generator (f)

Generates customizable widget plugins for the prompts.chat feed system. It covers ai, artificial-intelligence, and awesome-list workflows. Supports Claude Code, Cursor, and Windsurf.

flags (vercel)

Use this skill when adding or changing framework feature flags in Next.js internals. It covers blog, browser, and compiler workflows. Supports Claude Code, Cursor, and Windsurf.

pr-review (pytorch)

If the user invokes /pr-review with no arguments, it does not perform a review. It covers autograd, deep-learning, and gpu workflows. Supports Claude Code, Cursor, and Windsurf.