coops-tdd-auto — Automated TDD
Keywords: coops-tdd-auto, coding-agent-launcher, community, automated TDD, IDE skills, test runner detection, test execution, project configuration, package.json, pom.xml

v1.0.0

About this Skill

Perfect for development agents that need automated test-driven development with Behaviour-Driven TDD. coops-tdd-auto is a TDD automation tool that detects the project's test runner and runs tests automatically.

Features

Automatic detection of the project's test runner
Test execution before implementation code is written
Project configuration support via package.json, pom.xml, and Makefile
Verification that the test runner exits cleanly
Language-based inference of the test runner when configuration is ambiguous

# Core Topics

will-head
Updated: 3/20/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 8/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution

Review Score: 8/11
Quality Score: 33
Canonical Locale: en
Detected Body Locale: en


Why use this skill

Enables agents to automate testing with Behaviour-Driven TDD, streamlining test-driven development by requiring tests to be written before implementation code and supporting multiple test runners, such as those configured in package.json, pom.xml, or a Makefile.

Best for

Perfect for development agents that need automated test-driven development with Behaviour-Driven TDD.

Practical Use Cases for coops-tdd-auto

Automate Behaviour-Driven TDD for streamlined testing
Generate failing tests for new task items
Refactor code after successful test runs

! Safety and Limitations

  • Requires project configuration files such as package.json, pom.xml, or a Makefile for test runner detection
  • Mandatory test-driven development can slow the initial pace of development

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Try this skill in a zero-setup browser sandbox powered by WebContainers. No installation required.

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is coops-tdd-auto?

Perfect for development agents that need automated test-driven development with Behaviour-Driven TDD. coops-tdd-auto is a TDD automation tool that detects the project's test runner and runs tests automatically.

How do I install coops-tdd-auto?

Run the command: npx killer-skills add will-head/coding-agent-launcher/coops-tdd-auto. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for coops-tdd-auto?

Key use cases include: automating Behaviour-Driven TDD for streamlined testing, generating failing tests for new task items, and refactoring code after successful test runs.

Which IDEs are compatible with coops-tdd-auto?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for coops-tdd-auto?

Requires project configuration files such as package.json, pom.xml, or a Makefile for test runner detection. Mandatory test-driven development can slow the initial pace of development.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add will-head/coding-agent-launcher/coops-tdd-auto. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use coops-tdd-auto immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

coops-tdd-auto

Learn how to configure coops-tdd-auto to improve development efficiency with automated TDD.

SKILL.md
Supporting Evidence

Behaviour-Driven TDD — Automated Mode

TDD is mandatory. Do not write implementation code before writing a failing test.

Before You Start

Detect the project's test runner from config files (package.json, pom.xml, Makefile, etc.). If ambiguous, infer from the language and project structure. Verify by running the suite once and confirming it exits cleanly.

Red → Green → Refactor

Red — Write a Failing Test

  1. Derive the behaviour from the current task item. One task item = one or more behaviours = one or more tests.
  2. Write a test that specifies that behaviour from the caller's perspective.
  3. Test at the public interface (exports, public methods, observable outcomes). Never test internals.
  4. Run the test. Confirm it fails for the right reason — the behaviour is absent, not a syntax error or import problem.

If a task item maps to multiple distinct behaviours, write one test per behaviour — do not combine. If a task item is too vague to derive a testable behaviour, flag it rather than guessing.

Test file naming — one test file per behaviour where practical, named for the behaviour being tested.

Naming — when_[condition]_should_[outcome], adapted to language conventions:

  • when_balance_is_zero_should_reject_withdrawal
  • when_email_is_invalid_should_raise_error
  • when_password_is_too_short_should_fail_validation

Structure — Arrange / Act / Assert:

```python
import pytest

def test_when_balance_is_zero_should_reject_withdrawal():
    # Arrange
    account = Account(balance=0)

    # Act / Assert
    with pytest.raises(InsufficientFundsError):
        account.withdraw(10)
```

Use Evident Data: only include values that affect the test outcome. Use builders or helpers to hide irrelevant setup noise.
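A minimal builder sketch for the Evident Data guidance above; the `Account` fields and defaults are hypothetical. The builder supplies sensible defaults so a test only names the values that affect its outcome.

```python
class Account:
    def __init__(self, balance=0, owner="any-owner", currency="USD"):
        self.balance = balance
        self.owner = owner
        self.currency = currency

def an_account(**overrides):
    """Builder: hides irrelevant setup noise behind defaults."""
    defaults = {"balance": 100, "owner": "any-owner", "currency": "USD"}
    defaults.update(overrides)
    return Account(**defaults)

# The test now states only the evident value:
account = an_account(balance=0)
```

A reader of the test sees immediately that only the zero balance matters; owner and currency are noise the builder absorbs.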

Green — Make the Test Pass

Write the minimum code to make the test pass. Nothing more. Speed over design — cleanup is for Refactor.

Do not write code for requirements not expressed in a test.
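Continuing the withdrawal example, a Green-step sketch would add only enough to make that single failing test pass; the `Account` shape is an assumption carried over from the earlier test.

```python
class InsufficientFundsError(Exception):
    pass

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        # Minimum code to pass the failing test: no fees, no logging,
        # no overdraft policy until a test demands them.
        if amount > self.balance:
            raise InsufficientFundsError()
        self.balance -= amount
```

Anything beyond this guard (validation of negative amounts, audit trails) waits for a test that expresses it.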

Refactor — Improve the Design

With tests green, improve structure without changing behaviour:

  • Rename, extract, reorganise — do not change what the code does.
  • Run all tests after each change.
  • Do NOT modify or add tests during refactoring.
  • Apply coding standards (loaded at session start via coding-standards) during this phase — standards compliance belongs here, not in Green. Green stays minimal.

Repeat the cycle for the next behaviour.
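As an illustration of a behaviour-preserving refactor, the funds check from the Green sketch could be extracted into a private helper; names here are illustrative. The public behaviour, and therefore every existing test, is untouched, and the new helper gets no test of its own.

```python
class InsufficientFundsError(Exception):
    pass

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        self._ensure_funds(amount)   # extracted during Refactor
        self.balance -= amount

    def _ensure_funds(self, amount):
        # Private helper: covered via withdraw(), never tested directly.
        if amount > self.balance:
            raise InsufficientFundsError()
```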

Scope Control

Each test should be the most obvious, smallest step toward the requirement. If you find yourself writing a lot of code to make one test pass, the test is probably too large — break it into a smaller first step. Only add code needed to satisfy a behavioural requirement expressed in a test.

Modifying Existing Code

  1. Run the full test suite. Confirm all tests pass.
  2. Make the change.
  3. Run the full test suite again. All tests must still pass.
  4. If tests fail, the implementation is wrong — revert and try again. Do not modify tests to compensate.
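The four steps above can be sketched as a guard function; `run_suite`, `apply_change`, and `revert` are hypothetical callables standing in for your actual test command and version-control operations.

```python
def modify_safely(run_suite, apply_change, revert):
    """Run suite, apply the change, re-run; revert if the change broke anything."""
    if not run_suite():
        raise RuntimeError("Suite must be green before modifying code")
    apply_change()
    if not run_suite():
        revert()  # the implementation is wrong; never edit tests to compensate
        return False
    return True
```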

Test Rules

  • Never write production code except to make a failing test pass.
  • Tests must come from task requirements. Do not invent scenarios not specified by the task.
  • Only write a test in response to a new behaviour — never in response to a new method or class.
  • Test at the public interface only. Never test private or internal methods or classes.
  • Never expose internals just to test them.
  • Never modify existing tests to make implementation changes pass. This is reward hacking.
  • Tests must be fast (seconds, not minutes) and binary (pass/fail, no interpretation needed).
  • Code coverage is a tool for guiding refactoring, not a target.
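The public-interface rule above can be illustrated with a hypothetical `Cart`: assert on observable outcomes, never on internal state.

```python
class Cart:
    def __init__(self):
        self._items = []          # internal detail

    def add(self, price):
        self._items.append(price)

    def total(self):              # public, observable outcome
        return sum(self._items)

# Good: asserts through the public interface.
cart = Cart()
cart.add(3)
cart.add(4)
assert cart.total() == 7

# Bad (avoid): asserting on cart._items would couple the test to internals,
# and any refactor of the storage would break it.
```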

Test Doubles

  • Do NOT mock internal collaborators to isolate classes.
  • Only use test doubles for slow I/O (network, database, filesystem, message queues).
  • Prefer in-memory implementations over mocks — they are more honest about behaviour.
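A sketch of the in-memory preference: instead of mocking a database call, substitute a small real implementation. `UserRepository`-style naming and the `save`/`find` interface are assumptions for illustration.

```python
class InMemoryUserRepository:
    """Honest test double: real behaviour, just backed by a dict instead of a database."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)

def greet(repo, user_id):
    # Production code depends only on the repository interface.
    name = repo.find(user_id)
    return f"Hello, {name}" if name else "Hello, stranger"

repo = InMemoryUserRepository()
repo.save(1, "Ada")
assert greet(repo, 1) == "Hello, Ada"
assert greet(repo, 2) == "Hello, stranger"
```

Unlike a mock scripted with canned return values, the fake enforces real save-then-find semantics, so tests fail when the interaction is genuinely wrong.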

Refactoring Rules

  • Refactoring = changing implementation without changing behaviour.
  • During refactoring, existing tests MUST NOT be modified or deleted.
  • New classes or methods extracted during refactoring do not get their own tests — they are covered via the public interface.
  • If tests break during refactoring, the tests were coupled to implementation details. Flag this to the user rather than fixing the tests.

What Not To Do

If you catch yourself doing any of these, stop and revert:

  • Writing tests after implementation rather than before.
  • Modifying or deleting existing tests to make implementation changes pass.
  • Writing speculative code not required by any test.
  • Writing a test in response to a new method or class rather than a new behaviour.

For reasoning behind these rules, see references/tdd-philosophy.md.

Related Skills

Looking for an alternative to coops-tdd-auto or another community skill for your workflow? Explore these related open-source skills.

View all

openclaw-release-maintainer (openclaw): Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

widget-generator (f): Generate customizable widget plugins for the prompts.chat feed system

flags (vercel): The React Framework

pr-review (pytorch): Tensors and Dynamic neural networks in Python with strong GPU acceleration