coops-tdd-auto — automated testing with Behaviour-Driven TDD (will-head/coding-agent-launcher, community skill)

v1.0.0

About this Skill

Perfect for development agents that need automated, test-driven development with Behaviour-Driven TDD. coops-tdd-auto is an automated testing tool that follows the Behaviour-Driven TDD principle.

Features

Automatic detection of the project's test runner
Support for Behaviour-Driven TDD
Integration with configuration files such as package.json and pom.xml
Automated test execution following the Red → Green → Refactor cycle
Compatibility with a wide range of project structures and programming languages

Updated: 3/20/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 8/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
Review Score
8/11
Quality Score
33
Canonical Locale
en
Detected Body Locale
en


Why use this skill

Lets agents automate their testing process using Behaviour-Driven TDD, streamlining test-driven development by requiring tests to be written before implementation code and supporting a variety of test runners, such as those configured in package.json, pom.xml, or a Makefile.

Best for

Perfect for development agents that need automated, test-driven development with Behaviour-Driven TDD.

Actionable use cases for coops-tdd-auto

Automate Behaviour-Driven TDD for streamlined testing
Generate failing tests for new task items
Refactor code after successful test runs

! Safety and Limitations

  • Requires project configuration files such as package.json, pom.xml, or a Makefile for test-runner detection
  • Mandatory test-first development can slow the initial pace of development

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is coops-tdd-auto?

Perfect for development agents that need automated, test-driven development with Behaviour-Driven TDD. coops-tdd-auto is an automated testing tool that follows the Behaviour-Driven TDD principle.

How do I install coops-tdd-auto?

Run the command: npx killer-skills add will-head/coding-agent-launcher. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for coops-tdd-auto?

Key use cases include: automating Behaviour-Driven TDD for streamlined testing, generating failing tests for new task items, and refactoring code after successful test runs.

Which IDEs are compatible with coops-tdd-auto?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for coops-tdd-auto?

Requires project configuration files such as package.json, pom.xml, or a Makefile for test-runner detection. Mandatory test-first development can slow the initial pace of development.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add will-head/coding-agent-launcher. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use coops-tdd-auto immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

coops-tdd-auto

Learn how to set up coops-tdd-auto to improve your development workflow with automated tests and Behaviour-Driven TDD.

SKILL.md

Behaviour-Driven TDD — Automated Mode

TDD is mandatory. Do not write implementation code before writing a failing test.

Before You Start

Detect the project's test runner from config files (package.json, pom.xml, Makefile, etc.). If ambiguous, infer from the language and project structure. Verify by running the suite once and confirming it exits cleanly.

Red → Green → Refactor

Red — Write a Failing Test

  1. Derive the behaviour from the current task item. One task item = one or more behaviours = one or more tests.
  2. Write a test that specifies that behaviour from the caller's perspective.
  3. Test at the public interface (exports, public methods, observable outcomes). Never test internals.
  4. Run the test. Confirm it fails for the right reason — the behaviour is absent, not a syntax error or import problem.

If a task item maps to multiple distinct behaviours, write one test per behaviour — do not combine. If a task item is too vague to derive a testable behaviour, flag it rather than guessing.
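The one-test-per-behaviour rule can be illustrated like this; the `validate_password` function and its error strings are hypothetical, invented only to show the split:

```python
# Hypothetical validator used only to illustrate the rule.
def validate_password(password: str) -> list[str]:
    errors = []
    if len(password) < 8:
        errors.append("too short")
    if not any(c.isdigit() for c in password):
        errors.append("no digit")
    return errors

# One behaviour per test, not one combined "test_validate_password".
def test_when_password_is_too_short_should_fail_validation():
    assert "too short" in validate_password("abc1")

def test_when_password_has_no_digit_should_fail_validation():
    assert "no digit" in validate_password("abcdefgh")
```

Each test can now fail independently, so a red run points at exactly one missing behaviour.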

Test file naming — one test file per behaviour where practical, named for the behaviour being tested.

Naming — when_[condition]_should_[outcome], adapted to language conventions:

  • when_balance_is_zero_should_reject_withdrawal
  • when_email_is_invalid_should_raise_error
  • when_password_is_too_short_should_fail_validation

Structure — Arrange / Act / Assert:

```python
def test_when_balance_is_zero_should_reject_withdrawal():
    # Arrange
    account = Account(balance=0)

    # Act / Assert
    with pytest.raises(InsufficientFundsError):
        account.withdraw(10)
```

Use Evident Data: only include values that affect the test outcome. Use builders or helpers to hide irrelevant setup noise.
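A minimal sketch of the Evident Data idea: a builder gives irrelevant fields defaults so only the value that drives the outcome appears in the test. The extra `Account` fields here are hypothetical setup noise:

```python
from dataclasses import dataclass

# Hypothetical account with fields that are irrelevant to most tests.
@dataclass
class Account:
    owner: str
    currency: str
    balance: int

def make_account(balance: int = 100) -> Account:
    """Builder: irrelevant fields get defaults so tests show only what matters."""
    return Account(owner="anyone", currency="XXX", balance=balance)

def test_balance_is_the_only_evident_value():
    # Only the deciding value appears at the call site.
    account = make_account(balance=0)
    assert account.balance == 0
```

A reader of the test sees immediately that the zero balance, and nothing else, is what the test is about.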

Green — Make the Test Pass

Write the minimum code to make the test pass. Nothing more. Speed over design — cleanup is for Refactor.

Do not write code for requirements not expressed in a test.
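For the failing withdrawal test shown earlier, a Green step might look like this: a guard and an exception, and nothing the test does not demand. This `Account` is a sketch under that assumption, not the skill's own code:

```python
class InsufficientFundsError(Exception):
    pass

class Account:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        # Minimum code to pass the failing test: no overdraft policy,
        # no ledger, no currency handling. None of that is demanded
        # by any test yet, so none of it gets written.
        if amount > self.balance:
            raise InsufficientFundsError()
        self.balance -= amount
```

Anything beyond the guard would be speculative code, which the rules below forbid.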

Refactor — Improve the Design

With tests green, improve structure without changing behaviour:

  • Rename, extract, reorganise — do not change what the code does.
  • Run all tests after each change.
  • Do NOT modify or add tests during refactoring.
  • Apply coding standards (loaded at session start via coding-standards) during this phase — standards compliance belongs here, not in Green. Green stays minimal.

Repeat the cycle for the next behaviour.
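A Refactor-phase sketch, continuing the hypothetical account example: extracting the funds check into a named helper changes structure only, so observable behaviour is identical and no test is touched:

```python
class InsufficientFundsError(Exception):
    pass

class Account:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        self._ensure_funds(amount)  # helper extracted during Refactor
        self.balance -= amount

    def _ensure_funds(self, amount: int) -> None:
        # Extracted logic: observable behaviour is unchanged, so existing
        # tests keep passing and this private helper gets no test of its own.
        if amount > self.balance:
            raise InsufficientFundsError()
```

After each such extraction, the whole suite is run again before the next change.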

Scope Control

Each test should be the most obvious, smallest step toward the requirement. If you find yourself writing a lot of code to make one test pass, the test is probably too large — break it into a smaller first step. Only add code needed to satisfy a behavioural requirement expressed in a test.

Modifying Existing Code

  1. Run the full test suite. Confirm all tests pass.
  2. Make the change.
  3. Run the full test suite again. All tests must still pass.
  4. If tests fail, the implementation is wrong — revert and try again. Do not modify tests to compensate.

Test Rules

  • Never write production code except to make a failing test pass.
  • Tests must come from task requirements. Do not invent scenarios not specified by the task.
  • Only write a test in response to a new behaviour — never in response to a new method or class.
  • Test at the public interface only. Never test private or internal methods or classes.
  • Never expose internals just to test them.
  • Never modify existing tests to make implementation changes pass. This is reward hacking.
  • Tests must be fast (seconds, not minutes) and binary (pass/fail, no interpretation needed).
  • Code coverage is a tool for guiding refactoring, not a target.

Test Doubles

  • Do NOT mock internal collaborators to isolate classes.
  • Only use test doubles for slow I/O (network, database, filesystem, message queues).
  • Prefer in-memory implementations over mocks — they are more honest about behaviour.

Refactoring Rules

  • Refactoring = changing implementation without changing behaviour.
  • During refactoring, existing tests MUST NOT be modified or deleted.
  • New classes or methods extracted during refactoring do not get their own tests — they are covered via the public interface.
  • If tests break during refactoring, the tests were coupled to implementation details. Flag this to the user rather than fixing the tests.

What Not To Do

If you catch yourself doing any of these, stop and revert:

  • Writing tests after implementation rather than before.
  • Modifying or deleting existing tests to make implementation changes pass.
  • Writing speculative code not required by any test.
  • Writing a test in response to a new method or class rather than a new behaviour.

For reasoning behind these rules, see references/tdd-philosophy.md.

Related Skills

Looking for an alternative to coops-tdd-auto or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw): Your own personal AI assistant. Any OS. Any platform. The lobster way. 🦞
  • widget-generator (f): Generate customizable widget plugins for the prompts.chat flow system
  • flags (vercel): The React framework
  • pr-review (pytorch): Tensors and dynamic neural networks in Python with strong GPU acceleration