coops-tdd-auto — TDD Automation. coops-tdd-auto, coding-agent-launcher, community, TDD automation, IDE skills, test-driven development tool, AI testing tool, automated test code generation, package.json test configuration, pom.xml test configuration

v1.0.0

About This Skill

Suited to development agents that need automated test-driven development. Uses Behaviour-Driven TDD. coops-tdd-auto is an AI tool that automates TDD.

Features

Automatically detects the project's test runner
Supports configuration files such as package.json, pom.xml, and Makefile
Implements the Red → Green → Refactor TDD cycle
Generates test code automatically
Supports project-structure inference for multi-language projects

will-head
Updated: 3/20/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 8/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
Review Score: 8/11
Quality Score: 33
Canonical Locale: en
Detected Body Locale: en

Suited to development agents that need automated test-driven development. Uses Behaviour-Driven TDD. coops-tdd-auto is an AI tool that automates TDD.

Why Use This Skill

It gives agents the ability to automate the testing process, streamlines test-driven development using Behaviour-Driven TDD, supports writing the required tests before implementation code, and works with the various test runners configured in package.json, pom.xml, or a Makefile.

Best For

Suited to development agents that need automated test-driven development. Uses Behaviour-Driven TDD.

Actionable Use Cases for coops-tdd-auto

Automating Behaviour-Driven TDD for aligned tests
Generating failing tests for new task items
Refactoring code after a successful test run

! Security & Limitations

  • Requires a project configuration file such as package.json, pom.xml, or a Makefile for test-runner detection
  • Enforced test-driven development can slow down early development

Why this page is reference-only

  • The current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

Boot Container Sandbox

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is coops-tdd-auto?

Suited to development agents that need automated test-driven development. Uses Behaviour-Driven TDD. coops-tdd-auto is an AI tool that automates TDD.

How do I install coops-tdd-auto?

Run the command: npx killer-skills add will-head/coding-agent-launcher/coops-tdd-auto. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for coops-tdd-auto?

Key use cases include: automating Behaviour-Driven TDD for aligned tests, generating failing tests for new task items, and refactoring code after a successful test run.

Which IDEs are compatible with coops-tdd-auto?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for coops-tdd-auto?

It requires a project configuration file such as package.json, pom.xml, or a Makefile for test-runner detection. Enforced test-driven development can slow down early development.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add will-head/coding-agent-launcher/coops-tdd-auto. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use coops-tdd-auto immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

coops-tdd-auto

Install coops-tdd-auto, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Supporting Evidence

Behaviour-Driven TDD — Automated Mode

TDD is mandatory. Do not write implementation code before writing a failing test.

Before You Start

Detect the project's test runner from config files (package.json, pom.xml, Makefile, etc.). If ambiguous, infer from the language and project structure. Verify by running the suite once and confirming it exits cleanly.

Red → Green → Refactor

Red — Write a Failing Test

  1. Derive the behaviour from the current task item. One task item = one or more behaviours = one or more tests.
  2. Write a test that specifies that behaviour from the caller's perspective.
  3. Test at the public interface (exports, public methods, observable outcomes). Never test internals.
  4. Run the test. Confirm it fails for the right reason — the behaviour is absent, not a syntax error or import problem.

If a task item maps to multiple distinct behaviours, write one test per behaviour — do not combine. If a task item is too vague to derive a testable behaviour, flag it rather than guessing.

Test file naming — one test file per behaviour where practical, named for the behaviour being tested.

Naming — when_[condition]_should_[outcome], adapted to language conventions:

  • when_balance_is_zero_should_reject_withdrawal
  • when_email_is_invalid_should_raise_error
  • when_password_is_too_short_should_fail_validation

Structure — Arrange / Act / Assert:

```python
import pytest

def test_when_balance_is_zero_should_reject_withdrawal():
    # Arrange
    account = Account(balance=0)

    # Act / Assert
    with pytest.raises(InsufficientFundsError):
        account.withdraw(10)
```

Use Evident Data: only include values that affect the test outcome. Use builders or helpers to hide irrelevant setup noise.
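A minimal sketch of Evident Data with a builder (the Account shape and its defaults are hypothetical, not taken from this skill):

```python
class Account:
    """Illustrative stand-in for the object under test."""
    def __init__(self, balance=0, owner="", currency="USD"):
        self.balance, self.owner, self.currency = balance, owner, currency

def make_account(**overrides):
    """Builder: realistic defaults for everything, so each test
    overrides only the values that affect its outcome."""
    defaults = {"balance": 100, "owner": "any-owner", "currency": "USD"}
    defaults.update(overrides)
    return Account(**defaults)

# The test body names only the evident value: balance.
account = make_account(balance=0)
assert account.balance == 0
```

Owner and currency stay hidden in the builder because they do not affect a withdrawal-rejection test; a reader sees at a glance which value drives the outcome.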

Green — Make the Test Pass

Write the minimum code to make the test pass. Nothing more. Speed over design — cleanup is for Refactor.

Do not write code for requirements not expressed in a test.
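Continuing the withdrawal example, a minimal Green step might look like this (a sketch of the discipline; the skill prescribes the rule, not this exact code):

```python
class InsufficientFundsError(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Just enough to make the failing test pass: reject when
        # funds are insufficient. No deposit(), no overdraft rules;
        # those wait until a test demands them.
        if amount > self.balance:
            raise InsufficientFundsError("insufficient funds")
        self.balance -= amount
```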

Refactor — Improve the Design

With tests green, improve structure without changing behaviour:

  • Rename, extract, reorganise — do not change what the code does.
  • Run all tests after each change.
  • Do NOT modify or add tests during refactoring.
  • Apply coding standards (loaded at session start via coding-standards) during this phase — standards compliance belongs here, not in Green. Green stays minimal.
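As an illustration, a behaviour-preserving extraction on the withdrawal code might look like this (names are illustrative, not from the skill itself):

```python
class InsufficientFundsError(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Refactor: the guard clause was extracted for readability.
        # Observable behaviour is identical, so existing tests pass
        # unchanged, and the new private method gets no test of its
        # own; it is covered through the public withdraw() interface.
        self._require_funds(amount)
        self.balance -= amount

    def _require_funds(self, amount):
        if amount > self.balance:
            raise InsufficientFundsError("insufficient funds")
```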

Repeat the cycle for the next behaviour.

Scope Control

Each test should be the most obvious, smallest step toward the requirement. If you find yourself writing a lot of code to make one test pass, the test is probably too large — break it into a smaller first step. Only add code needed to satisfy a behavioural requirement expressed in a test.

Modifying Existing Code

  1. Run the full test suite. Confirm all tests pass.
  2. Make the change.
  3. Run the full test suite again. All tests must still pass.
  4. If tests fail, the implementation is wrong — revert and try again. Do not modify tests to compensate.

Test Rules

  • Never write production code except to make a failing test pass.
  • Tests must come from task requirements. Do not invent scenarios not specified by the task.
  • Only write a test in response to a new behaviour — never in response to a new method or class.
  • Test at the public interface only. Never test private or internal methods or classes.
  • Never expose internals just to test them.
  • Never modify existing tests to make implementation changes pass. This is reward hacking.
  • Tests must be fast (seconds, not minutes) and binary (pass/fail, no interpretation needed).
  • Code coverage is a tool for guiding refactoring, not a target.

Test Doubles

  • Do NOT mock internal collaborators to isolate classes.
  • Only use test doubles for slow I/O (network, database, filesystem, message queues).
  • Prefer in-memory implementations over mocks — they are more honest about behaviour.
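A sketch of that preference (the repository interface here is hypothetical):

```python
class InMemoryUserRepository:
    """In-memory stand-in for a database-backed repository.
    Unlike a mock, it enforces real rules (e.g. unique ids),
    so tests stay honest about how the collaborator behaves."""
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        if user_id in self._users:
            raise ValueError(f"duplicate id: {user_id}")
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)
```

A mock configured to return a canned value would happily accept two adds with the same id; the in-memory fake rejects the second, just as the real database would.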

Refactoring Rules

  • Refactoring = changing implementation without changing behaviour.
  • During refactoring, existing tests MUST NOT be modified or deleted.
  • New classes or methods extracted during refactoring do not get their own tests — they are covered via the public interface.
  • If tests break during refactoring, the tests were coupled to implementation details. Flag this to the user rather than fixing the tests.

What Not To Do

If you catch yourself doing any of these, stop and revert:

  • Writing tests after implementation rather than before.
  • Modifying or deleting existing tests to make implementation changes pass.
  • Writing speculative code not required by any test.
  • Writing a test in response to a new method or class rather than a new behaviour.

For reasoning behind these rules, see references/tdd-philosophy.md.

Related Skills

Looking for an alternative to coops-tdd-auto or another community skill for your workflow? Explore these related open-source skills.

View All

openclaw-release-maintainer

openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

333.8k
0
AI

widget-generator

f

Generates customizable widget plugins for the prompts.chat feed system

149.6k
0
AI

flags

vercel

React framework

138.4k
0
Browser

pr-review

pytorch

Tensors and dynamic neural networks in Python with strong GPU acceleration

98.6k
0
Developer