use-tdd

A Test-Driven Development skill for AI agents, built on the Red-Green-Refactor cycle and the Iron Law: no production code without a failing test first.

v1.0.0

About this Skill

use-tdd is a Test-Driven Development skill that adheres to the Iron Law, requiring a failing test before any production code is written, and follows the Red-Green-Refactor cycle.

Features

Enforces the Iron Law: NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
Implements the Red-Green-Refactor cycle
Writes only the minimal code needed to pass each test
Deletes code that was written without a failing test first, forcing a clean start
Never keeps or adapts existing code as a "reference"

Author: mtthsnc
Updated: 3/6/2026

Quality Score

33 (Excellent, Top 5%), based on code quality & docs.

Installation

Universal install (auto-detect; works with Cursor, Windsurf, and VS Code):

```bash
npx killer-skills add mtthsnc/autonome/use-tdd
```

Agent Capability Analysis

The use-tdd MCP Server by mtthsnc is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Perfect for intent-driven AI Agents needing rigorous Test-Driven Development capabilities.

Core Value

Empowers agents to enforce Test-Driven Development: production code is driven by tests through the Red-Green-Refactor cycle, under the Iron Law of no production code without a failing test first.

Capabilities Granted by the use-tdd MCP Server

Writing minimal code to pass tests
Debugging using the Red-Green-Refactor cycle
Ensuring test reliability with the core principle of watching tests fail

Prerequisites & Limits

  • Requires strict adherence to the Iron Law
  • No exceptions for production code without prior failing tests
  • Deletes code not following Test-Driven Development principles

Project files

  • SKILL.md (4.4 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

Write the test first. Watch it fail. Write minimal code to pass.

Core principle: If you didn't watch the test fail, you don't know if it tests the right thing.

The Iron Law

NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST

Write code before the test? Delete it. Start over.

No exceptions:

  • Don't keep it as "reference"
  • Don't "adapt" it while writing tests
  • Delete means delete

Red-Green-Refactor

RED - Write Failing Test

Write one minimal test showing what should happen.

Requirements:

  • One behavior
  • Clear name
  • Real code (no mocks unless unavoidable)
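
As a concrete sketch, a RED-phase test might look like the following. The `slugify` function, the file names, and the Vitest runner are all illustrative assumptions, not something the skill prescribes (Jest's API is identical for this example).

```typescript
// slugify.test.ts -- written before any slugify implementation exists
import { describe, expect, it } from "vitest";
import { slugify } from "./slugify";

describe("slugify", () => {
  // One behavior, clear name, real code (no mocks)
  it("lowercases and replaces spaces with hyphens", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });
});
```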

Verify RED - Watch It Fail

MANDATORY. Never skip.

```bash
npm test path/to/test.test.ts
```

Confirm:

  • Test fails (not errors)
  • Failure message is expected
  • Fails because feature missing (not typos)

Test passes? You're testing existing behavior. Fix test.
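
One way to make the RED run fail on the assertion (feature missing) instead of erroring on a missing import is to start from a deliberately empty stub. This continues the hypothetical slugify example:

```typescript
// slugify.ts -- placeholder so the test fails for the right reason:
// the runner reports expected "hello-world", received "",
// rather than a module-resolution error.
export function slugify(_input: string): string {
  return "";
}
```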

GREEN - Minimal Code

Write simplest code to pass the test.

Don't add features, refactor other code, or "improve" beyond the test.
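
In the hypothetical slugify example, GREEN means writing only what the one failing test demands: no trimming, no punctuation handling, nothing the test doesn't ask for.

```typescript
// slugify.ts -- simplest code that makes the failing test pass
export function slugify(input: string): string {
  return input.toLowerCase().replace(/ /g, "-");
}
```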

Verify GREEN - Watch It Pass

MANDATORY.

```bash
npm test path/to/test.test.ts
```

Confirm:

  • Test passes
  • Other tests still pass
  • Output pristine (no errors, warnings)

Test fails? Fix code, not test.

REFACTOR - Clean Up

After green only:

  • Remove duplication
  • Improve names
  • Extract helpers

Keep tests green. Don't add behavior.
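
In the hypothetical slugify example, a REFACTOR step might be as small as naming the magic values. Observable behavior is unchanged, so the test stays green:

```typescript
// slugify.ts -- same behavior, clearer intent
const SPACES = / /g;
const SEPARATOR = "-";

export function slugify(input: string): string {
  return input.toLowerCase().replace(SPACES, SEPARATOR);
}
```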

Why Order Matters

"I'll write tests after to verify it works"

Tests written after code pass immediately. Passing immediately proves nothing:

  • Might test wrong thing
  • Might test implementation, not behavior
  • Might miss edge cases you forgot

Test-first forces you to see the test fail, proving it actually tests something.
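
The hypothetical sketch below shows the failure mode: a test written after the code that re-derives its expected value from the same logic it is supposed to check. It can never fail, so it proves nothing.

```typescript
import { expect, it } from "vitest";
import { slugify } from "./slugify";

// Tautological: the expectation duplicates the implementation,
// so the test passes immediately regardless of whether the
// behavior is actually what users need.
it("slugifies input", () => {
  const input = "Hello World";
  expect(slugify(input)).toBe(input.toLowerCase().replace(/ /g, "-"));
});
```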

"I already manually tested all the edge cases"

Manual testing is ad-hoc:

  • No record of what you tested
  • Can't re-run when code changes
  • Easy to forget cases under pressure

Automated tests are systematic and run the same way every time.

"Deleting X hours of work is wasteful"

Sunk cost fallacy. Your choice now:

  • Delete and rewrite with TDD (X more hours, high confidence)
  • Keep it and add tests after (30 min, low confidence, likely bugs)

The "waste" is keeping code you can't trust.

"TDD is dogmatic, being pragmatic means adapting"

TDD IS pragmatic:

  • Finds bugs before commit (faster than debugging after)
  • Prevents regressions (tests catch breaks immediately)
  • Documents behavior (tests show how to use code)
  • Enables refactoring (change freely, tests catch breaks)

"Tests after achieve the same goals - it's spirit not ritual"

No. Tests-after answer "What does this do?" Tests-first answer "What should this do?"

Tests-after are biased by your implementation. You test what you built, not what's required.

Tests-first force edge case brainstorm before implementing.
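
In the slugify example, that brainstorm would mean enumerating cases like the following before writing any implementation (the specific cases are illustrative assumptions):

```typescript
import { expect, it } from "vitest";
import { slugify } from "./slugify";

// Edge cases decided up front, while behavior is still a requirement
// rather than an implementation detail.
it("returns an empty string for empty input", () => {
  expect(slugify("")).toBe("");
});

it("collapses runs of whitespace into a single hyphen", () => {
  expect(slugify("a   b")).toBe("a-b");
});
```

Note that the second test would fail against the minimal GREEN implementation sketched earlier: a new RED that drives the next cycle.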

Common Rationalizations

| Excuse | Reality |
| --- | --- |
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
| "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
| "Keep as reference" | You'll adapt it. That's testing after. Delete means delete. |
| "Need to explore first" | Fine. Throw away exploration, start with TDD. |
| "Test hard = design unclear" | Listen to the test. Hard to test = hard to use. |

Red Flags - STOP

  • Code before test
  • Test after implementation
  • Test passes immediately
  • Can't explain why test failed
  • Tests added "later"
  • "I already manually tested it"
  • "Tests after achieve the same purpose"
  • "Keep as reference" or "adapt existing code"
  • "Already spent X hours, deleting is wasteful"

All of these mean: Delete code. Start over with TDD.

Verification Checklist

Before marking work complete:

  • Every new function/method has a test
  • Watched each test fail before implementing
  • Each test failed for expected reason (feature missing, not typo)
  • Wrote minimal code to pass each test
  • All tests pass
  • Output pristine (no errors, warnings)
  • Tests use real code (mocks only if unavoidable)
  • Edge cases and errors covered

Can't check all boxes? You skipped TDD. Start over.

Final Rule

Production code → test exists and failed first
Otherwise → not TDD

No exceptions without your human partner's permission.

Related Skills

Looking for an alternative to use-tdd or building a community AI Agent? Explore these related open-source MCP Servers.

  • widget-generator (f): an open-source AI agent skill for creating widget plugins that are injected into prompt feeds on prompts.chat. It supports two rendering modes: standard prompt widgets using default PromptCard styling, and custom render widgets built as full React components. 149.6k · Design
  • chat-sdk (lobehub): a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic. 73.0k · Communication
  • zustand (lobehub): 72.8k · Communication
  • data-fetching (lobehub): 72.8k · Communication