mcaf-testing — a Claude Code skill from the GeminiSharpSDK community. Keywords: csharp, dotnet, gemini, gemini-cli, gemini-sdk-csharp, ide skills.

v1.0.0

About this skill

Best for: AI agents that need tests to drive new behaviour and bugfixes (TDD: reproduce/specify → test fails → implement → test passes). Localized summary: mcaf-testing helps AI agents handle repository-specific developer workflows with documented implementation details. It covers csharp, dotnet, and gemini workflows.

Features

For new behaviour and bugfixes: tests drive the change (TDD: reproduce/specify → test fails → implement → test passes)
Testing rules (levels, mocks policy, suites to run, containers, etc.)
Start from the docs that define behaviour (no guessing):
docs/Features/ for user/system flows and business rules
docs/ADR/ for architectural decisions and invariants that must remain true

Core Topics

managedcode
Updated: 4/9/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
Review Score: 10/11
Quality Score: 57
Canonical Locale: en
Detected Body Locale: en


Why use this skill

Recommended description: mcaf-testing helps agents drive new behaviour and bugfixes through tests (TDD: reproduce/specify → test fails → implement → test passes). It helps AI agents handle repository-specific developer workflows with documented implementation details.

Best suited for

Best for: AI agents that need tests to drive new behaviour and bugfixes (TDD: reproduce/specify → test fails → implement → test passes).

Actionable use cases for mcaf-testing

Use case: applying TDD for new behaviour and bugfixes (reproduce/specify → test fails → implement → test passes)
Use case: applying the repository's testing rules (levels, mocks policy, suites to run, containers, etc.)
Use case: starting from the docs that define behaviour (no guessing)

! Security and Limitations

  • Limitation: docs/ADR/ holds architectural decisions and invariants that must remain true
  • Limitation: follow AGENTS.md scoping rules (Architecture map → relevant docs → relevant module code; avoid repo-wide scanning)
  • Limitation: run tests/coverage only when you have a reason (changed code/tests, bug reproduction, baseline confirmation)

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide the next action before you keep reading repository material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is mcaf-testing?

mcaf-testing is a skill for AI agents that need tests to drive new behaviour and bugfixes (TDD: reproduce/specify → test fails → implement → test passes). It helps AI agents handle repository-specific developer workflows with documented implementation details, covering csharp, dotnet, and gemini workflows.

How do I install mcaf-testing?

Run the command: npx killer-skills add managedcode/GeminiSharpSDK/mcaf-testing. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for mcaf-testing?

Key use cases include: applying TDD for new behaviour and bugfixes (reproduce/specify → test fails → implement → test passes); applying the repository's testing rules (levels, mocks policy, suites to run, containers, etc.); and starting from the docs that define behaviour (no guessing).

Which IDEs are compatible with mcaf-testing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for mcaf-testing?

Limitations: docs/ADR/ holds architectural decisions and invariants that must remain true; follow AGENTS.md scoping rules (Architecture map → relevant docs → relevant module code; avoid repo-wide scanning); and run tests/coverage only when you have a reason (changed code/tests, bug reproduction, baseline confirmation).

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add managedcode/GeminiSharpSDK/mcaf-testing. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use mcaf-testing immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

mcaf-testing

Install mcaf-testing, an AI agent skill for testing workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Readonly
Supporting Evidence

MCAF: Testing

Outputs

  • New/updated automated tests that encode documented behaviour (happy path + negative + edge), with integration/API/UI preferred
  • For new behaviour and bugfixes: tests drive the change (TDD: reproduce/specify → test fails → implement → test passes)
  • Updated verification sections in relevant docs (docs/Features/*, docs/ADR/*) when needed (tests + commands must match reality)
  • Evidence of verification: commands run (build/test/coverage/analyze) + result + the report/artifact path written by the tool (when applicable)
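The test-first output described above can be sketched as a minimal xUnit red/green pair. This is a hedged illustration only: the type and method names (SlugGenerator, ToSlug) are hypothetical placeholders and not part of GeminiSharpSDK or this repository.

```csharp
// Sketch of the TDD output: tests that encode documented behaviour
// (happy path + negative path) and assert outcomes, not "it runs".
// All names here are illustrative assumptions.
using System;
using Xunit;

public class SlugGenerator
{
    // Minimal implementation written *after* the tests below failed
    // for the right reason (red -> green).
    public string ToSlug(string title)
    {
        if (string.IsNullOrWhiteSpace(title))
            throw new ArgumentException("Title must not be empty.", nameof(title));
        return title.Trim().ToLowerInvariant().Replace(' ', '-');
    }
}

public class SlugGeneratorTests
{
    [Fact]
    public void ToSlug_LowercasesAndHyphenates_HappyPath()
    {
        var slug = new SlugGenerator().ToSlug("Hello World");
        Assert.Equal("hello-world", slug); // assert the returned value
    }

    [Fact]
    public void ToSlug_Throws_ForEmptyTitle_NegativePath()
    {
        Assert.Throws<ArgumentException>(() => new SlugGenerator().ToSlug("  "));
    }
}
```

Each test is written first, confirmed to fail for the right reason, and then the minimum implementation is added to make it pass.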

Workflow

  1. Read AGENTS.md:
    • commands: build, test, format, analyze, and the repo’s coverage path (either a dedicated coverage command or a test command that generates coverage)
    • testing rules (levels, mocks policy, suites to run, containers, etc.)
  2. Start from the docs that define behaviour (no guessing):
    • docs/Features/* for user/system flows and business rules
    • docs/ADR/* for architectural decisions and invariants that must remain true
    • if the docs are missing/contradict, fix the docs first (or write a minimal spec + test plan in the task/PR)
    • follow AGENTS.md scoping rules (Architecture map → relevant docs → relevant module code; avoid repo-wide scanning)
  3. Follow AGENTS.md verification timing (optimize time + tokens):
    • run tests/coverage only when you have a reason (changed code/tests, bug reproduction, baseline confirmation)
    • start with the smallest scope (new/changed tests), then expand to required suites
  4. Define the scenarios you must prove (map them back to docs):
    • positive (happy path)
    • negative (validation/forbidden/unauthorized/error paths)
    • edge (limits, concurrency, retries/idempotency, time-sensitive behaviour)
    • for ADRs: test the invariants and the “must not happen” behaviours the decision relies on
  5. Choose the highest meaningful test level:
    • prefer integration/API/UI when the behaviour crosses boundaries
    • use unit tests only when logic is isolated and higher-level coverage is impractical
  6. Implement via a TDD loop (per scenario):
    • write the test first and make sure it fails for the right reason
    • implement the minimum change to make it pass
    • refactor safely (keep tests green)
  7. Write tests that assert outcomes (not “it runs”):
    • assert returned values/responses
    • assert DB state / emitted events / observable side effects
    • include negative and edge cases when relevant
  8. Keep tests stable (treat flakiness as a bug):
    • deterministic data/fixtures, no hidden dependencies
    • avoid sleep-based timing; prefer “wait until condition”/polling with a timeout
    • keep test setup/teardown reliable (reset state between tests)
  9. Coverage (follow AGENTS.md, optimize time/tokens):
    • run coverage only if it’s part of the repo’s required verification path or if you need it to find gaps
    • run coverage once per change (it is heavier than tests)
    • capture where the report/artifacts were written (path, summary) if generated
  10. If the repo has UI:
    • run UI/E2E tests
    • inspect screenshots/videos/traces produced by the runner for failures and obvious UI regressions
  11. Run verification in layers (as required by AGENTS.md):
    • new/changed tests first
    • then the related suite
    • then broader regressions if required
    • run analyze if required
  12. Keep docs and skills consistent:
    • ensure docs/Features/* and docs/ADR/* verification sections point to the real tests and real commands
    • if you change test/coverage commands or rules, update AGENTS.md and this skill in the same PR
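The "wait until condition" advice in step 8 can be sketched as a small polling helper. This is an illustrative sketch, not code from the repository; the helper name (Wait.UntilAsync) is an assumption.

```csharp
// Sketch of step 8: replace sleep-based timing with "wait until
// condition" polling bounded by a timeout, so tests stay deterministic
// without relying on a guessed fixed delay.
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class Wait
{
    public static async Task UntilAsync(
        Func<bool> condition,
        TimeSpan timeout,
        TimeSpan? pollInterval = null)
    {
        var interval = pollInterval ?? TimeSpan.FromMilliseconds(50);
        var clock = Stopwatch.StartNew();
        while (!condition())
        {
            if (clock.Elapsed > timeout)
                throw new TimeoutException(
                    $"Condition not met within {timeout.TotalSeconds:F1}s.");
            await Task.Delay(interval); // poll instead of a fixed sleep
        }
    }
}

// Hypothetical usage inside a test:
// await Wait.UntilAsync(() => store.Count > 0, TimeSpan.FromSeconds(5));
```

A failure surfaces as a TimeoutException with a clear message instead of a flaky assertion that sometimes runs before the side effect lands.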

Guardrails

  • All test discipline and prohibitions come from AGENTS.md. Do not contradict it in this skill.

Related skills

Looking for an alternative to mcaf-testing or another community skill for your workflow? Explore these related open-source skills.

View all

openclaw-release-maintainer

openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

333.8k
0
AI

widget-generator

f

Generates customizable widget plugins for the prompts.chat feed system

149.6k
0
AI

flags

vercel

React framework

138.4k
0
Browser

pr-review

pytorch

Tensors and dynamic neural networks in Python with strong GPU acceleration

98.6k
0
Developers