smoke — dataraum (community) · IDE skills, UX testing, informal test drive, tool responsiveness, output clarity · Claude Code, Cursor, Windsurf

v1.0.0

About this skill

A UX validation and smoke-testing skill for AI coding assistants such as Claude Code and Cursor. Smoke testing validates UX by checking tool responses and output clarity, helping developers keep their workflow smooth.

Capabilities

Exercise tools using natural inputs
Validate output clarity and usefulness
Test tool responsiveness and workflow
Identify gaps and surprises in tool output

dataraum
Updated: 3/28/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 8/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
Review Score: 8/11
Quality Score: 44
Canonical Locale: en
Detected Body Locale: en


Why use this skill

Empowers agents to take MCP tools for an informal test drive: check response clarity and usefulness, spot obvious gaps, focus on a named tool or play through a scenario, and probe edge-case inputs and error handling.

Best suited for

Perfect for AI Coding Assistants like Claude Code or Cursor needing UX validation and smoke testing capabilities.

Supported use cases for smoke

Validating tool responses and output clarity
Testing UX workflows with multiple tool calls
Identifying and reporting usability issues and errors

! Safety and limitations

  • Requires MCP server restart after code changes
  • Limited to MCP tools and workflows

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is supporting source material from the upstream repository. Use the Killer-Skills review above as the primary decision layer.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is smoke?

smoke is a UX validation and smoke-testing skill for AI coding assistants such as Claude Code and Cursor: it checks tool responses and output clarity to help keep workflows smooth.

How do I install smoke?

Run the command: npx killer-skills add dataraum/dataraum/smoke. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for smoke?

Key use cases include: Validating tool responses and output clarity, Testing UX workflows with multiple tool calls, Identifying and reporting usability issues and errors.

Which IDEs are compatible with smoke?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for smoke?

Requires MCP server restart after code changes. Limited to MCP tools and workflows.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add dataraum/dataraum/smoke. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use smoke immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Imported Repository Instructions

Supporting Evidence

smoke

Install smoke, an AI agent skill for smoke-testing MCP tool UX. Works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md (readonly)

Smoke: $ARGUMENTS

You just implemented or changed MCP tools. Now USE them. Not to verify correctness (that's eval's job) — to feel what the UX is like.

IMPORTANT: If you changed MCP server code in this session, the user must restart the session first. The MCP server runs as a subprocess — it loaded the old code. Remind them if they haven't restarted.

Input

$ARGUMENTS is one of:

  • A tool name to focus on (e.g., "look", "measure")
  • A brief scenario to play through (e.g., "check data quality then query revenue")
  • Empty — exercise all available tools

What this is

A quick, informal test drive. Like kicking the tires after a change. You're not checking ground truth or running calibration. You're checking:

  • Does the tool respond at all?
  • Does the output make sense to a human?
  • Is the response format useful or confusing?
  • Are there obvious gaps (missing fields, unhelpful messages, errors)?
  • Would you, as a practitioner, know what to do next based on this output?

How to do it

1. Orient

Start with begin_session if available — that's how a real session starts.

Then look at the data. Read the output as if you've never seen this dataset before. Does it tell you enough to start working?

2. Exercise the changed tools

Call each tool that was modified. Don't overthink the inputs — use them naturally, as a practitioner would.

For each call, note:

  • Response: did it work? What came back?
  • Clarity: would a practitioner understand this without reading source code?
  • Usefulness: does this output help you decide what to do next?
  • Surprises: anything unexpected, missing, or confusing?
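A minimal sketch of recording these notes as data, assuming plain Python. The field names mirror the checklist above; nothing here is a real API:

```python
from dataclasses import dataclass, field

@dataclass
class CallNote:
    tool: str
    responded: bool          # Response: did it work?
    clear: bool              # Clarity: understandable without source code?
    useful: bool             # Usefulness: does it guide the next step?
    surprises: list[str] = field(default_factory=list)

    def summary(self) -> str:
        # One line per call keeps the final impressions easy to share.
        verdict = "ok" if (self.responded and self.clear and self.useful) else "needs attention"
        extra = f" ({'; '.join(self.surprises)})" if self.surprises else ""
        return f"{self.tool}: {verdict}{extra}"
```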

3. Try a mini workflow

String 2-3 tool calls together as a practitioner would:

  • look → measure → "I see high entropy on column X" → query about that column
  • Or: look → "what's the revenue?" → query → "does this make sense given the data quality?" → measure

This tests the flow, not just individual tools.
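The look → measure → query chain might be sketched like this, assuming a hypothetical `call_tool(name, args)` helper that stands in for a real MCP session; the 0.7 entropy threshold is an arbitrary illustration:

```python
def run_mini_workflow(call_tool) -> list[str]:
    """Chain look -> measure -> query and collect impressions."""
    impressions = []
    shape = call_tool("look", {})
    impressions.append(f"look: {shape}")
    scores = call_tool("measure", {})
    # Follow up only on columns that look suspicious (e.g. high entropy).
    noisy = [c for c, s in scores.items() if s > 0.7]
    for column in noisy:
        answer = call_tool("query", {"question": f"what drives {column}?"})
        impressions.append(f"query({column}): {answer}")
    return impressions
```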

4. Try to break it (gently)

  • Call a tool with edge-case inputs (empty string, unknown column, weird query)
  • Ask a question the data can't answer — does it fail gracefully?
  • Skip a step (e.g., query without looking first) — is the experience still coherent?
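These gentle break attempts could be sketched as a loop over edge cases, again assuming a hypothetical `call_tool` helper; the point is simply to record whether each input fails gracefully:

```python
# Illustrative edge cases: empty string, unknown column, weird query.
EDGE_CASES = ["", "no_such_column", "🤷 weird query"]

def probe_tool(call_tool, tool: str) -> dict[str, str]:
    """Record, per edge case, whether the tool answered or errored."""
    results = {}
    for case in EDGE_CASES:
        try:
            out = call_tool(tool, {"input": case})
            results[case] = f"ok: {out}"
        except Exception as exc:   # note the error, then move on
            results[case] = f"error: {exc}"
    return results
```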

5. Share impressions

Tell the user what you found. Not a formal report — just honest impressions:

  • "The look output is clear, I immediately understood the data shape"
  • "measure returns scores but I don't know what 0.73 means — needs context"
  • "query works but doesn't mention that the column it aggregated has quality issues"
  • "begin_session errors with: [actual error message]"

Be specific. Quote actual output. Name actual fields. This is feedback, not a verdict.

Next step

After smoke testing:

  1. Fix any obvious issues found during the smoke test (restart session again if you change server code)
  2. Commit the implementation
  3. Update .claude/handoff.md with what needs eval attention (and testdata hints if applicable)
  4. Tell the user: "Ready for acceptance. Run /accept handoff in the eval repo after updating the vendor submodule."

If smoke testing reveals deeper problems (not just UX polish but fundamental issues): go back to /implement or even /refine. Don't patch over structural problems.

Rules

  • This is NOT acceptance testing — don't assert against ground truth
  • This is NOT a unit test — don't test internal behavior
  • This IS a UX check — would a human find this useful?
  • If a tool errors: note the error, try to understand why, move on
  • If the MCP server isn't responding: remind user to restart the session
  • Spend 5-10 minutes, not 30. Quick impressions are the point.
  • Be honest. "This feels clunky" is useful feedback. "Looks great!" is not.

Related skills

Looking for an alternative to smoke or another community skill for your workflow? Explore these related open-source skills.


openclaw-release-maintainer

openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

widget-generator

f

Create customizable widget plugins for the prompts.chat news feed system

flags

vercel

The React framework

pr-review

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration