e2e-testing — community e2e-testing, ikigai, community, ide skills, Claude Code, Cursor, Windsurf

v1.0.0

About this skill

Suited for test automation agents that need end-to-end testing through control socket interaction. JSON-based end-to-end test format, runner, and mock provider.

mgreenly
Updated: 3/2/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 7/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
Review Score
7/11
Quality Score
45
Canonical Locale
en
Detected Body Locale
en

Suited for test automation agents that need end-to-end testing through control socket interaction. JSON-based end-to-end test format, runner, and mock provider.

Why use this skill

Agents validate ikigai behavior using JSON test files and run tests in mock mode with bin/mock-provider, enabling automated end-to-end testing and verification of ikigai-ctl usage.

Best for

Suited for test automation agents that need end-to-end testing through control socket interaction.

Actionable use cases for e2e-testing

  • Automating end-to-end tests of ikigai behavior
  • Verifying control socket interactions using JSON test files
  • Debugging test failures using mock mode and bin/mock-provider

! Security & Limitations

  • Requires self-contained JSON files inside the tests/e2e/ directory
  • Execution order is limited to what tests/e2e/index.json defines
  • The backend is limited to bin/mock-provider in mock mode

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is supporting source material from the upstream repository. Use the Killer-Skills review above as the primary decision layer.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is e2e-testing?

Suited for test automation agents that need end-to-end testing through control socket interaction. JSON-based end-to-end test format, runner, and mock provider.

How do I install e2e-testing?

Run the command: npx killer-skills add mgreenly/ikigai. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for e2e-testing?

Key use cases include: automating end-to-end tests of ikigai behavior, verifying control socket interactions using JSON test files, and debugging test failures using mock mode and bin/mock-provider.

Which IDEs are compatible with e2e-testing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for e2e-testing?

Requires self-contained JSON files inside the tests/e2e/ directory. Execution order is limited to what tests/e2e/index.json defines. The backend is limited to bin/mock-provider in mock mode.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add mgreenly/ikigai. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use e2e-testing immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Imported Repository Instructions

The section below is supporting source material from the upstream repository. Use the Killer-Skills review above as the primary decision layer.

Supporting Evidence

e2e-testing

Install e2e-testing, an AI agent skill for AI agent workflows and automation. Works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md

End-to-End Testing

End-to-end tests verify ikigai behavior through its control socket. For ikigai-ctl usage, see /load ikigai-ctl. For general headless interaction, see /load headless.

Test Files

Tests live in tests/e2e/ as self-contained JSON files. Run order is defined by tests/e2e/index.json — a JSON array of test filenames in execution order.
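As a sketch of how an agent might resolve that execution order, the following assumes index.json is a flat JSON array of filenames relative to tests/e2e/, as described above (it is illustrative, not the project's own loader):

```python
import json
from pathlib import Path

def load_test_order(e2e_dir="tests/e2e"):
    """Return test file paths in the execution order defined by index.json.

    Illustrative sketch: assumes index.json is a JSON array of filenames
    relative to the tests/e2e/ directory.
    """
    e2e = Path(e2e_dir)
    order = json.loads((e2e / "index.json").read_text())
    return [e2e / name for name in order]
```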

Execution Modes

| Mode | Backend | Steps | Assertions |
| --- | --- | --- | --- |
| mock | bin/mock-provider | all steps including mock_expect | assert + assert_mock |
| live | real provider (Anthropic, OpenAI, Google) | mock_expect steps skipped | assert only |

Tests are written once and run in either mode. In live mode, mock_expect steps are skipped and assert_mock is not evaluated.

JSON Schema

```json
{
  "name": "human-readable test name",
  "steps": [ ... ],
  "assert": [ ... ],
  "assert_mock": [ ... ]
}
```
  • name — describes what the test verifies
  • steps — ordered list of actions to execute
  • assert — assertions checked in ALL modes
  • assert_mock — assertions checked only in mock mode
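A minimal structural check for a test file could look like the sketch below. It assumes name, steps, and assert are required while assert_mock is optional, which matches the schema above but is not the project's actual validator:

```python
import json

# assert_mock is treated as optional: it only applies in mock mode
REQUIRED_KEYS = {"name", "steps", "assert"}

def validate_test_file(raw: str) -> dict:
    """Parse a test file and check its top-level structure (sketch only)."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not isinstance(data["steps"], list) or not isinstance(data["assert"], list):
        raise ValueError("steps and assert must be arrays")
    return data
```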

Step Types

send_keys

```json
{"send_keys": "/model gpt-5-mini\\r"}
```

Include \\r to submit. See /load ikigai-ctl for escaping conventions.

read_framebuffer

```json
{"read_framebuffer": true}
```

Always read_framebuffer before asserting. Each capture replaces the previous one.

wait

```json
{"wait": 0.5}
```
  • After UI commands (/model, /clear): 0.5 seconds
  • After sending a prompt to the LLM: 3-5 seconds (prefer wait_idle)

wait_idle

Wait until the agent becomes idle or timeout elapses.

```json
{"wait_idle": 10000}
```
  • Value is timeout_ms (integer milliseconds)
  • Exit code 0 = idle; exit code 1 = timed out (report FAIL)
  • Use instead of wait after sending prompts to the LLM

mock_expect

Configure the mock provider's response queue. Skipped in live mode.

```json
{"mock_expect": {"responses": [{"content": "The capital of France is Paris."}]}}
```

The responses array is a FIFO queue — each LLM request pops the next entry. Entries contain either content (text) or tool_calls (array), never both. Must appear before the send_keys that triggers the LLM call.
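The FIFO semantics described above can be sketched as follows. This is a hypothetical stand-in for illustration, not the real bin/mock-provider:

```python
from collections import deque

class MockResponseQueue:
    """Sketch of the documented FIFO behavior: each LLM request pops the
    next queued entry; entries hold content or tool_calls, never both."""

    def __init__(self):
        self.responses = deque()

    def expect(self, payload):
        # payload mirrors the body of a mock_expect step
        for entry in payload["responses"]:
            if "content" in entry and "tool_calls" in entry:
                raise ValueError("entry must have content or tool_calls, never both")
            self.responses.append(entry)

    def next_response(self):
        if not self.responses:
            raise RuntimeError("no queued response for this LLM request")
        return self.responses.popleft()
```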

Assertion Types

Assertions run against the most recent read_framebuffer capture.

contains

At least one row contains the given substring.

```json
{"contains": "gpt-5-mini"}
```

not_contains

No row contains the given substring.

```json
{"not_contains": "error"}
```

line_prefix

At least one row starts with the given prefix (after trimming leading whitespace).

```json
{"line_prefix": "●"}
```
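The three assertion types above reduce to simple row checks. A sketch, assuming the framebuffer capture is a list of row strings:

```python
def check_assertion(rows, assertion):
    """Evaluate one assertion object against framebuffer rows (sketch).

    Mirrors the documented semantics: contains and line_prefix need at
    least one matching row; not_contains requires no row to match;
    line_prefix trims leading whitespace first.
    """
    if "contains" in assertion:
        return any(assertion["contains"] in row for row in rows)
    if "not_contains" in assertion:
        return all(assertion["not_contains"] not in row for row in rows)
    if "line_prefix" in assertion:
        return any(row.lstrip().startswith(assertion["line_prefix"]) for row in rows)
    raise ValueError(f"unknown assertion type: {assertion}")
```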

Running Tests

Direct execution, one tool call per step. Never use scripts or programmatic wrappers when the user asks you to run e2e tests. The scripted runner (tests/e2e/runner) exists for CI — when the user asks you to run tests, they want direct execution so they can observe every response.

Procedure per test file:

  1. Read the JSON file
  2. Determine mode — mock if ikigai is connected to mock-provider, live otherwise
  3. Execute each step in order, one tool call per step:
    • send_keys: run ikigai-ctl send_keys "<value>"
    • wait: sleep N
    • wait_idle: run ikigai-ctl wait_idle <value>, fail if exit code is 1
    • read_framebuffer: run ikigai-ctl read_framebuffer, store result
    • mock_expect: in mock mode, curl -s 127.0.0.1:<port>/_mock/expect -d '<json>'; in live mode, skip
  4. Evaluate assertions (assert always, assert_mock in mock mode only)
  5. Report PASS or FAIL with evidence (cite relevant framebuffer rows)
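The step-to-command mapping in the procedure above can be sketched without executing anything. The mock provider's port is a hypothetical placeholder here, and the function only builds the command strings an agent would run one at a time:

```python
import json
import shlex

def step_to_command(step, mock_mode=True, mock_port=8080):
    """Translate one step object into its shell command (sketch only).

    Command names follow the procedure above; mock_port is a placeholder.
    Returns None when the step is skipped (mock_expect in live mode).
    """
    if "send_keys" in step:
        return f'ikigai-ctl send_keys {shlex.quote(step["send_keys"])}'
    if "wait" in step:
        return f'sleep {step["wait"]}'
    if "wait_idle" in step:
        return f'ikigai-ctl wait_idle {step["wait_idle"]}'
    if "read_framebuffer" in step:
        return "ikigai-ctl read_framebuffer"
    if "mock_expect" in step:
        if not mock_mode:
            return None  # skipped in live mode
        payload = shlex.quote(json.dumps(step["mock_expect"]))
        return f"curl -s 127.0.0.1:{mock_port}/_mock/expect -d {payload}"
    raise ValueError(f"unknown step: {step}")
```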

Large batches (20+ tests)

Divide into chunks of 20, run sub-agents serially (shared instance — never parallel). Each sub-agent receives filenames and the full contents of this skill. Don't pre-read test files yourself.
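The chunking rule above is a one-liner; a minimal sketch:

```python
def chunk_tests(filenames, size=20):
    """Split a test-file list into chunks of `size` for serial
    sub-agent runs, per the batching rule above."""
    return [filenames[i:i + size] for i in range(0, len(filenames), size)]
```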

Key Rules

  • Never start ikigai — the user manages the instance
  • Never use the runner script — direct execution only
  • One test file = one test — self-contained, no dependencies
  • Steps execute in order — sequential, never parallel
  • Always read_framebuffer before asserting
  • Never chain after wait_idle — run read_framebuffer in a separate tool call

Example: UI-only test

```json
{
  "name": "no model indicator on fresh start",
  "steps": [
    {"read_framebuffer": true}
  ],
  "assert": [
    {"contains": "(no model)"}
  ]
}
```

Example: mock provider test

```json
{
  "name": "basic chat completion via mock provider",
  "steps": [
    {"mock_expect": {"responses": [{"content": "The capital of France is Paris."}]}},
    {"send_keys": "What is the capital of France?\\r"},
    {"wait": 3},
    {"read_framebuffer": true}
  ],
  "assert": [
    {"line_prefix": "●"}
  ],
  "assert_mock": [
    {"contains": "The capital of France is Paris."}
  ]
}
```

Related Skills

Looking for an alternative to e2e-testing or another community skill for your workflow? Explore these related open-source skills.

View all

  • openclaw-release-maintainer (openclaw) — Your own personal AI assistant. Any OS. Any platform. The lobster way. 🦞
  • widget-generator (f) — Generates customizable widget plugins for the prompts.chat feed system
  • flags (vercel) — The React Framework
  • pr-review (pytorch) — Tensors and dynamic neural networks in Python with strong GPU acceleration