e2e-testing — community skill (ikigai) for Claude Code, Cursor, and Windsurf

v1.0.0

About this skill

Best suited for test-automation agents that need end-to-end testing capability through control-socket interactions. JSON-based end-to-end test format, runner, and mock provider.

mgreenly
Updated: 3/2/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 7/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
Review Score
7/11
Quality Score
45
Canonical Locale
en
Detected Body Locale
en


Why use this skill

Gives the agent the ability to verify ikigai behavior using JSON test files and to run tests in mock mode with bin/mock-provider, enabling end-to-end test automation and verification of ikigai-ctl usage.

Recommendation

Best suited for test-automation agents that need end-to-end testing capability through control-socket interactions.

Practical use cases for e2e-testing

Automate end-to-end tests of ikigai behavior
Verify control-socket interactions using JSON test files
Debug test failures using mock mode and bin/mock-provider

! Security and Limitations

  • Requires self-contained JSON files in the tests/e2e/ directory
  • Limited to the run order defined in tests/e2e/index.json
  • The backend is limited to bin/mock-provider in mock mode

Why this page is reference-only

  • The current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is supporting source material from the upstream repository. Use the Killer-Skills review above as the primary decision layer.


FAQ & Installation Steps


? Frequently Asked Questions

What is e2e-testing?

Best suited for test-automation agents that need end-to-end testing capability through control-socket interactions. JSON-based end-to-end test format, runner, and mock provider.

How do I install e2e-testing?

Run the command: npx killer-skills add mgreenly/ikigai. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for e2e-testing?

Key use cases include: automating end-to-end tests of ikigai behavior, verifying control-socket interactions using JSON test files, and debugging test failures using mock mode and bin/mock-provider.

Which IDEs are compatible with e2e-testing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for e2e-testing?

Self-contained JSON files are required in the tests/e2e/ directory. Execution is limited to the run order defined in tests/e2e/index.json. The backend is limited to bin/mock-provider in mock mode.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add mgreenly/ikigai. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use e2e-testing immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Imported Repository Instructions


Supporting Evidence

e2e-testing

Install e2e-testing, an AI agent skill for agent workflows and automation. It works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md

End-to-End Testing

End-to-end tests verify ikigai behavior through its control socket. For ikigai-ctl usage, see /load ikigai-ctl. For general headless interaction, see /load headless.

Test Files

Tests live in tests/e2e/ as self-contained JSON files. Run order is defined by tests/e2e/index.json — a JSON array of test filenames in execution order.
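As a sketch of this ordering contract, a loader could read index.json and open each test file in the declared order (`load_suite` is a hypothetical helper, not part of the skill; the demo uses a throwaway directory mirroring the tests/e2e/ layout):

```python
import json
import pathlib
import tempfile

def load_suite(e2e_dir: pathlib.Path) -> list[dict]:
    """Load test files in the order declared by index.json."""
    order = json.loads((e2e_dir / "index.json").read_text())
    return [json.loads((e2e_dir / name).read_text()) for name in order]

# Demo against a temporary directory shaped like tests/e2e/.
with tempfile.TemporaryDirectory() as tmp:
    d = pathlib.Path(tmp)
    (d / "index.json").write_text('["b.json", "a.json"]')
    (d / "a.json").write_text('{"name": "a", "steps": [], "assert": []}')
    (d / "b.json").write_text('{"name": "b", "steps": [], "assert": []}')
    suite = load_suite(d)

# index.json order wins, not filename order.
names = [t["name"] for t in suite]
```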

Execution Modes

| Mode | Backend | Steps | Assertions |
|------|---------|-------|------------|
| mock | bin/mock-provider | all steps including mock_expect | assert + assert_mock |
| live | real provider (Anthropic, OpenAI, Google) | mock_expect steps skipped | assert only |

Tests are written once and run in either mode. In live mode, mock_expect steps are skipped and assert_mock is not evaluated.
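The mode rule in the table can be sketched in a few lines (`plan_test` is a hypothetical helper for illustration, not part of the skill):

```python
def plan_test(test: dict, mode: str) -> tuple[list, list]:
    """Return (steps_to_execute, assertions_to_evaluate) for a mode."""
    # In live mode, mock_expect steps are dropped entirely.
    steps = [s for s in test["steps"]
             if mode == "mock" or "mock_expect" not in s]
    # assert runs in all modes; assert_mock only in mock mode.
    assertions = list(test.get("assert", []))
    if mode == "mock":
        assertions += test.get("assert_mock", [])
    return steps, assertions

test = {
    "name": "demo",
    "steps": [{"mock_expect": {"responses": []}}, {"send_keys": "hi\r"}],
    "assert": [{"contains": "hi"}],
    "assert_mock": [{"contains": "mocked"}],
}
live_steps, live_asserts = plan_test(test, "live")
mock_steps, mock_asserts = plan_test(test, "mock")
```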

JSON Schema

```json
{
  "name": "human-readable test name",
  "steps": [ ... ],
  "assert": [ ... ],
  "assert_mock": [ ... ]
}
```
  • name — describes what the test verifies
  • steps — ordered list of actions to execute
  • assert — assertions checked in ALL modes
  • assert_mock — assertions checked only in mock mode
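A minimal structural check of this schema might look like the following sketch (`check_test` is hypothetical, not the skill's actual validator; treating assert_mock as optional is an assumption based on the field descriptions above):

```python
def check_test(test: dict) -> list[str]:
    """Return a list of problems; an empty list means the shape looks valid."""
    problems = []
    for key in ("name", "steps", "assert"):
        if key not in test:
            problems.append(f"missing key: {key}")
    for key in ("steps", "assert", "assert_mock"):
        if key in test and not isinstance(test[key], list):
            problems.append(f"{key} must be a list")
    if not isinstance(test.get("name", ""), str):
        problems.append("name must be a string")
    return problems

ok = check_test({"name": "t", "steps": [], "assert": []})
bad = check_test({"steps": {}})
```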

Step Types

send_keys

```json
{"send_keys": "/model gpt-5-mini\\r"}
```

Include \\r to submit. See /load ikigai-ctl for escaping conventions.

read_framebuffer

```json
{"read_framebuffer": true}
```

Always read_framebuffer before asserting. Each capture replaces the previous one.

wait

```json
{"wait": 0.5}
```
  • After UI commands (/model, /clear): 0.5 seconds
  • After sending a prompt to the LLM: 3-5 seconds (prefer wait_idle)

wait_idle

Wait until the agent becomes idle or timeout elapses.

```json
{"wait_idle": 10000}
```
  • Value is timeout_ms (integer milliseconds)
  • Exit code 0 = idle; exit code 1 = timed out (report FAIL)
  • Use instead of wait after sending prompts to the LLM

mock_expect

Configure the mock provider's response queue. Skipped in live mode.

```json
{"mock_expect": {"responses": [{"content": "The capital of France is Paris."}]}}
```

The responses array is a FIFO queue — each LLM request pops the next entry. Entries contain either content (text) or tool_calls (array), never both. Must appear before the send_keys that triggers the LLM call.
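The queue discipline described above can be modeled as a toy sketch (`mock_expect` and `next_response` here are hypothetical stand-ins for the mock provider's expect endpoint and its LLM request path):

```python
from collections import deque

queue: deque = deque()

def mock_expect(payload: dict) -> None:
    """Append each queued response; entries hold content XOR tool_calls."""
    for entry in payload["responses"]:
        assert ("content" in entry) != ("tool_calls" in entry)
        queue.append(entry)

def next_response() -> dict:
    """Each LLM request pops the front of the FIFO queue."""
    return queue.popleft()

mock_expect({"responses": [{"content": "A"}, {"content": "B"}]})
first = next_response()
second = next_response()
```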

Assertion Types

Assertions run against the most recent read_framebuffer capture.

contains

At least one row contains the given substring.

```json
{"contains": "gpt-5-mini"}
```

not_contains

No row contains the given substring.

```json
{"not_contains": "error"}
```

line_prefix

At least one row starts with the given prefix (after trimming leading whitespace).

```json
{"line_prefix": "●"}
```
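Assuming the framebuffer capture is a list of row strings, the three assertion types could be evaluated like this (an illustrative sketch, not the skill's actual implementation):

```python
def evaluate(assertion: dict, rows: list[str]) -> bool:
    """Apply one assertion object to the most recent framebuffer rows."""
    if "contains" in assertion:
        return any(assertion["contains"] in row for row in rows)
    if "not_contains" in assertion:
        return all(assertion["not_contains"] not in row for row in rows)
    if "line_prefix" in assertion:
        # Prefix match after trimming leading whitespace, per the text above.
        return any(row.lstrip().startswith(assertion["line_prefix"])
                   for row in rows)
    raise ValueError(f"unknown assertion: {assertion}")

rows = ["  ● gpt-5-mini ready", "status: ok"]
```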

Running Tests

Direct execution, one tool call per step. Never use scripts or programmatic wrappers when the user asks you to run e2e tests. The scripted runner (tests/e2e/runner) exists for CI — when the user asks you to run tests, they want direct execution so they can observe every response.

Procedure per test file:

  1. Read the JSON file
  2. Determine mode — mock if ikigai is connected to mock-provider, live otherwise
  3. Execute each step in order, one tool call per step:
    • send_keys: run ikigai-ctl send_keys "<value>"
    • wait: sleep N
    • wait_idle: run ikigai-ctl wait_idle <value>, fail if exit code is 1
    • read_framebuffer: run ikigai-ctl read_framebuffer, store result
    • mock_expect: in mock mode, curl -s 127.0.0.1:<port>/_mock/expect -d '<json>'; in live mode, skip
  4. Evaluate assertions (assert always, assert_mock in mock mode only)
  5. Report PASS or FAIL with evidence (cite relevant framebuffer rows)

Large batches (20+ tests)

Divide into chunks of 20, run sub-agents serially (shared instance — never parallel). Each sub-agent receives filenames and the full contents of this skill. Don't pre-read test files yourself.

Key Rules

  • Never start ikigai — the user manages the instance
  • Never use the runner script — direct execution only
  • One test file = one test — self-contained, no dependencies
  • Steps execute in order — sequential, never parallel
  • Always read_framebuffer before asserting
  • Never chain after wait_idle — run read_framebuffer in a separate tool call

Example: UI-only test

```json
{
  "name": "no model indicator on fresh start",
  "steps": [
    {"read_framebuffer": true}
  ],
  "assert": [
    {"contains": "(no model)"}
  ]
}
```

Example: mock provider test

```json
{
  "name": "basic chat completion via mock provider",
  "steps": [
    {"mock_expect": {"responses": [{"content": "The capital of France is Paris."}]}},
    {"send_keys": "What is the capital of France?\\r"},
    {"wait": 3},
    {"read_framebuffer": true}
  ],
  "assert": [
    {"line_prefix": "●"}
  ],
  "assert_mock": [
    {"contains": "The capital of France is Paris."}
  ]
}
```

Related Skills

Looking for an alternative to e2e-testing or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw): Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f): Generates customizable widget plugins for the prompts.chat feed system
  • flags (vercel): The React framework
  • pr-review (pytorch): Tensors and dynamic neural networks in Python with strong GPU acceleration