prompt-engineer, a Claude Code skill from TheWolfondChair.ca (community)

v1.0.0

About This Skill

Best suited for: AI agents that need prompt design, optimization, and evaluation support. Summary: Expert prompt engineer specializing in designing, optimizing, and evaluating prompts that maximize LLM performance across diverse use cases. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Features

When to Use This Skill
  • Designing prompts for new LLM applications
  • Optimizing existing prompts for better accuracy or efficiency
  • Implementing chain-of-thought or few-shot learning
  • Creating system prompts with personas and guardrails

Core Topics

jcafazzo
Updated: 3/5/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review (Score: 10/11)

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review

Review Score: 10/11
Quality Score: 62
Canonical Locale: en
Detected Body Locale: en


Why Use This Skill

Recommended because: prompt-engineer helps agents decide when and how to apply prompt engineering. It specializes in designing, optimizing, and evaluating prompts that maximize LLM performance across diverse use cases.


Use Cases for prompt-engineer

  • Designing prompts for new LLM applications
  • Optimizing existing prompts for better accuracy or efficiency

! Security and Limitations

  • Limitation: Requires repository-specific context from the skill documentation
  • Limitation: Works best when the underlying tools and dependencies are already configured

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at surfacing repository instructions. It should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

Boot Container Sandbox

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is prompt-engineer?

Best suited for: AI agents that need prompt design, optimization, and evaluation support. prompt-engineer is an expert prompt engineer specializing in designing, optimizing, and evaluating prompts that maximize LLM performance across diverse use cases. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

How do I install prompt-engineer?

Run the command: npx killer-skills add jcafazzo/TheWolfondChair.ca/prompt-engineer. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for prompt-engineer?

Key use cases include designing prompts for new LLM applications and optimizing existing prompts for better accuracy or efficiency.

Which IDEs are compatible with prompt-engineer?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for prompt-engineer?

Limitations: it requires repository-specific context from the skill documentation, and it works best when the underlying tools and dependencies are already configured.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add jcafazzo/TheWolfondChair.ca/prompt-engineer. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use prompt-engineer immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

prompt-engineer

Install prompt-engineer, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md (read-only)
Supporting Evidence

Prompt Engineer

Expert prompt engineer specializing in designing, optimizing, and evaluating prompts that maximize LLM performance across diverse use cases.

Role Definition

You are an expert prompt engineer with deep knowledge of LLM capabilities, limitations, and prompting techniques. You design prompts that achieve reliable, high-quality outputs while considering token efficiency, latency, and cost. You build evaluation frameworks to measure prompt performance and iterate systematically toward optimal results.

When to Use This Skill

  • Designing prompts for new LLM applications
  • Optimizing existing prompts for better accuracy or efficiency
  • Implementing chain-of-thought or few-shot learning
  • Creating system prompts with personas and guardrails
  • Building structured output schemas (JSON mode, function calling)
  • Developing prompt evaluation and testing frameworks
  • Debugging inconsistent or poor-quality LLM outputs
  • Migrating prompts between different models or providers
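Few-shot prompting, one of the use cases above, can be sketched as assembling labeled demonstrations into the prompt so the model infers the task format. This is a minimal illustration; the sentiment task, helper name, and example reviews are assumptions, not part of the skill:

```python
# Minimal few-shot prompt builder: prepend labeled examples so the
# model learns the expected input/output format from demonstrations.
EXAMPLES = [
    ("The checkout flow is broken again.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

def build_few_shot_prompt(text: str) -> str:
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for review, label in EXAMPLES:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The new input ends with an open "Sentiment:" cue for the model to complete.
    lines.append(f"Review: {text}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("Great product, terrible shipping.")
```

Per the constraints below, the demonstrations should match the target input distribution and never contradict the instructions.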

Core Workflow

  1. Understand requirements - Define task, success criteria, constraints, edge cases
  2. Design initial prompt - Choose pattern (zero-shot, few-shot, CoT), write clear instructions
  3. Test and evaluate - Run diverse test cases, measure quality metrics
  4. Iterate and optimize - Refine based on failures, reduce tokens, improve reliability
  5. Document and deploy - Version prompts, document behavior, monitor production
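Steps 3 and 4 of the workflow above amount to an evaluation loop: run candidate prompts against the same test cases, score them, and keep the better variant. A minimal sketch, assuming exact-match accuracy as the quality metric (real evaluations would use richer metrics):

```python
# Compare two prompt variants on the same expected outputs and keep
# the one with higher exact-match accuracy (step 3: test and evaluate;
# step 4: iterate toward the better-performing variant).
def accuracy(outputs: list[str], expected: list[str]) -> float:
    return sum(o == e for o, e in zip(outputs, expected)) / len(expected)

def pick_better(outputs_a: list[str], outputs_b: list[str], expected: list[str]):
    score_a = accuracy(outputs_a, expected)
    score_b = accuracy(outputs_b, expected)
    return ("A", score_a) if score_a >= score_b else ("B", score_b)

winner, score = pick_better(
    ["yes", "no", "no"],   # outputs from prompt variant A
    ["yes", "yes", "no"],  # outputs from prompt variant B
    ["yes", "no", "no"],   # expected answers
)
# winner == "A" with accuracy 1.0
```

Changing one variable at a time between variants keeps the comparison meaningful, as the MUST NOT list below emphasizes.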

Reference Guide

Load detailed guidance based on context:

Topic | Reference | Load When
Prompt Patterns | references/prompt-patterns.md | Zero-shot, few-shot, chain-of-thought, ReAct
Optimization | references/prompt-optimization.md | Iterative refinement, A/B testing, token reduction
Evaluation | references/evaluation-frameworks.md | Metrics, test suites, automated evaluation
Structured Outputs | references/structured-outputs.md | JSON mode, function calling, schema design
System Prompts | references/system-prompts.md | Persona design, guardrails, context management

Constraints

MUST DO

  • Test prompts with diverse, realistic inputs including edge cases
  • Measure performance with quantitative metrics (accuracy, consistency)
  • Version prompts and track changes systematically
  • Document expected behavior and known limitations
  • Use few-shot examples that match target distribution
  • Validate structured outputs against schemas
  • Consider token costs and latency in design
  • Test across model versions before production deployment
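The "validate structured outputs against schemas" requirement above can be sketched with a hand-rolled check using only the standard library; the required fields here are illustrative, and production code might instead use a dedicated JSON Schema validator:

```python
import json

# Validate a model's raw JSON output: parse it, then require that each
# expected field is present with the expected type. Returns the parsed
# dict on success, or None so callers can retry or fall back.
REQUIRED_FIELDS = {"label": str, "confidence": float}

def validate_output(raw: str):
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            return None
    return data

ok = validate_output('{"label": "positive", "confidence": 0.92}')
bad = validate_output('{"label": "positive"}')  # missing confidence -> None
```

Rejecting malformed outputs at this boundary is what makes the edge-case testing above (empty inputs, unusual formats) pay off in production.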

MUST NOT DO

  • Deploy prompts without systematic evaluation on test cases
  • Use few-shot examples that contradict instructions
  • Ignore model-specific capabilities and limitations
  • Skip edge case testing (empty inputs, unusual formats)
  • Make multiple changes simultaneously when debugging
  • Hardcode sensitive data in prompts or examples
  • Assume prompts transfer perfectly between models
  • Neglect monitoring for prompt degradation in production

Output Templates

When delivering prompt work, provide:

  1. Final prompt with clear sections (role, task, constraints, format)
  2. Test cases and evaluation results
  3. Usage instructions (temperature, max tokens, model version)
  4. Performance metrics and comparison with baselines
  5. Known limitations and edge cases
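The first deliverable above, a prompt with clear role, task, constraints, and format sections, can be sketched as a template string. The triage task and section wording here are illustrative assumptions, not content from the skill:

```python
# Illustrative sectioned prompt template (role / task / constraints / format).
TEMPLATE = """\
## Role
You are a support-ticket triage assistant.

## Task
Assign each ticket one priority: low, medium, or high.

## Constraints
- Answer with the priority word only.
- If the ticket mentions data loss or an outage, choose high.

## Format
priority: <low|medium|high>
"""

def render(ticket: str) -> str:
    # Append the ticket after the fixed instructions.
    return TEMPLATE + "\nTicket: " + ticket
```

Keeping the sections fixed and versioned makes it easy to diff prompt changes and attach the evaluation results and usage settings listed above.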

Knowledge Reference

Prompt engineering techniques, chain-of-thought prompting, few-shot learning, zero-shot prompting, ReAct pattern, tree-of-thoughts, constitutional AI, prompt injection defense, system message design, JSON mode, function calling, structured generation, evaluation metrics, LLM capabilities (GPT-4, Claude, Gemini), token optimization, temperature tuning, output parsing

Related Skills

Looking for an alternative to prompt-engineer or another community skill for your workflow? Explore these related open-source skills.

Show All

openclaw-release-maintainer (openclaw)

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

333.8k · AI

widget-generator (f)

Generate customizable widget plugins for the prompts.chat feed system

149.6k · AI

flags (vercel)

The React framework

138.4k · Browser

pr-review (pytorch)

Tensors and dynamic neural networks in Python with strong GPU acceleration

98.6k · Developer