A collection of OpenAI-powered AI Agent Skills for Claude Code, Cursor, and Windsurf, focused on automating developer workflows.
This collection focuses on tools that put OpenAI models to work in day-to-day development: prompt design, agent building, coding assistance, evaluation, and operational visibility. The core purpose is not to enumerate protocols but to select tools that raise a team's real engineering output, meaning faster iteration, better quality checks, and clearer operational insight. Even where a tool also supports MCP, the center of this page remains OpenAI-based workflows and developer productivity.
Primary Install Bridge
Pick One Skill, Then Take the Install Path
This collection is not meant to trap users in comparison mode or pretend that the whole collection gets installed at once. Its job is to narrow the shortlist to one skill, then send the next click into installation, validation, and rollout. Installation happens on the individual skill page.
The Next Click Should Keep Narrowing, Not Reset Back To A Generic Directory
Once the install path is clear, move into the solution, CLI, or editorial surface that best matches this collection. That keeps platform, framework, and operations needs narrowing into a single verifiable, high-intent journey.
Reviewed on 2026-04-17 against OpenAI workflow fit, installation clarity, evaluation usefulness, and runtime visibility value. This page is now positioned as an install-first OpenAI entry point instead of a broad provider-tag list.
We prioritize this page because users searching for OpenAI tooling usually need a shortlist they can install, validate, and connect to real prompt, agent, and evaluation workflows quickly.
Trust Signals
- Entries are chosen for practical OpenAI workflow value such as prompt design, evaluation, observability, and coding support.
- Selection favors tools with public docs and clear setup paths that teams can validate before wider rollout.
- The page is curated for repeatable engineering outcomes, not broad provider-name traffic capture.
Grouping Logic
- Lead with tools that can enter an OpenAI-centered workflow without adding heavy setup ambiguity.
- Keep the shortlist compact enough for quick comparison while still covering prompting, evaluation, coding, and runtime insight.
- Use installation as the bridge from OpenAI discovery into validated daily execution.
Maintenance & Review
Last Reviewed
2026-04-17
Cadence
Re-check when install flow, maintainer posture, or OpenAI workflow relevance changes upstream; otherwise review monthly.
Maintained By
Killer-Skills editorial review within the recovery-focused authority queue.
Verification
Validate installability, operator clarity, workflow fit, and maintainer trust before retaining or adding an entry.
Execution Examples
How These Skills Work Together In Practice
Validate one OpenAI workflow stack first
Use this page when you need one OpenAI-centered toolchain that can move from prompt design into real coding and evaluation work.
1. Open the installation docs before opening more OpenAI-adjacent repositories.
2. Choose one tool that best supports prompting, evaluation, runtime visibility, or coding support.
3. Install it and verify the CLI write path, sync behavior, and first operator checkpoint.
4. Only after the base path works, expand the setup across the wider team workflow.
Add evaluation before rollout
Treat the collection as an editorial filter when you need OpenAI-adjacent tools that improve quality and observability before you scale usage.
1. Check whether the tool has stable ownership and visible install guidance.
2. Review CLI behavior so operators know what will be written and synced.
3. Use one validated path before scaling it to more projects or more teammates.
4. Document the chosen OpenAI workflow baseline after the first clean rollout.
The component-refactoring skill simplifies high-complexity React components, improving code quality and reducing maintenance effort. It applies established patterns and workflows to refactor components into smaller, more efficient, easier-to-understand units.
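The core move this kind of refactoring makes can be sketched without the skill itself: pull non-rendering logic out of a high-complexity component into a pure, independently testable function. A minimal sketch, where the `Item` shape and `filterVisibleItems` helper are hypothetical examples rather than part of the skill:

```typescript
// Before: a component filtered and sorted items inline inside render,
// mixing data logic with JSX. After: the logic lives in a pure helper.
export interface Item {
  id: string;
  label: string;
  archived: boolean;
  updatedAt: number; // epoch millis
}

// Pure and testable: no React, no side effects.
export function filterVisibleItems(items: Item[], query: string): Item[] {
  const q = query.trim().toLowerCase();
  return items
    .filter((it) => !it.archived)
    .filter((it) => q === "" || it.label.toLowerCase().includes(q))
    .sort((a, b) => b.updatedAt - a.updatedAt); // newest first
}

// The component then shrinks to wiring:
//   const visible = filterVisibleItems(items, query);
//   return <ul>{visible.map((it) => <li key={it.id}>{it.label}</li>)}</ul>;
```

Extracting the pure part is what makes the refactor verifiable: the helper can be unit tested without rendering anything.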
Generate Vitest + React Testing Library tests for Dify frontend components, hooks, and utilities. Triggers on testing, spec files, coverage, Vitest, RTL, unit tests, integration tests, or write/review test requests.
Guide for implementing oRPC contract-first API patterns in Dify frontend. Trigger when creating or updating contracts in web/contract, wiring router composition, integrating TanStack Query with typed contracts, migrating legacy service calls to oRPC, or deciding whether to call queryOptions directly vs extracting a helper or use-* hook in web/service.
Zustand state management guide. Use when working with store code (src/store/**), implementing actions, managing state, or creating slices. Triggers on Zustand store development, state management questions, or action implementation.
Data fetching architecture guide using Service layer + Zustand Store + SWR. Use when implementing data fetching, creating services, working with store hooks, or migrating from useEffect. Triggers on data loading, API calls, service creation, or store data fetching tasks.
Testing guide using Vitest. Use when writing tests (.test.ts, .test.tsx), fixing failing tests, improving test coverage, or debugging test issues. Triggers on test creation, test debugging, mock setup, or test-related questions.
Guide for adding keyboard shortcuts. Use when implementing new hotkeys, registering shortcuts, or working with keyboard interactions. Triggers on hotkey implementation or keyboard shortcut tasks.
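The registration pattern behind most hotkey systems can be sketched without any library: normalize a `KeyboardEvent`-like object into a combo string, then dispatch to a registered handler. All names here are illustrative, not from a specific hotkey package:

```typescript
// Minimal shortcut registry, dependency-free.
export interface KeyCombo {
  key: string;
  ctrlKey?: boolean;
  metaKey?: boolean;
  shiftKey?: boolean;
}

// Normalize Ctrl/Cmd into a platform-neutral "mod" prefix.
export function comboId(e: KeyCombo): string {
  const parts: string[] = [];
  if (e.ctrlKey || e.metaKey) parts.push("mod");
  if (e.shiftKey) parts.push("shift");
  parts.push(e.key.toLowerCase());
  return parts.join("+");
}

const handlers = new Map<string, () => void>();

export function registerShortcut(id: string, fn: () => void): void {
  handlers.set(id, fn);
}

// Returns true when a handler consumed the event.
export function dispatch(e: KeyCombo): boolean {
  const fn = handlers.get(comboId(e));
  if (!fn) return false;
  fn();
  return true;
}

// Browser wiring (left as a comment so the sketch stays environment-free):
//   window.addEventListener("keydown", (e) => {
//     if (dispatch(e)) e.preventDefault();
//   });
```

Normalizing Ctrl and Cmd into one `mod` prefix is the usual way to keep shortcut definitions portable across macOS and other platforms.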
A space for work and life: find, build, and collaborate with agent teammates that grow with you. It takes the agent harness further, enabling multi-agent collaboration, effortless agent team design, and agents as the unit of work interaction.
Answer questions about the AI SDK and help build AI-powered features. Use when developers: (1) Ask about AI SDK functions like generateText, streamText, ToolLoopAgent, embed, or tools, (2) Want to build AI agents, chatbots, RAG systems, or text generation features, (3) Have questions about AI providers (OpenAI, Anthropic, Google, etc.), streaming, tool calling, structured output, or embeddings, (4) Use React hooks like useChat or useCompletion. Triggers on: AI SDK, Vercel AI SDK, generateText, streamText, add AI to my app, build an agent, tool calling, structured output, useChat.
If This Page Is Close, Keep Narrowing With Adjacent Authority Pages
Do not reset back to the generic directory. Move sideways through these adjacent high-intent collections to narrow the shortlist toward the install path that best matches your team.
This collection should not keep users browsing forever. These three questions explain how to shortlist, install, and validate the next step.
What workflows are these collections built for?
They center on workflow automation, process automation, document tasks, data workflows, and reusable skill stacks.
How do collections differ from the main skills directory?
The skills directory is suited to direct search and filtering, while collections are suited to finding complementary skill bundles for a complete workflow.
Can these collections be installed into Claude Code or Cursor?
Yes. The skills in this collection generally work through a unified install flow in Claude Code, Cursor, Windsurf, and other supported environments.
Additional Recovery Paths
Use These Additional Paths If You Need One More Step To Narrow The Decision
These are the supporting surfaces for this collection after the install direction is clear and the primary next paths have already narrowed the decision.