ml-api-endpoint — developer-tools
Tags: markups, community, developer-tools, ide-skills, live-preview, markdown, security-first, Claude Code, Cursor, Windsurf

v1.0.0

About This Skill

Suited for AI agents that need to deploy secure, stateless machine learning API endpoints with FastAPI. An ML API expert: use it for model serving, inference endpoints, FastAPI, and ML deployment.

# Core Topics

Nir-Bhay
Updated: 3/7/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 9/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review

Review Score: 9/11
Quality Score: 51
Canonical Locale: en
Detected Body Locale: en

Suited for AI agents that need to deploy secure, stateless machine learning API endpoints with FastAPI. An ML API expert: use it for model serving, inference endpoints, FastAPI, and ML deployment.

Why Use This Skill

Gives agents the ability to design and deploy versioned APIs with FastAPI and Pydantic, perform strict input validation, standardize success and error response formats, and plan a robust versioning strategy for model updates.

Best For

Suited for AI agents that need to deploy secure, stateless machine learning API endpoints with FastAPI.

Actionable Use Cases for ml-api-endpoint

Deploying stateless machine learning models as APIs
Implementing consistent response formats for success and error handling
Validating inputs before inference to ensure safe API interactions

! Security & Limitations

  • Requires a Python environment.
  • Depends on the FastAPI and Pydantic libraries.
  • A stateless design may not suit every machine learning application.

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is supporting source material from the upstream repository. Use the Killer-Skills review above as the primary decision layer.

Labs Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

Boot Container Sandbox

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is ml-api-endpoint?

Suited for AI agents that need to deploy secure, stateless machine learning API endpoints with FastAPI. An ML API expert: use it for model serving, inference endpoints, FastAPI, and ML deployment.

How do I install ml-api-endpoint?

Run the command: npx killer-skills add Nir-Bhay/markups/ml-api-endpoint. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for ml-api-endpoint?

Key use cases include: deploying stateless machine learning models as APIs, implementing consistent response formats for success and error handling, and validating inputs before inference to ensure safe API interactions.

Which IDEs are compatible with ml-api-endpoint?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for ml-api-endpoint?

Requires a Python environment. Depends on the FastAPI and Pydantic libraries. A stateless design may not suit every machine learning application.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add Nir-Bhay/markups/ml-api-endpoint. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use ml-api-endpoint immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Imported Repository Instructions

The section below is supporting source material from the upstream repository. Use the Killer-Skills review above as the primary decision layer.

Supporting Evidence

ml-api-endpoint

Install ml-api-endpoint, an AI agent skill for AI agent workflows and automation. Works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md
Readonly

ML API Endpoint Expert

Expert in designing and deploying machine learning API endpoints.

Core Principles

API Design

  • Stateless Design: Each request contains all necessary information
  • Consistent Response Format: Standardize success/error structures
  • Versioning Strategy: Plan for model updates
  • Input Validation: Rigorous validation before inference
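The "Consistent Response Format" principle above can be sketched as a shared envelope that success and error paths both return. This is a hypothetical shape for illustration, not the upstream skill's exact schema:

```python
from dataclasses import dataclass, asdict
from typing import Any, Optional

@dataclass
class ApiResponse:
    """One envelope for both success and error payloads."""
    success: bool
    data: Any = None
    error: Optional[str] = None
    model_version: str = "v1"

def ok(payload: Any, version: str = "v1") -> dict:
    # Success responses carry data, leave error as None
    return asdict(ApiResponse(success=True, data=payload, model_version=version))

def fail(message: str, version: str = "v1") -> dict:
    # Error responses carry a message, leave data as None
    return asdict(ApiResponse(success=False, error=message, model_version=version))

print(ok({"prediction": 0.92}))
print(fail("Expected 10 features"))
```

Because both helpers emit identical keys, clients can branch on `success` without special-casing error payloads, and `model_version` makes the versioning strategy visible in every response.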

FastAPI Implementation

Basic ML Endpoint

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, validator
import joblib
import numpy as np
import uuid

app = FastAPI(title="ML Model API", version="1.0.0")

model = None

@app.on_event("startup")
async def load_model():
    global model
    model = joblib.load("model.pkl")

class PredictionInput(BaseModel):
    features: list[float]

    @validator('features')
    def validate_features(cls, v):
        if len(v) != 10:
            raise ValueError('Expected 10 features')
        return v

class PredictionResponse(BaseModel):
    prediction: float
    confidence: float | None = None
    model_version: str
    request_id: str

def generate_request_id() -> str:
    # Correlation id attached to every response
    return uuid.uuid4().hex

@app.post("/predict", response_model=PredictionResponse)
async def predict(input_data: PredictionInput):
    features = np.array([input_data.features])
    prediction = model.predict(features)[0]

    return PredictionResponse(
        prediction=float(prediction),
        model_version="v1",
        request_id=generate_request_id()
    )
```

Batch Prediction

```python
class BatchInput(BaseModel):
    instances: list[list[float]]

    @validator('instances')
    def validate_batch_size(cls, v):
        if len(v) > 100:
            raise ValueError('Batch size cannot exceed 100')
        return v

@app.post("/predict/batch")
async def batch_predict(input_data: BatchInput):
    features = np.array(input_data.instances)
    predictions = model.predict(features)

    return {
        "predictions": predictions.tolist(),
        "count": len(predictions)
    }
```

Performance Optimization

Model Caching

```python
import hashlib
import time

class ModelCache:
    def __init__(self, ttl_seconds=300):
        self.cache = {}
        self.ttl = ttl_seconds

    def get(self, features):
        key = hashlib.md5(str(features).encode()).hexdigest()
        if key in self.cache:
            result, timestamp = self.cache[key]
            if time.time() - timestamp < self.ttl:
                return result
        return None

    def set(self, features, prediction):
        key = hashlib.md5(str(features).encode()).hexdigest()
        self.cache[key] = (prediction, time.time())
```
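The check-cache-then-predict flow the cache enables can be sketched as follows. This is a simplified, self-contained re-implementation for demonstration (the class name `TTLPredictionCache` and the stand-in prediction value are illustrative, not part of the upstream skill):

```python
import hashlib
import time

class TTLPredictionCache:
    """Simplified stand-in for the ModelCache above, for demonstration."""
    def __init__(self, ttl_seconds=300):
        self.cache = {}
        self.ttl = ttl_seconds

    def _key(self, features):
        # Hash the stringified feature vector into a stable cache key
        return hashlib.md5(str(features).encode()).hexdigest()

    def get(self, features):
        entry = self.cache.get(self._key(features))
        if entry is not None:
            result, timestamp = entry
            if time.time() - timestamp < self.ttl:
                return result
        return None

    def set(self, features, prediction):
        self.cache[self._key(features)] = (prediction, time.time())

cache = TTLPredictionCache(ttl_seconds=300)
features = [0.1, 0.2, 0.3]

# Typical request path: consult the cache before invoking the model
prediction = cache.get(features)
if prediction is None:
    prediction = 0.87  # stand-in for model.predict(...)
    cache.set(features, prediction)
```

Note this is only worthwhile for deterministic models, as the best-practices list below points out, and an unbounded dict will grow without eviction in a long-running process.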

Health Checks

```python
# request_counter, avg_latency, and error_rate are assumed to be
# maintained elsewhere in the application (e.g. by middleware).
@app.get("/health")
async def health_check():
    return {
        "status": "healthy",
        "model_loaded": model is not None
    }

@app.get("/metrics")
async def get_metrics():
    return {
        "requests_total": request_counter,
        "prediction_latency_avg": avg_latency,
        "error_rate": error_rate
    }
```
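One way to back the `/metrics` endpoint above is a small in-process accumulator. This is a hypothetical helper, not part of the upstream skill; the class and method names are assumptions:

```python
class Metrics:
    """Accumulates per-request stats for a /metrics-style endpoint."""
    def __init__(self):
        self.requests_total = 0
        self.errors = 0
        self.latencies = []

    def record(self, latency_s, error=False):
        # Called once per handled request
        self.requests_total += 1
        self.latencies.append(latency_s)
        if error:
            self.errors += 1

    def snapshot(self):
        avg = sum(self.latencies) / len(self.latencies) if self.latencies else 0.0
        rate = self.errors / self.requests_total if self.requests_total else 0.0
        return {
            "requests_total": self.requests_total,
            "prediction_latency_avg": avg,
            "error_rate": rate,
        }

m = Metrics()
m.record(0.02)
m.record(0.04, error=True)
print(m.snapshot())
```

In a real deployment with multiple uvicorn workers, each worker holds its own counters, so a shared store or a library like `prometheus_client` is usually preferable.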

Docker Deployment

```dockerfile
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]
```

Best Practices

  • Use async/await for I/O operations
  • Validate data types, ranges, and business rules
  • Cache predictions for deterministic models
  • Handle model failures with fallback responses
  • Log predictions, latencies, and errors
  • Support multiple model versions
  • Set memory and CPU limits
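The "handle model failures with fallback responses" practice above can be sketched as a thin wrapper. The function names and the fallback value here are illustrative assumptions, not part of the upstream skill:

```python
def predict_with_fallback(predict_fn, features, fallback=None):
    """Run a prediction, degrading to a safe default on failure."""
    try:
        return {"prediction": predict_fn(features), "degraded": False}
    except Exception:
        # Serve a safe default instead of failing the request outright;
        # the "degraded" flag lets clients and monitoring tell the cases apart.
        return {"prediction": fallback, "degraded": True}

def flaky_model(features):
    raise RuntimeError("model backend unavailable")

print(predict_with_fallback(flaky_model, [1.0], fallback=0.0))
```

In a real endpoint you would also log the exception (per the logging practice above) rather than swallowing it silently.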

Related Skills

Looking for an alternative to ml-api-endpoint or another community skill for your workflow? Explore these related open-source skills.

View All

openclaw-release-maintainer — openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

333.8k · Artificial Intelligence

widget-generator — f

Generates customizable widget plugins for the prompts.chat feed system

149.6k · Artificial Intelligence

flags — vercel

The React framework

138.4k · Browser

pr-review — pytorch

Tensors and dynamic neural networks in Python with strong GPU acceleration

98.6k · Developer