code-review
Open Source framework for voice and multimodal conversational AI
Browse and install thousands of AI Agent skills in the Killer-Skills directory, which brings installable skills into one place: filter by search, category, topic, and official source, then install them directly into Claude Code, Cursor, Windsurf, and other supported environments.
Create story examples for components. Use when writing stories, creating examples, or demonstrating component usage.
Use Transformers.js to run state-of-the-art machine learning models directly in JavaScript/TypeScript. Supports NLP (text classification, translation, summarization), computer vision (image classification, object detection), audio (speech recognition, audio classification), and multimodal tasks. Works in Node.js and browsers (with WebGPU/WASM) using pre-trained models from Hugging Face Hub.
huggingface-jobs is a skill for running general-purpose compute workloads on Hugging Face infrastructure, covering UV scripts, Docker-based jobs, and hardware selection.
huggingface-llm-trainer is a skill for training language models using TRL (Transformer Reinforcement Learning) on Hugging Face Jobs infrastructure.
huggingface-paper-publisher is a skill for publishing and managing AI research papers on Hugging Face Hub, supporting paper creation, model and dataset linking, and authorship verification.
Trains and fine-tunes vision models for object detection (D-FINE, RT-DETR v2, DETR, YOLOS), image classification (timm models — MobileNetV3, MobileViT, ResNet, ViT/DINOv3 — plus any Transformers classifier), and SAM/SAM2 segmentation using Hugging Face Transformers on Hugging Face Jobs cloud GPUs. Covers COCO-format dataset preparation, Albumentations augmentation, mAP/mAR evaluation, accuracy metrics, SAM segmentation with bbox/point prompts, DiceCE loss, hardware selection, cost estimation, Trackio monitoring, and Hub persistence. Use when users mention training object detection, image classification, SAM, SAM2, segmentation, image matting, DETR, D-FINE, RT-DETR, ViT, timm, MobileNet, ResNet, bounding box models, or fine-tuning vision models on Hugging Face Jobs.
huggingface-papers is a skill that allows users to access and summarize AI research papers from Hugging Face and arXiv, providing structured metadata and links to related models and datasets.
Track and visualize ML training experiments with Trackio. Use when logging metrics during training (Python API), firing alerts for training diagnostics, or retrieving/analyzing logged metrics (CLI).
huggingface-community-evals is a skill for running local evaluations of Hugging Face Hub models using inspect-ai and lighteval.
Build Gradio web UIs and demos in Python. Use when creating or editing Gradio apps, components, event listeners, layouts, or chatbots.
Use this skill for Hugging Face Dataset Viewer API workflows that fetch subset/split metadata, paginate rows, search text, apply filters, download parquet URLs, and read size or statistics.
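As a rough sketch of the row-pagination workflow this skill wraps, the snippet below builds a `/rows` request URL against the public Dataset Viewer endpoint (`https://datasets-server.huggingface.co`); the dataset, config, and split names are illustrative placeholders. Fetch the resulting URL with any HTTP client to receive a JSON page of rows.

```python
from urllib.parse import urlencode

BASE = "https://datasets-server.huggingface.co"

def rows_url(dataset: str, config: str, split: str,
             offset: int = 0, length: int = 100) -> str:
    """Build a /rows request URL for the Dataset Viewer API.

    The API returns at most 100 rows per request, so paginate by
    advancing `offset` in steps of `length`.
    """
    params = urlencode({
        "dataset": dataset,  # e.g. "ibm/duorc" (illustrative)
        "config": config,    # subset name, e.g. "SelfRC"
        "split": split,      # "train", "validation", ...
        "offset": offset,
        "length": length,
    })
    return f"{BASE}/rows?{params}"

# Page 1: rows 0-9 of an example dataset
url = rows_url("ibm/duorc", "SelfRC", "train", offset=0, length=10)
print(url)
```

The same URL-building pattern applies to the other endpoints the skill covers (`/splits`, `/search`, `/filter`, `/parquet`, `/size`, `/statistics`), which take the same `dataset`/`config`/`split` query parameters.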