huggingface-paper-publisher
[Official] The Hugging Face Paper Publisher skill is an AI agent skill for publishing and managing research papers on the Hugging Face Hub.
Browse AI and ML workflow skills for model integration, prompt engineering, evaluations, and LLM automation across major IDEs.
This directory brings installable AI Agent skills into one place so you can filter by search, category, topic, and official source, then install them directly into Claude Code, Cursor, Windsurf, and other supported environments.
The Hugging Face Paper Publisher skill is an AI agent skill for publishing and managing research papers on the Hugging Face Hub.
Hugging Face Vision Trainer skill: train and fine-tune vision models on cloud GPUs via Hugging Face Jobs.
The Hugging Face Paper Pages skill is an AI agent skill for searching and reading AI research papers.
Track and visualize ML training experiments with Trackio. Use when logging metrics during training (Python API), firing alerts for training diagnostics, or retrieving/analyzing logged metrics (CLI). Supports real-time dashboard visualization, alerts with webhooks, HF Space syncing, and JSON output for automation.
The Hugging Face Community Evals skill is an AI agent skill for running evaluations of Hugging Face Hub models on local hardware.
Build Gradio web UIs and demos in Python. Use when creating or editing Gradio apps, components, event listeners, layouts, or chatbots.
Hugging Face Dataset Viewer skill: explore and extract datasets, with support for pagination, search, and filtering.
The Hugging Face Hub CLI (hf) is a tool for downloading, uploading, and managing repositories, models, datasets, and Spaces.
Use Transformers.js to run state-of-the-art machine learning models directly in JavaScript/TypeScript. Supports NLP (text classification, translation, summarization), computer vision (image classification, object detection), audio (speech recognition, audio classification), and multimodal tasks. Works in Node.js and browsers (with WebGPU/WASM) using pre-trained models from Hugging Face Hub.
Create and manage datasets on Hugging Face Hub. Supports initializing repos, defining configs/system prompts, streaming row updates, and SQL-based dataset querying/transformation. Designed to work alongside HF MCP server for comprehensive dataset workflows.
Add and manage evaluation results in Hugging Face model cards. Supports extracting eval tables from README content, importing scores from Artificial Analysis API, and running custom model evaluations with vLLM/lighteval. Works with the model-index metadata format.
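For reference, model-index metadata lives in the model card's YAML front matter. A sketch of its shape; the model name, dataset, and score below are illustrative:

```yaml
model-index:
- name: example-model            # illustrative model name
  results:
  - task:
      type: text-classification  # task identifier
    dataset:
      name: IMDb                 # human-readable dataset name
      type: imdb                 # Hub dataset id
    metrics:
    - type: accuracy
      value: 0.91                # illustrative score
```

Entries in this format are what the skill extracts from README tables or populates from imported and freshly run evaluations.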
This skill should be used when users want to run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, cost estimation, authentication with tokens, secrets management, timeout configuration, and result persistence. Designed for general-purpose compute workloads including data processing, inference, experiments, batch jobs, and any Python-based tasks. Should be invoked for tasks involving cloud compute, GPU workloads, or when users mention running jobs on Hugging Face infrastructure without local setup.