
hugging-face-cli

v1.0.0

About this Skill

Ideal for AI Agents requiring seamless integration with the Hugging Face Hub for model and dataset management. hugging-face-cli is a command-line interface for interacting with the Hugging Face Hub, providing features such as repository creation, file transfers, and compute job management.

Features

Executes `hf auth login` for authentication
Downloads models and datasets using `hf download <repo_id>`
Uploads files to Hub repositories with `hf upload <repo_id>`
Creates repositories using `hf repo create <name>`
Manages local cache and runs compute jobs on HF infrastructure


Installation
Universal Install (Auto-Detect) for Cursor, Windsurf, or VS Code:
> npx killer-skills add huggingface/skills/hugging-face-cli

Agent Capability Analysis

The hugging-face-cli MCP Server by huggingface is an official open-source integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Ideal for AI Agents requiring seamless integration with the Hugging Face Hub for model and dataset management.

Core Value

Empowers agents to execute Hugging Face Hub operations using the `hf` CLI, enabling direct access to model downloads, uploads, repository creation, and cloud compute jobs, while also handling authentication and local cache management.

Capabilities Granted for hugging-face-cli MCP Server

Downloading models and datasets from the Hugging Face Hub
Uploading custom models and datasets to private repositories
Managing local cache for efficient model versioning
Running compute jobs on Hugging Face infrastructure for scalable processing

! Prerequisites & Limits

  • Requires a Hugging Face account for authentication
  • Behavior depends on the installed `hf` CLI version
  • Limited to Hugging Face Hub services and models
Project files: SKILL.md (6.9 KB), .cursorrules (1.2 KB), package.json (240 B)

SKILL.md

Hugging Face CLI

The hf CLI provides direct terminal access to the Hugging Face Hub for downloading, uploading, and managing repositories, cache, and compute resources.

Quick Command Reference

| Task | Command |
| --- | --- |
| Login | `hf auth login` |
| Download model | `hf download <repo_id>` |
| Download to folder | `hf download <repo_id> --local-dir ./path` |
| Upload folder | `hf upload <repo_id> . .` |
| Create repo | `hf repo create <name>` |
| Create tag | `hf repo tag create <repo_id> <tag>` |
| Delete files | `hf repo-files delete <repo_id> <files>` |
| List cache | `hf cache ls` |
| Remove from cache | `hf cache rm <repo_or_revision>` |
| List models | `hf models ls` |
| Get model info | `hf models info <model_id>` |
| List datasets | `hf datasets ls` |
| Get dataset info | `hf datasets info <dataset_id>` |
| List spaces | `hf spaces ls` |
| Get space info | `hf spaces info <space_id>` |
| List endpoints | `hf endpoints ls` |
| Run GPU job | `hf jobs run --flavor a10g-small <image> <cmd>` |
| Environment info | `hf env` |

Core Commands

Authentication

```bash
hf auth login                    # Interactive login
hf auth login --token $HF_TOKEN  # Non-interactive
hf auth whoami                   # Check current user
hf auth list                     # List stored tokens
hf auth switch                   # Switch between tokens
hf auth logout                   # Log out
```

Download

```bash
hf download <repo_id>                            # Full repo to cache
hf download <repo_id> file.safetensors           # Specific file
hf download <repo_id> --local-dir ./models       # To local directory
hf download <repo_id> --include "*.safetensors"  # Filter by pattern
hf download <repo_id> --repo-type dataset        # Dataset
hf download <repo_id> --revision v1.0            # Specific version
```

Upload

```bash
hf upload <repo_id> . .                         # Current dir to root
hf upload <repo_id> ./models /weights           # Folder to path
hf upload <repo_id> model.safetensors           # Single file
hf upload <repo_id> . . --repo-type dataset     # Dataset
hf upload <repo_id> . . --create-pr             # Create PR
hf upload <repo_id> . . --commit-message="msg"  # Custom message
```

Repository Management

```bash
hf repo create <name>                                       # Create model repo
hf repo create <name> --repo-type dataset                   # Create dataset
hf repo create <name> --private                             # Private repo
hf repo create <name> --repo-type space --space_sdk gradio  # Gradio space
hf repo delete <repo_id>                                    # Delete repo
hf repo move <from_id> <to_id>                              # Move repo to new namespace
hf repo settings <repo_id> --private true                   # Update repo settings
hf repo list --repo-type model                              # List repos
hf repo branch create <repo_id> release-v1                  # Create branch
hf repo branch delete <repo_id> release-v1                  # Delete branch
hf repo tag create <repo_id> v1.0                           # Create tag
hf repo tag list <repo_id>                                  # List tags
hf repo tag delete <repo_id> v1.0                           # Delete tag
```

Delete Files from Repo

```bash
hf repo-files delete <repo_id> folder/   # Delete folder
hf repo-files delete <repo_id> "*.txt"   # Delete with pattern
```

Cache Management

```bash
hf cache ls                  # List cached repos
hf cache ls --revisions      # Include individual revisions
hf cache rm model/gpt2       # Remove cached repo
hf cache rm <revision_hash>  # Remove cached revision
hf cache prune               # Remove detached revisions
hf cache verify gpt2         # Verify checksums from cache
```

Browse Hub

```bash
# Models
hf models ls                                        # List top trending models
hf models ls --search "MiniMax" --author MiniMaxAI  # Search models
hf models ls --filter "text-generation" --limit 20  # Filter by task
hf models info MiniMaxAI/MiniMax-M2.1               # Get model info

# Datasets
hf datasets ls                                       # List top trending datasets
hf datasets ls --search "finepdfs" --sort downloads  # Search datasets
hf datasets info HuggingFaceFW/finepdfs              # Get dataset info

# Spaces
hf spaces ls                            # List top trending spaces
hf spaces ls --filter "3d" --limit 10   # Filter by 3D modeling spaces
hf spaces info enzostvs/deepsite        # Get space info
```

Jobs (Cloud Compute)

```bash
hf jobs run python:3.12 python script.py      # Run on CPU
hf jobs run --flavor a10g-small <image> <cmd> # Run on GPU
hf jobs run --secrets HF_TOKEN <image> <cmd>  # With HF token
hf jobs ps                                    # List jobs
hf jobs logs <job_id>                         # View logs
hf jobs cancel <job_id>                       # Cancel job
```

Inference Endpoints

```bash
hf endpoints ls                          # List endpoints
hf endpoints deploy my-endpoint \
  --repo openai/gpt-oss-120b \
  --framework vllm \
  --accelerator gpu \
  --instance-size x4 \
  --instance-type nvidia-a10g \
  --region us-east-1 \
  --vendor aws
hf endpoints describe my-endpoint        # Show endpoint details
hf endpoints pause my-endpoint           # Pause endpoint
hf endpoints resume my-endpoint          # Resume endpoint
hf endpoints scale-to-zero my-endpoint   # Scale to zero
hf endpoints delete my-endpoint --yes    # Delete endpoint
```

Compute flavors (CPU and GPU): cpu-basic, cpu-upgrade, cpu-xl, t4-small, t4-medium, l4x1, l4x4, l40sx1, l40sx4, l40sx8, a10g-small, a10g-large, a10g-largex2, a10g-largex4, a100-large, h100, h100x8
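When scripting `hf jobs run`, the flavor is just another flag. A hypothetical tier-to-flavor helper is sketched below; the tier names and the grouping are illustrative assumptions, only the flavor strings come from the list above:

```python
# Illustrative tiers mapped to flavor names from the list above;
# the grouping is an assumption, not official sizing guidance.
FLAVOR_BY_TIER = {
    "cpu": "cpu-basic",
    "small-gpu": "t4-small",
    "medium-gpu": "a10g-small",
    "large-gpu": "a100-large",
}

def jobs_run_cmd(image: str, command: list[str], tier: str = "cpu") -> list[str]:
    """Build `hf jobs run --flavor FLAVOR <image> <cmd...>` as an argv list."""
    cmd = ["hf", "jobs", "run", "--flavor", FLAVOR_BY_TIER[tier]]
    return cmd + [image, *command]

print(jobs_run_cmd("python:3.12", ["python", "train.py"], tier="medium-gpu"))
# → ['hf', 'jobs', 'run', '--flavor', 'a10g-small', 'python:3.12', 'python', 'train.py']
```

Run the resulting list with `subprocess.run(..., check=True)`; keeping the mapping in one place makes it easy to retarget a job to larger hardware.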

Common Patterns

Download and Use Model Locally

```bash
# Download to local directory for deployment
hf download meta-llama/Llama-3.2-1B-Instruct --local-dir ./model

# Or use cache and get path
MODEL_PATH=$(hf download meta-llama/Llama-3.2-1B-Instruct --quiet)
```

Publish Model/Dataset

```bash
hf repo create my-username/my-model --private
hf upload my-username/my-model ./output . --commit-message="Initial release"
hf repo tag create my-username/my-model v1.0
```

Sync Space with Local

```bash
hf upload my-username/my-space . . --repo-type space \
  --exclude="logs/*" --delete="*" --commit-message="Sync"
```

Check Cache Usage

```bash
hf cache ls             # See all cached repos and sizes
hf cache rm model/gpt2  # Remove a repo from cache
```

Key Options

  • --repo-type: model (default), dataset, space
  • --revision: Branch, tag, or commit hash
  • --token: Override authentication
  • --quiet: Output only essential info (paths/URLs)
