# Hugging Face CLI

The `hf` CLI provides direct terminal access to the Hugging Face Hub for downloading, uploading, and managing repositories, cache, and compute resources.
## Quick Command Reference
| Task | Command |
|---|---|
| Login | hf auth login |
| Download model | hf download <repo_id> |
| Download to folder | hf download <repo_id> --local-dir ./path |
| Upload folder | hf upload <repo_id> . . |
| Create repo | hf repo create <name> |
| Create tag | hf repo tag create <repo_id> <tag> |
| Delete files | hf repo-files delete <repo_id> <files> |
| List cache | hf cache ls |
| Remove from cache | hf cache rm <repo_or_revision> |
| List models | hf models ls |
| Get model info | hf models info <model_id> |
| List datasets | hf datasets ls |
| Get dataset info | hf datasets info <dataset_id> |
| List spaces | hf spaces ls |
| Get space info | hf spaces info <space_id> |
| List endpoints | hf endpoints ls |
| Run GPU job | hf jobs run --flavor a10g-small <image> <cmd> |
| Environment info | hf env |
## Core Commands

### Authentication

```bash
hf auth login                    # Interactive login
hf auth login --token $HF_TOKEN  # Non-interactive
hf auth whoami                   # Check current user
hf auth list                     # List stored tokens
hf auth switch                   # Switch between tokens
hf auth logout                   # Log out
```
### Download

```bash
hf download <repo_id>                            # Full repo to cache
hf download <repo_id> file.safetensors           # Specific file
hf download <repo_id> --local-dir ./models       # To local directory
hf download <repo_id> --include "*.safetensors"  # Filter by pattern
hf download <repo_id> --repo-type dataset        # Dataset
hf download <repo_id> --revision v1.0            # Specific version
```
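When scripting many downloads, it can help to assemble the command from options rather than hand-build strings; below is a minimal stdlib sketch (the `build_download_cmd` helper is hypothetical, not part of the CLI, and covers only the flags shown above):

```python
import shlex

def build_download_cmd(repo_id, filename=None, local_dir=None,
                       include=None, repo_type=None, revision=None):
    """Assemble an `hf download` invocation from the options documented above."""
    cmd = ["hf", "download", repo_id]
    if filename:
        cmd.append(filename)          # specific file within the repo
    if local_dir:
        cmd += ["--local-dir", local_dir]
    if include:
        cmd += ["--include", include]  # glob pattern, e.g. "*.safetensors"
    if repo_type:
        cmd += ["--repo-type", repo_type]
    if revision:
        cmd += ["--revision", revision]
    return cmd

# shlex.join produces a safely quoted string for copy-paste or logging
print(shlex.join(build_download_cmd("gpt2", include="*.safetensors",
                                    local_dir="./models")))
```

Returning a list (rather than a string) also makes the command safe to pass directly to `subprocess.run` without shell quoting issues.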
### Upload

```bash
hf upload <repo_id> . .                          # Current dir to root
hf upload <repo_id> ./models /weights            # Folder to path
hf upload <repo_id> model.safetensors            # Single file
hf upload <repo_id> . . --repo-type dataset      # Dataset
hf upload <repo_id> . . --create-pr              # Create PR
hf upload <repo_id> . . --commit-message="msg"   # Custom message
```
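Before a large upload it can be useful to preview locally which files an `--include`/`--exclude` pattern would select. The sketch below approximates that with stdlib glob matching; the helper name and its exact pattern semantics are assumptions and may differ from the CLI's own matching rules:

```python
import fnmatch
from pathlib import Path

def preview_upload(folder, include=None, exclude=None):
    """Approximate which files `hf upload --include/--exclude` would send."""
    # Collect relative, POSIX-style paths for every file under the folder
    files = sorted(str(p.relative_to(folder)).replace("\\", "/")
                   for p in Path(folder).rglob("*") if p.is_file())
    if include:
        files = [f for f in files if fnmatch.fnmatch(f, include)]
    if exclude:
        files = [f for f in files if not fnmatch.fnmatch(f, exclude)]
    return files
```

Running this before `hf upload` catches overly broad patterns (e.g. an `--exclude` that silently drops your weights) without touching the Hub.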
### Repository Management

```bash
hf repo create <name>                          # Create model repo
hf repo create <name> --repo-type dataset      # Create dataset
hf repo create <name> --private                # Private repo
hf repo create <name> --repo-type space --space_sdk gradio  # Gradio space
hf repo delete <repo_id>                       # Delete repo
hf repo move <from_id> <to_id>                 # Move repo to new namespace
hf repo settings <repo_id> --private true      # Update repo settings
hf repo list --repo-type model                 # List repos
hf repo branch create <repo_id> release-v1     # Create branch
hf repo branch delete <repo_id> release-v1     # Delete branch
hf repo tag create <repo_id> v1.0              # Create tag
hf repo tag list <repo_id>                     # List tags
hf repo tag delete <repo_id> v1.0              # Delete tag
```
### Delete Files from Repo

```bash
hf repo-files delete <repo_id> folder/   # Delete folder
hf repo-files delete <repo_id> "*.txt"   # Delete with pattern
```
### Cache Management

```bash
hf cache ls                # List cached repos
hf cache ls --revisions    # Include individual revisions
hf cache rm model/gpt2     # Remove cached repo
hf cache rm <revision_hash>  # Remove cached revision
hf cache prune             # Remove detached revisions
hf cache verify gpt2       # Verify checksums from cache
```
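To check total disk usage without parsing `hf cache ls` output, the cache directory can be measured directly. This sketch assumes the default cache location (`~/.cache/huggingface/hub`, with the parent overridable via the `HF_HOME` environment variable); the helper itself is illustrative, not part of any library:

```python
import os
from pathlib import Path

def hub_cache_size_bytes(cache_dir=None):
    """Sum file sizes under the hub cache (default: ~/.cache/huggingface/hub)."""
    if cache_dir:
        root = Path(cache_dir)
    else:
        # HF_HOME, when set, relocates the whole huggingface dir; hub/ sits inside it
        root = Path(os.environ.get("HF_HOME",
                                   Path.home() / ".cache" / "huggingface")) / "hub"
    if not root.exists():
        return 0
    return sum(p.stat().st_size for p in root.rglob("*") if p.is_file())
```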
### Browse Hub

```bash
# Models
hf models ls                                        # List top trending models
hf models ls --search "MiniMax" --author MiniMaxAI  # Search models
hf models ls --filter "text-generation" --limit 20  # Filter by task
hf models info MiniMaxAI/MiniMax-M2.1               # Get model info

# Datasets
hf datasets ls                                      # List top trending datasets
hf datasets ls --search "finepdfs" --sort downloads # Search datasets
hf datasets info HuggingFaceFW/finepdfs             # Get dataset info

# Spaces
hf spaces ls                                        # List top trending spaces
hf spaces ls --filter "3d" --limit 10               # Filter by 3D modeling spaces
hf spaces info enzostvs/deepsite                    # Get space info
```
### Jobs (Cloud Compute)

```bash
hf jobs run python:3.12 python script.py       # Run on CPU
hf jobs run --flavor a10g-small <image> <cmd>  # Run on GPU
hf jobs run --secrets HF_TOKEN <image> <cmd>   # With HF token
hf jobs ps                                     # List jobs
hf jobs logs <job_id>                          # View logs
hf jobs cancel <job_id>                        # Cancel job
```
### Inference Endpoints

```bash
hf endpoints ls                        # List endpoints
hf endpoints deploy my-endpoint \
  --repo openai/gpt-oss-120b \
  --framework vllm \
  --accelerator gpu \
  --instance-size x4 \
  --instance-type nvidia-a10g \
  --region us-east-1 \
  --vendor aws
hf endpoints describe my-endpoint      # Show endpoint details
hf endpoints pause my-endpoint         # Pause endpoint
hf endpoints resume my-endpoint        # Resume endpoint
hf endpoints scale-to-zero my-endpoint # Scale to zero
hf endpoints delete my-endpoint --yes  # Delete endpoint
```
Flavors (CPU and GPU): cpu-basic, cpu-upgrade, cpu-xl, t4-small, t4-medium, l4x1, l4x4, l40sx1, l40sx4, l40sx8, a10g-small, a10g-large, a10g-largex2, a10g-largex4, a100-large, h100, h100x8
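A job launcher can fail fast on mistyped flavors by validating against the list above before submitting; a minimal sketch (the set mirrors the flavors documented here and may lag behind the live service; the helper is hypothetical):

```python
# Flavors as documented above; the live service may add or retire entries
KNOWN_FLAVORS = {
    "cpu-basic", "cpu-upgrade", "cpu-xl",
    "t4-small", "t4-medium", "l4x1", "l4x4",
    "l40sx1", "l40sx4", "l40sx8",
    "a10g-small", "a10g-large", "a10g-largex2", "a10g-largex4",
    "a100-large", "h100", "h100x8",
}

def jobs_run_cmd(image, command, flavor=None):
    """Build an `hf jobs run` invocation, rejecting unknown flavors early."""
    if flavor is not None and flavor not in KNOWN_FLAVORS:
        raise ValueError(f"unknown flavor: {flavor!r}")
    cmd = ["hf", "jobs", "run"]
    if flavor:
        cmd += ["--flavor", flavor]
    return cmd + [image, *command]
```

Catching the typo locally avoids waiting on a remote submission to fail.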
## Common Patterns

### Download and Use Model Locally

```bash
# Download to local directory for deployment
hf download meta-llama/Llama-3.2-1B-Instruct --local-dir ./model

# Or use the cache and capture the snapshot path
MODEL_PATH=$(hf download meta-llama/Llama-3.2-1B-Instruct --quiet)
```
### Publish Model/Dataset

```bash
hf repo create my-username/my-model --private
hf upload my-username/my-model ./output . --commit-message="Initial release"
hf repo tag create my-username/my-model v1.0
```
### Sync Space with Local

```bash
hf upload my-username/my-space . . --repo-type space \
  --exclude="logs/*" --delete="*" --commit-message="Sync"
```
### Check Cache Usage

```bash
hf cache ls             # See all cached repos and sizes
hf cache rm model/gpt2  # Remove a repo from cache
```
## Key Options

- `--repo-type`: `model` (default), `dataset`, `space`
- `--revision`: Branch, tag, or commit hash
- `--token`: Override authentication
- `--quiet`: Output only essential info (paths/URLs)
## References
- Complete command reference: See references/commands.md
- Workflow examples: See references/examples.md