docker-best-practices — community AI agent skill by armanisadeghi (matrx-sandbox)

v1.0.0

About this Skill

Perfect for containerization agents needing production-grade Docker optimization with minimal size and robust security. Create production-grade Dockerfiles optimized for speed, security, and minimal size. Use when creating or reviewing Dockerfiles, docker-compose files, or when optimizing container images for Python, N

armanisadeghi
Updated: 3/12/2026

Quality Score: 42 — Excellent (Top 5%), based on code quality & docs
Installation
Universal install (auto-detect):

> npx killer-skills add armanisadeghi/matrx-sandbox/docker-best-practices

Supports 19+ platforms, including Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, and more.

Agent Capability Analysis

The docker-best-practices skill by armanisadeghi is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Perfect for Containerization Agents needing production-grade Docker optimization with minimal size and robust security

Core Value

Empowers agents to create lightweight and secure Docker containers using multi-stage builds, pinned versions, and layer caching, ensuring fast builds and minimal attack surface with distroless or slim base images

Capabilities Granted for docker-best-practices

Optimizing Dockerfile configurations for faster builds
Minimizing container size using layer caching and minimal base images
Ensuring robust security with pinned versions and minimal attack surface

Prerequisites & Limits

  • Requires Docker installation and configuration
  • Limited to Docker-based containerization
Project files: SKILL.md (11.0 KB), .cursorrules (1.2 KB), package.json (240 B)

SKILL.md

Docker Best Practices

Production-grade Docker patterns optimized for fast builds, minimal size, and bulletproof security. Every container spins up in seconds, syncs efficiently, and never accumulates bloat.

Core Principles

  1. Multi-stage builds — Separate build dependencies from runtime
  2. Pinned versions — Never use :latest tags
  3. Layer caching — Order instructions from least to most frequently changing
  4. Minimal attack surface — Distroless or slim base images, non-root users
  5. BuildKit features — Cache mounts, secrets, multi-platform builds
  6. Fast startup — Eager dependencies, lazy application code

Quick Reference

Python API Services (FastAPI, Flask)

```dockerfile
# syntax=docker/dockerfile:1.7
FROM python:3.11.11-slim AS base

WORKDIR /app

# Install system dependencies (cached unless changed)
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Create non-root user
RUN groupadd -g 1000 app && \
    useradd -m -u 1000 -g app app

# ─── Build stage ───
FROM base AS builder

RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Install dependencies with cache mount
COPY pyproject.toml ./
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --no-cache-dir .

# ─── Production stage ───
FROM base AS production

COPY --from=builder /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin

COPY . .

USER app
EXPOSE 8000

HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD python -c "import httpx; httpx.get('http://localhost:8000/health', timeout=2)"

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Multi-Runtime Sandbox (Python + Node + Tools)

```dockerfile
# syntax=docker/dockerfile:1.7
FROM ubuntu:22.04.5 AS base

ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    LC_ALL=C.UTF-8

# Single apt layer for all system packages
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y --no-install-recommends \
    # Core utilities
    bash curl wget git jq unzip ca-certificates \
    # Build tools (for native extensions)
    build-essential libffi-dev libssl-dev \
    # FUSE for S3 mount
    fuse libfuse2 \
    # Process management
    tini \
    # Python
    python3.11 python3.11-venv python3.11-dev python3-pip \
    && ln -sf /usr/bin/python3.11 /usr/bin/python3 \
    && ln -sf /usr/bin/python3.11 /usr/bin/python \
    && rm -rf /var/lib/apt/lists/*

# Python package installs with cache mount
RUN --mount=type=cache,target=/root/.cache/pip \
    python3 -m pip install --no-cache-dir --upgrade pip setuptools wheel

# Node.js via official NodeSource script
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
    apt-get install -y --no-install-recommends nodejs && \
    rm -rf /var/lib/apt/lists/* && \
    npm install -g npm@latest

# Create non-root user
RUN groupadd -g 1000 agent && \
    useradd -m -u 1000 -g agent -s /bin/bash agent

USER agent
WORKDIR /home/agent

ENTRYPOINT ["tini", "--"]
CMD ["/bin/bash"]
```

Development vs Production Patterns

```dockerfile
# ─── Development stage (hot reload, debug tools) ───
FROM base AS development

RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --no-cache-dir -e ".[dev]"

COPY . .

# Mount source as volume for hot reload
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--reload"]

# ─── Production stage (minimal, optimized) ───
FROM base AS production

COPY --from=builder /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY . .

USER app

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0"]
```

Essential Patterns

Version Pinning

```dockerfile
# ❌ NEVER use :latest or a floating minor tag
FROM python:3.11-slim

# ✅ ALWAYS pin to patch version
FROM python:3.11.11-slim

# ✅ Pin all base images with digest for immutability
FROM python:3.11.11-slim@sha256:abc123...
```

Check latest versions:

BuildKit Cache Mounts

```dockerfile
# ❌ Old way — downloads every build
RUN pip install -r requirements.txt

# ✅ New way — reuses download cache
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

# ✅ npm cache mount
RUN --mount=type=cache,target=/root/.npm \
    npm ci --prefer-offline

# ✅ apt cache mount (shared across concurrent builds)
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y package-name
```

Layer Ordering for Optimal Caching

```dockerfile
# Install dependencies (rarely change) — cached ✅
COPY pyproject.toml ./
RUN pip install .

# Copy application code (changes often) — only these layers rebuild ✅
COPY . .
```

Health Checks

```dockerfile
# Python API
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD python -c "import httpx; httpx.get('http://localhost:8000/health')"

# Node.js API
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"

# General process check
HEALTHCHECK --interval=30s --timeout=5s CMD pgrep -x python3 || exit 1
```
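One caveat with the Python check above: `httpx.get` succeeds on any HTTP response, including a 500, so an unhealthy-but-responding service still passes. A stricter variant (a sketch, assuming the same `/health` endpoint) raises on non-2xx status so the healthcheck exits nonzero:

```dockerfile
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD python -c "import httpx; httpx.get('http://localhost:8000/health', timeout=2).raise_for_status()"
```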

Security Hardening

```dockerfile
# 1. Non-root user
RUN groupadd -g 1000 app && \
    useradd -m -u 1000 -g app app
USER app

# 2. Read-only root filesystem (in docker-compose or k8s)
# docker run --read-only --tmpfs /tmp ...

# 3. Drop capabilities (in docker-compose)
# security_opt:
#   - no-new-privileges:true
# cap_drop:
#   - ALL

# 4. Minimal base image (distroless for Python/Node) — pin a digest, never :latest
FROM gcr.io/distroless/python3-debian12:nonroot
```
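Distroless images ship without a shell or package manager, so the final stage must receive everything via COPY and use exec-form CMD. A minimal sketch under those assumptions (the `main.py` entry script and `requirements.txt` are illustrative; the `:nonroot` tag runs as an unprivileged user, and you should pin a digest in real builds):

```dockerfile
# syntax=docker/dockerfile:1.7
FROM python:3.11.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
# Install dependencies into a private directory we can copy wholesale
RUN pip install --no-cache-dir --target=/app/deps -r requirements.txt
COPY . .

# ─── Distroless runtime: no shell, no apt, minimal attack surface ───
FROM gcr.io/distroless/python3-debian12:nonroot
WORKDIR /app
COPY --from=builder /app /app
ENV PYTHONPATH=/app/deps
# Exec form only — there is no /bin/sh to interpret a shell-form CMD.
# The distroless python3 image's entrypoint is the interpreter itself.
CMD ["main.py"]
```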

docker-compose.yml Best Practices

```yaml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: production  # Multi-stage target
      cache_from:
        - type=registry,ref=ghcr.io/org/api:buildcache
    image: org/api:latest
    restart: unless-stopped

    # Security
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    read_only: true
    tmpfs:
      - /tmp

    # Resources
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G

    # Health check
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

    # Env
    env_file: .env
    environment:
      - NODE_ENV=production
```

.dockerignore

CRITICAL: Always create .dockerignore to exclude unnecessary files from build context.

# Version control
.git/
.github/
.gitignore

# Python
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
*.so
*.egg-info/
dist/
build/
.venv/
venv/
*.pytest_cache/
.coverage

# Node
node_modules/
npm-debug.log*
.npm/

# IDE
.vscode/
.idea/
*.swp
*.swo
.DS_Store

# Env files (never copy .env to image)
.env
.env.*
!.env.example

# Docs & CI (.github/ is already excluded under version control above)
docs/
*.md
!README.md

Dockerfile Review Checklist

When creating or reviewing a Dockerfile, verify:

Performance

  • Multi-stage build for services with build dependencies
  • --mount=type=cache for pip/npm/apt downloads
  • Layer ordering: system packages → language runtime → dependencies → app code
  • .dockerignore excludes unnecessary files
  • Single apt-get update && install && rm -rf per stage (not multiple RUN commands)

Security

  • Base image pinned to specific version (no :latest)
  • Non-root user for runtime
  • Minimal base image (slim, alpine, or distroless)
  • No secrets in ENV or build args (use --mount=type=secret instead)
  • Health check defined
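The `--mount=type=secret` pattern mentioned above keeps credentials out of image layers, history, and build args. A hedged sketch, assuming a secret with id `pip_token` supplied at build time via `docker build --secret id=pip_token,src=.pip-token .` (the private index URL is illustrative):

```dockerfile
# syntax=docker/dockerfile:1.7
FROM python:3.11.11-slim
WORKDIR /app
COPY requirements.txt .
# The secret is mounted at /run/secrets/<id> for this RUN step only;
# it never lands in a layer or in the image history.
RUN --mount=type=secret,id=pip_token \
    PIP_INDEX_URL="https://__token__:$(cat /run/secrets/pip_token)@pypi.example.com/simple" \
    pip install --no-cache-dir -r requirements.txt
```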

Size

  • --no-install-recommends on apt-get
  • rm -rf /var/lib/apt/lists/* after apt-get
  • --no-cache-dir on pip installs
  • Multi-stage excludes build tools from final image
  • Production stage only includes runtime dependencies

Correctness

  • WORKDIR set before COPY
  • Correct user ownership for copied files if using non-root user
  • EXPOSE matches actual port
  • CMD/ENTRYPOINT is production-appropriate (no --reload)
  • Environment variables have sensible defaults
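Ownership for non-root users is best set at copy time with `COPY --chown`, since a separate `RUN chown -R` rewrites every file into a new layer. A brief sketch (the `app` user name is illustrative):

```dockerfile
FROM python:3.11.11-slim
RUN useradd -m -u 1000 app
WORKDIR /app
# --chown assigns ownership during the copy; a follow-up RUN chown -R
# would duplicate all copied files into an extra layer
COPY --chown=app:app . .
USER app
```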

Common Mistakes to Avoid

❌ Installing dev dependencies in production

```dockerfile
FROM python:3.11-slim
RUN pip install -e ".[dev]"  # Includes pytest, debug tools, etc.
```

✅ Separate dev and prod stages

```dockerfile
FROM base AS development
RUN pip install -e ".[dev]"

FROM base AS production
RUN pip install .  # Prod deps only
```

❌ Running as root

```dockerfile
CMD ["uvicorn", "app:main"]  # Runs as root
```

✅ Create and use non-root user

```dockerfile
RUN useradd -m app
USER app
CMD ["uvicorn", "app:main"]
```

❌ Copying before installing dependencies

```dockerfile
COPY . .  # Invalidates cache on every code change
RUN pip install -r requirements.txt
```

✅ Install dependencies first

```dockerfile
COPY requirements.txt .
RUN pip install -r requirements.txt  # Cached unless requirements.txt changes
COPY . .
```

❌ Using :latest or other floating tags

```dockerfile
FROM python:3-slim  # Breaks reproducibility
```

✅ Pin to patch version

```dockerfile
FROM python:3.11.11-slim
```

❌ Multiple apt-get updates

```dockerfile
RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y git  # Second update is wasted
```

✅ Single apt layer

```dockerfile
RUN apt-get update && apt-get install -y \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*
```

Building Images

Enable BuildKit

```bash
export DOCKER_BUILDKIT=1

# And tell docker-compose to delegate builds to the Docker CLI
export COMPOSE_DOCKER_CLI_BUILD=1
```

Multi-platform builds

```bash
# For ARM64 Macs building x86_64 images
docker buildx build \
  --platform linux/amd64 \
  --tag org/app:latest \
  --push \
  .

# Build for both platforms
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag org/app:latest \
  --push \
  .
```

Cache optimization

```bash
# Use a registry-backed cache for CI
docker buildx build \
  --cache-from type=registry,ref=org/app:buildcache \
  --cache-to type=registry,ref=org/app:buildcache,mode=max \
  --tag org/app:latest \
  --push \
  .
```

Additional Resources

When NOT to Use This Skill

  • Creating docker-compose.yml for simple local dev (just use official images)
  • Debugging running containers (use docker logs, docker exec instead)
  • Container orchestration at scale (Kubernetes/ECS patterns differ significantly)

Compatibility

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer via the Killer-Skills CLI.
