setup-project — Kubernetes deployment

v1.0.0

About this Skill

A skill that automates the deployment of self-hosted services to Kubernetes clusters. Ideal for DevOps Agents requiring automated Kubernetes deployments from GitHub repositories.

Features

Deploys self-hosted services to Kubernetes clusters
Checks README for official Docker images
Supports GitHub repository URLs or project names
Automates deployment with docker-compose.yml files
Verifies self-hosting documentation

ViktorBarzin
Updated: 3/22/2026

Skill Overview

Start with fit, limitations, and setup before diving into the repository.

A skill that automates the deployment of self-hosted services to Kubernetes clusters. Ideal for DevOps Agents requiring automated Kubernetes deployments from GitHub repositories.

Core Value

Empowers agents to deploy self-hosted services to Kubernetes clusters using official Docker images and docker-compose.yml files, guided by the self-hosting documentation in the project's GitHub repository.

Ideal Agent Persona

Ideal for DevOps Agents requiring automated Kubernetes deployments from GitHub repositories.

Capabilities Granted for setup-project

Deploying new services to Kubernetes clusters from GitHub
Automating self-hosted service setup with docker-compose.yml files
Validating self-hosting documentation for Kubernetes deployments

! Prerequisites & Limits

  • Requires access to a Kubernetes cluster
  • Needs a GitHub repository URL or project name
  • Dependent on the presence of official Docker images and self-hosting documentation in the README

About The Source

The section below is adapted from the upstream repository. Use it as supporting material alongside the fit, use-case, and installation summary on this page.


FAQ & Installation Steps


? Frequently Asked Questions

What is setup-project?

A skill that automates the deployment of self-hosted services to Kubernetes clusters. Ideal for DevOps Agents requiring automated Kubernetes deployments from GitHub repositories.

How do I install setup-project?

Run the command: npx killer-skills add ViktorBarzin/infra. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for setup-project?

Key use cases include: Deploying new services to Kubernetes clusters from GitHub, Automating self-hosted service setup with docker-compose.yml files, Validating self-hosting documentation for Kubernetes deployments.

Which IDEs are compatible with setup-project?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for setup-project?

Requires access to a Kubernetes cluster. Needs a GitHub repository URL or project name. Dependent on the presence of official Docker images and self-hosting documentation in the README.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add ViktorBarzin/infra. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use setup-project immediately in the current project.

! Source Notes

This page covers installation and source reference; before using the skill, review the fit, limitations, and upstream repository notes above.

Upstream Repository Material

SKILL.md

Setup Project Skill

Purpose: Deploy a new self-hosted service to the Kubernetes cluster from a GitHub repository.

When to use: User provides a GitHub URL or project name and wants to deploy it to the cluster.

Workflow

1. Research Phase

Input: GitHub repository URL or project name

Actions:

  • Visit the GitHub repository
  • Check the README for:
    • Official Docker image (Docker Hub, ghcr.io, etc.)
    • docker-compose.yml file
    • Self-hosting documentation
    • Required dependencies (PostgreSQL, MySQL, Redis, etc.)
    • Environment variables needed
    • Default ports
    • Storage requirements

Find Docker Image Priority:

  1. Check official documentation for recommended image
  2. Look in docker-compose.yml for image: directive
  3. Check GitHub Container Registry: ghcr.io/<org>/<repo>
  4. Check Docker Hub: <org>/<repo>
  5. Check releases page for container images
  6. Last resort: Build from Dockerfile (avoid if possible)
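The priority order above can be probed mechanically. A minimal sketch, assuming the docker CLI is available; the `candidates` helper and the example org/repo names are hypothetical:

```bash
# Hypothetical helper: print candidate image references in the priority
# order above for a given <org>/<repo> pair.
candidates() {
  local org="$1" repo="$2"
  echo "ghcr.io/${org}/${repo}"    # GitHub Container Registry
  echo "docker.io/${org}/${repo}"  # Docker Hub
}

candidates example-org example-app

# Check each candidate until one resolves (uncomment to run against
# real registries; requires the docker CLI and network access):
# for img in $(candidates example-org example-app); do
#   docker manifest inspect "$img" >/dev/null 2>&1 && echo "found: $img" && break
# done
```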

Extract Configuration:

  • Container port (default port the app listens on)
  • Environment variables (DATABASE_URL, REDIS_HOST, SMTP, etc.)
  • Volume mounts (what data needs persistence)
  • Dependencies (database type, cache, etc.)
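Much of this configuration can be pulled straight from the project's docker-compose.yml. A sketch using a toy compose file (illustrative only; in practice, fetch the real file from the repository under review):

```bash
# Toy docker-compose.yml for illustration.
cat > /tmp/docker-compose.yml <<'EOF'
services:
  app:
    image: ghcr.io/example/app:latest
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/app
    volumes:
      - app-data:/var/lib/app
EOF

# image: tells you what to deploy; ports: gives container_port;
# environment: and volumes: map to env blocks and volume mounts.
grep -E 'image:|ports:|environment:|volumes:' /tmp/docker-compose.yml
```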

2. Database Setup (if needed)

If project requires PostgreSQL:

  • User provides database credentials, or follow the pattern: a <service> user with a secure password
  • Database will be created in shared postgresql.dbaas.svc.cluster.local
  • Connection string format: postgresql://<user>:<password>@postgresql.dbaas.svc.cluster.local:5432/<dbname>

If project requires MySQL:

  • User provides database credentials
  • Database in shared mysql.dbaas.svc.cluster.local
  • Connection string format: mysql://<user>:<password>@mysql.dbaas.svc.cluster.local:3306/<dbname>

If project requires Redis:

  • Use shared Redis: redis.redis.svc.cluster.local:6379
  • No password required

IMPORTANT: Never create databases yourself; always ask the user which credentials to use.
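Connection strings follow the formats above. A small sketch with placeholder values (real credentials come from the user), including one way to smoke-test connectivity from inside the cluster:

```bash
# Placeholder credentials for illustration only.
svc=myapp; password='s3cret'; db=myapp

DATABASE_URL="postgresql://${svc}:${password}@postgresql.dbaas.svc.cluster.local:5432/${db}"
echo "$DATABASE_URL"

# Connectivity check from inside the cluster (requires kubectl access;
# uncomment to run):
# kubectl run pg-check --rm -i --restart=Never --image=postgres:16 \
#   -- psql "$DATABASE_URL" -c 'SELECT 1'
```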

3. NFS Storage Setup (if service needs persistent data)

IMPORTANT: NFS directories must exist and be exported on the NFS server BEFORE deploying the service. If the directory doesn't exist, the pod will fail to mount the volume and get stuck in ContainerCreating.

Steps:

  1. Create the directory on the NFS server:

```bash
ssh root@10.0.10.15 'mkdir -p /mnt/main/<service> && chmod 777 /mnt/main/<service>'
```

  2. Export the directory via TrueNAS:
    • The NFS export must be configured in TrueNAS so Kubernetes nodes can mount it
    • Create the export via the TrueNAS WebUI or API, allowing access from the Kubernetes network (10.0.20.0/24)
    • Verify the export is accessible:

```bash
# From a k8s node or the dev VM
showmount -e 10.0.10.15 | grep <service>
```

  3. Verify the mount works before proceeding:

```bash
# Quick test from a k8s node
ssh root@10.0.20.100 'mount -t nfs 10.0.10.15:/mnt/main/<service> /tmp/test-mount && ls /tmp/test-mount && umount /tmp/test-mount'
```

Only proceed to Terraform module creation after confirming the NFS export is accessible.

4. Terraform Module Creation

Create module directory:

```bash
mkdir -p modules/kubernetes/<service-name>/
```

Create modules/kubernetes/<service-name>/main.tf:

```hcl
variable "tls_secret_name" {}
variable "tier" { type = string }
variable "postgresql_password" {} # Only if needed
# Add other variables as needed (smtp_password, api_keys, etc.)

resource "kubernetes_namespace" "<service>" {
  metadata {
    name = "<service>"
  }
}

module "tls_secret" {
  source          = "../setup_tls_secret"
  namespace       = kubernetes_namespace.<service>.metadata[0].name
  tls_secret_name = var.tls_secret_name
}

# If database migrations needed, add init_container
resource "kubernetes_deployment" "<service>" {
  metadata {
    name      = "<service>"
    namespace = kubernetes_namespace.<service>.metadata[0].name
    labels = {
      app  = "<service>"
      tier = var.tier
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "<service>"
      }
    }
    template {
      metadata {
        labels = {
          app = "<service>"
        }
      }
      spec {
        # Init container for migrations (if needed)
        # init_container { ... }

        container {
          name  = "<service>"
          image = "<docker-image>:<tag>"

          port {
            container_port = <port>
          }

          # Environment variables
          env {
            name  = "DATABASE_URL"
            value = "postgresql://<service>:${var.postgresql_password}@postgresql.dbaas.svc.cluster.local:5432/<service>"
          }
          # Add other env vars as needed

          # Volume mounts for persistent data
          volume_mount {
            name       = "data"
            mount_path = "<mount-path>"
            sub_path   = "<optional-subpath>"
          }

          resources {
            requests = {
              memory = "256Mi"
              cpu    = "100m"
            }
            limits = {
              memory = "2Gi"
              cpu    = "1"
            }
          }

          # Health checks (if endpoints exist)
          liveness_probe {
            http_get {
              path = "/health" # or /healthz, /, etc.
              port = <port>
            }
            initial_delay_seconds = 60
            period_seconds        = 30
          }
        }

        # NFS volume for persistence
        volume {
          name = "data"
          nfs {
            server = "10.0.10.15"
            path   = "/mnt/main/<service>"
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "<service>" {
  metadata {
    name      = "<service>"
    namespace = kubernetes_namespace.<service>.metadata[0].name
    labels = {
      app = "<service>"
    }
  }

  spec {
    selector = {
      app = "<service>"
    }
    port {
      name        = "http"
      port        = 80
      target_port = <container-port>
    }
  }
}

module "ingress" {
  source          = "../ingress_factory"
  namespace       = kubernetes_namespace.<service>.metadata[0].name
  name            = "<service>"
  tls_secret_name = var.tls_secret_name
  # Add extra_annotations if needed (proxy-body-size, timeouts, etc.)
}
```

5. Update Main Terraform Files

Add to modules/kubernetes/main.tf:

  1. Add variable declarations at top:

```hcl
variable "<service>_postgresql_password" { type = string }
```

  2. Add to appropriate DEFCON level (ask user which level, default to 5):

```hcl
5 : [
  ...,
  "<service>"
]
```

  3. Add module block at bottom:

```hcl
module "<service>" {
  source   = "./<service>"
  for_each = contains(local.active_modules, "<service>") ? { <service> = true } : {}

  tls_secret_name     = var.tls_secret_name
  postgresql_password = var.<service>_postgresql_password
  tier                = local.tiers.aux # or appropriate tier

  depends_on = [null_resource.core_services]
}
```

Add to main.tf:

  1. Add variable:

```hcl
variable "<service>_postgresql_password" { type = string }
```

  2. Pass to kubernetes_cluster module:

```hcl
module "kubernetes_cluster" {
  ...
  <service>_postgresql_password = var.<service>_postgresql_password
}
```

Update terraform.tfvars:

  1. Add password/credentials:

```hcl
<service>_postgresql_password = "<secure-password>"
```

  2. Add to Cloudflare DNS (ask user if proxied or non-proxied):

```hcl
cloudflare_non_proxied_names = [
  ...,
  "<service>"
]
```

6. Email/SMTP Configuration (if needed)

If service needs to send emails:

```hcl
env {
  name  = "MAILER_HOST"
  value = "mailserver.viktorbarzin.me" # Public hostname for TLS
}
env {
  name  = "MAILER_PORT"
  value = "587"
}
env {
  name  = "MAILER_USER"
  value = "info@viktorbarzin.me"
}
env {
  name  = "MAILER_PASSWORD"
  value = var.mailserver_accounts["info@viktorbarzin.me"] # Pass from module
}
```

Add to module call:

```hcl
smtp_password = var.mailserver_accounts["info@viktorbarzin.me"]
```

7. Apply Terraform

```bash
terraform init
terraform apply -target=module.kubernetes_cluster.module.<service> -var="kube_config_path=$(pwd)/config" -auto-approve
```

IMPORTANT: Also apply the cloudflared module to create the Cloudflare DNS record:

```bash
terraform apply -target=module.kubernetes_cluster.module.cloudflared -var="kube_config_path=$(pwd)/config" -auto-approve
```

Without this step, the DNS record won't be created even though it's defined in terraform.tfvars.

8. Verification

```bash
kubectl get pods -n <service>
kubectl logs -n <service> -l app=<service> --tail=50
```

Test URL: https://<service>.viktorbarzin.me
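The verification steps above can be scripted. A sketch with a placeholder service name; the cluster-facing commands are left commented since they require kubectl and network access:

```bash
svc=myapp  # placeholder service name
url="https://${svc}.viktorbarzin.me"
echo "checking ${url}"

# Uncomment with cluster access:
# kubectl -n "$svc" rollout status "deployment/${svc}" --timeout=180s
# curl -fsS -o /dev/null -w '%{http_code}\n' "$url"   # a healthy service returns 200
```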

9. Commit Changes

```bash
git add modules/kubernetes/<service>/ main.tf modules/kubernetes/main.tf terraform.tfvars
git commit -m "Add <service> deployment

- Deploy <service> as <description>
- Uses <dependencies>
- Ingress at <service>.viktorbarzin.me

[ci skip]"
```

Common Patterns

Init Container for Migrations

```hcl
init_container {
  name    = "migration"
  image   = "<same-image>"
  command = ["sh", "-c", "<migration-command>"]

  # Same env vars and volumes as main container
}
```

Dynamic Environment Variables

```hcl
locals {
  common_env = [
    { name = "VAR1", value = "value1" },
    { name = "VAR2", value = "value2" },
  ]
}

dynamic "env" {
  for_each = local.common_env
  content {
    name  = env.value.name
    value = env.value.value
  }
}
```

External URL Configuration

Many apps need their public URL configured:

```hcl
env {
  name  = "APP_URL" # or PUBLIC_URL, EXTERNAL_URL, etc.
  value = "https://<service>.viktorbarzin.me"
}
env {
  name  = "HTTPS" # or ENABLE_HTTPS, etc.
  value = "true"
}
```

Checklist

  • Find official Docker image or docker-compose
  • Identify dependencies (DB, Redis, etc.)
  • Ask user for database credentials (never create yourself)
  • Create NFS directory and export on TrueNAS (if persistent storage needed)
  • Verify NFS mount is accessible from k8s nodes
  • Create modules/kubernetes/<service>/main.tf
  • Update modules/kubernetes/main.tf (variables, DEFCON level, module block)
  • Update main.tf (variable, pass to module)
  • Update terraform.tfvars (password, Cloudflare DNS)
  • Run terraform init and terraform apply
  • Verify pods are running
  • Test the URL
  • Commit changes with [ci skip]

Questions to Ask User

  1. What DEFCON level should this service be in? (Default: 5)
  2. Should Cloudflare proxy this domain? (Default: no, add to non_proxied_names)
  3. Does this need email/SMTP? (Configure if yes)
  4. What database credentials should I use? (Never create yourself)
  5. What tier? (core/cluster/gpu/edge/aux - default: aux)

Notes

  • Always create NFS directories and exports BEFORE deploying - pods will get stuck in ContainerCreating if the NFS path doesn't exist or isn't exported
  • Always use official documentation as the source of truth
  • Prefer stable/latest tags over specific versions for self-hosted
  • Use shared infrastructure: PostgreSQL at postgresql.dbaas.svc.cluster.local, Redis at redis.redis.svc.cluster.local
  • NFS storage: Always at 10.0.10.15:/mnt/main/<service>
  • Email: Use mailserver.viktorbarzin.me (public hostname) not internal service name
  • Resource limits: Start conservative, can increase if needed
  • Health checks: Only add if the app has health endpoints
