setup-project — AI Coding Assistant Skill setup-project, community, AI coding assistant skills, IDE skills, Kubernetes deployment, GitHub repository integration, Docker image management, self-hosted automation, Claude Code skills

v1.0.0

About This Skill

A skill for DevOps agents that need to deploy to Kubernetes automatically from GitHub repositories. It automates the deployment of self-hosted services to a Kubernetes cluster.

Features

Deploys self-hosted services to a Kubernetes cluster
Checks the README for official Docker images
Accepts a GitHub repository URL or a project name
Automates deployment using the docker-compose.yml file
Validates self-hosting documentation

Core Topics

ViktorBarzin
Updated: 3/22/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 8/11

This page remains useful for teams, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and cautions
Review Score: 8/11
Quality Score: 47
Canonical Locale: en
Detected Body Locale: en


Core Value

Gives an agent the ability to deploy self-hosted services to a Kubernetes cluster using official Docker images and docker-compose.yml files, simplifying the deployment process and pulling self-hosting documentation from the GitHub repository.

Suitable Agent Types

For DevOps agents that need to deploy to Kubernetes automatically from GitHub repositories.

Key Capabilities · setup-project

Deploy new services from GitHub to a Kubernetes cluster
Set up self-hosted services automatically using docker-compose.yml files
Validate self-hosting documentation for Kubernetes deployments

! Limitations and Prerequisites

  • Requires access to a Kubernetes cluster
  • Requires a GitHub repository URL or project name
  • Depends on an official Docker image and self-hosting documentation being present in the README

Why this page is reference-only

  • The current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Next Steps After Review

Decide on an action first, then continue with the upstream repository material.

The primary value of Killer-Skills should not stop at "opening the repository docs for you". It should first help you judge whether this skill is worth installing, whether it should go back to your trusted set for re-review, and whether it is ready to be put into a workflow.


FAQ and Installation Steps

The questions and steps below match the page's structured data, so that search engines can understand the page content.

? FAQ

What is setup-project?

A skill for DevOps agents that need to deploy to Kubernetes automatically from GitHub repositories. It automates the deployment of self-hosted services to a Kubernetes cluster.

How do I install setup-project?

Run: npx killer-skills add ViktorBarzin/infra/setup-project. Supports 19+ IDEs/agents, including Cursor, Windsurf, VS Code, and Claude Code.

What scenarios is setup-project suited for?

Typical scenarios include deploying new services from GitHub to a Kubernetes cluster, setting up self-hosted services automatically with docker-compose.yml files, and validating self-hosting documentation for Kubernetes deployments.

Which IDEs or agents does setup-project support?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. It can be installed universally with a single Killer-Skills CLI command.

What are setup-project's limitations?

It requires access to a Kubernetes cluster and a GitHub repository URL or project name, and depends on an official Docker image and self-hosting documentation being present in the README.

Installation Steps

  1. Open a terminal

    Open a terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add ViktorBarzin/infra/setup-project. The CLI detects your IDE or AI agent automatically and completes the configuration.

  3. Start using the skill

    setup-project is now enabled and can be invoked immediately in the current project.

! Reference-Only Mode

This page can still serve as an installation and lookup reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review conclusions above before deciding whether to continue with the upstream repository material.

Upstream Repository Material


Upstream Source

setup-project

Install setup-project, an AI Agent Skill for AI agent workflows and automation. See the review conclusions, use cases, and installation path.

SKILL.md
Supporting Evidence

Setup Project Skill

Purpose: Deploy a new self-hosted service to the Kubernetes cluster from a GitHub repository.

When to use: User provides a GitHub URL or project name and wants to deploy it to the cluster.

Workflow

1. Research Phase

Input: GitHub repository URL or project name

Actions:

  • Visit the GitHub repository
  • Check the README for:
    • Official Docker image (Docker Hub, ghcr.io, etc.)
    • docker-compose.yml file
    • Self-hosting documentation
    • Required dependencies (PostgreSQL, MySQL, Redis, etc.)
    • Environment variables needed
    • Default ports
    • Storage requirements

Find Docker Image Priority:

  1. Check official documentation for recommended image
  2. Look in docker-compose.yml for image: directive
  3. Check GitHub Container Registry: ghcr.io/<org>/<repo>
  4. Check Docker Hub: <org>/<repo>
  5. Check releases page for container images
  6. Last resort: Build from Dockerfile (avoid if possible)
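As a minimal sketch of the registry fallback order (using a hypothetical org/repo), the candidates can be enumerated before checking existence; a real check would use something like docker manifest inspect:

```shell
#!/bin/sh
# Hypothetical org/repo, purely for illustration
org=example-org
repo=example-app

# Registry candidates, in the priority order above (official docs and the
# docker-compose.yml would already have been checked at this point).
for ref in "ghcr.io/${org}/${repo}" "docker.io/${org}/${repo}"; do
  # A real existence check: docker manifest inspect "$ref" >/dev/null 2>&1
  echo "candidate: ${ref}"
done
```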

Extract Configuration:

  • Container port (default port the app listens on)
  • Environment variables (DATABASE_URL, REDIS_HOST, SMTP, etc.)
  • Volume mounts (what data needs persistence)
  • Dependencies (database type, cache, etc.)
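A hedged sketch of the extraction step: given a hypothetical docker-compose.yml (in practice fetched from the upstream repository), the image reference and container port can be pulled out with standard text tools:

```shell
#!/bin/sh
# Hypothetical docker-compose.yml content, for illustration only
cat > /tmp/docker-compose.yml <<'EOF'
services:
  app:
    image: ghcr.io/example-org/example-app:1.2.3
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/app
EOF

# image: directive -> the Docker image to deploy
image=$(grep -m1 'image:' /tmp/docker-compose.yml | awk '{print $2}')
# first port mapping "host:container" -> container port
port=$(grep -m1 -- '- "' /tmp/docker-compose.yml | tr -d ' "-' | cut -d: -f2)
echo "image=${image} container_port=${port}"
```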

2. Database Setup (if needed)

If project requires PostgreSQL:

  • User provides database credentials or use pattern: <service> user with secure password
  • Database will be created in shared postgresql.dbaas.svc.cluster.local
  • Connection string format: postgresql://<user>:<password>@postgresql.dbaas.svc.cluster.local:5432/<dbname>

If project requires MySQL:

  • User provides database credentials
  • Database in shared mysql.dbaas.svc.cluster.local
  • Connection string format: mysql://<user>:<password>@mysql.dbaas.svc.cluster.local:3306/<dbname>

If project requires Redis:

  • Use shared Redis: redis.redis.svc.cluster.local:6379
  • No password required

IMPORTANT: Never create databases yourself - always ask user for credentials to use.
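The connection-string formats above can be composed mechanically once the user supplies credentials; a small sketch with hypothetical placeholder values (never invent real credentials):

```shell
#!/bin/sh
# Hypothetical service name and password, for illustration only --
# the real credentials always come from the user.
service=exampleapp
db_password='s3cret'

pg_url="postgresql://${service}:${db_password}@postgresql.dbaas.svc.cluster.local:5432/${service}"
mysql_url="mysql://${service}:${db_password}@mysql.dbaas.svc.cluster.local:3306/${service}"
redis_addr="redis.redis.svc.cluster.local:6379"  # shared, no password

echo "$pg_url"
```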

3. NFS Storage Setup (if service needs persistent data)

IMPORTANT: NFS directories must exist and be exported on the NFS server BEFORE deploying the service. If the directory doesn't exist, the pod will fail to mount the volume and get stuck in ContainerCreating.

Steps:

  1. Create the directory on the NFS server:
bash
ssh root@10.0.10.15 'mkdir -p /mnt/main/<service> && chmod 777 /mnt/main/<service>'
  2. Export the directory via TrueNAS:
    • The NFS export must be configured in TrueNAS so Kubernetes nodes can mount it
    • Create the export via TrueNAS WebUI or API, allowing access from the Kubernetes network (10.0.20.0/24)
    • Verify the export is accessible:
bash
# From a k8s node or the dev VM
showmount -e 10.0.10.15 | grep <service>
  3. Verify the mount works before proceeding:
bash
# Quick test from a k8s node
ssh root@10.0.20.100 'mount -t nfs 10.0.10.15:/mnt/main/<service> /tmp/test-mount && ls /tmp/test-mount && umount /tmp/test-mount'

Only proceed to Terraform module creation after confirming the NFS export is accessible.

4. Terraform Module Creation

Create module directory:

bash
mkdir -p modules/kubernetes/<service-name>/

Create modules/kubernetes/<service-name>/main.tf:

hcl
variable "tls_secret_name" {}
variable "tier" { type = string }
variable "postgresql_password" {} # Only if needed
# Add other variables as needed (smtp_password, api_keys, etc.)

resource "kubernetes_namespace" "<service>" {
  metadata {
    name = "<service>"
  }
}

module "tls_secret" {
  source          = "../setup_tls_secret"
  namespace       = kubernetes_namespace.<service>.metadata[0].name
  tls_secret_name = var.tls_secret_name
}

# If database migrations needed, add init_container
resource "kubernetes_deployment" "<service>" {
  metadata {
    name      = "<service>"
    namespace = kubernetes_namespace.<service>.metadata[0].name
    labels = {
      app  = "<service>"
      tier = var.tier
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "<service>"
      }
    }
    template {
      metadata {
        labels = {
          app = "<service>"
        }
      }
      spec {
        # Init container for migrations (if needed)
        # init_container { ... }

        container {
          name  = "<service>"
          image = "<docker-image>:<tag>"

          port {
            container_port = <port>
          }

          # Environment variables
          env {
            name  = "DATABASE_URL"
            value = "postgresql://<service>:${var.postgresql_password}@postgresql.dbaas.svc.cluster.local:5432/<service>"
          }
          # Add other env vars as needed

          # Volume mounts for persistent data
          volume_mount {
            name       = "data"
            mount_path = "<mount-path>"
            sub_path   = "<optional-subpath>"
          }

          resources {
            requests = {
              memory = "256Mi"
              cpu    = "100m"
            }
            limits = {
              memory = "2Gi"
              cpu    = "1"
            }
          }

          # Health checks (if endpoints exist)
          liveness_probe {
            http_get {
              path = "/health" # or /healthz, /, etc.
              port = <port>
            }
            initial_delay_seconds = 60
            period_seconds        = 30
          }
        }

        # NFS volume for persistence
        volume {
          name = "data"
          nfs {
            server = "10.0.10.15"
            path   = "/mnt/main/<service>"
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "<service>" {
  metadata {
    name      = "<service>"
    namespace = kubernetes_namespace.<service>.metadata[0].name
    labels = {
      app = "<service>"
    }
  }

  spec {
    selector = {
      app = "<service>"
    }
    port {
      name        = "http"
      port        = 80
      target_port = <container-port>
    }
  }
}

module "ingress" {
  source          = "../ingress_factory"
  namespace       = kubernetes_namespace.<service>.metadata[0].name
  name            = "<service>"
  tls_secret_name = var.tls_secret_name
  # Add extra_annotations if needed (proxy-body-size, timeouts, etc.)
}

5. Update Main Terraform Files

Add to modules/kubernetes/main.tf:

  1. Add variable declarations at top:
hcl
variable "<service>_postgresql_password" { type = string }
  2. Add to the appropriate DEFCON level (ask user which level, default to 5):
hcl
5 : [
  ...,
  "<service>"
]
  3. Add the module block at the bottom:
hcl
module "<service>" {
  source   = "./<service>"
  for_each = contains(local.active_modules, "<service>") ? { <service> = true } : {}

  tls_secret_name     = var.tls_secret_name
  postgresql_password = var.<service>_postgresql_password
  tier                = local.tiers.aux # or appropriate tier

  depends_on = [null_resource.core_services]
}

Add to main.tf:

  1. Add variable:
hcl
variable "<service>_postgresql_password" { type = string }
  2. Pass it to the kubernetes_cluster module:
hcl
module "kubernetes_cluster" {
  ...
  <service>_postgresql_password = var.<service>_postgresql_password
}

Update terraform.tfvars:

  1. Add password/credentials:
hcl
<service>_postgresql_password = "<secure-password>"
  2. Add to Cloudflare DNS (ask user if proxied or non-proxied):
hcl
cloudflare_non_proxied_names = [
  ...,
  "<service>"
]

6. Email/SMTP Configuration (if needed)

If service needs to send emails:

hcl
env {
  name  = "MAILER_HOST"
  value = "mailserver.viktorbarzin.me" # Public hostname for TLS
}
env {
  name  = "MAILER_PORT"
  value = "587"
}
env {
  name  = "MAILER_USER"
  value = "info@viktorbarzin.me"
}
env {
  name  = "MAILER_PASSWORD"
  value = var.mailserver_accounts["info@viktorbarzin.me"] # Pass from module
}

Add to module call:

hcl
smtp_password = var.mailserver_accounts["info@viktorbarzin.me"]

7. Apply Terraform

bash
terraform init
terraform apply -target=module.kubernetes_cluster.module.<service> -var="kube_config_path=$(pwd)/config" -auto-approve

IMPORTANT: Also apply the cloudflared module to create the Cloudflare DNS record:

bash
terraform apply -target=module.kubernetes_cluster.module.cloudflared -var="kube_config_path=$(pwd)/config" -auto-approve

Without this step, the DNS record won't be created even though it's defined in terraform.tfvars.

8. Verification

bash
kubectl get pods -n <service>
kubectl logs -n <service> -l app=<service> --tail=50

Test URL: https://<service>.viktorbarzin.me

9. Commit Changes

bash
git add modules/kubernetes/<service>/ main.tf modules/kubernetes/main.tf terraform.tfvars
git commit -m "Add <service> deployment

- Deploy <service> as <description>
- Uses <dependencies>
- Ingress at <service>.viktorbarzin.me

[ci skip]"

Common Patterns

Init Container for Migrations

hcl
init_container {
  name    = "migration"
  image   = "<same-image>"
  command = ["sh", "-c", "<migration-command>"]

  # Same env vars and volumes as main container
}

Dynamic Environment Variables

hcl
locals {
  common_env = [
    { name = "VAR1", value = "value1" },
    { name = "VAR2", value = "value2" },
  ]
}

dynamic "env" {
  for_each = local.common_env
  content {
    name  = env.value.name
    value = env.value.value
  }
}

External URL Configuration

Many apps need their public URL configured:

hcl
env {
  name  = "APP_URL" # or PUBLIC_URL, EXTERNAL_URL, etc.
  value = "https://<service>.viktorbarzin.me"
}
env {
  name  = "HTTPS" # or ENABLE_HTTPS, etc.
  value = "true"
}

Checklist

  • Find official Docker image or docker-compose
  • Identify dependencies (DB, Redis, etc.)
  • Ask user for database credentials (never create yourself)
  • Create NFS directory and export on TrueNAS (if persistent storage needed)
  • Verify NFS mount is accessible from k8s nodes
  • Create modules/kubernetes/<service>/main.tf
  • Update modules/kubernetes/main.tf (variables, DEFCON level, module block)
  • Update main.tf (variable, pass to module)
  • Update terraform.tfvars (password, Cloudflare DNS)
  • Run terraform init and terraform apply
  • Verify pods are running
  • Test the URL
  • Commit changes with [ci skip]

Questions to Ask User

  1. What DEFCON level should this service be in? (Default: 5)
  2. Should Cloudflare proxy this domain? (Default: no, add to non_proxied_names)
  3. Does this need email/SMTP? (Configure if yes)
  4. What database credentials should I use? (Never create yourself)
  5. What tier? (core/cluster/gpu/edge/aux - default: aux)

Notes

  • Always create NFS directories and exports BEFORE deploying - pods will get stuck in ContainerCreating if the NFS path doesn't exist or isn't exported
  • Always use official documentation as the source of truth
  • Prefer stable/latest tags over specific versions for self-hosted
  • Use shared infrastructure: PostgreSQL at postgresql.dbaas.svc.cluster.local, Redis at redis.redis.svc.cluster.local
  • NFS storage: Always at 10.0.10.15:/mnt/main/<service>
  • Email: Use mailserver.viktorbarzin.me (public hostname) not internal service name
  • Resource limits: Start conservative, can increase if needed
  • Health checks: Only add if the app has health endpoints
