deploy-mcp — a community Claude Code skill from futuresearch-python

v1.0.0
GitHub

About this Skill

Perfect for DevOps agents needing automated MCP server deployment and scaling for efficient research team management. From futuresearch: "A researcher for every row. Give your AI a research team."

# Core Topics

futuresearch
Updated: 3/19/2026

Agent Capability Analysis

The deploy-mcp skill by futuresearch is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance. Optimized for claude-code, filtering, llm-agents.

Ideal Agent Persona

Perfect for DevOps Agents needing automated MCP server deployment and scaling for efficient research team management

Core Value

Empowers agents to automate MCP server deployment and scaling using a GitHub Actions workflow, Helm, and Kubernetes, providing efficient research team management for AI coding workflows, with features like automated checks, build-and-push, and deploys with layered values.

Capabilities Granted for deploy-mcp

Automating MCP server deployment to staging and production environments
Scaling replicas via Helm values or kubectl for efficient resource management
Monitoring and debugging deployments using GitHub Actions and Kubernetes

Prerequisites & Limits

  • Requires GitHub Actions workflow and Kubernetes setup
  • Limited to MCP server deployment and scaling
  • Needs Helm and Kubernetes CLI for advanced operations
SKILL.md

Deploying the MCP Server

Quick Deploy

Staging (from main)

```bash
gh workflow run "Deploy MCP Server" -f branch=main -f deploy_staging=true
```

Production (from main)

```bash
gh workflow run "Deploy MCP Server" -f branch=main -f deploy_production=true
```

Both environments

```bash
gh workflow run "Deploy MCP Server" -f branch=main -f deploy_staging=true -f deploy_production=true
```

From a feature branch

```bash
gh workflow run "Deploy MCP Server" -f branch=feat/my-branch -f deploy_staging=true
```

Monitoring a Deploy

```bash
# Watch the workflow run
gh run list --workflow="Deploy MCP Server" --limit 3
gh run watch <run-id>

# Check pod rollout
kubectl rollout status deploy/futuresearch-mcp-staging -n futuresearch-mcp-staging --timeout=5m

# Verify pods are running
kubectl get pods -n futuresearch-mcp-staging -o wide
```

How It Works

The GitHub Actions workflow (.github/workflows/deploy-mcp.yaml) runs three stages:

  1. Checks — ruff lint + pytest on the target branch
  2. Build & push — Docker image to GAR, tagged with short SHA (+ latest on main)
  3. Deploy — Helm upgrade with layered values:
    • values.yaml — base config
    • values.staging.yaml — staging overrides (MCP_SERVER_URL, REDIS_DB, replicaCount, host)
    • values.secrets.staging.yaml — SOPS-decrypted secrets (Supabase, API keys)

The deploy uses --atomic so it auto-rolls back on failure.
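Putting the layered values and the `--atomic` flag together, the underlying Helm invocation is likely shaped like the sketch below. The release name, chart path, and image tag here are assumptions inferred from the file paths listed above, not taken from the workflow file itself; the script only prints the command, so it is safe to run without cluster access:

```shell
# Sketch of the layered Helm upgrade (names are assumptions, not confirmed
# by the workflow). Later -f files override earlier ones; --set wins overall.
CHART=futuresearch-mcp/deploy/chart
RELEASE=futuresearch-mcp-staging
NS=futuresearch-mcp-staging
IMAGE_TAG=abc1234  # short commit SHA from the build step

CMD="helm upgrade --install $RELEASE $CHART \
  --namespace $NS \
  -f $CHART/values.yaml \
  -f $CHART/values.staging.yaml \
  -f $CHART/values.secrets.staging.yaml \
  --set image.tag=$IMAGE_TAG \
  --atomic"

# Print rather than execute, so the sketch works anywhere.
echo "$CMD"
```

Because `-f` files are applied in order, the staging overrides win over the base values, and the decrypted secrets file wins over both.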

Scaling Replicas

Via Helm values (persistent)

Edit futuresearch-mcp/deploy/chart/values.staging.yaml:

```yaml
replicaCount: 2 # Change this
```

Commit, push, and redeploy.

Via kubectl (temporary, resets on next deploy)

```bash
# Staging
kubectl scale deploy futuresearch-mcp-staging -n futuresearch-mcp-staging --replicas=3

# Take offline
kubectl scale deploy futuresearch-mcp-staging -n futuresearch-mcp-staging --replicas=0
```

Environments

| Environment | Namespace | Host | Redis DB |
| --- | --- | --- | --- |
| Staging | futuresearch-mcp-staging | mcp-staging.futuresearch.ai | 14 |
| Production | futuresearch-mcp | mcp.futuresearch.ai | (default in values.yaml) |

Both environments hit the same production FutureSearch API — there is no staging API.
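To check both environments in one pass, the monitoring command above can be looped over the two namespaces. This is a sketch: it assumes the production Deployment is named after its namespace (futuresearch-mcp), which the source does not confirm, and it prints the commands rather than running them:

```shell
# Print a rollout-status check for each environment (deployment names are
# assumed to match the namespaces in the table above).
OUT=$(for NS in futuresearch-mcp-staging futuresearch-mcp; do
  echo kubectl rollout status "deploy/$NS" -n "$NS" --timeout=2m
done)
echo "$OUT"
```

Drop the `echo` inside the loop to actually execute the checks against a cluster.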

Updating Secrets

```bash
# View current secrets
sops -d futuresearch-mcp/deploy/chart/secrets.staging.enc.yaml

# Update a value
sops --set '["secrets"]["data"]["KEY_NAME"] "new-value"' futuresearch-mcp/deploy/chart/secrets.staging.enc.yaml
```

Commit the encrypted file and redeploy.

Key Files

| File | Purpose |
| --- | --- |
| .github/workflows/deploy-mcp.yaml | CI/CD workflow (checks → build → deploy) |
| futuresearch-mcp/deploy/chart/values.yaml | Base Helm values |
| futuresearch-mcp/deploy/chart/values.staging.yaml | Staging overrides |
| futuresearch-mcp/deploy/chart/secrets.enc.yaml | Production secrets (SOPS) |
| futuresearch-mcp/deploy/chart/secrets.staging.enc.yaml | Staging secrets (SOPS) |
| futuresearch-mcp/deploy/Dockerfile | Server container image |

Gotchas

  • Branch protection on main: Can't push directly — create a PR and merge first, then deploy from main.
  • SOPS decryption requires GCP IAM: Run gcloud auth application-default login if decryption fails.
  • Concurrent deploys: Workflow uses cancel-in-progress: false — if a deploy is running, the next one queues.
  • Atomic rollback: --atomic means a failed deploy auto-reverts to the previous release. Check helm history if this happens.
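When an `--atomic` deploy auto-reverts, the commands below show the release history and, if needed, pin a specific revision. This is a sketch for the staging release (names assumed from the staging environment above; revision 3 is an example), printed rather than executed:

```shell
# Inspect release history after an automatic rollback, then optionally
# roll back explicitly to a known-good revision.
RELEASE=futuresearch-mcp-staging
NS=futuresearch-mcp-staging
HIST_CMD="helm history $RELEASE -n $NS --max 5"
ROLLBACK_CMD="helm rollback $RELEASE 3 -n $NS"  # 3 = example revision
echo "$HIST_CMD"
echo "$ROLLBACK_CMD"
```

`helm history` marks the currently deployed revision, so you can see which release the failed deploy fell back to.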

FAQ & Installation Steps


Frequently Asked Questions

What is deploy-mcp?

deploy-mcp is a community skill for Claude Code and other IDE agents that automates MCP server deployment and scaling using GitHub Actions, Helm, and Kubernetes.

How do I install deploy-mcp?

Run the command: npx killer-skills add futuresearch/futuresearch-python/deploy-mcp. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for deploy-mcp?

Key use cases include: Automating MCP server deployment to staging and production environments, Scaling replicas via Helm values or kubectl for efficient resource management, Monitoring and debugging deployments using GitHub Actions and Kubernetes.

Which IDEs are compatible with deploy-mcp?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for deploy-mcp?

Requires GitHub Actions workflow and Kubernetes setup. Limited to MCP server deployment and scaling. Needs Helm and Kubernetes CLI for advanced operations.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add futuresearch/futuresearch-python/deploy-mcp. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use deploy-mcp immediately in the current project.
