run-thor — cyber-inference community skill

v1.0.0

About this Skill

Perfect for AI Agents needing automated model management and dynamic resource allocation for edge deployment on OpenAI-compatible inference servers. Deploy and test cyber-inference on the Thor lab server. Use when the user wants to test on Thor, deploy to Thor, run on Thor, verify the server, or mentions thor.lab.

RamboRogers
Updated: 3/12/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 7/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

Review criteria:

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Locale and body language aligned

Review Score: 7/11
Quality Score: 39
Canonical Locale: en
Detected Body Locale: en


Core Value

Empowers agents to manage OpenAI-compatible models with automatic deployment and resource allocation over SSH and HTTP, streamlining integration testing on GPU lab servers like Thor.

Ideal Agent Persona

Perfect for AI Agents needing automated model management and dynamic resource allocation for edge deployment on OpenAI-compatible inference servers.

Capabilities Granted for run-thor

Deploying and testing AI models on edge devices
Automating model updates and rollbacks on Thor
Debugging cyber-inference integration tests

! Prerequisites & Limits

  • Requires SSH access to thor.lab
  • Limited to OpenAI-compatible models
  • Dependent on GPU lab server availability

Why this page is reference-only

  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide the next action before you keep reading repository material

Killer-Skills should not stop at surfacing repository instructions. It should also help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move on to workflow rollout.

Labs Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is run-thor?

run-thor is a community skill for deploying and testing cyber-inference on the Thor GPU lab server. It gives AI agents automated model management and dynamic resource allocation for OpenAI-compatible inference servers, and activates when a user asks to test, deploy, run, or verify on Thor, or mentions thor.lab.

How do I install run-thor?

Run the command: npx killer-skills add RamboRogers/cyber-inference/run-thor. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for run-thor?

Key use cases include deploying and testing AI models on edge devices, automating model updates and rollbacks on Thor, and debugging cyber-inference integration tests.

Which IDEs are compatible with run-thor?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for run-thor?

Requires SSH access to thor.lab. Limited to OpenAI-compatible models. Dependent on GPU lab server availability.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add RamboRogers/cyber-inference/run-thor. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use run-thor immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

run-thor

Install run-thor, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md

Supporting Evidence

Deploy & Test on Thor

Thor is the GPU lab server used for integration testing of cyber-inference. It is accessible via SSH and hosts the production-like test environment.

Connection Details

Host: thor.lab
User: matt
SSH: ssh matt@thor.lab
Project path: /home/matt/Local/cyber-inference
Server URL: http://thor.lab:8337
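
The connection details above can be collapsed into a local SSH alias so later commands get shorter. A minimal sketch for your own ~/.ssh/config; the alias name thor is an assumption, not part of the repository:

```
Host thor
    HostName thor.lab
    User matt
```

With this entry in place, ssh thor can stand in for ssh matt@thor.lab in the commands that follow.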

Deploy Workflow

Follow these steps in order. Each depends on the previous.

1. Commit & Push (local machine)

```bash
git add -A && git commit -m "<message>" && git push
```

2. Pull on Thor (remote)

```bash
ssh matt@thor.lab "cd /home/matt/Local/cyber-inference && git pull"
```

3. Start the Server (remote)

The server runs via start.sh, which handles uv sync and auto-restart.

```bash
# Interactive (see logs live) - use for debugging
ssh -t matt@thor.lab "cd /home/matt/Local/cyber-inference && ./start.sh"

# Background (detached) - use for long-running tests
ssh matt@thor.lab "cd /home/matt/Local/cyber-inference && nohup ./start.sh > /tmp/cyber-inference.log 2>&1 &"
```

CUDA PyTorch wheels are verified automatically when NVIDIA hardware is detected.

4. Verify the Server

```bash
# Health check
curl -s http://thor.lab:8337/health

# List models
curl -s http://thor.lab:8337/v1/models | python3 -m json.tool

# System status
curl -s http://thor.lab:8337/admin/status | python3 -m json.tool
```

The web GUI is available at: http://thor.lab:8337
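
When the server is started in the background, the health endpoint can take a few seconds to come up. The check above can be wrapped in a small polling loop; this is a sketch, and the wait_healthy helper and its defaults are assumptions rather than part of cyber-inference:

```shell
# wait_healthy URL [TRIES]: poll a health endpoint once per second until it
# answers or TRIES attempts (default 30) are exhausted. Prints "healthy" or
# "timeout" and returns a matching exit status.
wait_healthy() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timeout"
  return 1
}

# Usage against the Thor server:
# wait_healthy http://thor.lab:8337/health
```

This keeps the deploy step from racing the server startup without hand-timing a sleep.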

5. Test Inference

```bash
# Chat completion
curl -s http://thor.lab:8337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<model_name>", "messages": [{"role": "user", "content": "Hello"}]}' \
  | python3 -m json.tool

# Embeddings
curl -s http://thor.lab:8337/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "<model_name>", "input": "test text"}' \
  | python3 -m json.tool
```
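
For quick smoke tests, the full JSON response can be trimmed to just the generated text. A sketch; the extract_reply helper name is an assumption, and it relies on the server returning the standard OpenAI chat completion schema:

```shell
# extract_reply: read an OpenAI-style chat completion response on stdin and
# print only the assistant message content.
extract_reply() {
  python3 -c 'import json, sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'
}

# Usage:
# curl -s http://thor.lab:8337/v1/chat/completions ... | extract_reply
```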

Quick One-Liner Deploy

Pull latest and restart in one command:

```bash
ssh -t matt@thor.lab "cd /home/matt/Local/cyber-inference && git pull && ./start.sh"
```
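
If you run this often, the remote-command pattern can be wrapped in a tiny helper so every deploy step goes through one function. This is a sketch, not part of the repository; the run_on_thor name and the DRY_RUN flag are assumptions:

```shell
THOR="matt@thor.lab"
DIR="/home/matt/Local/cyber-inference"

# run_on_thor CMD: run a command in the project directory on Thor.
# With DRY_RUN=1 it prints the ssh invocation instead of executing it.
run_on_thor() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "ssh $THOR \"cd $DIR && $*\""
  else
    ssh "$THOR" "cd $DIR && $*"
  fi
}

# Preview the deploy without touching the server:
DRY_RUN=1 run_on_thor "git pull && ./start.sh"
```

The dry-run mode makes it easy to sanity-check the composed remote command before pointing it at the lab server.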

Troubleshooting

  • Server won't start: Check logs with ssh matt@thor.lab "tail -50 /tmp/cyber-inference.log"
  • Port in use: Kill existing process with ssh matt@thor.lab "pkill -f 'cyber-inference serve'"
  • Check running processes: ssh matt@thor.lab "ps aux | grep cyber-inference"
  • GPU/CUDA issues: ssh matt@thor.lab "nvidia-smi"
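
The checks above can also be batched into a single SSH session when triaging. A sketch; the thor_diag helper is an assumption, not part of the skill:

```shell
# thor_diag TARGET: run all troubleshooting checks in one SSH session.
thor_diag() {
  ssh "$1" '
    echo "--- recent log ---"; tail -50 /tmp/cyber-inference.log
    echo "--- processes ---";  ps aux | grep "[c]yber-inference"
    echo "--- gpu ---";        nvidia-smi
  '
}

# Usage:
# thor_diag matt@thor.lab
```

The [c] bracket in the grep pattern keeps the grep process itself out of the results.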

Related Skills

Looking for an alternative to run-thor or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw): Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f): Generate customizable widget plugins for the prompts.chat feed system
  • flags (vercel): The React Framework
  • pr-review (pytorch): Tensors and Dynamic neural networks in Python with strong GPU acceleration