aos — AgenticOS skill for Claude Code

Tags: edge-computing, self-hosted, sovereign-ai, ubuntu

v4.1.0

About this Skill

AgenticOS (AOS) is a reactive LLM gateway that routes inference requests between local and remote LLM backends. The aos skill lets AI agents drive this gateway from the command line.

Features

AgenticOS (AOS) is a reactive LLM gateway that routes inference requests between
local and remote LLM backends (LM Studio, Ollama) with energy-aware model selection.

Prerequisites:

  • AOS daemon running on localhost:8000 (systemd: aos-core.service)
  • At least one LLM backend (LM Studio on port 1234 or Ollama on port 11434)

Maintainer

maximilianwruhs-cyber
Updated: 4/3/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Review Score: 10/11

Killer-Skills keeps this page indexable because it adds recommendation, limitations, and review signals beyond the upstream repository text.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
  • Locale and body language aligned

Quality Score: 56
Canonical Locale: en
Detected Body Locale: en


Core Value

aos helps agents route inference through the AgenticOS gateway: listing configured LLM backends (local, remote, Ollama, LM Studio), switching between them, and running inference with energy-aware model selection. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Ideal Agent Persona

Ideal for AI agents that need to route inference between local and remote LLM backends through a self-hosted gateway, for example on edge or energy-constrained hardware.

Capabilities Granted for aos

  • Routing inference requests through the AgenticOS (AOS) reactive LLM gateway
  • Selecting between local and remote LLM backends (LM Studio, Ollama) with energy-aware model selection
  • Operating the AOS daemon on localhost:8000 (systemd: aos-core.service)

Prerequisites & Limits

  • eGPU Switcher (optional, Thunderbolt machines only)
  • Requires repository-specific context from the skill documentation
  • Works best when the underlying tools and dependencies are already configured

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide the next action before you keep reading repository material

Killer-Skills should not stop at surfacing repository instructions. The review above should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move on to workflow rollout.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is aos?

aos is an AI agent skill for AgenticOS (AOS), a reactive LLM gateway that routes inference requests between local and remote LLM backends (LM Studio, Ollama) with energy-aware model selection.

How do I install aos?

Run the command: npx killer-skills add maximilianwruhs-cyber/AOS/aos. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for aos?

Key use cases include: routing inference requests through the AgenticOS (AOS) reactive LLM gateway, selecting between local and remote LLM backends (LM Studio, Ollama) with energy-aware model selection, and operating the AOS daemon on localhost:8000 (systemd: aos-core.service).

Which IDEs are compatible with aos?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for aos?

The eGPU switcher is optional and applies to Thunderbolt machines only. The skill requires repository-specific context from its documentation and works best when the underlying tools and dependencies are already configured.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add maximilianwruhs-cyber/AOS/aos. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use aos immediately in the current project.

Upstream Repository Material


Upstream Source

aos

aos lists and switches configured LLM backends (local, remote, Ollama, LM Studio) and covers ai, edge-computing, and llm workflows.

SKILL.md

AOS Skill

AgenticOS (AOS) is a reactive LLM gateway that routes inference requests between local and remote LLM backends (LM Studio, Ollama) with energy-aware model selection.

Prerequisites

  • AOS daemon running on localhost:8000 (systemd: aos-core.service)
  • At least one LLM backend (LM Studio on port 1234 or Ollama on port 11434)
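Both prerequisites can be probed directly before reading the command reference; a minimal shell sketch, assuming the default ports listed above (`check` is a hypothetical helper defined in the snippet, not part of the aos CLI):

```shell
# Probe each prerequisite endpoint; prints "reachable" or "unreachable" per service.
check() {
  # $1 = label, $2 = URL
  if curl -sf --max-time 2 "$2" > /dev/null; then
    echo "$1: reachable"
  else
    echo "$1: unreachable"
  fi
}
check "aos daemon" "http://localhost:8000/health"
check "LM Studio"  "http://localhost:1234/v1/models"
check "Ollama"     "http://localhost:11434/api/tags"
```

`/v1/models` on LM Studio and `/api/tags` on Ollama are those tools' standard listing endpoints; only the port numbers come from this document.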

Commands

Check System Health

```bash
aos health
```

Returns daemon status, active model, backend reachability.

List Available Backends

```bash
aos hosts
```

Shows all configured LLM backends (local, remote, Ollama, LM Studio).

Switch Active Backend

```bash
aos switch <host-key>
```

Switches the active LLM backend at runtime. Available keys: local, aos-keller, ollama-local, ollama-keller

List Loaded Models

```bash
aos models
```

Proxies the models endpoint from the active backend.

Run Inference

```bash
aos ask "Your prompt here"
```

Sends a prompt through the AOS gateway with automatic complexity triage and model selection.

Run Benchmark

```bash
aos bench --model <model-name> [--suite full|math|code]
```

Runs the full benchmark suite, measuring energy, quality, and z-score.

Show Leaderboard

```bash
aos leaderboard
```

Compares all previous benchmark results, ranked by intelligence-per-watt.
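The ranking metric itself is not spelled out here; a hedged sketch of what "intelligence-per-watt" plausibly means (benchmark quality score divided by average power draw — an assumption, with made-up sample numbers):

```shell
# Illustrative only: AOS's real ranking formula is not documented here.
# rank prints "<model> <quality/watts>" so the list can be sorted descending.
rank() {
  awk -v m="$1" -v q="$2" -v w="$3" 'BEGIN { printf "%s %.2f\n", m, q / w }'
}
{ rank model-a 82 41; rank model-b 75 18; } | sort -k2 -rn
```

Under this reading, a smaller model that draws far less power can outrank a higher-quality one.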

eGPU Switcher (optional, Thunderbolt machines only)

```bash
sudo egpu-switcher setup     # Interactive GPU selection
sudo egpu-switcher enable    # Enable auto-switch on boot
sudo egpu-switcher disable   # Disable auto-switch
egpu-switcher status         # Show current GPU config
```

Manages external GPU switching for machines with Thunderbolt eGPU enclosures. Installed automatically when a Thunderbolt controller is detected.

API Endpoints

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /health | GET | No | Health check |
| /v1/hosts | GET | No | List backends |
| /v1/hosts/switch | POST | Yes | Switch backend |
| /v1/models | GET | No | List models |
| /v1/chat/completions | POST | Yes | Run inference |
| /v1/rag/ingest | POST | Yes | Ingest a document into the RAG database |
| /v1/rag/query | POST | Yes | Query the RAG database with natural language |
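The `/v1/chat/completions` and `/v1/models` paths suggest the OpenAI-compatible request shape that both LM Studio and Ollama expose, though the document does not confirm the schema. A hedged sketch (`my-local-model` and the API key are placeholders):

```shell
# Send one chat request through the gateway; print the response, or a fallback
# message if the daemon is unreachable. All values here are placeholders.
RESP=$(curl -sf --max-time 5 http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer ${AOS_API_KEY:-placeholder-key}" \
  -H "Content-Type: application/json" \
  -d '{"model": "my-local-model", "messages": [{"role": "user", "content": "Hello"}]}' \
  || echo "request failed (is aos-core.service running?)")
echo "$RESP"
```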

Ingest a Document (RAG)

```bash
aos ingest <file-path>
```

Parses a local PDF/document via LiteParse, embeds it with Ollama, and stores it in the local pgvector database.

Query the Knowledge Base (RAG)

```bash
aos query "Your question here"
```

Searches the local pgvector database and generates an answer using the local LLM.

Authentication

Set AOS_API_KEY in the environment to enable Bearer Token authentication. Pass the token as: Authorization: Bearer <your-key>
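The same key gates every endpoint marked `Auth: Yes` in the table above. A minimal sketch of building the header (the key value is a placeholder, and the `{"host": ...}` payload field in the comment is an assumption, not documented here):

```shell
# Build the Bearer header from the environment variable the daemon reads.
export AOS_API_KEY="replace-with-your-key"   # placeholder value
AUTH_HEADER="Authorization: Bearer $AOS_API_KEY"
echo "$AUTH_HEADER"
# Example protected call (the {"host": ...} field name is an assumption):
# curl -X POST http://localhost:8000/v1/hosts/switch -H "$AUTH_HEADER" \
#      -H "Content-Type: application/json" -d '{"host": "ollama-local"}'
```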

Related Skills

Looking for an alternative to aos or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw) — Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f) — Generate customizable widget plugins for the prompts.chat feed system
  • flags (vercel) — The React Framework
  • pr-review (pytorch) — Tensors and Dynamic neural networks in Python with strong GPU acceleration