inbox-one-llm-safety — for Claude Code inbox-one-llm-safety, mail_service, community, for Claude Code, ide skills, web-security-audit, inbox-one-security-review, AGENTS.md, current local mock risk, future external-model risk

v1.0.0

Über diesen Skill

Geeigneter Einsatz: Ideal for AI agents that need inbox one llm safety. Lokalisierte Zusammenfassung: # Inbox One LLM Safety Use this skill for AI-flow and prompt-boundary review in Inbox One. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Funktionen

Inbox One LLM Safety
Use this skill for AI-flow and prompt-boundary review in Inbox One.
This skill complements:
web-security-audit for dependency/header/config checks
inbox-one-security-review for application-layer auth, secret, SSRF, and persistence review

# Core Topics

minchanpark minchanpark
[0]
[0]
Updated: 4/18/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 8/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

Original recommendation layer Concrete use-case guidance Explicit limitations and caution
Review Score
8/11
Quality Score
45
Canonical Locale
en
Detected Body Locale
en

Geeigneter Einsatz: Ideal for AI agents that need inbox one llm safety. Lokalisierte Zusammenfassung: # Inbox One LLM Safety Use this skill for AI-flow and prompt-boundary review in Inbox One. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Warum diese Fähigkeit verwenden

Empfehlung: inbox-one-llm-safety helps agents inbox one llm safety. Inbox One LLM Safety Use this skill for AI-flow and prompt-boundary review in Inbox One. This AI agent skill supports Claude Code, Cursor, and

Am besten geeignet für

Geeigneter Einsatz: Ideal for AI agents that need inbox one llm safety.

Handlungsfähige Anwendungsfälle for inbox-one-llm-safety

Anwendungsfall: Applying Inbox One LLM Safety
Anwendungsfall: Applying Use this skill for AI-flow and prompt-boundary review in Inbox One
Anwendungsfall: Applying This skill complements:

! Sicherheit & Einschränkungen

  • Einschraenkung: Email content is untrusted input. It must never be treated like system instructions.
  • Einschraenkung: Check that model calls do not include unnecessary credentials, raw secrets, or unrelated mailbox content.
  • Einschraenkung: Prefer deterministic server-side rules for classification or safety-critical actions over model-only decisions.

Why this page is reference-only

  • - Current locale does not satisfy the locale-governance contract.
  • - The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

Boot Container Sandbox

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is inbox-one-llm-safety?

Geeigneter Einsatz: Ideal for AI agents that need inbox one llm safety. Lokalisierte Zusammenfassung: # Inbox One LLM Safety Use this skill for AI-flow and prompt-boundary review in Inbox One. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

How do I install inbox-one-llm-safety?

Run the command: npx killer-skills add minchanpark/mail_service. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for inbox-one-llm-safety?

Key use cases include: Anwendungsfall: Applying Inbox One LLM Safety, Anwendungsfall: Applying Use this skill for AI-flow and prompt-boundary review in Inbox One, Anwendungsfall: Applying This skill complements:.

Which IDEs are compatible with inbox-one-llm-safety?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for inbox-one-llm-safety?

Einschraenkung: Email content is untrusted input. It must never be treated like system instructions.. Einschraenkung: Check that model calls do not include unnecessary credentials, raw secrets, or unrelated mailbox content.. Einschraenkung: Prefer deterministic server-side rules for classification or safety-critical actions over model-only decisions..

How To Install

  1. 1. Open your terminal

    Open the terminal or command line in your project directory.

  2. 2. Run the install command

    Run: npx killer-skills add minchanpark/mail_service. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. 3. Start using the skill

    The skill is now active. Your AI agent can use inbox-one-llm-safety immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

inbox-one-llm-safety

# Inbox One LLM Safety Use this skill for AI-flow and prompt-boundary review in Inbox One. This AI agent skill supports Claude Code, Cursor, and Windsurf

SKILL.md
Readonly
Upstream Repository Material
The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.
Supporting Evidence

Inbox One LLM Safety

Use this skill for AI-flow and prompt-boundary review in Inbox One.

This skill complements:

  • web-security-audit for dependency/header/config checks
  • inbox-one-security-review for application-layer auth, secret, SSRF, and persistence review

First read

  1. Read README.md.
  2. Read dev/TDD.md, especially AI, prompt, and data-retention sections.
  3. Read AGENTS.md.
  4. Review the nearest AGENTS.md files for any touched AI or API subtree.

Primary review targets

  • src/lib/server/services/ai-service.ts
  • src/app/api/ai/**
  • src/views/inbox/mail-compose-sheet.tsx
  • Any future provider code that calls external LLM APIs

What to look for

  1. Prompt boundary confusion Email content is untrusted input. It must never be treated like system instructions.
  2. Unsafe automation AI output may suggest replies, drafts, or classifications, but it must not directly send mail or mutate high-trust state without explicit user action.
  3. Sensitive data leakage Check what message content, account context, recipients, or internal metadata are sent to the model.
  4. Render safety AI-generated summaries and drafts should render as plain text unless there is a deliberate sanitization layer.
  5. User prompt handling Extra user instructions should stay bounded and should not silently override high-level product rules.
  6. Retention and provider assumptions When real external models are introduced, call out retention, logging, redaction, and model-provider data handling.

Review checklist

  • Treat all incoming email content as adversarial prompt input.
  • Separate system/product rules from message text and user free-form instructions.
  • Confirm AI output is advisory until the user explicitly applies or sends it.
  • Check that model calls do not include unnecessary credentials, raw secrets, or unrelated mailbox content.
  • Flag any place where AI output could influence provider choice, account routing, or send transport directly.
  • Prefer deterministic server-side rules for classification or safety-critical actions over model-only decisions.

How to report findings

  • Lead with prompt injection or unsafe automation issues first.
  • Include the exact trust boundary that is being crossed: message body, user prompt, model output, or send action.
  • Distinguish between current local mock risk, future external-model risk, and production blocker.
  • If the problem is really route validation or auth, hand it off to inbox-one-security-review.

Out of scope

  • Dependency CVEs, CSP, headers, or bundled library scanning. Use web-security-audit.
  • General route validation, secret storage, SSRF, and authorization review. Use inbox-one-security-review.

Verwandte Fähigkeiten

Looking for an alternative to inbox-one-llm-safety or another community skill for your workflow? Explore these related open-source skills.

Alle anzeigen

openclaw-release-maintainer

Logo of openclaw
openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

333.8k
0
Künstliche Intelligenz

widget-generator

Logo of f
f

Erzeugen Sie anpassbare Widget-Plugins für das Prompts.Chat-Feed-System

149.6k
0
Künstliche Intelligenz

flags

Logo of vercel
vercel

Das React-Framework

138.4k
0
Browser

pr-review

Logo of pytorch
pytorch

Tensor und dynamische neuronale Netze in Python mit starker GPU-Beschleunigung

98.6k
0
Entwickler