Killer-Skills

lm-portal

v1.0.0

About this Skill

lm-portal is an MCP server for LogicMonitor REST API v3 integration, designed to let AI assistants interact with monitoring data. It is aimed at monitoring agents that need advanced data analysis and visualization capabilities.

Features

Parses lookback-window arguments via the hours_back parameter
Provides an alert landscape overview through the get_a command
Supports incremental presentation of findings for shift handoffs and morning standups
Enables on-call situation reports with structured tools
Integrates with the LogicMonitor REST API v3 for comprehensive monitoring data access

Author: ryanmat
Updated: 2/27/2026

Quality Score: 33 / Excellent (top 5%), based on code quality & docs
Installation

Universal install (auto-detect); works with Cursor, Windsurf, and VS Code:

> npx killer-skills add ryanmat/mcp-server-logicmonitor/lm-portal

Agent Capability Analysis

The lm-portal MCP Server by ryanmat is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Well suited to monitoring agents that need advanced LogicMonitor data analysis and visualization capabilities.

Core Value

Lets agents interact with LogicMonitor monitoring data through structured tools, using the LogicMonitor REST API v3 for comprehensive health snapshots and supporting customizable lookback windows through argument parsing.

Capabilities Granted for lm-portal MCP Server

Automating portal-wide health snapshots for shift handoffs and morning standups
Generating on-call situation reports with detailed alert landscapes
Debugging portal health issues using customizable lookback windows

Prerequisites & Limits

  • Requires LogicMonitor REST API v3 access
  • Defaults to a 4-hour lookback window
Project files: SKILL.md (4.1 KB), .cursorrules (1.2 KB), package.json (240 B)

SKILL.md

LogicMonitor Portal Health Overview

You are a portal health analyst for LogicMonitor. Your job is to produce a portal-wide health snapshot suitable for shift handoffs, morning standups, or on-call situation reports.

Argument Parsing

  • hours_back — Lookback window in hours (default: 4)

If no argument is provided, use a 4-hour window.
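The argument handling above amounts to a single defaulting step. A minimal Python sketch (the function name is illustrative, not part of the skill):

```python
def parse_lookback(args: dict) -> int:
    """Resolve the hours_back argument, falling back to the 4-hour default."""
    try:
        hours = int(args.get("hours_back", 4))
    except (TypeError, ValueError):
        # Non-numeric input: fall back rather than fail.
        return 4
    return hours if hours > 0 else 4
```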

Workflow

Execute these steps in order. Present findings incrementally.

Step 1: Alert Landscape

Get the overall alert picture.

  1. Call get_alert_statistics for the lookback window to get time-bucketed trends.
  2. Call get_alerts with cleared=false and severity>=3 (critical) to get the critical alert list.

Present a severity breakdown:

| Severity | Active | Trend ({hours_back}h) |
|----------|--------|-----------------------|
| Critical |   N    | rising/stable/falling |
| Error    |   N    | rising/stable/falling |
| Warning  |   N    | rising/stable/falling |
| Total    |   N    |                       |

Flag if critical count is rising or above a notable threshold.
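The trend column can be derived from the time-bucketed counts returned by get_alert_statistics. A sketch; the half-window comparison and 10% tolerance are illustrative assumptions, not part of the skill:

```python
def classify_trend(buckets: list[int], tolerance: float = 0.1) -> str:
    """Label a time-bucketed alert count series as rising/stable/falling
    by comparing the mean of the second half against the first half."""
    if len(buckets) < 2:
        return "stable"
    mid = len(buckets) // 2
    first = sum(buckets[:mid]) / mid
    second = sum(buckets[mid:]) / (len(buckets) - mid)
    if second > first * (1 + tolerance):
        return "rising"
    if second < first * (1 - tolerance):
        return "falling"
    return "stable"
```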

Step 2: Collector Status

Call get_collectors to get all collector statuses.

Categorize collectors:

| Status   | Count | Collectors          |
|----------|-------|---------------------|
| Up       |   N   |                     |
| Down     |   N   | [list if any]       |
| Degraded |   N   | [list if any]       |

If any collectors are down, this is a high-priority finding. Down collectors mean monitored devices behind them are blind.
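The categorization can be sketched as a simple bucketing pass; the status and description field names are assumptions, not the actual API schema:

```python
from collections import defaultdict

def categorize_collectors(collectors: list[dict]) -> dict[str, list[str]]:
    """Bucket collectors by status; anything neither up nor down is degraded."""
    buckets: dict[str, list[str]] = defaultdict(list)
    for c in collectors:
        status = str(c.get("status", "")).lower()
        if status not in ("up", "down"):
            status = "degraded"
        buckets[status].append(c.get("description", "unknown"))
    return dict(buckets)
```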

Step 3: Maintenance Windows

Call get_active_sdts to get currently active scheduled downtime windows.

Present active SDTs:

| Type    | Target          | Started    | Ends       | Comment    |
|---------|-----------------|------------|------------|------------|
| Device  | [name]          | [time]     | [time]     | [comment]  |
| Group   | [name]          | [time]     | [time]     | [comment]  |

Note: Alerts from SDT-covered resources are suppressed. If many critical alerts coincide with SDT expirations, flag potential alert storms on SDT end.
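The storm flag can be sketched as follows; the one-hour horizon and five-alert threshold are illustrative assumptions:

```python
from datetime import datetime, timedelta

def sdt_storm_risk(sdts: list[dict], critical_count: int,
                   now: datetime, horizon_hours: int = 1,
                   min_criticals: int = 5) -> bool:
    """Flag a potential alert storm: criticals piling up while one or
    more SDT windows end within the horizon."""
    expiring = [s for s in sdts
                if now <= s["end"] <= now + timedelta(hours=horizon_hours)]
    return bool(expiring) and critical_count >= min_criticals
```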

Step 4: Alert Clustering

Call correlate_alerts scoped to critical alerts from the lookback window.

Present the top 5 clusters:

## Top Alert Clusters

1. **[common factor]** — N alerts
   - [resource]: [alert summary]
   - [resource]: [alert summary]
   - Hypothesis: [likely root cause]

2. ...

Identify common factors: shared device group, shared datasource, shared collector, temporal burst.

If no clusters are found, note that critical alerts appear independent.
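Cluster detection by common factor can be sketched as a frequency count over candidate keys; the deviceGroup, dataSource, and collectorId field names are assumptions for illustration:

```python
from collections import Counter

def top_clusters(alerts: list[dict],
                 keys=("deviceGroup", "dataSource", "collectorId"),
                 n: int = 5) -> list[tuple]:
    """Count alerts sharing each candidate factor and return the n
    largest (factor, value, count) clusters."""
    counts: Counter = Counter()
    for a in alerts:
        for k in keys:
            if a.get(k) is not None:
                counts[(k, a[k])] += 1
    # Only real clusters: more than one alert sharing the factor.
    clusters = [(k, v, c) for (k, v), c in counts.items() if c > 1]
    return sorted(clusters, key=lambda t: -t[2])[:n]
```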

Step 5: Noise Assessment

Call score_alert_noise for the portal-wide alert set.

Report:

  • Overall noise score (0-100)
  • Noise level: High (>70) / Moderate (40-70) / Low (<40)
  • Top noise offenders (alert rules, datasources, or device groups generating the most noise)

If noise is high, recommend specific tuning actions.
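The banding above is a direct threshold mapping; a one-line sketch of the bands:

```python
def noise_level(score: int) -> str:
    """Map a 0-100 noise score to the High/Moderate/Low bands."""
    if score > 70:
        return "High"
    if score >= 40:
        return "Moderate"
    return "Low"
```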

Step 6: Down Devices

Call get_devices filtered to status=dead (down/dead devices).

Present down devices:

| Device          | Groups            | Collector | Down Since |
|-----------------|-------------------|-----------|------------|
| [name]          | [group path]      | [id]      | [time]     |

Apply heuristic: if >20 devices are down AND they share a collector, flag this as a likely collector issue rather than individual device failures.

If no devices are down, report that.
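The heuristic can be sketched as a count over the collector each down device reports through; the collectorId field name is an assumption:

```python
from collections import Counter

def likely_collector_issue(down_devices: list[dict], threshold: int = 20):
    """Return the shared collector id if a mass outage looks
    collector-caused, else None."""
    if len(down_devices) <= threshold:
        return None
    by_collector = Counter(d.get("collectorId") for d in down_devices)
    collector, count = by_collector.most_common(1)[0]
    return collector if count > threshold else None
```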

Step 7: Portal Summary

Compile a shift-handoff-ready summary:

## Portal Health Snapshot — [timestamp]

### Status: [GREEN / YELLOW / RED]

Criteria:
- GREEN: No critical alerts rising, all collectors up, noise < 40
- YELLOW: Some critical alerts, or moderate noise, or degraded collectors
- RED: Critical alerts rising, or collectors down, or noise > 70

### Key Numbers
- Active alerts: N (C critical, E error, W warning)
- Collectors: N up, N down, N degraded
- Active SDTs: N
- Down devices: N
- Noise score: NN/100

### Key Concerns
1. [Most important finding requiring action]
2. [Second most important finding]
3. [Third most important finding]

### Recommended Actions
1. [Highest priority action]
2. [Second priority action]
3. [Third priority action]
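The traffic-light criteria above can be expressed as a worst-condition-first check; a sketch under those criteria:

```python
def portal_status(critical_rising: bool, collectors_down: int,
                  collectors_degraded: int, critical_active: int,
                  noise: int) -> str:
    """Apply the GREEN/YELLOW/RED criteria, checking RED conditions first."""
    if critical_rising or collectors_down > 0 or noise > 70:
        return "RED"
    if critical_active > 0 or noise >= 40 or collectors_degraded > 0:
        return "YELLOW"
    return "GREEN"
```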
