Killer-Skills

mediapipe-pose-detection — Community

v1.0.0 · GitHub
About this Skill

Video-based kinematic analysis for athletic performance, aimed at Computer Vision Agents that need advanced human pose estimation.

Author: feniix
Updated: 3/4/2026

Quality Score

55 (Excellent, Top 5%). Based on code quality & docs.
Installation

Universal install (auto-detect); works with Cursor, Windsurf, and VS Code:

> npx killer-skills add feniix/kinemotion/mediapipe-pose-detection

Agent Capability Analysis

The mediapipe-pose-detection MCP Server by feniix is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Perfect for Computer Vision Agents needing advanced human pose estimation and kinematic analysis capabilities.

Core Value

Empowers agents to analyze athletic performance through video-based kinematic analysis. Built on MediaPipe Pose Detection, it provides the key landmarks for jump analysis: hip, knee, ankle, and heel.

Capabilities Granted for mediapipe-pose-detection MCP Server

Automating jump analysis for athletic performance evaluation
Generating kinematic data for sports science research
Detecting ground contact and takeoff points for jump optimization

Prerequisites & Limits

  • Requires video input for pose detection
  • Limited to human pose estimation, not applicable for other objects or scenes
  • Dependent on MediaPipe library for pose detection functionality
Project

  • SKILL.md (12.8 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

# Tags

[No tags]
SKILL.md

MediaPipe Pose Detection

Key Landmarks for Jump Analysis

Lower Body (Primary for Jumps)

| Landmark | Left Index | Right Index | Use Case |
|----------|------------|-------------|----------|
| Hip | 23 | 24 | Center of mass, jump height |
| Knee | 25 | 26 | Triple extension, landing |
| Ankle | 27 | 28 | Ground contact detection |
| Heel | 29 | 30 | Takeoff/landing timing |
| Toe | 31 | 32 | Forefoot contact |

Upper Body (Secondary)

| Landmark | Left Index | Right Index | Use Case |
|----------|------------|-------------|----------|
| Shoulder | 11 | 12 | Arm swing tracking |
| Elbow | 13 | 14 | Arm action |
| Wrist | 15 | 16 | Arm swing timing |

Reference Points

| Landmark | Index | Use Case |
|----------|-------|----------|
| Nose | 0 | Head position |
| Left Eye | 2 | Face orientation |
| Right Eye | 5 | Face orientation |
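
The indices above match MediaPipe's `PoseLandmark` enumeration. For scripts that index raw landmark arrays, a small constants module keeps them readable (a convenience sketch, not part of kinemotion):

```python
# MediaPipe Pose landmark indices used throughout this skill
# (values match mp.solutions.pose.PoseLandmark)
NOSE = 0
LEFT_SHOULDER, RIGHT_SHOULDER = 11, 12
LEFT_ELBOW, RIGHT_ELBOW = 13, 14
LEFT_WRIST, RIGHT_WRIST = 15, 16
LEFT_HIP, RIGHT_HIP = 23, 24
LEFT_KNEE, RIGHT_KNEE = 25, 26
LEFT_ANKLE, RIGHT_ANKLE = 27, 28
LEFT_HEEL, RIGHT_HEEL = 29, 30
LEFT_FOOT_INDEX, RIGHT_FOOT_INDEX = 31, 32  # "Toe" in the tables above
```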

Confidence Thresholds

Default Settings

```python
min_detection_confidence = 0.5  # Initial pose detection
min_tracking_confidence = 0.5   # Frame-to-frame tracking
```

Quality Presets (auto_tuning.py)

| Preset | Detection | Tracking | Use Case |
|--------|-----------|----------|----------|
| fast | 0.3 | 0.3 | Quick processing, tolerates errors |
| balanced | 0.5 | 0.5 | Default, good accuracy |
| accurate | 0.7 | 0.7 | Best accuracy, slower |

Tuning Guidelines

  • Increase thresholds when: Jittery landmarks, false detections
  • Decrease thresholds when: Missing landmarks, tracking loss
  • Typical adjustment: ±0.1 increments
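
These guidelines can be sketched as a helper that starts from a preset and nudges both thresholds in ±0.1 steps (preset values come from the table above; the helper itself is illustrative, not kinemotion API):

```python
# Start from a quality preset and nudge both confidences in ±0.1 steps.
# Positive steps = raise thresholds (jittery landmarks, false detections);
# negative steps = lower them (missing landmarks, tracking loss).
PRESETS = {
    "fast": (0.3, 0.3),
    "balanced": (0.5, 0.5),
    "accurate": (0.7, 0.7),
}

def tuned_confidences(preset="balanced", steps=0):
    detection, tracking = PRESETS[preset]
    clamp = lambda v: min(0.9, max(0.1, round(v + 0.1 * steps, 1)))
    return clamp(detection), clamp(tracking)
```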

Common Issues and Solutions

Landmark Jitter

Symptoms: Landmarks jump erratically between frames

Solutions:

  1. Apply Butterworth low-pass filter (cutoff 6-10 Hz)
  2. Increase tracking confidence
  3. Use One-Euro filter for real-time applications
```python
# Butterworth filter (filtering.py)
from kinemotion.core.filtering import butterworth_filter
smoothed = butterworth_filter(landmarks, cutoff=8.0, fps=30)

# One-Euro filter (smoothing.py)
from kinemotion.core.smoothing import one_euro_filter
smoothed = one_euro_filter(landmarks, min_cutoff=1.0, beta=0.007)
```

Left/Right Confusion

Symptoms: MediaPipe swaps left and right landmarks mid-video

Cause: Occlusion at 90° lateral camera angle

Solutions:

  1. Use 45° oblique camera angle (recommended)
  2. Post-process to detect and correct swaps
  3. Use single-leg tracking when possible
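
Option 2 (post-processing swaps) can be approximated with a continuity check: if swapping this frame's left/right coordinates matches the previous frame better than keeping them, they were probably flipped. A minimal sketch (hypothetical helper, single x-coordinate per side for clarity):

```python
def fix_lr_swaps(left_x, right_x):
    # Greedy continuity check on x-coordinates of a left/right landmark pair
    left_x, right_x = list(left_x), list(right_x)
    for t in range(1, len(left_x)):
        keep = abs(left_x[t] - left_x[t - 1]) + abs(right_x[t] - right_x[t - 1])
        swap = abs(right_x[t] - left_x[t - 1]) + abs(left_x[t] - right_x[t - 1])
        if swap < keep:  # swapped assignment is more continuous -> undo it
            left_x[t], right_x[t] = right_x[t], left_x[t]
    return left_x, right_x
```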

Tracking Loss

Symptoms: Landmarks disappear for several frames

Causes:

  • Athlete moves out of frame
  • Fast motion blur
  • Occlusion by equipment/clothing

Solutions:

  1. Ensure full athlete visibility throughout video
  2. Use higher frame rate (60+ fps)
  3. Interpolate missing frames (up to 3-5 frames)
```python
# Simple linear interpolation for gaps
import numpy as np

def interpolate_gaps(landmarks, max_gap=5):
    # Fill NaN gaps with linear interpolation
    for i in range(landmarks.shape[1]):
        mask = np.isnan(landmarks[:, i])
        if 0 < mask.sum() <= max_gap:
            landmarks[:, i] = np.interp(
                np.arange(len(landmarks)),
                np.where(~mask)[0],
                landmarks[~mask, i],
            )
    return landmarks
```

Low Confidence Scores

Symptoms: Visibility scores consistently below threshold

Causes:

  • Poor lighting (backlighting, shadows)
  • Low contrast clothing vs background
  • Partial occlusion

Solutions:

  1. Improve lighting (front-lit, even)
  2. Ensure clothing contrasts with background
  3. Remove obstructions from camera view

Video Processing (video_io.py)

Rotation Handling

Mobile videos often have rotation metadata that must be handled:

```python
# video_io.py handles this automatically:
# reads EXIF rotation and applies correction
from kinemotion.core.video_io import read_video_frames

frames, fps, dimensions = read_video_frames("mobile_video.mp4")
# Frames are correctly oriented regardless of source
```

Manual Rotation (if needed)

```bash
# FFmpeg rotation options
ffmpeg -i input.mp4 -vf "transpose=1" output.mp4  # 90° clockwise
ffmpeg -i input.mp4 -vf "transpose=2" output.mp4  # 90° counter-clockwise
ffmpeg -i input.mp4 -vf "hflip" output.mp4        # Horizontal flip
```

Frame Dimensions

Always read actual frame dimensions from first frame, not metadata:

```python
import cv2

# Correct approach: read dimensions from an actual decoded frame
cap = cv2.VideoCapture(video_path)
ret, frame = cap.read()
height, width = frame.shape[:2]

# Incorrect (metadata may be wrong for rotated videos)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
```

Coordinate Systems

MediaPipe Output

  • Normalized coordinates: (0.0, 0.0) to (1.0, 1.0)
  • Origin: Top-left corner
  • X: Left to right
  • Y: Top to bottom
  • Z: Depth (relative, camera-facing is negative)

Conversion to Pixels

```python
def normalized_to_pixel(landmark, width, height):
    x = int(landmark.x * width)
    y = int(landmark.y * height)
    return x, y
```

Visibility Score

Each landmark has a visibility score (0.0-1.0):

  • > 0.5: Likely visible and accurate
  • < 0.5: May be occluded or estimated
  • = 0.0: Not detected
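
One practical use of the score is to convert landmarks to pixels while masking low-visibility points as NaN, so the gap-interpolation step above can treat them as missing. A sketch (assumes MediaPipe-style landmark objects with `.x`, `.y`, `.visibility`; the function name is illustrative):

```python
import numpy as np

def landmarks_to_pixels(landmarks, width, height, min_visibility=0.5):
    # (N, 3) array of (x_px, y_px, visibility); low-visibility points -> NaN
    out = np.full((len(landmarks), 3), np.nan)
    for i, lm in enumerate(landmarks):
        if lm.visibility >= min_visibility:
            out[i] = (lm.x * width, lm.y * height, lm.visibility)
    return out
```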

Debug Overlay (debug_overlay.py)

Skeleton Drawing

```python
# Key connections for jump visualization
POSE_CONNECTIONS = [
    (23, 25), (25, 27), (27, 29), (27, 31),  # Left leg
    (24, 26), (26, 28), (28, 30), (28, 32),  # Right leg
    (23, 24),            # Hips
    (11, 23), (12, 24),  # Torso
]
```

Color Coding

| Element | Color (BGR) | Meaning |
|---------|-------------|---------|
| Skeleton | (0, 255, 0) | Green - normal tracking |
| Low confidence | (0, 165, 255) | Orange - visibility < 0.5 |
| Key angles | (255, 0, 0) | Blue - measured angles |
| Phase markers | (0, 0, 255) | Red - takeoff/landing |
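
The green/orange choice can be factored into a small helper so the overlay code stays declarative (BGR tuples as in the table; the function name is illustrative):

```python
def connection_color(vis_a, vis_b, threshold=0.5):
    # Orange if either endpoint is low-confidence, green otherwise (BGR)
    if min(vis_a, vis_b) < threshold:
        return (0, 165, 255)  # orange: visibility < 0.5
    return (0, 255, 0)        # green: normal tracking
```

When drawing the skeleton, the result would be passed as the color argument to cv2.line for each pair in POSE_CONNECTIONS.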

Performance Optimization

Reducing Latency

  1. Use model_complexity=0 for fastest inference
  2. Process every Nth frame for batch analysis
  3. Use GPU acceleration if available
```python
import mediapipe as mp

pose = mp.solutions.pose.Pose(
    model_complexity=0,           # 0=Lite, 1=Full, 2=Heavy
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
    static_image_mode=False       # False for video (uses tracking)
)
```

Memory Management

  • Release pose estimator after processing: pose.close()
  • Process videos in chunks for large files
  • Use generators for frame iteration
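
The chunking and generator advice can be combined: iterate frames lazily and hand them to the estimator in bounded batches. A sketch over a generic frame iterator (with OpenCV this would wrap `cap.read()`):

```python
from itertools import islice

def chunked(frames, size=300):
    # Yield lists of at most `size` frames so memory stays bounded
    it = iter(frames)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch
```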

Integration with kinemotion

File Locations

  • Pose estimation: src/kinemotion/core/pose.py
  • Video I/O: src/kinemotion/core/video_io.py
  • Filtering: src/kinemotion/core/filtering.py
  • Smoothing: src/kinemotion/core/smoothing.py
  • Auto-tuning: src/kinemotion/core/auto_tuning.py

Typical Pipeline

```text
Video → read_video_frames() → pose.process() → filter/smooth → analyze
```

Manual Observation for Validation

During development, use manual frame-by-frame observation to establish ground truth and validate pose detection accuracy.

When to Use Manual Observation

  1. Algorithm development: Validating new phase detection methods
  2. Parameter tuning: Comparing detected vs actual frames
  3. Debugging: Investigating pose detection failures
  4. Ground truth collection: Building validation datasets

Ground Truth Data Collection Protocol

Step 1: Generate Debug Video

```bash
uv run kinemotion cmj-analyze video.mp4 --output debug.mp4
```

Step 2: Manual Frame-by-Frame Analysis

Open debug video in a frame-stepping tool (QuickTime, VLC with frame advance, or video editor).

Step 3: Record Observations

For each key phase, record the frame number where the event occurs:

```text
=== MANUAL OBSERVATION: PHASE DETECTION ===

Video: ________________________
FPS: _____  Total Frames: _____

PHASE DETECTION (frame numbers)
| Phase | Detected | Manual | Error | Notes |
|-------|----------|--------|-------|-------|
| Standing End | ___ | ___ | ___ | |
| Lowest Point | ___ | ___ | ___ | |
| Takeoff | ___ | ___ | ___ | |
| Peak Height | ___ | ___ | ___ | |
| Landing | ___ | ___ | ___ | |

LANDMARK QUALITY (per phase)
| Phase | Hip Visible | Knee Visible | Ankle Visible | Notes |
|-------|-------------|--------------|---------------|-------|
| Standing | Y/N | Y/N | Y/N | |
| Countermovement | Y/N | Y/N | Y/N | |
| Flight | Y/N | Y/N | Y/N | |
| Landing | Y/N | Y/N | Y/N | |
```

Phase Detection Criteria

Standing End: Last frame before downward hip movement begins

  • Look for: Hip starts descending, knees begin flexing

Lowest Point: Frame where hip reaches minimum height

  • Look for: Deepest squat position, hip at lowest Y coordinate

Takeoff: First frame where both feet leave ground

  • Look for: Toe/heel landmarks separate from ground plane
  • Note: May be 1-2 frames after visible liftoff due to detection lag

Peak Height: Frame where hip reaches maximum height

  • Look for: Hip at highest Y coordinate during flight

Landing: First frame where foot contacts ground

  • Look for: Heel or toe landmark touches ground plane
  • Note: Algorithm may detect 1-2 frames late (velocity-based)
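
For cross-checking manually observed frames, a rough velocity-based candidate finder on hip height can help. Image y grows downward, so takeoff shows up as the most negative dy and landing as the strongest positive dy afterwards (illustrative heuristic only, not the kinemotion detector):

```python
import numpy as np

def flight_candidates(hip_y, fps):
    # hip_y: per-frame hip pixel y-coordinate (down is positive)
    vy = np.gradient(hip_y) * fps                      # px/s
    takeoff = int(np.argmin(vy))                       # fastest upward motion
    landing = takeoff + int(np.argmax(vy[takeoff:]))   # fastest downward after
    return takeoff, landing
```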

Landmark Quality Assessment

For each landmark, observe:

| Quality | Criteria |
|---------|----------|
| Good | Landmark stable, positioned correctly on body part |
| Jittery | Landmark oscillates ±5-10 pixels between frames |
| Offset | Landmark consistently displaced from actual position |
| Lost | Landmark missing or wildly incorrect |
| Swapped | Left/right landmarks switched |

Recording Observations Format

When validating, provide structured data:

```text
## Ground Truth: [video_name]

**Video Info:**
- Frames: 215
- FPS: 60
- Duration: 3.58s
- Camera: 45° oblique

**Phase Detection Comparison:**

| Phase | Detected | Manual | Error (frames) | Error (ms) |
|-------|----------|--------|----------------|------------|
| Standing End | 64 | 64 | 0 | 0 |
| Lowest Point | 91 | 88 | +3 (late) | +50 |
| Takeoff | 104 | 104 | 0 | 0 |
| Landing | 144 | 142 | +2 (late) | +33 |

**Error Analysis:**
- Mean absolute error: 1.25 frames (21ms)
- Bias detected: Landing consistently late
- Accuracy: 2/4 perfect, 4/4 within ±3 frames

**Landmark Issues Observed:**
- Frame 87-92: Hip jitter during lowest point
- Frame 140-145: Ankle tracking unstable at landing
```

Acceptable Error Thresholds

At 60fps (16.67ms per frame):

| Error Level | Frames | Time | Interpretation |
|-------------|--------|------|----------------|
| Perfect | 0 | 0ms | Exact match |
| Excellent | ±1 | ±17ms | Within human observation variance |
| Good | ±2 | ±33ms | Acceptable for most metrics |
| Acceptable | ±3 | ±50ms | May affect precise timing metrics |
| Investigate | >3 | >50ms | Algorithm may need adjustment |
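
Mean absolute error and its millisecond equivalent, as reported in the example above, are simple to compute (the function name is illustrative):

```python
def frame_error_report(detected, manual, fps=60):
    # Signed frame errors (positive = detected late), MAE in frames and ms
    errors = [d - m for d, m in zip(detected, manual)]
    mae_frames = sum(abs(e) for e in errors) / len(errors)
    return errors, mae_frames, mae_frames * 1000.0 / fps
```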

Bias Detection

Look for systematic patterns across multiple videos:

| Pattern | Meaning | Action |
|---------|---------|--------|
| Consistent +N frames | Algorithm detects late | Adjust threshold earlier |
| Consistent -N frames | Algorithm detects early | Adjust threshold later |
| Variable ±N frames | Normal variance | No action needed |
| Increasing error | Tracking degrades | Check landmark quality |
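
The first three patterns can be approximated by thresholding the mean signed error across videos (the ±1 frame tolerance here is an assumption, not a project setting):

```python
def classify_bias(signed_errors, tol=1.0):
    # Positive mean -> detects late (move threshold earlier);
    # negative mean -> detects early (move threshold later)
    mean_error = sum(signed_errors) / len(signed_errors)
    if mean_error > tol:
        return "late"
    if mean_error < -tol:
        return "early"
    return "ok"
```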

Integration with Serena (Memory)

Store ground truth observations using write_note (Serena):

```python
# Save validation results
write_note(
    title="CMJ Phase Detection Validation - [video_name]",
    content="[structured observation data]",
    folder="biomechanics"
)

# Search previous validations
search_notes(query="phase detection ground truth")

# Build context for analysis
build_context(url="memory://biomechanics/*")
```

Example: CMJ Validation Study Reference

See basic-memory (Serena) for complete validation study:

  • biomechanics/cmj-phase-detection-validation-45deg-oblique-view-ground-truth
  • biomechanics/cmj-landing-detection-bias-root-cause-analysis
  • biomechanics/cmj-landing-detection-impact-vs-contact-method-comparison

Key findings from validation:

  • Standing End: 100% accuracy (0 frame error)
  • Takeoff: ~0.7 frame mean error (excellent)
  • Lowest Point: ~2.3 frame mean error (variable)
  • Landing: +1-2 frame consistent bias (investigate)
