video-editing — AI agent skill for video editing with FFmpeg, Remotion, and Descript (affaan-m/everything-claude-code)

Verified
v1.0.0
GitHub

About this Skill

Ideal for Media Agents requiring advanced AI-assisted video editing capabilities with FFmpeg, Remotion, and Descript. AI-assisted video editing workflows for cutting, structuring, and augmenting real footage. Covers the full pipeline from raw capture through FFmpeg, Remotion, ElevenLabs, fal.ai, and final polish in Descript or CapCut. Use when the user wants to edit video, cut footage, create vlogs, or build video content.


affaan-m
108.5k · 14167
Updated: 3/26/2026

Quality Score

95 (Excellent) · Top 5% · Based on code quality & docs
Installation
Universal Install (Auto-Detect)
> npx killer-skills add affaan-m/everything-claude-code/video-editing
Supports 19+ Platforms
Cursor
Windsurf
VS Code
Trae
Claude
OpenClaw
+12 more

Agent Capability Analysis

The video-editing skill by affaan-m is an open-source official AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Ideal for Media Agents requiring advanced AI-assisted video editing capabilities with FFmpeg, Remotion, and Descript.

Core Value

Empowers agents to efficiently edit and refine real footage using AI-assisted workflows, supporting libraries like FFmpeg and Remotion, and final polish in Descript or CapCut, enabling fast video content creation and optimization for various platforms.

Capabilities Granted for video-editing

Automating video editing for vlogs and tutorials
Reframing footage for different social media platforms like YouTube, TikTok, and Instagram
Adding overlays, subtitles, music, or voiceover to existing video content

Prerequisites & Limits

  • Requires existing video footage for editing
  • Not suitable for generating video from prompts
  • Dependent on compatibility with specific video editing software like Descript or CapCut
Project

  • SKILL.md (9.1 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

Video Editing

AI-assisted editing for real footage. Not generation from prompts. Fast editing of existing video.

When to Activate

  • User wants to edit, cut, or structure video footage
  • Turning long recordings into short-form content
  • Building vlogs, tutorials, or demo videos from raw capture
  • Adding overlays, subtitles, music, or voiceover to existing video
  • Reframing video for different platforms (YouTube, TikTok, Instagram)
  • User says "edit video", "cut this footage", "make a vlog", or "video workflow"

Core Thesis

AI video editing is useful when you stop asking it to create the whole video and start using it to compress, structure, and augment real footage. The value is not generation. The value is compression.

The Pipeline

Screen Studio / raw footage
  → Claude / Codex
  → FFmpeg
  → Remotion
  → ElevenLabs / fal.ai
  → Descript or CapCut

Each layer has a specific job. Do not skip layers. Do not try to make one tool do everything.

Layer 1: Capture (Screen Studio / Raw Footage)

Collect the source material:

  • Screen Studio: polished screen recordings for app demos, coding sessions, browser workflows
  • Raw camera footage: vlog footage, interviews, event recordings
  • Desktop capture via VideoDB: session recording with real-time context (see videodb skill)

Output: raw files ready for organization.

Layer 2: Organization (Claude / Codex)

Use Claude Code or Codex to:

  • Transcribe and label: generate transcript, identify topics and themes
  • Plan structure: decide what stays, what gets cut, what order works
  • Identify dead sections: find pauses, tangents, repeated takes
  • Generate edit decision list: timestamps for cuts, segments to keep
  • Scaffold FFmpeg and Remotion code: generate the commands and compositions
Example prompt:
"Here's the transcript of a 4-hour recording. Identify the 8 strongest segments
for a 24-minute vlog. Give me FFmpeg cut commands for each segment."

This layer is about structure, not final creative taste.
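The edit decision list can be kept as plain `start,end,label` lines, the same format the Layer 3 batch script reads from cuts.txt. A minimal Python sketch of turning that list into FFmpeg cut commands (function names are illustrative, not part of any library):

```python
def parse_timestamp(ts: str) -> float:
    """Convert 'HH:MM:SS' or 'MM:SS' into seconds."""
    seconds = 0.0
    for part in ts.split(":"):
        seconds = seconds * 60 + float(part)
    return seconds


def edl_to_ffmpeg(edl_csv: str, source: str = "raw.mp4") -> list[str]:
    """Turn 'start,end,label' lines into FFmpeg stream-copy cut commands."""
    commands = []
    for line in edl_csv.strip().splitlines():
        start, end, label = (field.strip() for field in line.split(","))
        # -nostdin matters when these commands run inside a shell loop
        commands.append(
            f'ffmpeg -nostdin -i {source} -ss {start} -to {end} -c copy "segments/{label}.mp4"'
        )
    return commands


edl = """00:12:30,00:15:45,intro
00:41:10,00:44:02,demo"""
for cmd in edl_to_ffmpeg(edl):
    print(cmd)
```

The timestamps stay human-readable in the EDL; `parse_timestamp` is there for when downstream tooling needs durations in seconds.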

Layer 3: Deterministic Cuts (FFmpeg)

FFmpeg handles the boring but critical work: splitting, trimming, concatenating, and preprocessing.

Extract segment by timestamp

```bash
ffmpeg -i raw.mp4 -ss 00:12:30 -to 00:15:45 -c copy segment_01.mp4
```

Batch cut from edit decision list

```bash
#!/bin/bash
# cuts.txt: start,end,label
while IFS=, read -r start end label; do
  # -nostdin keeps ffmpeg from consuming the loop's stdin (cuts.txt)
  ffmpeg -nostdin -i raw.mp4 -ss "$start" -to "$end" -c copy "segments/${label}.mp4"
done < cuts.txt
```

Concatenate segments

```bash
# Create file list
for f in segments/*.mp4; do echo "file '$f'"; done > concat.txt
ffmpeg -f concat -safe 0 -i concat.txt -c copy assembled.mp4
```

Create proxy for faster editing

```bash
ffmpeg -i raw.mp4 -vf "scale=960:-2" -c:v libx264 -preset ultrafast -crf 28 proxy.mp4
```

Extract audio for transcription

```bash
ffmpeg -i raw.mp4 -vn -acodec pcm_s16le -ar 16000 audio.wav
```

Normalize audio levels

```bash
ffmpeg -i segment.mp4 -af loudnorm=I=-16:TP=-1.5:LRA=11 -c:v copy normalized.mp4
```

Layer 4: Programmable Composition (Remotion)

Remotion turns editing problems into composable code. Use it for things that traditional editors make painful:

When to use Remotion

  • Overlays: text, images, branding, lower thirds
  • Data visualizations: charts, stats, animated numbers
  • Motion graphics: transitions, explainer animations
  • Composable scenes: reusable templates across videos
  • Product demos: annotated screenshots, UI highlights

Basic Remotion composition

```tsx
import { AbsoluteFill, Sequence, Video, useCurrentFrame } from "remotion";

export const VlogComposition: React.FC = () => {
  const frame = useCurrentFrame();

  return (
    <AbsoluteFill>
      {/* Main footage */}
      <Sequence from={0} durationInFrames={300}>
        <Video src="/segments/intro.mp4" />
      </Sequence>

      {/* Title overlay */}
      <Sequence from={30} durationInFrames={90}>
        <AbsoluteFill
          style={{
            justifyContent: "center",
            alignItems: "center",
          }}
        >
          <h1
            style={{
              fontSize: 72,
              color: "white",
              textShadow: "2px 2px 8px rgba(0,0,0,0.8)",
            }}
          >
            The AI Editing Stack
          </h1>
        </AbsoluteFill>
      </Sequence>

      {/* Next segment */}
      <Sequence from={300} durationInFrames={450}>
        <Video src="/segments/demo.mp4" />
      </Sequence>
    </AbsoluteFill>
  );
};
```

Render output

```bash
npx remotion render src/index.ts VlogComposition output.mp4
```

See the Remotion docs for detailed patterns and API reference.
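Remotion sequences are addressed in frames, so seconds-based timestamps from the edit decision list need converting before they become `from` / `durationInFrames` props. A small Python sketch of that bookkeeping (assumes a 30 fps composition; in a real project this logic would live in the TypeScript side):

```python
FPS = 30  # assumed composition frame rate


def seconds_to_frames(seconds: float, fps: int = FPS) -> int:
    """Convert a duration in seconds to a whole number of frames."""
    return round(seconds * fps)


def sequence_layout(durations: list[float], fps: int = FPS) -> list[dict]:
    """Compute back-to-back Sequence props: each segment starts where the last ended."""
    layout, cursor = [], 0
    for d in durations:
        frames = seconds_to_frames(d, fps)
        layout.append({"from": cursor, "durationInFrames": frames})
        cursor += frames
    return layout


# Two segments of 10 s and 15 s → from=0/300 and from=300/450,
# matching the numbers in the composition above
print(sequence_layout([10.0, 15.0]))
```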

Layer 5: Generated Assets (ElevenLabs / fal.ai)

Generate only what you need. Do not generate the whole video.

Voiceover with ElevenLabs

```python
import os

import requests

voice_id = "YOUR_VOICE_ID"  # ElevenLabs voice to use

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
    headers={
        "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "text": "Your narration text here",
        "model_id": "eleven_turbo_v2_5",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
)
resp.raise_for_status()
with open("voiceover.mp3", "wb") as f:
    f.write(resp.content)
```

Music and SFX with fal.ai

Use the fal-ai-media skill for:

  • Background music generation
  • Sound effects (ThinkSound model for video-to-audio)
  • Transition sounds

Generated visuals with fal.ai

Use for insert shots, thumbnails, or b-roll that doesn't exist:

generate(app_id: "fal-ai/nano-banana-pro", input_data: {
  "prompt": "professional thumbnail for tech vlog, dark background, code on screen",
  "image_size": "landscape_16_9"
})

VideoDB generative audio

If VideoDB is configured:

```python
voiceover = coll.generate_voice(text="Narration here", voice="alloy")
music = coll.generate_music(prompt="lo-fi background for coding vlog", duration=120)
sfx = coll.generate_sound_effect(prompt="subtle whoosh transition")
```

Layer 6: Final Polish (Descript / CapCut)

The last layer is human. Use a traditional editor for:

  • Pacing: adjust cuts that feel too fast or slow
  • Captions: auto-generated, then manually cleaned
  • Color grading: basic correction and mood
  • Final audio mix: balance voice, music, and SFX levels
  • Export: platform-specific formats and quality settings

This is where taste lives. AI clears the repetitive work. You make the final calls.

Social Media Reframing

Different platforms need different aspect ratios:

| Platform | Aspect Ratio | Resolution |
| --- | --- | --- |
| YouTube | 16:9 | 1920x1080 |
| TikTok / Reels | 9:16 | 1080x1920 |
| Instagram Feed | 1:1 | 1080x1080 |
| X / Twitter | 16:9 or 1:1 | 1280x720 or 720x720 |
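The crop-then-scale commands below all follow the same arithmetic: take the largest centered crop that matches the target aspect ratio, then scale to the target resolution. A Python sketch of that calculation (illustrative helper; integer division, so odd pixel counts get floored):

```python
def center_crop(src_w: int, src_h: int, target_w: int, target_h: int) -> tuple[int, int]:
    """Largest centered crop of the source matching the target aspect ratio."""
    if src_w * target_h > src_h * target_w:
        # Source is wider than target: keep full height, reduce width
        return (src_h * target_w // target_h, src_h)
    # Source is taller (or equal): keep full width, reduce height
    return (src_w, src_w * target_h // target_w)


# 1920x1080 → 9:16 vertical: crop to 607x1080, then scale to 1080x1920
print(center_crop(1920, 1080, 1080, 1920))  # → (607, 1080)
```

The comparison is done with cross-multiplication to stay in integer math and avoid float rounding on aspect ratios.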

Reframe with FFmpeg

```bash
# 16:9 to 9:16 (center crop)
ffmpeg -i input.mp4 -vf "crop=ih*9/16:ih,scale=1080:1920" vertical.mp4

# 16:9 to 1:1 (center crop)
ffmpeg -i input.mp4 -vf "crop=ih:ih,scale=1080:1080" square.mp4
```

Reframe with VideoDB

```python
from videodb import ReframeMode

# Smart reframe (AI-guided subject tracking)
reframed = video.reframe(start=0, end=60, target="vertical", mode=ReframeMode.smart)
```

Scene Detection and Auto-Cut

FFmpeg scene detection

```bash
# Detect scene changes (threshold 0.3 = moderate sensitivity)
ffmpeg -i input.mp4 -vf "select='gt(scene,0.3)',showinfo" -vsync vfr -f null - 2>&1 | grep showinfo
```

Silence detection for auto-cut

```bash
# Find silent segments (useful for cutting dead air)
ffmpeg -i input.mp4 -af silencedetect=noise=-30dB:d=2 -f null - 2>&1 | grep silence
```
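silencedetect prints `silence_start:` / `silence_end:` markers to stderr; inverting them gives the segments worth keeping. A Python sketch of that parsing (the log format matches current FFmpeg builds but can vary by version; helper names are illustrative):

```python
import re


def silent_ranges(log: str) -> list[tuple[float, float]]:
    """Extract (start, end) pairs from silencedetect output."""
    starts = [float(m) for m in re.findall(r"silence_start: ([\d.]+)", log)]
    ends = [float(m) for m in re.findall(r"silence_end: ([\d.]+)", log)]
    return list(zip(starts, ends))


def keep_ranges(silences: list[tuple[float, float]], total: float) -> list[tuple[float, float]]:
    """Invert silent ranges into the non-silent segments to keep."""
    kept, cursor = [], 0.0
    for start, end in silences:
        if start > cursor:
            kept.append((cursor, start))
        cursor = end
    if cursor < total:
        kept.append((cursor, total))
    return kept


log = """[silencedetect @ 0x1] silence_start: 12.5
[silencedetect @ 0x1] silence_end: 15.0 | silence_duration: 2.5"""
print(keep_ranges(silent_ranges(log), total=60.0))  # → [(0.0, 12.5), (15.0, 60.0)]
```

The resulting keep-ranges feed straight back into the Layer 3 batch-cut loop.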

Highlight extraction

Use Claude to analyze transcript + scene timestamps:

"Given this transcript with timestamps and these scene change points,
identify the 5 most engaging 30-second clips for social media."
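One way to combine the two signals: snap the model's suggested clip boundaries to the nearest detected scene change, so cuts land on natural transitions instead of mid-shot. A hedged Python sketch (hypothetical helper, not part of any library):

```python
def snap_to_scenes(
    clip_start: float,
    clip_end: float,
    scene_times: list[float],
    max_shift: float = 2.0,
) -> tuple[float, float]:
    """Snap clip boundaries to the nearest scene change within max_shift seconds."""

    def snap(t: float) -> float:
        candidates = [s for s in scene_times if abs(s - t) <= max_shift]
        return min(candidates, key=lambda s: abs(s - t)) if candidates else t

    return snap(clip_start), snap(clip_end)


scenes = [0.0, 14.8, 31.2, 45.1]  # from FFmpeg scene detection
print(snap_to_scenes(15.5, 44.0, scenes))  # → (14.8, 45.1)
```

Boundaries with no scene change nearby are left untouched, so a bad detection pass degrades gracefully.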

What Each Tool Does Best

| Tool | Strength | Weakness |
| --- | --- | --- |
| Claude / Codex | Organization, planning, code generation | Not the creative taste layer |
| FFmpeg | Deterministic cuts, batch processing, format conversion | No visual editing UI |
| Remotion | Programmable overlays, composable scenes, reusable templates | Learning curve for non-devs |
| Screen Studio | Polished screen recordings immediately | Only screen capture |
| ElevenLabs | Voice, narration, music, SFX | Not the center of the workflow |
| Descript / CapCut | Final pacing, captions, polish | Manual, not automatable |

Key Principles

  1. Edit, don't generate. This workflow is for cutting real footage, not creating from prompts.
  2. Structure before style. Get the story right in Layer 2 before touching anything visual.
  3. FFmpeg is the backbone. Boring but critical. Where long footage becomes manageable.
  4. Remotion for repeatability. If you'll do it more than once, make it a Remotion component.
  5. Generate selectively. Only use AI generation for assets that don't exist, not for everything.
  6. Taste is the last layer. AI clears repetitive work. You make the final creative calls.

Related Skills

  • fal-ai-media — AI image, video, and audio generation
  • videodb — Server-side video processing, indexing, and streaming
  • content-engine — Platform-native content distribution

FAQ & Installation Steps


Frequently Asked Questions

What is video-editing?

video-editing is an AI agent skill for AI-assisted editing of real footage: cutting, structuring, and augmenting existing video through a pipeline of FFmpeg, Remotion, ElevenLabs, and fal.ai, with final polish in Descript or CapCut.

How do I install video-editing?

Run the command: npx killer-skills add affaan-m/everything-claude-code/video-editing. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for video-editing?

Key use cases include automating video editing for vlogs and tutorials, reframing footage for different social media platforms (YouTube, TikTok, Instagram), and adding overlays, subtitles, music, or voiceover to existing video content.

Which IDEs are compatible with video-editing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for video-editing?

Requires existing video footage for editing. Not suitable for generating video from prompts. Dependent on compatibility with specific video editing software like Descript or CapCut.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add affaan-m/everything-claude-code/video-editing. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use video-editing immediately in the current project.
