moviepy — for Claude Code

Tags: moviepy, claude-code-video-toolkit, community, ide-skills, ai-video-generator, claude-code, developer-tools, elevenlabs, open-source

v1.0.0

About this skill

Best-fit scenario: ideal for AI agents that need moviepy for video production. It covers ai-video-generator, claude-code, and developer-tools workflows, and supports Claude Code, Cursor, and Windsurf.

Capabilities

  • moviepy for Video Production
  • When to use moviepy vs. Remotion

# Core Topics

Author: digitalsamba
Updated: 4/23/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review

Review Score: 10/11
Quality Score: 80
Canonical Locale: en
Detected Body Locale: en


Why use this skill

Recommendation: moviepy helps agents produce video with deterministic, trustworthy text. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Best suited for

Ideal for AI agents that need moviepy for video production.

Use cases for moviepy

  • Applying moviepy for video production
  • Deciding when to use moviepy vs. Remotion

Safety and limitations

  • Limitation: names must be spelled right
  • Limitation: prices must be exact
  • Limitation: source attributions must be pixel-perfect

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps


Frequently Asked Questions

What is moviepy?

moviepy is a Python video-editing library; in this toolkit it provides deterministic text overlays and composites for AI-generated video. It covers ai-video-generator, claude-code, and developer-tools workflows, and supports Claude Code, Cursor, and Windsurf.

How do I install moviepy?

Run the command: npx killer-skills add digitalsamba/claude-code-video-toolkit/moviepy. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for moviepy?

Key use cases include applying moviepy for video production and deciding when to use moviepy vs. Remotion.

Which IDEs are compatible with moviepy?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for moviepy?

Limitations: names must be spelled right, prices must be exact, and source attributions must be pixel-perfect.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add digitalsamba/claude-code-video-toolkit/moviepy. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use moviepy immediately in the current project.

Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

moviepy

It covers ai-video-generator, claude-code, and developer-tools workflows, and supports Claude Code, Cursor, and Windsurf.

SKILL.md

moviepy for Video Production

moviepy is the toolkit's go-to library for putting deterministic text on top of AI-generated video and for building short, single-file Python video projects without a Remotion toolchain.

The deeper principle is trustworthy text: any genre where text has to be readable, accurate, and consistent (legally, editorially, or commercially) is a genre where AI-rendered in-frame text is unacceptable and a moviepy overlay step is the natural fix. Names must be spelled right. Prices must be exact. Source attributions must be pixel-perfect. AI generation models cannot guarantee any of that.

When to use moviepy vs. Remotion

| Use moviepy when… | Use Remotion when… |
| --- | --- |
| Overlaying text/labels on an LTX-2 or SadTalker output | Building long-form sprint reviews or product demos |
| Building sub-30s ad-style spots in a single build.py | Multi-template, multi-brand, design-heavy work |
| Compositing data-driven visuals (matplotlib FuncAnimation → mp4) | Anything needing React components or design system reuse |
| One-off transformations on existing video files | Anything where the project lifecycle (planning → render) matters |
| You want zero Node.js / no React mental overhead | You want hot-reload preview in Remotion Studio |

Two runnable references for everything in this skill live in examples/:

  • examples/quick-spot/build.py — 15-second ad-style spot. Audio-anchored timeline, text overlay, optional VO + ducked music. Renders silent out of the box with zero external assets.
  • examples/data-viz-chart/build.py — animated time-series chart with deterministic title and source attribution. Demonstrates the matplotlib (data) + moviepy (trustworthy text) split.

Both run with python3 build.py and produce a real out.mp4 immediately. Read them alongside this skill — every pattern below is shown working there.

Dependencies. moviepy, Pillow, and matplotlib are declared in tools/requirements.txt and installed with the toolkit's one-line Python setup: python3 -m pip install -r tools/requirements.txt. If you hit Missing dependency when running an example, run that command from the repo root — the examples' build.py files will tell you the same thing in their error message and exit cleanly rather than printing a bare traceback.
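The clean-exit behavior described above can be sketched in a few lines. This is a hypothetical version of the guard, not the exact code in the examples' build.py files; the package names mirror tools/requirements.txt (Pillow's import name is PIL):

```python
import importlib.util
import sys

# As declared in tools/requirements.txt ("PIL" is Pillow's import name)
REQUIRED = ["moviepy", "PIL", "matplotlib"]

def missing_dependencies(required):
    """Return the subset of `required` that cannot be imported."""
    return [name for name in required if importlib.util.find_spec(name) is None]

def ensure_dependencies(required=REQUIRED):
    missing = missing_dependencies(required)
    if missing:
        # Exit cleanly with a hint instead of a bare ImportError traceback
        sys.exit(
            "Missing dependency: " + ", ".join(missing)
            + "\nRun: python3 -m pip install -r tools/requirements.txt"
        )

# A build.py would call ensure_dependencies() before importing moviepy.
```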

The main use case: text on AI-generated video

Both LTX-2 and SadTalker output bare visuals:

  • LTX-2 cannot reliably render readable text (the model hallucinates letterforms — see the ltx2 skill's "Bad Prompts").
  • SadTalker outputs a talking head with no captions, labels, lower thirds, or context.

The fix is to generate the visual cleanly, then composite text over it deterministically with moviepy. This is the canonical pattern in this toolkit:

```python
from moviepy import VideoFileClip, ImageClip, CompositeVideoClip

# 1. AI-generated visual (LTX-2 or SadTalker output)
bg = VideoFileClip("lugh_ltx.mp4").without_audio()

# 2. Text rendered via PIL → ImageClip (see "Text rendering" below)
title = (
    ImageClip("text_cache/intro_title.png")
    .with_duration(2.0)
    .with_start(0.5)
    .with_position(("center", 880))
)

# 3. Composite
final = CompositeVideoClip([bg, title], size=(1920, 1080))
final.write_videofile("lugh_with_caption.mp4", fps=30, codec="libx264")
```

Common shapes this takes:

| Shape | LTX-2 use | SadTalker use |
| --- | --- | --- |
| Title card over hero footage | "INTRODUCING LONGARM" over a cinematic LTX-2 b-roll | n/a |
| Lower third / name plate | n/a | "Lugh — Ancient Warrior God" under a talking head |
| Quote caption | "I am going home." over an LTX-2 character cameo | Same, over a SadTalker talking head |
| Brand attribution | Logo + URL fade-in over the last second | Same |
| Tinted overlay for contrast | Dark navy semi-transparent layer behind text | Same |

Genres where this shines

The "AI-visual + deterministic text overlay" pattern is the natural production pipeline for several styles of video. If the request matches one of these, reach for moviepy by default:

| Genre | What you overlay | Why moviepy is the right call |
| --- | --- | --- |
| News / talking-head journalism | Speaker name plates, location bars, breaking-news banners, source attribution, pull quotes | Names must be spelled right (editorial / legal). The biggest category by volume. |
| Documentary segments | Interviewee lower thirds, chapter titles, archival source credits, location stamps | Same trust requirement as news. |
| Trailers / promo spots | Title cards, credit overlays ("FROM THE DIRECTOR OF…"), date stings, quote cards, CTAs | Tightly timed, text-heavy, every frame matters. The q2-townhall-longarm-ad example is exactly this. |
| Social short-form (Reels, TikTok, Shorts) | Word-accurate captions for sound-off viewing, hashtag overlays | Most social viewing is muted; captions are non-negotiable. |
| Product demos with annotations | Pricing callouts, feature labels, "click here" pointers over screen recordings, before/after labels | Prices and product names must be exact. |
| Tutorials / explainers | Step number overlays, terminal-command captions, keyboard-shortcut callouts | Step numbers must be sequential, commands must be copy-pasteable. |

Lesser-but-real fits: music videos (lyric overlays), reaction videos (source attribution), sports recaps (score overlays), real-estate tours (price / sqft), conference talks (speaker + session plate).

For full SRT-driven subtitling (long-form, time-coded, multilingual) moviepy is workable but not ideal — reach for ffmpeg with subtitles filter or a dedicated subtitle tool. moviepy is best for hand-placed overlays, not bulk caption tracks.
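If you do end up needing a quick SRT file to hand off to ffmpeg's subtitles filter, generating one is straightforward. A minimal sketch (not part of this toolkit) that formats (start, end, text) cues into SRT:

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt(cues):
    """cues: iterable of (start_s, end_s, text) tuples. Returns SRT content."""
    blocks = [
        f"{i}\n{srt_timestamp(a)} --> {srt_timestamp(b)}\n{text}\n"
        for i, (a, b, text) in enumerate(cues, start=1)
    ]
    return "\n".join(blocks)

srt = write_srt([(0.3, 3.74, "Tired of third-party tools?"),
                 (4.0, 8.88, "Worried about accuracy?")])
# Burn in with: ffmpeg -i in.mp4 -vf "subtitles=captions.srt" out.mp4
```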

Text rendering — use PIL, not TextClip

Critical gotcha: moviepy 2.x's TextClip(method='label') has a tight-bbox bug that clips letter ascenders and descenders (the tops of capitals, the tails of g/p/y). On Apple Silicon you'll see characters with sliced edges and not realise what's wrong for hours.

The workaround: render text to a transparent PNG via PIL, then load it as an ImageClip. Cache the result by content hash so re-builds are free.

```python
import hashlib
from pathlib import Path
from PIL import Image, ImageDraw, ImageFont

ARIAL_BOLD = "/System/Library/Fonts/Supplemental/Arial Bold.ttf"

def render_text_png(txt, size, hex_color, cache_dir="./text_cache"):
    cache = Path(cache_dir); cache.mkdir(parents=True, exist_ok=True)
    key = hashlib.sha1(f"{txt}|{size}|{hex_color}".encode()).hexdigest()[:16]
    path = cache / f"{key}.png"
    if path.exists():
        return str(path)

    font = ImageFont.truetype(ARIAL_BOLD, size)
    bbox = ImageDraw.Draw(Image.new("RGBA", (1, 1))).textbbox((0, 0), txt, font=font)
    tw, th = bbox[2] - bbox[0], bbox[3] - bbox[1]
    pad = max(20, size // 4)

    img = Image.new("RGBA", (tw + pad * 2, th + pad * 2), (0, 0, 0, 0))
    rgb = tuple(int(hex_color.lstrip("#")[i:i+2], 16) for i in (0, 2, 4))
    ImageDraw.Draw(img).text((pad - bbox[0], pad - bbox[1]), txt, font=font, fill=(*rgb, 255))
    img.save(path)
    return str(path)
```

The full helper (with kwargs for bold, position, fades, and cleaner ergonomics) is in examples/quick-spot/build.py — copy it rather than re-implementing.

Audio-anchored timeline pattern

For ad-style edits where every frame matters, generate per-scene VO first and anchor every visual to known absolute timestamps. This eliminates timing drift entirely. See CLAUDE.md → Video Timing → Audio-Anchored Timelines for the full pattern. The short version:

```python
# Audio-anchored timeline (25s):
#   Scene 1  tired    0.3 → 3.74  (audio 3.44s)
#   Scene 2  worries  4.0 → 8.88  (audio 4.88s)

text_clip("TIRED OF",     start=0.5, duration=1.2)
text_clip("THIRD-PARTY",  start=1.0, duration=1.8)
vo_clip("01_tired.mp3",   start=0.3)
vo_clip("02_worries.mp3", start=4.0)
```
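The anchor timestamps themselves are simple arithmetic: each scene starts where the previous scene's audio ended plus a small breathing gap. A sketch of that computation, with the gap value inferred from the example timestamps above (3.74 → 4.0), not taken from the toolkit:

```python
def anchor_timeline(scene_durations, first_start=0.3, gap=0.26):
    """Compute absolute (start, end) timestamps per scene from audio durations."""
    anchors, t = [], first_start
    for d in scene_durations:
        anchors.append((round(t, 2), round(t + d, 2)))
        t += d + gap  # next scene starts after this audio plus a breathing gap
    return anchors

# Durations from the timeline comment: 3.44s and 4.88s
anchors = anchor_timeline([3.44, 4.88])
# → [(0.3, 3.74), (4.0, 8.88)]
```

Once the anchors are known, every text_clip and vo_clip start time is written as an absolute number, which is what makes the edit drift-free.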

Common recipes

Text on a single AI-generated clip

```python
from moviepy import VideoFileClip, ImageClip, CompositeVideoClip

bg = VideoFileClip("ltx_hero.mp4").without_audio()
caption = (
    ImageClip(render_text_png("THE FUTURE OF AGENTS", 140, "#FFFFFF"))
    .with_duration(bg.duration)
    .with_position(("center", 880))
)
CompositeVideoClip([bg, caption], size=bg.size).write_videofile("captioned.mp4", fps=30)
```

Lower third over a SadTalker talking head

```python
from moviepy import VideoFileClip, ImageClip, ColorClip, CompositeVideoClip

talking = VideoFileClip("narrator_sadtalker.mp4")
W, H = talking.size

# Semi-transparent bar across the bottom for contrast
bar = (
    ColorClip((W, 140), color=(20, 24, 38))
    .with_duration(talking.duration)
    .with_opacity(0.75)
    .with_position(("center", H - 160))
)
name = (
    ImageClip(render_text_png("LUGH", 72, "#F06859"))
    .with_duration(talking.duration)
    .with_position((80, H - 150))
)
title = (
    ImageClip(render_text_png("Ancient Warrior God", 36, "#FFFFFF"))
    .with_duration(talking.duration)
    .with_position((80, H - 80))
)
CompositeVideoClip([talking, bar, name, title]).write_videofile("with_lower_third.mp4", fps=30)
```

Tinted overlay for text contrast over busy footage

LTX-2 b-roll is often too visually busy for legible text. Drop a semi-transparent navy layer between the video and the text:

```python
from moviepy import ColorClip

tint = (
    ColorClip((W, H), color=(20, 24, 38))
    .with_duration(duration)
    .with_opacity(0.55)
)
# Composite order: bg → tint → text
CompositeVideoClip([bg, tint, text_clip])
```

Side-by-side composite

```python
from moviepy import VideoFileClip, CompositeVideoClip, ColorClip

left  = VideoFileClip("demo_a.mp4").resized(width=960).with_position((  0, "center"))
right = VideoFileClip("demo_b.mp4").resized(width=960).with_position((960, "center"))
bg = ColorClip((1920, 1080), color=(0, 0, 0)).with_duration(max(left.duration, right.duration))
CompositeVideoClip([bg, left, right]).write_videofile("split.mp4", fps=30)
```

Mix per-scene VO with ducked music

```python
from moviepy import AudioFileClip, CompositeAudioClip
from moviepy.audio.fx.MultiplyVolume import MultiplyVolume
from moviepy.audio.fx.AudioFadeIn import AudioFadeIn
from moviepy.audio.fx.AudioFadeOut import AudioFadeOut

music = AudioFileClip("music.mp3").with_effects([
    MultiplyVolume(0.22),  # duck under VO
    AudioFadeIn(0.5),
    AudioFadeOut(1.5),
])
vo = [
    AudioFileClip(f"scenes/0{i}.mp3").with_effects([MultiplyVolume(1.15)]).with_start(start)
    for i, start in [(1, 0.3), (2, 4.0), (3, 9.1)]
]
final_audio = CompositeAudioClip([music] + vo)
```

Gotchas

  • moviepy 2.x renamed methods. Use subclipped (not subclip), with_duration / with_start / with_position (not set_duration etc.), with_effects([...]) instead of .fadein()/.fadeout(). Many tutorials online still show 1.x syntax — be skeptical.
  • TextClip(method='label') clips ascenders/descenders. Always use the PIL workaround above.
  • OffthreadVideo is Remotion-only. moviepy uses VideoFileClip. Don't mix the two.
  • Resizing requires Pillow ≥ 10.0 for the LANCZOS resample. If you see ANTIALIAS errors, upgrade Pillow.
  • ColorClip takes RGB tuples, not hex strings. Use (20, 24, 38), not "#141826".
  • Audio in VideoFileClip is loaded by default. Call .without_audio() if you only want the visual — composing with audio you don't want will cause silent VO drops in CompositeAudioClip.
  • Always set size=(W, H) on CompositeVideoClip. Without it, output dimensions follow the first clip, which can be smaller than your target.
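For the ColorClip gotcha, a tiny helper keeps hex-based brand palettes usable. A minimal sketch (not part of the toolkit, and the same conversion render_text_png uses internally):

```python
def hex_to_rgb(hex_color):
    """'#141826' → (20, 24, 38), the tuple form ColorClip expects."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

navy = hex_to_rgb("#141826")  # → (20, 24, 38)
# ColorClip((1920, 1080), color=navy) is valid; color="#141826" is not.
```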

When to reach for what

| Task | Tool |
| --- | --- |
| Animate a still image | tools/ltx2.py --input |
| Talking head from photoreal portrait | tools/sadtalker.py |
| Talking head from stylized character | tools/ltx2.py --input (see ltx2 skill) |
| Add a label/caption/lower third to either of the above | moviepy + PIL (this skill) |
| Convert / compress / resize an existing file | ffmpeg (see ffmpeg skill) |
| Long-form, design-system-driven video | Remotion (see remotion skill) |

References

  • Runnable example — short ad-style spot: examples/quick-spot/build.py
  • Runnable example — data-viz with text overlay: examples/data-viz-chart/build.py
  • Audio-anchored timelines: CLAUDE.md → Video Timing → Audio-Anchored Timelines
  • Related skills: ltx2, ffmpeg, remotion
