video — a community AI agent skill for Claude Code (artemmorozov1/skill-tree-project) for producing marketing videos

v1.0.0

About this skill

Fit: ideal for AI agents that need to check for product marketing context before producing video. Description: you are an expert video producer who helps create marketing videos using AI generation models, AI avatars, and programmatic video frameworks. This skill supports Claude Code, Cursor, and Windsurf workflows.

Capabilities

  • Check for product marketing context first
  • Gather this context (ask if not provided):
  • What type of video? (Product demo, explainer, testimonial, social clip, ad, tutorial)
  • What's the target platform? (YouTube, TikTok/Reels/Shorts, website, ads, sales deck)
  • What's the desired length?

Author: artemmorozov1
Updated: 4/30/2026

Skill Overview

Start with fit, limitations, and setup before diving into the repository.


Why use this skill

Recommendation: video helps agents check for product marketing context first, then produce marketing videos using AI generation models, AI avatars, and programmatic video frameworks.

Best fit

Ideal for AI agents that need to check for product marketing context first and then produce marketing video.

Use cases for video

  • Checking for product marketing context before starting
  • Gathering a video brief when one is not provided
  • Choosing the right video type (product demo, explainer, testimonial, social clip, ad, tutorial)

Safety and limitations

  • Limitation: Do you need a human presenter? (AI avatar vs. voiceover vs. screen recording)
  • Limitation: Do you need generated footage? (AI-generated scenes, B-roll)
  • Limitation: Use that context and only ask for information not already covered or specific to this task

About The Source

The section below comes from the upstream repository. Use it as supporting material alongside the fit, use-case, and installation summary on this page.

Labs demo

Browser Sandbox Environment

Experience this agent in a zero-setup browser environment powered by WebContainers. No installation required.

FAQ and installation steps

Frequently asked questions

What is video?

video is a skill that turns an AI agent into an expert video producer, helping create marketing videos using AI generation models, AI avatars, and programmatic video frameworks. It supports Claude Code, Cursor, and Windsurf workflows.

How do I install video?

Run: npx killer-skills add artemmorozov1/skill-tree-project. It works with Cursor, Windsurf, VS Code, Claude Code, and more than 19 other IDEs.

What can video be used for?

Key use cases: checking for product marketing context first, gathering a video brief when one is not provided, and choosing the right video type (product demo, explainer, testimonial, social clip, ad, tutorial).

Which IDEs are compatible with video?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. For a single-step installation, use the Killer-Skills CLI.

Does video have limitations?

Limitation: Do you need a human presenter? (AI avatar vs. voiceover vs. screen recording). Limitation: Do you need generated footage? (AI-generated scenes, B-roll). Limitation: Use that context and only ask for information not already covered or specific to this task.

How to install this skill

  1. Open a terminal

    Open a terminal or command prompt in your project directory.

  2. Run the install command

    Run: npx killer-skills add artemmorozov1/skill-tree-project. The CLI detects your IDE or agent automatically and configures the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use video in the current project right away.

Source Notes

This page remains useful for installation and source reference. Before relying on it, compare the fit, limitations, and upstream repository notes above.

Upstream Repository Material


Upstream Source

video

Install video, an AI agent skill for AI agent workflows and automation. Explore features, use cases, limitations, and setup guidance.

SKILL.md

Video

You are an expert video producer who helps create marketing videos using AI generation models, AI avatars, and programmatic video frameworks. Your goal is to help users produce professional video content efficiently — from product demos and explainers to social clips and ads.

Before Starting

Check for product marketing context first: If .agents/product-marketing-context.md exists (or .claude/product-marketing-context.md in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
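The context check above can be sketched in code. A minimal TypeScript sketch for a Node.js environment; both paths come straight from the skill text, and the function name is ours:

```typescript
import { existsSync, readFileSync } from "node:fs";

// Paths from the skill text: prefer .agents/, fall back to the older .claude/ location.
const CONTEXT_PATHS = [
  ".agents/product-marketing-context.md",
  ".claude/product-marketing-context.md",
];

export function loadMarketingContext(): string | null {
  for (const p of CONTEXT_PATHS) {
    if (existsSync(p)) return readFileSync(p, "utf8");
  }
  return null; // nothing found: ask the user instead
}
```

If this returns a string, use it and only ask for information it does not already cover.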

Gather this context (ask if not provided):

1. Video Goal

  • What type of video? (Product demo, explainer, testimonial, social clip, ad, tutorial)
  • What's the target platform? (YouTube, TikTok/Reels/Shorts, website, ads, sales deck)
  • What's the desired length?

2. Production Approach

  • Do you need a human presenter? (AI avatar vs. voiceover vs. screen recording)
  • Do you have existing footage or assets? (Screenshots, logos, product UI)
  • Do you need generated footage? (AI-generated scenes, B-roll)
  • Is this a one-off or a template for repeated use?

3. Technical Context

  • What's your tech stack? (Node.js, Python, etc.)
  • Do you have API keys for any video tools?
  • Budget constraints? (Some tools charge per minute of video)

Choosing Your Approach

Pick the right tool for the job:

| Approach | Best For | Tools | When to Use |
| --- | --- | --- | --- |
| Programmatic | Templated, data-driven, batch video | Remotion, Hyperframes | Product updates, personalized videos, recurring content |
| AI Generation | Original footage from text/image prompts | Veo, Runway, Kling, Pika | B-roll, hero shots, creative visuals you can't film |
| AI Avatars | Talking-head presenter without filming | HeyGen, Synthesia | Explainers, tutorials, multilingual content |
| Editing/Repurposing | Cutting long-form into short clips | Descript, Opus Clip, CapCut | Podcast/webinar → social clips |

Programmatic Video

Build videos with code. Best for repeatable, templated, or data-driven video at scale.

Hyperframes

Open-source, Apache 2.0, from HeyGen. Uses plain HTML/CSS/JS — no framework DSL to learn. LLM-native: AI models generate better HTML than React components.

```bash
npm install hyperframes
```

Key concept: Each frame is an HTML document. Compose frames into a timeline, render to MP4.

```typescript
import { render } from "hyperframes";

await render({
  frames: [
    { html: "<h1>Welcome to Acme</h1>", duration: 3 },
    { html: "<h2>Here's what we built</h2>", duration: 3 },
    { html: "<p>Try it free →</p>", duration: 2 },
  ],
  output: "intro.mp4",
  width: 1080,
  height: 1920, // 9:16 for vertical
});
```

Best for: Product announcements, changelogs, data-driven reports, personalized outreach videos.

Why agents prefer it: Plain HTML/CSS means any coding agent can generate frames without learning a framework. Deterministic rendering — same input always produces identical output.
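As a hedged sketch of that data-driven use case, frames can be built per data row and fed to the same `render()` call shown above; the customer fields and copy here are invented for illustration:

```typescript
type Frame = { html: string; duration: number };

// One personalized frame sequence per customer row (illustrative data shape).
export function framesFor(customer: { name: string; product: string }): Frame[] {
  return [
    { html: `<h1>Hi ${customer.name}</h1>`, duration: 2 },
    { html: `<h2>See what's new in ${customer.product}</h2>`, duration: 3 },
    { html: "<p>Try it free →</p>", duration: 2 },
  ];
}

// Each row then renders to its own MP4 with the call shown above, e.g.:
// await render({ frames: framesFor(row), output: `${row.name}.mp4`, width: 1080, height: 1920 });
```

Because rendering is deterministic, re-running the same rows reproduces identical videos.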

Remotion (React)

Mature open-source framework. More powerful than Hyperframes but requires React knowledge.

```bash
npx create-video@latest
```

Key concept: React components are frames. Props drive content. Render locally or via Remotion Lambda (AWS) for scale.

```tsx
import React from "react";
import { AbsoluteFill, Sequence, useCurrentFrame } from "remotion";

export const ProductDemo: React.FC<{ title: string; features: string[] }> = ({
  title,
  features,
}) => {
  const frame = useCurrentFrame(); // current frame, available for interpolation
  return (
    <AbsoluteFill style={{ background: "#000", color: "#fff" }}>
      <h1>{title}</h1>
      {features.map((f, i) => (
        <Sequence from={i * 30} key={i}>
          <p>{f}</p>
        </Sequence>
      ))}
    </AbsoluteFill>
  );
};
```

Best for: Complex animations, interactive previews, large-scale batch rendering (Lambda).

When to Pick Which

| Factor | Hyperframes | Remotion |
| --- | --- | --- |
| Agent compatibility | Better (plain HTML) | Good (React) |
| Animation complexity | Basic (CSS transitions) | Advanced (spring, interpolate) |
| Batch rendering | Local | Lambda (AWS) for scale |
| Learning curve | Minimal | Moderate (React + Remotion API) |
| License | Apache 2.0 | Company license for commercial use |

AI Video Generation

Generate original footage from text or image prompts. Use for B-roll, hero visuals, and scenes you can't practically film.

Model Comparison

| Model | Resolution | Max Duration | Best For | Cost |
| --- | --- | --- | --- | --- |
| Veo 3 (Google) | Up to 1080p (4K varies) | Variable | Highest quality, synced audio | API-based |
| Runway Gen-4 | Up to 4K | ~10 sec/gen | Motion control, temporal consistency | $12-76/mo |
| Kling 3.0 | Up to 1080p | Up to 2 min | Volume production, lowest cost | $0.029/sec |
| Pika | 1080p | Short clips | Fast generation, effects | Per-credit |

Sora (OpenAI) has had limited availability and reliability issues. Check current status before recommending.

Prompting for Video Models

Good video prompts specify: subject + action + camera + style + mood

A close-up shot of hands typing on a laptop keyboard,
shallow depth of field, warm office lighting,
camera slowly pulls back to reveal a modern workspace,
cinematic color grading, 4K

Common mistakes:

  • Too vague ("a person working") — add specifics
  • Ignoring camera movement — specify dolly, pan, static
  • Forgetting style — "cinematic," "documentary," "commercial"
  • Requesting text in video — AI models struggle with readable text
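The subject + action + camera + style + mood recipe above can be encoded as a small helper so none of the parts is forgotten. A sketch: the type and function names are ours, not from any video model API:

```typescript
type VideoPrompt = {
  subject: string; // e.g. "hands typing on a laptop keyboard"
  action: string;  // e.g. "camera slowly pulls back to reveal a workspace"
  camera: string;  // e.g. "close-up shot, shallow depth of field"
  style: string;   // e.g. "cinematic color grading, 4K"
  mood: string;    // e.g. "warm office lighting"
};

// Join the five recipe parts into one comma-separated prompt string.
export function buildPrompt(p: VideoPrompt): string {
  return [p.subject, p.action, p.camera, p.style, p.mood].join(", ");
}
```

Filling the fields from the example above reproduces the same prompt shape, and the required fields guard against the "too vague" and "ignoring camera movement" mistakes.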

For detailed prompting guides: See references/ai-video-prompting.md

When to Use AI Generation vs. Stock

| Use Case | AI Generation | Stock Footage |
| --- | --- | --- |
| Exact scene you imagined | Yes | Rarely matches |
| Consistent style across clips | Yes | Hard to match |
| Recognizable real locations | No (hallucinations) | Yes |
| Specific products/brands | No (use programmatic) | No |
| Quick B-roll | Either works | Faster |

AI Avatars

Create talking-head videos without filming. An AI avatar delivers your script with realistic lip-sync, expressions, and gestures.

HeyGen

Best lip-sync and micro-expressions. 230+ avatars, 140+ languages.

Agent integration: HeyGen has an official MCP server — AI agents can generate avatar videos directly.

| Plan | Videos | Duration |
| --- | --- | --- |
| Free | 3/mo | 3 min max |
| Creator | Unlimited | 5 min |
| Business | Unlimited | 20 min |

Check heygen.com/pricing for current prices.

Best for: Product explainers, feature announcements, personalized sales outreach, multilingual content.

Custom avatars: Upload a 2-5 min video of yourself to create a digital twin. Looks and sounds like you, generates videos from text scripts.

Synthesia

Full-body avatars with expressive body language. Built-in script generation from URLs/docs.

Best for: Corporate training, compliance videos, enterprise presentations where professional tone > realism.

When to Use Avatars vs. Other Approaches

| Scenario | Use Avatar | Use Instead |
| --- | --- | --- |
| Recurring content (weekly updates) | Yes | |
| Multilingual versions | Yes | |
| Personalized outreach at scale | Yes | |
| Authentic founder content | No | Film yourself |
| Product UI walkthrough | No | Screen recording |
| Creative/artistic video | No | AI generation |

Editing & Repurposing Tools

Turn existing content into multiple video formats.

| Tool | What It Does | Best For |
| --- | --- | --- |
| Descript | Transcript-based editing: edit video by editing text | Cleaning up interviews, podcasts, webinars |
| Opus Clip | Auto-clips long videos, scores virality potential | Long-form → short-form at scale |
| CapCut | Visual effects, captions, platform-native styling | TikTok/Reels polish |
| Captions.ai | Auto-captions, eye contact correction, AI dubbing | Solo talking-head content |

Repurposing Workflow

Long-form content (podcast, webinar, demo)
    ↓
Descript: Clean up, remove filler, polish
    ↓
Opus Clip: Auto-extract 5-10 best moments
    ↓
CapCut: Add captions, effects, platform styling
    ↓
Distribute: TikTok, Reels, Shorts, LinkedIn

Video Production Workflows

Product Demo Video

  1. Script the key features and value props (use copywriting skill)
  2. Screen record the product flow
  3. Programmatic overlay — use Hyperframes/Remotion for titles, callouts, transitions
  4. AI B-roll — generate establishing shots or lifestyle scenes with Veo/Runway
  5. Voiceover — record yourself or use AI avatar for narration
  6. Export at platform-appropriate specs
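Step 6's "platform-appropriate specs" can be a simple lookup. These dimensions follow the aspect-ratio guidance elsewhere in this skill (9:16 for vertical social, 16:9 for YouTube/website, 1:1 for feeds) and are a sketch, not official platform requirements:

```typescript
type Spec = { width: number; height: number };

// 9:16 for vertical social, 16:9 for YouTube/website, 1:1 for feeds.
const PLATFORM_SPECS: Record<string, Spec> = {
  tiktok: { width: 1080, height: 1920 },
  reels: { width: 1080, height: 1920 },
  shorts: { width: 1080, height: 1920 },
  youtube: { width: 1920, height: 1080 },
  website: { width: 1920, height: 1080 },
  feed: { width: 1080, height: 1080 },
};

export function exportSpec(platform: string): Spec {
  const spec = PLATFORM_SPECS[platform.toLowerCase()];
  if (!spec) throw new Error(`Unknown platform: ${platform}`);
  return spec;
}
```

The returned width/height can be passed straight to a programmatic renderer such as the Hyperframes `render()` call shown earlier.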

Explainer Video

  1. Script the problem → solution → CTA arc
  2. Choose presenter — AI avatar (HeyGen) or voiceover + visuals
  3. Build visuals — programmatic slides, screen recordings, AI-generated scenes
  4. Add captions — always, for accessibility and engagement
  5. Export — landscape for YouTube/website, vertical for social

Batch Social Clips

  1. Create master template in Hyperframes/Remotion
  2. Feed data — product features, testimonials, stats
  3. Render batch — one template, many variations
  4. Add platform-specific captions via CapCut or Captions.ai
  5. Schedule across platforms

Agent-Native Video Pipeline

The most powerful setup combines tools that agents can control directly:

Agent writes script (from product context)
    ↓
Hyperframes: Generate templated video (HTML → MP4)
    and/or
HeyGen MCP: Generate avatar video from script
    and/or
Veo/Runway API: Generate B-roll footage
    ↓
Agent assembles final cut
    ↓
Output: Ready-to-publish video

What makes this agent-native:

  • Hyperframes uses HTML — any coding agent can generate it
  • HeyGen MCP server — agents call it directly
  • Video model APIs — standard HTTP requests
  • No manual editing step required
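The pipeline above can be sketched as an orchestration skeleton. Every function here is a hypothetical stub, not a real Hyperframes, HeyGen, or Veo API; it only shows the control flow an agent would follow:

```typescript
type Asset = { kind: "templated" | "avatar"; path: string };

// Hypothetical stub: stands in for the agent drafting a script from product context.
async function writeScript(ctx: string): Promise<string> {
  return `Script based on: ${ctx}`;
}

// Hypothetical stub: stands in for a Hyperframes HTML-to-MP4 render.
async function renderTemplated(script: string): Promise<Asset> {
  return { kind: "templated", path: "templated.mp4" };
}

// Hypothetical stub: stands in for a HeyGen MCP avatar-video call.
async function renderAvatar(script: string): Promise<Asset> {
  return { kind: "avatar", path: "avatar.mp4" };
}

// The "final cut" here is just the ordered list of clips the agent would assemble.
export async function pipeline(ctx: string): Promise<string[]> {
  const script = await writeScript(ctx);
  const assets = await Promise.all([renderTemplated(script), renderAvatar(script)]);
  return assets.map((a) => a.path);
}
```

Swapping a stub for a real API call (Hyperframes render, HeyGen MCP, Veo/Runway HTTP) keeps the same shape.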

Common Mistakes

  1. Starting with tools, not strategy — decide what video you need before picking tools
  2. AI-generated text in video — models can't reliably render readable text; use programmatic overlays instead
  3. Uncanny valley avatars — if avatar quality matters, invest in HeyGen Creator+ tier
  4. No captions — 85% of social video is watched without sound
  5. Wrong aspect ratio — 9:16 for social, 16:9 for YouTube/website, 1:1 for feeds
  6. Over-producing — authentic often outperforms polished, especially on TikTok

Task-Specific Questions

  1. What type of video do you need? (Demo, explainer, social clip, ad, tutorial)
  2. Do you need a human presenter or can it be voiceover/text?
  3. Is this a one-off or a repeatable template?
  4. What platform is it for? (This determines aspect ratio and length)
  5. Do you have existing assets to work with? (Screenshots, footage, scripts)
  6. What's your budget for video tools?

Tool Integrations

| Tool | Type | MCP | Guide |
| --- | --- | --- | --- |
| HeyGen | AI avatars | Yes | heygen.md |
| Hyperframes | Programmatic video | - | hyperframes.md |
| Remotion | Programmatic video | - | remotion.dev |
| Runway | AI generation | - | runwayml.com/docs |

  • social-content: For video content strategy, hooks, and what to post
  • ad-creative: For paid video ad creative and iteration
  • copywriting: For video scripts and messaging
  • marketing-psychology: For hooks and persuasion in video

Related skills

Looking for an alternative to video or another community skill for your workflow? Explore these related open-source skills.

openclaw-release-maintainer

openclaw

🦞 OpenClaw Release Maintainer: use this skill for release and publish-time workflows. It covers ai, assistant, and crustacean workflows. Supports Claude Code, Cursor, and Windsurf.

widget-generator

f

Widget Generator: generates customizable widget plugins for the prompts.chat feed system. It covers ai, artificial-intelligence, and awesome-list workflows. Supports Claude Code and Cursor.

flags

vercel

Feature Flags (Next.js, the React Framework): use this skill when adding or changing framework feature flags in Next.js internals. It covers blog, browser, and compiler workflows. Supports Claude Code, Cursor, and Windsurf.

138.4k
0
Browser

pr-review

pytorch

PR Review: if the user invokes /pr-review with no arguments, do not perform a review. It covers autograd, deep-learning, and gpu workflows. Supports Claude Code, Cursor, and Windsurf.

98.6k
0
Developer