nutrient-document-processing
[Featured] Nutrient Document Processing is an AI agent skill that lets you process documents using the Nutrient DWS API.
FFmpeg automation for cutting, trimming, concatenating videos. Audio mixing, timeline editing, transitions, effects. Export optimization for YouTube, social media. Subtitle handling, color grading, batch processing. Use for videogen projects, content creation, automated video production. Activate on video editing, FFmpeg, trim video, concatenate, transitions, export optimization. NOT for real-time video editing UI, 3D compositing, or motion graphics.
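The cut-and-concatenate workflow above can be sketched as command builders. A minimal illustration, assuming `ffmpeg` is on `PATH`; the function names (`trim_cmd`, `concat_cmd`) and the file names in the usage note are hypothetical, while the flags (`-ss`/`-to`, `-c copy`, the concat demuxer) are standard FFmpeg options:

```python
def trim_cmd(src: str, dst: str, start: str, end: str) -> list[str]:
    """Build an ffmpeg command that cuts [start, end] without re-encoding.

    -ss/-to select the time span; -c copy stream-copies, so the cut snaps
    to the nearest keyframe but completes in seconds.
    """
    return ["ffmpeg", "-y", "-ss", start, "-to", end, "-i", src, "-c", "copy", dst]


def concat_cmd(listfile: str, dst: str) -> list[str]:
    """Build an ffmpeg command that joins clips via the concat demuxer.

    `listfile` holds one line per clip, e.g.  file 'clip1.mp4'
    All inputs must share codecs/parameters for -c copy to be valid.
    """
    return ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", listfile, "-c", "copy", dst]
```

In practice these lists would be passed to `subprocess.run(cmd, check=True)`; for frame-accurate cuts or transitions, `-c copy` would be replaced by a re-encode.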
SoulMap AI: a content-first reflective companion with a curated Markdown knowledge base, Python detectors, and tooling to validate and bundle agent-ready skills.
nano-pdf is a PDF processing tool that provides text extraction and file manipulation capabilities.
git-standards is a high-performance HTML-to-Markdown converter that enforces Conventional Commits 1.0.0 standards and gives developers structured data extraction from 56+ document formats.
Cache expensive file processing results using SHA-256 content hashes — path-independent, auto-invalidating, with service layer separation.
Develop scalable APIs with Spring Boot, applying architecture patterns and REST API design.
Data Extraction (from https://stats.nba.com) and Processing Scripts to Produce the NBA Database on Kaggle (https://kaggle.com/wyattowalsh/basketball)
Hugging Face Jobs is an AI agent skill that lets you run cloud jobs for data processing and model training.
Review-PR is a pull request analysis tool that automates the code review process.
This skill should be used when users want to run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, cost estimation, authentication with tokens, secrets management, timeout configuration, and result persistence. Designed for general-purpose compute workloads including data processing, inference, experiments, batch jobs, and any Python-based tasks. Should be invoked for tasks involving cloud compute, GPU workloads, or when users mention running jobs on Hugging Face infrastructure without local setup.
Audit codebase files against the 4-pillar quality manifesto using RECON work batches, with batch processing and context budget management (Titan Paradigm Phase 2)