video-use

Open-source Python library for AI-powered video editing. It handles the transcription and cut-editing phase (filler-word removal, silence cutting, retake detection) using word-level timestamps, then passes structured output downstream to HyperFrames or Remotion for motion graphics.

Key facts

  • Type: Video editing library (Python)
  • Status: Active (open-source)
  • Transcription backends: OpenAI Whisper, Whisper.cpp (local, free, RAM-intensive), ElevenLabs API
  • Output: Edited video (cuts applied) + word-level timestamp JSON for motion graphic sync
  • Integration: Works with both Remotion and HyperFrames for the animation step
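The word-level timestamp output is what makes the cut-editing possible: each transcribed word carries a start/end time, so filler words can be dropped and long silences collapsed by working over those spans. A minimal sketch of that technique, assuming a hypothetical transcript schema (`{"word", "start", "end"}` dicts) and a hypothetical `keep_segments` helper, neither of which is video-use's actual API:

```python
# Illustrative sketch of cut-editing on word-level timestamps.
# The JSON schema and function name here are assumptions for
# illustration; they are not video-use's actual API.

FILLER_WORDS = {"um", "uh", "like", "you know"}

def keep_segments(words, max_gap=0.75):
    """Return (start, end) spans to keep.

    Drops filler words, then collapses any silence longer than
    max_gap seconds by starting a new segment.

    `words` is a list of dicts like {"word": "hello", "start": 0.0,
    "end": 0.4} -- the shape a word-level transcript typically takes.
    """
    kept = [w for w in words if w["word"].lower().strip(".,") not in FILLER_WORDS]
    segments = []
    for w in kept:
        if segments and w["start"] - segments[-1][1] <= max_gap:
            # Short pause: extend the current segment across it.
            segments[-1] = (segments[-1][0], w["end"])
        else:
            # Long silence (or first word): start a new segment.
            segments.append((w["start"], w["end"]))
    return segments
```

The resulting segment list is exactly the structured output a downstream renderer needs: which spans of the raw recording survive the cut.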

What it does in the pipeline

In the AI video editing pipeline: raw recording → video-use (transcribe + cut filler/silences/retakes) → HyperFrames/Remotion (add motion graphics, sync animations to word timestamps) → FFmpeg → MP4 [012]
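The final FFmpeg step above can be sketched as building a trim/concat filtergraph from the keep-segments: each span is trimmed out of the source, timestamps are reset, and the pieces are concatenated. The command layout and `ffmpeg_cut_command` helper below are assumptions for illustration, not video-use's actual invocation:

```python
# Sketch of the FFmpeg render step: turning (start, end) keep-segments
# into a trim/concat filtergraph. Hypothetical helper, not video-use's API.

def ffmpeg_cut_command(src, dst, segments):
    """Build an ffmpeg command that keeps only `segments` and
    concatenates them, re-syncing video and audio timestamps."""
    parts, labels = [], []
    for i, (start, end) in enumerate(segments):
        # Trim video and audio to the span, then zero their timestamps.
        parts.append(
            f"[0:v]trim={start}:{end},setpts=PTS-STARTPTS[v{i}];"
            f"[0:a]atrim={start}:{end},asetpts=PTS-STARTPTS[a{i}];"
        )
        labels.append(f"[v{i}][a{i}]")
    # Concatenate all trimmed pairs into one video + one audio stream.
    parts.append(f"{''.join(labels)}concat=n={len(segments)}:v=1:a=1[v][a]")
    return ["ffmpeg", "-i", src, "-filter_complex", "".join(parts),
            "-map", "[v]", "-map", "[a]", dst]
```

Usage: `ffmpeg_cut_command("raw.mp4", "out.mp4", [(0.0, 1.4), (3.0, 3.5)])` yields an argument list ready for `subprocess.run`.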

Source references

  • [012] Nate Herk — Video editing & content creation cluster (2026-04-15 to 2026-04-23)