Overview

Livepeer Pipelines enable developers to build, deploy, and scale real-time AI video processing workflows. By combining multiple AI models into a single workflow, you can create sophisticated video experiences—from style transfer to object detection to live translation—without managing complex infrastructure. Pipelines abstract away the complexities of video processing, inference management, and scaling, letting you focus on creating unique video experiences.

Understanding Livepeer Pipelines

What is a Pipeline?

A Pipeline is a composable workflow that applies AI processing to live video in real-time. Think of it as an assembly line where each station performs a specific AI operation on your video stream—like style transfer, object detection, or live translation.

Right now, pipelines operate on video and return either video or data streams. In the future, pipelines will support additional input and output modalities, such as live text-to-video.

Key Concepts

Real-time Processing

Unlike traditional video AI that works with pre-recorded files, Pipelines operate on live video streams with minimal added latency. This enables interactive experiences like:

  • Live AI-powered video filters

  • Real-time content moderation

  • Dynamic visual effects

  • Instant language translation

Composability

Pipelines are built from smaller, reusable units that can be chained together. This modular approach means you can:

  • Combine multiple AI capabilities into a single workflow

  • Reuse common processing patterns

  • Modify individual components without rebuilding the entire Pipeline

Right now, these units are represented as ComfyUI nodes.
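To make the chaining idea concrete, here is a minimal conceptual sketch in Python. It is illustrative only: the `Pipeline` class and unit functions below are hypothetical stand-ins, not the Livepeer API or actual ComfyUI node definitions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A stand-in for a decoded video frame; real units operate on pixel data.
Frame = Dict[str, object]

@dataclass
class Pipeline:
    """Chains reusable processing units, applied in order to each frame."""
    units: List[Callable[[Frame], Frame]]

    def process(self, frame: Frame) -> Frame:
        for unit in self.units:
            frame = unit(frame)
        return frame

# Two reusable units that could be swapped or recombined independently.
def style_transfer(frame: Frame) -> Frame:
    return {**frame, "style": "anime"}

def add_overlay(frame: Frame) -> Frame:
    return {**frame, "overlay": "logo"}

# Combine both units into one workflow without rebuilding either unit.
pipeline = Pipeline(units=[style_transfer, add_overlay])
result = pipeline.process({"pixels": "..."})
```

Because each unit is self-contained, replacing `style_transfer` with a different effect changes one element of the `units` list and leaves the rest of the workflow untouched.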

Pipeline Types

Pipelines generally fall into three categories:

  1. Transformation Pipelines

    • Change how video looks (style transfer, filters)

    • Modify video properties (resolution, framerate)

    • Add visual elements (overlays, effects)

  2. Analysis Pipelines

    • Typically output JSON or text data streams (often high volume)

    • Detect objects or activities

    • Track movement

    • Generate metadata

  3. Generation Pipelines

    • Create new video content

    • Add synthetic elements

    • Produce alternative views

Pipelines can be composed, so you could use the output of one pipeline to inform another.
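As a rough sketch of composition, an analysis pipeline's data output can drive a transformation pipeline's behavior. The function names below are hypothetical, chosen only to illustrate the pattern, and do not correspond to the Livepeer API.

```python
from typing import Dict

def analysis_pipeline(frame: Dict) -> Dict:
    """Analysis stage: emits JSON-like metadata (e.g. detected objects)."""
    return {"objects": ["person"]} if frame.get("has_person") else {"objects": []}

def transformation_pipeline(frame: Dict, metadata: Dict) -> Dict:
    """Transformation stage: applies a blur whenever analysis found a person."""
    if "person" in metadata["objects"]:
        return {**frame, "effect": "blur"}
    return frame

# The output of the analysis pipeline informs the transformation pipeline.
frame = {"has_person": True}
out = transformation_pipeline(frame, analysis_pipeline(frame))
```

This is one common shape of composition: analysis produces metadata, and a downstream transformation consumes it to decide what to render.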

Pipeline Lifecycle

A Pipeline moves through several states:

  1. Creation

    • Built using ComfyUI and ComfyStream, or custom Python code (coming soon)

    • Defined processing steps and parameters

    • Set resource requirements

  2. Publication

    • Made available for other developers to see, use, and remix

  3. Usage

    • Integrated into creator workflows (available now) or applications (API docs coming soon)

    • Processes live video streams

    • Accepts live updates to control parameters

    • Monitored for performance

  4. Remixing (Coming Soon)

    • Another developer uses Pipeline code as a starting point to create a novel experience

Key Characteristics

Configurable

  • Behavior can be adjusted through configuration

  • Supports runtime parameter updates

  • Enables dynamic control during streaming
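A minimal sketch of what runtime configurability means in practice: parameters are read on every frame, so an update applied mid-stream takes effect immediately without restarting the workflow. The `ConfigurablePipeline` class and its methods are hypothetical, used only to illustrate the behavior.

```python
from typing import Dict

class ConfigurablePipeline:
    """Illustrative pipeline whose behavior is driven by live parameters."""

    def __init__(self, params: Dict[str, float]):
        self.params = dict(params)

    def update_params(self, **changes: float) -> None:
        # Applied between frames, so the next frame reflects the change.
        self.params.update(changes)

    def process(self, frame: Dict) -> Dict:
        # Parameters are read per-frame, enabling dynamic control.
        return {**frame, "strength": self.params["strength"]}

p = ConfigurablePipeline({"strength": 0.5})
before = p.process({"id": 1})
p.update_params(strength=0.9)   # live update during streaming
after = p.process({"id": 2})
```

Here `before` carries the original effect strength and `after` carries the updated one, with no interruption to the stream of frames.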

Observable

  • Reports health and performance metrics

  • Provides detailed error information

  • Enables monitoring and debugging


Community & Support