Create a Component

A Pipelogic component is a small program that consumes typed pipeline data, transforms it, and emits typed output. Components are the building blocks of every pipeline.

You write components in Python or C++. Both languages produce the same kind of artifact (a container image registered to your workspace) and are first-class peers — pick the one that matches your stack.

When to pick which

| Language | Pick when |
| --- | --- |
| Python | You're integrating an ML model from PyTorch, ONNX Runtime, HuggingFace, Ultralytics, or any Python ecosystem library. You want fast iteration and the platform's pipelogic.cv / pipelogic.infer helpers. |
| C++ | Latency or throughput matters. You want to use the pipeml library directly (Triton client, multi-object trackers, OpenCV without the Python wrapper). |

What's in a component

Every component has the same four pieces:

| Name | Type | Description |
| --- | --- | --- |
| component.yml | metadata | Declares the name, runtime, typed inputs/outputs, configuration parameters, and tags. |
| src/ | implementation | Either main.py (Python) or main.cpp (C++), plus pinned dependencies. |
| .pipecomponent | registration | Workspace-side metadata that ties the directory to your registered component. ppl init creates it. |
| README.md | documentation | Human-facing explainer. Pushed to the platform via ppl component update <id> --readme README.md. |
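Put together, a Python component's working tree looks roughly like this (the C++ variant swaps main.py for main.cpp; the requirements.txt filename is an assumption for where pinned dependencies live):

```
my-component/
├── component.yml        # name, runtime, typed I/O, config parameters, tags
├── .pipecomponent       # created by ppl init
├── README.md            # pushed via ppl component update
└── src/
    ├── main.py          # or main.cpp for a C++ component
    └── requirements.txt # assumed location for pinned dependencies
```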

The build & release loop

The ppl CLI handles everything end-to-end:

ppl login                 # authenticate
ppl init                  # creates .pipecomponent
ppl release               # build, test, and upload a new version

ppl release builds the container image, runs your tests (if any), and uploads the new version to your workspace. Backends that already use the component pick up the latest version automatically, unless they pin a specific version_id.

The type system

Inputs and outputs are strongly typed. component.yml declares them:

worker:
  input_type: Image                    # single input
  # or
  input_types:                         # multiple inputs
    - Image
    - "[BoundingBox]"
  output_type: Image

Type compatibility is checked at backend-wire time: connecting an Image output to a [BoundingBox] input fails immediately. See Types for the full type catalog.
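The wire-time check amounts to comparing declared types on each connection. A minimal sketch of the idea in plain Python (these names are hypothetical; the real validation lives in the platform, not in your code):

```python
# Illustrative sketch of the wire-time type check, not the pipelogic API.
def can_connect(output_type: str, input_type: str) -> bool:
    """Exact-match compatibility: an output feeds an input only if types agree."""
    return output_type == input_type

assert can_connect("Image", "Image")              # Image -> Image wires fine
assert not can_connect("Image", "[BoundingBox]")  # mismatched wire is rejected
```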

Configuration

Parameters declared in config_schema show up in the web UI and become attributes on pipelogic.worker.config (Python) or parse_config() / read_config(...) (C++):

config_schema:
  threshold:
    type: Double
    default: 0.5
    description: "Detection confidence threshold (0..1)."
  classes:
    type: "[UInt64]"
    default: []
    description: "Class IDs to detect. Empty means all."
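Conceptually, the platform overlays whatever the user sets in the web UI on top of the schema defaults before handing the result to your worker. A plain-Python sketch of that merge, using the schema above (the helper name and dict shape are assumptions, not the pipelogic API):

```python
# Illustrative sketch of config resolution for a config_schema like the one
# above; the platform does this for you, and resolve_config is hypothetical.
SCHEMA = {
    "threshold": {"type": "Double", "default": 0.5},
    "classes": {"type": "[UInt64]", "default": []},
}

def resolve_config(user_values: dict) -> dict:
    """Overlay user-supplied values on schema defaults; reject unknown keys."""
    unknown = set(user_values) - set(SCHEMA)
    if unknown:
        raise KeyError(f"unknown config keys: {sorted(unknown)}")
    config = {name: spec["default"] for name, spec in SCHEMA.items()}
    config.update(user_values)
    return config

print(resolve_config({"threshold": 0.8}))
# {'threshold': 0.8, 'classes': []}
```

In your worker you would then read the resolved values as attributes, e.g. pipelogic.worker.config.threshold in Python.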

Model serving

If your component runs ML inference, declare which serving runtime it needs via depends_on:

depends_on: ["triton"]   # or "torchserve", "ollama", "sglang", "vllm"

For file-served runtimes (Triton, TorchServe), also declare a file_schema slot so the platform knows where to load uploaded model artifacts. For hub-fetched runtimes (Ollama, SGLang, vLLM), declare a cache: block so the runtime pre-fetches weights. See Models for the full wiring.
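Sketched in component.yml terms, the two cases might look like the following; the field shapes under file_schema and cache are assumptions here, so check Models for the real schema:

```yaml
# File-served runtime -- hypothetical file_schema shape.
depends_on: ["triton"]
file_schema:
  model_repo:
    description: "Slot the platform fills with uploaded model artifacts."
---
# Hub-fetched runtime -- hypothetical cache shape.
depends_on: ["ollama"]
cache:
  model: "llama3:8b"   # weights the runtime pre-fetches at startup
```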
