# Features
A short tour of what Pipelogic ships, with links to the in-depth pages for each area.
## Typed dataflow
Every connection between components carries a typed stream. The type system is enforced at wire time, when you connect components in a backend — connecting an Image output to a [BoundingBox] input fails immediately, before anything runs.
The type catalog covers atomics (Int32, Bool, String, Bytes, etc.), collections ([T], (T, U), records, unions), and named domain types (Image, BoundingBox, AudioFrame, Tensor, Mask, Landmark, Polygon, VideoFrame, ...).
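Wire-time checking can be pictured with a minimal plain-Python sketch. This is not the pipelogic API — the function and error message below are illustrative assumptions, only the type names come from the docs above:

```python
# Minimal sketch of wire-time type checking (NOT the real pipelogic API).
# A connection is rejected the moment the stream types disagree,
# before any component runs.

def connect(output_type: str, input_type: str) -> None:
    """Refuse a connection whose stream types do not match."""
    if output_type != input_type:
        raise TypeError(
            f"cannot connect a {output_type} output to a {input_type} input"
        )

connect("Image", "Image")  # fine: types match

try:
    connect("Image", "[BoundingBox]")  # rejected at wire time
except TypeError as err:
    print(err)
```

The point is only the timing: the mismatch surfaces while wiring, not at runtime.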
## The component model
A component is a small program with a typed I/O contract. It can be written in Python (pipelogic package, fast iteration, ML-friendly) or C++ (pipeml library, low latency, full Triton/tracker access).
Pipelogic ships 165+ components covering computer vision, audio, TTS, OCR, LLM access, messaging, file I/O, and visualization. You can use them as-is, fork, or write your own.
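To make "a small program with a typed I/O contract" concrete, here is a plain-Python sketch of what such a contract conveys. The class and field names are assumptions for illustration, not the real pipelogic SDK:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a component's typed I/O contract.
# Names here are assumptions, not the pipelogic SDK's actual classes.

@dataclass
class Component:
    name: str
    inputs: dict = field(default_factory=dict)   # port name -> stream type
    outputs: dict = field(default_factory=dict)  # port name -> stream type

# A hypothetical detector: consumes an Image stream, emits a [BoundingBox] stream.
detector = Component(
    name="face_detector",
    inputs={"frame": "Image"},
    outputs={"boxes": "[BoundingBox]"},
)
```

The contract is what the platform type-checks when you wire components together; the body of the component (Python or C++) is opaque to the wiring layer.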
## Stream transformations
21 built-in transformations let you reshape data inside a backend without writing a custom component:
- Cardinality: flatten, lift_unroll, lift_reroll, length, repeat, infinite-repeat
- Joins: join, shuffle, unite_streams, select_stream
- Type conversion: convert_value, constant
- Pack / unpack: tuple, record, named, union (pack_*/unpack_*)
- Predicates: filter, cond, delay_by_one
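To give a feel for what these do to a stream's shape, here are plain-Python analogues of three of them, modeling a stream as a list. This shows the semantics only, not how the SDK spells them:

```python
# Plain-Python analogues of three stream transformations (illustrative only).
# A stream is modeled as a list; a [T] stream as a list of lists.

nested = [[1, 2], [3], []]

# flatten: a stream of [T] becomes a stream of T
flat = [x for chunk in nested for x in chunk]    # [1, 2, 3]

# length: a stream of [T] becomes a stream of Int32
lengths = [len(chunk) for chunk in nested]       # [2, 1, 0]

# filter: keep only elements matching a predicate
evens = [x for x in flat if x % 2 == 0]          # [2]
```

Inside a backend these operate on live typed streams rather than lists, but the shape changes are the same.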
## Model deployment
Components that need ML inference connect to a serving runtime. Five ship by default; you can add your own:
| Runtime | Best for |
|---|---|
| Triton | ONNX, TensorRT, PyTorch, TensorFlow |
| TorchServe | PyTorch models with custom handlers |
| Ollama | Local LLM inference |
| vLLM | High-throughput LLM serving |
| SGLang | Vision-language models, structured generation |
The platform manages the runtime lifecycle — your component just declares which runtime it needs in component.yml and the platform spins it up.
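For instance, a component that runs its model on Triton might declare that along these lines in its component.yml. The field names below are assumptions for illustration, not the documented schema:

```yaml
# Hypothetical component.yml sketch — field names are assumptions;
# only the idea of declaring a required runtime comes from the docs above.
name: face_detector
runtime: triton
```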
## Visual editor + CLI
The App lets you design backends visually with live preview. The ppl CLI lets you do everything from a terminal — build, release, deploy, debug, monitor.
Both manipulate the same underlying backend objects. Anything you build in one is editable in the other.
## Managed cloud, on-prem, or air-gapped
Pick the runtime that matches your constraints — managed cloud for elastic GPUs, on-prem for your own hardware, air-gapped for sites with no internet egress. The same backend deploys unchanged onto any of them.
## Bring your own everything
The bundled model-serving runtimes (Triton, TorchServe, Ollama, vLLM, SGLang) cover most ML inference. Everything else — third-party APIs, message brokers, databases, browsers, hardware devices — plugs in by writing a typed component. Same SDK, same typed streams, same ppl release. There is no separate plugin or extension API.
## Maturity at a glance
| Feature | Maturity |
|---|---|
| Web backend editor | 🟢 Stable |
| ppl CLI | 🟢 Stable |
| Python SDK (pipelogic) | 🟢 Stable |
| C++ SDK (pipeml) | 🟢 Stable |
| Live streaming, multimodal | 🟢 Stable |
| Stream transformations | 🟢 Stable |
| Triton, TorchServe, Ollama, vLLM, SGLang serving | 🟢 Stable |
| Backend-building agents | 🟡 Preview |