Solutions

A solution in Pipelogic is not just a model. It is the full working flow describing where data comes from, how it is processed, which models or logic are applied, and how results are delivered. Users can build anything from simple solutions that process inputs and save results to files or databases, to custom solutions that interact with applications, to autonomous systems with agentic capabilities.

Pipelogic’s modular design means you do not have to rebuild workflows from scratch each time. You can start with proven building blocks, swap out the components that need to change, and add custom logic only where new functionality is required.

When To Reuse Existing Building Blocks And When To Build Custom Logic

You should reuse existing building blocks whenever they already solve the job your product needs. In Pipelogic, many components exist in families: they play the same role in a workflow, but fit different environments, runtime backends, or delivery targets.

That is not duplication. That is what lets you adapt a solution without redesigning it.

Example: Querying an AI model

Suppose your solution has a step like this:

prompt in -> model answers -> answer continues downstream

That role can be filled by different components:

  • Query Ollama LLM (query-ollama-llm)
  • Query SGLang LLM (query-sglang-llm)
  • Query vLLM LLM (query-vllm-llm)
  • Query Anthropic LLM (query-anthropic-llm)
  • Query OpenAI LLM (query-openai-llm)

You do not create a different solution because the model backend changed. You swap the component that fulfills that role.
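The "one role, interchangeable components" idea can be sketched in plain Python. This is a hedged illustration only: the function names mirror the component ids above, but the bodies are stand-ins, not the real Pipelogic components or backend calls.

```python
# Sketch: each backend fills the same role -- prompt in, answer out --
# so they share one signature and the rest of the flow never changes.
# Names mirror the component ids above; bodies are illustrative stubs.

def query_ollama_llm(prompt: str) -> str:
    # A real component would call a local or self-hosted Ollama server.
    return f"[ollama] answer to: {prompt}"

def query_openai_llm(prompt: str) -> str:
    # A real component would call the hosted OpenAI API.
    return f"[openai] answer to: {prompt}"

def model_step(query_llm, prompt: str) -> str:
    """The pipeline step knows only the role, not the backend."""
    return query_llm(prompt)

# Swapping backends is a one-argument change; downstream code is untouched.
answer = model_step(query_ollama_llm, "Summarize the incident report")
```

Because every backend honors the same contract, the rest of the solution never needs to know which one is plugged in.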

Why do we need five options?

  • query-ollama-llm when you want a local or self-hosted Ollama setup
  • query-sglang-llm when you want self-hosted serving through SGLang
  • query-vllm-llm when you want self-hosted serving through vLLM
  • query-anthropic-llm when you want Anthropic-hosted serving of models, like Claude
  • query-openai-llm when you want OpenAI-hosted serving of models, like ChatGPT

The product need stays the same: send text in, get model output back. What changes is how and where that model is served.

Example: Streaming video into the solution

Suppose the solution begins with visual input:

video in -> analysis -> result

This role, too, can be filled by multiple reusable components:

  • Input browser webcam (input-browser-webcam)
  • Input video URL (input-video-url)
  • Input image file (input-image-file)
  • Input video file (input-video-file)

Each one is useful in a different case:

  • input-browser-webcam for browser-based interaction, demos, remote inspection, or user-facing camera tools
  • input-video-url for RTSP cameras, IP cameras, live remote streams, and infrastructure-connected feeds
  • input-image-file for offline processing, repeatable evaluation, and file-based image input
  • input-video-file for offline processing, repeatable evaluation, and file-based video input

These are four ways to fill the same role: getting visual data into the system.
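The same pattern holds for visual input: each input component can be modeled as a source that yields frames, and downstream analysis never sees where they came from. A minimal sketch, with strings standing in for frames and generator names that mirror the component ids above (none of this is the real Pipelogic SDK):

```python
# Sketch: every input component fills the role "yield visual data".
# Strings stand in for frames; names and sources are illustrative.

def input_video_file(path: str):
    # A real component would decode frames from a file on disk.
    for i in range(3):
        yield f"{path}:frame{i}"

def input_video_url(url: str):
    # A real component would pull frames from an RTSP/IP camera stream.
    for i in range(3):
        yield f"{url}:frame{i}"

def analyze(frames) -> list:
    """Downstream analysis sees frames, not their origin."""
    return [f"result({frame})" for frame in frames]

# Swapping the source does not change the analysis step.
offline_results = analyze(input_video_file("demo.mp4"))
live_results = analyze(input_video_url("rtsp://camera-1"))
```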

Example: Sending results out

At the end of a solution, you may want to deliver results in different ways:

  • Output browser video (output-browser-video)
  • Output video file (output-video-file)
  • Send HTTP (send-http)

You choose based on what the solution must produce:

  • output-browser-video when the result should be seen live
  • output-video-file when the result should be stored
  • send-http when the result should trigger or feed another system

Again, the role is stable. The delivery target changes.
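One common way to express "stable role, swappable target" is a registry keyed by component id, so the delivery target becomes a configuration value. A hedged sketch, with ids that mirror the components above and stub handlers in place of real delivery logic:

```python
# Sketch: delivery components in a registry, selected by configuration.
# Ids mirror the components above; handlers are illustrative stubs.

def output_video_file(result: str) -> str:
    # A real component would encode and store the result.
    return f"stored: {result}"

def send_http(result: str) -> str:
    # A real component would POST the result to another system.
    return f"posted: {result}"

OUTPUTS = {
    "output-video-file": output_video_file,
    "send-http": send_http,
}

def deliver(result: str, target: str) -> str:
    # The role (deliver the result) is stable; only the target varies.
    return OUTPUTS[target](result)

print(deliver("detection summary", "send-http"))
```

Changing where results go is then a one-line configuration change, not a redesign.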

When should you build custom logic?

Build custom logic only when the existing component families no longer cover what makes your product different.

That usually means:

  • the missing part is a new processing capability
  • the business rule is specific to your product
  • the integration does not exist yet
  • the data transformation is unique, not just differently hosted
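When the missing piece is a product-specific business rule, it can usually live as one small custom step between existing components. The rule below (flagging answers that contain restricted terms) is invented purely for illustration:

```python
# Sketch: custom logic as one small step between existing building
# blocks. The rule here (flag restricted terms) is made up for
# illustration; only this step is product-specific.

RESTRICTED = {"internal-only", "confidential"}

def apply_product_rule(answer: str) -> dict:
    """Wrap a model answer with a product-specific compliance flag."""
    flagged = any(term in answer.lower() for term in RESTRICTED)
    return {"answer": answer, "flagged": flagged}

result = apply_product_rule("This document is Confidential.")
```

Keeping the custom logic this narrow means everything around it stays reusable.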

The Minimum Path From Idea To A Running Result

The minimum path is not “design the perfect architecture first.” It is:

  1. Pick the outcome you want.
  2. Find the closest existing solution or component family that already solves most of it.
  3. Replace the input, model, and output with the ones that match your real use case.
  4. Run it as early as possible.
  5. Add custom logic only where the existing parts stop short.
  6. Keep the same solution shape as you move from prototype to production.

For example, if your goal is an assistant that sees a live feed, reasons over it, and returns a result, you do not start by writing everything from scratch. You start with an existing visual input, pick the AI backend that fits your environment, choose the output that matches your product, and only then write the missing product-specific step.
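The minimum path above can be sketched end to end. Every name and body here is an illustrative stand-in, not the Pipelogic SDK; the point is the shape of the solution (input role, model role, custom step, output role), which stays the same from prototype to production.

```python
# Sketch of the minimum path: compose existing roles, run early, and
# keep the same shape as components are swapped. All stubs are
# illustrative stand-ins.

def input_image_file(path: str) -> str:
    return f"image({path})"          # stand-in visual input

def query_anthropic_llm(prompt: str) -> str:
    return f"answer({prompt})"       # stand-in model backend

def output_browser_video(result: str) -> str:
    return f"shown({result})"        # stand-in delivery

def custom_step(result: str) -> str:
    return result.upper()            # the one product-specific part

def solution(path: str) -> str:
    frame = input_image_file(path)
    answer = query_anthropic_llm(f"Describe {frame}")
    return output_browser_video(custom_step(answer))

# Swapping input_image_file for a webcam component, or the Anthropic
# backend for a self-hosted one, changes one line -- not the shape.
out = solution("site-photo.jpg")
```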
