ocv
The CV layer of pipeml. Domain wrappers over OpenCV plus pipelang serialization.
Pipelang construction is owned by the wrapper classes — prefer them over hand-rolling raw pipelang values, which is more code and breaks when the underlying record shape changes.
Image and ColorSpace
Header: pipelogic/cv/image.hpp. Implementation: src/image.cpp.
enum class ColorSpace { NONE, GRAY, RGB, RGBA, BGR, BGRA };
int channels_of(ColorSpace); // 1, 3, 3, 4, 3, 4 (NONE → throws)
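The mapping is small enough to restate as a self-contained sketch. This is not the library header (that lives in pipelogic/cv/image.hpp); it only mirrors the documented behavior, including the throw on NONE:

```cpp
#include <stdexcept>

enum class ColorSpace { NONE, GRAY, RGB, RGBA, BGR, BGRA };

// Sketch of the documented mapping: GRAY -> 1, RGB/BGR -> 3,
// RGBA/BGRA -> 4, NONE -> throws.
int channels_of(ColorSpace cs) {
    switch (cs) {
        case ColorSpace::GRAY: return 1;
        case ColorSpace::RGB:
        case ColorSpace::BGR:  return 3;
        case ColorSpace::RGBA:
        case ColorSpace::BGRA: return 4;
        case ColorSpace::NONE: break;
    }
    throw std::invalid_argument("channels_of: ColorSpace::NONE has no channel count");
}
```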
Constructors
Image(); // empty
Image(cv::Mat img, ColorSpace cs = ColorSpace::BGR); // copies
Image(int height, int width, int cv_type, ColorSpace cs = ColorSpace::BGR);
Image(pipelang::Object<types::Image>); // from pipelang value
Image(pipelang::Object<pipelang::Any>); // coerced
Image(const pipelang::Reference<types::Image>&);
Image(pipelang::Reference<types::Image>&&);
Image(pipelang::ConstReference<types::Image>);
Image is non-copyable but movable.
Methods
- int width() const, height() const, channels() const, depth() const
- ColorSpace color_space() const
- bool empty() const
- const cv::Mat mat() const: read-only view. Mutating through this is undefined behavior.
- cv::Mat& mutable_mat(): materializes ownership (clones if backed by pipelang) and returns a writable matrix.
- const uchar* data() const, uchar* mutable_data()
- void resize(const cv::Size&, int interpolation = cv::INTER_LINEAR)
- void crop(cv::Rect)
- void convert(ColorSpace): in-place color-space conversion.
- void convert(ColorSpace, cv::OutputArray dst) const: out-of-place variant.
- void convert(ColorSpace, Image& dst) const
- bool can_convert(ColorSpace) const
- Image clone() const
- operator pipelang::Object<types::Image>(): materializes the pipelang record. Called automatically when you return an Image from your worker.
Notes
- Construction from cv::Mat always allocates and memcpys into a fresh pipelang buffer, despite older comments suggesting otherwise.
- The pipelang buffer is 8-bit-per-channel only; non-CV_8U matrices won't round-trip.
- Conversion goes through a hand-written 5×5 colour-space table; ColorSpace::NONE throws.
MatTensor
Header: pipelogic/cv/tensor.hpp. Implementation: src/ocv_tensor.cpp.
A bridge between OpenCV's cv::Mat and pipeml's infer::Tensor taxonomy.
class MatTensor : public infer::Tensor { ... };
Constructors
MatTensor(cv::Mat data) noexcept; // dtype inferred
MatTensor(cv::Mat data, infer::DataType type); // explicit, validates
MatTensor(const infer::Tensor&) noexcept; // clones
MatTensor(infer::MovableTensor&&) noexcept;
template <pipelang::concepts::ConstSizeAtomic T>
MatTensor(pipelang::Object<infer::types::Tensor<T>>) noexcept;
// + Reference / ConstReference variants
Methods
- const cv::Mat& mat() const: read-only view of the underlying matrix. Use this when you only need to read pixel/element data.
- cv::Mat& mutable_mat(): writable reference to the underlying matrix. Operate in place on this to avoid extra allocations.
- int ndims() const: number of logical tensor dimensions, including channels. Channels are exposed as a separate trailing dim, so an (H, W, C) image reports ndims() == 3.
- int64_t ldim(int dim) const: length of the dim-th logical dimension. With ndims() and a loop over ldim(i) you can recover the full shape without touching the OpenCV layout details.
- size_t size() const: total element count, the product of all logical dimensions. Useful for sanity checks before passing the buffer to an inference engine.
- infer::DataType type() const: logical element type (BOOL_T, UINT8_T, INT8_T, UINT32_T, INT32_T, UINT64_T, INT64_T, FP32_T, FP64_T). Compare against this when binding the tensor to a model input rather than poking at cv::Mat::type().
- int width() const, height() const: image-style accessors for the last two spatial dimensions. Convenience wrappers over ldim().
- template <typename T> infer::PipeTensor<T> move_pipe_tensor(): releases the cv::Mat, downcasts the held MovableTensor to PipeTensor<T>, and returns by move. Use this when handing off ownership to inference code that expects the templated tensor form.
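The ndims()/ldim() shape-recovery loop can be exercised without the library; this sketch uses a hypothetical stand-in type (FakeTensor is not part of pipeml) just to show the pattern:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in exposing MatTensor's shape interface,
// used only to demonstrate the ndims()/ldim() loop.
struct FakeTensor {
    std::vector<int64_t> dims;
    int ndims() const { return static_cast<int>(dims.size()); }
    int64_t ldim(int d) const { return dims.at(d); }
};

// Recover the full logical shape without touching OpenCV layout details.
std::vector<int64_t> shape_of(const FakeTensor& t) {
    std::vector<int64_t> shape;
    for (int i = 0; i < t.ndims(); ++i) shape.push_back(t.ldim(i));
    return shape;
}
```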
infer::DataType covers BOOL_T, UINT8_T, INT8_T, UINT32_T, INT32_T, UINT64_T, INT64_T, FP32_T, FP64_T. BOOL_T and UINT8_T both map to CV_8UC*. UINT64_T, INT64_T, and FP64_T all map to CV_64FC* (8-byte storage; the bit pattern is preserved).
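The documented collapsing of dtypes onto cv::Mat depths can be sketched standalone. The depth constants below are the standard OpenCV values; only the mappings stated above are implemented, and FP32_T -> CV_32F is an assumption, so treat this as illustrative rather than the library's actual table:

```cpp
#include <stdexcept>

// Standard OpenCV depth codes (from opencv2/core/hal/interface.h).
constexpr int CV_8U = 0, CV_32F = 5, CV_64F = 6;

enum class DataType { BOOL_T, UINT8_T, INT8_T, UINT32_T, INT32_T,
                      UINT64_T, INT64_T, FP32_T, FP64_T };

// Sketch of the documented dtype -> depth collapsing. Cases not
// documented above are deliberately left out.
int cv_depth_of(DataType t) {
    switch (t) {
        case DataType::BOOL_T:
        case DataType::UINT8_T:  return CV_8U;   // both use 8-bit storage
        case DataType::FP32_T:   return CV_32F;  // assumption, not stated above
        case DataType::UINT64_T:
        case DataType::INT64_T:                   // 8 bytes, bit pattern preserved
        case DataType::FP64_T:   return CV_64F;
        default: throw std::logic_error("mapping not documented here");
    }
}
```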
Geometry
Header: pipelogic/cv/geometry.hpp. Implementation: src/geometry.cpp.
Pipelang named types
using DetectedClass = pipelang::Named<"DetectedClass",
pipelang::Record<RecordField<"id", UInt64>,
RecordField<"confidence", Double>>>;
template <typename T> using Point = pipelang::Named<"Point", ...>;
template <typename T> using Vector2d = pipelang::Named<"Vector2d", ...>;
template <typename T> using Rectangle = pipelang::Named<"Rectangle", ...>;
using BoundingBox = pipelang::Named<"BoundingBox", ...>;
using Landmark = pipelang::Named<"Landmark", ...>;
using Mask = pipelang::Named<"Mask", infer::types::Tensor<Bool>>;
using Segmentation = pipelang::Named<"Segmentation", ...>;
Wrapper classes
Use these — never construct the named records by hand.
class DetectedClass {
DetectedClass(); // (id=0, conf=0)
DetectedClass(uint64_t id, double confidence);
DetectedClass(PipeBase obj);
uint64_t id() const;
double confidence() const;
void set_id(uint64_t), set_confidence(double);
operator PipeBase() const;
bool operator==(const DetectedClass&) const noexcept;
};
class Point2d {
Point2d();
Point2d(cv::Point2d);
Point2d(double x, double y);
// + pipelang ctors
operator cv::Point2d() const noexcept; // ergonomic OpenCV interop
double x() const, y() const;
};
class Rect2d {
Rect2d();
Rect2d(cv::Rect2d); // uses .tl(), .br()
Rect2d(Point2d top_left, Point2d bottom_right);
// + pipelang ctors
operator cv::Rect2d() const noexcept;
Point2d top_left() const, bottom_right() const;
};
class Landmark {
Landmark();
Landmark(Point2d, double confidence);
Point2d point() const;
double confidence() const;
};
class BoundingBox {
BoundingBox();
BoundingBox(DetectedClass, Rect2d);
DetectedClass detected_class() const noexcept;
Rect2d rectangle() const noexcept;
};
class Vector2d {
Vector2d();
Vector2d(Point2d position, Point2d orientation);
};
class Mask {
Mask();
Mask(MatTensor); // boolean tensor
Mask(cv::Mat); // CV_8UC1 or CV_32FC1 (auto-thresholded at 0.5)
const MatTensor& tensor() const noexcept;
operator pipelang::Object<types::Mask>();
};
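The CV_32FC1 path of Mask(cv::Mat) can be illustrated without OpenCV. This standalone sketch (the helper name is hypothetical) applies the documented 0.5 threshold to a float buffer to produce the boolean mask data:

```cpp
#include <vector>

// Hypothetical helper mirroring Mask(cv::Mat) on CV_32FC1 input:
// each float value becomes true iff it exceeds the 0.5 threshold.
std::vector<bool> threshold_mask(const std::vector<float>& probs,
                                 float thresh = 0.5f) {
    std::vector<bool> out;
    out.reserve(probs.size());
    for (float p : probs) out.push_back(p > thresh);
    return out;
}
```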
class Segmentation {
Segmentation();
Segmentation(DetectedClass, Mask);
};
Free utilities
float rect_iou(const Rect2d& lhs, const Rect2d& rhs);
float rect_iosd(const Rect2d& lhs, const cv::Rect2d& rhs); // intersection / sym diff
cv::Point array2point(const std::array<int, 2>&); // {y, x} -> cv::Point(x, y)
cv::Rect array2rect (const std::array<int, 4>&); // {y1, x1, y2, x2}
cv::Rect clip(const cv::Rect2d&, const cv::Mat&);
cv::Rect2d scale(const Rect2d&, float scale_factor);
template <typename T>
Rect2d covering_rect(const std::vector<cv::Point_<T>>&) noexcept;
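rect_iou follows the standard intersection-over-union formula. A self-contained sketch over a plain stand-in rectangle (not the library's implementation, which takes Rect2d):

```cpp
#include <algorithm>

// Minimal axis-aligned rectangle; stands in for Rect2d / cv::Rect2d here.
struct R { double x, y, w, h; };

// Standard IoU: |A ∩ B| / (|A| + |B| - |A ∩ B|). Returns 0 for
// non-overlapping or degenerate rectangles.
float rect_iou_sketch(const R& a, const R& b) {
    double ix = std::max(0.0, std::min(a.x + a.w, b.x + b.w) - std::max(a.x, b.x));
    double iy = std::max(0.0, std::min(a.y + a.h, b.y + b.h) - std::max(a.y, b.y));
    double inter = ix * iy;
    double uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.0 ? static_cast<float>(inter / uni) : 0.0f;
}
```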
array2point/array2rect accept (y, x) order — opposite of OpenCV's cv::Point(x, y) convention. Watch out.
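To make the (y, x) convention concrete, here is an equivalent standalone sketch of the swap, using a plain struct instead of cv::Point:

```cpp
#include <array>

struct Pt { int x, y; };  // stands in for cv::Point(x, y)

// Mirrors array2point's documented behavior: the input array is
// {y, x}, the output is an (x, y) point.
Pt array2point_sketch(const std::array<int, 2>& yx) {
    return Pt{yx[1], yx[0]};
}
```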
What's next
- API: detect — detect::params::* mixins, ObjectDetector, ImageClassifier, PoseEstimator.
- API: triton — Triton client, infer::Tensor / Model abstraction.
- How to write a detection worker — full walkthrough.