
🧩 ORP: a Lightweight Rust Framework for Building ONNX Runtime Pipelines with ORT

💬 Introduction

orp is a lightweight framework designed to simplify the creation and execution of ONNX Runtime pipelines in Rust. Built on top of the 🦀 ort runtime and the 🔗 composable crate, it provides a simple way to handle data pre- and post-processing and to chain multiple ONNX models together, while encouraging code reuse and clarity.

🔨 Sample Use-Cases

GPU/NPU Inferences

The execution providers available in ort can be leveraged to perform considerably faster inferences on GPU/NPU hardware.

The first step is to pass the appropriate execution providers in RuntimeParameters. For example:

let rtp = RuntimeParameters::default().with_execution_providers([
    CUDAExecutionProvider::default().build()
]);

The second step is to activate the appropriate features (see the related section below); otherwise it may silently fall back to CPU. For example:

$ cargo run --features=cuda ...

Please refer to doc/ORT.md for details about execution providers.

📦 Crate Features

This crate mirrors the following ort features:

  • To allow for dynamic loading of ONNX-runtime libraries: load-dynamic
  • To allow for activation of execution providers: cuda, tensorrt, directml, coreml, rocm, openvino, onednn, xnnpack, qnn, cann, nnapi, tvm, acl, armnn, migraphx, vitis, and rknpu.
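Since these features are forwarded to ort, they are typically enabled in the dependency declaration rather than on every cargo invocation. A hypothetical Cargo.toml excerpt (the version requirement `*` is only a placeholder):

```toml
[dependencies]
# Enable the CUDA execution provider feature of orp,
# which mirrors the corresponding ort feature.
orp = { version = "*", features = ["cuda"] }
```

With the feature declared here, a plain `cargo run` suffices instead of passing `--features=cuda` each time.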

⚙️ Dependencies

  • ort: the ONNX runtime wrapper
  • composable: this crate is used to define the pre- and post-processing pipelines by composition of elementary steps, and can in turn be used to combine multiple pipelines.