
OpenVINO Backend for ExecuTorch

The OpenVINO backend enables optimized execution of deep learning models on Intel hardware, leveraging Intel's OpenVINO toolkit for inference acceleration.

Supported Hardware

The OpenVINO backend supports the following hardware:

  • Intel CPUs
  • Intel integrated GPUs
  • Intel discrete GPUs
  • Intel NPUs

For more information on the supported hardware, please refer to the OpenVINO System Requirements page.
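Once OpenVINO is installed (see the build instructions below), the device targets visible on a given machine can be listed with OpenVINO's Python API. This is a quick way to confirm which of the targets above are available locally; it requires the `openvino` Python package:

```python
# List the OpenVINO device targets available on this machine,
# e.g. ['CPU', 'GPU', 'NPU']. Requires the openvino Python package.
import openvino as ov

core = ov.Core()
print(core.available_devices)
```

The strings printed here (such as "CPU" or "GPU") are the device names OpenVINO expects when selecting an inference target.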

Directory Structure

executorch
├── backends
│   └── openvino
│       ├── runtime
│       │   ├── OpenvinoBackend.cpp
│       │   └── OpenvinoBackend.h
│       ├── scripts
│       │   └── openvino_build.sh
│       ├── tests
│       ├── CMakeLists.txt
│       ├── README.md
│       ├── __init__.py
│       ├── partitioner.py
│       ├── preprocess.py
│       └── requirements.txt
└── examples
    └── openvino
        ├── aot_optimize_and_infer.py
        └── README.md

Build Instructions

Prerequisites

Before you begin, ensure you have OpenVINO installed and configured on your system:

git clone https://github.com/openvinotoolkit/openvino.git
cd openvino && git checkout releases/2025/1
git submodule update --init --recursive
sudo ./install_build_dependencies.sh
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON
make -j$(nproc)

cd ..
cmake --install build --prefix <your_preferred_install_location>
cd <your_preferred_install_location>
source setupvars.sh

Note: The OpenVINO backend is not yet supported with the current OpenVINO release packages, so building from source is recommended. Instructions for using OpenVINO release packages will be added soon. For more information about building OpenVINO, refer to the OpenVINO Build Instructions.
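After sourcing setupvars.sh, a quick sanity check that the build and environment setup succeeded (assuming the Python bindings were built with -DENABLE_PYTHON=ON as above) is to import the package and print its version:

```shell
# Should print the OpenVINO version you checked out, e.g. a 2025.1 build.
python -c "import openvino as ov; print(ov.__version__)"
```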

Setup

Follow the steps below to set up your build environment:

  1. Setup ExecuTorch Environment: Refer to the Environment Setup guide for detailed instructions on setting up the ExecuTorch environment.

  2. Setup OpenVINO Backend Environment: Install the dependency libraries. Ensure that you are inside the executorch/backends/openvino/ directory:

     pip install -r requirements.txt

     Note: To achieve optimal performance with NNCF quantization, install the latest development version of NNCF (version 2.16.0.dev0+191b53d9 or higher).

  3. Navigate to the scripts/ directory.

  4. Build OpenVINO Backend C++ Libraries and Executor Runner: Once the prerequisites are in place, run the openvino_build.sh script to start the build process. By default, the OpenVINO backend is built under cmake-out/backends/openvino/ as libopenvino_backend.a:

     ./openvino_build.sh

     Build OpenVINO Backend Python Package with Pybindings: To build and install the OpenVINO backend Python package with Python bindings, run the openvino_build.sh script with the --enable_python argument. This compiles and installs the ExecuTorch Python package with the OpenVINO backend into your Python environment. It also enables the Python bindings required to run the OpenVINO backend tests and the aot_optimize_and_infer.py script inside the executorch/examples/openvino folder.

     ./openvino_build.sh --enable_python
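With the Python package installed, the ahead-of-time flow exports a PyTorch model and lowers it to the OpenVINO backend. The sketch below follows the standard ExecuTorch lowering API; the OpenvinoPartitioner name and the "device" compile-spec key are assumptions based on the partitioner.py file in the directory layout above, so verify them against that file before use:

```python
# Hypothetical sketch of lowering a model to the OpenVINO backend.
# OpenvinoPartitioner and the "device" CompileSpec key are assumptions;
# check executorch/backends/openvino/partitioner.py for the actual API.
import torch
from executorch.backends.openvino.partitioner import OpenvinoPartitioner
from executorch.exir import to_edge_transform_and_lower
from executorch.exir.backend.backend_details import CompileSpec

model = torch.nn.Linear(8, 4).eval()
example_inputs = (torch.randn(1, 8),)

# Export the model, then delegate supported subgraphs to OpenVINO.
exported = torch.export.export(model, example_inputs)
program = to_edge_transform_and_lower(
    exported,
    partitioner=[OpenvinoPartitioner([CompileSpec("device", b"CPU")])],
).to_executorch()

# Serialize the lowered program to a .pte file for the runtime.
with open("model.pte", "wb") as f:
    f.write(program.buffer)
```

The resulting model.pte can then be executed with the executor runner built in step 4.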

Run

Please refer to README.md for instructions on running examples of various models with the OpenVINO backend.
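As an illustration of what such a run typically looks like with the executor runner built earlier, a lowered .pte file is passed to the binary. The binary location and flag name below are assumptions based on the default cmake-out layout; check the runner's --help output for the exact interface:

```shell
# Hypothetical invocation; verify the path and flags for your build.
./cmake-out/executor_runner --model_path ./model.pte
```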