The OpenVINO backend enables optimized execution of deep learning models on Intel hardware, leveraging Intel's OpenVINO toolkit for inference acceleration.
The OpenVINO backend supports the following hardware:
- Intel CPUs
- Intel integrated GPUs
- Intel discrete GPUs
- Intel NPUs
For more information on the supported hardware, please refer to the OpenVINO System Requirements page.
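Once OpenVINO is installed (see the setup steps below), you can check which of these devices OpenVINO actually detects on your machine with a short Python sketch such as the following:

```python
import openvino as ov

core = ov.Core()
# Prints the device names OpenVINO can see on this machine,
# e.g. ['CPU', 'GPU', 'NPU'], depending on the available Intel hardware.
print(core.available_devices)
```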
The OpenVINO backend sources and examples are laid out in the ExecuTorch repository as follows:

```
executorch
├── backends
│   └── openvino
│       ├── runtime
│       │   ├── OpenvinoBackend.cpp
│       │   └── OpenvinoBackend.h
│       ├── scripts
│       │   └── openvino_build.sh
│       ├── tests
│       ├── CMakeLists.txt
│       ├── README.md
│       ├── __init__.py
│       ├── partitioner.py
│       ├── preprocess.py
│       └── requirements.txt
└── examples
    └── openvino
        ├── aot_optimize_and_infer.py
        └── README.md
```
Before you begin, ensure you have OpenVINO installed and configured on your system:
```bash
git clone https://github.com/openvinotoolkit/openvino.git
cd openvino && git checkout releases/2025/1
git submodule update --init --recursive
sudo ./install_build_dependencies.sh
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON
make -j$(nproc)
cd ..
cmake --install build --prefix <your_preferred_install_location>
cd <your_preferred_install_location>
source setupvars.sh
```
Note: The OpenVINO backend is not yet supported with the current OpenVINO release packages, so it is recommended to build OpenVINO from source. Instructions for using OpenVINO release packages will be added soon. For more information about building OpenVINO, refer to the OpenVINO Build Instructions.
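After sourcing `setupvars.sh`, you can confirm that Python picks up the OpenVINO build you just installed. A minimal check, assuming the `-DENABLE_PYTHON=ON` build above succeeded:

```python
import openvino as ov

# Prints the OpenVINO build string; it should correspond to the
# releases/2025/1 sources built and installed above.
print(ov.get_version())
```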
Follow the steps below to set up your build environment:

- **Setup ExecuTorch Environment**: Refer to the Environment Setup guide for detailed instructions on setting up the ExecuTorch environment.

- **Setup OpenVINO Backend Environment**
  - Install the dependent libraries. Ensure that you are inside the `executorch/backends/openvino/` directory:

    ```bash
    pip install -r requirements.txt
    ```

    Note: To achieve optimal performance with NNCF quantization, you should install the latest development version of NNCF (version 2.16.0.dev0+191b53d9 or higher).

- Navigate to the `scripts/` directory.

- **Build OpenVINO Backend C++ Libraries and Executor Runner**: Once the prerequisites are in place, run the `openvino_build.sh` script to start the build process. By default, the OpenVINO backend is built under `cmake-out/backends/openvino/` as `libopenvino_backend.a`:

  ```bash
  ./openvino_build.sh
  ```

  **Build OpenVINO Backend Python Package with Pybindings**: To build and install the OpenVINO backend Python package with Python bindings, run the `openvino_build.sh` script with the `--enable_python` argument. This compiles and installs the ExecuTorch Python package with the OpenVINO backend into your Python environment. It also enables the Python bindings required to run the OpenVINO backend tests and the `aot_optimize_and_infer.py` script inside the `executorch/examples/openvino` folder (see the export sketch after this list):

  ```bash
  ./openvino_build.sh --enable_python
  ```
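With the Python package installed via `--enable_python`, a model can be exported ahead of time and lowered to the OpenVINO backend directly from Python. The sketch below is illustrative only: it assumes the partitioner class exported by `partitioner.py` is `OpenvinoPartitioner` and that it accepts a list of `CompileSpec` entries (including a `device` entry); check `partitioner.py` and `examples/openvino/aot_optimize_and_infer.py` for the exact API.

```python
import torch
import torchvision.models as models

from executorch.backends.openvino.partitioner import OpenvinoPartitioner
from executorch.exir import to_edge_transform_and_lower
from executorch.exir.backend.compile_spec_schema import CompileSpec

# Any export-friendly eager model works here; MobileNetV2 with random
# weights is enough for a build smoke test.
model = models.mobilenet_v2(weights=None).eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# Capture the model with torch.export, then lower the supported subgraphs
# to the OpenVINO backend. The "device" compile spec ("CPU", "GPU", or
# "NPU") is an assumption; verify it against partitioner.py.
exported_program = torch.export.export(model, example_inputs)
edge_program = to_edge_transform_and_lower(
    exported_program,
    partitioner=[OpenvinoPartitioner([CompileSpec("device", b"CPU")])],
)

# Serialize the lowered program to a .pte file for the ExecuTorch runtime.
executorch_program = edge_program.to_executorch()
with open("mv2_openvino.pte", "wb") as f:
    f.write(executorch_program.buffer)
```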
Please refer to README.md for instructions on running examples of various models with the OpenVINO backend.
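Once a `.pte` file has been generated, it can also be run from Python. The following is a rough sketch using ExecuTorch's `executorch.runtime` Python API; it assumes the bindings built with `--enable_python` register the OpenVINO backend, and the file name `mv2_openvino.pte` is just the example produced above, so verify the details against `executorch/examples/openvino`.

```python
import torch
from executorch.runtime import Runtime

# Load the program produced by the export step above.
runtime = Runtime.get()
program = runtime.load_program("mv2_openvino.pte")
method = program.load_method("forward")

# Run a single inference with a random input of the exported shape.
outputs = method.execute([torch.randn(1, 3, 224, 224)])
print(outputs[0].shape)
```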