- Support model optimizer with multi-node, multi-GPU training and pruning.
- Support new model compiler.
- Support Arm CPUs.
- Support custom runtimes and service cores.
- Support benchmark test framework for deep learning models.
- Support cloud-native services.
- Support integration of the model compiler with model quantization.
- Support Raspberry Pi.
- Support optimizing YOLO models and accelerating inference.
- Support knowledge distillation on ResNet-50.
- Support compiling for Arm devices in the Serving Engine.
- Support async mode in the OpenVINO runtime.
- Support Paddle model format in the Model Compiler.
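As an illustration of the knowledge-distillation item above, here is a minimal pure-Python sketch of the classic temperature-based distillation loss (soft-target KL term plus hard-label cross-entropy). The function names, the temperature of 4.0, and the mixing weight `alpha` are illustrative assumptions, not this project's actual implementation.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature yields a softer
    # (more uniform) distribution over classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label,
                      temperature=4.0, alpha=0.9):
    """Sketch of a knowledge-distillation loss (values are assumptions).

    Mixes the KL divergence between the temperature-softened teacher
    and student distributions with ordinary cross-entropy against the
    hard label. The KL term is scaled by T^2 so its gradient magnitude
    stays comparable across temperatures.
    """
    teacher = softmax(teacher_logits, temperature)
    student = softmax(student_logits, temperature)
    kl = sum(t * math.log(t / s) for t, s in zip(teacher, student))
    hard_ce = -math.log(softmax(student_logits)[label])
    return alpha * (temperature ** 2) * kl + (1 - alpha) * hard_ce
```

When the student's logits match the teacher's, the KL term vanishes and only the (down-weighted) hard-label term remains; as the student diverges from the teacher, the loss grows.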