Deci Platform
📚 This guide explains how to streamline the process of compiling and quantizing YOLOv5 🚀 to achieve better performance with the Deci platform. UPDATED 10 August 2022.
- About the Deci Platform
- First-time setup
- Runtime optimization and benchmarking of your model
The Deci platform includes free tools for easily managing, optimizing, and deploying models in any production environment. Deci supports all popular DL frameworks, such as TensorFlow, PyTorch, Keras and ONNX. All you need is our web-based platform or our Python client to run it from your code.
- Improve Inference performance by up to 10X: Automatically compile and quantize your models and evaluate different production settings to achieve better latency and throughput, and to reduce model size and memory footprint on your hardware.
- Find the best inference hardware for your application: Benchmark your model's performance on various hardware (including edge devices) with the click of a button, eliminating the need to manually set up and test different hardware and production settings.
- Deploy with a Few Lines of Code: Leverage Deci's Python-based inference engine, compatible with multiple frameworks and hardware types.
For more information about the Deci platform, please visit Deci's website.
Go to https://console.deci.ai/sign-up and open your free account.
To start optimizing your pre-trained YOLOv5 model, you will first need to convert it to ONNX format. See the YOLOv5 Export Tutorial for instructions.
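For example, exporting the yolov5s checkpoint to ONNX typically looks like this (yolov5s.pt is used here as an example; substitute your own weights file):

```bash
# Export a pre-trained YOLOv5 checkpoint to ONNX (run from the YOLOv5 repository root)
python export.py --weights yolov5s.pt --include onnx
```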
Go to "Lab" tab and click the "New Model" button in the top right part of the screen to upload your YOLOv5 ONNX model.
Follow the steps of the model upload wizard to select your target hardware as well as desired batch size and quantization level for the model compilation.
After filling in the relevant information, click "Start". The Deci platform will automatically perform a runtime optimization of your YOLOv5 model for the hardware you selected as well as benchmark your model on various hardware types. This process takes approximately 10 minutes.
Once done, a new row will appear on your screen underneath the baseline model you previously uploaded. Here you can see the optimized version of your pre-trained YOLOv5 model.
- You can then download your optimized model by clicking the "Deploy" button.
You will then be prompted to download your model and receive instructions on how to install and use Infery - Deci's runtime inference engine.
The use of Infery is optional. You can also get the raw model files and use them with any other inference engine of your choice.
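For example, if you download the model in ONNX format, a minimal sketch of running it with ONNX Runtime could look like the following (the filename and input size are assumptions for illustration; adjust them to your exported model):

```python
# Minimal ONNX Runtime inference sketch for a YOLOv5 ONNX model.
# "yolov5s_optimized.onnx" is a placeholder name for the model downloaded from the platform.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov5s_optimized.onnx", providers=["CPUExecutionProvider"])

# YOLOv5 expects a float32 NCHW tensor; 1x3x640x640 is the default export shape.
input_name = session.get_inputs()[0].name
dummy_image = np.random.rand(1, 3, 640, 640).astype(np.float32)

outputs = session.run(None, {input_name: dummy_image})
print(outputs[0].shape)  # raw predictions, e.g. (1, 25200, 85) for a COCO-trained yolov5s
```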
- Explore the optimization and benchmark results on the "Insights" tab.