FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation (ICRA 2021)
ComfyUI Depth Anything (v1/v2) TensorRT custom node (up to 14x faster)
Traffic analysis at a roundabout using computer vision
YOLOv5 TensorRT implementations
Use DBNet to detect text or barcodes; knowledge distillation and Python TensorRT inference are also provided.
Based on TensorRT 8.2.4; compares inference speed across different TensorRT APIs.
Production-ready YOLOv8 segmentation deployment with TensorRT and ONNX support for CPU/GPU, including AI model integration guidance for Unitlab Annotate.
ViTPose without MMCV dependencies
The real-time instance segmentation algorithm SparseInst running on TensorRT and ONNX
Convert YOLO models to ONNX or TensorRT with batched NMS (NMSBatched) added.
Advanced inference performance using TensorRT for CRAFT text detection. Implements modules to convert PyTorch -> ONNX -> TensorRT, with dynamic-shape (multi-size input) inference.
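As a sketch of the dynamic-shape half of such a pipeline (the model class, input shapes, and file names below are illustrative assumptions, not taken from the repo), the PyTorch export marks batch, height, and width as dynamic so one ONNX file serves many input sizes:

```python
import torch

# Hypothetical detector; stands in for a CRAFT-style model.
model = MyTextDetector().eval()
dummy = torch.randn(1, 3, 768, 768)

# dynamic_axes lets the exported graph accept multi-size inputs.
torch.onnx.export(
    model, dummy, "craft.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch", 2: "height", 3: "width"}},
    opset_version=13,
)
```

The TensorRT side then needs an optimization profile with min/opt/max bounds for the dynamic input, as in the build sketch after the next entry.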
Convert ONNX models to TensorRT engines and run inference in containerized environments
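A minimal engine-build sketch with the TensorRT 8.x Python API (file names and shape bounds are assumptions; older 8.x releases use config.max_workspace_size instead of set_memory_pool_limit):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("craft.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB

# Shape bounds for the dynamic input exported above (assumed values).
profile = builder.create_optimization_profile()
profile.set_shape("input", (1, 3, 256, 256), (1, 3, 768, 768), (1, 3, 1280, 1280))
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)
with open("craft.engine", "wb") as f:
    f.write(engine_bytes)
```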
An oriented object detection framework based on TensorRT
"Narrative Canvas" project is an edge computing project based on Nvidia Jetson. It can transform uploaded images into captivating stories and artworks.
Sinapsis repo with templates for face detection, face recognition and face verification
Export a TensorRT engine (from ONNX) and run inference with Python
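A minimal inference sketch with the TensorRT 8.x bindings API plus pycuda (engine path, binding order, and float32 dtype are assumptions; later releases replace the bindings-index API with named tensors):

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Assumes one input (binding 0) and one output (binding 1) with static shapes.
inp = np.random.rand(*tuple(engine.get_binding_shape(0))).astype(np.float32)
out = np.empty(tuple(engine.get_binding_shape(1)), dtype=np.float32)
d_inp, d_out = cuda.mem_alloc(inp.nbytes), cuda.mem_alloc(out.nbytes)

# Copy in, run, copy out, all on one stream.
stream = cuda.Stream()
cuda.memcpy_htod_async(d_inp, inp, stream)
context.execute_async_v2([int(d_inp), int(d_out)], stream.handle)
cuda.memcpy_dtoh_async(out, d_out, stream)
stream.synchronize()
```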
Dolphin is a Python toolkit that speeds up TensorRT inference by providing CUDA-accelerated processing.
Real-time traffic analysis, a small yet tricky pet project
An MNIST example showing how to convert a .pt file to .onnx, then convert the .onnx file to a .trt file.
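The first hop of that pipeline is a one-call ONNX export; a minimal sketch (the model class and file names are illustrative assumptions):

```python
import torch

model = MnistNet()  # hypothetical trained network
model.load_state_dict(torch.load("mnist.pt"))
model.eval()

# A single dummy batch fixes the input shape recorded in the ONNX graph.
torch.onnx.export(
    model, torch.randn(1, 1, 28, 28), "mnist.onnx",
    input_names=["input"], output_names=["output"], opset_version=13,
)
```

From there, TensorRT's bundled trtexec tool (or the builder API sketched earlier) turns the ONNX file into a serialized engine, e.g. trtexec --onnx=mnist.onnx --saveEngine=mnist.trt.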
Practical, beginner-friendly LLM projects using Python, LangChain, and LangSmith. Modular, reusable, and easy to run.