
Use ONNX model for inference #75

Open · JensenHJS opened this issue Nov 12, 2019 · 4 comments

@JensenHJS commented Nov 12, 2019

I can provide the ONNX model. Do you currently plan to support ONNX models with YOLOv3?
The original model is yolov3.weights, downloaded from the official Darknet website. I converted it to yolov3-608.onnx using the Python API, then converted the ONNX model to yolo-608.trt.
https://drive.google.com/drive/folders/1fRcxY5YgEQ8DUmS1tEdDKxL45SsKkvFh?usp=sharing
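
For reference, the ONNX-to-TRT step can be done with the TensorRT Python API. Below is a minimal sketch assuming TensorRT 6.x; the `build_engine` helper is illustrative, not from this repo, and only the file names match the ones above:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    # ONNX models must be parsed into an explicit-batch network in TensorRT 6+.
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30  # 1 GiB build workspace
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)

engine = build_engine("yolov3-608.onnx")
with open("yolo-608.trt", "wb") as f:
    f.write(engine.serialize())  # serialized engine, loadable at inference time
```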

@lewes6369 (Owner) commented Nov 24, 2019

As far as I know, in TensorRT 6.0 NVIDIA has already provided an official sample that runs the ONNX YOLOv3 model, and it is written in Python.

@JensenHJS (Author)

Thank you for your reply. I have solved my problem.

@ttdd11 commented Nov 25, 2019

@JensenHJS In this repo they use a YOLO last layer (not the three convolutional output layers). Were you able to get that into your ONNX model?

@JensenHJS (Author)

ONNX doesn't support the YOLO layer. The raw output of the .trt engine is passed through the YOLO layer afterwards, and that gives the final output.
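
For anyone with the same question: since the YOLO layer is not in the ONNX graph, its transform has to be applied to the raw engine outputs on the host. Here is a minimal NumPy sketch of that decode for one output head, assuming a 608x608 input and the standard yolov3 anchors for the 19x19 grid (all names are illustrative, not from this repo):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_output(raw, anchors, num_classes=80, input_size=608):
    """Apply the YOLO-layer transform to one raw (1, 255, H, W) engine output."""
    _, _, grid_h, grid_w = raw.shape
    # (1, 255, H, W) -> (H, W, 3, 85): 85 = 4 box coords + objectness + 80 classes
    pred = raw.reshape(len(anchors), 5 + num_classes, grid_h, grid_w)
    pred = pred.transpose(2, 3, 0, 1)

    # Per-cell grid offsets for the box-center transform.
    col = np.arange(grid_w).reshape(1, grid_w, 1)
    row = np.arange(grid_h).reshape(grid_h, 1, 1)
    stride = input_size / grid_w
    anchors = np.array(anchors, dtype=np.float32)

    boxes = np.empty(pred.shape[:3] + (4,), dtype=np.float32)
    boxes[..., 0] = (sigmoid(pred[..., 0]) + col) * stride  # center x, pixels
    boxes[..., 1] = (sigmoid(pred[..., 1]) + row) * stride  # center y, pixels
    boxes[..., 2] = np.exp(pred[..., 2]) * anchors[:, 0]    # width, pixels
    boxes[..., 3] = np.exp(pred[..., 3]) * anchors[:, 1]    # height, pixels
    objectness = sigmoid(pred[..., 4])
    class_probs = sigmoid(pred[..., 5:])
    return boxes, objectness, class_probs

# Example: the 19x19 head of yolov3-608 uses the three largest anchors;
# the finer heads (38x38, 76x76) are decoded the same way with their anchors.
raw = np.random.randn(1, 255, 19, 19).astype(np.float32)  # stand-in for TRT output
boxes, obj, cls = decode_yolo_output(raw, [(116, 90), (156, 198), (373, 326)])
```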
