When exporting to CoreML (mlmodel format), NMS is not added #7011
Comments
@mattrichard-datascience 👋 Hi, thanks for letting us know about this possible problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

For Ultralytics to provide assistance your code should also be:

If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem. Thank you! 😃
Similar issue here. Can we export a CoreML model with NMS layers?
There's a script which I got from another GitHub repo. It worked really well for YOLOv5 version 5 models; however, for version 6 models it's not quite working. Perhaps someone with more knowledge of PyTorch than me could debug it. Usage (the script must be in the yolov5 repo directory): It used to output an mlmodel file which had the "Preview" tab in Xcode, as well as NMS, and worked great with Vision. I had updated one line from the original here to fix one error, though it's still not working fully. The error now is about the shape of a NumPy array not matching the input size...
@mattrichard-datascience @jaehyunshinML - After much effort and trial and error, I got the .mlmodel files from the YOLOv5 (v6.1) export.py script to work within my iOS app on still images (that is my use case). If this is something you are also doing with your models, I'd be happy to share the code.
That is amazing! It would be wonderful if you could share your code/repo!
@jaehyunshinML - here is the main section of code where I do the "decoding" of the YOLOv5 output. Further up in that same file I actually set up the model for inference. Apple's documentation really helped in setting some of this up, along with a code snippet from another user and issue on this repo, though I did make some modifications to make it more flexible to the model used. I'll probably make a full demo repo this weekend and share that as well!
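For anyone following along, the "decoding" being described can be sketched in NumPy (the in-app version is Swift, but the logic is the same). This assumes the standard YOLOv5 final output of shape (1, 25200, 5 + C), where each row is center-x, center-y, width, height, objectness, then C class scores; the function name and threshold here are illustrative, not taken from the actual app code.

```python
import numpy as np

def decode_predictions(output, conf_thresh=0.25):
    """Filter raw YOLOv5 predictions of shape (1, N, 5 + C).

    Each row is [cx, cy, w, h, objectness, class_0 ... class_{C-1}].
    Returns (boxes_xyxy, scores, class_ids) for rows above conf_thresh.
    """
    preds = output[0]                         # (N, 5 + C)
    obj = preds[:, 4]
    cls_scores = preds[:, 5:] * obj[:, None]  # class confidence = obj * class prob
    class_ids = cls_scores.argmax(axis=1)
    scores = cls_scores.max(axis=1)
    keep = scores > conf_thresh
    boxes = preds[keep, :4]
    # convert center-x, center-y, width, height -> x1, y1, x2, y2
    xyxy = np.empty_like(boxes)
    xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2
    xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2
    xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2
    xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2
    return xyxy, scores[keep], class_ids[keep]
```

After this filtering step you would still run NMS on the surviving boxes, since the plain export.py output does not include it.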
@mshamash I'm learning from your coreml_export-new.py.txt and another friend's code, but you solved it, which is very helpful to me.
@liuzhiguai the coreml_export script (originally from here) worked well for models up to and including YOLOv5 version 5, but not on the new v6 models due to some network changes. I'm sure the code could be updated, but my knowledge of PyTorch/NumPy is limited, so it's a bit beyond me... The other code/repo you mentioned supposedly only works on v4 models and earlier and is no longer supported by the author. So depending on your use case and desired model version, you may be able to use them. I originally used the coreml_export.py script in my app; however, now I want to use the new YOLOv5n models, which are so much smaller, and for that I had to figure out a workaround. Let me know if you have any questions! My email inbox is always open.
@mshamash Yes, the code I used has a lot of issues with version 6.1, and I also don't know enough about PyTorch. I'm glad to see the code you provided. Now I'm going to learn your method. 😁
@liuzhiguai - I didn't add NMS to the model. That's something the other scripts you and I shared could do, but this method can't (since I use the output directly from the YOLOv5 export script). The downside is that you cannot use the "Preview" tab in Xcode, but you can still build a very simple app to take any image and run inference, which is what I'm hoping to do soon as an example. Since I'm doing this for still images, the speed is already quite good, but if you're working with video streams there may be a better/faster way to iterate through. What were you planning to use your model for (images? video?)?

Note that the exported YOLOv5 mlmodel outputs 4 different MultiArrays; the last one is the most important one here. I believe previously you had to use the other 3 outputs and the sigmoid function to simulate the detection layer, which is now included in the model. I'm guessing they kept the other 3 outputs just for backwards compatibility, since the detection layer is now included and the final output is already somewhat processed. You can check your mlmodel's outputs at http://netron.app. I didn't know that adding NMS directly to the model could also improve the efficiency of detection; thankfully this seems to work well enough for me.
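For reference, simulating the detection layer from one of the raw grid outputs (the old approach mentioned above) looks roughly like the sketch below. It follows the published YOLOv5 Detect-layer math (sigmoid, then the ×2 offset and squared-width transforms); the function and argument names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_head(raw, anchors, stride):
    """Simulate YOLOv5's Detect layer for one raw head output.

    raw: (na, ny, nx, 5 + C) logits for one grid; anchors: (na, 2) in
    pixels; stride: grid cell size in pixels. This mirrors what the
    in-model detection layer now does, which is why the other three
    outputs no longer need to be combined by hand.
    """
    na, ny, nx, _ = raw.shape
    y = sigmoid(raw)
    # grid of cell indices as (x, y), shape (1, ny, nx, 2)
    gx, gy = np.meshgrid(np.arange(nx), np.arange(ny))
    grid = np.stack((gx, gy), axis=-1)[None]
    xy = (y[..., 0:2] * 2.0 - 0.5 + grid) * stride            # box centers in pixels
    wh = (y[..., 2:4] * 2.0) ** 2 * anchors[:, None, None, :]  # box sizes in pixels
    return np.concatenate((xy, wh, y[..., 4:]), axis=-1)
```

The full model applies this per head (strides 8, 16, 32 for the standard models) and concatenates the results into the final (1, 25200, 5 + C) array.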
@mshamash Thank you. I saw the last output and had been combining the other three outputs. Because I want to do real-time video detection and tracking on video frames received from a UAV, I will pay attention to detection speed (just my guess). My research just came to this part; I tried to use Turi Create, but the M1 is not supported. So I was deciding between processing the final output myself or adding NMS to the model. Fortunately, I saw your previous answer 👍🏻 and your answer today 😁. At present, none of the methods I have seen solves these problems well: Create ML uses YOLOv2, which leads to low accuracy; YOLOv5 is very good, but there is no NMS module after converting to mlmodel, so [VNRequest.results] cannot be converted to [VNRecognizedObjectObservation] for direct use. So I will try your current method and do some experiments. If I get some results, I'll tell you. If you see other good methods, please share them too 😏 💪🏻
@liuzhiguai - Good luck with the project! Another user who used the same detection code (I based my array flattening/processing code on theirs) got their setup to detect at 60 fps on a 320x320 video stream; see the video here. So I imagine the only part which might need more optimization is the NMS, or maybe it will be fast enough; the only way to know is by trying. UAVs have very high resolution output if I'm not mistaken, so you may wish to consider tiling the frames as well to detect your relatively small objects. I do this too in my app for detecting small features with a model that was trained at 640x640... no point in training multi-megapixel models! Cheers.
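For comparison, the NMS step that currently has to run in a loop outside the model can be sketched as standard greedy NMS; the names and default threshold are illustrative:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression over boxes in (x1, y1, x2, y2) format.

    Returns indices of kept boxes, highest score first. This is the
    post-processing step that moving NMS into the CoreML model avoids.
    """
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # intersection of box i with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping box i too much
    return keep
```

A naive Python/Swift loop like this is O(n²) in the worst case, which is why an in-model NMS layer helps when there are hundreds or thousands of candidate boxes.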
@liuzhiguai - I think I managed to fix the Python script to export the final .pt model to a .mlmodel including NMS! Now that the NMS is done in the model, it's much quicker than using the for loop, especially when you have many objects (hundreds/thousands). I can also use the "Preview" tab for the model in Xcode now, and work with several other Vision methods. The script is attached here; let me know how it works for you... It needs to be in the 'yolov5' repo directory. Usage:
@mshamash Thank you very much. I've been busy with some other things recently. I've tried your previous method (using a for loop to split the (1 x 25200 x (5+C)) output), which is very effective. I'll try your new method, and if I make progress, I will reply to you.
@liuzhiguai @mattrichard-datascience - I submitted pull request #7263, which has an updated export.py script so that the exported CoreML model has an NMS layer.
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs. Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
Search before asking
YOLOv5 Component
No response
Bug
When exporting to CoreML (mlmodel format), NMS is not added.
Environment
No response
Minimal Reproducible Example
No response
Additional
When exporting to CoreML (mlmodel format), NMS is not added.
Are you willing to submit a PR?