Multiple GPU #63
Comments
It should not be a bottleneck for generating the engine. You only have to save the engine the first time; afterwards you can load it from the engine file.
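A minimal sketch of that save/load workflow, assuming the TensorRT 7-era C++ API (the helper names `saveEngine`/`loadEngine` are just for illustration):

```cpp
// Sketch: serialize a built engine to disk once, then reload it on later runs.
#include <NvInfer.h>

#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Write the serialized engine to `path`.
void saveEngine(nvinfer1::ICudaEngine& engine, const std::string& path) {
    nvinfer1::IHostMemory* blob = engine.serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(blob->data()), blob->size());
    blob->destroy();  // TensorRT 7-style cleanup; newer releases use `delete blob;`
}

// Read the file back and deserialize it into an engine on the current device.
nvinfer1::ICudaEngine* loadEngine(nvinfer1::IRuntime& runtime, const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> data((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
    return runtime.deserializeCudaEngine(data.data(), data.size(), nullptr);
}
```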
Generating the engine is not the bottleneck. After creating the engine file, I want to run it on a specific GPU, so I set the GPU ID with cudaSetDevice(), but it did not work.
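For reference, cudaSetDevice() is normally expected to take effect as long as it runs before the TensorRT runtime, engine, and execution context are created for that device, and its return code should be checked. A sketch under that assumption (`loadEngine` is the hypothetical helper from the snippet above):

```cpp
// Sketch: select the deployment GPU before creating any TensorRT objects on it.
#include <NvInfer.h>
#include <cuda_runtime.h>

#include <cstdio>
#include <string>

// Hypothetical helper from the previous sketch.
nvinfer1::ICudaEngine* loadEngine(nvinfer1::IRuntime& runtime, const std::string& path);

bool runOnDevice(int deviceId, const std::string& enginePath, nvinfer1::ILogger& logger) {
    // Select the GPU and verify that the call actually succeeded.
    cudaError_t err = cudaSetDevice(deviceId);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaSetDevice(%d) failed: %s\n",
                     deviceId, cudaGetErrorString(err));
        return false;
    }

    // Everything created from here on is bound to `deviceId`.
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine = loadEngine(*runtime, enginePath);
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // ... copy inputs to the device and run inference with `context` here ...

    context->destroy();
    engine->destroy();
    runtime->destroy();
    return true;
}
```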
Have you ever done this experiment: for graphics cards with the same architecture, can an engine file generated on a lower-end card (GTX 1060) be used on a higher-end card (GTX 1080)?
Hi @guods, it's nice to see you again :) And for the second question: according to the TensorRT documentation, a serialized engine is specific to the GPU it was built on, so it is not guaranteed to work on a different card.
Thanks for your reply. I also read that part of the TensorRT documentation. Although cudaSetDevice() (called before creating the engine) returned an error, the engine was still created and gave correct results, which suggests that cudaSetDevice() has no effect here.
It should not return an error, emmm... what kind of error did you get?
I caused the error deliberately: I wanted to know whether the engine file is still generated properly even if I set the device incorrectly. I set it incorrectly and the file was still generated properly. For creating the engine, it
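One way to see what happened in that experiment is to print the error from the bad cudaSetDevice() call and then ask which device is actually active: if the call fails, the current device simply stays where it was (device 0 by default), so engine creation still succeeds, just on that GPU. A small sketch of that check:

```cpp
// Sketch: call cudaSetDevice() with an invalid index and see which GPU is really active.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaError_t err = cudaSetDevice(99);  // deliberately invalid device index
    std::printf("cudaSetDevice(99): %s\n", cudaGetErrorString(err));

    int current = -1;
    cudaGetDevice(&current);
    // A failed cudaSetDevice() leaves the current device unchanged (0 by default),
    // so later engine creation still runs, just on that default GPU.
    std::printf("active device: %d\n", current);
    return 0;
}
```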
I am not sure I understand your issue. As @zerollzeng said, the engine is not generic across cards with different architectures. Maybe just try setting CUDA_VISIBLE_DEVICES to the index of the graphics card on which you want to create and deploy the engine.
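For context, CUDA_VISIBLE_DEVICES hides all other GPUs from the process, e.g. `CUDA_VISIBLE_DEVICES=1 ./your_app` on the command line; the one remaining card then appears as device 0 inside the process. If you prefer to set it from code, it has to happen before the first CUDA runtime call. A minimal sketch assuming Linux (POSIX setenv):

```cpp
// Sketch: restrict the process to one physical GPU via CUDA_VISIBLE_DEVICES.
#include <cuda_runtime.h>

#include <cstdio>
#include <cstdlib>

int main() {
    // Must run before the first CUDA call, otherwise the runtime ignores it.
    // Physical GPU 1 becomes device 0 inside this process.
    setenv("CUDA_VISIBLE_DEVICES", "1", /*overwrite=*/1);

    int count = 0;
    cudaGetDeviceCount(&count);
    std::printf("visible devices: %d\n", count);  // expected: 1
    return 0;
}
```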
Thank you for your work, but I have some questions:
How does the generated engine run on multiple graphics cards? How do I set the GPU ID?
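One common pattern, sketched here under the assumption of the TensorRT 7-era C++ API, is to deserialize a separate engine and execution context for each GPU (they cannot be shared across devices) and to call cudaSetDevice() in the thread that drives each card; `loadEngine` is the hypothetical helper from the earlier sketch:

```cpp
// Sketch: run the same serialized engine on every visible GPU, one worker thread per card.
#include <NvInfer.h>
#include <cuda_runtime.h>

#include <string>
#include <thread>
#include <vector>

// Hypothetical helper from the earlier sketch.
nvinfer1::ICudaEngine* loadEngine(nvinfer1::IRuntime& runtime, const std::string& path);

void workerThread(int deviceId, const std::string& enginePath, nvinfer1::ILogger& logger) {
    cudaSetDevice(deviceId);  // bind this thread's TensorRT objects to one GPU

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine = loadEngine(*runtime, enginePath);
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // ... allocate device buffers and call context->enqueueV2(...) in a loop ...

    context->destroy();
    engine->destroy();
    runtime->destroy();
}

void runOnAllGpus(const std::string& enginePath, nvinfer1::ILogger& logger) {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    std::vector<std::thread> workers;
    for (int id = 0; id < deviceCount; ++id) {
        workers.emplace_back(workerThread, id, enginePath, std::ref(logger));
    }
    for (auto& t : workers) {
        t.join();
    }
}
```

The engine file on disk is shared; only the deserialized runtime objects are duplicated per device, and each worker can process its own slice of the incoming requests.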