Buffer donation to a jit function on GPU #1273
Comments
Buffer donation has been checked in!
Thanks Peter, do you know how I can leverage it to reduce the memory consumption in the example above? So far, even if I do
I still get a peak memory of 4 × 2 = 8 GB, and a message
I believe that means there wasn't an output with the same shape that could have reused that buffer (or there weren't as many such outputs as donated inputs).
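To illustrate the rule described above: donation only pays off when some output can alias the donated input, i.e. it has the same shape and dtype. The snippet below is a hedged sketch (the function `reduce_sum` is hypothetical, not from this thread) where the output is a scalar, so the donated buffer cannot be reused and JAX warns that the donation went unused.

```python
from functools import partial
import jax
import jax.numpy as jnp

# donate_argnums=(0,) marks the first positional argument as donated.
@partial(jax.jit, donate_argnums=(0,))
def reduce_sum(x):
    # The output has shape (), which cannot alias x's buffer,
    # so the donation cannot be honored here.
    return jnp.sum(x)

x = jnp.ones((8,), dtype=jnp.float32)
total = reduce_sum(x)  # computes correctly, but without buffer reuse
```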
Interesting - how come it doesn't work in this example, then? From my understanding, here there's 1 input and 1 output, both of shape and type
FYI, buffer donation is only supported on TPU at the moment; the XLA team is working to support this on CPU/GPU, which may be why we cannot use the donation here.
I see, thanks! Could you please reopen this issue then?
Fixed by #3800
Below is a CNN iteratively applied to a 2 GB input. It produces a 4 × 2 GB = 8 GB peak memory consumption.
Without JIT, the peak memory consumption is 2 × 2 GB = 4 GB, as expected.
It would be great to achieve comparable memory usage with JIT by donating the input buffer to the jitted function (not sure of the exact terminology).
Thanks a lot!
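For reference, the setup described above can be sketched as follows. This is a minimal, hedged illustration, not the original reproducer: `apply_cnn` is a hypothetical stand-in for the actual network, and the array is small rather than 2 GB. The point is the `donate_argnums` annotation, which (on backends that support donation) lets XLA reuse the input buffer for the output because they share shape and dtype.

```python
from functools import partial
import jax
import jax.numpy as jnp

def apply_cnn(x):
    # Placeholder for the real CNN; any shape-preserving computation works.
    return x * 0.5 + 1.0

@partial(jax.jit, donate_argnums=(0,))
def step(x):
    # Output matches x's shape and dtype, a prerequisite for buffer reuse.
    return apply_cnn(x)

x = jnp.ones((1024,), dtype=jnp.float32)  # stands in for the 2 GB input
for _ in range(3):
    # On supporting backends, x's old buffer is recycled each iteration;
    # the donated x must not be reused after the call.
    x = step(x)
```

On backends without donation support, this still runs correctly; JAX simply ignores the donation (with a warning) and allocates a fresh output buffer.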