Opening this since @trimental is much more knowledgeable about the GPU-related parts.
After #74 the final piece of the lower peak memory usage puzzle is avoiding decoding the entire compressed image blob into memory just to render it. The reasonable solution here, to me at least, seems to be chunking the image horizontally and rendering N rows at a time instead of the whole thing at once. That would lower peak memory usage since we control the number of rows per chunk and can get away with a smaller buffer. Does this seem workable @trimental?
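Roughly what I have in mind, as a hedged sketch assuming a wgpu-style `write_texture` API (`decode_rows` is a hypothetical stand-in for whatever incremental decoding the image decoder exposes, not an existing function):

```rust
// Decode and upload CHUNK_ROWS rows at a time instead of the whole image,
// so only one chunk's worth of pixels is resident in CPU memory at once.
const CHUNK_ROWS: u32 = 64;

fn upload_in_chunks(
    queue: &wgpu::Queue,
    texture: &wgpu::Texture,
    width: u32,
    height: u32,
    // Hypothetical: (start_row, n_rows) -> tightly packed RGBA8 bytes.
    mut decode_rows: impl FnMut(u32, u32) -> Vec<u8>,
) {
    let mut row = 0u32;
    while row < height {
        let rows = CHUNK_ROWS.min(height - row);
        // Only `rows * width * 4` bytes live on the CPU side here.
        let pixels = decode_rows(row, rows);
        queue.write_texture(
            wgpu::ImageCopyTexture {
                texture,
                mip_level: 0,
                // Each chunk lands at its own vertical offset in the texture.
                origin: wgpu::Origin3d { x: 0, y: row, z: 0 },
                aspect: wgpu::TextureAspect::All,
            },
            &pixels,
            wgpu::ImageDataLayout {
                offset: 0,
                bytes_per_row: Some(width * 4),
                rows_per_image: Some(rows),
            },
            wgpu::Extent3d {
                width,
                height: rows,
                depth_or_array_layers: 1,
            },
        );
        row += rows;
    }
}
```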
Right now the most efficient way to upload our image data from normal CPU RAM to GPU texture memory is to use write_texture, which stores the data in staging memory and submits it to the GPU together with other queued GPU commands.
Even if we made multiple write_texture calls, one per chunk of rows, I believe they would all still sit in staging memory until the submit, so there would be no point. It might be possible to stream an upload into a GPU buffer and then copy that to a GPU texture, but that would come with performance hits.
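To make the staging point concrete, here is a minimal sketch assuming wgpu's queue semantics (`flush_pending_writes` is an illustrative helper, not existing code): N chunked writes before a single submit still peak at roughly the full image size in staging, so bounding the peak would mean forcing a submit between chunks.

```rust
// Assumption: data handed to write_texture sits in staging memory until the
// next queue submit, so we flush between chunks to bound peak usage.
fn flush_pending_writes(device: &wgpu::Device, queue: &wgpu::Queue) {
    // An empty submit pushes any pending write_texture staging data to the GPU.
    queue.submit(std::iter::empty());
    // Waiting lets the staging allocations actually be reclaimed before the
    // next chunk is decoded; this synchronization is the performance hit.
    device.poll(wgpu::Maintain::Wait);
}
```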
I think the only proper way to get even lower memory usage would be to use a compressed texture format and write that to the GPU.
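For reference, a hedged sketch of what that could look like with wgpu's block-compressed formats. BC1 is chosen purely as an example: it needs the TEXTURE_COMPRESSION_BC feature, dimensions rounded to 4x4 blocks, and the image data already encoded as BC1 blocks, which our decoder does not currently produce.

```rust
// Create a texture in a block-compressed format so the GPU-resident copy is
// much smaller than RGBA8 (BC1 is 8 bytes per 4x4 block vs 64 bytes raw).
fn create_bc1_texture(device: &wgpu::Device, width: u32, height: u32) -> wgpu::Texture {
    device.create_texture(&wgpu::TextureDescriptor {
        label: Some("compressed image"),
        size: wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
        mip_level_count: 1,
        sample_count: 1,
        dimension: wgpu::TextureDimension::D2,
        format: wgpu::TextureFormat::Bc1RgbaUnorm,
        usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
        view_formats: &[],
    })
}
```

The chunked upload from above would still apply; the chunks would just be rows of compressed blocks instead of raw pixels.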