I'm using this library on a server that has 4 GPUs with 16 GB each. When I set the number of pre-annotated images to more than 8, one GPU runs out of memory and the process crashes.
Is it possible to run the process across multiple GPUs? If not, what is the optimal size for the annotated images? I've been resizing my images from 960 down to 448, and even at that lower resolution I still hit the memory error. What type of GPU is recommended for this tool?
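For reference, this is roughly how I've been downscaling the images before annotation. The folder names and the 448 px target are just from my setup, not anything the library requires:

```python
from pathlib import Path
from PIL import Image

SRC = Path("images_original")   # folder with the full-resolution images
DST = Path("images_448")        # downscaled copies go here
TARGET = 448                    # longest side after resizing

DST.mkdir(exist_ok=True)
for path in SRC.glob("*.jpg"):
    img = Image.open(path)
    # thumbnail() resizes in place while keeping the aspect ratio,
    # so the longest side ends up at TARGET pixels.
    img.thumbnail((TARGET, TARGET))
    img.save(DST / path.name)
```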
I faced the same issue when running with an extremely small dataset (only 7 images), and it could not run on my 4 GB VRAM GPU (I load the GroundedSAM model, not SegGPT, but they all inherit from DetectionBaseModel, so it's the same issue).
I read the code, and it seems that all images are processed into memory (GPU memory in this case) at the same time to build the dataset before it is saved, which blows up the memory.
If that is correct, I'm going to look into processing images serially (or in batches) to reduce memory usage on small GPUs, as in the sketch below.
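Here is a rough sketch of what I mean, assuming the base model's `predict()` takes a single image path (which is how I read the DetectionBaseModel interface); the ontology, folder path, and the way detections are collected are placeholders from my setup, not the library's actual auto-label implementation:

```python
import gc
from pathlib import Path

import torch
from autodistill.detection import CaptionOntology
from autodistill_grounded_sam import GroundedSAM

base_model = GroundedSAM(ontology=CaptionOntology({"person": "person"}))

image_paths = sorted(Path("images_448").glob("*.jpg"))
results = {}

for path in image_paths:
    with torch.no_grad():
        # Run inference on one image at a time, so only that image's
        # tensors live on the GPU instead of the whole dataset.
        detections = base_model.predict(str(path))
    results[str(path)] = detections  # saving to disk per image is omitted here
    # Release cached GPU memory between images so a small card
    # doesn't accumulate allocations across the run.
    torch.cuda.empty_cache()
    gc.collect()
```

If this holds up, writing each image's detections to disk as it finishes (instead of keeping everything in memory until the end) should keep peak usage roughly constant regardless of dataset size.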