[BUG] JSMA massive gpu memory consumption #187
Hi @Dontoronto, have you tested other attacks, such as PGD or CW, on your NVIDIA device? Since you're trying to attack ImageNet on a 6 GB card, I'm not sure yet whether your GPU is simply too small or whether it's a problem with the code.
Yes, I tested it. I'm currently running DeepFool and PGD attacks without any problems. The problem occurs at this line, with the variable …
Roger that, I'm going to do some testing and debugging to try to find the problem and fix it! 😘
I would like to give more information, but my computer is currently generating a DeepFool dataset. Thank you very much! 😃
It seems that I have found the cause of the problem: the input tensor has an overly large dimension in the calculation of the Jacobian matrix.

```python
def compute_jacobian(model, x):
    def model_forward(input):
        return model(input)

    jacobian = torch.autograd.functional.jacobian(model_forward, x)
    return jacobian
```

In the above code, even if I input just 3 images (from ImageNet), the GPU memory usage reaches 11 GB; 5 images => 16 GB, and 6 images => 36 GB. Even so, I'll try to improve the algorithm and make it work on ImageNet!
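As a side note (my own sketch, not code from torchattacks): `torch.autograd.functional.jacobian` on a batched input also materializes the cross-sample derivative blocks, so looping over the batch and computing one Jacobian at a time keeps only a single `(num_classes, C, H, W)` Jacobian on the GPU at once. `compute_jacobian_per_sample` below is a hypothetical helper, not part of the library.

```python
import torch

def compute_jacobian_per_sample(model, x):
    # Hypothetical sketch: compute the Jacobian one image at a time so the
    # cross-sample blocks of the batched Jacobian are never materialized;
    # peak memory stays at one (num_classes, C, H, W) Jacobian.
    jacobians = []
    for i in range(x.shape[0]):
        xi = x[i : i + 1]  # keep the batch dimension: (1, C, H, W)
        ji = torch.autograd.functional.jacobian(model, xi)
        # (1, num_classes, 1, C, H, W) -> (num_classes, C, H, W)
        jacobians.append(ji.squeeze(0).squeeze(1))
    return torch.stack(jacobians)  # (batch, num_classes, C, H, W)
```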
You are awesome! I really appreciate your effort :)
Hi @Dontoronto, on a bad note, I've been trying to reduce memory consumption in …, and here are my reasons why.

First, according to the original JSMA attack paper (Algorithm 2 and Algorithm 3), the JSMA attack tries to traverse all (p1, p2) pairs of tau, and the tau in …

Second, when computing the SM (saliency map), we need to run the addition once for each element in the matrix, an operation that is …

In the end, this is actually not a bug, and if you are planning to run JSMA attacks on …
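For a rough sense of scale (my own back-of-the-envelope arithmetic, not from this thread): an ImageNet input of shape (1, 3, 224, 224) has 150,528 features, so the dense (p1, p2) matrix built by the saliency-map addition has 150,528² entries, which in float32 is almost exactly the 84.41 GiB reported in the traceback below.

```python
# Back-of-the-envelope check of the pairwise (p1, p2) tensor size for one
# ImageNet-sized input (my own arithmetic, not code from torchattacks).
nb_features = 3 * 224 * 224              # 150_528 features per image
pairwise_elems = nb_features ** 2        # one entry per (p1, p2) pair
size_gib = pairwise_elems * 4 / 2**30    # float32, bytes -> GiB
print(f"{size_gib:.2f} GiB")             # 84.41 GiB, matching the traceback
```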
@rikonaka Sorry for causing you so much work. Everything you mentioned sounds plausible. I just stumbled over this while generating samples for my thesis. Unfortunately I only have a 6 GB GPU and can't use JSMA for the ImageNet case. I'll try to use OnePixel to get L0 attacks instead.
You can close this issue; if I have an update I'll comment below! 👍
✨ Short description of the bug [tl;dr]
Today I tried to run JSMA on an ImageNet sample with shape (1, 3, 224, 224). The JSMA code got stuck for a bit in the approximation, and then an error message popped up saying that JSMA needs to allocate 84.41 GiB of GPU memory, while my NVIDIA card only has 6 GB.
Looking into the code, I could see a lot of clones, inits, device transfers, etc., which cost a lot of memory. I think people smarter than me could optimize the code to work with lower memory consumption.
💬 Detailed code and results
```
Traceback (most recent call last):
  File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torchattacks\attacks\jsma.py", line 116, in saliency_map
    alpha = target_tmp.view(-1, 1, nb_features) + target_tmp.view(
  File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torch\utils\_device.py", line 78, in __torch_function__
    return func(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 84.41 GiB. GPU 0 has a total capacity of 6.00 GiB of which 4.21 GiB is free. Of the allocated memory 704.63 MiB is allocated by PyTorch, and 29.37 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
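A possible direction for reducing the peak memory of the line shown above, written as a hedged sketch rather than the torchattacks implementation: assuming `target_tmp` holds one gradient value per feature (shape `(batch, nb_features)`) and the truncated line broadcasts it against itself to build the full `(nb_features, nb_features)` sum, the pairwise addition could be processed in chunks along the p1 axis so only a slice is resident at a time.

```python
import torch

def chunked_pairwise_sum(target_tmp, nb_features, chunk=1024):
    # Sketch under the assumptions above: yield blocks of
    # alpha[b, p1, p2] = target_tmp[b, p1] + target_tmp[b, p2]
    # for p1 in [start, end), so peak memory scales with `chunk`
    # instead of with nb_features.
    t = target_tmp.view(-1, nb_features)
    for start in range(0, nb_features, chunk):
        end = min(start + chunk, nb_features)
        # block shape: (batch, end - start, nb_features)
        alpha_block = t[:, start:end].unsqueeze(2) + t.unsqueeze(1)
        yield start, end, alpha_block
```

Each block would then have to be combined with the corresponding slice of the other saliency-map terms and reduced before moving on, so this only helps if the rest of the computation can also be expressed per chunk.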