Trying to remove a volume from a different node multiple times leaves the volume in an unusable state #716
@kerneltime
I have to investigate the behavior with vim.virtualdiskmanager, but I think vmkfstools -U is too destructive to allow execution without checking the attached status first.
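For reference, a minimal pyVmomi sketch of the kind of vim.VirtualDiskManager call that investigation would exercise. The host, credentials, datastore path, and single-datacenter assumption are all placeholders, and the task result is not waited on; this is not how the service itself issues the delete.

```python
from pyVim.connect import SmartConnect, Disconnect
import ssl

ctx = ssl._create_unverified_context()  # lab use only; skips cert validation
si = SmartConnect(host="esx-host", user="root", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vdm = content.virtualDiskManager
    dc = content.rootFolder.childEntity[0]  # assumes a single datacenter
    # DeleteVirtualDisk_Task removes the descriptor and its extents; the open
    # question above is whether it fails cleanly while the disk is attached.
    task = vdm.DeleteVirtualDisk_Task(
        name="[datastore1] dockvols/myvol.vmdk", datacenter=dc)
finally:
    Disconnect(si)
```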
vmkfstools removes the sidecar (KV files) ahead of unlinking the vmdk itself. So it's possible that the command errors but has already removed the sidecar file. That looks like a bug in disk removal, since it leaves the disk with its metadata removed. The underlying error appears to be a busy error at the lowest level, which is expected given the disk is attached and hence opened.
Yes, removeVMDK needs to check that the disk isn't in use; there is no such check in vmdk_ops.py:removeVMDK(), which is a bug in itself. Issue #719 will fix this.
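A minimal sketch of the kind of guard described here, assuming a hypothetical `vol_attached_to()` lookup helper and an illustrative error-dict return shape; neither is the project's actual API, and the real fix lives in PR #719.

```python
import logging
import subprocess

def removeVMDK(vmdk_path):
    """Delete a volume's VMDK only if no VM currently has it attached."""
    attached_vm = vol_attached_to(vmdk_path)  # hypothetical lookup helper
    if attached_vm:
        msg = "Failed to remove {0}: disk is attached to VM {1}".format(
            vmdk_path, attached_vm)
        logging.warning(msg)
        return {u'Error': msg}  # bail out before any metadata is touched

    # Only now is it safe to let vmkfstools unlink the disk and its sidecar.
    rc = subprocess.call(["/sbin/vmkfstools", "-U", vmdk_path])
    if rc != 0:
        return {u'Error': "vmkfstools failed with rc={0}".format(rc)}
    return None
```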
@govint
@brunotm, absolutely, but removeVMDK must handle that check, as it has no idea who is asking it to remove a VMDK. And with the multi-tenant support coming up, multiple VMs in the same tenant can have access to the same VMDKs, so this scenario could be common.
@govint
FWIW, I think adding overhead to every remove operation (check before remove) is less appealing than adding overhead to very few operations (recover on failure). Besides, I suspect checking for usage is not going to be trivial.
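A sketch of the recover-on-failure alternative described above, assuming hypothetical `backup_sidecar()`, `sidecar_exists()`, `restore_sidecar()`, and `delete_vmdk()` helpers that are not part of the project; it only illustrates the trade-off, not an actual implementation.

```python
def remove_with_recovery(vmdk_path):
    """Attempt the delete unconditionally; repair metadata if it fails."""
    sidecar_copy = backup_sidecar(vmdk_path)  # cheap copy of the KV sidecar
    rc = delete_vmdk(vmdk_path)               # e.g. a vmkfstools -U wrapper
    if rc != 0 and not sidecar_exists(vmdk_path):
        # The delete failed part-way (disk busy) but already dropped the
        # sidecar; put it back so the volume stays usable.
        restore_sidecar(vmdk_path, sidecar_copy)
    return rc
```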
Fixed with Bruno's PR #719 |
I ran into this issue when trying to test the retry logic for volumes.
Node 1
Node 2
The first time, I get the following logs:
After that, each retry gives the following result: