Handle SVM/XVM of VMs with VMDK volumes attached. #407
Comments
The VMDK volume is local to the VM folder and still independent-persistent: scsi3:0.present = "TRUE"
What is the result of docker volume ls before and after SVM?
This is expected, right? The ESX server is hardwired to look into the dockvols folder on the VM datastore.

Regarding Issue 1: Why? How would it help? The volume would still be inaccessible to 'docker volume ls' or any other docker command. The good news is that storage migration won't affect the data path of a running container. As a side note, the migration workflow has an advanced setting where you can specify the destination for an individual disk. One possible workaround would be to migrate the VMDK docker volume to the dockvols folder on the destination VM datastore.

Regarding Issue 2: How did you initiate the migration - CLI, API, or UI? Compute + storage or storage-only migration? Powered-on or powered-off VM?
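For illustration, a minimal sketch of that workaround, assuming the stray VMDK has already been copied by SVM into the VM folder on the destination datastore (so the rename stays within a single datastore). The helper name and paths below are hypothetical; it simply mirrors how the ESX service shells out to vmkfstools in the logs later in this issue.

```python
# Hypothetical helper, not part of the plugin: rename a docker-volume VMDK
# that SVM left in the VM folder back into that datastore's dockvols folder.
import subprocess

def move_into_dockvols(datastore, vm_folder, vmdk_name, volume_name):
    src = "/vmfs/volumes/{0}/{1}/{2}".format(datastore, vm_folder, vmdk_name)
    dst = "/vmfs/volumes/{0}/dockvols/{1}.vmdk".format(datastore, volume_name)
    # vmkfstools -E renames a virtual disk (descriptor plus flat extent).
    subprocess.check_call(["/sbin/vmkfstools", "-E", src, dst])

# Example using the paths from the listing further down (hypothetical call):
# move_into_dockvols("4dfd35fc-5e364ec8-51f1-a0b3cce97ec0", "2",
#                    "2_1.vmdk", "clonevol-1")
```

Note that the VM's scsi3:0.fileName would still have to be repointed (or the volume detached and reattached through the plugin) afterwards, so this is only a sketch of the idea, not a tested procedure.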
Hi Prashant,

Regarding Issue 1

> This is certainly not what we want because the disk is no longer in the
> folder managed by the VMDK plugin server.
> Why? How would it help? Volume would still be inaccessible to 'docker
> volume ls' or any other docker command.

TG> The volume is no longer in control of the plugin, its in a location …

> As a side note, the migration workflow has an advanced setting where you
> can specify the destination for an individual disk.

TG> That is disk only migration, SVM is where the VM now starts running on …

Regarding Issue 2,

TG> This is related to issue 288, but looks like we may want to always keep …

> How did you initiate the migration - CLI, API, or UI? Compute + storage or
> storage-only migration?

TG> SVM is done via VC and the VM is on all the time. I don't have two …

Govindan

On Thu, May 26, 2016 at 4:29 AM, Prashant Dhamdhere <…>
Discussed this more offline, and we may need a feature request to support skipping disks during SVM. For the plugin, a migration of the VM must not move the disks out of the folder that the plugin uses to manage all of the docker volumes. Otherwise, this leads to scenarios that simply can't be handled by the plugin at all.
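To make the "specify destination for individual disk" idea above concrete, here is a hedged pyVmomi sketch (not code from this repo) of a storage-only relocate that pins a disk to its current datastore with a per-disk DiskLocator. Whether SVM honors this for independent disks the way the plugin needs is exactly what the feature request would have to confirm; vm, dest_ds, and volume_ds are placeholders assumed to be looked up elsewhere.

```python
# Hedged sketch: storage-only relocation that keeps docker-volume disks
# on their current datastore while the rest of the VM moves to dest_ds.
from pyVmomi import vim

def relocate_keep_volume_disks(vm, dest_ds, volume_ds):
    spec = vim.vm.RelocateSpec()
    spec.datastore = dest_ds  # default destination for VM files and disks
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk) and \
           dev.backing.datastore == volume_ds:
            locator = vim.vm.RelocateSpec.DiskLocator()
            locator.diskId = dev.key
            locator.datastore = volume_ds  # leave this disk where it is
            spec.disk.append(locator)
    return vm.RelocateVM_Task(spec)
```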
It would be nice to have a quick summary of the current behavior: what we should expect and how things behave with SVM as of today.
I understand that - a copy will be created in the VM folder. What is "local disks" in this context - just independent disks attached to the new (relocated) VM?
I do not understand that. How is "creating a new dockvols dir" related to "containers in the VM can't see volumes created earlier"? What happens to the original volumes - can't they be attached? You seem to be saying "the original is still there and can be attached if the datastore is accessible, but a new copy is created in the local VM folder and the plugin does not see it". What happens with the metadata - is it getting moved too? Can we use it to mitigate the volume location?
With the way docker volumes are managed - a) outside of the VM folder, in a location chosen by the ESX service or the user, and b) as independent disks - a migration (SVM/XVM) is not supportable as of now. vMotion is supported; only the storage migration workflows aren't. Plus we don't have a use case to support these for now, hence closing.
There are two issues with SVM, and neither is impacted by the KV side car.
Issue 1
SVM copies the VMDK volumes (attached as independent disks to the VM) from the source to the destination datastore (into the VM folder). This is certainly not what we want because the disk is no longer in the folder managed by the VMDK plugin server.
a. VMDK volume created.
05/25/16 09:08:09 1009714736 [INFO ] *** createVMDK: /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk opts = {}
05/25/16 09:08:09 1009714736 [DEBUG ] SETTING VMDK SIZE to 100mb for /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk
05/25/16 09:08:09 1009714736 [DEBUG ] Running cmd /sbin/vmkfstools -d thin -c 100mb /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk
05/25/16 09:08:09 1009714736 [DEBUG ] Running cmd /usr/lib/vmware/vmdkops/bin/mkfs.ext4 -qF -L clonevol-1 /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1-flat.vmdk
05/25/16 09:08:09 1009714736 [DEBUG ] executeRequest ret = None
05/25/16 09:08:09 1009714736 [DEBUG ] lib.vmci_reply: VMCI replied with errcode 0
05/25/16 09:08:49 1009714736 [DEBUG ] lib.vmci_get_one_op returns 7, buffer '{"cmd":"attach","details":{"Name":"clonevol-1"}}'
b. Attach the volume to the VM.
05/25/16 09:27:00 1009719737 [INFO ] *** attachVMDK: /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk to 2 VM uuid = 564d392b-3cb8-5a4a-862a-3df7a590d97e
05/25/16 09:27:00 1009719737 [DEBUG ] controller_key = 1003 slot = 0
05/25/16 09:27:01 1009719737 [DEBUG ] Set status=attached disk=/vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk VM uuid=564d392b-3cb8-5a4a-862a-3df7a590d97e
05/25/16 09:27:01 1009719737 [INFO ] Disk /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk successfully attached. disk_slot = 0, bus_number = 3
05/25/16 09:27:01 1009719737 [DEBUG ] executeRequest ret = {'Bus': '3', 'Unit': '0'}
After SVM, the same disk is renamed and copied to the destination datastore:
[root@localhost:/vmfs/volumes/4dfd35fc-5e364ec8-51f1-a0b3cce97ec0/2] ls -l *vmdk *vmfd
-rw------- 1 root root 154 May 25 10:49 2_1-d286f072022a0891.vmfd
-rw------- 1 root root 104857600 May 25 10:49 2_1-flat.vmdk
-rw------- 1 root root 575 May 25 10:49 2_1.vmdk
The side car and the VMDK volume are in the VM folder:
[root@localhost:/vmfs/volumes/4dfd35fc-5e364ec8-51f1-a0b3cce97ec0/2] grep vmdk *vmx
scsi0:0.fileName = "2-000001.vmdk"
scsi3:0.fileName = "2_1.vmdk"
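For completeness, a hedged sketch (not plugin code) of how this state could be spotted from the .vmx shown above: any scsiX:Y.fileName that does not resolve under a dockvols folder has left the plugin's managed location. The helper is hypothetical, and a real check would also need the KV side car to tell docker volumes apart from regular VM disks such as scsi0:0.

```python
# Hypothetical check: list vmx disk entries whose backing file is not
# under a dockvols folder (i.e. disks SVM pulled into the VM folder).
import os
import re

FILENAME_RE = re.compile(r'^(scsi\d+:\d+)\.fileName\s*=\s*"(.+)"')

def stray_docker_volumes(vmx_path):
    """Return (device, path) pairs whose backing file is not under dockvols."""
    vm_folder = os.path.dirname(vmx_path)
    strays = []
    with open(vmx_path) as f:
        for line in f:
            m = FILENAME_RE.match(line.strip())
            if not m:
                continue
            dev, fname = m.groups()
            full = fname if fname.startswith("/") else os.path.join(vm_folder, fname)
            if "/dockvols/" not in full:
                strays.append((dev, full))
    return strays

# On the migrated VM above, this would report scsi3:0 -> .../2/2_1.vmdk
# (and also the regular boot disk, hence the need for the KV metadata).
```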
Issue 2
Since the VM is now running on a different datastore, the plugin server now creates volumes in the destination datastore, effectively abandoning all volumes created earlier. Containers in the VM can't view or attach any of the volumes created before the migration.
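As an illustration of why this happens (a simplified sketch, not the actual plugin code): if the dockvols path is always derived from the datastore the VM currently runs on, the same volume name resolves to a different path once the VM has moved. The function and datastore names below are hypothetical.

```python
# Simplified sketch of datastore-relative volume resolution.
import os

DOCKVOLS = "dockvols"

def volume_path(vm_datastore, volume_name):
    """Resolve a docker volume name relative to the VM's current datastore."""
    return os.path.join("/vmfs/volumes", vm_datastore, DOCKVOLS,
                        volume_name + ".vmdk")

# Before SVM the VM is on datastore1, so clonevol-1 resolves there:
#   volume_path("datastore1", "clonevol-1")
# After SVM the VM runs on datastore2, so the same lookup now points at
# /vmfs/volumes/datastore2/dockvols/clonevol-1.vmdk, which does not exist,
# and any newly created volumes land on datastore2 as well.
```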