This repository has been archived by the owner on Nov 9, 2020. It is now read-only.

Handle SVM/XVM of VMs with VMDK volumes attached. #407

Closed
govint opened this issue May 25, 2016 · 7 comments

@govint
Contributor

govint commented May 25, 2016

There are two issues with SVM, and neither is affected by the KV sidecar.

  1. Independent disks are getting copied into the VM folder as local disks.
  2. The server-side plugin creates a new dockvols folder on the destination datastore; containers in the VM can't see or attach VMDK volumes created earlier (those are simply abandoned).

Issue 1

SVM copies the VMDK volumes (attached as independent disks to the VM) from the source to the destination datastore, into the VM folder. This is certainly not what we want, because the disk is no longer in the folder managed by the VMDK plugin server.

a. VMDK volume created.

05/25/16 09:08:09 1009714736 [INFO ] *** createVMDK: /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk opts = {}
05/25/16 09:08:09 1009714736 [DEBUG ] SETTING VMDK SIZE to 100mb for /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk
05/25/16 09:08:09 1009714736 [DEBUG ] Running cmd /sbin/vmkfstools -d thin -c 100mb /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk
05/25/16 09:08:09 1009714736 [DEBUG ] Running cmd /usr/lib/vmware/vmdkops/bin/mkfs.ext4 -qF -L clonevol-1 /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1-flat.vmdk
05/25/16 09:08:09 1009714736 [DEBUG ] executeRequest ret = None
05/25/16 09:08:09 1009714736 [DEBUG ] lib.vmci_reply: VMCI replied with errcode 0
05/25/16 09:08:49 1009714736 [DEBUG ] lib.vmci_get_one_op returns 7, buffer '{"cmd":"attach","details":{"Name":"clonevol-1"}}'

b. Attach the volume to the VM.

05/25/16 09:27:00 1009719737 [INFO ] *** attachVMDK: /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk to 2 VM uuid = 564d392b-3cb8-5a4a-862a-3df7a590d97e
05/25/16 09:27:00 1009719737 [DEBUG ] controller_key = 1003 slot = 0
05/25/16 09:27:01 1009719737 [DEBUG ] Set status=attached disk=/vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk VM uuid=564d392b-3cb8-5a4a-862a-3df7a590d97e
05/25/16 09:27:01 1009719737 [INFO ] Disk /vmfs/volumes/523c14b7-e5561042-5e2f-002481aaea6c/dockvols/clonevol-1.vmdk successfully attached. disk_slot = 0, bus_number = 3
05/25/16 09:27:01 1009719737 [DEBUG ] executeRequest ret = {'Bus': '3', 'Unit': '0'}

After SVM, the same disk is renamed and copied to the destination datastore:

[root@localhost:/vmfs/volumes/4dfd35fc-5e364ec8-51f1-a0b3cce97ec0/2] ls -l *vmdk *vmfd

-rw------- 1 root root 154 May 25 10:49 2_1-d286f072022a0891.vmfd
-rw------- 1 root root 104857600 May 25 10:49 2_1-flat.vmdk
-rw------- 1 root root 575 May 25 10:49 2_1.vmdk

The sidecar and VMDK volume are in the VM folder:

[root@localhost:/vmfs/volumes/4dfd35fc-5e364ec8-51f1-a0b3cce97ec0/2] grep vmdk *vmx
scsi0:0.fileName = "2-000001.vmdk"
scsi3:0.fileName = "2_1.vmdk"

Issue 2

Since the VM is now running on a different datastore, the plugin server creates volumes on the destination datastore, effectively abandoning all volumes created earlier. Containers in the VM can't view or attach any of them.
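
For illustration, a minimal sketch (names are illustrative, not the actual service code) of the lookup behavior implied by the logs above: the service resolves volume paths relative to the VM's current datastore, so after SVM the old dockvols folder is never consulted.

import os

DOCK_VOLS = "dockvols"  # the folder the ESX service manages, per the logs above

def dockvols_path(datastore):
    # The service derives the volume folder from the VM's *current* datastore.
    return os.path.join("/vmfs/volumes", datastore, DOCK_VOLS)

def vmdk_path(datastore, vol_name):
    return os.path.join(dockvols_path(datastore), vol_name + ".vmdk")

# Before SVM (VM on DS1): /vmfs/volumes/DS1/dockvols/clonevol-1.vmdk exists.
# After SVM (VM on DS2): the service looks under /vmfs/volumes/DS2/dockvols/,
# finds nothing, and every volume created earlier is effectively abandoned.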

@govint
Contributor Author

govint commented May 25, 2016

The VMDK volume is local to the VM folder and is still marked independent-persistent:

scsi3:0.present = "TRUE"
scsi3:0.deviceType = "scsi-hardDisk"
scsi3:0.fileName = "2_1.vmdk"
scsi3:0.mode = "independent-persistent"
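
For illustration, a minimal sketch that scans a .vmx for disks in this state, keyed off the independent-persistent mode shown above (illustrative only; the plugin works through the vSphere API rather than by parsing .vmx files):

def find_independent_disks(vmx_path):
    # Parse key = "value" pairs from the .vmx file.
    entries = {}
    with open(vmx_path) as f:
        for line in f:
            key, sep, val = line.partition("=")
            if sep:
                entries[key.strip()] = val.strip().strip('"')
    # Collect devices whose mode is independent-persistent.
    disks = []
    for key, mode in entries.items():
        if key.endswith(".mode") and mode == "independent-persistent":
            dev = key[: -len(".mode")]  # e.g. "scsi3:0"
            disks.append((dev, entries.get(dev + ".fileName")))
    return disks

# For the .vmx above this returns [("scsi3:0", "2_1.vmdk")].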

@govint changed the title from "Handle SVM of VMs with VMDK volumes attached." to "Handle SVM/XVM of VMs with VMDK volumes attached." on May 25, 2016
@msterin
Contributor

msterin commented May 25, 2016

What is the result of 'docker volume ls' before and after SVM?

@pdhamdhere
Contributor

This is expected, right? The ESX server is hardwired to look into the dockvols folder on the VM's datastore.

Regarding Issue 1

This is certainly not what we want, because the disk is no longer in the folder managed by the VMDK plugin server.

Why? How would it help? The volume would still be inaccessible to 'docker volume ls' or any other Docker command. The good news is that storage migration won't affect the data path of a running container.

As a side note, the migration workflow has an advanced setting where you can specify the destination for individual disks.

One possible workaround would be to migrate the VMDK docker volume to the dockvols folder on the destination VM datastore.
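
For example, a rough sketch of that workaround using the paths from the listing above: move the stray copy into dockvols on the destination datastore with vmkfstools -E (which renames a virtual disk together with its extents), then fix up the VM config. This is a sketch under the assumption that the disk is detached while the move runs, not a tested procedure; the destination volume name is also an assumption.

import subprocess

# The migrated copy sits in the VM folder on the destination datastore;
# the plugin expects it under dockvols/ with its original volume name.
SRC = "/vmfs/volumes/4dfd35fc-5e364ec8-51f1-a0b3cce97ec0/2/2_1.vmdk"
DST = "/vmfs/volumes/4dfd35fc-5e364ec8-51f1-a0b3cce97ec0/dockvols/clonevol-1.vmdk"

# Moves the descriptor and the -flat extent together; must not run while
# the disk is attached to a running VM.
subprocess.check_call(["/sbin/vmkfstools", "-E", SRC, DST])

# The VM's scsi3:0.fileName would then need to be updated (e.g. via a
# reconfigure) before the volume can be reattached.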

Regarding Issue 2,
What expected behavior do you have in mind? Would #288 address this issue?

How did you initiate the migration: CLI, API, or UI? Compute + storage, or storage-only migration? Powered-on or powered-off VM?

@govint
Contributor Author

govint commented May 26, 2016

Hi Prashant,

Regarding Issue 1

This is certainly not what we want, because the disk is no longer in the folder managed by the VMDK plugin server.

Why? How would it help? The volume would still be inaccessible to 'docker volume ls' or any other Docker command. The good news is that storage migration won't affect the data path of a running container.

TG> The volume is no longer under the plugin's control; it's in a location unknown to the plugin. Once that container stops, it cannot be restarted with the same volume. Essentially, the volume is lost, and so is its capacity. None of this is what we expect or want from the plugin.

As a side note, the migration workflow has an advanced setting where you can specify the destination for individual disks.

TG> That is disk-only migration; with SVM the VM itself starts running in another location. I'll check what options are available for disk-only migration, but I doubt individual disks are allowed to be moved.

Regarding Issue 2,
What expected behavior do you have in mind? Would #288 address this issue?

TG> This is related to issue #288, but it looks like we may want to always keep the VMDK volumes in a specific location. Why? Because once a VM is SVM'ed or XVM'ed, that VM no longer has access to any of the volumes that were created earlier with the plugin. Say I create a volume XYZ via VM1 on datastore DS1, then migrate VM1 to DS2. Docker in VM1 can't see XYZ anymore. That's not right.

How did you initiate the migration: CLI, API, or UI? Compute + storage, or storage-only migration? Powered-on or powered-off VM?

TG> SVM is done via VC and the VM is on all the time. I don't have two
hosts so just a SVM from one datastore to another. with one VMDK volume
attached. I'll see if I can get hosts from the QE teams here and run an XVM
or just vmotion to see if that opens up any new behavior.

Govindan


@govint
Contributor Author

govint commented May 30, 2016

We discussed this further offline, and we may need a feature request to support skipping disks during SVM. For the plugin, a migration of the VM must not move the disks out of the folder the plugin uses to manage all of the docker volumes; otherwise, this leads to scenarios that the plugin simply can't handle at all.
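
If vCenter honors per-disk placement for independent disks, something along these lines could express "skip these disks": a hedged pyVmomi sketch (an untested assumption, not part of the plugin; vm and dest_ds are assumed to be resolved elsewhere).

from pyVmomi import vim

def relocate_vm_keep_dockvols(vm, dest_ds):
    # Storage-migrate a VM, but pin its docker-volume disks in place.
    spec = vim.vm.RelocateSpec()
    spec.datastore = dest_ds  # default destination for VM home and disks

    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        backing = dev.backing
        # Docker volumes live under dockvols/ and are independent disks.
        if ("dockvols" in backing.fileName
                and backing.diskMode == "independent_persistent"):
            loc = vim.vm.RelocateSpec.DiskLocator()
            loc.diskId = dev.key
            loc.datastore = backing.datastore  # leave this disk where it is
            spec.disk.append(loc)

    return vm.RelocateVM_Task(spec=spec)

Whether storage vMotion actually honors a locator that keeps an independent disk on the source datastore is exactly what such a feature request would need to confirm.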

@msterin
Contributor

msterin commented May 30, 2016

It would be nice to have a quick summary of the current behavior: what we should expect, and how things behave with SVM as of today.

Independent disks are getting copied into the VM folder as local disks.

I understand that - a copy will be created in the VM folder. What is "local disks" in this context - just independent disks attached to the new (relocated) VM?

The server-side plugin creates a new dockvols folder on the destination datastore; containers in the VM can't see or attach VMDK volumes created earlier (those are simply abandoned).

I do not understand that. How is "creating a new dockvols dir" related to "containers in the VM can't see volumes created earlier"? What happens to the original volumes - can't they be attached? You seem to be saying "the original is still there and can be attached if the datastore is accessible, but a new copy is created in the local VM folder and the plugin does not see it". What happens with the metadata - is it getting moved too? Can we use it to mitigate the volume location?

@govint
Contributor Author

govint commented Feb 7, 2017

Given the way docker volumes are managed - a) outside of the VM folder, in a location chosen by the ESX service or the user, and b) as independent disks - a migration (SVM/XVM) is not supportable as of now. vMotion is supported; only the storage migration workflows aren't. Plus, we don't have a use case to support these for now, hence closing.

@govint govint closed this as completed Feb 7, 2017