Setup:
ESX-1 has vm1, vm2, vm3.
ESX-2 has vm4.
vm1 and vm4 are on a shared VMFS datastore.
Steps:
1. Created a volume from VM1.
2. Volume is visible from VM2 and VM3.
3. Created a tenant T-1 and added vm1 to it (the admin CLI commands for the tenant steps are sketched just after this list).
4. Did docker volume ls from VM1. As expected, no volumes are listed.
5. Created volume vol-1 from VM1. docker volume ls lists vol-1.
6. Did docker volume ls from VM2 and VM3; vol-1 is not listed.
7. Did docker volume ls from vm4; vol-1 is listed. <<<<<<<<<<<<<<<<<<<< vol-1 was created by vm1 after it moved to T-1
8. Created a tenant t-2 and added vm4 to it.
9. Did docker volume ls from vm4; still see vol-1. <<<<<<<<<<<<<<<<<<
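For reference, here is a minimal sketch of the admin-side commands behind the tenant steps above (steps 3 and 8). The command form is taken from the detailed output further below; the tenant and VM names used here (T-1, t-2, vm1, vm4) are just the placeholders from this setup, not the actual inventory names.
# On ESX-1: create tenant T-1 and add vm1 to it
/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py vm-group create --name=T-1 --vm-list=vm1
# On ESX-2: create tenant t-2 and add vm4 to it
/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py vm-group create --name=t-2 --vm-list=vm4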
Even though I see vol-1, if I try to do docker inspect vol-1, I get the message "No such volume".
On ESX-2, the admin CLI lists the vm-group as N/A for vol-1.
Is this a bug, or is it expected behavior?
Should we list the volume on a different VM on a different ESX host even though that volume belongs to a user-created tenant?
================================================================
Steps and their output are as follows:
Created a volume from VM-1:
root@sc-rdops-vm02-dhcp-52-237:~# docker volume create --driver=vsphere --name=TestVol -o size=500mb
TestVol
root@sc-rdops-vm02-dhcp-52-237:~# docker volume ls
DRIVER VOLUME NAME
vsphere TestVol@sharedVmfs-0
vsphere testVol1_Post@sharedVmfs-0
root@sc-rdops-vm02-dhcp-52-237:~#
admin cli volume ls:
[root@sc2-rdops-vm01-dhcp-34-157:~] /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py volume ls
Volume Datastore VM-Group Capacity Used Filesystem Policy Disk Format Attached-to Access Attach-as Created By Created Date
------------- ------------ -------- -------- ---- ---------- ------ ----------- ----------- ---------- ---------------------- ------------------- ------------------------
TestVol sharedVmfs-0 _DEFAULT 500MB 23MB ext4 N/A thin detached read-write independent_persistent ubuntu-VM1-157-vmfs Wed Mar 29 18:43:02 2017
testVol1_Post sharedVmfs-0 N/A 200MB 14MB ext4 N/A thin detached read-write independent_persistent ubuntu-VM1-157-vmfs Wed Mar 29 17:23:43 2017
[root@sc2-rdops-vm01-dhcp-34-157:~]
Docker volume ls from VM-2 and VM-3:
root@sc-rdops-vm02-dhcp-52-237:~# docker volume ls
DRIVER VOLUME NAME
vsphere TestVol@sharedVmfs-0
vsphere testVol1_Post@sharedVmfs-0
root@sc-rdops-vm02-dhcp-52-237:~#
root@sc-rdops-vm02-dhcp-52-237:~# docker volume ls
DRIVER VOLUME NAME
vsphere TestVol@sharedVmfs-0
vsphere testVol1_Post@sharedVmfs-0
root@sc-rdops-vm02-dhcp-52-237:~#
Docker volume ls from VM-4:
root@sc-rdops-vm02-dhcp-52-237:~# docker volume ls
DRIVER VOLUME NAME
vsphere TestVol@sharedVmfs-0
vsphere testVol1_Post@sharedVmfs-0
root@sc-rdops-vm02-dhcp-52-237:~#
Created a tenant and added VM-1 to it:
[root@sc2-rdops-vm01-dhcp-34-157:~] /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py vm-group create --name=tenant1 --vm-list=ubuntu-VM1-157-vmfs
vm-group create succeeded
[root@sc2-rdops-vm01-dhcp-34-157:~] /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py vm-group ls
Uuid Name Description Default_datastore VM_list
------------------------------------ -------- -------------------------- ----------------- -------------------
11111111-1111-1111-1111-111111111111 _DEFAULT This is a default vm-group
cd760db8-d9db-462d-b267-a8ef3a69d7a7 tenant1 ubuntu-VM1-157-vmfs
[root@sc2-rdops-vm01-dhcp-34-157:~]
Created a volume - TestVol_T1 from VM-1:
root@sc-rdops-vm02-dhcp-52-237:~# docker volume create --driver=vsphere --name=TestVol_T1 -o size=500mb
TestVol_T1
root@sc-rdops-vm02-dhcp-52-237:~# docker volume ls
DRIVER VOLUME NAME
vsphere TestVol_T1@sharedVmfs-0
vsphere testVol1_Post@sharedVmfs-0
root@sc-rdops-vm02-dhcp-52-237:~#
[root@sc2-rdops-vm01-dhcp-34-157:~] /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py volume ls
Volume Datastore VM-Group Capacity Used Filesystem Policy Disk Format Attached-to Access Attach-as Created By Created Date
------------- ------------ -------- -------- ---- ---------- ------ ----------- ----------- ---------- ---------------------- ------------------- ------------------------
TestVol sharedVmfs-0 _DEFAULT 500MB 23MB ext4 N/A thin detached read-write independent_persistent ubuntu-VM1-157-vmfs Wed Mar 29 18:43:02 2017
testVol1_Post sharedVmfs-0 N/A 200MB 14MB ext4 N/A thin detached read-write independent_persistent ubuntu-VM1-157-vmfs Wed Mar 29 17:23:43 2017
TestVol_T1 sharedVmfs-0 tenant1 500MB 23MB ext4 N/A thin detached read-write independent_persistent ubuntu-VM1-157-vmfs Wed Mar 29 18:49:29 2017
[root@sc2-rdops-vm01-dhcp-34-157:~]
Docker volume ls from VM-2 and VM-3:
root@sc-rdops-vm02-dhcp-52-237:~# docker volume ls
DRIVER VOLUME NAME
vsphere TestVol@sharedVmfs-0
vsphere testVol1_Post@sharedVmfs-0
root@sc-rdops-vm02-dhcp-52-237:~#
root@sc-rdops-vm02-dhcp-52-237:~# docker volume ls
DRIVER VOLUME NAME
vsphere TestVol@sharedVmfs-0
vsphere testVol1_Post@sharedVmfs-0
root@sc-rdops-vm02-dhcp-52-237:~#
Docker volume ls from VM-4:
root@sc-rdops-vm02-dhcp-52-237:~# docker volume ls
DRIVER VOLUME NAME
vsphere TestVol@sharedVmfs-0
vsphere TestVol_T1@sharedVmfs-0 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
vsphere testVol1_Post@sharedVmfs-0
root@sc-rdops-vm02-dhcp-52-237:~#
Filtering out the crux.
Issue:
The issue is that a volume created by a VM that is part of tenant1 on ESX1 is visible from a VM on ESX2 (regardless of that VM's tenant).
As per the admin CLI on ESX2, the volume's tenant is N/A because there is no entry for tenant1 in the authdb on ESX2.
Reason:
Our get_volumes() function in vmdk_utils traverses each tenant-uuid subdirectory and then each vmdk, and returns volume info if the given tenant-name reg exp matches the tenant that the volume belongs to.
If we cannot find a tenant for a vmdk (i.e., we are unable to get a tenant name for the tenant-uuid subdirectory from the authdb), we treat the volume as an orphan. However, we currently return such orphan volumes in all cases, even when a specific tenant was given as input. We should return them only when volumes from any tenant are requested (tenant reg exp is "*").
Basically, we need to add an if check near the end of get_volumes() in vmdk_utils, like this:
# return orphan volumes only in case when volumes from any tenants are asked
if tenant_re == "*":
    for file_name in list_vmdks(root):
        volumes.append({'path': root,
                        'filename': file_name,
                        'datastore': datastore,
                        'tenant': auth_data_const.ORPHAN_TENANT})
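For context, below is a minimal, self-contained sketch of the listing behavior described above. It is not the actual vmdk_utils.get_volumes() code: resolve_tenant_name(), the list_vmdks() stand-in, and the ORPHAN_TENANT value are illustrative assumptions; only the orphan-handling check mirrors the proposed fix.

import os
import re

ORPHAN_TENANT = "N/A"   # stand-in for auth_data_const.ORPHAN_TENANT

def resolve_tenant_name(tenant_uuid):
    # Stand-in for the authdb lookup; returns None when the uuid is unknown
    # on this ESX host (e.g. the tenant was created on a different host).
    known = {"cd760db8-d9db-462d-b267-a8ef3a69d7a7": "tenant1"}
    return known.get(tenant_uuid)

def list_vmdks(path):
    # Stand-in: every *.vmdk file directly under the directory is a volume.
    return [f for f in os.listdir(path) if f.endswith(".vmdk")]

def get_volumes_sketch(datastore_path, datastore, tenant_re="*"):
    # Walk per-tenant subdirectories on a datastore and collect volumes whose
    # tenant matches tenant_re. Volumes whose tenant uuid cannot be resolved
    # are "orphans" and are returned only when any tenant was asked for ("*").
    volumes = []
    for entry in os.listdir(datastore_path):
        root = os.path.join(datastore_path, entry)
        if not os.path.isdir(root):
            continue
        tenant = resolve_tenant_name(entry)
        if tenant is None:
            # Unknown tenant uuid on this host: report these orphan volumes
            # only when the listing was not restricted to a specific tenant.
            if tenant_re == "*":
                for file_name in list_vmdks(root):
                    volumes.append({'path': root,
                                    'filename': file_name,
                                    'datastore': datastore,
                                    'tenant': ORPHAN_TENANT})
            continue
        if tenant_re == "*" or re.match(tenant_re, tenant):
            for file_name in list_vmdks(root):
                volumes.append({'path': root,
                                'filename': file_name,
                                'datastore': datastore,
                                'tenant': tenant})
    return volumes

With a check like this in place, a listing restricted to a specific tenant no longer picks up volumes whose tenant cannot be resolved on that host, which is the cross-ESX symptom reported above.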
Checking the tenant reg exp to see whether the call was made to return volumes from any tenant, and returning orphan/unknown-tenant volumes only in such cases.
Fixes #1111
Test steps are planned in #957.
Logs:
docker-volume-vsphere_vm1.txt
docker-volume-vsphere_vm4.txt
vmdk_ops_ESX1.txt
vmdk_ops_ESX2.txt