VMDK Driver - VolumeDriver.Mount: Server rejected #942
@brockz We will need some more details about the error and your setup. Can you please share some more details?
Hello @pdhamdhere. Setup: vSphere with Docker Swarm. I built the swarm, created the volume, and created the service; the service started on Worker 2 successfully. When I kill Worker 2 (vSphere power-off of VM id-nb-pht-wrk02), the service gets restarted on Worker 1 as expected, but it doesn't mount the volume. From the vSphere side I can see that the volume (virtual disk) is still attached to Worker 2 and not attached to Worker 1, where the service is now running. I zipped all log files here: Br, Stefan
Thanks @brockz. We have root-caused the issue you are running into and are working on a patch, which will be available soon.
…erin Cherry-pick from master to 0.11 servicing - "Fix for #942 - set uuid correctly"
The issue is fixed in both the master and release-0.11 branches. @brockz - with this fix, failover will work as expected. However, the VM that was force-powered-off (Worker 2 in your case) will fail to power on without manual intervention, because it will try to attach MyVolume.vmdk, which is already attached to Worker 1. To power Worker 2 back on, you'd need to manually (via the UI, vim-cmd, or a script) detach MyVolume.vmdk.
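As a sketch of that manual detach path: the following could be run on the ESXi host hosting the powered-off worker. Everything here is illustrative, not from the thread - the VM id, controller number, and unit number are placeholders you'd look up first, and `vim-cmd vmsvc/device.diskremove` is the ESXi-side command commonly used for detaching a disk (check its exact argument order on your ESXi version). The script only echoes the commands by default so they can be reviewed before running:

```shell
#!/bin/sh
# Sketch: detach a stale VMDK from a force-powered-off worker VM on ESXi.
# All concrete values below are hypothetical placeholders.

VMID=42          # placeholder numeric VM id (from vim-cmd vmsvc/getallvms)
CONTROLLER=0     # placeholder SCSI controller number of the stale disk
UNIT=1           # placeholder unit number of the stale disk
DRY_RUN=1        # set to 0 to actually execute on the ESXi host

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "DRY-RUN: $*"
  else
    "$@"
  fi
}

# 1. List VMs to find the numeric id of the worker (e.g. id-nb-pht-wrk02).
run vim-cmd vmsvc/getallvms
# 2. Inspect the VM's devices to confirm which controller/unit holds
#    MyVolume.vmdk.
run vim-cmd vmsvc/device.getdevices "$VMID"
# 3. Detach the disk; the trailing 0 asks to keep (not delete) the
#    backing .vmdk file.
run vim-cmd vmsvc/device.diskremove "$VMID" "$CONTROLLER" "$UNIT" 0
# 4. Power the worker back on.
run vim-cmd vmsvc/power.on "$VMID"
```

The dry-run wrapper is just a safety choice for the sketch; on a real host you'd verify the device numbers from step 2 before issuing the detach.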
Great work! Running as expected! BR Stefan
Hello,
I have 2 worker hosts (Photon OS based) running in swarm mode.
When I create a service with a volume mounted via the vmdk driver and the Docker host fails, the vmdk volume doesn't get unmounted. The instance gets restarted on the other worker, but the volume is not mounted:
cswy7i9te7688920nh6wvjbu3 _ nginx.1 nginx id-nb-pht-wrk01 Shutdown Rejected 24 minutes ago "VolumeDriver.Mount: Server re…"
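For reference, a failover scenario like the one described above can be set up with commands along these lines. The volume name, service name, and size are illustrative, and the driver is registered as `vmdk` here per the issue title, though some plugin versions register it as `vsphere`. As above, the script only echoes the commands by default:

```shell
#!/bin/sh
# Sketch: a 1-replica swarm service backed by a vmdk-driver volume,
# so that killing the hosting worker forces a reschedule + re-mount.
# Names and sizes are illustrative placeholders.

DRY_RUN=1   # set to 0 to run for real on a swarm manager node

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "DRY-RUN: $*"
  else
    "$@"
  fi
}

# Create the volume through the vmdk driver (10gb is an example size
# option; supported options depend on the plugin version).
run docker volume create --driver=vmdk --name=MyVolume -o size=10gb

# Run nginx as a single-replica service with the volume mounted.
run docker service create --name nginx --replicas 1 \
  --mount type=volume,source=MyVolume,target=/data,volume-driver=vmdk \
  nginx

# Watch which worker the task lands on; after force-powering that
# worker off, the task should be rescheduled on the other worker and
# the volume re-attached there.
run docker service ps nginx
```

With the fix referenced in this thread, the reschedule is expected to re-attach the disk to the surviving worker; before it, the disk stayed attached to the dead worker, producing the "VolumeDriver.Mount: Server rejected" error shown above.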