Serialize attach/detach ops for the same vm #808
Conversation
In esx_service/vmdk_ops.py:

  elif cmd == "detach":
-     response = detachVMDK(vmdk_path, vm_uuid)
+     with lockManager.get_lock(vm_uuid):
+         response = detachVMDK(vmdk_path, vm_uuid)
The client plugin also has a lock (per docker-host), do we need one more lock in the ESX service? The disk level locks are good?
Disk level locks are good. :)
The error comes from already having a reconfig task running for the same VM (for a different volume).
I haven't investigated why the mutex in the plugin driver isn't protecting against this situation, but serialization of attach/detach is really needed on the ESX side. I might be missing something, but in the plugin we only need to protect the refcount part?
The plugin holds the lock for the entire duration of the attach/detach. Two
reconfigs can't happen for the same VM in parallel, but the host may sequence
those, I guess.
Force-pushed from 88abd77 to ca19ada
@govint writes:
They can, if two Docker requests for different volumes are issued in parallel on the same VM, which the test output in the description of this PR demonstrates.
This is a simple change that fixes concurrency on the same Docker host: it adds a per-VM lock around attach/detach ops.
Fixes #809
//CC @msterin
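The `lockManager.get_lock(vm_uuid)` helper used in the diff isn't shown in this thread; a minimal sketch of such a per-key lock manager (the class name and internals here are assumptions, not the PR's actual implementation) could look like this, the key idea being that the same key always maps to the same lock object so all attach/detach ops for one VM serialize on it:

```python
import threading

class LockManager:
    """Hand out one lock per key (e.g. a VM UUID), so operations
    that take the lock for the same key are serialized."""

    def __init__(self):
        self._guard = threading.Lock()  # protects the dict itself
        self._locks = {}

    def get_lock(self, key):
        # Create the per-key lock on first use; afterwards, always
        # return the same lock object for that key.
        with self._guard:
            lock = self._locks.get(key)
            if lock is None:
                lock = threading.Lock()
                self._locks[key] = lock
            return lock
```

With this, `with lockManager.get_lock(vm_uuid): ...` in the attach and detach paths guarantees that two reconfig tasks are never issued concurrently for the same VM, while ops on different VMs still proceed in parallel.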
before:
after: