Use VirtualDiskManager API in vmdk_ops. #792
Conversation
Force-pushed from b7134aa to 7c344c0
@kerneltime - any hints as to why this one shows "failed"? The log seems to indicate a pass...
Looks good, a few comments/questions inline
volume_datastore_path = vmdk_utils.get_datastore_path(vmdk_path)

si = get_si()
task = si.content.virtualDiskManager.CreateVirtualDisk(
[FYI] In just-released 6.5, the recommended way seems to be HostCreateDisk_Task (createDisk), see
https://pubs.vmware.com/vsphere-65/index.jsp#com.vmware.wssdk.apiref.doc/vim.VirtualDiskManager.html?path=4_2_0_2_5_3_98#createVirtualDisk
I think we should stick with CreateVirtualDisk(), as long as we carefully test it on VMFS and VSAN.
Thanks, I wasn't aware of that. Also, pyvmomi doesn't seem to expose it yet.
Can we please file a tracking issue to investigate s/vim.VirtualDiskManager/vim.vslm.host.VStorageObjectManager/ APIs.
msterin> created #799
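For reference, a minimal sketch of what the VirtualDiskManager-based create path looks like with pyVmomi. get_si() is assumed to be the vmdk_ops helper that returns the host's ServiceInstance, and the diskType/adapterType/size values are illustrative rather than the PR's exact defaults:

from pyVim.task import WaitForTask
from pyVmomi import vim

def create_vmdk_sketch(vmdk_path, size_mb):
    # get_si() is assumed: the vmdk_ops helper returning the ESX ServiceInstance.
    si = get_si()
    spec = vim.VirtualDiskManager.FileBackedVirtualDiskSpec()
    spec.diskType = "thin"          # illustrative; the PR maps this from user options
    spec.adapterType = "busLogic"
    spec.capacityKb = size_mb * 1024
    task = si.content.virtualDiskManager.CreateVirtualDisk(
        name=vmdk_path, spec=spec)
    WaitForTask(task)               # raises on task error, so the caller can roll back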
# Handle vsan policy
if kv.VSAN_POLICY_NAME in opts:
    if not vsan_policy.set_policy_by_name(vmdk_path, opts[kv.VSAN_POLICY_NAME]):
        logging.error("Could not set policy: %s to volume %s",
This seems to be treated as a warning and the policy is ignored. We should at least say ("policy ignored") then.
I wasn't sure what to do in that case.
Rolling back the creation for something that can be updated in the admin CLI seemed like overkill, so I just logged and ignored it.
You're right, it should be clear in the metadata that the volume has no policy.
I misread your comment. You meant the logging and not the metadata, right?
Maybe we could just drop the logging here and delete the key from opts, since we'll get the warning from vsan_info.set_policy:
rc, out = vmdk_ops.RunCommand(OBJTOOL_SET_POLICY.format(uuid,
                                                        policy_string))
if rc != 0:
    logging.warning("Failed to set policy for %s : %s", vmdk_path, out)
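A minimal sketch of that suggestion, reusing the names from the PR snippet above; the only new behaviour is removing the key so the volume metadata never records a policy that was not actually applied:

# Handle vsan policy (sketch of the suggestion above)
if kv.VSAN_POLICY_NAME in opts:
    if not vsan_policy.set_policy_by_name(vmdk_path, opts[kv.VSAN_POLICY_NAME]):
        # vsan_info.set_policy already logged the failure; just make sure the
        # volume metadata does not claim a policy that was never applied.
        del opts[kv.VSAN_POLICY_NAME]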
os.makedirs(path)
# Calculate the path, use the first datastore in datastores
datastores = vmdk_utils.get_datastores()
path = datastores[0][2]
When is the dir created on a fresh test bed then?
vmdk_utils.get_datastores -> vmdk_utils.init_datastoreCache -> vmdk_ops.get_vol_path
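To spell that chain out, something like the following is presumably what the test relies on; the (name, url, dockvols_path) tuple layout and the defensive makedirs are assumptions based on the diff above:

import os
import vmdk_utils  # assumed importable on the test host

# The first call populates the datastore cache (init_datastoreCache ->
# vmdk_ops.get_vol_path), which should also create the dockvols directory
# on a fresh test bed.
datastores = vmdk_utils.get_datastores()
path = datastores[0][2]       # third field assumed to be the dockvols path
if not os.path.isdir(path):
    os.makedirs(path)         # defensive fallback if the cache did not create it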
There is a failure:
This is not really clear to me, because all the tests in VmdkCreateRemoveTestCase that create the vol_UnitTest_Create volume have an assertion for the removal. I will need to add additional logging in the tests to investigate this.
Force-pushed from bffcd72 to ff3ea63
@msterin @kerneltime
I still have to identify the cause. As I don't have a VSAN environment to test with right now, I have submitted another run to fetch logs, sorry about that. To help debugging, wouldn't it be better to have the removeVMDK asserted inside testPolicyUpdate?
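Something along these lines, for example; the volume attribute name and removeVMDK's return convention (None on success) are assumptions here:

# Hypothetical shape of the assertion inside the test itself.
def testPolicyUpdate(self):
    # ... create the volume and update its policy ...
    err = vmdk_ops.removeVMDK(self.volume_path)
    self.assertEqual(err, None,
                     "removeVMDK failed for %s: %s" % (self.volume_path, err))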
Force-pushed from ff3ea63 to 4565922
vmdk_ops.create/clone/removeVMDK: Use VirtualDiskManager API
vsan_policy: Add get_policy_content
vsan_policy: Add set_policy_by_name
volume_kv: Make VALID_ALLOCATION_FORMATS a dict of "user option": "api value"
… can be correctly initialised
vsan_info.get_vsan_dockvols_path: use vmdk_ops.get_vol_path
Is already logged in vsan_info.set_policy @msterin
Assert removeVMDK within test scope
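Regarding the volume_kv commit above, the shape of that mapping is presumably something like the following; the concrete keys and API strings here are illustrative, not the PR's exact table:

# Illustrative sketch: user-facing option value -> string expected by the
# VirtualDiskManager API for spec.diskType.
VALID_ALLOCATION_FORMATS = {
    "thin":             "thin",
    "zeroedthick":      "preallocated",
    "eagerzeroedthick": "eagerZeroedThick",
}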
Force-pushed from 4565922 to 45dcba8
@msterin @kerneltime I can also confirm that this change eliminates the timing/locking issue. Thanks.
LGTM.
This PR removes the usage of vmkfstools in vmdk_ops and provides other small fixes.
With this change I was unable to reproduce the timing/locking issues (open descriptors remaining after vmkfstools completed execution) observed in the parallel tests.
Probably fixes #768, #695, and part of #39.
//CC @msterin @kerneltime @govint