This repository has been archived by the owner on Nov 9, 2020. It is now read-only.

Use VirtualDiskManager API in vmdk_ops. #792

Merged
merged 7 commits into master on Dec 7, 2016

Conversation

@brunotm (Contributor) commented Dec 3, 2016

This PR removes the usage of vmkfstools in vmdk_ops and provides a few other small fixes.

With this change I was unable to reproduce the timing/locking issues (open descriptors remaining after vmkfstools completed execution) observed in the parallel tests.

Probably fixes #768, #695, and part of #39.

//CC @msterin @kerneltime @govint

@brunotm force-pushed the brunotm.vmdk-api branch 5 times, most recently from b7134aa to 7c344c0 on December 4, 2016 16:50

@msterin (Contributor) commented Dec 5, 2016

@kerneltime - any hints as to why this one shows "failed"? The log seems to indicate a pass...

@msterin (Contributor) left a comment:

Looks good, a few comments/questions inline.

volume_datastore_path = vmdk_utils.get_datastore_path(vmdk_path)

si = get_si()
task = si.content.virtualDiskManager.CreateVirtualDisk(

Contributor:
[FYI] In the just-released 6.5, the recommended way seems to be HostCreateDisk_Task (createDisk); see
https://pubs.vmware.com/vsphere-65/index.jsp#com.vmware.wssdk.apiref.doc/vim.VirtualDiskManager.html?path=4_2_0_2_5_3_98#createVirtualDisk
I think we should stick with CreateVirtualDisk() as long as we carefully test on VMFS and VSAN.

Contributor Author (brunotm):

Thanks, I wasn't aware of that. Also, pyvmomi doesn't seem to expose it yet.

@pdhamdhere (Contributor) commented Dec 6, 2016:

Can we please file a tracking issue to investigate s/vim.VirtualDiskManager/vim.vslm.host.VStorageObjectManager/ APIs?

msterin> created #799
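
For reference, a create through the VirtualDiskManager API with pyvmomi looks roughly like the sketch below. This is a minimal illustration assuming a thin-provisioned disk and a busLogic adapter; the helper name, parameter values, and the use of WaitForTask are assumptions, not the exact code merged in this PR.

# Minimal sketch, not the merged implementation: create a VMDK through
# vim.VirtualDiskManager instead of shelling out to vmkfstools.
from pyVim import task as vim_task
from pyVmomi import vim

def create_disk(si, datastore_path, size_mb=100):
    # datastore_path is a datastore path such as "[datastore1] dockvols/vol.vmdk"
    spec = vim.VirtualDiskManager.FileBackedVirtualDiskSpec()
    spec.diskType = "thin"          # one of the vim.VirtualDiskManager.VirtualDiskType values
    spec.adapterType = "busLogic"   # required by the spec even for file-backed disks
    spec.capacityKb = size_mb * 1024

    task = si.content.virtualDiskManager.CreateVirtualDisk(
        name=datastore_path, datacenter=None, spec=spec)
    # Wait for the host-side task so errors surface to the caller.
    vim_task.WaitForTask(task, si=si)

Removing the vmkfstools fork also removes the window where the external process still holds descriptors open after it reports completion, which matches the timing/locking issue described in the PR description.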

# Handle vsan policy
if kv.VSAN_POLICY_NAME in opts:
    if not vsan_policy.set_policy_by_name(vmdk_path, opts[kv.VSAN_POLICY_NAME]):
        logging.error("Could not set policy: %s to volume %s",

Contributor:

This seems to be treated as a warning and the policy is ignored. We should at least say ("policy ignored"), then.

Contributor Author (brunotm):

I wasn't sure what to do in that case. Rolling back creation for something that can be updated in the admin CLI seemed overkill, so I just logged and ignored it.

You're right, it should be clear in the metadata that the volume has no policy.

Contributor Author (brunotm):

I misread your comment. You meant logging, not metadata, right?

Contributor Author (brunotm):

Maybe we could just drop the logging here and delete the key from opts, since we'll get the warning from vsan_info.set_policy:

rc, out = vmdk_ops.RunCommand(OBJTOOL_SET_POLICY.format(uuid,
                                                        policy_string))
if rc != 0:
    logging.warning("Failed to set policy for %s : %s", vmdk_path, out)
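
Putting the two comments together, the create path could drop its own error log and only make the outcome explicit. A minimal sketch of that idea, reusing the names from the snippets above (the exact control flow is an assumption, not the merged code):

# Sketch only: handle a VSAN policy that could not be applied by noting that
# the volume is created without a policy and removing the option, relying on
# vsan_info.set_policy for the detailed warning.
if kv.VSAN_POLICY_NAME in opts:
    if not vsan_policy.set_policy_by_name(vmdk_path, opts[kv.VSAN_POLICY_NAME]):
        logging.warning("Volume %s created without policy %s (policy ignored)",
                        vmdk_path, opts[kv.VSAN_POLICY_NAME])
        del opts[kv.VSAN_POLICY_NAME]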

os.makedirs(path)
# Calculate the path, use the first datastore in datastores
datastores = vmdk_utils.get_datastores()
path = datastores[0][2]

Contributor:

When is the dir created on a fresh test bed, then?

Contributor Author (brunotm):

vmdk_utils.get_datastores -> vmdk_utils.init_datastoreCache -> vmdk_ops.get_vol_path
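
In other words, a fresh test bed gets the directory because initialising the datastore cache ends up calling vmdk_ops.get_vol_path, which creates the dockvols directory on first use. A rough sketch of that behaviour (the helper body and path layout here are assumptions based on the paths seen in the logs, not the project's actual code):

import os
import logging

def get_vol_path(datastore):
    # Sketch of the behaviour discussed above: the per-datastore dockvols
    # directory is created the first time it is needed.
    path = os.path.join("/vmfs/volumes", datastore, "dockvols")
    if not os.path.isdir(path):
        logging.info("Creating volume directory %s", path)
        os.makedirs(path)
    return path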

@brunotm (Contributor Author) commented Dec 5, 2016

@msterin wrote:

@kerneltime - any hints as to why this one shows "failed"? The log seems to indicate a pass...

There is a failure:

FAIL: testPolicyUpdate (__main__.VmdkCreateRemoveTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/vmdk_ops_unittest17937/vmdk_ops_test.py", line 207, in testPolicyUpdate
    self.assertEqual(err, None, err)
AssertionError: {u'Error': 'File /vmfs/volumes/vsanDatastore/dockvols/vol_UnitTest_Create.vmdk already exists'}

This is not really clear to me, because all the tests in VmdkCreateRemoveTestCase that create the vol_UnitTest_Create volume have an assertion for the removal. I will need to add additional logging in the tests to investigate this.

@brunotm force-pushed the brunotm.vmdk-api branch 2 times, most recently from bffcd72 to ff3ea63 on December 5, 2016 15:26

@brunotm (Contributor Author) commented Dec 5, 2016

@msterin @kerneltime
On the first run, the removeVMDK in tearDown fails with the error below (but it doesn't fail the test):

11/15/16 14:29:21 6048576 [MainThread] [INFO   ] *** removeVMDK: /vmfs/volumes/datastore1/dockvols/vol_UnitTest_Create.vmdk
11/15/16 14:29:21 6048576 [MainThread] [ERROR  ] Failed to access /vmfs/volumes/datastore1/dockvols/vol_UnitTest_Create-1d616b145f50626a.vmfd
Traceback (most recent call last):
 File "/usr/lib/vmware/vmdkops/Python/kvESX.py", line 265, in load
   with open(meta_file, "r") as fh:

I still have to identify the cause. As I don't have a VSAN environment to test with right now, I have submitted another run to fetch logs; sorry about that.

To help debugging, wouldn't it be better to have the removeVMDK asserted inside testPolicyUpdate?

vmdk_ops.create/clone/removeVMDK: Use VirtualDiskManager API
vsan_policy: Add get_policy_content
vsan_policy: Add set_policy_by_name
volume_kv: Make VALID_ALLOCATION_FORMATS a dict of "user option": "api value" … can be correctly initialised
vsan_info.get_vsan_dockvols_path: use vmdk_ops.get_vol_path
Is already logged in vsan_info.set_policy
Assert removeVMDK within test scope
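
The VALID_ALLOCATION_FORMATS commit above maps user-facing allocation options onto the diskType strings that the VirtualDiskManager spec accepts. A sketch of what such a mapping could look like (the keys and values here are illustrative, not the merged table):

# Illustrative mapping only; the merged VALID_ALLOCATION_FORMATS may differ.
# Keys: what the user passes as a volume option.
# Values: diskType strings accepted by FileBackedVirtualDiskSpec.
VALID_ALLOCATION_FORMATS = {
    "thin": "thin",
    "zeroedthick": "preallocated",
    "eagerzeroedthick": "eagerZeroedThick",
}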

@brunotm (Contributor Author) commented Dec 6, 2016

@msterin @kerneltime
The issue with testPolicyUpdate is fixed; the cause was the tearDown looking for the volume in the global path.

I can also confirm that this change eliminates the timing/locking issue.
From my side, it is ready for review/merge.

Thanks.
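
For context, the "Assert removeVMDK within test scope" commit amounts to checking the cleanup inside the test that created the volume rather than relying on tearDown. A minimal sketch (the createVMDK/removeVMDK signatures and setup values are assumed, not copied from the test suite):

# Sketch only: assert creation and removal inside the test body so a leaked
# vol_UnitTest_Create.vmdk fails the test that created it.
import unittest
import vmdk_ops

class VmdkCreateRemoveTestCase(unittest.TestCase):
    def testPolicyUpdate(self):
        vmdk_path = "/vmfs/volumes/datastore1/dockvols/vol_UnitTest_Create.vmdk"
        err = vmdk_ops.createVMDK(vmdk_path=vmdk_path, vol_name="vol_UnitTest_Create")
        self.assertEqual(err, None, err)

        # ... exercise the VSAN policy update here ...

        # Asserting the removal here, instead of in tearDown, surfaces cleanup
        # failures in the test that created the volume.
        err = vmdk_ops.removeVMDK(vmdk_path)
        self.assertEqual(err, None, err)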

@pdhamdhere (Contributor) left a comment:

LGTM.

@msterin merged commit 88704cb into vmware-archive:master on Dec 7, 2016
Successfully merging this pull request may close these issues: Master build failed.