This repository has been archived by the owner on Nov 9, 2020. It is now read-only.

IndexError('tuple index out of range',) when trying to create any volume with v0.18 #1996

Closed
Routhinator opened this issue Nov 20, 2017 · 6 comments

Comments

@Routhinator

From the Docker CLI:

sudo docker volume create --driver=vsphere --name=infrastructure_portainer -o size=1GB
Error response from daemon: create infrastructure_portainer: VolumeDriver.Create: Server returned an error: IndexError('tuple index out of range',)

From the ESXI logs:

11/19/17 20:11:09 138683 [MainThread] [INFO   ] Started new thread : 571921139456 with target <function execRequestThread at 0x852725b9d8> and args (13, 233646, b'{"cmd":"create","details":{"Name":"infrastructure_portainer","Opts":{"fstype":"ext4","size":"1GB"}},"version":"2"}')
11/19/17 20:11:09 138683 [Thread-17284] [ERROR  ] Unhandled Exception:
Traceback (most recent call last):
  File "/usr/lib/vmware/vmdkops/bin/vmdk_ops.py", line 1751, in execRequestThread
    vc_uuid = UUID_FORMAT.format(*vc_uuid.replace("-",  " ").split())
IndexError: tuple index out of range
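For context, the crash can be reproduced in isolation. On a host with no vCenter the UUID string comes back empty, so `split()` yields zero fields for the 16 positional placeholders (a minimal sketch; `UUID_FORMAT` is modeled on the plugin's, and the empty `vc_uuid` is an assumption about what a standalone ESXi host reports):

```python
# Minimal reproduction of the crash in execRequestThread (sketch).
# UUID_FORMAT is modeled on vmdk_ops.py; the empty vc_uuid is an
# assumption about a standalone ESXi host with no vCenter.
UUID_FORMAT = "{0}{1}{2}{3}-{4}{5}-{6}{7}-{8}{9}-{10}{11}{12}{13}{14}{15}"

vc_uuid = ""  # standalone ESXi: no vCenter UUID available
try:
    vc_uuid = UUID_FORMAT.format(*vc_uuid.replace("-", " ").split())
except IndexError as exc:
    # format() expects 16 positional arguments but receives zero
    print("IndexError:", exc)
```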

I can't figure out what's going wrong; I've followed all the setup instructions I can find for this.

I see someone else hit this but closed their issue (#876). In that issue @lipingxue mentioned some sort of "tenant" setup that isn't documented. Is that still relevant?

@Routhinator
Author

Still have the same issue, but I did determine that my auth DB hadn't been initialized. There is no mention of this being needed in the installation instructions:

esxcli storage guestvol config init --datastore=datastore1_Raid_6_SSD

@Routhinator
Author

Looking at the source code, it looks like this plugin requires vCenter: the vc_uuid = UUID_FORMAT.format(*vc_uuid.replace("-", " ").split()) line is trying to work with what appears to be a vCenter UUID. This is a standalone ESXi server used for testing. Is there no support for standalone ESXi?
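A guard along these lines would avoid the crash when no vCenter UUID is present (a hypothetical sketch only: normalize_vc_uuid is my name for the helper, and this is not necessarily what the eventual fix does):

```python
# Hypothetical guard (sketch): return a normalized VC UUID, or None on a
# standalone ESXi host where no vCenter UUID exists. The helper name and
# the fallback behavior are assumptions, not the plugin's actual code.
UUID_FORMAT = "{0}{1}{2}{3}-{4}{5}-{6}{7}-{8}{9}-{10}{11}{12}{13}{14}{15}"

def normalize_vc_uuid(vc_uuid):
    if not vc_uuid or not vc_uuid.strip():
        return None  # no vCenter: treat the UUID as absent instead of crashing
    fields = vc_uuid.replace("-", " ").split()
    if len(fields) != 16:
        return None  # unexpected shape; also treat as absent
    return UUID_FORMAT.format(*fields)

print(normalize_vc_uuid(""))  # None on a standalone host
print(normalize_vc_uuid("56 4d 11 22 33 44 55 66-77 88 99 aa bb cc dd ee"))
```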

@govint
Contributor

govint commented Nov 20, 2017

@Routhinator, thanks for trying the volume plugin on ESX. For this issue, standalone ESX should be usable, although this part of the code appears to assume the VC UUID always exists. This should get fixed up and verified with single ESX hosts without VC.

Can you describe your use case: a single host (yes?) and local storage (yes?). How do you provision the VMs on the host without a vCenter instance?

@Routhinator
Author

Routhinator commented Nov 21, 2017

@govint

Single host with local SSD storage for game servers. VMs are built with Packer (since there's no ESXi way to use Terraform). Final tweaks are made with the ESXi web interface for vSphere 6.5, which is identical to the vCenter 5.5-6.5 interface but for one host. (No cloning either, so Packer is a very helpful utility there.) This is for a home dev/gaming environment and lab sort of use case.

Docker Swarm is provisioned on the VMs, and with no network storage available (plus a high IOPS requirement for modded Minecraft), my hope was to leverage the VMDK utilities in this plugin.

@tusharnt tusharnt added this to the Sprint - Kubecon milestone Nov 21, 2017
@govint
Contributor

govint commented Nov 23, 2017

@Routhinator, thanks for the description of your use case. The fix is out for a review and should be merged soon.
Does your use case also need shared storage between containers running on different VMs? The vDVS plugin is good for shared storage between hosts, allowing volumes to be accessible across ESX hosts, but a volume can't be accessed by more than one VM at a time. If your containers need the same volume to be accessed by more than one container on different VMs/hosts, then the vFile plugin may help with that requirement.
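If that applies, creating a shared volume through vFile looks like this from the Docker CLI (assumes the vFile plugin is already installed on the swarm nodes; the volume name is just an example):

```shell
# Create a shared volume via the vFile driver (example name; requires
# the vFile plugin to be installed and a running Docker daemon)
docker volume create --driver=vfile --name=shared_vol -o size=1GB
```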

@govint
Contributor

govint commented Nov 27, 2017

Fixed via #1997

@govint govint closed this as completed Nov 27, 2017