Multiple VMs different datastores #1163
Comments
Hi @alexandrudu! There are a couple of ways you could do this, but they come down to the same idea. See the config below for an example:
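A minimal sketch of the count-based idea, assuming one datastore name per VM index (the datacenter name and the VM naming scheme here are placeholders, not from this thread):

variable "var_vs_test_datastore" {
  type = list(string)
}

data "vsphere_datacenter" "datacenter" {
  name = "dc-01" # placeholder datacenter name
}

# One datastore lookup per entry in the list.
data "vsphere_datastore" "datastore" {
  count         = length(var.var_vs_test_datastore)
  name          = var.var_vs_test_datastore[count.index]
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Each VM instance is placed on the datastore that shares its index.
resource "vsphere_virtual_machine" "vm" {
  count        = length(var.var_vs_test_datastore)
  name         = "vm-${count.index + 1}" # placeholder naming scheme
  datastore_id = data.vsphere_datastore.datastore[count.index].id
  # ...remaining required arguments (resource_pool_id, disk, etc.) elided...
}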
Then declare the var.varmod_vs_datastore variable and set it as follows:

var_vs_test_datastore = ["FirstDatastoreName", "SecondDatastoreName"]

Perfect, thank you. I don't know why I did not think about that.
I got an error after doing the above, and I cannot put count in output.tf.
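With count, a reference in output.tf must index the instances or use a splat expression. A minimal sketch of that fix, with an illustrative output name:

output "vm_datastore_ids" {
  # With count, address every instance via the splat operator;
  # a single instance would be vsphere_virtual_machine.vm[0].
  value = vsphere_virtual_machine.vm[*].datastore_id
}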
I recently provided an example of how to do this in another issue using the vsphere_storage_policy data source and referencing it in the resource. I'm including that example here so you can build on or modify it. In the test scenario, I'm deploying two VMs (foo and bar) into a vApp container. You don't have to use the vApp - it was just used in this scenario.

Terraform v1.0.11
on linux_amd64
+ provider registry.terraform.io/hashicorp/vsphere v2.0.2

Storage policies based on tags:
- foo => local-ssd-01
- bar => local-ssd-02
terraform {
required_providers {
vsphere = {
source = "hashicorp/vsphere"
version = ">= 2.0.2"
}
}
required_version = ">= 1.0.11"
}
# Credentials
variable "vsphere_server" {
type = string
}
variable "vsphere_username" {
type = string
}
variable "vsphere_password" {
type = string
}
variable "vsphere_insecure" {
type = bool
default = false
}
# vSphere Settings
variable "vsphere_datacenter" {
type = string
}
variable "vsphere_cluster" {
type = string
}
variable "vsphere_folder" {
type = string
}
variable "vsphere_network" {
type = string
}
variable "vsphere_content_library" {
type = string
}
variable "vsphere_content_library_ovf" {
type = string
}
variable "vsphere_vapp_container" {
type = string
}
# Virtual Machine Settings
variable "vm_cpus" {
type = number
}
variable "vm_memory" {
type = number
}
variable "vm_disk_size" {
type = number
}
variable "vm_firmware" {
type = string
}
variable "vm_efi_secure_boot_enabled" {
type = bool
}
variable "vm_names" {
type = map(any)
}
provider "vsphere" {
vsphere_server = var.vsphere_server
user = var.vsphere_username
password = var.vsphere_password
allow_unverified_ssl = var.vsphere_insecure
}
data "vsphere_datacenter" "datacenter" {
name = var.vsphere_datacenter
}
data "vsphere_network" "network" {
name = var.vsphere_network
datacenter_id = data.vsphere_datacenter.datacenter.id
}
data "vsphere_compute_cluster" "cluster" {
name = var.vsphere_cluster
datacenter_id = data.vsphere_datacenter.datacenter.id
}
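# One storage policy lookup per VM. In this scenario the policies are named
# after the VMs ("foo" and "bar") and are tag-based, mapping to the
# local-ssd-01 and local-ssd-02 datastores respectively.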
data "vsphere_storage_policy" "storage_policy" {
for_each = var.vm_names
name = each.value["name"]
}
data "vsphere_content_library" "content_library" {
name = var.vsphere_content_library
}
data "vsphere_content_library_item" "content_library_item" {
name = var.vsphere_content_library_ovf
type = "ovf"
library_id = data.vsphere_content_library.content_library.id
}
data "vsphere_vapp_container" "vapp_container" {
name = var.vsphere_vapp_container
datacenter_id = data.vsphere_datacenter.datacenter.id
}
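# One VM per entry in var.vm_names; the storage policy selected for each key
# determines the datastore on which that VM is placed.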
resource "vsphere_virtual_machine" "vm" {
for_each = var.vm_names
name = each.value["name"]
folder = var.vsphere_folder
num_cpus = var.vm_cpus
memory = var.vm_memory
firmware = var.vm_firmware
efi_secure_boot_enabled = var.vm_efi_secure_boot_enabled
storage_policy_id = data.vsphere_storage_policy.storage_policy[each.key].id
resource_pool_id = data.vsphere_vapp_container.vapp_container.id
network_interface {
network_id = data.vsphere_network.network.id
}
disk {
label = "disk0"
size = var.vm_disk_size
thin_provisioned = true
}
clone {
template_uuid = data.vsphere_content_library_item.content_library_item.id
customize {
linux_options {
host_name = each.value["name"]
domain = each.value["domain"]
}
network_interface {
ipv4_address = each.value["ipv4_address"]
ipv4_netmask = each.value["ipv4_netmask"]
}
ipv4_gateway = each.value["ipv4_gateway"]
dns_suffix_list = each.value["dns_suffix_list"]
dns_server_list = each.value["dns_server_list"]
}
}
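  # Ignore later changes to the template so a refreshed content library item
  # does not force the VM to be re-cloned.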
lifecycle {
ignore_changes = [
clone[0].template_uuid,
]
}
}
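# Example variable values used in the test (e.g., a terraform.tfvars file):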
# Credentials
vsphere_server = "m01-vc01.rainpole.io"
vsphere_username = "administrator@vsphere.local"
vsphere_password = "VMware1!"
vsphere_insecure = true
# vSphere Settings
vsphere_datacenter = "m01-dc01"
vsphere_cluster = "m01-cl01"
vsphere_folder = ""
vsphere_network = "M - 172.16.11.0"
vsphere_content_library = "m01-lib01"
vsphere_content_library_ovf = "linux-ubuntu-server-20-04-lts"
vsphere_vapp_container = "hello-world"
# Virtual Machines Settings
vm_cpus = 2
vm_memory = 4096
vm_disk_size = 60
vm_firmware = "efi"
vm_efi_secure_boot_enabled = true
vm_names = {
foo_vm = {
name = "foo",
domain = "rainpole.io"
ipv4_address = "172.16.11.101"
ipv4_netmask = 24
ipv4_gateway = "172.16.11.1"
dns_suffix_list = ["rainpole.io"]
dns_server_list = ["172.16.11.11", "172.16.11.12"]
},
bar_vm = {
name = "bar",
domain = "rainpole.io"
ipv4_address = "172.16.11.102"
ipv4_netmask = 24
ipv4_gateway = "172.16.11.1"
dns_suffix_list = ["rainpole.io"]
dns_server_list = ["172.16.11.11", "172.16.11.12"]
}
}
Results:

$ terraform apply --auto-approve
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# vsphere_virtual_machine.vm["bar_vm"] will be created
+ resource "vsphere_virtual_machine" "vm" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = (known after apply)
+ default_ip_address = (known after apply)
+ efi_secure_boot_enabled = true
+ ept_rvi_mode = "automatic"
+ firmware = "efi"
+ force_power_off = true
+ guest_id = (known after apply)
+ guest_ip_addresses = (known after apply)
+ hardware_version = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ ide_controller_count = 2
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 4096
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "bar"
+ num_cores_per_socket = 1
+ num_cpus = 2
+ poweron_timeout = 300
+ reboot_required = (known after apply)
+ resource_pool_id = "resgroup-v25022"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ sata_controller_count = 0
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 3
+ storage_policy_id = "76e05bb9-66f8-4698-9bfd-ad9a2cfd10df"
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 0
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 5
+ clone {
+ template_uuid = "93d636ab-74c7-45d8-8795-e00997fa9755"
+ timeout = 30
+ customize {
+ dns_server_list = [
+ "172.16.11.11",
+ "172.16.11.12",
]
+ dns_suffix_list = [
+ "rainpole.io",
]
+ ipv4_gateway = "172.16.11.1"
+ timeout = 10
+ linux_options {
+ domain = "rainpole.io"
+ host_name = "bar"
+ hw_clock_utc = true
}
+ network_interface {
+ ipv4_address = "172.16.11.102"
+ ipv4_netmask = 24
}
}
}
+ disk {
+ attach = false
+ controller_type = "scsi"
+ datastore_id = "<computed>"
+ device_address = (known after apply)
+ disk_mode = "persistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = (known after apply)
+ size = 60
+ storage_policy_id = (known after apply)
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "network-1001"
}
}
# vsphere_virtual_machine.vm["foo_vm"] will be created
+ resource "vsphere_virtual_machine" "vm" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = (known after apply)
+ default_ip_address = (known after apply)
+ efi_secure_boot_enabled = true
+ ept_rvi_mode = "automatic"
+ firmware = "efi"
+ force_power_off = true
+ guest_id = (known after apply)
+ guest_ip_addresses = (known after apply)
+ hardware_version = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ ide_controller_count = 2
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 4096
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "foo"
+ num_cores_per_socket = 1
+ num_cpus = 2
+ poweron_timeout = 300
+ reboot_required = (known after apply)
+ resource_pool_id = "resgroup-v25022"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ sata_controller_count = 0
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 3
+ storage_policy_id = "a374b0e1-940c-497b-9dec-3ff14baff199"
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 0
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 5
+ clone {
+ template_uuid = "93d636ab-74c7-45d8-8795-e00997fa9755"
+ timeout = 30
+ customize {
+ dns_server_list = [
+ "172.16.11.11",
+ "172.16.11.12",
]
+ dns_suffix_list = [
+ "rainpole.io",
]
+ ipv4_gateway = "172.16.11.1"
+ timeout = 10
+ linux_options {
+ domain = "rainpole.io"
+ host_name = "foo"
+ hw_clock_utc = true
}
+ network_interface {
+ ipv4_address = "172.16.11.101"
+ ipv4_netmask = 24
}
}
}
+ disk {
+ attach = false
+ controller_type = "scsi"
+ datastore_id = "<computed>"
+ device_address = (known after apply)
+ disk_mode = "persistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = (known after apply)
+ size = 60
+ storage_policy_id = (known after apply)
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "network-1001"
}
}
Plan: 2 to add, 0 to change, 0 to destroy.
vsphere_virtual_machine.vm["foo_vm"]: Creating...
vsphere_virtual_machine.vm["bar_vm"]: Creating...
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [10s elapsed]
vsphere_virtual_machine.vm["bar_vm"]: Still creating... [10s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [20s elapsed]
vsphere_virtual_machine.vm["bar_vm"]: Still creating... [20s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [30s elapsed]
vsphere_virtual_machine.vm["bar_vm"]: Still creating... [30s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [40s elapsed]
vsphere_virtual_machine.vm["bar_vm"]: Still creating... [40s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [50s elapsed]
vsphere_virtual_machine.vm["bar_vm"]: Still creating... [50s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [1m0s elapsed]
vsphere_virtual_machine.vm["bar_vm"]: Still creating... [1m0s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [1m10s elapsed]
vsphere_virtual_machine.vm["bar_vm"]: Still creating... [1m10s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [1m20s elapsed]
vsphere_virtual_machine.vm["bar_vm"]: Still creating... [1m20s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [1m30s elapsed]
vsphere_virtual_machine.vm["bar_vm"]: Still creating... [1m30s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [1m40s elapsed]
vsphere_virtual_machine.vm["bar_vm"]: Still creating... [1m40s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Still creating... [1m50s elapsed]
vsphere_virtual_machine.vm["foo_vm"]: Creation complete after 1m50s [id=42022dea-6191-dd00-43e5-a51f63651920]
vsphere_virtual_machine.vm["bar_vm"]: Creation complete after 1m50s [id=42024e49-058b-27ef-e7df-da1acbb3cefc]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

UI screenshots: [images omitted]

Hope this helps - if so, could you consider closing the issue?
Hi @alexandrudu, could you review my prior comment and let me know if this resolves your question? Thanks.
A full example is provided in my earlier comment. In addition, PR #1541 added an example that demonstrates a similar use case with storage policies.

cc @iBrandyJackson and @appilon for review of the above.
I am going to close this issue since it was a question and not a bug, and I believe a sufficient example has been provided. Thanks @tenthirtyam for the detailed example. Feel free to open a new issue if a problem arises with the proposed solution.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Hi,

I was wondering whether the Terraform vSphere provider is capable of separating VMs into specific datastores. I know it is possible if I have individual VMs. But if I have count = 2 or more (to create two or more VMs), all of those will go into the same datastore. Is there an option for me to use count with the datastore and build a counter into the datastore name (I will create two or more datastores with a number at the end on the vCenter side)?

Or would that be possible using a dynamic "datastore_id"? Does something like that exist? I am already using dynamic for network_interface and disk.
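For reference, the numbered-datastore idea from the question could look like the following hypothetical fragment (the "datastore" name prefix and the vm_count variable are assumptions; the vsphere_datacenter data source is as in the examples above):

variable "vm_count" {
  type    = number
  default = 2
}

# Looks up "datastore1", "datastore2", ... created on the vCenter side,
# one per VM instance.
data "vsphere_datastore" "numbered" {
  count         = var.vm_count
  name          = "datastore${count.index + 1}"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}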