Fixup VSAN #1820

Merged
merged 9 commits into from
Feb 2, 2023
Conversation

@appilon (Contributor) commented Jan 19, 2023

Description

Some of the previously merged VSAN work needed to be refactored. This PR includes the commits of #1784 and will supersede it. The acceptance tests still need to be looked at; unfortunately, that's a challenge for me at the moment because my testing setup does not have the required 3 hosts. I tried with just 2, but it seems to run into issues around maintenance mode (will follow up).

Unfortunately, the legacy SDK doesn't easily support inter-key validation (I believe it's possible with CustomizeDiff, but that API can be painful, and it's often easier to just include the logic in the create/update path). We ran into an edge case with ConflictsWith: boolean fields set to false will still trigger conflicts, when we in fact want to raise errors only for combinations of fields that are enabled (set to true).
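
For illustration, a minimal sketch of what such an inter-key check could look like with CustomizeDiff in the legacy SDK. The attribute names are taken from this PR, but the function itself is hypothetical and assumes vsan_remote_datastore_ids is a set attribute:

package vsphere

import (
	"context"
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Hypothetical sketch: CustomizeDiff can express "both enabled" checks that
// ConflictsWith cannot, because ConflictsWith fires on any set value,
// including an explicit false.
func vsanCustomizeDiff(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	ditEnabled := d.Get("vsan_dit_encryption_enabled").(bool)
	remoteDatastores := d.Get("vsan_remote_datastore_ids").(*schema.Set)
	if ditEnabled && remoteDatastores.Len() > 0 {
		return fmt.Errorf("vSAN data-in-transit encryption cannot be enabled together with HCI Mesh remote datastores")
	}
	return nil
}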

The provider has also proliferated the use of Optional + Computed; there are edge cases around that combination which I can't recall since it's been a while... so I personally prefer Optional with a Default when we know the backend system matches those defaults. For the VSAN work, I've opted to have them all default to false (and removed Computed).
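
As a minimal sketch of the schema change being described, using a boolean vSAN flag such as vsan_unmap_enabled (an illustration, not the provider's exact schema):

package vsphere

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// Before: Optional + Computed. An unset value is filled in from the API,
// so drift back to the default is never surfaced as a diff.
var vsanUnmapEnabledBefore = &schema.Schema{
	Type:     schema.TypeBool,
	Optional: true,
	Computed: true,
}

// After: Optional + Default. An unset value is treated as an explicit
// false, and any drift away from false shows up in the plan.
var vsanUnmapEnabledAfter = &schema.Schema{
	Type:     schema.TypeBool,
	Optional: true,
	Default:  false,
}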

cc @tenthirtyam @SlimYang can you both manually test VSAN thoroughly and make sure I haven't caused any regressions?

Acceptance tests

  • Have you added an acceptance test for the functionality being added?
  • Have you run the acceptance tests on this branch?

Output from acceptance testing:

$ make testacc TESTARGS='-run=TestAccXXX'

...

Release Note

Release note for CHANGELOG:

* resource/compute_cluster: Add support for vSAN HCI Mesh. (#1820)
* resource/compute_cluster: Add support for vSAN Data-in-Transit Encryption. (#1820)

References

This PR doesn't address #1205 directly, but it does fix a panic that someone discovered in the thread of that issue. That issue should probably remain open for a v3 breaking change.
Closes #1784

SlimYang and others added 3 commits January 11, 2023 16:17
This change adds vSAN DIT (data-in-transit encryption) and HCI Mesh
support. The configuration includes the following parameters:

* DIT - the enable flag and the rekey interval
* HCI Mesh - the remote datastore IDs and the host IDs

With these two features supported, users should be able to configure
vSAN data-in-transit encryption and HCI Mesh at the same level of
support as the vSphere UI.

Note: the vSAN data-in-transit encryption feature cannot be enabled at
the same time as the HCI Mesh feature:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan.doc/GUID-9113BBD6-5428-4287-9F61-C8C3EE51E07E.html.
This conflict is called out in this code change as well.
Previously, the error message mentioned one env var but checked for
another. A user corrected this assuming it was a mistake; however, it
appears this env var is defunct.
- Implement a data read that was previously missed.
- Switch to a default of false and remove Computed from attributes (not a
  breaking change).
- Unfortunately, ConflictsWith can't be used with booleans, since a value
  set to false would still trigger the conflict.
@github-actions bot added the documentation (Type: Documentation), provider (Type: Provider), and size/xl (Relative Sizing: Extra-Large) labels on Jan 19, 2023
@@ -924,6 +924,11 @@ func resourceVSphereComputeClusterApplyClusterConfiguration(

log.Printf("[DEBUG] %s: Applying cluster configuration", resourceVSphereComputeClusterIDString(d))

// handle VSAN first to avoid race condition
@appilon (Contributor, Author) commented:

@SlimYang I've opted to refactor the code to always process vSAN reconfigures first. Is that safe under all scenarios? Personally, in provider development I tend to lean toward calling code redundantly if it keeps the provider codebase clearer of conditional logic (that is, try to make the provider as idempotent as possible, unless excessive API use becomes a performance problem).

A Collaborator replied:

I think it will cause issues when we are reconfiguring vSAN and HA together. Specifically, we need to enable vSAN before enabling HA (see the second note in https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan-planning.doc/GUID-D68890D8-841A-4BD1-ACA1-DA3D25B6A37A.html). Otherwise we will hit errors; the original code handles this scenario.
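
A minimal sketch of the ordering constraint being described, under the assumption that the update path has separate vSAN and HA reconfigure helpers (the names here are hypothetical, and the disable-side ordering is an inference, not stated in the linked docs):

package vsphere

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/vmware/govmomi/object"
)

// applyVsanConfig and applyHAConfig are hypothetical stand-ins for the
// provider's actual reconfigure helpers.
func applyVsanConfig(d *schema.ResourceData, meta interface{}, c *object.ClusterComputeResource) error {
	return nil
}

func applyHAConfig(d *schema.ResourceData, meta interface{}, c *object.ClusterComputeResource) error {
	return nil
}

// Sketch of the ordering constraint: when vSAN is being enabled,
// reconfigure it before HA; when it is being disabled, reconfigure HA
// first so HA is not configured against a disappearing vSAN datastore.
func applyClusterConfiguration(d *schema.ResourceData, meta interface{}, c *object.ClusterComputeResource) error {
	if d.Get("vsan_enabled").(bool) {
		if err := applyVsanConfig(d, meta, c); err != nil {
			return err
		}
		return applyHAConfig(d, meta, c)
	}
	if err := applyHAConfig(d, meta, c); err != nil {
		return err
	}
	return applyVsanConfig(d, meta, c)
}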

@tenthirtyam (Collaborator) left a comment:

With this pull request, the following now occurs:

Plan: 0 to add, 2 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster1: Modifying... [id=domain-c28]
vsphere_compute_cluster.vsan-cluster0: Modifying... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c29, 10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c28, 10s elapsed]

│ Error: A specified parameter was not correct: 

│   with vsphere_compute_cluster.vsan-cluster0,
│   on main.tf line 72, in resource "vsphere_compute_cluster" "vsan-cluster0":
│   72: resource "vsphere_compute_cluster" "vsan-cluster0" {



│ Error: A specified parameter was not correct: 

│   with vsphere_compute_cluster.vsan-cluster1,
│   on main.tf line 113, in resource "vsphere_compute_cluster" "vsan-cluster1":
│  113: resource "vsphere_compute_cluster" "vsan-cluster1" {


Test case:

Created a vSAN cluster without issue:

resource "vsphere_compute_cluster" "vsan-cluster0" {
  name            = "vsan-cluster0"
  datacenter_id   = data.vsphere_datacenter.datacenter.id
  host_system_ids = data.vsphere_host.hosts0.*.id
  drs_enabled     = true
  ha_enabled      = false
  vsan_enabled    = true
}

resource "vsphere_compute_cluster" "vsan-cluster1" {
  name            = "vsan-cluster1"
  datacenter_id   = data.vsphere_datacenter.datacenter.id
  host_system_ids = data.vsphere_host.hosts1.*.id
  drs_enabled     = true
  ha_enabled      = false
  vsan_enabled    = true
}

Then enabled the performance service:

resource "vsphere_compute_cluster" "vsan-cluster0" {
  name            = "vsan-cluster0"
  datacenter_id   = data.vsphere_datacenter.datacenter.id
  host_system_ids = data.vsphere_host.hosts0.*.id
  drs_enabled     = true
  ha_enabled      = false
  vsan_enabled    = true
  vsan_performance_enabled             = true
}

resource "vsphere_compute_cluster" "vsan-cluster1" {
  name            = "vsan-cluster1"
  datacenter_id   = data.vsphere_datacenter.datacenter.id
  host_system_ids = data.vsphere_host.hosts1.*.id
  drs_enabled     = true
  ha_enabled      = false
  vsan_enabled    = true
  vsan_performance_enabled             = true
}

And then explicitly set vsan_network_diagnostic_mode_enabled to true, which resulted in the error.

resource "vsphere_compute_cluster" "vsan-cluster0" {
  name            = "vsan-cluster0"
  datacenter_id   = data.vsphere_datacenter.datacenter.id
  host_system_ids = data.vsphere_host.hosts0.*.id
  drs_enabled     = true
  ha_enabled      = false
  vsan_enabled    = true
  vsan_performance_enabled             = true
  vsan_network_diagnostic_mode_enabled = true
}

resource "vsphere_compute_cluster" "vsan-cluster1" {
  name            = "vsan-cluster1"
  datacenter_id   = data.vsphere_datacenter.datacenter.id
  host_system_ids = data.vsphere_host.hosts1.*.id
  drs_enabled     = true
  ha_enabled      = false
  vsan_enabled    = true
  vsan_performance_enabled             = true
  vsan_network_diagnostic_mode_enabled = true
}

It looks like no longer computing vsan_disk_group results in Terraform trying to remove disk groups that are managed outside Terraform (very, very bad!). I think this attribute should remain computed.
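
For reference, a minimal sketch of the schema shape being requested here, keeping the block computed so externally managed disk groups are read into state rather than planned for removal (an illustration based on the attribute names in the plan output, not the provider's exact schema):

package vsphere

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// With Optional + Computed on vsan_disk_group, omitting the block from
// configuration leaves existing disk groups in state instead of
// generating a destructive diff against them.
var vsanDiskGroupSchema = &schema.Schema{
	Type:     schema.TypeList,
	Optional: true,
	Computed: true,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"cache": {
				Type:     schema.TypeString,
				Optional: true,
			},
			"storage": {
				Type:     schema.TypeSet,
				Optional: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
		},
	},
}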


resource "vsphere_compute_cluster" "vsan-cluster0" {
  name            = "vsan-cluster0"
  datacenter_id   = data.vsphere_datacenter.datacenter.id
  host_system_ids = data.vsphere_host.hosts0.*.id
  drs_enabled     = true
  ha_enabled      = false
  vsan_enabled    = false
}

resource "vsphere_compute_cluster" "vsan-cluster1" {
  name            = "vsan-cluster1"
  datacenter_id   = data.vsphere_datacenter.datacenter.id
  host_system_ids = data.vsphere_host.hosts1.*.id
  drs_enabled     = true
  ha_enabled      = false
  vsan_enabled    = false
}
Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
      ~ vsan_dit_rekey_interval                               = 0 -> 1440
      ~ vsan_network_diagnostic_mode_enabled                  = false -> true
        # (55 unchanged attributes hidden)

      - vsan_disk_group {
          - cache   = "eui.76915fb3635eb65f000c2964315e162d" -> null
          - storage = [
              - "mpx.vmhba0:C0:T3:L0",
              - "mpx.vmhba0:C0:T4:L0",
              - "mpx.vmhba0:C0:T5:L0",
            ] -> null
        }
      - vsan_disk_group {
          - cache   = "eui.db92ff907856e40f000c296fab3c1796" -> null
          - storage = [
              - "mpx.vmhba0:C0:T3:L0",
              - "mpx.vmhba0:C0:T4:L0",
              - "mpx.vmhba0:C0:T5:L0",
            ] -> null
        }
      - vsan_disk_group {
          - cache   = "eui.78794ba2814dc792000c296393f2f052" -> null
          - storage = [
              - "mpx.vmhba0:C0:T3:L0",
              - "mpx.vmhba0:C0:T4:L0",
              - "mpx.vmhba0:C0:T5:L0",
            ] -> null
        }
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
      ~ vsan_dit_rekey_interval                               = 0 -> 1440
      ~ vsan_verbose_mode_enabled                             = false -> true
        # (55 unchanged attributes hidden)

      - vsan_disk_group {
          - cache   = "eui.709562de2e816461000c29659e94435d" -> null
          - storage = [
              - "mpx.vmhba0:C0:T3:L0",
              - "mpx.vmhba0:C0:T4:L0",
              - "mpx.vmhba0:C0:T5:L0",
            ] -> null
        }
      - vsan_disk_group {
          - cache   = "eui.301fa6be6d7c2f2c000c296942e7dba1" -> null
          - storage = [
              - "mpx.vmhba0:C0:T3:L0",
              - "mpx.vmhba0:C0:T4:L0",
              - "mpx.vmhba0:C0:T5:L0",
            ] -> null
        }
      - vsan_disk_group {
          - cache   = "eui.12ac219c83ac51d4000c29638d9cd765" -> null
          - storage = [
              - "mpx.vmhba0:C0:T3:L0",
              - "mpx.vmhba0:C0:T4:L0",
              - "mpx.vmhba0:C0:T5:L0",
            ] -> null
        }
    }

cc @SlimYang

@tenthirtyam added the breaking-change (Status: Breaking Change) label on Jan 23, 2023
@tenthirtyam added this to the v2.3.0 milestone on Jan 23, 2023
@tenthirtyam (Collaborator) left a comment:

Testing on 2023-01-26 enabling HCI Mesh between two vSAN clusters managed by Terraform:

terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 1s [id=datacenter-3]
data.vsphere_datastore.vsan-ds1: Reading...
data.vsphere_datastore.vsan-ds0: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_datastore.vsan-ds0: Read complete after 0s [id=datastore-33]
data.vsphere_datastore.vsan-ds1: Read complete after 0s [id=datastore-34]
data.vsphere_host.hosts0[1]: Read complete after 1s [id=host-17]
data.vsphere_host.hosts1[1]: Read complete after 1s [id=host-12]
data.vsphere_host.hosts0[0]: Read complete after 1s [id=host-9]
data.vsphere_host.hosts0[2]: Read complete after 1s [id=host-20]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]
data.vsphere_host.hosts1[0]: Read complete after 1s [id=host-15]
data.vsphere_host.hosts1[2]: Read complete after 1s [id=host-19]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
      ~ vsan_remote_datastore_ids                             = [
          + "datastore-34",
        ]
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
      ~ vsan_remote_datastore_ids                             = [
          + "datastore-33",
        ]
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster1: Modifying... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Modifying... [id=domain-c28]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 30s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 30s elapsed]

│ Error: Plugin did not respond

│   with vsphere_compute_cluster.vsan-cluster0,
│   on main.tf line 72, in resource "vsphere_compute_cluster" "vsan-cluster0":
│   72: resource "vsphere_compute_cluster" "vsan-cluster0" {

│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.


│ Error: Plugin did not respond

│   with vsphere_compute_cluster.vsan-cluster1,
│   on main.tf line 89, in resource "vsphere_compute_cluster" "vsan-cluster1":
│   89: resource "vsphere_compute_cluster" "vsan-cluster1" {

│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.


Stack trace from the terraform-provider-vsphere_v2.3.0 plugin:

panic: runtime error: slice bounds out of range [:1] with capacity 0

goroutine 71 [running]:
github.com/hashicorp/terraform-provider-vsphere/vsphere/internal/helper/structure.DropSliceItem(...)
        /Users/johnsonryan/Library/Mobile Documents/com~apple~CloudDocs/Code/Work/terraform-provider-vsphere/vsphere/internal/helper/structure/structure_helper.go:577
github.com/hashicorp/terraform-provider-vsphere/vsphere.updateVsanDisks(0x21ba520?, 0x1cdc3c0?, {0x1cdc3c0?, 0xc000837260?})
        /Users/johnsonryan/Library/Mobile Documents/com~apple~CloudDocs/Code/Work/terraform-provider-vsphere/vsphere/resource_vsphere_compute_cluster.go:1509 +0x149c
github.com/hashicorp/terraform-provider-vsphere/vsphere.resourceVSphereComputeClusterUpdate(0x0?, {0x1cdc3c0, 0xc000837260})
        /Users/johnsonryan/Library/Mobile Documents/com~apple~CloudDocs/Code/Work/terraform-provider-vsphere/vsphere/resource_vsphere_compute_cluster.go:684 +0x192
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).update(0x21c38f0?, {0x21c38f0?, 0xc000322e10?}, 0xd?, {0x1cdc3c0?, 0xc000837260?})
        /Users/johnsonryan/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.17.0/helper/schema/resource.go:729 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc0002ce460, {0x21c38f0, 0xc000322e10}, 0xc000423e10, 0xc0007c0800, {0x1cdc3c0, 0xc000837260})
        /Users/johnsonryan/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.17.0/helper/schema/resource.go:847 +0x83a
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000241aa0, {0x21c3848?, 0xc000981740?}, 0xc0009fcb90)
        /Users/johnsonryan/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.17.0/helper/schema/grpc_provider.go:1021 +0xe8d
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc00032a640, {0x21c38f0?, 0xc0000b50e0?}, 0xc00057aee0)
        /Users/johnsonryan/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.9.1/tfprotov5/tf5server/server.go:812 +0x515
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x1f670e0?, 0xc00032a640}, {0x21c38f0, 0xc0000b50e0}, 0xc000801aa0, 0x0)
        /Users/johnsonryan/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.9.1/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002b2700, {0x21c74b8, 0xc0000b9ba0}, 0xc000567c20, 0xc0002bd2c0, 0x2ae98a0, 0x0)
        /Users/johnsonryan/go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:1283 +0xcfe
google.golang.org/grpc.(*Server).handleStream(0xc0002b2700, {0x21c74b8, 0xc0000b9ba0}, 0xc000567c20, 0x0)
        /Users/johnsonryan/go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:1620 +0xa2f
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        /Users/johnsonryan/go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:922 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        /Users/johnsonryan/go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:920 +0x28a

Error: The terraform-provider-vsphere_v2.3.0 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Although it crashed, the remote datastores in state are:

"vsan_enabled": true,
            "vsan_network_diagnostic_mode_enabled": true,
            "vsan_performance_enabled": true,
            "vsan_remote_datastore_ids": [],
            "vsan_unmap_enabled": false,
            "vsan_verbose_mode_enabled": true
          },

But it actually completed.


After doing a refresh, the remote datastores are read into state.

"vsan_enabled": true,
            "vsan_network_diagnostic_mode_enabled": true,
            "vsan_performance_enabled": true,
            "vsan_remote_datastore_ids": [
              "datastore-34"
            ],
            "vsan_unmap_enabled": false,
            "vsan_verbose_mode_enabled": true

Functionally, it is working, but something is causing a crash, which could be related to the legacy implementation for the disk groups. Not sure if that is related to the other type issue? See #1205.
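
For context, the panic in the trace above ("slice bounds out of range [:1] with capacity 0") is the classic failure mode of re-slicing an empty slice. A minimal sketch of the pattern and a bounds-checked variant follows; the dropSliceItem body shown here is an assumption based on the DropSliceItem frame's name, not the provider's actual code:

package main

import "fmt"

// dropSliceItem mirrors the remove-by-re-slicing pattern suggested by the
// DropSliceItem frame in the stack trace (hypothetical body). With an
// empty slice, the re-slice panics with a bounds error.
func dropSliceItem(s []string, i int) []string {
	return append(s[:i], s[i+1:]...)
}

// dropSliceItemSafe guards the bounds first and returns the slice
// unchanged when there is nothing to remove at that index.
func dropSliceItemSafe(s []string, i int) []string {
	if i < 0 || i >= len(s) {
		return s
	}
	return append(s[:i], s[i+1:]...)
}

func main() {
	fmt.Println(dropSliceItemSafe(nil, 1)) // prints []
	// dropSliceItem(nil, 1) would panic:
	// slice bounds out of range [:1] with capacity 0
}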

@tenthirtyam removed the breaking-change (Status: Breaking Change) label on Jan 26, 2023
@tenthirtyam self-requested a review on February 1, 2023 03:44
@tenthirtyam (Collaborator) commented:

The same issue occurs after a56afd0.

terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts1[2]: Read complete after 1s [id=host-14]
data.vsphere_host.hosts0[1]: Read complete after 2s [id=host-16]
data.vsphere_host.hosts1[1]: Read complete after 2s [id=host-9]
data.vsphere_host.hosts0[0]: Read complete after 2s [id=host-12]
data.vsphere_host.hosts0[2]: Read complete after 2s [id=host-24]
data.vsphere_host.hosts1[0]: Read complete after 2s [id=host-17]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be created
  + resource "vsphere_compute_cluster" "vsan-cluster0" {
      + datacenter_id                                         = "datacenter-3"
      + dpm_automation_level                                  = "manual"
      + dpm_enabled                                           = false
      + dpm_threshold                                         = 3
      + drs_automation_level                                  = "manual"
      + drs_enable_vm_overrides                               = true
      + drs_enabled                                           = true
      + drs_migration_threshold                               = 3
      + drs_scale_descendants_shares                          = "disabled"
      + ha_admission_control_host_failure_tolerance           = 1
      + ha_admission_control_performance_tolerance            = 100
      + ha_admission_control_policy                           = "resourcePercentage"
      + ha_admission_control_resource_percentage_auto_compute = true
      + ha_admission_control_resource_percentage_cpu          = 100
      + ha_admission_control_resource_percentage_memory       = 100
      + ha_admission_control_slot_policy_explicit_cpu         = 32
      + ha_admission_control_slot_policy_explicit_memory      = 100
      + ha_datastore_apd_recovery_action                      = "none"
      + ha_datastore_apd_response                             = "disabled"
      + ha_datastore_apd_response_delay                       = 180
      + ha_datastore_pdl_response                             = "disabled"
      + ha_enabled                                            = false
      + ha_heartbeat_datastore_policy                         = "allFeasibleDsWithUserPreference"
      + ha_host_isolation_response                            = "none"
      + ha_host_monitoring                                    = "enabled"
      + ha_vm_component_protection                            = "enabled"
      + ha_vm_dependency_restart_condition                    = "none"
      + ha_vm_failure_interval                                = 30
      + ha_vm_maximum_failure_window                          = -1
      + ha_vm_maximum_resets                                  = 3
      + ha_vm_minimum_uptime                                  = 120
      + ha_vm_monitoring                                      = "vmMonitoringDisabled"
      + ha_vm_restart_priority                                = "medium"
      + ha_vm_restart_timeout                                 = 600
      + host_cluster_exit_timeout                             = 3600
      + host_system_ids                                       = [
          + "host-12",
          + "host-16",
          + "host-24",
        ]
      + id                                                    = (known after apply)
      + name                                                  = "vsan-cluster0"
      + proactive_ha_automation_level                         = "Manual"
      + proactive_ha_moderate_remediation                     = "QuarantineMode"
      + proactive_ha_severe_remediation                       = "QuarantineMode"
      + resource_pool_id                                      = (known after apply)
      + vsan_compression_enabled                              = false
      + vsan_dedup_enabled                                    = false
      + vsan_dit_encryption_enabled                           = false
      + vsan_dit_rekey_interval                               = (known after apply)
      + vsan_enabled                                          = true
      + vsan_network_diagnostic_mode_enabled                  = false
      + vsan_performance_enabled                              = false
      + vsan_unmap_enabled                                    = false
      + vsan_verbose_mode_enabled                             = false

      + vsan_disk_group {
          + cache   = (known after apply)
          + storage = (known after apply)
        }
    }

  # vsphere_compute_cluster.vsan-cluster1 will be created
  + resource "vsphere_compute_cluster" "vsan-cluster1" {
      + datacenter_id                                         = "datacenter-3"
      + dpm_automation_level                                  = "manual"
      + dpm_enabled                                           = false
      + dpm_threshold                                         = 3
      + drs_automation_level                                  = "manual"
      + drs_enable_vm_overrides                               = true
      + drs_enabled                                           = true
      + drs_migration_threshold                               = 3
      + drs_scale_descendants_shares                          = "disabled"
      + ha_admission_control_host_failure_tolerance           = 1
      + ha_admission_control_performance_tolerance            = 100
      + ha_admission_control_policy                           = "resourcePercentage"
      + ha_admission_control_resource_percentage_auto_compute = true
      + ha_admission_control_resource_percentage_cpu          = 100
      + ha_admission_control_resource_percentage_memory       = 100
      + ha_admission_control_slot_policy_explicit_cpu         = 32
      + ha_admission_control_slot_policy_explicit_memory      = 100
      + ha_datastore_apd_recovery_action                      = "none"
      + ha_datastore_apd_response                             = "disabled"
      + ha_datastore_apd_response_delay                       = 180
      + ha_datastore_pdl_response                             = "disabled"
      + ha_enabled                                            = false
      + ha_heartbeat_datastore_policy                         = "allFeasibleDsWithUserPreference"
      + ha_host_isolation_response                            = "none"
      + ha_host_monitoring                                    = "enabled"
      + ha_vm_component_protection                            = "enabled"
      + ha_vm_dependency_restart_condition                    = "none"
      + ha_vm_failure_interval                                = 30
      + ha_vm_maximum_failure_window                          = -1
      + ha_vm_maximum_resets                                  = 3
      + ha_vm_minimum_uptime                                  = 120
      + ha_vm_monitoring                                      = "vmMonitoringDisabled"
      + ha_vm_restart_priority                                = "medium"
      + ha_vm_restart_timeout                                 = 600
      + host_cluster_exit_timeout                             = 3600
      + host_system_ids                                       = [
          + "host-14",
          + "host-17",
          + "host-9",
        ]
      + id                                                    = (known after apply)
      + name                                                  = "vsan-cluster1"
      + proactive_ha_automation_level                         = "Manual"
      + proactive_ha_moderate_remediation                     = "QuarantineMode"
      + proactive_ha_severe_remediation                       = "QuarantineMode"
      + resource_pool_id                                      = (known after apply)
      + vsan_compression_enabled                              = false
      + vsan_dedup_enabled                                    = false
      + vsan_dit_encryption_enabled                           = false
      + vsan_dit_rekey_interval                               = (known after apply)
      + vsan_enabled                                          = true
      + vsan_network_diagnostic_mode_enabled                  = false
      + vsan_performance_enabled                              = false
      + vsan_unmap_enabled                                    = false
      + vsan_verbose_mode_enabled                             = false

      + vsan_disk_group {
          + cache   = (known after apply)
          + storage = (known after apply)
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster0: Creating...
vsphere_compute_cluster.vsan-cluster1: Creating...
vsphere_compute_cluster.vsan-cluster1: Still creating... [10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [20s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [30s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [30s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [40s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [40s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [50s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [50s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m0s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m0s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m20s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m20s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m30s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m30s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m40s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m40s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m50s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m50s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m0s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m0s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m20s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m20s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m30s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m30s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m40s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m40s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m50s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m50s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [3m0s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [3m0s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [3m10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [3m10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [3m20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [3m20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [3m30s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [3m30s elapsed]
vsphere_compute_cluster.vsan-cluster1: Creation complete after 3m32s [id=domain-c28]
vsphere_compute_cluster.vsan-cluster0: Still creating... [3m40s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [3m50s elapsed]
vsphere_compute_cluster.vsan-cluster0: Creation complete after 3m52s [id=domain-c29]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Downloads/vsan/cluster took 3m 59.3s terraform plan                
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 1s [id=datacenter-3]
data.vsphere_datastore.vsan-ds1: Reading...
data.vsphere_datastore.vsan-ds0: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_datastore.vsan-ds0: Read complete after 0s [id=datastore-34]
data.vsphere_datastore.vsan-ds1: Read complete after 0s [id=datastore-33]
data.vsphere_host.hosts0[0]: Read complete after 1s [id=host-12]
data.vsphere_host.hosts1[0]: Read complete after 1s [id=host-17]
data.vsphere_host.hosts0[2]: Read complete after 1s [id=host-24]
data.vsphere_host.hosts1[2]: Read complete after 1s [id=host-14]
data.vsphere_host.hosts1[1]: Read complete after 1s [id=host-9]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c28]
data.vsphere_host.hosts0[1]: Read complete after 2s [id=host-16]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c29]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
      ~ ha_enabled                                            = false -> true
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
      ~ ha_enabled                                            = false -> true
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

Downloads/vsan/cluster took 8.4s terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_datastore.vsan-ds0: Reading...
data.vsphere_datastore.vsan-ds1: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_datastore.vsan-ds0: Read complete after 1s [id=datastore-34]
data.vsphere_datastore.vsan-ds1: Read complete after 1s [id=datastore-33]
data.vsphere_host.hosts1[0]: Read complete after 2s [id=host-17]
data.vsphere_host.hosts1[1]: Read complete after 2s [id=host-9]
data.vsphere_host.hosts1[2]: Read complete after 2s [id=host-14]
data.vsphere_host.hosts0[1]: Read complete after 2s [id=host-16]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c28]
data.vsphere_host.hosts0[2]: Read complete after 2s [id=host-24]
data.vsphere_host.hosts0[0]: Read complete after 2s [id=host-12]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c29]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
      ~ ha_enabled                                            = false -> true
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
      ~ ha_enabled                                            = false -> true
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster0: Modifying... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster1: Modifying... [id=domain-c28]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c28, 10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c29, 10s elapsed]

│ Error: Plugin did not respond

│   with vsphere_compute_cluster.vsan-cluster0,
│   on main.tf line 72, in resource "vsphere_compute_cluster" "vsan-cluster0":
│   72: resource "vsphere_compute_cluster" "vsan-cluster0" {

│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more
│ details.


│ Error: Plugin did not respond

│   with vsphere_compute_cluster.vsan-cluster1,
│   on main.tf line 89, in resource "vsphere_compute_cluster" "vsan-cluster1":
│   89: resource "vsphere_compute_cluster" "vsan-cluster1" {

│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more
│ details.


Stack trace from the terraform-provider-vsphere plugin:

panic: runtime error: slice bounds out of range [:1] with capacity 0

goroutine 118 [running]:
github.com/hashicorp/terraform-provider-vsphere/vsphere/internal/helper/structure.DropSliceItem(...)
        /Users/johnsonryan/Library/Mobile Documents/com~apple~CloudDocs/Code/Work/terraform-provider-vsphere/vsphere/internal/helper/structure/structure_helper.go:577
github.com/hashicorp/terraform-provider-vsphere/vsphere.updateVsanDisks(0x21bdd80?, 0xc0003c9b90?, {0x1cdc6a0?, 0xc0006bb260?})
        /Users/johnsonryan/Library/Mobile Documents/com~apple~CloudDocs/Code/Work/terraform-provider-vsphere/vsphere/resource_vsphere_compute_cluster.go:1521 +0x15dc
github.com/hashicorp/terraform-provider-vsphere/vsphere.resourceVSphereComputeClusterApplyVsanConfig(0x2009a36?, {0x1cdc6a0?, 0xc0006bb260}, 0xc0001ea280)
        /Users/johnsonryan/Library/Mobile Documents/com~apple~CloudDocs/Code/Work/terraform-provider-vsphere/vsphere/resource_vsphere_compute_cluster.go:1463 +0x565
github.com/hashicorp/terraform-provider-vsphere/vsphere.resourceVSphereComputeClusterApplyClusterConfiguration(0x21ba8c0?, {0x1cdc6a0?, 0xc0006bb260}, 0xc0001ea280?)
        /Users/johnsonryan/Library/Mobile Documents/com~apple~CloudDocs/Code/Work/terraform-provider-vsphere/vsphere/resource_vsphere_compute_cluster.go:924 +0xea
github.com/hashicorp/terraform-provider-vsphere/vsphere.resourceVSphereComputeClusterUpdate(0x0?, {0x1cdc6a0, 0xc0006bb260})
        /Users/johnsonryan/Library/Mobile Documents/com~apple~CloudDocs/Code/Work/terraform-provider-vsphere/vsphere/resource_vsphere_compute_cluster.go:673 +0x12c
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).update(0x21c3c90?, {0x21c3c90?, 0xc000a0d140?}, 0xd?, {0x1cdc6a0?, 0xc0006bb260?})
        /Users/johnsonryan/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.17.0/helper/schema/resource.go:729 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc0000ba7e0, {0x21c3c90, 0xc000a0d140}, 0xc000544a90, 0xc000919580, {0x1cdc6a0, 0xc0006bb260})
        /Users/johnsonryan/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.17.0/helper/schema/resource.go:847 +0x83a
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000239a88, {0x21c3be8?, 0xc00044c680?}, 0xc0006532c0)
        /Users/johnsonryan/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.17.0/helper/schema/grpc_provider.go:1021 +0xe8d
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc0001a4fa0, {0x21c3c90?, 0xc000a0c960?}, 0xc0009160e0)
        /Users/johnsonryan/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.9.1/tfprotov5/tf5server/server.go:812 +0x515
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x1f673c0?, 0xc0001a4fa0}, {0x21c3c90, 0xc000a0c960}, 0xc0001b0ae0, 0x0)
        /Users/johnsonryan/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.9.1/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0000cc8c0, {0x21c7858, 0xc0002d2d00}, 0xc0001ce120, 0xc000611470, 0x2ae98a0, 0x0)
        /Users/johnsonryan/go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:1283 +0xcfe
google.golang.org/grpc.(*Server).handleStream(0xc0000cc8c0, {0x21c7858, 0xc0002d2d00}, 0xc0001ce120, 0x0)
        /Users/johnsonryan/go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:1620 +0xa2f
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        /Users/johnsonryan/go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:922 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        /Users/johnsonryan/go/pkg/mod/google.golang.org/grpc@v1.46.0/server.go:920 +0x28a

Error: The terraform-provider-vsphere plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

@tenthirtyam (Collaborator) left a comment:

Testing Status: OK

Provider Initialization

terraform init                

Initializing the backend...

Initializing provider plugins...
- Finding local/hashicorp/vsphere versions matching "2.3.0"...
- Installing local/hashicorp/vsphere v2.3.0...
- Installed local/hashicorp/vsphere v2.3.0 (unauthenticated)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.


│ Warning: Incomplete lock file information for providers

│ Due to your customized provider installation methods, Terraform was forced to calculate lock file checksums locally
│ for the following providers:
│   - local/hashicorp/vsphere

│ The current .terraform.lock.hcl file only includes checksums for darwin_amd64, so Terraform running on another
│ platform will fail to install these providers.

│ To calculate additional checksums for another platform, run:
│   terraform providers lock -platform=linux_amd64
│ (where linux_amd64 is the platform to generate)


Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Enable vSAN: OK

terraform plan                
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts0[0]: Read complete after 2s [id=host-15]
data.vsphere_host.hosts1[1]: Read complete after 2s [id=host-18]
data.vsphere_host.hosts0[1]: Read complete after 2s [id=host-12]
data.vsphere_host.hosts1[0]: Read complete after 2s [id=host-24]
data.vsphere_host.hosts0[2]: Read complete after 2s [id=host-17]
data.vsphere_host.hosts1[2]: Read complete after 2s [id=host-9]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be created
  + resource "vsphere_compute_cluster" "vsan-cluster0" {
      + datacenter_id                                         = "datacenter-3"
      + dpm_automation_level                                  = "manual"
      + dpm_enabled                                           = false
      + dpm_threshold                                         = 3
      + drs_automation_level                                  = "manual"
      + drs_enable_vm_overrides                               = true
      + drs_enabled                                           = true
      + drs_migration_threshold                               = 3
      + drs_scale_descendants_shares                          = "disabled"
      + ha_admission_control_host_failure_tolerance           = 1
      + ha_admission_control_performance_tolerance            = 100
      + ha_admission_control_policy                           = "resourcePercentage"
      + ha_admission_control_resource_percentage_auto_compute = true
      + ha_admission_control_resource_percentage_cpu          = 100
      + ha_admission_control_resource_percentage_memory       = 100
      + ha_admission_control_slot_policy_explicit_cpu         = 32
      + ha_admission_control_slot_policy_explicit_memory      = 100
      + ha_datastore_apd_recovery_action                      = "none"
      + ha_datastore_apd_response                             = "disabled"
      + ha_datastore_apd_response_delay                       = 180
      + ha_datastore_pdl_response                             = "disabled"
      + ha_enabled                                            = false
      + ha_heartbeat_datastore_policy                         = "allFeasibleDsWithUserPreference"
      + ha_host_isolation_response                            = "none"
      + ha_host_monitoring                                    = "enabled"
      + ha_vm_component_protection                            = "enabled"
      + ha_vm_dependency_restart_condition                    = "none"
      + ha_vm_failure_interval                                = 30
      + ha_vm_maximum_failure_window                          = -1
      + ha_vm_maximum_resets                                  = 3
      + ha_vm_minimum_uptime                                  = 120
      + ha_vm_monitoring                                      = "vmMonitoringDisabled"
      + ha_vm_restart_priority                                = "medium"
      + ha_vm_restart_timeout                                 = 600
      + host_cluster_exit_timeout                             = 3600
      + host_system_ids                                       = [
          + "host-12",
          + "host-15",
          + "host-17",
        ]
      + id                                                    = (known after apply)
      + name                                                  = "vsan-cluster0"
      + proactive_ha_automation_level                         = "Manual"
      + proactive_ha_moderate_remediation                     = "QuarantineMode"
      + proactive_ha_severe_remediation                       = "QuarantineMode"
      + resource_pool_id                                      = (known after apply)
      + vsan_compression_enabled                              = false
      + vsan_dedup_enabled                                    = false
      + vsan_dit_encryption_enabled                           = false
      + vsan_dit_rekey_interval                               = (known after apply)
      + vsan_enabled                                          = true
      + vsan_network_diagnostic_mode_enabled                  = false
      + vsan_performance_enabled                              = false
      + vsan_unmap_enabled                                    = false
      + vsan_verbose_mode_enabled                             = false

      + vsan_disk_group {
          + cache   = (known after apply)
          + storage = (known after apply)
        }
    }

  # vsphere_compute_cluster.vsan-cluster1 will be created
  + resource "vsphere_compute_cluster" "vsan-cluster1" {
      + datacenter_id                                         = "datacenter-3"
      + dpm_automation_level                                  = "manual"
      + dpm_enabled                                           = false
      + dpm_threshold                                         = 3
      + drs_automation_level                                  = "manual"
      + drs_enable_vm_overrides                               = true
      + drs_enabled                                           = true
      + drs_migration_threshold                               = 3
      + drs_scale_descendants_shares                          = "disabled"
      + ha_admission_control_host_failure_tolerance           = 1
      + ha_admission_control_performance_tolerance            = 100
      + ha_admission_control_policy                           = "resourcePercentage"
      + ha_admission_control_resource_percentage_auto_compute = true
      + ha_admission_control_resource_percentage_cpu          = 100
      + ha_admission_control_resource_percentage_memory       = 100
      + ha_admission_control_slot_policy_explicit_cpu         = 32
      + ha_admission_control_slot_policy_explicit_memory      = 100
      + ha_datastore_apd_recovery_action                      = "none"
      + ha_datastore_apd_response                             = "disabled"
      + ha_datastore_apd_response_delay                       = 180
      + ha_datastore_pdl_response                             = "disabled"
      + ha_enabled                                            = false
      + ha_heartbeat_datastore_policy                         = "allFeasibleDsWithUserPreference"
      + ha_host_isolation_response                            = "none"
      + ha_host_monitoring                                    = "enabled"
      + ha_vm_component_protection                            = "enabled"
      + ha_vm_dependency_restart_condition                    = "none"
      + ha_vm_failure_interval                                = 30
      + ha_vm_maximum_failure_window                          = -1
      + ha_vm_maximum_resets                                  = 3
      + ha_vm_minimum_uptime                                  = 120
      + ha_vm_monitoring                                      = "vmMonitoringDisabled"
      + ha_vm_restart_priority                                = "medium"
      + ha_vm_restart_timeout                                 = 600
      + host_cluster_exit_timeout                             = 3600
      + host_system_ids                                       = [
          + "host-18",
          + "host-24",
          + "host-9",
        ]
      + id                                                    = (known after apply)
      + name                                                  = "vsan-cluster1"
      + proactive_ha_automation_level                         = "Manual"
      + proactive_ha_moderate_remediation                     = "QuarantineMode"
      + proactive_ha_severe_remediation                       = "QuarantineMode"
      + resource_pool_id                                      = (known after apply)
      + vsan_compression_enabled                              = false
      + vsan_dedup_enabled                                    = false
      + vsan_dit_encryption_enabled                           = false
      + vsan_dit_rekey_interval                               = (known after apply)
      + vsan_enabled                                          = true
      + vsan_network_diagnostic_mode_enabled                  = false
      + vsan_performance_enabled                              = false
      + vsan_unmap_enabled                                    = false
      + vsan_verbose_mode_enabled                             = false

      + vsan_disk_group {
          + cache   = (known after apply)
          + storage = (known after apply)
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if
you run "terraform apply" now.

Downloads/vsan/cluster took 4.2s terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[2]: Read complete after 2s [id=host-9]
data.vsphere_host.hosts0[1]: Read complete after 2s [id=host-12]
data.vsphere_host.hosts0[2]: Read complete after 2s [id=host-17]
data.vsphere_host.hosts1[1]: Read complete after 2s [id=host-18]
data.vsphere_host.hosts1[0]: Read complete after 2s [id=host-24]
data.vsphere_host.hosts0[0]: Read complete after 2s [id=host-15]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be created
  + resource "vsphere_compute_cluster" "vsan-cluster0" {
      + datacenter_id                                         = "datacenter-3"
      + dpm_automation_level                                  = "manual"
      + dpm_enabled                                           = false
      + dpm_threshold                                         = 3
      + drs_automation_level                                  = "manual"
      + drs_enable_vm_overrides                               = true
      + drs_enabled                                           = true
      + drs_migration_threshold                               = 3
      + drs_scale_descendants_shares                          = "disabled"
      + ha_admission_control_host_failure_tolerance           = 1
      + ha_admission_control_performance_tolerance            = 100
      + ha_admission_control_policy                           = "resourcePercentage"
      + ha_admission_control_resource_percentage_auto_compute = true
      + ha_admission_control_resource_percentage_cpu          = 100
      + ha_admission_control_resource_percentage_memory       = 100
      + ha_admission_control_slot_policy_explicit_cpu         = 32
      + ha_admission_control_slot_policy_explicit_memory      = 100
      + ha_datastore_apd_recovery_action                      = "none"
      + ha_datastore_apd_response                             = "disabled"
      + ha_datastore_apd_response_delay                       = 180
      + ha_datastore_pdl_response                             = "disabled"
      + ha_enabled                                            = false
      + ha_heartbeat_datastore_policy                         = "allFeasibleDsWithUserPreference"
      + ha_host_isolation_response                            = "none"
      + ha_host_monitoring                                    = "enabled"
      + ha_vm_component_protection                            = "enabled"
      + ha_vm_dependency_restart_condition                    = "none"
      + ha_vm_failure_interval                                = 30
      + ha_vm_maximum_failure_window                          = -1
      + ha_vm_maximum_resets                                  = 3
      + ha_vm_minimum_uptime                                  = 120
      + ha_vm_monitoring                                      = "vmMonitoringDisabled"
      + ha_vm_restart_priority                                = "medium"
      + ha_vm_restart_timeout                                 = 600
      + host_cluster_exit_timeout                             = 3600
      + host_system_ids                                       = [
          + "host-12",
          + "host-15",
          + "host-17",
        ]
      + id                                                    = (known after apply)
      + name                                                  = "vsan-cluster0"
      + proactive_ha_automation_level                         = "Manual"
      + proactive_ha_moderate_remediation                     = "QuarantineMode"
      + proactive_ha_severe_remediation                       = "QuarantineMode"
      + resource_pool_id                                      = (known after apply)
      + vsan_compression_enabled                              = false
      + vsan_dedup_enabled                                    = false
      + vsan_dit_encryption_enabled                           = false
      + vsan_dit_rekey_interval                               = (known after apply)
      + vsan_enabled                                          = true
      + vsan_network_diagnostic_mode_enabled                  = false
      + vsan_performance_enabled                              = false
      + vsan_unmap_enabled                                    = false
      + vsan_verbose_mode_enabled                             = false

      + vsan_disk_group {
          + cache   = (known after apply)
          + storage = (known after apply)
        }
    }

  # vsphere_compute_cluster.vsan-cluster1 will be created
  + resource "vsphere_compute_cluster" "vsan-cluster1" {
      + datacenter_id                                         = "datacenter-3"
      + dpm_automation_level                                  = "manual"
      + dpm_enabled                                           = false
      + dpm_threshold                                         = 3
      + drs_automation_level                                  = "manual"
      + drs_enable_vm_overrides                               = true
      + drs_enabled                                           = true
      + drs_migration_threshold                               = 3
      + drs_scale_descendants_shares                          = "disabled"
      + ha_admission_control_host_failure_tolerance           = 1
      + ha_admission_control_performance_tolerance            = 100
      + ha_admission_control_policy                           = "resourcePercentage"
      + ha_admission_control_resource_percentage_auto_compute = true
      + ha_admission_control_resource_percentage_cpu          = 100
      + ha_admission_control_resource_percentage_memory       = 100
      + ha_admission_control_slot_policy_explicit_cpu         = 32
      + ha_admission_control_slot_policy_explicit_memory      = 100
      + ha_datastore_apd_recovery_action                      = "none"
      + ha_datastore_apd_response                             = "disabled"
      + ha_datastore_apd_response_delay                       = 180
      + ha_datastore_pdl_response                             = "disabled"
      + ha_enabled                                            = false
      + ha_heartbeat_datastore_policy                         = "allFeasibleDsWithUserPreference"
      + ha_host_isolation_response                            = "none"
      + ha_host_monitoring                                    = "enabled"
      + ha_vm_component_protection                            = "enabled"
      + ha_vm_dependency_restart_condition                    = "none"
      + ha_vm_failure_interval                                = 30
      + ha_vm_maximum_failure_window                          = -1
      + ha_vm_maximum_resets                                  = 3
      + ha_vm_minimum_uptime                                  = 120
      + ha_vm_monitoring                                      = "vmMonitoringDisabled"
      + ha_vm_restart_priority                                = "medium"
      + ha_vm_restart_timeout                                 = 600
      + host_cluster_exit_timeout                             = 3600
      + host_system_ids                                       = [
          + "host-18",
          + "host-24",
          + "host-9",
        ]
      + id                                                    = (known after apply)
      + name                                                  = "vsan-cluster1"
      + proactive_ha_automation_level                         = "Manual"
      + proactive_ha_moderate_remediation                     = "QuarantineMode"
      + proactive_ha_severe_remediation                       = "QuarantineMode"
      + resource_pool_id                                      = (known after apply)
      + vsan_compression_enabled                              = false
      + vsan_dedup_enabled                                    = false
      + vsan_dit_encryption_enabled                           = false
      + vsan_dit_rekey_interval                               = (known after apply)
      + vsan_enabled                                          = true
      + vsan_network_diagnostic_mode_enabled                  = false
      + vsan_performance_enabled                              = false
      + vsan_unmap_enabled                                    = false
      + vsan_verbose_mode_enabled                             = false

      + vsan_disk_group {
          + cache   = (known after apply)
          + storage = (known after apply)
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster1: Creating...
vsphere_compute_cluster.vsan-cluster0: Creating...
vsphere_compute_cluster.vsan-cluster1: Still creating... [10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [20s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [20s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [30s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [30s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [40s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [40s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [50s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [50s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m0s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m0s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m30s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m30s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m40s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m40s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [1m50s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [1m50s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m0s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m0s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m30s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m30s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m40s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m40s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still creating... [2m50s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still creating... [2m50s elapsed]
vsphere_compute_cluster.vsan-cluster1: Creation complete after 2m52s [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Creation complete after 2m57s [id=domain-c28]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
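
For context, here is a minimal sketch of the configuration these runs exercise (two three-host vSAN clusters). The data source and resource addresses match the log output above; the datacenter and host names are hypothetical placeholders, and the `vsan_disk_group` devices are left unset so they are computed at apply time, as the plan output shows.

```hcl
data "vsphere_datacenter" "datacenter" {
  name = "dc-01" # hypothetical name
}

data "vsphere_host" "hosts0" {
  count         = 3
  name          = "esxi-0${count.index + 1}.example.com" # hypothetical host names
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

resource "vsphere_compute_cluster" "vsan-cluster0" {
  name            = "vsan-cluster0"
  datacenter_id   = data.vsphere_datacenter.datacenter.id
  host_system_ids = data.vsphere_host.hosts0[*].id

  vsan_enabled = true
}

# vsan-cluster1 is declared identically against data.vsphere_host.hosts1.
```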

Enable vSphere High Availability: OK
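
The diff below comes from flipping HA on in both cluster blocks; roughly, with all other arguments as in the base sketch above:

```hcl
resource "vsphere_compute_cluster" "vsan-cluster0" {
  # ... other arguments as in the base configuration ...
  ha_enabled = true
}
```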

terraform plan                
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[1]: Read complete after 1s [id=host-18]
data.vsphere_host.hosts0[2]: Read complete after 1s [id=host-17]
data.vsphere_host.hosts1[2]: Read complete after 1s [id=host-9]
data.vsphere_host.hosts0[0]: Read complete after 1s [id=host-15]
data.vsphere_host.hosts1[0]: Read complete after 1s [id=host-24]
data.vsphere_host.hosts0[1]: Read complete after 1s [id=host-12]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
      ~ ha_enabled                                            = false -> true
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
      ~ ha_enabled                                            = false -> true
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these
actions if you run "terraform apply" now.terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts0[2]: Read complete after 1s [id=host-17]
data.vsphere_host.hosts1[2]: Read complete after 1s [id=host-9]
data.vsphere_host.hosts0[1]: Read complete after 1s [id=host-12]
data.vsphere_host.hosts0[0]: Read complete after 1s [id=host-15]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]
data.vsphere_host.hosts1[1]: Read complete after 2s [id=host-18]
data.vsphere_host.hosts1[0]: Read complete after 2s [id=host-24]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
      ~ ha_enabled                                            = false -> true
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
      ~ ha_enabled                                            = false -> true
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster0: Modifying... [id=domain-c28]
vsphere_compute_cluster.vsan-cluster1: Modifying... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 20s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Modifications complete after 28s [id=domain-c28]
vsphere_compute_cluster.vsan-cluster1: Modifications complete after 29s [id=domain-c29]

Apply complete! Resources: 0 added, 2 changed, 0 destroyed.

Enable Deduplication without Compression: OK (Fails as Expected)
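
This negative test turns on deduplication while leaving compression off, which the provider now rejects at apply time. A sketch of the offending change:

```hcl
resource "vsphere_compute_cluster" "vsan-cluster0" {
  # ... other arguments as in the base configuration ...
  vsan_dedup_enabled = true
  # vsan_compression_enabled remains false, so the apply fails with
  # "vsan compression must be enabled if vsan dedup is enabled"
}
```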

terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts1[2]: Read complete after 2s [id=host-9]
data.vsphere_host.hosts1[0]: Read complete after 2s [id=host-24]
data.vsphere_host.hosts1[1]: Read complete after 2s [id=host-18]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]
data.vsphere_host.hosts0[0]: Read complete after 2s [id=host-15]
data.vsphere_host.hosts0[2]: Read complete after 2s [id=host-17]
data.vsphere_host.hosts0[1]: Read complete after 2s [id=host-12]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
      ~ vsan_dedup_enabled                                    = false -> true
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
      ~ vsan_dedup_enabled                                    = false -> true
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster1: Modifying... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Modifying... [id=domain-c28]

│ Error: vsan compression must be enabled if vsan dedup is enabled

│   with vsphere_compute_cluster.vsan-cluster0,
│   on main.tf line 72, in resource "vsphere_compute_cluster" "vsan-cluster0":
│   72: resource "vsphere_compute_cluster" "vsan-cluster0" {



│ Error: vsan compression must be enabled if vsan dedup is enabled

│   with vsphere_compute_cluster.vsan-cluster1,
│   on main.tf line 89, in resource "vsphere_compute_cluster" "vsan-cluster1":
│   89: resource "vsphere_compute_cluster" "vsan-cluster1" {

Enable Deduplication with Compression: OK

Note: There appeared to be a race condition during the disk format update (the "General vSAN error" below), but the change applied cleanly on re-apply. I do not think we should hold up this PR for that.
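
Enabling both flags together is the valid combination; the change amounts to:

```hcl
resource "vsphere_compute_cluster" "vsan-cluster0" {
  # ... other arguments as in the base configuration ...
  vsan_dedup_enabled       = true
  vsan_compression_enabled = true
}
```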

terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 1s [id=datacenter-3]
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts1[1]: Read complete after 1s [id=host-18]
data.vsphere_host.hosts1[0]: Read complete after 1s [id=host-24]
data.vsphere_host.hosts0[1]: Read complete after 1s [id=host-12]
data.vsphere_host.hosts1[2]: Read complete after 1s [id=host-9]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]
data.vsphere_host.hosts0[2]: Read complete after 1s [id=host-17]
data.vsphere_host.hosts0[0]: Read complete after 1s [id=host-15]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
      ~ vsan_compression_enabled                              = false -> true
      ~ vsan_dedup_enabled                                    = false -> true
        # (56 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
      ~ vsan_compression_enabled                              = false -> true
      ~ vsan_dedup_enabled                                    = false -> true
        # (56 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster0: Modifying... [id=domain-c28]
vsphere_compute_cluster.vsan-cluster1: Modifying... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 20s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 30s elapsed]

│ Error: cannot apply vsan service on cluster 'vsan-cluster0': General vSAN error.

│   with vsphere_compute_cluster.vsan-cluster0,
│   on main.tf line 72, in resource "vsphere_compute_cluster" "vsan-cluster0":
│   72: resource "vsphere_compute_cluster" "vsan-cluster0" {

│ Error: cannot apply vsan service on cluster 'vsan-cluster1': General vSAN error.

│   with vsphere_compute_cluster.vsan-cluster1,
│   on main.tf line 89, in resource "vsphere_compute_cluster" "vsan-cluster1":
│   89: resource "vsphere_compute_cluster" "vsan-cluster1" {

terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[2]: Read complete after 2s [id=host-9]
data.vsphere_host.hosts0[2]: Read complete after 2s [id=host-17]
data.vsphere_host.hosts1[0]: Read complete after 2s [id=host-24]
data.vsphere_host.hosts0[1]: Read complete after 2s [id=host-12]
data.vsphere_host.hosts0[0]: Read complete after 2s [id=host-15]
data.vsphere_host.hosts1[1]: Read complete after 2s [id=host-18]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no
changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Enable vSAN HCI Mesh between vSAN Clusters: OK
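
HCI Mesh is configured by cross-mounting each cluster's vSAN datastore on the other cluster via `vsan_remote_datastore_ids`. The data source addresses below match the logs; the datastore names are hypothetical placeholders. A sketch for one direction:

```hcl
data "vsphere_datastore" "vsan-ds1" {
  name          = "vsanDatastore1" # hypothetical name; the datastore backing vsan-cluster1
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

resource "vsphere_compute_cluster" "vsan-cluster0" {
  # ... other arguments as in the base configuration ...
  vsan_remote_datastore_ids = [data.vsphere_datastore.vsan-ds1.id]
}

# vsan-cluster1 mounts data.vsphere_datastore.vsan-ds0 symmetrically.
```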

terraform plan                
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_datastore.vsan-ds1: Reading...
data.vsphere_datastore.vsan-ds0: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_datastore.vsan-ds1: Read complete after 1s [id=datastore-34]
data.vsphere_datastore.vsan-ds0: Read complete after 1s [id=datastore-33]
data.vsphere_host.hosts0[1]: Read complete after 1s [id=host-12]
data.vsphere_host.hosts1[2]: Read complete after 1s [id=host-9]
data.vsphere_host.hosts1[1]: Read complete after 1s [id=host-18]
data.vsphere_host.hosts0[0]: Read complete after 1s [id=host-15]
data.vsphere_host.hosts0[2]: Read complete after 1s [id=host-17]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]
data.vsphere_host.hosts1[0]: Read complete after 1s [id=host-24]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
      ~ vsan_remote_datastore_ids                             = [
          + "datastore-34",
        ]
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
      ~ vsan_remote_datastore_ids                             = [
          + "datastore-33",
        ]
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these
actions if you run "terraform apply" now.

terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_datastore.vsan-ds1: Reading...
data.vsphere_datastore.vsan-ds0: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_datastore.vsan-ds1: Read complete after 1s [id=datastore-34]
data.vsphere_datastore.vsan-ds0: Read complete after 1s [id=datastore-33]
data.vsphere_host.hosts0[0]: Read complete after 1s [id=host-15]
data.vsphere_host.hosts1[2]: Read complete after 1s [id=host-9]
data.vsphere_host.hosts0[1]: Read complete after 1s [id=host-12]
data.vsphere_host.hosts0[2]: Read complete after 1s [id=host-17]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]
data.vsphere_host.hosts1[0]: Read complete after 2s [id=host-24]
data.vsphere_host.hosts1[1]: Read complete after 2s [id=host-18]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
      ~ vsan_remote_datastore_ids                             = [
          + "datastore-34",
        ]
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
      ~ vsan_remote_datastore_ids                             = [
          + "datastore-33",
        ]
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster0: Modifying... [id=domain-c28]
vsphere_compute_cluster.vsan-cluster1: Modifying... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 30s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 30s elapsed]
vsphere_compute_cluster.vsan-cluster1: Modifications complete after 37s [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Modifications complete after 37s [id=domain-c28]

Apply complete! Resources: 0 added, 2 changed, 0 destroyed.

Enable Performance Service on vSAN Clusters + Verbose Mode and Network Diagnostic Mode: OK
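
The performance service and its two diagnostic options are independent booleans on the cluster resource; this step enables all three:

```hcl
resource "vsphere_compute_cluster" "vsan-cluster0" {
  # ... other arguments as in the base configuration ...
  vsan_performance_enabled             = true
  vsan_verbose_mode_enabled            = true
  vsan_network_diagnostic_mode_enabled = true
}
```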

terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 1s [id=datacenter-3]
data.vsphere_datastore.vsan-ds0: Reading...
data.vsphere_datastore.vsan-ds1: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_datastore.vsan-ds1: Read complete after 0s [id=datastore-34]
data.vsphere_datastore.vsan-ds0: Read complete after 1s [id=datastore-33]
data.vsphere_host.hosts0[2]: Read complete after 1s [id=host-17]
data.vsphere_host.hosts0[0]: Read complete after 1s [id=host-15]
data.vsphere_host.hosts1[2]: Read complete after 1s [id=host-9]
data.vsphere_host.hosts1[1]: Read complete after 1s [id=host-18]
data.vsphere_host.hosts0[1]: Read complete after 1s [id=host-12]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]
data.vsphere_host.hosts1[0]: Read complete after 1s [id=host-24]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
      ~ vsan_network_diagnostic_mode_enabled                  = false -> true
      ~ vsan_performance_enabled                              = false -> true
      ~ vsan_verbose_mode_enabled                             = false -> true
        # (55 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
      ~ vsan_network_diagnostic_mode_enabled                  = false -> true
      ~ vsan_performance_enabled                              = false -> true
      ~ vsan_verbose_mode_enabled                             = false -> true
        # (55 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster0: Modifying... [id=domain-c28]
vsphere_compute_cluster.vsan-cluster1: Modifying... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 20s elapsed]
vsphere_compute_cluster.vsan-cluster1: Modifications complete after 28s [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Modifications complete after 28s [id=domain-c28]

Apply complete! Resources: 0 added, 2 changed, 0 destroyed.

Disable vSAN HCI Mesh Between vSAN Clusters: OK
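
Unmounting the remote datastores reverses the mesh, which also clears the way for the data-in-transit encryption test below, since the two features cannot be enabled together. A sketch:

```hcl
resource "vsphere_compute_cluster" "vsan-cluster0" {
  # ... other arguments as in the base configuration ...
  vsan_remote_datastore_ids = [] # or drop the argument entirely
}
```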

terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_datastore.vsan-ds0: Reading...
data.vsphere_datastore.vsan-ds1: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_datastore.vsan-ds1: Read complete after 1s [id=datastore-34]
data.vsphere_datastore.vsan-ds0: Read complete after 1s [id=datastore-33]
data.vsphere_host.hosts1[0]: Read complete after 2s [id=host-24]
data.vsphere_host.hosts0[1]: Read complete after 2s [id=host-12]
data.vsphere_host.hosts0[0]: Read complete after 2s [id=host-15]
data.vsphere_host.hosts1[1]: Read complete after 2s [id=host-18]
data.vsphere_host.hosts0[2]: Read complete after 2s [id=host-17]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]
data.vsphere_host.hosts1[2]: Read complete after 2s [id=host-9]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
      ~ vsan_remote_datastore_ids                             = [
          - "datastore-34",
        ]
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
      ~ vsan_remote_datastore_ids                             = [
          - "datastore-33",
        ]
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster0: Modifying... [id=domain-c28]
vsphere_compute_cluster.vsan-cluster1: Modifying... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 10s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 30s elapsed]
vsphere_compute_cluster.vsan-cluster0: Modifications complete after 31s [id=domain-c28]

Apply complete! Resources: 0 added, 2 changed, 0 destroyed.

Enable Data-in-Transit Encryption on vSAN Clusters: OK
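
With the mesh removed, data-in-transit encryption can be enabled. The rekey interval is in minutes (the provider default is 1440, i.e. daily); the 2160 here matches the plan below:

```hcl
resource "vsphere_compute_cluster" "vsan-cluster0" {
  # ... other arguments as in the base configuration ...
  vsan_dit_encryption_enabled = true
  vsan_dit_rekey_interval     = 2160 # minutes
}
```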

terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_datastore.vsan-ds1: Reading...
data.vsphere_host.hosts1[0]: Reading...
data.vsphere_host.hosts1[2]: Reading...
data.vsphere_host.hosts1[1]: Reading...
data.vsphere_host.hosts0[0]: Reading...
data.vsphere_datastore.vsan-ds0: Reading...
data.vsphere_host.hosts0[1]: Reading...
data.vsphere_host.hosts0[2]: Reading...
data.vsphere_datastore.vsan-ds1: Read complete after 1s [id=datastore-34]
data.vsphere_datastore.vsan-ds0: Read complete after 1s [id=datastore-33]
data.vsphere_host.hosts0[0]: Read complete after 1s [id=host-15]
data.vsphere_host.hosts1[2]: Read complete after 1s [id=host-9]
data.vsphere_host.hosts0[1]: Read complete after 1s [id=host-12]
data.vsphere_host.hosts1[0]: Read complete after 1s [id=host-24]
data.vsphere_host.hosts1[1]: Read complete after 1s [id=host-18]
vsphere_compute_cluster.vsan-cluster1: Refreshing state... [id=domain-c29]
data.vsphere_host.hosts0[2]: Read complete after 1s [id=host-17]
vsphere_compute_cluster.vsan-cluster0: Refreshing state... [id=domain-c28]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # vsphere_compute_cluster.vsan-cluster0 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster0" {
        id                                                    = "domain-c28"
        name                                                  = "vsan-cluster0"
        tags                                                  = []
      ~ vsan_dit_rekey_interval                               = 0 -> 2160
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # vsphere_compute_cluster.vsan-cluster1 will be updated in-place
  ~ resource "vsphere_compute_cluster" "vsan-cluster1" {
        id                                                    = "domain-c29"
        name                                                  = "vsan-cluster1"
        tags                                                  = []
      ~ vsan_dit_rekey_interval                               = 0 -> 2160
        # (57 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
vsphere_compute_cluster.vsan-cluster1: Modifying... [id=domain-c29]
vsphere_compute_cluster.vsan-cluster0: Modifying... [id=domain-c28]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 10s elapsed]
vsphere_compute_cluster.vsan-cluster1: Still modifying... [id=domain-c29, 20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Still modifying... [id=domain-c28, 20s elapsed]
vsphere_compute_cluster.vsan-cluster0: Modifications complete after 28s [id=domain-c28]
vsphere_compute_cluster.vsan-cluster1: Modifications complete after 30s [id=domain-c29]

Apply complete! Resources: 0 added, 2 changed, 0 destroyed.

Note: This change also resolves GH-1205.

@tenthirtyam tenthirtyam left a comment

@appilon - I reviewed the markdown once more and left three suggestions: correcting the feature name for vSAN HCI Mesh, and correcting the default value for the data-in-transit encryption rekey interval from 1400 to 1440.

Co-authored-by: Ryan Johnson <johnsonryan@vmware.com>
@appilon appilon merged commit 3c8d02e into main Feb 2, 2023

github-actions bot commented Feb 9, 2023

This functionality has been released in v2.3.0 of the Terraform Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!

@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 11, 2023