
Can add & remove node to kubernetes cluster from terraform - TC388 #243

Closed
dmitri-alleo opened this issue May 17, 2022 · 1 comment

@dmitri-alleo

Updating existing Kubernetes cluster

Environment

Testnet

Terraform Version

v1.1.7

TestLodge

https://app.testlodge.com/a/26076/projects/40893/runs/638489/run?selected=27207982

Affected Resource(s)

  • k8s

Terraform Configuration Files

resource "grid_network" "net1" {
  nodes         = [3, 5]
  ip_range      = "10.1.0.0/16"
  name          = "network12346"
  description   = "newer network"
  add_wg_access = true
}

resource "grid_kubernetes" "k8s1" {
  network_name   = grid_network.net1.name
  nodes_ip_range = grid_network.net1.nodes_ip_range
  token          = "12345678910122"
  ssh_key        = var.openssh_key
  master {
    disk_size = 23
    node = 3
    name = "mr"
    cpu = 2
    publicip = true
    memory = 2048
  }
  workers {
    disk_size = 15
    node = 3
    name = "w0"
    cpu = 2
    memory = 2048
  }
  workers {
    disk_size = 14
    node = 3
    name = "w2"
    cpu = 1 
    memory = 2048
  }
  workers {
    disk_size = 13
    node = 5
    name = "w3"
    cpu = 1
    memory = 2048
  }

  workers {
    disk_size = 12
    node = 5
    name = "w4"
    cpu = 1
    memory = 2048
  }
}
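For context, a minimal sketch of the kind of change that appears to trigger the error. This is an assumption, since the report only shows the final configuration: the cluster would have initially been deployed on node 3 only, and the update adds node 5 to the network and places new workers on it.

```diff
 resource "grid_network" "net1" {
-    nodes = [3]
+    nodes = [3, 5]
     ...
 }

 resource "grid_kubernetes" "k8s1" {
   # nodes_ip_range is computed from the network; adding node 5
   # introduces a new map key ("5") that is only known during apply.
   nodes_ip_range = grid_network.net1.nodes_ip_range
   ...
+  workers {
+    node = 5
+    ...
+  }
 }
```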

Expected Behavior

What should have happened?
The cluster should be updated in place, with the affected workers moved to the other node.

Actual Behavior

What actually happened?

│ Error: Provider produced inconsistent final plan

│ When expanding the plan for grid_kubernetes.k8s1 to include new values learned so far during apply, provider "registry.terraform.io/threefoldtech/grid" produced an invalid new value for .nodes_ip_range: new element "5" has
│ appeared.

│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply -parallelism=1 -var-file=../qa.tfvars

Important Factoids

If the cluster is destroyed first and then provisioned again, it works as expected.
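The comment below notes that applying again also works around the error. A sketch of that sequence, reusing the reproduction command from above (the assumption being that the second apply starts from state that already includes the new node):

```sh
# First apply fails with "Provider produced inconsistent final plan"
terraform apply -parallelism=1 -var-file=../qa.tfvars

# Re-running the same apply is reported to complete the update
terraform apply -parallelism=1 -var-file=../qa.tfvars
```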

@OmarElawady
Contributor

Duplicate of #13

By the way, the worker won't be "moved"; it will be destroyed and recreated from scratch, which, depending on the cluster workload, may be an issue.

A workaround, as mentioned in that issue, is to run apply again.
