GKE storage configuration #1568

Closed
leurs247 opened this issue May 23, 2020 · 1 comment
leurs247 commented May 23, 2020

I have an app with a Node.js backend. The backend was running on Google Cloud's Compute Engine, but I recently migrated it to Google Kubernetes Engine, and it works. I use Google Cloud SQL (PostgreSQL) as my database, but I want more control over my database, so I'm planning to use Crunchy Data's postgres-operator to set up a PostgreSQL cluster on my Kubernetes cluster.

Yesterday I created a Kubernetes test cluster with a small Node.js app and installed the postgres-operator. I was able to fetch data from the PostgreSQL database using the Node.js app, so I got it working. But before I move my production environment to a PostgreSQL cluster on GKE, I have a few questions, especially about the storage options.

Question 1
Which storage configuration do I have to use for GKE? I've created a StorageClass on Kubernetes with the following definition:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fastdbstorage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd

This storage class uses SSD persistent disks, but how do I couple it to the postgres-operator? Is the following snippet the correct way to do this? And is ReadWriteOnce the correct access mode (since GKE persistent disks don't support ReadWriteMany)?

gkestorage:
  AccessMode: ReadWriteOnce
  StorageType: dynamic
  StorageClass: fastdbstorage
  Size: 50G

Side question: how do I make sure this configuration is used when creating a new cluster? I've created a custom pgo.yaml file with the settings I want to use; should I just run kubectl apply -f pgo.yaml to update the default configuration before running pgo create cluster mycluster?

Question 2
The current size of my database is around 14 GB, which is not that much, but it's growing fast. I've set the size for gkestorage to 50G, but is there a way to 'upgrade' the size when 50G is no longer sufficient? Or is cloning the cluster into a new cluster with a bigger PVC the only option?

I'm sorry for the multiple questions, but I couldn't find the answers in the documentation or in the issues here on GitHub. Thanks for understanding! Keep up the good work! ✌️

jkatz (Contributor) commented Oct 5, 2020

Hi,

For Question #1, you would configure the storage class as part of the installation. Setting up storage is covered here: https://access.crunchydata.com/documentation/postgres-operator/latest/installation/configuration/#storage-settings

Since Operator 4.5, you can leverage default storage classes, so if the SSD storage class is set as the default in your Kubernetes cluster, you can opt to use the "default" storage configuration.

For question #1b, you would set the primary_storage, replica_storage, backrest_storage, and backup_storage to the name of the storage configuration you have set up. You can find an example here:

https://access.crunchydata.com/documentation/postgres-operator/4.5.0/quickstart/#configure-the-postgresql-operator-installer
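As an illustration (not copied from those docs), the installer values for a GKE SSD storage configuration could look roughly like the snippet below. The storage9_* slot, the gkestorage name, and the fastdbstorage class are assumptions for this example; check the linked configuration reference for the exact keys in your version.

# pick an unused storageN slot in the installer values (names are illustrative)
storage9_name: gkestorage
storage9_access_mode: ReadWriteOnce
storage9_size: 50G
storage9_type: dynamic
storage9_class: fastdbstorage   # the StorageClass created above

# point the cluster storage settings at that configuration
primary_storage: gkestorage
replica_storage: gkestorage
backup_storage: gkestorage
backrest_storage: gkestorage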

For Question #2, you are correct: the best way to resize is to clone/restore. In theory, you could create a replica that uses a larger PVC by finagling some things with storage configurations (create one with a larger size), fail over to that replica, demote the others, and then scale up from that replica (though you'd also have to update the pgcluster CRD... as you can see, this is getting messy; a future release will make this much easier).
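For the clone/restore route, a rough sketch with the pgo 4.x client could look like the following; the flag values, the cluster names, and the combination of --restore-from, --storage-config, and --pvc-size are assumptions for illustration, so verify them against pgo create cluster --help for your version.

# stand up a new cluster from the existing cluster's pgBackRest backups,
# this time requesting a larger PVC (names and flags are illustrative)
pgo create cluster mycluster-big \
  --restore-from=mycluster \
  --storage-config=gkestorage \
  --pvc-size=100G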

Likewise, you could create a standby PostgreSQL cluster within a single Operator deployment (similar to what's outlined there), where the standby cluster has more resources. When you're ready to cut over, you shut down your existing cluster, promote the standby, and point your application at it.
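If you go the standby route instead, the general flow with the pgo 4.x client is roughly the following; a standby in 4.x needs a pgBackRest repository both clusters can reach (typically S3), and the flags and names here are assumptions to illustrate the idea rather than a verified recipe.

# create a standby cluster that uses a larger storage configuration
# (illustrative names; assumes a shared S3-based pgBackRest repository)
pgo create cluster mycluster-standby --standby \
  --storage-config=gkestorage-large \
  --pgbackrest-storage-type=s3

# when ready to cut over: shut down the old primary, then promote the standby
pgo update cluster mycluster --shutdown
pgo update cluster mycluster-standby --promote-standby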

A future release will provide direct support for Kubernetes' ability to resize PVCs, but Operator 4.x maintains backwards compatibility with OpenShift 3.11, which uses Kubernetes 1.11 as its base.

Hope this helps!

jkatz closed this as completed Oct 5, 2020