I have an app with a Node.js backend. This backend was running on Google Cloud's Compute Engine, but recently I migrated to Google Kubernetes Engine (GKE). It works. I use Google Cloud SQL (PostgreSQL) as my database, but I want more control over my database, so I'm planning to use Crunchy Data's postgres-operator to set up a PostgreSQL cluster on my Kubernetes cluster.
Yesterday I created a Kubernetes test cluster with a small Node.js app and installed the postgres-operator. I was able to fetch data from the PostgreSQL database using the Node.js app, so I got it working. But now, before I move my production environment to a PostgreSQL cluster on GKE, I have a few questions, especially about the storage options.
Question 1
Which storage configuration should I use for GKE? I've created a storage class on Kubernetes containing the following information:
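(A sketch, assuming the standard GCE persistent disk provisioner; the name `ssd` is illustrative rather than the exact definition.)

```yaml
# Illustrative sketch of an SSD-backed storage class on GKE.
# The name "ssd" is an assumption for this example.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
# Optional: allows the resulting PVCs to be expanded later.
allowVolumeExpansion: true
```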
This storage class uses SSD persistent disks, but how do I couple this storage class to the postgres-operator? Is the following snippet the correct way to do this? And is ReadWriteOnce the correct access mode (since GKE persistent disks do not support ReadWriteMany)?
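(Sketched roughly, assuming a storage configuration named `gkestorage`, as referenced in Question 2 below, pointing at the `ssd` storage class.)

```yaml
# Sketch of a storage configuration block in pgo.yaml; names are assumptions.
Storage:
  gkestorage:
    AccessMode: ReadWriteOnce
    Size: 50G
    StorageType: dynamic
    StorageClass: ssd
```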
Side question: how do I make sure this configuration is used when creating a new cluster? I've created a custom pgo.yaml file with the settings I want to use. Should I just run `kubectl apply -f pgo.yaml` to update the default configuration before running `pgo create cluster mycluster`?
Question 2
The current size of my database is around 14 GB, which is not that much, but it's growing fast. I've set the size for gkestorage to 50G, but is there a way to 'upgrade' the size when 50G is no longer sufficient? Or is cloning the cluster into a new cluster with a bigger PVC size the only option?
I'm sorry for the multiple questions, but I can't find the appropriate answers in the documentation or here on GitHub in the issues. Thanks for understanding! Keep up the good work! ✌️
For Question #1: since Operator 4.5, you can leverage default storage classes, so if your SSD storage class is set as the default in Kubernetes, you can simply opt to use the default storage configuration.
For question #1b, you would set the primary_storage, replica_storage, backrest_storage, and backup_storage to the name of the storage configuration you have set up. You can find an example here:
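In rough form, assuming the storage configuration is named gkestorage as in the question above (the exact file depends on how you installed the Operator), those settings would look something like:

```yaml
# Sketch of the settings referenced above; "gkestorage" is the storage
# configuration name from the question. Adjust to your installation method.
primary_storage: "gkestorage"
replica_storage: "gkestorage"
backrest_storage: "gkestorage"
backup_storage: "gkestorage"
```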
For Question #2, you are correct: the best way to resize is to clone/restore. In theory, you could create a replica that uses a larger PVC by finagling some things with storage configurations (create a storage configuration with a larger size), fail over to that replica, demote the others, and then scale up from that replica (though you'd also have to update the pgcluster custom resource... as you can see, this is getting messy; a future release will make this much easier).
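For the clone/restore route, one possible shape, assuming your Operator version supports the `--restore-from` and `--pvc-size` flags on `pgo create cluster` (check `pgo create cluster --help` for your version):

```shell
# Create a new cluster from the existing cluster's pgBackRest repository,
# provisioned with a larger PVC, then point the application at it.
# Flag availability depends on the Operator version; treat this as a sketch.
pgo create cluster mycluster2 --restore-from=mycluster --pvc-size=100Gi
```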
Likewise, you could create a standby PostgreSQL cluster within a single Operator deployment (similar to what's outlined there), where the standby cluster has more resources. When you're ready to cut over, you shut down your existing cluster, promote the standby, and have your application point to that.
A future release will provide direct support for Kubernetes' ability to resize PVCs, but Operator 4.x maintains backwards compatibility with OpenShift 3.11, which uses Kubernetes 1.11 as its base.