This playbook requires root privileges or sudo.

Ansible must be installed on the control node ([What is Ansible](https://www.ansible.com/how-ansible-works/)?).

If `dcs_type: "consul"`, please install the consul role requirements on the control node:

`ansible-galaxy install -r roles/consul/requirements.yml`

### Port requirements
List of required TCP ports that must be open for the database cluster:

- `5432` (postgresql)
- `6432` (pgbouncer)
- `8008` (patroni rest api)
- `2379`, `2380` (etcd)

For the scheme "[Type A] PostgreSQL High-Availability with Load Balancing":

- `5000` (haproxy - (read/write) master)
- `5001` (haproxy - (read only) all replicas)
- `5002` (haproxy - (read only) synchronous replica only)
- `5003` (haproxy - (read only) asynchronous replicas only)
- `7000` (optional, haproxy stats)

For the scheme "[Type C] PostgreSQL High-Availability with Consul Service Discovery (DNS)":

- `8300` (Consul Server RPC)
- `8301` (Consul Serf LAN)
- `8302` (Consul Serf WAN)
- `8500` (Consul HTTP API)
- `8600` (Consul DNS server)
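If you manage host firewalls yourself, the ports listed above must be opened before deployment. Below is a minimal sketch of how this could be done with Ansible's `ansible.posix.firewalld` module, assuming your servers run firewalld; the play, the `postgres_cluster` host group, and the port list are illustrative and are not part of this playbook:

```yaml
# Illustrative only, not part of this playbook; assumes firewalld is running on the targets.
- name: Open base database cluster ports
  hosts: postgres_cluster   # hypothetical inventory group
  become: true
  tasks:
    - name: Allow the required TCP ports
      ansible.posix.firewalld:
        port: "{{ item }}/tcp"
        permanent: true
        immediate: true
        state: enabled
      loop: [5432, 6432, 8008, 2379, 2380]
```

Add the haproxy or Consul ports from the lists above if you deploy the corresponding scheme, and restrict each port to the hosts that actually need it (for example, `2379`/`2380` only on etcd nodes).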
- **Linux (Operating System)**:

Update the operating system on your target servers before deploying.

Make sure time synchronization is configured (NTP).
Specify `ntp_enabled: 'true'` and `ntp_servers` if you want to install and configure the ntp service.

- **DCS (Distributed Consensus Store)**:

Fast drives and a reliable network are the most important factors for the performance and stability of an etcd (or consul) cluster.

Avoid storing etcd (or consul) data on the same drive as other processes (such as the database) that use the disk subsystem intensively!
Store the etcd and postgresql data on **different** disks (see the `etcd_data_dir` and `consul_data_path` variables), and use SSD drives if possible.
See the [hardware recommendations](https://etcd.io/docs/v3.3/op-guide/hardware/) and [tuning](https://etcd.io/docs/v3.3/tuning/) guides.

It is recommended to deploy the DCS cluster on dedicated servers, separate from the database servers.

- **Placement of cluster members in different data centers**:

If you prefer a cross-data center setup, where the replicating databases are located in different data centers, etcd member placement becomes critical.

There are quite a lot of things to consider if you want to create a really robust etcd cluster, but there is one rule: *do not place all etcd members in your primary data center*. See some [examples](https://www.cybertec-postgresql.com/en/introduction-and-how-to-etcd-clusters-for-patroni/).

- **How to prevent data loss in case of autofailover (synchronous_modes)**:

For performance reasons, synchronous replication is disabled by default.

To minimize the risk of losing data on autofailover, you can configure the settings in the following way:
- `synchronous_mode: 'true'`
- `synchronous_mode_strict: 'true'`
- `synchronous_commit: 'on'` (or `'remote_apply'`)
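Taken together, these recommendations map to a handful of variables in `vars/main.yml`. A minimal sketch is shown below; the NTP server list and the data path are example values only, so adapt them to your environment:

```yaml
# vars/main.yml sketch: example values only, adapt to your environment
ntp_enabled: 'true'
ntp_servers:                      # example public pool servers
  - "0.pool.ntp.org"
  - "1.pool.ntp.org"

etcd_data_dir: "/var/lib/etcd"    # keep on a separate (SSD) disk from the PostgreSQL data

synchronous_mode: 'true'          # enable synchronous replication
synchronous_mode_strict: 'true'   # do not fall back to asynchronous replication if no synchronous standby is available
synchronous_commit: 'on'          # or 'remote_apply'
```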
0. [Install Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) on one control node (which could easily be a laptop)

```
sudo apt update && sudo apt install -y python3-pip sshpass git
pip3 install ansible
```

1. Download or clone this repository

```
git clone https://github.com/vitabaks/autobase.git
```

2. Go to the automation directory

```
cd autobase/automation
```

3. Install requirements on the control node

```
ansible-galaxy install --force -r requirements.yml
```

Note: If you plan to use Consul (`dcs_type: consul`), install the consul role requirements

```
ansible-galaxy install -r roles/consul/requirements.yml
```

4. Edit the inventory file

Specify the (non-public) IP addresses and connection settings (`ansible_user`, and `ansible_ssh_pass` or `ansible_ssh_private_key_file`) for your environment

```
nano inventory
```

5. Edit the variable file vars/[main.yml](./automation/vars/main.yml)

```
nano vars/main.yml
```

Minimum set of variables (a minimal example sketch is shown at the end of this section):
- `proxy_env` to download packages in environments without direct internet access (optional)
- `patroni_cluster_name`
- `postgresql_version`
- `postgresql_data_dir`
- `cluster_vip` to provide a single entry point for client access to the databases in the cluster (optional)
- `with_haproxy_load_balancing` to enable load balancing (optional)
- `dcs_type`: "etcd" (default) or "consul"

See the vars/[main.yml](./automation/vars/main.yml), [system.yml](./automation/vars/system.yml) and ([Debian.yml](./automation/vars/Debian.yml) or [RedHat.yml](./automation/vars/RedHat.yml)) files for more details.

6. Try to connect to the hosts

```
ansible all -m ping
```

7. Run the playbook:

```
ansible-playbook deploy_pgcluster.yml
```

#### Deploy Cluster with TimescaleDB

To deploy a PostgreSQL High-Availability Cluster with the [TimescaleDB](https://github.com/timescale/timescaledb) extension, add the `enable_timescale` variable:

Example:
```
ansible-playbook deploy_pgcluster.yml -e "enable_timescale=true"
```

[Demo](https://asciinema.org/a/251019?speed=5)

### How to start from scratch

If you need to start from the very beginning, you can use the playbook `remove_cluster.yml`.

Available variables:
- `remove_postgres`: stop the PostgreSQL service and remove data.
- `remove_etcd`: stop the etcd service and remove data.
- `remove_consul`: stop the Consul service and remove data.

Run the following command to remove specific components:

```bash
ansible-playbook remove_cluster.yml -e "remove_postgres=true remove_etcd=true"
```

This command will delete the specified components, allowing you to start a new installation from scratch.

:warning: **Caution:** be careful when running this command in a production environment.
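For reference, here is a minimal `vars/main.yml` sketch covering the set of variables listed in step 5 above. All values are illustrative examples only; pick the cluster name, version, paths, and addresses that fit your environment:

```yaml
# vars/main.yml sketch: example values only
patroni_cluster_name: "postgres-cluster-01"         # example cluster name
postgresql_version: "16"                            # example major version
postgresql_data_dir: "/var/lib/postgresql/16/main"  # example Debian-style data path
cluster_vip: "10.128.64.100"                        # optional: single entry point for clients
with_haproxy_load_balancing: false                  # optional: set to true for the Type A scheme
dcs_type: "etcd"                                    # "etcd" (default) or "consul"
```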