Name and Version

bitnami/postgresql-17.2.0-debian-12

What is the problem this feature will solve?

In the current Dockerfile for the Bitnami/PostgreSQL container, there is no way to let the database server perform a restore using the recovery.signal file, because that file is removed during container startup. For a restored database, this file would allow PostgreSQL's native recovery mechanism to be used. The removal happens in the function postgresql_clean_from_restart and cannot be disabled.
In our case we use wal-g: the base backup is pulled from S3 storage in an InitContainer, and the required WAL files are pulled (also from S3) via the native restore_command. We used the official Bitnami PostgreSQL Helm chart with an extendedConfiguration containing the archive and recovery configuration. The InitContainer restores the backup to the persistent volume and creates the recovery.signal file, but that file is then cleaned up during container startup.
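For context, the archive and recovery settings passed via extendedConfiguration looked roughly like the following (an illustrative sketch, not our exact values; recovery target options are omitted):

```
# Sketch of the extendedConfiguration (illustrative values)
archive_mode = on
archive_command = 'wal-g wal-push %p'
restore_command = 'wal-g wal-fetch %f %p'
```

With this in place, PostgreSQL only enters recovery mode if recovery.signal exists in the data directory when the server starts, which is why the file being cleaned up breaks the restore.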
What is the feature you are proposing to solve the problem?
The below is just a quick proof-of-concept of how a flag/environment variable could be implemented in order to enable a potential restore. Being controlled by an optional environment variable would not sacrifice stability, while still enabling the feature when needed.
```bash
# Adjusted function
postgresql_clean_from_restart() {
    # local -r -a files=(   # Removed read-only attribute
    local -a files=(
        "$POSTGRESQL_DATA_DIR"/postmaster.pid
        "$POSTGRESQL_DATA_DIR"/standby.signal
    )
    # "$POSTGRESQL_DATA_DIR"/recovery.signal   # Replaced static entry with flag
    if [[ -z "${POSTGRESQL_PERFORM_RESTORE:-}" ]]; then
        files+=("$POSTGRESQL_DATA_DIR"/recovery.signal)
    fi
    for file in "${files[@]}"; do
        if [[ -f "$file" ]]; then
            info "Cleaning stale $file file"
            rm "$file"
        fi
    done
}
```
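A minimal, self-contained sketch of how the flag would behave; the `info` helper and the throwaway data directory are stand-ins for the real Bitnami library code, not part of the actual image:

```shell
#!/usr/bin/env bash
# Standalone demo of the proposed POSTGRESQL_PERFORM_RESTORE flag (sketch only)
set -euo pipefail

info() { echo "postgresql INFO ==> $*"; }

postgresql_clean_from_restart() {
    local -a files=(
        "$POSTGRESQL_DATA_DIR"/postmaster.pid
        "$POSTGRESQL_DATA_DIR"/standby.signal
    )
    # Only schedule recovery.signal for cleanup when no restore was requested
    if [[ -z "${POSTGRESQL_PERFORM_RESTORE:-}" ]]; then
        files+=("$POSTGRESQL_DATA_DIR"/recovery.signal)
    fi
    for file in "${files[@]}"; do
        if [[ -f "$file" ]]; then
            info "Cleaning stale $file file"
            rm "$file"
        fi
    done
}

# Exercise both paths against a throwaway data directory
POSTGRESQL_DATA_DIR="$(mktemp -d)"
touch "$POSTGRESQL_DATA_DIR"/recovery.signal

export POSTGRESQL_PERFORM_RESTORE=true
postgresql_clean_from_restart
[[ -f "$POSTGRESQL_DATA_DIR"/recovery.signal ]] && echo "kept with flag"

unset POSTGRESQL_PERFORM_RESTORE
postgresql_clean_from_restart
[[ ! -f "$POSTGRESQL_DATA_DIR"/recovery.signal ]] && echo "removed without flag"
```

With the flag set, recovery.signal survives the cleanup and PostgreSQL starts in recovery mode; without it, the current behavior is unchanged.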
What alternatives have you considered?
For a direct restore of the database on Kubernetes without using any of the available operator-based solutions, this seems to be the only viable option for a point-in-time recovery.
Thank you for bringing this issue to our attention. We appreciate your involvement! If you're interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.
Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.
Thank you for opening this issue and submitting the associated Pull Request. Our team will review and provide feedback. Once the PR is merged, the issue will automatically close.