
[bitnami/postgresql] Feature Request: Allow recovery by feature flag #77295

Open · FuxMak opened this issue Feb 11, 2025 · 3 comments · May be fixed by #77502
FuxMak commented Feb 11, 2025

Name and Version

bitnami/postgresql-17.2.0-debian-12

What is the problem this feature will solve?

In the current Dockerfile for the Bitnami PostgreSQL container, there is no way to let the database server perform a restore using the recovery.signal file, because the file is removed during container startup. For a restored database, that file would allow PostgreSQL's native recovery mechanism to be used.

This happens in the function postgresql_clean_from_restart and cannot be disabled.

In our case, using wal-g, the base backup is pulled from S3 storage in an InitContainer, and the required WAL files are pulled via the native restore_command (also from S3). We use the official Bitnami PostgreSQL Helm chart with an extendedConfiguration containing the archive and recovery settings. The InitContainer restores a backup to the persistent volume and creates the recovery.signal file, but that file gets cleaned up during container startup.

Here are the snippets from our proof-of-concept:

```yaml
# primary.extendedConfiguration
extendedConfiguration: |
  archive_mode = on
  archive_command = 'wal-g wal-push %p'
  archive_timeout = 60
  restore_command = 'wal-g wal-fetch %f %p'
```
```shell
# Restore script
...

perform_restore() {
    log "Starting restore to ${DATA_DIR}..."
    wal-g backup-fetch "$DATA_DIR" "${BACKUP_NAME:-LATEST}" || {
        log "Restore failed"
        return 1
    }
    touch "${DATA_DIR}/recovery.signal"
    chown -R 1001:1001 "$DATA_DIR"
    chmod 0660 "$DATA_DIR"/*
    chmod u=rwx,g=rwxs,o= "$DATA_DIR"/*/
    log "Restore completed"
    return 0
}
```
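To illustrate the problem in isolation, here is a minimal stand-alone sketch. The scratch DATA_DIR and the simplified clean_from_restart function are hypothetical stand-ins for demonstration, not the actual Bitnami scripts:

```shell
set -euo pipefail

# Hypothetical scratch directory standing in for the PostgreSQL data dir
DATA_DIR="$(mktemp -d)"

# Simplified stand-in for the current cleanup logic:
# recovery.signal is unconditionally in the removal list.
clean_from_restart() {
    local -r -a files=(
        "$DATA_DIR/postmaster.pid"
        "$DATA_DIR/standby.signal"
        "$DATA_DIR/recovery.signal"
    )
    for file in "${files[@]}"; do
        if [[ -f "$file" ]]; then
            rm "$file"
        fi
    done
}

# The InitContainer restores a backup and creates recovery.signal...
touch "$DATA_DIR/recovery.signal"

# ...but the entrypoint cleanup removes it before postgres starts,
# so the server never enters recovery mode.
clean_from_restart

RESULT="kept"
if [[ ! -f "$DATA_DIR/recovery.signal" ]]; then
    RESULT="removed"
fi
echo "recovery.signal: $RESULT"

rm -rf "$DATA_DIR"
```

Running this prints `recovery.signal: removed`, which is exactly the behavior that breaks the InitContainer-based restore flow described above.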

What is the feature you are proposing to solve the problem?

The following is a quick proof-of-concept showing how a flag/environment variable could be implemented in order to enable a potential restore. Being controlled by an optional environment variable would not sacrifice stability, while enabling the feature when needed.

```shell
# Adjusted function
postgresql_clean_from_restart() {
    # local -r -a files=(
    # Removed read-only attribute
    local -a files=(
        "$POSTGRESQL_DATA_DIR"/postmaster.pid
        "$POSTGRESQL_DATA_DIR"/standby.signal
    )

    # "$POSTGRESQL_DATA_DIR"/recovery.signal
    # Replaced static entry with flag
    if [[ -z "${POSTGRESQL_PERFORM_RESTORE:-}" ]]; then
        files+=("$POSTGRESQL_DATA_DIR"/recovery.signal)
    fi

    for file in "${files[@]}"; do
        if [[ -f "$file" ]]; then
            info "Cleaning stale $file file"
            rm "$file"
        fi
    done
}
```
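The proposed gating can be checked in isolation with a small sketch. The scratch directory and the POSTGRESQL_PERFORM_RESTORE flag name follow the proof-of-concept above; the `info` logging helper from the Bitnami scripts is replaced by a plain `echo` so the snippet is self-contained:

```shell
set -euo pipefail

# Scratch directory standing in for the real $POSTGRESQL_DATA_DIR
POSTGRESQL_DATA_DIR="$(mktemp -d)"

# Proposed cleanup: recovery.signal is only removed when the
# POSTGRESQL_PERFORM_RESTORE flag is NOT set.
postgresql_clean_from_restart() {
    local -a files=(
        "$POSTGRESQL_DATA_DIR"/postmaster.pid
        "$POSTGRESQL_DATA_DIR"/standby.signal
    )
    if [[ -z "${POSTGRESQL_PERFORM_RESTORE:-}" ]]; then
        files+=("$POSTGRESQL_DATA_DIR"/recovery.signal)
    fi
    for file in "${files[@]}"; do
        if [[ -f "$file" ]]; then
            echo "Cleaning stale $file file"
            rm "$file"
        fi
    done
}

# Case 1: flag set -> recovery.signal survives the cleanup
touch "$POSTGRESQL_DATA_DIR/recovery.signal"
export POSTGRESQL_PERFORM_RESTORE=yes
postgresql_clean_from_restart
WITH_FLAG="removed"
if [[ -f "$POSTGRESQL_DATA_DIR/recovery.signal" ]]; then
    WITH_FLAG="kept"
fi

# Case 2: flag unset -> recovery.signal is cleaned up as before
unset POSTGRESQL_PERFORM_RESTORE
postgresql_clean_from_restart
WITHOUT_FLAG="kept"
if [[ ! -f "$POSTGRESQL_DATA_DIR/recovery.signal" ]]; then
    WITHOUT_FLAG="removed"
fi

echo "with flag: $WITH_FLAG, without flag: $WITHOUT_FLAG"
```

With the flag exported the file survives, and without it the existing cleanup behavior is unchanged, which is the backwards-compatibility argument for making the feature opt-in.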

What alternatives have you considered?

For a direct restore of the database on Kubernetes without using any of the available operator-based solutions, this seems to be the only viable option for a point-in-time recovery.

@github-actions bot added the triage (Triage is needed) label on Feb 11, 2025
@carrodher (Member) commented:

Thank you for bringing this issue to our attention. We appreciate your involvement! If you're interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.

Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.

@FuxMak (Author) commented Feb 14, 2025:

@carrodher I've created PR #77502 with the small feature flag attached. Thanks for encouraging me to submit a PR; I hope it is sufficient!

@carrodher (Member) commented:

Thank you for opening this issue and submitting the associated Pull Request. Our team will review and provide feedback. Once the PR is merged, the issue will automatically close.

Your contribution is greatly appreciated!
