
UpdateHistoricoAlerta

UpdateHistoricoAlerta is a tool that automates copying the SQL script files generated by AlertTools and sending them to a server for database updates. It also deploys incidence map images to the AlertaDengue homepage.

Table of Contents

  • Playbooks
  • Get Started!
  • Usage (Makim)

Playbooks

historico-alert-prepare-hosts.yaml

This playbook prepares the hosts by creating the required directories and then including another playbook for the remaining tasks. It runs against 'localhost' to perform the initial setup.

Playbook Tasks:

  1. Create Directories: This task checks whether the output directory (/tmp/sql/) exists on the local machine (localhost) and creates it if it does not, ensuring the required directory structure is in place.
  2. Include a Play for the Tasks: This task includes another playbook, historico-alert-update.yaml, which contains the specific tasks for updating historical alert data. This is done with the ansible.builtin.import_playbook module, effectively extending the run with the plays from the included file (see the sketch after the note below).

Variables:

  • username: This variable is set to the username of the user running the playbook on the local machine.

Note: The playbook serves as a preparation step before executing the more complex tasks related to updating historical alert data, which are defined in the included playbook (historico-alert-update.yaml).
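
A minimal sketch of what this preparation playbook might look like; the directory path and the included playbook name come from the descriptions above, while the module arguments and the username lookup are assumptions, not the repository's actual code:

- hosts: localhost
  vars:
    # Assumption: take the username from the local environment.
    username: "{{ lookup('env', 'USER') }}"
  tasks:
    - name: Create the output directory for SQL files
      ansible.builtin.file:
        path: /tmp/sql/
        state: directory
        mode: "0755"

# Extend the run with the tasks from the included playbook.
- name: Include the play that updates historical alert data
  ansible.builtin.import_playbook: historico-alert-update.yaml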


historico-alert-update.yaml

This playbook updates historical alert data on a cluster of hosts. It is designed to run tasks in parallel on the hosts in the 'cluster' group.

Playbook Tasks:

  1. Check SSH Availability: This task checks that SSH is listening on the specified desired_port for each host in the cluster before any other tasks run.
  2. Configure Ansible Port: Ansible dynamically sets the ansible_port based on the desired_port, ensuring that Ansible communicates with the correct SSH port on each host.
  3. Create Directories: This task ensures that the required directories (SRC_DIR) for SQL files exist on the local machine (localhost). If not, it creates the directories.
  4. Copy SQL Files: SQL files from the role's scripts directory (../ansible/roles/hist-alertas-update/scripts/sql/*.sql) are copied to the local SRC_DIR. These SQL files are later used to update historical alert data in the database.
  5. Copy Script to Server: The SQL file (SQL_FNAME) is copied to the destination directory (DEST_DIR) on each cluster node. Ownership and permissions are set for the copied file.
  6. Add Data to Database: This task runs the SQL script on the PostgreSQL database. It uses the psql command to connect to the database and execute the SQL file (see the sketch after the variable list below).
  7. Execute Script on Server: The playbook executes a Bash script (SCRIPT_NAME) on each cluster node. The script's execution is detached with nohup to ensure it continues running even if the SSH session ends. It is run with elevated privileges (administrador user).
  8. Remove SQL Scripts: SQL files are removed from both the local SRC_DIR and the /tmp/sql/ directory on each cluster node after execution.
  9. Register Upload in Log File: This task registers the update operation in a log file (system_update_epiweeks.log). It appends a timestamp and the SQL filename to the log, providing a record of the update process.

Variables:

  • desired_port: The desired SSH port for the cluster nodes.
  • yearweek: The year and week for which the historical alert data are being updated.
  • disease: The specific disease for which historical alert data are being updated.
  • db_user, db_name, db_password: Database connection details.
  • SRC_DIR, DEST_DIR: Directories for storing SQL files on the local and remote machines.
  • SQL_FNAME: The name of the SQL file generated based on the year, week, and disease.
  • SCRIPT_DIR: The directory of the Bash script to be executed on remote servers.
  • SCRIPT_NAME: The name of the script to be executed on the remote servers.
  • LOG_PATH: The path to the log file where update records are maintained.
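
A rough sketch of four of these tasks (the SSH check, the port configuration, the psql call, and the detached script execution), using the variable names documented above; the exact module arguments and command lines are assumptions, not the repository's actual code:

- name: Check SSH availability on the desired port
  ansible.builtin.wait_for:
    host: "{{ ansible_host | default(inventory_hostname) }}"
    port: "{{ desired_port }}"
    timeout: 30
  delegate_to: localhost

- name: Point Ansible at the verified SSH port
  ansible.builtin.set_fact:
    ansible_port: "{{ desired_port }}"

- name: Add data to the database
  # Assumption: psql reads the password from PGPASSWORD.
  ansible.builtin.shell: >
    PGPASSWORD={{ db_password }} psql -U {{ db_user }} -d {{ db_name }}
    -f {{ DEST_DIR }}/{{ SQL_FNAME }}

- name: Execute script on the server, detached from the SSH session
  ansible.builtin.shell: "nohup bash {{ SCRIPT_DIR }}/{{ SCRIPT_NAME }} > /dev/null 2>&1 &"
  become: true
  become_user: administrador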

incidence-map-upload.yaml

This playbook synchronizes map images from a source directory to a destination directory on the remote servers and then executes a script. It targets the 'cluster' group of hosts.

Playbook Tasks:

  1. Synchronize Map Images Directory: Uses the Ansible synchronize module to copy map images from the source directory (SRC_DIR) to the destination directory (DEST_DIR). The task is delegated to the local machine (localhost), which pushes the files to each remote host (see the sketch after the note below).
  2. Execute Script on Server: Executes a shell script named sync_incidence_maps.sh, located in SCRIPT_DIR, on the remote servers. The task runs with elevated privileges (become: true) as the 'administrador' user.
  3. Register Sent Images in the Log File: Appends an entry to a log file (system_update_epiweeks.log) under LOG_PATH. The entry includes the current timestamp and the name of the executed script (SCRIPT_NAME).

Variables:

  • SCRIPT_NAME: The name of the script to be executed on the remote servers.
  • SCRIPT_DIR: The directory path where the script is located.
  • ROLE_MAPS_DIR: The relative directory path to the role's map images directory.
  • SRC_DIR: The full path to the source directory containing map images to be synchronized.
  • DEST_DIR: The full path to the destination directory where map images will be copied.
  • LOG_PATH: The path to the directory where log files are stored.

Note: The playbook assumes the existence of the sync_incidence_maps.sh script in the specified directory and the availability of the necessary permissions to execute the script.
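
A hedged sketch of the first two tasks, assuming the ansible.posix.synchronize module and the variable names listed above; the exact arguments are illustrative:

- name: Synchronize map images directory
  ansible.posix.synchronize:
    src: "{{ SRC_DIR }}/"
    dest: "{{ DEST_DIR }}"
  delegate_to: localhost

- name: Execute the sync script on the server
  ansible.builtin.shell: "bash {{ SCRIPT_DIR }}/{{ SCRIPT_NAME }}"
  become: true
  become_user: administrador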


containers-system-update.yaml

This playbook updates the database and deploys container services by executing scripts on the server and logging the operations. It is designed for operations requiring database updates and container deployment across a cluster of hosts.

Playbook Tasks:

  1. Execute Update Script: Executes a script to update historical alert data in the database.
  2. Execute Staging Script: Deploys updated data to a staging environment for testing.
  3. Log Updates: Records the completion of the database update process in the system log.
  4. Log Deployment: Records the deployment of container services in the system log.

Variables:

  • SCRIPT_UPDATE_TABLES: The script for updating tables in the database.
  • SCRIPT_UPDATE_STAGING: The script for updating the staging environment.
  • SCRIPT_PROD_DIR: The production directory for AlertaDengue services.
  • SCRIPT_STAGING_DIR: The staging directory for AlertaDengue services.
  • LOG_PATH: The path to where log files are stored.

Note: This playbook runs the database updates and container deployments in sequence and logs both operations for audit and tracking purposes (see the sketch below).
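
A sketch of the script-execution and logging pattern, assuming the scripts are plain Bash files and that this playbook appends to the same system_update_epiweeks.log used by the other playbooks; both are assumptions:

- name: Execute update script
  ansible.builtin.shell: "bash {{ SCRIPT_PROD_DIR }}/{{ SCRIPT_UPDATE_TABLES }}"
  become: true

- name: Log updates
  # Assumption: facts are gathered, so ansible_date_time is available.
  ansible.builtin.lineinfile:
    path: "{{ LOG_PATH }}/system_update_epiweeks.log"
    line: "{{ ansible_date_time.iso8601 }} - tables updated with {{ SCRIPT_UPDATE_TABLES }}"
    create: true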


Get Started!

Installation steps

For development, we encourage you to use conda. If you are not familiar with it, we recommend mamba-forge, a combination of miniconda + conda-forge + mamba. You can download it from here: https://github.com/conda-forge/miniforge#mambaforge

Here's how to set up UpdateHistoricoAlerta in a local environment.

  1. Clone this repository locally:

$ git clone git@github.com:AlertaDengue/UpdateHistoricoAlerta.git

  2. Create a conda environment and activate it:

$ cd UpdateHistoricoAlerta
$ mamba env create --file conda/base.yaml
$ conda activate update-alertas

Usage (Makim)

To use Makim, you need to have makim installed on your system. You can execute the defined targets by running the following command in your project directory:

makim <target>
# to visualize all makim commands execute:
makim --help
# to install makim autocompletion execute:
makim --install-autocompletion

Replace <target> with one of the following available targets:

vault.create-vault-config:

  • Creates a new vault configuration file using ansible-vault create. This file is used for securely storing sensitive data.

vault.change-vault-config:

  • Edits the existing vault configuration file using ansible-vault edit. You can modify sensitive data securely.

vault.change-vault-passwd:

  • Changes the password of the vault configuration file for added security.
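
These three targets wrap the standard ansible-vault subcommands (create, edit, and rekey). Roughly equivalent manual invocations, assuming the vault file lives under ansible/config (the filename here is illustrative):

$ ansible-vault create ansible/config/vault.yaml
$ ansible-vault edit ansible/config/vault.yaml
$ ansible-vault rekey ansible/config/vault.yaml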

ansible.update-alertas:

  • Executes an Ansible playbook to update alerts. It activates the virtual environment, prompts for a vault password, and runs the historico-alert-prepare-hosts.yaml playbook with additional variables, such as yearweek and disease, specified via the -e option.

ansible.history:

  • Allows you to view the execution history of Ansible tasks. It activates the virtual environment and uses the ansible command to run a task (cat /var/log/ansible/system_update_epiweeks.log) on your servers to display the log.

ansible.sync-maps:

  • Executes an Ansible playbook to synchronize map images. Similar to the update-alertas target, it activates the virtual environment, prompts for a vault password, and runs the incidence-map-upload.yaml playbook to synchronize map images.

Example usage:

makim ansible.update-alertas --disease dengue --yearweek 202406 # Execute the playbook to update alerts.
makim ansible.sync-maps # Execute the playbook to synchronize the incidence map images.
makim vault.create-vault-config # See the vault-template in the "ansible/config" directory to configure your variables.
makim ansible.history --logfile incidence_maps_update.log # View the update history recorded in the given log file.
makim ansible.containers-system-update # Update the database and deploy the container services.
