This repository and walkthrough guide you through deploying HAProxy, Apache serving a Drupal 8 site, and a MySQL server on AWS, based on the other atlas-examples. This has since been merged into the atlas-examples repo, so this repo will be improved with new functionality rather than remaining just a basic generic example.
- Clone this repository
- Create an Atlas account
- Generate an Atlas token and save it as an environment variable:

```
export ATLAS_TOKEN=<your_token>
```
- In the Vagrantfile, the Packer files `haproxy.json`, `apache-php.json`, and `mysql.json`, the Terraform file `infrastructure.tf`, and the Consul upstart script `consul_client.conf`, you need to replace all instances of `<username>`, `YOUR_ATLAS_TOKEN`, `YOUR_SECRET_KEY`, and `YOUR_ACCESS_KEY` with your Atlas username, Atlas token, and AWS keys.
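If you'd rather script the substitutions than edit each file by hand, a minimal sketch with `sed` follows. The file paths are assumptions about this repo's layout (adjust them to your checkout), and the credential values are placeholders:

```shell
# Hypothetical credential values -- substitute your own.
ATLAS_USER="myuser"
ATLAS_TOKEN="token123"
AWS_ACCESS_KEY="AKIA-EXAMPLE"
AWS_SECRET_KEY="secret-example"

# Assumed locations of the files listed above; adjust to your checkout.
for f in Vagrantfile ops/HAProxy/haproxy.json ops/apache_php/apache-php.json \
         ops/mysql/mysql.json ops/terraform/infrastructure.tf consul_client.conf; do
  [ -f "$f" ] || continue   # skip files that are not present
  sed -i \
    -e "s|<username>|$ATLAS_USER|g" \
    -e "s|YOUR_ATLAS_TOKEN|$ATLAS_TOKEN|g" \
    -e "s|YOUR_ACCESS_KEY|$AWS_ACCESS_KEY|g" \
    -e "s|YOUR_SECRET_KEY|$AWS_SECRET_KEY|g" \
    "$f"
done
```

Take care not to commit real credentials after running this.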
Before jumping into configuration steps, it's helpful to have a mental model for how services connect and how the Atlas workflow fits in.
For HAProxy to work properly, it needs a real-time list of backend nodes to balance traffic between; in this example, that means a real-time list of healthy PHP nodes. To accomplish this, we use Consul and Consul Template. Any time a server is created, destroyed, or changes in health state, the HAProxy configuration is updated to match via the Consul Template `haproxy.ctmpl`. Pay close attention to the backend stanza:
```
backend webs
    balance roundrobin
    mode http{{range service "php.web"}}
    server {{.Node}} {{.Address}}:{{.Port}}{{end}}
```
Consul Template will query Consul for all web servers with the tag "php", and then iterate through the list to populate the HAProxy configuration. When rendered, `haproxy.cfg` will look like:
```
backend webs
    balance roundrobin
    mode http
    server node1 172.29.28.10:8888
    server node2 172.56.28.10:8888
```
This setup allows us to destroy and create backend servers at scale with confidence that the HAProxy configuration will always be up-to-date. You can think of Consul and Consul Template as the connective webbing between services.
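To make the render-and-reload cycle concrete, here is a sketch of how Consul Template might be invoked on the HAProxy node. The template and output paths, Consul address, and reload command are assumptions; the upstart scripts in this repo may wire it up differently:

```shell
# Re-render haproxy.ctmpl whenever the "web" service membership changes,
# then reload HAProxy so it picks up the new backend list.
consul-template \
  -consul 127.0.0.1:8500 \
  -template "/etc/haproxy/haproxy.ctmpl:/etc/haproxy/haproxy.cfg:service haproxy reload"
```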
The same pattern keeps Drupal pointed at the database: Consul Template will query Consul for all "database" servers with the tag "mysql", and then iterate through the list to populate the PHP/Drupal configuration. When rendered, `settings.php` will look like:
```php
$databases = array();
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'drupal',
  'username' => 'apache',
  'password' => 'password',
  'host' => '172.56.28.10',
  'prefix' => '',
);
```
This setup allows us to destroy and create the Apache+PHP nodes serving Drupal with confidence that their configuration will always be correct and that they will always write to the proper MySQL instances.
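The template that produces the `host` line above is not shown in this walkthrough; a minimal sketch of what that fragment might look like follows. The service name "database" and tag "mysql" are inferred from the description above, so check the actual template file in the repo:

```
'host' => '{{range service "mysql.database"}}{{.Address}}{{end}}',
```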
- For Consul Template to work for HAProxy, we first need to create a Consul cluster. You can follow this walkthrough to guide you through that process.
- Build an AMI with HAProxy installed. To do this, run

```
packer push -create haproxy.json
```

in the ops/HAProxy directory. This will send the build configuration to Atlas so it can build your HAProxy AMI remotely.
- View the status of your build in the Operations tab of your Atlas account.
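For orientation, a Packer file used with `packer push` generally pairs an `amazon-ebs` builder with a `push` stanza naming the Atlas build configuration. The sketch below is illustrative only: the source AMI, SSH username, and provisioning script are placeholders, not this repo's actual values:

```json
{
  "push": { "name": "<username>/haproxy" },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "YOUR_ACCESS_KEY",
    "secret_key": "YOUR_SECRET_KEY",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "haproxy {{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "scripts/install_haproxy.sh"
  }]
}
```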
- Build an AMI with the Drupal requirements Apache, PHP, Composer, and drush installed. To do this, run

```
packer push -create apache-php.json
```

in the ops/apache_php directory. This will send the build configuration to Atlas so it can remotely build your AMI with Apache and PHP installed.
- View the status of your build in the Operations tab of your Atlas account.
- This creates an AMI with Apache and PHP installed, and now you need to send the actual Drupal application code to Atlas and link it to the build configuration. To do this, put your Drupal code in the app folder (or follow the instructions here for cloning a clean Drupal installation) and simply run

```
vagrant push
```

in the app directory. This will send your full Drupal application code to Atlas. Then link the Drupal application with the Apache+PHP build configuration by clicking on your build configuration, then 'Links' in the left navigation. Complete the form with your username, 'drupal' as the application name, and '/app' as the destination path.
- Now that your application and build configuration are linked, simply rebuild the Apache+PHP configuration and you will have a fully-baked AMI with Apache and PHP installed and your Drupal application code in place.
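For `vagrant push` to target Atlas, the app's Vagrantfile needs an Atlas push strategy defined. A minimal sketch follows; the application name is an assumption and must match what you enter in the Links form:

```ruby
# Vagrantfile (app directory) -- push the application code to Atlas.
Vagrant.configure(2) do |config|
  config.push.define "atlas" do |push|
    push.app = "<username>/drupal"
  end
end
```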
- Build an AMI with MySQL installed. To do this, run

```
packer push -create mysql.json
```

in the ops/mysql directory. This will send the build configuration to Atlas so it can build your MySQL AMI remotely.
- View the status of your build in the Operations tab of your Atlas account.
- To deploy HAProxy, Drupal, and MySQL, all you need to do is run

```
terraform apply
```

in the ops/terraform folder. Be sure to run `terraform apply` on the artifacts first. The easiest way to do this is to comment out the `aws_instance` resources and then run `terraform apply`. Once the artifacts are created, just uncomment the `aws_instance` resources and run `terraform apply` on the full configuration. Watch Terraform provision four instances: two with Drupal, one with MySQL, and one with HAProxy!
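As an alternative to commenting resources in and out, Terraform's `-target` flag can restrict an apply to specific resources, giving the same two-phase rollout; verify the flag against your installed Terraform version:

```shell
# Phase 1: create only the Atlas artifacts (the AMI lookups).
terraform apply -target=atlas_artifact.haproxy \
                -target=atlas_artifact.php \
                -target=atlas_artifact.mysql

# Phase 2: apply the full configuration, including the aws_instance resources.
terraform apply
```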
For reference, the full infrastructure.tf:

```hcl
provider "aws" {
  access_key = "YOUR_KEY_HERE"
  secret_key = "YOUR_SECRET_HERE"
  region     = "us-east-1"
}

resource "atlas_artifact" "haproxy" {
  name = "<username>/haproxy"
  type = "aws.ami"
}

resource "atlas_artifact" "php" {
  name = "<username>/apache-php"
  type = "aws.ami"
}

resource "atlas_artifact" "mysql" {
  name = "<username>/mysql"
  type = "aws.ami"
}

resource "aws_security_group" "all" {
  name        = "haproxy"
  description = "Allow all inbound traffic"

  ingress {
    from_port   = 0
    to_port     = 65535
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "haproxy" {
  instance_type   = "t2.micro"
  ami             = "${atlas_artifact.haproxy.metadata_full.region-us-east-1}"
  security_groups = ["${aws_security_group.all.name}"]

  # This will create 1 instance
  count = 1

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_instance" "php" {
  instance_type   = "t2.micro"
  ami             = "${atlas_artifact.php.metadata_full.region-us-east-1}"
  security_groups = ["${aws_security_group.all.name}"]
  depends_on      = ["aws_instance.mysql"]

  # This will create 2 instances
  count = 2

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_instance" "mysql" {
  instance_type   = "t2.micro"
  ami             = "${atlas_artifact.mysql.metadata_full.region-us-east-1}"
  security_groups = ["${aws_security_group.all.name}"]

  # This will create 1 instance
  count = 1

  lifecycle {
    create_before_destroy = true
  }
}
```
- Navigate to your HAProxy stats page by going to its public IP on port 1936 at the path /haproxy?stats. For example: 52.1.212.85:1936/haproxy?stats
- In a new tab, hit your HAProxy public IP on port 8080 a few times. You'll see in the stats page that your requests are being balanced evenly between the Drupal nodes.
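The round-robin behaviour is just as easy to see from the command line. The IP below is the example address from the stats-page step; substitute your HAProxy instance's actual public IP:

```shell
# Each request should alternate between the two Drupal backends;
# watch the per-server session counts climb on the stats page.
for i in 1 2 3 4 5 6; do
  curl -s -o /dev/null -w "%{http_code}\n" http://52.1.212.85:8080/
done
```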
- That's it! You just deployed HAProxy, Drupal 8, and MySQL. If you are deploying a clean Drupal installation, you can follow the steps here for installing Drupal.
- Navigate to the Runtime tab in your Atlas account and click on the newly created infrastructure. You'll now see the real-time health of all your nodes and services!