Deploy a Quorum Network in AWS using Terraform.
AWS Fargate is only available in certain regions, see AWS Region Table for more details.
AWS Fargate has default limits which might impact the provisioning, see Amazon ECS Service Limits for more details.
This will create a Quorum network (with 7 nodes by default) using AWS ECS Fargate, S3, and an EC2 instance. The network can be configured to use either Raft or Istanbul consensus, and either Tessera or Constellation as the privacy manager.
+----- Public Subnet -----+ +----- Private Subnet(s) ---+
| | | |
Internet <--- [NAT Gateway] [Bastion] ---------->| [ECS] [ECS] [ECS] .... |
^ | | | |
| +-------------------------+ +-------------.-------------+
| |
+------------------- Routable --------------------+
Each Quorum/privacy manager node pair is run in a separate AWS ECS (Elastic Container Service) Service.
Each ECS Service contains the following Tasks to bootstrap and start the node pair:
node-key-bootstrap
^
|
metadata-bootstrap
^
/ \
/ \
quorum-run --> {tx-privacy-engine}-run
- `node-key-bootstrap`: runs `bootnode` to generate a node key, marshals it to a node ID, and stores both in a shared folder
- `metadata-bootstrap`: prepares the IP list and enode list
- `{tx-privacy-engine}-run`: starts the privacy manager
- `quorum-run`: starts Quorum
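After deployment, one way to verify these tasks is via the AWS CLI. This is a sketch only; the cluster name used here (`quorum-network-dev`) is an assumption taken from the `ecs_cluster_name` Terraform output shown later in this guide.

```bash
# List the running tasks in the cluster (cluster name taken from the
# ecs_cluster_name output in the deployment step below)
aws ecs list-tasks --cluster quorum-network-dev --region <region>

# Describe one of the returned task ARNs to see its containers and status
aws ecs describe-tasks --cluster quorum-network-dev \
    --tasks <task-arn> --region <region>
```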
The Bastion is publicly accessible and enables `geth attach` to each Quorum node in the private subnet. Additionally, it exposes `ethstats` to make it easier to view activity on the network.
Terraform v0.12 introduced significant configuration language changes and so is not currently supported.
- Install Terraform v0.11
- From HashiCorp website
- macOS:
brew install terraform@0.11
- Install AWS CLI
- Configure AWS CLI
Follow the prompts to provide credentials and preferences for the AWS CLI
aws configure
- Create an AWS VPC with Subnets if one does not already exist
- Create a VPC with a public and private subnet and corresponding networking as visualised in the above diagram
- For more help see the AWS documentation
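For reference, below is a minimal sketch of creating such a VPC with the AWS CLI. The CIDR blocks are illustrative only, and a working setup also needs route tables that send the private subnets' Internet-bound traffic via the NAT Gateway; see the AWS documentation for the complete procedure.

```bash
# Create the VPC (illustrative 10.0.0.0/16 CIDR)
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# One public subnet (for the Bastion and NAT Gateway) and one private subnet
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.0.0/24
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.1.0/24

# Internet Gateway for the public subnet
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>

# NAT Gateway (requires an Elastic IP) so private subnets can reach the Internet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id <public-subnet-id> --allocation-id <eip-alloc-id>
```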
This will create an AWS environment with the resources required for deployment.
This has to be run once per region and per AWS Account.
aws cloudformation create-stack --stack-name quorum-prepare-environment --template-body file://./quorum-prepare-environment.cfn.yml
This will create a CloudFormation stack containing the following AWS resources:
- An S3 bucket to store Terraform state with default server-side-encryption enabled
- A KMS Key to encrypt objects stored in the above S3 bucket
These resources are exposed as CloudFormation Exports, which are used in subsequent steps.
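To confirm the stack has finished creating and to inspect the exports, something like the following should work:

```bash
# Block until the stack reaches CREATE_COMPLETE
aws cloudformation wait stack-create-complete --stack-name quorum-prepare-environment

# List the exports consumed by the Terraform bootstrap in the next step
aws cloudformation list-exports
```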
This will read from the CloudFormation Exports to generate two files (`terraform.auto.backend_config` and `terraform.auto.tfvars`) that are used in later steps.
cd /path/to/quorum-cloud/aws/templates/_terraform_init
terraform init
terraform apply -var network_name=dev -var region=<region> -auto-approve
Replace `<region>` with the AWS region being used.
If `network_name` is not provided, a random name will be generated.
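The generated files can be inspected before continuing. The contents sketched below are illustrative only; actual values depend on the CloudFormation exports in the account being used.

```bash
ls terraform.auto.backend_config terraform.auto.tfvars
cat terraform.auto.backend_config
# Illustrative shape only -- standard Terraform S3 backend settings such as:
#   bucket     = "<state-bucket-from-cloudformation-export>"
#   region     = "<region>"
#   kms_key_id = "<kms-key-arn-from-cloudformation-export>"
```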
cd /path/to/quorum-cloud/aws/templates/
touch terraform.tfvars
Populate `terraform.tfvars` with the below template, replacing the subnet IDs with the corresponding IDs for the VPC subnets being used.
is_igw_subnets = "false"
# private subnets routable to Internet via NAT Gateway
subnet_ids = [
"subnet-4c30c605",
"subnet-4c30c605",
"subnet-09263334",
"subnet-5236300a",
]
bastion_public_subnet_id = "subnet-3a8d8707"
consensus_mechanism = "istanbul"
# tx_privacy_engine = "constellation"
access_bastion_cidr_blocks = [
"190.190.190.190/32",
]
- `subnet_ids`: ECS provisions containers in these subnets. The subnets must be routable to the Internet (either because they are public subnets, or because they are private subnets routed via a NAT Gateway)
- `is_igw_subnets`: `true` if the above `subnet_ids` are attached to an Internet Gateway, `false` otherwise
- `bastion_public_subnet_id`: the subnet in which the Bastion node is provisioned. This must be a public subnet
- `tx_privacy_engine`: the privacy manager to use; the default value is `tessera`
- `access_bastion_cidr_blocks`: to access the Bastion node from a particular IP or set of IPs, the corresponding CIDR blocks must be set here
Note: `variables.tf` contains the full set of options for configuring the network.
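For example, the size of the network can be overridden in `terraform.tfvars`. The variable name below is an assumption; confirm the exact name and default against `variables.tf`.

```bash
# Append an override to terraform.tfvars (number_of_nodes is an assumed
# variable name; check variables.tf before using it)
echo 'number_of_nodes = "5"' >> terraform.tfvars
```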
To create a plain Ethereum private network instead, use the following:
consensus_mechanism = "clique"
is_ethereum_network = "true"
is_ethereum_v1_9_x = "true"
quorum_docker_image = "ethereum/client-go"
quorum_docker_image_tag = "alltools-v1.9.5"
cd /path/to/quorum-cloud/aws/templates/
terraform init -backend-config=terraform.auto.backend_config -reconfigure
terraform apply
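Alternatively, to review the proposed changes before applying them, the standard Terraform plan-file workflow can be used:

```bash
# Write the plan to a file, review it, then apply exactly that plan
terraform plan -out=quorum.tfplan
terraform apply quorum.tfplan
```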
Terraform will prompt to accept the proposed infrastructure changes. After the changes are accepted and the deployment is complete, information about the created network will be output.
An example of the output is:
Quorum Docker Image = quorumengineering/quorum:latest
Privacy Engine Docker Image = quorumengineering/tessera:latest
Number of Quorum Nodes = 7
ECS Task Revision = 1
CloudWatch Log Group = /ecs/quorum/dev
bastion_host_dns = ec2-5-1-112-217.us-east-1.compute.amazonaws.com
bastion_host_ip = 5.1.112.217
bucket_name = eu-west-2-ecs-dev-6dj72u9s6335853j
chain_id = 4021
ecs_cluster_name = quorum-network-dev
network_name = dev
private_key_file = /path/to/quorum-cloud/aws/templates/quorum-dev.pem
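These values can be retrieved again at any time from the Terraform state, assuming the output names shown above:

```bash
cd /path/to/quorum-cloud/aws/templates/
terraform output bastion_host_ip
terraform output private_key_file
```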
Noting the `bastion_host_ip`/`bastion_host_dns` and `private_key_file` values from the output of the previous step, run the following to SSH into the Bastion node:
chmod 600 <private-key-file>
ssh -i <private-key-file> ec2-user@<bastion-DNS/IP>
From the Bastion node it is possible to `geth attach` to any of the Quorum nodes with a simple alias:
[bastion-node]$ Node1
It is also possible to `geth attach` to any of the nodes without first explicitly SSHing into the Bastion node:
ssh -t -i <private-key-file> ec2-user@<bastion-DNS/IP> Node1
`ethstats` is available at http://<bastion-DNS/IP>:3000
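If port 3000 is not reachable from the local machine (for example, because the Bastion security group only allows SSH), an SSH tunnel is one workaround; this assumes `ethstats` listens on port 3000 on the Bastion itself:

```bash
# Forward local port 3000 to the ethstats dashboard on the Bastion,
# then browse to http://localhost:3000
ssh -i <private-key-file> -L 3000:localhost:3000 ec2-user@<bastion-DNS/IP>
```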
cd /path/to/quorum-cloud/aws/templates/
terraform destroy
Note: if `terraform destroy` is unable to destroy all the AWS resources, run `utils/cleanup.sh` (which uses aws-nuke) to perform a full clean-up.
- The logs for each running node and the bootstrap tasks are available in the CloudWatch Log Group `/ecs/quorum/**`
- CPU and Memory utilization metrics are also available in CloudWatch
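A sketch of reading these logs from the command line, assuming the `dev` network name and AWS CLI v2 (which provides `aws logs tail`):

```bash
# List the log streams for the network (one per container)
aws logs describe-log-streams --log-group-name /ecs/quorum/dev

# Follow all streams in the group live (AWS CLI v2 only)
aws logs tail /ecs/quorum/dev --follow
```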