AWS
This is a guide to setting up Cardinal as an orchestrator on AWS.
Elastic Kubernetes Service - creating a cluster

- Make sure the file `~/.aws/credentials` on your machine contains the access key ID and secret access key for the correct AWS account. If you are using a root account, you should not have any permissions issues for the remaining steps. For IAM accounts, please see the following list of the minimum permissions that the account must be granted: https://github.com/weaveworks/eksctl/blob/main/userdocs/src/usage/minimum-iam-policies.md
- Install `eksctl` on your machine: https://eksctl.io/introduction/#installation
- Create an EKS cluster via the terminal using the following command (this should also add the cluster configuration to your local Kubernetes config file and switch your context to this cluster):

  ```
  eksctl create cluster --name {cluster_name} --version {Kubernetes version number} --region {region_code} --nodes {number_of_nodes}
  ```

  NOTE: you can also specify the node type (i.e. machine size) and other fields; see the full list here: https://eksctl.io/introduction/
- The previous step may take several minutes. Please ensure that the cluster is up and running, either by checking the console or by running `kubectl get nodes` in the terminal.
  - If `kubectl` commands do not work but you see your cluster in the console, make sure the cluster configuration was in fact added to the Kubernetes config file by running `eksctl utils write-kubeconfig --cluster={cluster_name}`.
- Finally, to make sure the nodes in your cluster have S3 access, follow steps 1-4 here to find the permissions that your cluster has. Then click "Attach policies" and add the "AmazonS3FullAccess" policy.
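The cluster-creation step above can be sketched as a small script. Every value here (cluster name, Kubernetes version, region, node count) is a hypothetical placeholder; the `eksctl` command is echoed rather than executed so you can review it first.

```shell
# All values below are hypothetical examples -- substitute your own.
CLUSTER_NAME="cardinal-demo"
K8S_VERSION="1.27"
REGION="us-east-1"
NODES=2

# Compose the eksctl command (echoed here as a dry run).
CMD="eksctl create cluster --name ${CLUSTER_NAME} --version ${K8S_VERSION} --region ${REGION} --nodes ${NODES}"
echo "${CMD}"

# To actually create the cluster, run the command above, then verify with:
#   kubectl get nodes
# If kubectl cannot find the cluster, re-write the kubeconfig entry:
#   eksctl utils write-kubeconfig --cluster="${CLUSTER_NAME}"
```

Remove the `echo` (or paste the printed command into your terminal) to create the cluster for real.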
Docker

NOTE: For setup in AWS, you can keep the cardinal repo on your local machine. We will be building a Docker image for cardinal locally and pushing it to an image repository. This is different from the setup in Google Cloud.

- Install Docker Desktop on your machine and make sure it is running.
- Make a Docker Hub account if you don't have one already.
- Clone the cardinal repo onto your machine and change directories into it:

  ```
  git clone https://github.com/multiparty/cardinal.git
  cd cardinal
  ```

- Move the Dockerfile from the `eks_docker` directory to the main cardinal directory (either in your file browser or in your terminal with the following line):

  ```
  mv eks_docker/Dockerfile .
  ```
- Edit the following fields in the Dockerfile:
  - Change `REGION` to match the region of your cluster.
  - Set `CONGREGATION` to the image URI of the Congregation image you would like to use. See the note below about Congregation images.
  - Set `CHAMBERLAIN` to the IP address or hostname of your deployed chamberlain server.
  - Set `PROFILE` to `'true'` or `'false'` depending on whether you would like this deployment to save profiling information for each workflow.
  - Enter the AWS credentials for the account where your cardinal results will be sent.
  - Set `DESTINATION_BUCKET` to the name of the bucket in the above AWS account where your results will be sent.
  - If you intend to start multiple clusters (i.e. one for each party) in this AWS account, you will need to edit, build, and push this Dockerfile for each one under a different tag. Please see below for a template Dockerfile.
- In the terminal, run `docker build -t {name_of_your_Dockerhub_repo}/cardinal:{tag} .`, where `tag` can be any string that helps you identify which version of this image you are working with or building (e.g. hicsail/cardinal:eks-east1).
- Run `docker push {name_of_your_Dockerhub_repo}/cardinal:{tag}` to push the image you just made to your Docker Hub repository.
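The build-and-push steps reduce to two commands once the image name is fixed. The repo name and tag below are the example values from the text ("hicsail", "eks-east1"); substitute your own, and note that the `docker` commands are commented out so this sketch only prints the composed image name.

```shell
# Example values from the text -- substitute your own repo and tag.
DOCKERHUB_REPO="hicsail"
TAG="eks-east1"

# Image names follow the {repo}/cardinal:{tag} convention.
IMAGE="${DOCKERHUB_REPO}/cardinal:${TAG}"
echo "${IMAGE}"

# Build and push (uncomment to run for real, from the cardinal directory):
# docker build -t "${IMAGE}" .
# docker push "${IMAGE}"
```

If you are running one cluster per party, repeat this with a different `TAG` for each edited copy of the Dockerfile.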
Kubernetes - Deploying your Cardinal image(s)

- Locate the file cardinal-depl.yaml in the top-level directory of the cardinal repo. Edit line 19 in the file to point to the image URI of the image you pushed in step 2 (e.g. docker.io/hicsail/cardinal:eks-east1). You may also edit any lines having to do with the deployment and service names if you wish, provided they are consistent across the file.
- Make sure your current Kubernetes context is still the EKS cluster you just created.
- Make sure you are still in the top-level directory of the cardinal repo and run `kubectl apply -f role_def.yaml` and `kubectl apply -f role_binding_def.yaml` to give the cluster full Kubernetes API permissions.
- From the same directory, run `kubectl apply -f cardinal-depl.yaml`. This will create a Deployment of cardinal and a LoadBalancer Service with an external hostname that you can reach.
- You can find the external hostname of the Service in the terminal using `kubectl get service`. In the browser, if you navigate to that hostname on port 5000 and see a page with just the word "homepage" on it, the cardinal server is up and running.
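The deploy-and-verify steps can be sketched as below. The Service name (`cardinal-service`) and hostname are hypothetical stand-ins, and the `kubectl` commands are commented out so the script dry-runs against no cluster; uncomment them to run for real.

```shell
# From the top-level directory of the cardinal repo:
# kubectl apply -f role_def.yaml
# kubectl apply -f role_binding_def.yaml
# kubectl apply -f cardinal-depl.yaml

# Look up the external hostname of the LoadBalancer Service.
# ("cardinal-service" is a hypothetical name -- use the Service name
# from your cardinal-depl.yaml):
# HOST=$(kubectl get service cardinal-service \
#   -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
HOST="example.elb.amazonaws.com"   # hypothetical stand-in value

# Cardinal listens on port 5000; opening this URL should show "homepage".
URL="http://${HOST}:5000/"
echo "${URL}"
```

Provisioning the load balancer can take a minute or two, so the hostname may be empty immediately after `kubectl apply`.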
You are now set up to run workflows using Cardinal! You can begin by trying to send a submit
request from chamberlain.
If you do not know of a publicly available Congregation Docker image that you would like to use, you are welcome to build, push, and use your own. Please see the catechism repository to see what the current configurations of the pushed Congregation images are. There you will see directories labeled by which backend is used in that particular image. Inside each directory is a `push_pull.py` script, a bash script, and a Dockerfile that specifies which libraries to clone into the image and then runs the bash script. To make a new Congregation image, simply copy the file structure of an existing one, make the changes you would like, and then build and push the image in the same way that we built and pushed the cardinal images.
Below is a template Dockerfile for deploying cardinal on AWS:

```Dockerfile
FROM python:3.7

RUN mkdir /cardinal
WORKDIR /cardinal
ADD . /cardinal/

RUN pip install --upgrade pip
RUN pip install -r requirements.txt

#------------------#
# CHANGE AS NEEDED #
#------------------#

# Environment information
ENV PORT=5000
ENV CLOUD_PROVIDER="EKS"
ENV INFRA="AWS"
ENV REGION="{cluster region}"
ENV CONGREGATION="{image URI of congregation image you would like to use}"
ENV CHAMBERLAIN="{IP address or hostname of your deployed chamberlain server}"

# profile flag - set to "true" to receive profiling timestamps with each workflow
ENV PROFILE="false"

# Information for curia
ENV AWS_REGION="{region of public-read results bucket}"
ENV AWS_ACCESS_KEY_ID="{access key for owner account of bucket}"
ENV AWS_SECRET_ACCESS_KEY="{secret access key for owner account of bucket}"
ENV DESTINATION_BUCKET="{name of public-read results bucket}"

CMD ["python", "/cardinal/wsgi.py"]
```
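Before building from the template, it can help to check that no `{placeholder}` fields were left unfilled. This sketch writes a tiny two-line stand-in Dockerfile to /tmp (one field filled, one not) and counts leftover braces; point the `grep` at your real Dockerfile instead.

```shell
# Write a tiny stand-in Dockerfile: REGION filled in, CONGREGATION not.
cat > /tmp/Dockerfile.example <<'EOF'
ENV REGION="us-east-1"
ENV CONGREGATION="{image URI of congregation image you would like to use}"
EOF

# Count ENV lines that still contain a {placeholder}; anything
# non-zero means the file is not ready to build.
UNFILLED=$(grep -c '{.*}' /tmp/Dockerfile.example)
echo "unfilled placeholders: ${UNFILLED}"
```

An unfilled placeholder would otherwise be baked into the image verbatim and only surface as a confusing runtime error.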