Quickly deploy an environment to begin testing stateful and persistent applications in containers. Each environment includes key pieces of technology that make it unique.
- Deploy VirtualBox with Docker Machine and Install REX-Ray
- Use Docker Machine to deploy a VirtualBox host installed and configured with the latest stable Docker Engine. Follow the directions to install REX-Ray using `curl | sh` and learn how to write a properly formatted configuration file. This environment uses the VirtualBox driver for REX-Ray to allow stateful applications to persist data.
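In outline, the install and configuration look roughly like this (the install URL, endpoint, and volume path are illustrative; the demo's README gives the exact values):

```shell
# Install the latest stable REX-Ray release
curl -sSL https://rexray.io/install | sh

# Write a minimal config for the VirtualBox driver. This assumes the
# VirtualBox web service (vboxwebsrv) is reachable at the endpoint below
# and that volumePath points at your local VirtualBox volumes directory.
sudo tee /etc/rexray/config.yml << EOF
libstorage:
  service: virtualbox
virtualbox:
  endpoint: http://10.0.2.2:18083
  volumePath: $HOME/VirtualBox/Volumes
  controllerName: SATA
EOF

sudo rexray service start
```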
- 3-Node ScaleIO Environment Using Vagrant
- The provided Vagrantfile uses VirtualBox to create three (3) hosts. Each host has ScaleIO (software that turns direct-attached storage into shared, scale-out block storage) installed and configured. The hosts also automatically install and configure Docker Engine and REX-Ray. This gives a fully configured environment ready to test stateful applications and clustering functionality using the ScaleIO driver for REX-Ray.
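Once the three nodes are up, a quick smoke test of the ScaleIO driver might look like this (volume and file names are made up for illustration):

```shell
# Create a volume through REX-Ray's Docker volume driver
docker volume create --driver rexray --name demovol01

# Write data from a container, then destroy the container
docker run --name writer -v demovol01:/data busybox \
    sh -c 'echo hello > /data/hello.txt'
docker rm writer

# On any other node in the cluster, reattach the same volume;
# the file written above is still there
docker run --rm -v demovol01:/data busybox cat /data/hello.txt
```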
- 3-Node ScaleIO + 3-Node Apache Mesos Cluster with Marathon on AWS
- Use an AWS CloudFormation template to deploy two (2) clusters in AWS. The first cluster is a three (3) node ScaleIO environment that REX-Ray will use as the storage platform. The second cluster is a three (3) node Apache Mesos cluster fully configured with Marathon, ready to accept requests for scheduling containers (a CLI sketch for launching the stack follows this entry). Follow this with Application Demo #3.
- GO ADVANCED
- Take it to the next level by performing a custom ScaleIO configuration and installation. This process takes the existing Mesos Agent nodes, adds additional storage, and installs all the SDS components to add more storage to the existing ScaleIO cluster based on your pool and domain configuration settings. Try it with the Custom ScaleIO Framework Deployment.
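To launch the initial two-cluster stack from the command line, a hedged sketch with the AWS CLI (stack, template, and key names here are placeholders for whatever the demo's template defines):

```shell
# Create the stack from the demo's CloudFormation template
aws cloudformation create-stack \
    --stack-name scaleio-mesos-demo \
    --template-body file://cluster-template.json \
    --parameters ParameterKey=KeyName,ParameterValue=my-keypair \
    --capabilities CAPABILITY_IAM

# Poll until the stack reports CREATE_COMPLETE
aws cloudformation describe-stacks \
    --stack-name scaleio-mesos-demo \
    --query 'Stacks[0].StackStatus'
```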
- Deploy a 3-Node Ceph Environment Using Vagrant
- This Vagrant environment uses VirtualBox, with virtual media as the storage Ceph exposes for REX-Ray to consume and Docker as the container runtime. It is a quick way to get started working with Ceph and REX-Ray. The hosts automatically install and configure Docker Engine with REX-Ray, providing a fully configured environment ready to test stateful applications.
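For reference, a minimal REX-Ray configuration for the Ceph RBD driver looks roughly like this (the Vagrant provisioner writes the real file; the pool name below is an assumption):

```shell
sudo tee /etc/rexray/config.yml << 'EOF'
libstorage:
  service: rbd
rbd:
  defaultPool: rbd   # assumed pool name
EOF

sudo rexray service restart
docker volume create --driver rexray --name cephvol01
```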
- Kubernetes
- Kubernetes with FlexREX on ScaleIO
- FlexREX is an implementation of a FlexVolume driver for Kubernetes that enables the entire library of storage platforms supported by REX-Ray/libStorage. This lab demonstrates a different kind of architecture, where REX-Ray is deployed as a central controller within a Pod and FlexREX is installed on the Kubernetes minion nodes.
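As a sketch of what consuming a volume through the FlexVolume interface looks like in a Pod spec (the driver string and option key below are assumptions; the lab defines the names FlexREX actually registers):

```shell
cat << 'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: flexrex-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    flexVolume:
      driver: rexray/flexrex      # assumed driver name
      options:
        volumeID: flexvol01       # assumed option key
EOF
```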
- Kubernetes with libStorage Integration on AWS
- Take a test drive with a proposed feature for Kubernetes using libStorage for persistent applications. Start from scratch with the complete setup needed to run a functioning fork of Kubernetes on AWS. Then explore the different types of volume architectures available to Kubernetes pods to persist applications.
- Deploy AWS EC2 Host with Docker Machine and Install REX-Ray with Docker 1.13 Managed Plugin
EXPERIMENTAL
- Use Docker Machine to deploy an AWS EC2 host that is installed and configured with the latest stable Docker Engine. Follow the directions to install REX-Ray using the Docker 1.13 Managed Plugin System. This environment uses AWS EC2 along with the EBS driver for REX-Ray to allow stateful applications to persist data.
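In outline, the managed plugin install is a one-liner (the credentials are placeholders):

```shell
# Install the REX-Ray EBS plugin through the Docker plugin system
docker plugin install rexray/ebs \
    EBS_ACCESSKEY=<your-access-key> \
    EBS_SECRETKEY=<your-secret-key>

# Volumes created against the plugin are backed by EBS
docker volume create --driver rexray/ebs --name ebsvol01
docker run --rm -v ebsvol01:/data busybox touch /data/created
```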
- Install REX-Ray as a Plugin on Docker for AWS (CloudFormation)
EXPERIMENTAL
- Bring persistent volume functionality to Docker for AWS with REX-Ray. Customizing the CloudFormation template enables automated installation of REX-Ray across AWS Auto Scaling groups and access to EBS volumes.
- Storage Persistence with Postgres using REX-Ray
- Learn how to read a Dockerfile to identify which paths need persistent data. Manually deploy a Postgres container with Docker and create a few tables to write data. Destroy the container, then start a new container on a different host and watch the data persist.
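A condensed sketch of the flow (the official Postgres image declares VOLUME /var/lib/postgresql/data in its Dockerfile, which is the path to persist; names and the password are illustrative):

```shell
# Host 1: create a REX-Ray volume and run Postgres against it
docker volume create --driver rexray --name pgdata
docker run -d --name pg -e POSTGRES_PASSWORD=secret \
    -v pgdata:/var/lib/postgresql/data postgres
# ...create tables and write some rows, then destroy the container
docker rm -f pg

# Host 2: start a new container on the same volume; the tables survive
docker run -d --name pg -e POSTGRES_PASSWORD=secret \
    -v pgdata:/var/lib/postgresql/data postgres
```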
- Storage Persistence and Failover with Minecraft using REX-Ray and Docker Swarm Mode
- Take a set of nodes and cluster them together using Docker Swarm Mode, which provides distributed computing, reconciliation of failed hosts, and extended networking functionality. Play a game of Minecraft to create an inventory of data to persist. Turn off the Docker service and watch Docker Swarm Mode, along with REX-Ray, redeploy the container on a new host to keep the inventory intact.
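A hedged sketch of the service definition (the image, port, and mount target are assumptions based on a typical Minecraft server container):

```shell
# Create a one-replica service whose world data lives on a
# REX-Ray-backed volume
docker service create --name minecraft --replicas 1 \
    --publish 25565:25565 \
    --env EULA=TRUE \
    --mount type=volume,source=mc_data,target=/data,volume-driver=rexray \
    itzg/minecraft-server

# Simulate a failure on the node running the task; Swarm reschedules
# the container and REX-Ray reattaches mc_data on the new node
sudo systemctl stop docker
```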
- Storage Persistence with Postgres using Mesos, Marathon, Docker, and REX-Ray
- Use the supplied application spec for Marathon to deploy a Postgres service to Mesos. Use the restart button to redeploy the Postgres service on a new host and see the data persist.
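A trimmed-down version of such an application spec, posted to Marathon's REST API (host name, resource sizes, and volume name are illustrative):

```shell
curl -X POST http://<marathon-host>:8080/v2/apps \
    -H 'Content-Type: application/json' -d '{
  "id": "postgres",
  "cpus": 1, "mem": 512, "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "postgres",
      "network": "BRIDGE",
      "portMappings": [{"containerPort": 5432, "hostPort": 0}]
    },
    "volumes": [{
      "containerPath": "/var/lib/postgresql/data",
      "mode": "RW",
      "external": {
        "name": "pgdata",
        "provider": "dvdi",
        "options": {"dvdi/driver": "rexray"}
      }
    }]
  },
  "env": {"POSTGRES_PASSWORD": "secret"}
}'
```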
- Storage Persistence with MySQL Persistent Volumes and Persistent Volume Claims with Kubernetes Pods and REX-Ray (FlexREX)
- Learn how to use MySQL with a Persistent Volume in a Kubernetes Pod. Go through all the steps of manually creating the volume, creating the Persistent Volume Claim, and attaching it to the Pod. This also demonstrates how to migrate the MySQL Pod from one host to another.
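A minimal sketch of the claim-and-mount portion (sizes and names are illustrative; the lab's manifests define the FlexREX-backed Persistent Volume itself):

```shell
cat << 'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: secret
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-pvc
EOF
```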
- Create a new folder with the title of your demo
- Add all relevant code and step-by-step instructions for completing the demo
- Remember that these are quick "demos" and not "tutorials"
- Create a README.md file for each demo to display on GitHub that lays out the instructions for completing your demo from start to finish.
- Screenshots are encouraged.
Create a fork of the project in your own repository, make all your necessary changes, and create a pull request with a description of what was added or removed and details explaining the code changes. If approved, the project owners will merge it.
Please file bugs and issues on this project's GitHub issues page; this helps keep track of and document everything related to this repo. For general discussion and further support, join the {code} by Dell EMC Community Slack channel. The code and documentation are released with no warranties or SLAs and are intended to be supported through a community-driven process.