# Tutorial | OmniMapper & Docker
In this tutorial we'll go through the steps of using OmniMapper with Docker. Docker is a popular tool used to containerize applications and software for development and distribution. We'll use it here to simplify our development setup, which can be a barrier to entry for new users and developers of this complex project.
Docker is "An open platform for distributed applications for developers and sysadmins", a tool we'll use here to build, ship, and run OmniMapper. Using Docker we can quickly set up a development environment to start using and modifying the code-base. This is also helpful for collaboration, as a swift way of sharing runnable applications in a more reproducible manner, in much the same way git has helped improve sharing source code, be it for debugging or demos. Perhaps the best way to understand Docker is to try it yourself by following Docker's quick guide.
Docker's installation guide for Ubuntu can be found here.
Note: the curl script available in the guide simplifies the process to a one-liner. You may also want to follow the "Giving non-root access" section to avoid tediously prepending sudo to every docker command.
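Roughly, the commands from the guide look like the following; verify the script URL against the current guide before piping anything to a shell:

```bash
# Install Docker via the convenience script from the installation guide
curl -sSL https://get.docker.com/ | sh

# Give your user non-root access to the Docker daemon,
# then log out and back in for the group change to take effect
sudo usermod -aG docker $USER
```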
Although Docker supports many more operating systems than just Ubuntu, giving any containerized application a degree of platform independence, in this tutorial we'll be connecting our container to a Unix X server for rendering the graphical interfaces. Thus a familiar Debian-based distro like Ubuntu will be used here as the host OS for our Docker install.
If you followed Docker's tutorial link above, then you'll know that we'll need to have an image on our local machine before we can run a container. We can do that two ways: either by using what is called a `Dockerfile` to `build` our image, or by `pull`ing one down from a hub elsewhere, one such being DockerHub.
If you don't feel like waiting and have a decent internet connection, the quickest thing to do is often to pull the image down, provided the image has been made available online. We have already done this by pushing the required base images to DockerHub for you; all you'll need to do is use the pull command:
```
docker pull cogrob/omnimapper-demo
```
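Once the pull completes, you can confirm the image is in your local cache:

```bash
# List local images and filter for the OmniMapper ones
docker images | grep cogrob/omnimapper
```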
If you want to build the images yourself, perhaps because you need to modify a core dependency, you can do this by using the build command. The `Dockerfile`s used to build each image layer for OmniMapper are separated in the `docker` folder at the root of the project directory. The hierarchy of the image layers is as follows:
- `cogrob/omnimapper-dep`: includes only ROS dependencies
- `cogrob/omnimapper-dev`: additional development dependencies
- `cogrob/omnimapper-gui`: dependencies for graphical interfaces
- `cogrob/omnimapper-dox`: environment and setup for the development user account `dox`
- `cogrob/omnimapper-demo`: compiled project and built binaries, ready to run
- `cogrob/omnimapper-nvidia`: optional dependencies for OpenGL + Nvidia hardware acceleration
To build an image yourself, you'll first want to build the base image(s) it depends on. When running the build command from the same directory as the `Dockerfile`, the command could look like so:
```
docker build --tag="myusername/omnimapper-custom" .
```
Giving the image a distinct tag with your own username will help you differentiate it from our images available on DockerHub. Note that if you do choose to change an image's tag name, you may want to change the `FROM` field accordingly in the `Dockerfile` of any later image that uses it as a base image.
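As a rough sketch, building a custom copy of the chain from the bottom up could look like the following; the subfolder names under `docker/` are assumptions here, so check the actual layout in the repository:

```bash
# Build the base dependency image first, then the layers that build on it.
# Folder names (dep/, dev/, demo/) are placeholders; adjust to the repository.
cd docker
docker build --tag="myusername/omnimapper-dep" dep/
docker build --tag="myusername/omnimapper-dev" dev/
docker build --tag="myusername/omnimapper-demo" demo/
```

Remember that if you retag a lower layer like this, the `FROM` line in each later `Dockerfile` must point at your new tag before those builds will pick up your changes.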
We also provide a basic `Makefile` you can use to quickly build, pull, clean, and run the cogrob/omnimapper base images in your Docker setup:
- `make build` - builds all cogrob/omnimapper images locally using the `Dockerfile`s
- `make pull` - pulls all cogrob/omnimapper images from DockerHub
- `make clean` - cleans all cogrob/omnimapper images by removing them from your Docker image cache
- `make bash` - runs and starts a bash prompt in a demo container
- `make terminator` - same as `make bash`, but opens a terminator window instead
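For example, a typical first session with these targets might look like this, assuming the `Makefile` lives alongside the `Dockerfile`s in the `docker` folder:

```bash
# From the root of the project
cd docker
make pull        # grab the prebuilt cogrob/omnimapper images from DockerHub
make terminator  # start a demo container and open a terminator window in it
```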
Now that you have the images locally, we can use a bash script included in the project's docker folder to run a container using our `cogrob/omnimapper-demo` image. If we looked into this script, we'd see it simply uses Docker's run command with some specific arguments to enable the use of a GUI, plus any arguments we provide. The arguments we'll need to provide are the tag name of the image to use and the command to execute inside the new container. In this case we'll use the `cogrob/omnimapper-demo` image and start up an instance of terminator:
```
bash run.sh cogrob/omnimapper-demo terminator
```
Note that if you open the simple `Makefile`, this is the same method used for `make terminator`.
Upon opening terminator within the demo container, you'll be all set for the Running OmniMapper Tutorial on this wiki. If you take a look at the `run.sh` script, you'll find a `--volume` argument used to mount your user's Desktop to the demo user's Desktop, `/home/dox/Desktop`. This is an example of a shared file system and is one of the ways to Manage Data in Containers. It is done here simply as an easy way of accessing any demo bag files you may want to play with. Feel free to customize the `run.sh` script as you need.
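To give a rough idea of what `run.sh` does, a GUI-enabled `docker run` with a shared Desktop typically looks something like the sketch below; the actual script may use different flags and paths:

```bash
#!/bin/bash
# Sketch of a GUI-enabled docker run; see the real run.sh for the exact flags.
docker run -it \
    --env DISPLAY=$DISPLAY \
    --volume /tmp/.X11-unix:/tmp/.X11-unix \
    --volume $HOME/Desktop:/home/dox/Desktop \
    "$1" "${@:2}"   # image tag first, then the command to run in the container
```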
Before you jump into developing with the Docker tools here, and/or while you're waiting for your images to download or build, it really helps to have an understanding of how to interact with images and containers. So be sure you've had a look over the Docker User Guide. Just like git or any amazing dev hammer, there's a good bit to learn. But the more familiar you become, the easier it is to wield, the more nails you'll find, and the bigger projects you'll build.
## Troubleshooting
If you happen to get an error about no connected X server when attempting to run a GUI application within the running container, this is most likely a permissions issue. It is commonly due to a UID mismatch between the user inside the container and the host user that currently has access to the X server. There are a few ways to get around this; you can also visit the references below for more details:
- Commit the container with a changed UID
  - Using Docker's commit command, we can spin up a container, change the UID, and commit the change to our local image (a consolidated command sketch follows this list).
  - `docker run -it --user=root cogrob/omnimapper-demo bash`: We'll need to be logged in as root, as we can't change the UID of the demo user, `dox`, while that same user is active.
  - `usermod --uid <UID> dox`: Here we'll use the usermod command to change the UID for dox, where `<UID>` should be the same as for your user on the host. Run `id` on your host to see the UID and GID your host user belongs to.
  - `usermod --gid <GID> dox`: You may want to set the GID to be the same as well.
  - Exit the container and commit it to the same tag name. You may want to run Docker's ps command with the `--all` flag to find the container ID or name of the last modified container.
  - `docker commit <ID> cogrob/omnimapper-demo`: The next time you spin up a container from the image with the same tag name, the issue should be resolved.
- Rebuild the containers
  - By modifying the UID and GID specified in the `Dockerfile` for cogrob/omnimapper-dox, you can achieve the same effect as above. This requires rebuilding that image and all images that were previously built from it, which obviously takes longer and more patience than the first solution.
- Or run `xhost +` on the host
  - This allows any user of any UID to interact with the host's X server. This of course compromises your host's security and is not recommended.
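As a consolidated sketch of the first option above (committing the container with a changed UID), the host- and container-side commands look roughly like this; replace the `<UID>`, `<GID>`, and `<ID>` placeholders with the values from your own host and container:

```bash
# On the host: note your own UID and GID
id

# Start a root shell in a fresh demo container
docker run -it --user=root cogrob/omnimapper-demo bash

# Inside the container: give dox the same UID/GID as your host user, then exit
usermod --uid <UID> dox
usermod --gid <GID> dox
exit

# Back on the host: find the container you just modified and commit it
docker ps --all
docker commit <ID> cogrob/omnimapper-demo
```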
The Nvidia image is optional and is commented out by default in the make file, as not everyone may necessarily have Nvidia hardware, nor the same setup for that matter. If you wish to use graphical hardware acceleration within the container, which honestly brings great improvements to most things such as rviz and gazebo, you'll need to (see the sketch after this list):
- First, un-comment the relevant lines in the make file for the nvidia image.
- Then run the `build.sh` script in the nvidia folder to download the same nvidia drivers as used by your host into the same directory. You can then run `make build` in the docker directory to build the nvidia image using the demo image already on your system.
- Finally, adjust the last few `--device` lines/arguments in the `run.sh` inside the nvidia folder to reflect your hardware, as you'll need to mount the correct paths of the card(s) into the container. You can run `ls -la /dev | grep nvidia` to find all of your Nvidia devices. Then just use this `run.sh` script instead to run containers using the `cogrob/omnimapper-nvidia` image.

Again, you can also visit the references below for more details:
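A rough sketch of that workflow follows; the directory layout and device paths are assumptions, so check your checkout and your hardware before running anything:

```bash
# Check which Nvidia device nodes exist on the host
ls -la /dev | grep nvidia

# Fetch drivers matching the host and build the nvidia image
# (assumes build.sh lives in docker/nvidia/; adjust to the actual layout)
cd docker/nvidia
bash build.sh
cd ..
make build    # with the nvidia lines un-commented in the Makefile

# Run a container from the GPU-enabled image via the nvidia run script
bash nvidia/run.sh cogrob/omnimapper-nvidia terminator
```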
- Gernot Klingler and his detailed post, How docker replaced my virtual machines and chroots: a guide on how to enable a container to connect to an X server and use graphical hardware acceleration.
- opencog, with examples of using Docker for research and collaboration within the robotics community.
- sameersbn and his work on docker-browser-box, which provided an example of GUI and audio support for apps within Docker.
- Docker on AWS GPU Ubuntu 14.04 / CUDA 6.5 by Traun Leyden, providing details on using Nvidia devices with Docker.
- A relevant stackoverflow question about using docker with an X server.