Building and Running with Docker

Tatiana Likhomanenko edited this page Jul 20, 2020 · 10 revisions

wav2letter++ and its dependencies can also be built with the provided Dockerfiles. Both CUDA and CPU backends are supported with Docker. We also provide a lightweight Dockerfile and image for the Inference pipeline.

Docker images on Docker Hub

Currently we have GitHub Actions that push a new build for both the CPU and CUDA backends on each commit to master: https://hub.docker.com/r/wav2letter/wav2letter/tags
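
For example, to fetch the latest CUDA image from Docker Hub explicitly before running it:

```shell
# pull the most recent CUDA-backend image
sudo docker pull wav2letter/wav2letter:cuda-latest
# list the wav2letter images now available locally
sudo docker images wav2letter/wav2letter
```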

Using pre-built Docker images

To use wav2letter++ with Docker:

  • Install Docker and, if using the CUDA backend, nvidia-docker
  • Run the docker image with CUDA/CPU backend in a new container:
    # with CUDA backend (for CUDA 10 and ubuntu 18.04), with latest master commits
    sudo docker run --runtime=nvidia --rm -itd --ipc=host --name w2l wav2letter/wav2letter:cuda-latest
    # or with CPU backend (ubuntu 18.04)
    sudo docker run --rm -itd --ipc=host --name w2l wav2letter/wav2letter:cpu-latest
    # for Inference pipeline
    sudo docker run --rm -itd --ipc=host --name w2l wav2letter/wav2letter:inference-latest
    # go into bash in the container
    sudo docker exec -it w2l bash

Note: in the CPU Docker container one needs to run export LD_LIBRARY_PATH=/opt/intel/compilers_and_libraries_2018.5.274/linux/mkl/lib/intel64:$LD_LIBRARY_PATH before doing anything.
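
As a quick sanity check after starting a container (assuming the container was named `w2l` as above):

```shell
# CUDA image: confirm the GPUs are visible inside the container
sudo docker exec w2l nvidia-smi

# CPU image: set MKL's library path first, then check the wav2letter build directory
# (the versioned MKL directory may differ in newer images)
sudo docker exec w2l bash -c \
  'export LD_LIBRARY_PATH=/opt/intel/compilers_and_libraries_2018.5.274/linux/mkl/lib/intel64:$LD_LIBRARY_PATH && ls /root/wav2letter/build'
```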

  • One can also mount any necessary directories into a container, for example:

    sudo docker run --runtime=nvidia --rm -itd --ipc=host \
    --volume /original/path/on/your/machine:/mounted/path/in/the/container \
        --name w2l wav2letter/wav2letter:cuda-latest
  • To run tests inside a container

    cd /root/wav2letter/build && make test
  • Update flashlight and wav2letter inside a container to have the current master

    • for CUDA backend
      cd /root/flashlight && git pull
      cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -DFLASHLIGHT_BACKEND=CUDA && \
      make -j8 && make install 
      export MKLROOT=/opt/intel/mkl && export KENLM_ROOT_DIR=/root/kenlm && \
      cd /root/wav2letter && git remote set-url origin https://github.com/facebookresearch/wav2letter.git && \
      git pull && rm -rf build/* && \
      cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -DW2L_LIBRARIES_USE_CUDA=ON -DW2L_BUILD_INFERENCE=ON && \
      make -j8
    • for CPU backend
      cd /root/flashlight && git pull
      cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -DFLASHLIGHT_BACKEND=CPU && \
      make -j8 && make install
      export KENLM_ROOT_DIR=/root/kenlm && \
      cd /root/wav2letter && git remote set-url origin https://github.com/facebookresearch/wav2letter.git && \
      git pull && rm -rf build/* && \
      cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -DW2L_LIBRARIES_USE_CUDA=OFF -DW2L_BUILD_INFERENCE=ON && \
      make -j8 
    • for Inference pipeline
      export KENLM_ROOT_DIR=/root/kenlm && \
      cd /root/wav2letter && git remote set-url origin https://github.com/facebookresearch/wav2letter.git && \
      git pull && rm -rf build/* && \
      cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -DW2L_BUILD_LIBRARIES_ONLY=ON -DW2L_LIBRARIES_USE_CUDA=OFF -DW2L_BUILD_INFERENCE=ON && \
      make -j8 
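
Putting the steps above together, a typical session starts a CUDA container with a dataset mounted from the host, verifies the mount, and runs the test suite; the host path /data/librispeech below is a hypothetical example:

```shell
# start a CUDA container with a host dataset mounted at /root/data
# (/data/librispeech is a hypothetical host path)
sudo docker run --runtime=nvidia --rm -itd --ipc=host \
    --volume /data/librispeech:/root/data \
    --name w2l wav2letter/wav2letter:cuda-latest
# confirm the mount is visible inside the container
sudo docker exec w2l ls /root/data
# run the test suite inside the container
sudo docker exec w2l bash -c 'cd /root/wav2letter/build && make test'
```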

Build images by yourself

To build a Docker image from source, run the following commands (using --no-cache ensures the latest version of flashlight is built into the image if you previously built an image for an earlier version of wav2letter):

git clone --recursive https://github.com/facebookresearch/wav2letter.git
cd wav2letter
# for CUDA backend
sudo docker build --no-cache -f ./Dockerfile-CUDA -t wav2letter .
# for CPU backend
sudo docker build --no-cache -f ./Dockerfile-CPU -t wav2letter .
# for Inference pipeline
sudo docker build --no-cache -f ./Dockerfile-Inference -t wav2letter .
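
Once built, the locally tagged image can be run the same way as the pre-built ones; drop --runtime=nvidia for the CPU and Inference images:

```shell
# start a container from the locally built image (tagged "wav2letter" above)
sudo docker run --runtime=nvidia --rm -itd --ipc=host --name w2l wav2letter
# open a shell inside it
sudo docker exec -it w2l bash
```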

Training/decoding inside container

For logging during training/testing/decoding inside a container, use the --logtostderr=1 --minloglevel=0 flags.
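
For example, a training run inside the container with full logging might look like the following; the flagsfile path is a hypothetical example, and the Train binary lives in the build directory:

```shell
# run training with logs sent to stderr at the most verbose log level
# (the --flagsfile path here is hypothetical)
/root/wav2letter/build/Train train \
    --flagsfile=/root/data/train.cfg \
    --logtostderr=1 --minloglevel=0
```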