# Multi-GPU using Theano and PyCUDA
Demonstration of training the same neural network with multiple GPUs using Theano and PyCUDA.
See theano_alexnet and the technical report (arXiv:1412.2302) for how to use this to train AlexNet.
Note that this code was developed on the old Theano backend (CudaNdarray), and the way of exchanging weights here only works within a single node. For the new Theano backend (GpuArray) and multi-GPU support across nodes, see Theano-MPI.
If you use this in your research, we kindly ask that you cite the above report:
```
@article{ding2014theano,
  title={Theano-based Large-Scale Visual Recognition with Multiple GPUs},
  author={Ding, Weiguang and Wang, Ruoyan and Mao, Fei and Taylor, Graham},
  journal={arXiv preprint arXiv:1412.2302},
  year={2014}
}
```
## Packages
At minimum, this code depends on:
- Theano (old CudaNdarray backend)
- PyCUDA
## Files that need to be in the same folder
Download `mnist.pkl.gz` and change `shared_args['dataset']` to the path where you saved it.
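If it helps, here is a minimal Python sketch for fetching the file; the URL is the classic deeplearning.net tutorial location and is an assumption on our part (it may have moved since):

```python
# Sketch: fetch mnist.pkl.gz into the current directory. The URL below is
# the historical deeplearning.net tutorial location and may no longer be
# live; adjust as needed, and point shared_args['dataset'] at the result.
import urllib.request

urllib.request.urlretrieve(
    "http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz")
```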
- `dual_mlp.py`: this script trains a multi-layer perceptron with 2 GPUs. It uses data parallelism: two minibatches are trained separately on the two GPUs and then combined into one larger effective minibatch. This is by no means the best way of using 2 GPUs; the purpose of this code is to show one way of using Theano with multiprocessing and multiple GPUs (a minimal sketch of the pattern appears below).
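The following is a CPU-only sketch of that data-parallel exchange, using plain NumPy and `multiprocessing.Queue` as a stand-in for the PyCUDA p2p copy; the model here is a toy linear regression, not the MLP from `dual_mlp.py`:

```python
# Sketch of the data-parallel pattern: each worker computes a gradient on
# its own half-minibatch, ships it to its peer, and both apply the same
# averaged update. dual_mlp.py exchanges weights over a direct GPU-to-GPU
# (p2p) copy; the queues here are just a stand-in for that transfer.
import numpy as np
from multiprocessing import Process, Queue

def worker(rank, recv_q, send_q):
    rng = np.random.RandomState(rank)
    w = np.zeros(4)                                   # same init on both workers
    for step in range(3):
        x = rng.randn(8, 4)                           # this worker's half-minibatch
        y = rng.randn(8)
        grad = -2.0 * x.T.dot(y - x.dot(w)) / len(y)  # MSE gradient for y ~ x.w
        send_q.put(grad)                              # ship my gradient to the peer
        peer_grad = recv_q.get()                      # receive the peer's gradient
        w -= 0.01 * 0.5 * (grad + peer_grad)          # identical update on both workers
    print("worker %d final w: %s" % (rank, w))

if __name__ == "__main__":
    q0, q1 = Queue(), Queue()                         # q0 feeds worker 0, q1 feeds worker 1
    p0 = Process(target=worker, args=(0, q0, q1))
    p1 = Process(target=worker, args=(1, q1, q0))
    p0.start(); p1.start()
    p0.join(); p1.join()
```

Because both workers start from the same weights and apply the same averaged gradient, their parameter copies never diverge, which is what makes a plain weight/gradient exchange sufficient in the real script.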
In a terminal, run:

```bash
THEANO_FLAGS=mode=FAST_RUN,floatX=float32 python dual_mlp.py arg1 arg2
```
where `arg1` is the index of the first GPU and `arg2` is the index of the second GPU. These two GPUs need to be connected directly by PCI-e; otherwise the peer-to-peer (p2p) transfer won't work.
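A quick way to check a candidate GPU pair before a long run is PyCUDA's `Device.can_access_peer`; the small check script below is not part of this repo, just a sketch:

```python
# Sketch: verify that the two GPU indices passed as arg1/arg2 can reach
# each other over p2p before launching training.
import sys
import pycuda.driver as drv

drv.init()
dev_a = drv.Device(int(sys.argv[1]))
dev_b = drv.Device(int(sys.argv[2]))

# can_access_peer wraps cuDeviceCanAccessPeer (CUDA 4.0 and newer); it is
# True only for device pairs with a direct p2p path.
if dev_a.can_access_peer(dev_b) and dev_b.can_access_peer(dev_a):
    print("p2p OK between GPU %s and GPU %s" % (sys.argv[1], sys.argv[2]))
else:
    print("no p2p path; choose a directly connected GPU pair")
```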
For people at the University of Guelph: on GPU1~10, run

```bash
THEANO_FLAGS=mode=FAST_RUN,floatX=float32 python dual_mlp.py 1 2
```

and on GPU11, run

```bash
THEANO_FLAGS=mode=FAST_RUN,floatX=float32 python dual_mlp.py 0 2
```
## Acknowledgements
- Frédéric Bastien, for providing the original page on Using Multiple GPUs
- Lev Givon, for help with inter-process communication between 2 GPUs with PyCUDA; Lev's original script: https://gist.github.com/lebedov/6408165
- Fei Mao, for extensive discussions on GPUs, CUDA, and debugging
- Graham Taylor, for extensive suggestions
- Guangyu Sun, for help on debugging the code