awesome-very-deep-learning
-----------------

awesome-very-deep-learning is a curated list of papers and code about implementing and training very deep neural networks.

Value Iteration Networks

Value Iteration Networks are very deep networks that have tied weights and perform approximate value iteration. They are used as an internal (model-based) planning module.

Papers

  • Value Iteration Networks (2016) [original code], introduces VINs (Value Iteration Networks). The authors show that value iteration can be performed by iteratively applying convolutions and channel-wise max-pooling (a minimal sketch of this module follows below). VINs generalize better in environments where the network needs to plan. NIPS 2016 best paper.
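
A minimal PyTorch sketch of such a value-iteration module (module name, kernel size, and hyperparameters are illustrative assumptions, not taken from the original code): each iteration convolves the reward map together with the current value map into one Q-channel per action, then max-pools across the action channels to produce the next value map.

```python
import torch
import torch.nn as nn

class ValueIterationModule(nn.Module):
    """Illustrative VIN-style planning module: K steps of approximate value
    iteration using a shared (tied-weight) convolution followed by a max
    over the action channels."""

    def __init__(self, num_actions=8, k_iterations=20):
        super().__init__()
        self.k = k_iterations
        # One Q-channel per action; inputs are the reward map and the value map.
        self.q_conv = nn.Conv2d(2, num_actions, kernel_size=3, padding=1, bias=False)

    def forward(self, reward_map):
        # reward_map: (batch, 1, H, W) grid of per-cell rewards
        value_map = torch.zeros_like(reward_map)
        for _ in range(self.k):  # tied weights: the same conv is reused every step
            q = self.q_conv(torch.cat([reward_map, value_map], dim=1))
            value_map, _ = q.max(dim=1, keepdim=True)  # channel-wise (over actions) max-pooling
        return value_map

# Example: plan over a random 16x16 reward map
values = ValueIterationModule()(torch.randn(1, 1, 16, 16))
print(values.shape)  # torch.Size([1, 1, 16, 16])
```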

Densely Connected Convolutional Networks

Densely Connected Convolutional Networks are very deep neural networks consisting of dense blocks. Within a dense block, each layer receives the feature maps of all preceding layers as input. This encourages feature reuse and thus substantially reduces the model size (number of parameters).
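
A minimal PyTorch sketch of a dense block (class name, growth rate, and layer count are illustrative assumptions, not taken from any of the implementations listed below): each layer receives the concatenation of all previously produced feature maps and contributes growth_rate new ones.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Illustrative dense block: every layer sees the concatenation of the
    input and all previously produced feature maps."""

    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate everything produced so far and feed it to the next layer.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=16)
out = block(torch.randn(1, 16, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32]) = 16 + 4 * 12 channels
```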

Papers

  • Densely Connected Convolutional Networks (2016), the paper that introduces DenseNets.

Implementations

  1. Authors' [Caffe Implementation](https://github.com/liuzhuang13/DenseNetCaffe)
  2. Authors' more memory-efficient [Torch Implementation](https://github.com/gaohuang/DenseNet_lite).
  3. [Tensorflow Implementation](https://github.com/YixuanLi/densenet-tensorflow) by Yixuan Li.
  4. [Tensorflow Implementation](https://github.com/LaurentMazare/deep-models/tree/master/densenet) by Laurent Mazare.
  5. [Lasagne Implementation](https://github.com/Lasagne/Recipes/tree/master/papers/densenet) by Jan Schlüter.
  6. [Keras Implementation](https://github.com/tdeboissiere/DeepLearningImplementations/tree/master/DenseNet) by tdeboissiere.
  7. [Keras Implementation](https://github.com/robertomest/convnet-study) by Roberto de Moura Estevão Filho.
  8. [Chainer Implementation](https://github.com/t-hanya/chainer-DenseNet) by Toshinori Hanya.
  9. [Chainer Implementation](https://github.com/yasunorikudo/chainer-DenseNet) by Yasunori Kudo.
  10. PyTorch Implementation

Deep Residual Learning

Deep Residual Networks are a family of extremely deep architectures (up to 1000 layers) showing compelling accuracy and nice convergence behaviors. Instead of learning a new representation at each layer, deep residual networks use identity mappings to learn residuals.
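A minimal PyTorch sketch of a basic residual block, purely for illustration (channel sizes and layer layout are assumptions, not the reference implementation): the identity shortcut carries x through unchanged, so the stacked layers only need to learn the residual F(x).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Illustrative basic residual block: output = ReLU(x + F(x))."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        residual = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        return F.relu(x + residual)  # identity shortcut plus learned residual

block = ResidualBlock(channels=16)
print(block(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```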

Papers

  • Deep Residual Learning for Image Recognition (2015), the paper that introduces deep residual networks.

Implementations

  1. Torch by Facebook AI Research (FAIR), with training code in Torch and pre-trained ResNet-18/34/50/101 models for ImageNet: blog, code
  2. Torch, CIFAR-10, with ResNet-20 to ResNet-110, training code, and curves: code
  3. Lasagne, CIFAR-10, with ResNet-32 and ResNet-56 and training code: code
  4. Neon, CIFAR-10, with pre-trained ResNet-32 to ResNet-110 models, training code, and curves: code
  5. Neon, Preactivation layer implementation: code
  6. Torch, MNIST, 100 layers: blog, code
  7. A winning entry in Kaggle's right whale recognition challenge: blog, code
  8. Neon, Place2 (mini), 40 layers: blog, code
  9. Tensorflow with tflearn, with CIFAR-10 and MNIST: code
  10. Tensorflow with skflow, with MNIST: code
  11. Stochastic dropout in Keras: code
  12. ResNet in Chainer: code
  13. Stochastic dropout in Chainer: code
  14. Wide Residual Networks in Keras: code
  15. ResNet in TensorFlow 0.9+ with pretrained caffe weights: code
  16. ResNet in PyTorch: code

In addition, this [code](https://github.com/ry/tensorflow-resnet) by Ryan Dahl helps to convert the pre-trained models to TensorFlow.

Highway Networks

Highway Networks take inspiration from Long Short-Term Memory (LSTM) networks and allow training of deep, efficient networks (with hundreds of layers) with conventional gradient-based methods.
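
A minimal PyTorch sketch of a fully connected highway layer (names and the gate-bias initialization value are illustrative assumptions, not taken from the implementations below): an LSTM-style transform gate T(x) decides how much of the transformed signal H(x) passes through and how much of the input is carried unchanged, which keeps gradients flowing through very deep stacks.

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    """Illustrative highway layer: y = T(x) * H(x) + (1 - T(x)) * x."""

    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)   # H(x)
        self.gate = nn.Linear(dim, dim)        # T(x)
        # Bias the gate towards "carry" at initialization so deep stacks train easily.
        nn.init.constant_(self.gate.bias, -2.0)

    def forward(self, x):
        h = torch.relu(self.transform(x))
        t = torch.sigmoid(self.gate(x))        # transform gate in (0, 1)
        return t * h + (1.0 - t) * x           # gated mix of transform and carry

# Stack many layers; gradients can flow through the carry path.
net = nn.Sequential(*[HighwayLayer(64) for _ in range(50)])
print(net(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```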

Papers

  • Highway Networks (2015), the paper that introduces highway networks.

Implementations

  1. Lasagne: code
  2. Caffe: code
  3. Torch: code
  4. Tensorflow: blog, code

Very Deep Learning Theory

Theories of very deep learning concentrate on the ideas that very deep networks with skip connections are able to efficiently approximate recurrent computations (similar to the recurrent connections in the visual cortex) or effectively act as ensembles of exponentially many relatively shallow networks.
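
As a toy illustration of the ensemble view (with made-up scalar residual weights, not from any paper), unrolling n residual blocks produces 2^n input-to-output paths, since each block can either be skipped via its identity connection or traversed via its residual branch; for linear branches the network output equals the sum over all of these paths.

```python
from itertools import combinations

# Toy linear residual "network": block i maps y -> y + a[i] * y.
a = [0.5, -0.2, 0.1]   # made-up residual branch weights for 3 blocks
x = 1.0

# Forward pass through the composed residual network.
y = x
for a_i in a:
    y = y + a_i * y

# Same result as summing over all 2^n input-output paths: each path either
# skips a block (identity) or goes through its residual branch.
paths_total = 0.0
n_paths = 0
for k in range(len(a) + 1):
    for subset in combinations(range(len(a)), k):
        contribution = x
        for i in subset:
            contribution *= a[i]
        paths_total += contribution
        n_paths += 1

# Equal up to floating-point rounding; 2**3 = 8 paths for 3 blocks.
print(y, paths_total, n_paths)
```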

Papers
