This is a PyTorch implementation of "Neural Speed Reading via Skim-RNN", published at ICLR 2018.
The IMDB dataset is used by default and is stored in the ./data folder. In addition, the 300-dimensional GloVe word embeddings trained on 840 billion tokens are used.
Unlike Skip RNN or Jump LSTM, where the skip/jump objective is discrete and non-differentiable, Skim RNN uses the Gumbel-softmax reparametrization trick to make the skimming decision differentiable:
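A minimal sketch of the Gumbel-softmax trick for a binary read/skim decision (the logits tensor and temperature value here are illustrative, not taken from this repository's code):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, temperature=1.0):
    # Add Gumbel(0, 1) noise to the logits, then apply a temperature-scaled
    # softmax. This yields a differentiable approximation to sampling from
    # the categorical distribution defined by the logits.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / temperature, dim=-1)

# Hypothetical skim decision with two choices: read fully vs. skim.
logits = torch.tensor([[0.5, 1.5]], requires_grad=True)
probs = gumbel_softmax_sample(logits, temperature=0.5)

# The sample is a soft one-hot vector, so gradients flow back to the logits.
loss = probs[0, 0]
loss.backward()
```

As the temperature approaches zero, the samples approach discrete one-hot decisions while the objective stays differentiable during training.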
python main.py [arguments]
-h, --help show this help message
-large_cell_size size of the large LSTM
-small_cell_size size of the small LSTM