This is a PyTorch port of OpenNMT, an open-source (MIT) neural machine translation system. It is designed to be research-friendly, making it easy to try out new ideas in translation, summarization, image-to-text, morphology, and many other domains.
OpenNMT-py is run as a collaborative open-source project. It is currently maintained by Sasha Rush (Cambridge, MA), Ben Peters (Saarbrücken), and Jianyu Zhan (Shenzhen). The original code was written by Adam Lerer (NYC). The codebase is nearing a stable 0.1 release; we currently recommend forking if you want stable code.
We love contributions. Please consult the Issues page for any post tagged "Contributions Welcome".
pip install -r requirements.txt
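If you are starting from scratch, a typical setup is to clone the repository and install the requirements into an existing Python/PyTorch environment. This is only a sketch; the URL below assumes the standard OpenNMT/OpenNMT-py GitHub location:
git clone https://github.com/OpenNMT/OpenNMT-py.git
cd OpenNMT-py
pip install -r requirements.txt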
The following OpenNMT features are implemented:
- Multi-layer bidirectional RNNs with attention and dropout
- Data preprocessing
- Saving and loading from checkpoints
- Inference (translation) with batching and beam search
- Context gate
- Multiple source and target RNN (LSTM/GRU) types and attention (dotprod/MLP) types
- TensorBoard/Crayon logging
- Source word features
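Many of these features are exposed as command-line options on train.py. The flag names below are assumptions based on a recent checkout and may differ in your version (run python train.py -h to check); as a sketch, a bidirectional GRU encoder with MLP attention, a context gate, and extra dropout could be requested with:
python train.py -data data/demo -save_model demo-model -encoder_type brnn -rnn_type GRU -global_attention mlp -context_gate both -dropout 0.3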
Beta Features (committed):
- Multi-GPU
- Image-to-text processing
- "Attention Is All You Need" (Transformer)
- Copy and coverage attention
- Structured attention
- Conv2Conv convolution model
- SRU ("Training RNNs as Fast as CNNs")
- Inference-time loss functions
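For instance, the Transformer ("Attention Is All You Need") model is also selected through train.py flags. The exact flag names (-encoder_type, -decoder_type, -position_encoding) are assumptions to be verified against python train.py -h:
python train.py -data data/demo -save_model demo-transformer -encoder_type transformer -decoder_type transformer -layers 6 -rnn_size 512 -position_encoding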
python preprocess.py -train_src data/src-train.txt -train_tgt data/tgt-train.txt -valid_src data/src-val.txt -valid_tgt data/tgt-val.txt -save_data data/demo
We will be working with some example data in the data/ folder. The data consists of parallel source (src) and target (tgt) files containing one sentence per line, with tokens separated by a space:
- src-train.txt
- tgt-train.txt
- src-val.txt
- tgt-val.txt
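For instance, a source line and its aligned target line might look like the following (an illustrative, already-tokenized pair, not taken from the actual demo data; note the space before the final period):
she is reading a book .
sie liest ein Buch .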
Validation files are required and are used to evaluate the convergence of the training. They usually contain no more than 5,000 sentences.
After running the preprocessing, the following files are generated:
- demo.src.dict: Dictionary of source vocabulary to index mappings.
- demo.tgt.dict: Dictionary of target vocabulary to index mappings.
- demo.train.pt: Serialized PyTorch file containing the vocabulary and the training and validation data.
Internally the system never touches the words themselves, but uses these indices.
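If you want to sanity-check what preprocess.py produced, the serialized file can be loaded back with torch.load. This is only a sketch: it assumes you run it from the repository root (so that the onmt classes used when pickling the data are importable), and the exact structure of the loaded object varies between versions:
python -c "import torch; data = torch.load('data/demo.train.pt'); print(type(data))"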
python train.py -data data/demo -save_model demo-model
The main train command is quite simple. Minimally, it takes a data file and a save file. This will run the default model, which consists of a 2-layer LSTM with 500 hidden units for both the encoder and the decoder. You can also add -gpuid 1 to use (say) GPU 1.
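Spelled out with explicit flags, that default configuration is roughly equivalent to the command below. The flag names (-layers, -rnn_size, -rnn_type) are assumptions based on a recent checkout; confirm them with python train.py -h:
python train.py -data data/demo -save_model demo-model -layers 2 -rnn_size 500 -rnn_type LSTM -gpuid 1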
python translate.py -model demo-model_epochX_PPL.pt -src data/src-test.txt -output pred.txt -replace_unk -verbose
Now you have a model which you can use to predict on new data. We do this by running beam search. This will output predictions into pred.txt.
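Beam search behaviour can also be tuned at translation time. As a sketch (the -beam_size and -n_best flags are assumptions about your translate.py version; check python translate.py -h), widening the beam and keeping three hypotheses per sentence would look like:
python translate.py -model demo-model_epochX_PPL.pt -src data/src-test.txt -output pred.txt -beam_size 10 -n_best 3 -replace_unk -verbose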
!!! note "Note"
    The predictions are going to be quite terrible, as the demo dataset is small. Try running on some larger datasets! For example, you can download millions of parallel sentences for translation or summarization.
The example below uses the Moses tokenizer (http://www.statmt.org/moses/) to prepare the data and the Moses BLEU script for evaluation.
wget https://raw.githubusercontent.com/moses-smt/mosesdecoder/master/scripts/tokenizer/tokenizer.perl
wget https://raw.githubusercontent.com/moses-smt/mosesdecoder/master/scripts/share/nonbreaking_prefixes/nonbreaking_prefix.de
wget https://raw.githubusercontent.com/moses-smt/mosesdecoder/master/scripts/share/nonbreaking_prefixes/nonbreaking_prefix.en
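# Strip the hard-coded $RealBin/../share/nonbreaking_prefixes path so that tokenizer.perl picks up the nonbreaking_prefix files downloaded above from the current directory.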
sed -i "s/$RealBin\/..\/share\/nonbreaking_prefixes//" tokenizer.perl
wget https://raw.githubusercontent.com/moses-smt/mosesdecoder/master/scripts/generic/multi-bleu.perl
Below is an example of training for the WMT'16 Multimodal Translation task (http://www.statmt.org/wmt16/multimodal-task.html).
mkdir -p data/multi30k
wget http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/training.tar.gz && tar -xf training.tar.gz -C data/multi30k && rm training.tar.gz
wget http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/validation.tar.gz && tar -xf validation.tar.gz -C data/multi30k && rm validation.tar.gz
wget https://staff.fnwi.uva.nl/d.elliott/wmt16/mmt16_task1_test.tgz && tar -xf mmt16_task1_test.tgz -C data/multi30k && rm mmt16_task1_test.tgz
# Delete the last line of val and training files.
for l in en de; do for f in data/multi30k/*.$l; do if [[ "$f" != *"test"* ]]; then sed -i "$ d" $f; fi; done; done
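# Tokenize all train/val/test files with the Moses tokenizer (-a: aggressive hyphen splitting, -no-escape: do not escape special characters, -q: quiet).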
for l in en de; do for f in data/multi30k/*.$l; do perl tokenizer.perl -a -no-escape -l $l -q < $f > $f.atok; done; done
python preprocess.py -train_src data/multi30k/train.en.atok -train_tgt data/multi30k/train.de.atok -valid_src data/multi30k/val.en.atok -valid_tgt data/multi30k/val.de.atok -save_data data/multi30k.atok.low -lower
python train.py -data data/multi30k.atok.low -save_model multi30k_model -gpuid 0
python translate.py -gpu 0 -model multi30k_model_*_e13.pt -src data/multi30k/test.en.atok -tgt data/multi30k/test.de.atok -replace_unk -verbose -output multi30k.test.pred.atok
perl tools/multi-bleu.perl data/multi30k/test.de.atok < multi30k.test.pred.atok
The following pretrained models can be downloaded and used with translate.py. (These were trained with an older version of the code; they will be updated soon.)
- onmt_model_en_de_200k: An English-German translation model based on the 200k sentence dataset at OpenNMT/IntegrationTesting. Perplexity: 20.
- onmt_model_en_fr_b1M (coming soon): An English-French model trained on benchmark-1M. Perplexity: 4.85.
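Once downloaded, a pretrained checkpoint is used exactly like a model you trained yourself; only the file name changes. The checkpoint file name below is an assumption for illustration; use whatever name the download actually provides:
python translate.py -model onmt_model_en_de_200k.pt -src data/src-test.txt -output pred.txt -replace_unk -verbose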
@inproceedings{opennmt,
author = {Guillaume Klein and
Yoon Kim and
Yuntian Deng and
Jean Senellart and
Alexander M. Rush},
title = {OpenNMT: Open-Source Toolkit for Neural Machine Translation},
booktitle = {Proc. ACL},
year = {2017},
url = {https://doi.org/10.18653/v1/P17-4012},
doi = {10.18653/v1/P17-4012}
}