# XLM-K

This repo provides the code for reproducing the experiments in *XLM-K: Improving Cross-Lingual Language Model Pre-Training with Multilingual Knowledge*.

This repo is built on top of the Fairseq code base, so we only provide the code we added. In this first stage we release the model and loss code; the data loading code will be released in a second stage.

## How to cite

If you extend or use this work, please cite our paper.

```bibtex
@inproceedings{jiang2022xlmk,
  title={XLM-K: Improving Cross-Lingual Language Model Pre-Training with Multilingual Knowledge},
  author={Xiaoze Jiang and Yaobo Liang and Weizhu Chen and Nan Duan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2022}
}
```