This project presents an experimental approach to tackling data scarcity in a specific NLP task by leveraging existing annotated datasets from related tasks. We train a single base model, such as BERT, with multiple heads, each dedicated to a specific task, and train all of them simultaneously. We term these additional tasks "supporting tasks." The goal is to share knowledge across related domains and thereby improve the model's performance on the target task.
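A minimal sketch of the idea, assuming a Hugging Face BERT encoder shared across tasks (class names, head names, and the task dictionary below are illustrative, not the repository's actual `models/multiHeadModel.py`):

```python
import torch.nn as nn
from transformers import AutoModel

class MultiHeadModel(nn.Module):
    """Shared BERT encoder with one classification head per task (illustrative)."""

    def __init__(self, task_num_labels, base_model="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base_model)  # shared across all tasks
        hidden = self.encoder.config.hidden_size
        # One linear head per task, e.g. {"target_task": 2, "supporting_task": 3}
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
        )

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]   # [CLS] token representation
        return self.heads[task](pooled)        # logits for the requested task
```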
See the full report in the following PDF: `Advanced_NLP_Project.pdf`
Branches:
- Medical tasks can be found in the `main` branch.
- The GLUE (General Language Understanding Evaluation) tasks can be found in the `glue_tasks` branch.
The multi-head model can be viewed in `models/multiHeadModel.py`.
The multi-head training can be viewed in `train.py`; a sketch of the general training scheme follows below.
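A hedged sketch of how a multi-task training step might look (the round-robin batch mixing, batch keys, and function signature are assumptions for illustration, not necessarily what `train.py` does):

```python
import torch

def train_epoch(model, task_loaders, optimizer, device):
    """One epoch that alternates batches from all task loaders (illustrative).

    task_loaders: dict mapping task name -> DataLoader; iteration stops at the
    shortest loader because of zip().
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    # Round-robin over tasks so the shared encoder sees every task each step.
    for batches in zip(*task_loaders.values()):
        optimizer.zero_grad()
        total_loss = 0.0
        for task, batch in zip(task_loaders.keys(), batches):
            logits = model(task,
                           batch["input_ids"].to(device),
                           batch["attention_mask"].to(device))
            total_loss = total_loss + loss_fn(logits, batch["labels"].to(device))
        total_loss.backward()  # gradients from all heads flow into the shared encoder
        optimizer.step()
```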
Install:
pip install -r requirements.txt
Run:
python train.py --batch_size <batch size> --epochs <number of epochs> --device <device>
For the remaining arguments, see `train.py`.
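For example (the argument values here are illustrative, not defaults):
python train.py --batch_size 16 --epochs 3 --device cuda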