
Language model fine-tuning #4

Open
braaannigan opened this issue Mar 3, 2020 · 0 comments

Comments

@braaannigan

Nice work!

I often start out with much more unlabelled than labelled data. Is it possible to do masked language model fine-tuning (without the classification head) first, on the full set of data, before adding the classifier?
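For context, the data-preparation step of masked language model fine-tuning is BERT-style token masking: select ~15% of positions as prediction targets, then replace 80% of those with a `[MASK]` token, 10% with a random token, and keep 10% unchanged. A minimal sketch (the function name, `mask_id`, and the `-100` ignore-index are assumptions, following common conventions, not code from this repo):

```python
import random

def mask_tokens(token_ids, vocab_size, mask_id, mask_prob=0.15, seed=0):
    """BERT-style masking for masked language modelling.

    Returns (inputs, labels): labels hold the original token at each
    masked position and -100 elsewhere (a common loss ignore-index).
    """
    rng = random.Random(seed)
    inputs = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok          # this position becomes a prediction target
            r = rng.random()
            if r < 0.8:
                inputs[i] = mask_id  # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.randrange(vocab_size)  # 10%: random token
            # remaining 10%: keep the original token
    return inputs, labels
```

The unlabelled corpus can be fed through this masking and trained with a language-modelling head; the classifier head is only attached afterwards.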

If not, would a second-best approach be to do it iteratively, i.e., train on the small amount of labelled data, predict for the unlabelled data, fine-tune on the labels and predictions, and then re-train on just the labelled data?
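The iterative scheme above is essentially self-training with pseudo-labels. A toy sketch of the loop, using plain logistic regression as a stand-in for the real model (all names here are hypothetical; "fine-tune" is approximated by continuing gradient descent from the current weights):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, lr=0.1, steps=200):
    # Continue gradient descent from the current weights,
    # so a second call acts as fine-tuning rather than retraining from scratch.
    for _ in range(steps):
        w = w - lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def predict(w, X):
    return (sigmoid(X @ w) > 0.5).astype(float)

def self_train(X_lab, y_lab, X_unlab):
    w = np.zeros(X_lab.shape[1])
    w = train(w, X_lab, y_lab)            # 1. train on the labelled data
    pseudo = predict(w, X_unlab)          # 2. predict for the unlabelled data
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    w = train(w, X_all, y_all)            # 3. fine-tune on labels + predictions
    w = train(w, X_lab, y_lab)            # 4. re-train on just the labelled data
    return w
```

This mirrors the four steps in the question; with a neural model, steps 3 and 4 would simply be further fine-tuning passes.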
