# Health-related content in transformer-based language models: Bias in domain-specific training sets

This paper is a follow-up to Samo et al. (2022), available in the paper2022/ directory (repo here):

Samo G, Bonan C, Si F. Health-Related Content in Transformer-Based Deep Neural Network Language Models: Exploring Cross-Linguistic Syntactic Bias. Stud Health Technol Inform. 2022 Jun 29;295:221-225. doi: 10.3233/SHTI220702. PMID: 35773848.