[1]Lee J, Yoon W, Kim S, et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining[J]. Bioinformatics, 2020, 36(4): 1234-1240.
[2]Peng Y, Yan S, Lu Z. Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets[J]. arXiv preprint arXiv:1906.05474, 2019.
[3]Huang K, Altosaar J, Ranganath R. ClinicalBERT: modeling clinical notes and predicting hospital readmission[J]. arXiv preprint arXiv:1904.05342, 2019.
[4]Beltagy I, Lo K, Cohan A. SciBERT: a pretrained language model for scientific text[J]. arXiv preprint arXiv:1903.10676, 2019.
[5]Gu Y, Tinn R, Cheng H, et al. Domain-specific language model pretraining for biomedical natural language processing[J]. ACM Transactions on Computing for Healthcare (HEALTH), 2021, 3(1): 1-23.
[6]Kanakarajan K R, Kundumani B, Sankarasubbu M. BioELECTRA: pretrained biomedical text encoder using discriminators[C]//Proceedings of the 20th Workshop on Biomedical Language Processing. 2021: 143-154.
[7]Zhang N, Jia Q, Yin K, et al. Conceptualized representation learning for Chinese biomedical text mining[J]. arXiv preprint arXiv:2008.10813, 2020.
[8]Zhang T, Cai Z, Wang C, et al. SMedBERT: a knowledge-enhanced pre-trained language model with structured semantics for medical text mining[J]. arXiv preprint arXiv:2108.08983, 2021.
[9]He Y, Zhu Z, Zhang Y, et al. Infusing disease knowledge into BERT for health question answering, medical inference and disease name recognition[J]. arXiv preprint arXiv:2010.03746, 2020.
[10]Cai Z, Zhang T, Wang C, et al. EMBERT: a pre-trained language model for Chinese medical text mining[C]//Asia-Pacific Web (APWeb) and Web-Age Information Management (WAIM) Joint International Conference on Web and Big Data. Springer, Cham, 2021: 242-257.
[11]Roitero K, Portelli B, Popescu M H, et al. DiLBERT: cheap embeddings for disease-related medical NLP[J]. IEEE Access, 2021, 9: 159714-159723.
[12]He B, Zhou D, Xiao J, et al. Integrating graph contextualized knowledge into pre-trained language models[J]. arXiv preprint arXiv:1912.00147, 2019.
[13]Michalopoulos G, Wang Y, Kaka H, et al. UmlsBERT: clinical domain knowledge augmentation of contextual embeddings using the Unified Medical Language System Metathesaurus[J]. arXiv preprint arXiv:2010.10391, 2020.
[14]Liu F, Shareghi E, Meng Z, et al. Self-alignment pretraining for biomedical entity representations[J]. arXiv preprint arXiv:2010.11784, 2020.