Pre-trained Language Models in Biomedical Domain: A Systematic Survey
The Chinese University of Hong Kong, Shenzhen, China.
The Chinese University of Hong Kong, Shenzhen, China.
University of Manchester, Manchester, United Kingdom.
University of Amsterdam, Amsterdam, Netherlands.
2024 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 56, no. 3, article id 55. Article, review/survey (Refereed). Published.
Abstract [en]

Pre-trained language models (PLMs) have become the de facto paradigm for most natural language processing tasks. This also benefits the biomedical domain: researchers from the informatics, medicine, and computer science communities have proposed various PLMs trained on biomedical datasets, e.g., biomedical text, electronic health records, and protein and DNA sequences, for various biomedical tasks. However, the cross-discipline nature of biomedical PLMs hinders their spread among communities; some existing works remain isolated from each other and lack comprehensive comparison and discussion. It is therefore nontrivial to produce a survey that not only systematically reviews recent advances in biomedical PLMs and their applications but also standardizes terminology and benchmarks. This article summarizes the recent progress of pre-trained language models in the biomedical domain and their applications in downstream biomedical tasks. In particular, we discuss the motivations for PLMs in the biomedical domain and introduce the key concepts of pre-trained language models. We then propose a taxonomy of existing biomedical PLMs that systematically categorizes them from various perspectives. In addition, their applications in downstream biomedical tasks are discussed exhaustively. Last, we illustrate various limitations and future trends, aiming to provide inspiration for future research. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
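
As a minimal, hedged illustration of the paradigm the abstract describes (not taken from the article itself), the sketch below loads a publicly available biomedical PLM with the Hugging Face transformers library and extracts contextual embeddings; the checkpoint name and example sentence are assumptions chosen purely for demonstration.

    # Minimal sketch: contextual embeddings from a biomedical PLM.
    # Assumes the Hugging Face `transformers` library (with PyTorch) is installed;
    # the checkpoint below is an assumption, any biomedical PLM on the Hub works similarly.
    from transformers import AutoTokenizer, AutoModel

    model_name = "dmis-lab/biobert-base-cased-v1.1"  # BERT pre-trained on biomedical text
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)

    # Encode a biomedical sentence and run it through the encoder.
    inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
    outputs = model(**inputs)

    # Token-level embeddings that downstream biomedical tasks
    # (e.g., named entity recognition, relation extraction, QA) build upon.
    print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)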

Place, publisher, year, edition, pages
New York, NY: Association for Computing Machinery (ACM), 2024. Vol. 56, no. 3, article id 55
Keywords [en]
Biomedical domain, pre-trained language models, natural language processing
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:hh:diva-52235
DOI: 10.1145/3611651
ISI: 001098058200002
Scopus ID: 2-s2.0-85176943087
OAI: oai:DiVA.org:hh-52235
DiVA, id: diva2:1819830
Note

Funding: Chinese Key-Area Research and Development Program of Guangdong Province (2020B0101350001), the Shenzhen Science and Technology Program (JCYJ20220818103001002), the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen, Shenzhen Key Research Project (C10120230151) and Shenzhen Doctoral Startup Funding (RCBS20221008093330065).

Available from: 2023-12-15. Created: 2023-12-15. Last updated: 2023-12-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Tiwari, Prayag

Search in DiVA

By author/editor
Tiwari, Prayag
By organisation
School of Information Technology
In the same journal
ACM Computing Surveys
Language Technology (Computational Linguistics)

Search outside of DiVA

Google
Google Scholar
