AttentionMGT-DTA: A multi-modal drug-target affinity prediction using graph transformer and attention mechanism
Suzhou University of Science and Technology, Suzhou, China.
Suzhou University of Science and Technology, Suzhou, China; University of Electronic Science and Technology of China, Quzhou, China.
Nanjing Medical University, Suzhou, China.
University of Electronic Science and Technology of China, Quzhou, China. ORCID iD: 0000-0001-6406-1142
2024 (English). In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 169, p. 623-636. Article in journal (Refereed). Published.
Abstract [en]

The accurate prediction of drug-target affinity (DTA) is a crucial step in drug discovery and design, and traditional wet-lab experiments are expensive and time-consuming. Recently, deep learning methods have achieved notable performance improvements in DTA prediction. However, one challenge for deep learning-based models is obtaining appropriate and accurate representations of drugs and targets; effective target representations in particular remain under-explored. Another challenge is comprehensively capturing the interaction information between drug-target pairs, which is equally important for predicting DTA. In this study, we propose AttentionMGT-DTA, a multi-modal attention-based model for DTA prediction. AttentionMGT-DTA represents drugs and targets by a molecular graph and a binding pocket graph, respectively. Two attention mechanisms are adopted to integrate and exchange information between different protein modalities and between drug-target pairs. The experimental results showed that our proposed model outperformed state-of-the-art baselines on two benchmark datasets. In addition, AttentionMGT-DTA also offers high interpretability by modeling the interaction strength between drug atoms and protein residues. Our code is available at https://github.com/JK-Liu7/AttentionMGT-DTA. © 2023 The Author(s)
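The interpretability mechanism the abstract describes — drug-atom embeddings attending over protein-residue embeddings, with the attention weights read off as interaction strengths — can be illustrated with a minimal scaled dot-product cross-attention sketch. Everything below (the function name, toy embeddings, and dimensions) is an illustrative assumption, not the authors' implementation; see the linked GitHub repository for the actual model.

```python
import math

def cross_attention(drug_atoms, protein_residues):
    """Toy scaled dot-product cross-attention: each drug-atom embedding
    attends over all protein-residue embeddings. Returns the attended
    drug representations and the attention-weight matrix (the kind of
    atom-residue 'interaction strength' used for interpretability)."""
    d = len(drug_atoms[0])
    # Similarity scores, scaled by sqrt(embedding dimension).
    scores = [[sum(a * r for a, r in zip(atom, res)) / math.sqrt(d)
               for res in protein_residues] for atom in drug_atoms]
    # Row-wise softmax turns scores into attention weights.
    weights = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        weights.append([e / z for e in exps])
    # Each atom's output is a weighted mix of residue embeddings.
    attended = [[sum(w * res[k] for w, res in zip(wrow, protein_residues))
                 for k in range(d)] for wrow in weights]
    return attended, weights

# Two drug atoms (dim 2) attending over three protein residues.
atoms = [[1.0, 0.0], [0.0, 1.0]]
residues = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out, attn = cross_attention(atoms, residues)
```

In this toy setting each row of `attn` sums to 1 and gives the relative interaction strength of one drug atom against every protein residue, which is what makes an attention map directly inspectable.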

Place, publisher, year, edition, pages
Oxford: Elsevier, 2024. Vol. 169, p. 623-636
Keywords [en]
Attention mechanism, Drug–target affinity, Graph neural network, Graph transformer, Multi-modal learning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-52421
DOI: 10.1016/j.neunet.2023.11.018
PubMedID: 37976593
Scopus ID: 2-s2.0-85181262524
OAI: oai:DiVA.org:hh-52421
DiVA, id: diva2:1829327
Note

Funding: The National Natural Science Foundation of China (62073231, 62176175, 62172076), National Research Project (2020YFC2006602), Provincial Key Laboratory for Computer Information Processing Technology, Soochow University (KJS2166), Opening Topic Fund of Big Data Intelligent Engineering Laboratory of Jiangsu Province (SDGC2157), Postgraduate Research and Practice Innovation Program of Jiangsu Province, Zhejiang Provincial Natural Science Foundation of China (Grant No. LY23F020003), and the Municipal Government of Quzhou, China (Grant No. 2023D038).

Available from: 2024-01-18. Created: 2024-01-18. Last updated: 2024-01-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text · PubMed · Scopus

Authority records

Tiwari, Prayag

Search in DiVA

By author/editor
Zou, Quan; Tiwari, Prayag; Ding, Yijie
By organisation
School of Information Technology
In the same journal
Neural Networks
Computer Sciences
