Document-level Relation Extraction with Relation Correlations
Jilin University, Changchun, China.
Jilin University, Changchun, China.
The Chinese University of Hong Kong, Shenzhen, China. ORCID iD: 0000-0002-1501-9914
Jilin University, Changchun, China.
2024 (English). In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 171, p. 14-24. Article in journal (Refereed), Published.
Abstract [en]

Document-level relation extraction faces two often overlooked challenges: the long-tail problem and the multi-label problem. Previous work focuses mainly on obtaining better contextual representations for entity pairs and hardly addresses these challenges. In this paper, we analyze the co-occurrence correlation of relations and introduce it into the document-level relation extraction task for the first time. We argue that these correlations can not only transfer knowledge between data-rich and data-scarce relations to assist the training of long-tailed relations, but also reflect semantic distance, guiding the classifier to identify semantically close relations for multi-label entity pairs. Specifically, we use relation embeddings as a medium and propose two co-occurrence prediction sub-tasks, from coarse- and fine-grained perspectives, to capture relation correlations. Finally, the learned correlation-aware embeddings are used to guide the extraction of relational facts. Extensive experiments on two popular datasets (i.e., DocRED and DWIE) show that our method achieves superior results compared to baselines. Insightful analysis also demonstrates the potential of relation correlations to address the above challenges. The data and code are released at https://github.com/RidongHan/DocRE-Co-Occur. © 2023 Elsevier Ltd
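
For readers who want a concrete starting point, the sketch below illustrates the kind of relation co-occurrence statistic the abstract refers to: how often two relation types label the same entity pair in DocRED-style training data. This is a minimal illustration under stated assumptions, not the authors' released implementation (see the GitHub link above); the function name relation_cooccurrence, the rel2id mapping, the "labels" field layout, and the row normalisation are all choices made here for illustration only.

    # Sketch only: estimate relation co-occurrence from DocRED-style annotations.
    # Not the official code from https://github.com/RidongHan/DocRE-Co-Occur.
    import json
    from collections import defaultdict
    from itertools import combinations

    import numpy as np


    def relation_cooccurrence(train_file, rel2id):
        """Count how often two relation types label the same entity pair."""
        num_relations = len(rel2id)
        counts = np.zeros((num_relations, num_relations), dtype=np.float64)
        with open(train_file, encoding="utf-8") as f:
            docs = json.load(f)  # assumed DocRED-style: a list of annotated documents
        for doc in docs:
            # Group gold relation labels by (head entity, tail entity) pair.
            pair2rels = defaultdict(set)
            for fact in doc.get("labels", []):
                pair2rels[(fact["h"], fact["t"])].add(rel2id[fact["r"]])
            for rels in pair2rels.values():
                for r in rels:
                    counts[r, r] += 1.0
                for r1, r2 in combinations(sorted(rels), 2):
                    counts[r1, r2] += 1.0
                    counts[r2, r1] += 1.0
        # Row-normalise so entry (i, j) approximates how often relation j
        # co-occurs with relation i on the same entity pair.
        row_sums = counts.sum(axis=1, keepdims=True)
        return counts / np.clip(row_sums, 1.0, None)

In the paper, statistics of this kind are captured through relation embeddings via the two co-occurrence prediction sub-tasks mentioned in the abstract; the exact sub-task design and loss functions are documented in the released code.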

Place, publisher, year, edition, pages
Oxford: Elsevier, 2024. Vol. 171, p. 14-24
Keywords [en]
Co-occurrence, Document-level, Multi-task, Relation Correlations, Relation Extraction
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:hh:diva-52317
DOI: 10.1016/j.neunet.2023.11.062
Scopus ID: 2-s2.0-85179586274
OAI: oai:DiVA.org:hh-52317
DiVA, id: diva2:1822493
Note

Funding: This work is supported by the National Natural Science Foundation of China under grant No. 61872163 and 61806084, Jilin Province Key Scientific and Technological Research and Development Project under grant No. 20210201131GX, and Jilin Provincial Education Department Project under grant No. JJKH20190160KJ.

Available from: 2023-12-22. Created: 2023-12-22. Last updated: 2024-01-17. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Tiwari, Prayag

Search in DiVA

By author/editor
Wang, Benyou; Tiwari, Prayag
By organisation
School of Information Technology
In the same journal
Neural Networks
