A Multimodal Coupled Graph Attention Network for Joint Traffic Event Detection and Sentiment Classification
Software Engineering College, Zhengzhou University of Light Industry, Zhengzhou, China. ORCID iD: 0000-0002-7990-2458
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-2851-4260
Software Engineering College, Zhengzhou University of Light Industry, Zhengzhou, China. ORCID iD: 0000-0001-7984-625X
Multimedia Communications Research Laboratory (MCRLab), University of Ottawa, Ottawa, Canada; Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates. ORCID iD: 0000-0002-7690-8547
2023 (English). In: IEEE Transactions on Intelligent Transportation Systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 24, no. 8, pp. 8542-8554. Article in journal (Refereed). Published.
Abstract [en]

Traffic events are one of the main causes of traffic accidents, making traffic event detection a challenging research problem in traffic management and intelligent transportation systems (ITSs). The main gap in this task lies in how to extract and represent valuable information from various kinds of traffic data. Considering the important role that social networks play in traffic data analysis, we argue that sentiment classification and traffic event detection are two closely related tasks in ITSs, where events and sentiment can reveal explicit and implicit traffic accidents, respectively. Unfortunately, none of the recent approaches to traffic event detection has taken sentiment knowledge into account. This paper proposes a multimodal coupled graph attention network (MCGAT). It constructs a multimodal, multitask interactive graph structure in which terms (such as words and pixels) are treated as nodes, and their contextual and cross-modal correlations are formalized as edges. The key components are the cross-modal and cross-task graph connection layers. The cross-modal graph connection layer captures the multimodal representation by connecting each node in one modality to all nodes in the other modality. The cross-task graph connection layer connects the multimodal node in one task to two single-modality nodes in the other task. Empirical evaluation on two benchmark datasets, MGTES and Twitter, shows the effectiveness of the proposed model over state-of-the-art baselines in terms of F1 and accuracy, with significant improvements of 2.4%, 2.4%, 2.7%, and 2.7%. © 2022 IEEE.
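The cross-modal graph connection layer described in the abstract — every node in one modality attending to all nodes in the other — can be sketched as a bipartite graph attention layer. The following PyTorch code is an illustrative re-implementation under stated assumptions, not the authors' published code; all class, parameter, and dimension names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalGraphAttention(nn.Module):
    """Hypothetical sketch of a cross-modal graph connection layer:
    a bipartite attention in which each node of modality A (e.g. words)
    attends to all nodes of modality B (e.g. image regions/pixels),
    and vice versa. Not the authors' implementation."""

    def __init__(self, dim_a: int, dim_b: int, hidden_dim: int):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, hidden_dim)   # project modality A into shared space
        self.proj_b = nn.Linear(dim_b, hidden_dim)   # project modality B into shared space
        self.attn = nn.Linear(2 * hidden_dim, 1)     # score for each cross-modal edge (i, j)

    def forward(self, nodes_a: torch.Tensor, nodes_b: torch.Tensor):
        # nodes_a: (Na, dim_a) text-node features; nodes_b: (Nb, dim_b) visual-node features
        ha = self.proj_a(nodes_a)                    # (Na, H)
        hb = self.proj_b(nodes_b)                    # (Nb, H)
        na, nb = ha.size(0), hb.size(0)
        # Concatenate every (A-node, B-node) pair to score the fully connected bipartite edges.
        pairs = torch.cat(
            [ha.unsqueeze(1).expand(na, nb, -1),
             hb.unsqueeze(0).expand(na, nb, -1)], dim=-1)      # (Na, Nb, 2H)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)    # (Na, Nb)
        # Each A-node aggregates all B-nodes (softmax over B), and symmetrically for B.
        a_attends_b = torch.softmax(scores, dim=1) @ hb        # (Na, H)
        b_attends_a = torch.softmax(scores, dim=0).t() @ ha    # (Nb, H)
        return a_attends_b, b_attends_a
```

As a usage example, projecting 5 word nodes and 7 visual nodes into a 32-dimensional shared space yields cross-modal representations of shapes (5, 32) and (7, 32):

```python
layer = CrossModalGraphAttention(dim_a=8, dim_b=16, hidden_dim=32)
out_a, out_b = layer(torch.randn(5, 8), torch.randn(7, 16))
```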

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2023. Vol. 24, no. 8, pp. 8542-8554
Keywords [en]
Traffic event detection, sentiment classification, graph neural networks, graph embedding, intelligent transportation system
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:hh:diva-48580
DOI: 10.1109/tits.2022.3205477
ISI: 000910504600001
Scopus ID: 2-s2.0-85141521242
OAI: oai:DiVA.org:hh-48580
DiVA, id: diva2:1709348
Available from: 2022-11-08. Created: 2022-11-08. Last updated: 2023-11-28. Bibliographically approved.

Open Access in DiVA

The full text will be freely available from 2024-10-25 12:46

Other links

Publisher's full text
Scopus

Authority records

Tiwari, Prayag

Search in DiVA

By author/editor
Zhang, Yazhou; Tiwari, Prayag; Zheng, Qian; Saddik, Abdulmotaleb El; Hossain, M. Shamim
By organisation
School of Information Technology
In the same journal
IEEE transactions on intelligent transportation systems (Print)
Computer Systems
