A systematic approach for tracking the evolution of XAI as a field of research
Halmstad University, School of Information Technology. ORCID iD: 0000-0001-7055-2706
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-7796-5201
Halmstad University, School of Information Technology. ORCID iD: 0000-0001-8413-963X
Halmstad University, School of Information Technology. ORCID iD: 0000-0003-2590-6661
2023 (English). In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part II / [ed] Irena Koprinska; Paolo Mignone; Riccardo Guidotti; Szymon Jaroszewicz; Holger Fröning; Francesco Gullo; Pedro M. Ferreira; Damian Roqueiro, Cham: Springer, 2023, Vol. 1753, p. 461-476. Conference paper, Published paper (Refereed)
Abstract [en]

The increasing use of AI methods in various applications has raised concerns about their explainability and transparency. Many solutions have been developed within the last few years to either explain the model itself or the decisions provided by the model. However, the number of contributions in the field of eXplainable AI (XAI) is increasing at such a high pace that it is almost impossible for a newcomer to identify key ideas, track the field’s evolution, or find promising new research directions. 

Typically, survey papers serve as a starting point, providing a feasible entry point into a research area. However, this is not trivial for some fields with exponential growth in the literature, such as XAI. For instance, we analyzed 23 surveys in the XAI domain published within the last three years and surprisingly found no common conceptualization among them. This makes XAI one of the most challenging research areas to enter. To address this problem, we propose a systematic approach that enables newcomers to identify the principal ideas and track their evolution. The proposed method includes automating the retrieval of relevant papers, extracting their semantic relationship, and creating a temporal graph of ideas by post-analysis of citation graphs. 

The main outcome of our method is the Field’s Evolution Graph (FEG), which can be used to find the core idea of each approach in this field, see how a given concept has developed and evolved over time, observe how different notions interact with each other, and perceive how a new paradigm emerges through combining multiple ideas. As a demonstration, we show that FEG successfully identifies the field’s key articles, such as LIME and Grad-CAM, and maps out their evolution and relationships.
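The record gives no implementation details for the pipeline the abstract describes. As a minimal sketch of the citation-graph post-analysis step only — the paper names, years, and citation links below are purely illustrative, not data from the paper — the core idea can be approximated like this:

```python
# Hypothetical toy records standing in for the automatically retrieved corpus.
papers = {"LIME": 2016, "Grad-CAM": 2017, "SHAP": 2017, "Survey": 2020}
citations = [("Grad-CAM", "LIME"), ("SHAP", "LIME"),
             ("Survey", "SHAP"), ("Survey", "Grad-CAM")]

# Step 1: citation counts flag candidate "key articles" of the field.
cited_count = {p: 0 for p in papers}
for citer, cited in citations:
    cited_count[cited] += 1

# Step 2: evolution edges run from the older (cited) idea to the newer
# (citing) one, so the resulting graph reads forward in time.
evolution_edges = sorted(
    (cited, citer) for citer, cited in citations
    if papers[cited] <= papers[citer]
)

key_article = max(cited_count, key=cited_count.get)
print(key_article)  # LIME
```

In this toy corpus, LIME emerges as the key article because it is the most-cited node; the actual method additionally extracts semantic relationships between papers, which this sketch omits.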

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG.

Place, publisher, year, edition, pages
Cham: Springer, 2023. Vol. 1753, p. 461-476
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 1753
Keywords [en]
Field's Evolution, XAI, Explainable AI
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-49831
DOI: 10.1007/978-3-031-23633-4_31
ISI: 000967761200031
Scopus ID: 2-s2.0-85149954978
OAI: oai:DiVA.org:hh-49831
DiVA, id: diva2:1727540
Conference
Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2022, Workshop on IoT Streams for Predictive Maintenance, Grenoble, France, September 19-23, 2022
Funder
Swedish Research Council, CHIST-ERA-19-XAI-012
Available from: 2023-01-16 Created: 2023-01-16 Last updated: 2024-12-04
Bibliographically approved
In thesis
1. Towards better XAI: Improving Shapley Values and Federated Learning Interpretability
2024 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

The use of Artificial Intelligence (AI) in various areas has resulted in significant progress. However, it has also raised concerns about how transparent, understandable, and reliable AI models are. Explainable Artificial Intelligence (XAI) has become an important area of study because it makes AI systems easier for people to understand. XAI also aims to create trust, accountability, and transparency in AI systems, especially in vital areas such as healthcare, finance, and autonomous systems. Furthermore, XAI helps detect biases, improve model debugging, and enable the discovery of new insights from data.

With the increasing focus on the XAI field, we have developed a framework called Field’s Evolution Graph (FEG) to track the evolution of research within the XAI field. This framework helps researchers identify key concepts and their interrelationships over time. Among the different approaches developed in XAI, Shapley values are one of the most well-known techniques. We further examine this method by evaluating its computational cost and analyzing various characteristic functions. We have introduced EcoShap, a computationally efficient method for calculating Shapley values to identify the most important features. By focusing calculations on a few of the most important features, EcoShap significantly reduces the computational cost, making it feasible to apply Shapley values to large datasets.
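EcoShap's actual selection and update rules are described in the thesis papers, not in this record, so the following is only a generic sketch of the underlying idea: pay the full exponential Shapley cost only for features a cheap screen flags as promising. The characteristic function, feature names, and weights below are all invented for illustration.

```python
from itertools import combinations
from math import factorial

# Toy characteristic function v(S): model "performance" when only the
# features in S are available. Purely illustrative.
weights = {"f1": 5.0, "f2": 3.0, "f3": 0.5, "f4": 0.1}
features = list(weights)

def v(subset):
    return sum(weights[f] for f in subset)

def shapley(feature, candidates):
    """Exact Shapley value of `feature` over the given candidate set."""
    others = [f for f in candidates if f != feature]
    n = len(candidates)
    total = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            coalition_weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += coalition_weight * (v(S + (feature,)) - v(S))
    return total

# Naive: exact values for every feature (cost grows as 2^n per feature).
full = {f: shapley(f, features) for f in features}

# EcoShap-style idea (sketched): spend the budget only on the features a
# cheap screen flags as promising, here the two largest single-feature gains.
top2 = sorted(features, key=lambda f: v((f,)), reverse=True)[:2]
focused = {f: shapley(f, top2) for f in top2}
print(focused)  # {'f1': 5.0, 'f2': 3.0}
```

The focused pass evaluates far fewer coalitions while still ranking the dominant features correctly; for an additive game like this toy one, each Shapley value simply recovers the feature's weight.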

The thesis further analyzes, both theoretically and practically, the different characteristic functions used in Shapley value calculations. We examine how reliable feature importance rankings are, and how the choice of characteristic function, such as accuracy or the F1-score, affects those rankings.

Additionally, Federated Learning (FL) is a machine learning paradigm that trains global models across different clients while keeping data decentralized. As in other machine learning paradigms, XAI is needed in this context. To address this need, we have proposed using Incremental Decision Trees as an inherently interpretable model within the FL framework, offering an interpretable alternative to black-box models. The objective is to employ inherently explainable models instead of trying to explain black-box models. Three aggregation strategies are presented to combine local models into a global model while maintaining interpretability and accuracy.
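The three aggregation strategies themselves are not described in this record. As a generic stand-in, the simplest interpretable aggregation is a majority vote over the clients' local models; every rule, threshold, and name below is illustrative, not taken from the thesis.

```python
from collections import Counter

def make_stump(feature_idx, threshold):
    """A one-rule 'tree': predict class 1 if the feature exceeds the threshold."""
    return lambda x: int(x[feature_idx] > threshold)

# Hypothetical local models learned by three clients on their private data.
local_models = [make_stump(0, 0.5), make_stump(0, 0.4), make_stump(1, 0.9)]

def global_predict(x):
    # Majority vote keeps the global decision traceable to the local rules,
    # so the aggregate stays interpretable.
    votes = Counter(m(x) for m in local_models)
    return votes.most_common(1)[0][0]

print(global_predict([0.6, 0.1]))  # local votes 1, 1, 0 -> 1
```

Unlike averaging the parameters of a black-box model, this kind of aggregation lets a user inspect exactly which local rules drove any given global prediction.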

In summary, the research in this thesis contributes to the field of XAI by introducing new methods to improve efficiency, analyzing existing methods to assess the reliability of XAI techniques, and proposing a solution for utilizing more intrinsically explainable models in the Federated Learning framework.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2024. p. 28
Series
Halmstad University Dissertations ; 124
Keywords
eXplainable AI, Shapley Values, Federated Learning
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-54994 (URN)
978-91-89587-65-6 (ISBN)
978-91-89587-64-9 (ISBN)
Presentation
2025-01-08, S3030, Kristian IV:s väg 3, Halmstad, 13:00 (English)
Opponent
Supervisors
Funder
Vinnova; Knowledge Foundation
Available from: 2024-12-04 Created: 2024-12-04 Last updated: 2024-12-04
Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Jamshidi, Samaneh; Nowaczyk, Sławomir; Fanaee Tork, Hadi; Rahat, Mahmoud
