Explainable Federated Learning by Incremental Decision Trees
Halmstad University, School of Information Technology. ORCID iD: 0000-0001-7055-2706
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-7796-5201
Halmstad University, School of Information Technology. ORCID iD: 0000-0003-2590-6661
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-1759-8593
2024 (English). In: Explainable AI for Time Series and Data Streams 2024: Proceedings of the Workshop on Explainable AI for Time Series and Data Streams / [ed] Zahraa Abdallah; Fabian Fumagalli; Barbara Hammer; Eyke Hüllermeier; Matthias Jakobs; Emmanuel Müller; Maximilian Muschalik; Panagiotis Papapetrou; Amal Saadallah; George Tzagkarakis, Aachen: Rheinisch-Westfaelische Technische Hochschule Aachen, 2024, Vol. 3761, p. 58-69. Conference paper, Published paper (Refereed)
Abstract [en]

Explainable Artificial Intelligence (XAI) is crucial in ensuring transparency, accountability, and trust in machine learning models, especially in applications involving high-stakes decision-making. This paper focuses on addressing the research gap in federated learning (FL), specifically emphasizing the use of inherently interpretable underlying models. While most FL frameworks rely on complex, black-box models such as Artificial Neural Networks (ANNs), we propose using Decision Tree (DT) classifiers to maintain explainability. More specifically, we introduce a novel framework for horizontal federated learning using Extremely Fast Decision Trees (EFDTs) with streaming data on the client side. Our approach involves aggregating clients' EFDTs on the server side without centralizing raw data, and the training process occurs on the clients' sides. We outline three aggregation strategies and demonstrate that our methods outperform local models and achieve performance levels close to centralized models while retaining inherent explainability. © 2024 CEUR-WS. All rights reserved.
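The pipeline described in the abstract (clients train incremental trees on their local streams; the server combines the trees without ever seeing raw data) can be sketched in miniature. The `IncrementalStump` model and the majority-vote aggregation below are illustrative stand-ins only, not the paper's EFDTs or its three aggregation strategies, which are not detailed in this record:

```python
from collections import Counter, defaultdict

class IncrementalStump:
    """Stand-in for an incremental decision tree (e.g. an EFDT): keeps
    class counts per value of a single feature and predicts the majority
    class observed for that value. Purely illustrative."""
    def __init__(self, feature):
        self.feature = feature
        self.counts = defaultdict(Counter)

    def learn_one(self, x, y):
        # Incremental update from one streamed instance; no data is stored.
        self.counts[x[self.feature]][y] += 1

    def predict_one(self, x):
        c = self.counts[x[self.feature]]
        return c.most_common(1)[0][0] if c else 0

def federated_vote(models, x):
    """Server-side aggregation by majority vote over the client models --
    one plausible aggregation strategy among many."""
    votes = Counter(m.predict_one(x) for m in models)
    return votes.most_common(1)[0][0]

# Each client trains on its own local stream; raw data never leaves it.
clients = [IncrementalStump("f") for _ in range(3)]
streams = [
    [({"f": 0}, 0), ({"f": 1}, 1)],
    [({"f": 0}, 0), ({"f": 1}, 1)],
    [({"f": 0}, 1), ({"f": 1}, 1)],  # a noisier client
]
for model, stream in zip(clients, streams):
    for x, y in stream:
        model.learn_one(x, y)

print(federated_vote(clients, {"f": 0}))  # majority of clients say 0
```

Only the fitted models reach the server, which mirrors the privacy argument in the abstract: aggregation happens over models, not over centralized raw data.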

Place, publisher, year, edition, pages
Aachen: Rheinisch-Westfaelische Technische Hochschule Aachen, 2024. Vol. 3761, p. 58-69
Series
CEUR Workshop Proceedings, ISSN 1613-0073 ; 3761
Keywords [en]
Data Stream, eXplainable AI (XAI), Extremely Fast Decision Tree, Federated Learning, Incremental Decision Tree
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-54760
Scopus ID: 2-s2.0-85204974127
OAI: oai:DiVA.org:hh-54760
DiVA, id: diva2:1908917
Conference
2024 Workshop on Explainable AI for Time Series and Data Streams, TempXAI 2024, Vilnius, Lithuania, 9 September, 2024
Available from: 2024-10-29. Created: 2024-10-29. Last updated: 2025-10-01. Bibliographically approved
In thesis
1. Towards better XAI: Improving Shapley Values and Federated Learning Interpretability
2024 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

The use of Artificial Intelligence (AI) in various areas has resulted in significant progress. However, it has also raised concerns about how transparent, understandable, and reliable AI models are. Explainable Artificial Intelligence (XAI) has become an important area of study because it makes AI systems easier for people to understand. XAI also aims to create trust, accountability, and transparency in AI systems, especially in vital areas such as healthcare, finance, and autonomous systems. Furthermore, XAI helps detect biases, improve model debugging, and enable the discovery of new insights from data.

With the increasing focus on the XAI field, we have developed a framework called Field's Evolution Graph (FEG) to track the evolution of research within the XAI field. This framework helps researchers identify key concepts and their interrelationships over time. Different approaches have been developed in XAI, and Shapley values are among the most well-known techniques. We further examine this method by evaluating its computational cost and analyzing various characteristic functions. We have introduced EcoShap, a computationally efficient method for calculating Shapley values to identify the most important features. By focusing calculations on a few of the most important features, EcoShap significantly reduces the computational cost, making it feasible to apply Shapley values to large datasets.
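EcoShap itself is not reproduced in this record, but the exact computation it accelerates can be sketched as a baseline. The snippet below computes exact Shapley values by enumerating all coalitions; the per-feature cost grows as 2^n, which is the bottleneck that motivates restricting the calculation to a few candidate features. The characteristic function `v` here is a toy additive stand-in, not one of the model-performance functions (accuracy, F1-score) the thesis analyzes:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, v):
    """Exact Shapley values for characteristic function v over a feature set.

    Enumerates every coalition S not containing feature i and weights the
    marginal contribution v(S ∪ {i}) - v(S) by |S|!(n-|S|-1)!/n!.
    The 2^(n-1) subsets per feature are why exact computation does not
    scale to large feature sets.
    """
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                total += weight * (v(set(S) | {i}) - v(set(S)))
        phi[i] = total
    return phi

# Toy additive characteristic function: each feature contributes a fixed
# amount to "performance", so the Shapley value of each feature equals
# its own contribution (a convenient sanity check).
contrib = {"a": 0.5, "b": 0.3, "c": 0.2}
v = lambda S: sum(contrib[f] for f in S)

print(shapley_values(list(contrib), v))
```

The efficiency property holds by construction: the Shapley values sum to v of the full feature set, which is a useful invariant to check regardless of which characteristic function is plugged in.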

The thesis is extended by analyzing, both theoretically and practically, the different characteristic functions used in Shapley value calculations. We examine how reliable feature importance rankings are and how the choice of characteristic function, such as the accuracy and F1-score metrics, affects those rankings.

Additionally, Federated Learning (FL) is a machine learning paradigm that trains global models across different clients while keeping data decentralized. As in other machine learning paradigms, XAI is needed in this context. To address this need, we have proposed a method that uses Incremental Decision Trees as an inherently interpretable model within the FL framework, offering an interpretable alternative to black-box models. The objective is to employ inherently explainable models instead of trying to explain black-box models. Three aggregation strategies have been presented to combine local models into a global model while maintaining interpretability and accuracy.

In summary, the research in this thesis contributes to the field of XAI by introducing new methods to improve efficiency, analyzing existing methods to assess the reliability of XAI techniques, and proposing a solution for utilizing more intrinsically explainable models in the Federated Learning framework.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2024. p. 28
Series
Halmstad University Dissertations ; 124
Keywords
eXplainable AI, Shapley Values, Federated Learning
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-54994 (URN)
978-91-89587-65-6 (ISBN)
978-91-89587-64-9 (ISBN)
Presentation
2025-01-08, S3030, Kristian IV:s väg 3, Halmstad, 13:00 (English)
Opponent
Supervisors
Funder
Vinnova; Knowledge Foundation
Available from: 2024-12-04. Created: 2024-12-04. Last updated: 2025-10-01. Bibliographically approved

Open Access in DiVA

No full text in DiVA


Authority records

Jamshidi, Parisa; Nowaczyk, Sławomir; Rahat, Mahmoud; Taghiyarrenani, Zahra
