2024 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]
The use of Artificial Intelligence (AI) in various areas has led to significant progress, but it has also raised concerns about how transparent, understandable, and reliable AI models are. Explainable Artificial Intelligence (XAI) has become an important area of study because it makes AI systems easier for people to understand. XAI also aims to create trust, accountability, and transparency in AI systems, especially in vital areas such as healthcare, finance, and autonomous systems. Furthermore, XAI helps detect biases, improve model debugging, and enable the discovery of new insights from data.
With the increasing focus on XAI, we have developed a framework called Field's Evolution Graph (FEG) to track the evolution of research within the field. This framework helps researchers identify key concepts and their interrelationships over time. Among the many approaches developed in XAI, Shapley values are one of the most well-known techniques. We examine this method further by evaluating its computational cost and analyzing various characteristic functions. We have introduced EcoShap, a computationally efficient method for calculating Shapley values to identify the most important features. By focusing calculations on a few of the most important features, EcoShap significantly reduces the computational cost, making it feasible to apply Shapley values to large datasets.
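For reference, the standard definition of the Shapley value of a feature i (the textbook formulation, not the EcoShap approximation itself) is the weighted average of that feature's marginal contributions over all subsets of the remaining features, which is what makes exact computation infeasible when the number of features grows:

\[
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!} \, \bigl( v(S \cup \{i\}) - v(S) \bigr),
\]

where N is the full feature set and v is the characteristic function that assigns a score to each feature subset.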
The thesis further analyzes, both theoretically and empirically, the different characteristic functions used in Shapley value calculations. We examine how reliable the resulting feature importance rankings are and how the choice of characteristic function, such as accuracy versus the F1-score, affects those rankings.
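As a minimal sketch of what a characteristic function can look like in practice (not the thesis's actual implementation; the helper name characteristic_value and the choice of logistic regression are assumptions for illustration), v(S) can be taken as the validation score of a model trained on only the features in S, so swapping the metric changes v and can therefore change the resulting ranking:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

def characteristic_value(X_train, y_train, X_val, y_val, subset, metric="accuracy"):
    """Illustrative characteristic function v(S): score of a model restricted to the features in `subset`."""
    if not subset:
        return 0.0  # empty coalition: no features, return a baseline score
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:, subset], y_train)
    preds = model.predict(X_val[:, subset])
    if metric == "accuracy":
        return accuracy_score(y_val, preds)
    return f1_score(y_val, preds, average="macro")
```

Plugging this v into the Shapley formula above with metric="accuracy" or metric="f1" gives two rankings that need not agree, which is exactly the reliability question studied in the thesis.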
Additionally, Federated Learning (FL) is a machine learning paradigm that trains a global model across multiple clients while keeping data decentralized. As in other machine learning settings, XAI is needed in this context. To address this need, we have proposed a method that uses Incremental Decision Trees within the FL framework, offering an inherently interpretable alternative to black-box models. The objective is to employ inherently explainable models rather than trying to explain black-box models after the fact. Three aggregation strategies are presented for combining the local models into a global model while maintaining both interpretability and accuracy.
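The three aggregation strategies themselves are described in the thesis; the sketch below only illustrates the overall setup under stated assumptions: each client trains an incremental decision tree locally (here a Hoeffding tree from the river library, a common incremental decision tree implementation), and a simple majority vote over the client trees stands in for the aggregation step, which is not one of the thesis's named strategies:

```python
from collections import Counter
from river import tree  # Hoeffding trees: incremental decision trees for streaming data

def train_local_model(stream):
    """Client-side training: the raw data stream never leaves the client."""
    model = tree.HoeffdingTreeClassifier()
    for x, y in stream:          # x: dict of feature -> value, y: label
        model.learn_one(x, y)
    return model

def global_predict(local_models, x):
    """Illustrative stand-in aggregation: majority vote over the clients' interpretable trees."""
    votes = Counter(m.predict_one(x) for m in local_models)
    return votes.most_common(1)[0][0]
```

Because each local model is a decision tree, both the per-client behavior and the combined prediction remain inspectable, which is the property the proposed aggregation strategies aim to preserve.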
In summary, this thesis contributes to the field of XAI by introducing new methods that improve efficiency, analyzing existing methods to assess their reliability, and proposing a way to use intrinsically explainable models within the Federated Learning framework.
Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2024. p. 28
Series
Halmstad University Dissertations ; 124
Keywords
eXplainable AI, Shapley Values, Federated Learning
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-54994 (URN)
978-91-89587-65-6 (ISBN)
978-91-89587-64-9 (ISBN)
Presentation
2025-01-08, S3030, Kristian IV:s väg 3, Halmstad, 13:00 (English)
Opponent
Supervisors
Funder
Vinnova; Knowledge Foundation
Available from: 2024-12-04 Created: 2024-12-04 Last updated: 2024-12-04 Bibliographically approved