Analysis of Characteristic Functions on Shapley Values in Machine Learning
Jamshidi, Parisa. Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR). ORCID iD: 0000-0001-7055-2706
Nowaczyk, Sławomir. Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR). ORCID iD: 0000-0002-7796-5201
Rahat, Mahmoud. Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR). ORCID iD: 0000-0003-2590-6661
2024 (English). In: 2024 International Conference on Intelligent Environments (IE), Piscataway, NJ: IEEE, 2024, p. 70-77. Conference paper, Published paper (Refereed).
Abstract [en]

In the rapidly evolving field of AI, Explainable Artificial Intelligence (XAI) has become paramount, particularly in Intelligent Environments applications. It offers clarity and understanding in complex decision-making processes, fostering trust and enabling rigorous scrutiny. The Shapley value, renowned for its principled quantification of feature importance, has emerged as a prevalent standard in both academic research and practical application. Nevertheless, the Shapley value's reliance on evaluating all possible coalitions poses a significant computational challenge, as exact computation is NP-hard. Consequently, approximation techniques are employed in most practical scenarios as a substitute for precise computation. The most common of these is SHAP (SHapley Additive exPlanations), which quantifies the influence a specific feature exerts on the decision outcomes of a specific Machine Learning model. However, the Shapley value's theoretical underpinnings concern the impact of features on model evaluation metrics, rather than merely changes in model outputs. This paper conducts a comparative analysis using controlled synthetic data with established ground truths. It juxtaposes the practical SHAP implementation with the theoretical formulation in two distinct scenarios: one using the F1-score and the other the accuracy metric. These are two representative characteristic functions that capture different aspects of model behavior and whose appropriateness depends on the specific requirements and context of the task to be solved. We analyze where these three alternatives agree and where they diverge in the feature effects they report. Ultimately, our research seeks to determine the conditions under which SHAP outcomes align more closely with either the F1-score or the accuracy metric, thereby providing valuable insights for their application in various Intelligent Environment contexts. © 2024 IEEE.
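For reference, the standard game-theoretic definition underlying this discussion (the notation here is the textbook one, not taken from the paper itself): the Shapley value of feature i over feature set N under a characteristic function v is

\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!} \Bigl( v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr)
\]

where v(S) assigns a payoff to each feature coalition S; in the setting studied here, v is a model evaluation metric such as accuracy or F1-score computed with only the features in S available. The sum ranges over all 2^{|N|-1} coalitions excluding i, which is the source of the computational intractability mentioned above.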

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2024. p. 70-77
Series
International Conference on Intelligent Environments, ISSN 2469-8792, E-ISSN 2472-7571
Keywords [en]
accuracy, F1-score, imbalanced data, Shapley values, XAI
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-54471
DOI: 10.1109/IE61493.2024.10599897
Scopus ID: 2-s2.0-85200723106
ISBN: 979-8-3503-8679-0 (print)
OAI: oai:DiVA.org:hh-54471
DiVA, id: diva2:1891277
Conference
20th International Conference on Intelligent Environments, IE 2024, Ljubljana, Slovenia, 17-20 June, 2024
Funder
Swedish Research Council, CHIST-ERA-19-XAI-012
Available from: 2024-08-21 Created: 2024-08-21 Last updated: 2025-10-01. Bibliographically approved.
In thesis
1. Towards better XAI: Improving Shapley Values and Federated Learning Interpretability
2024 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

The use of Artificial Intelligence (AI) in various areas has resulted in significant progress. However, it has also raised concerns about the transparency, understandability, and reliability of AI models. Explainable Artificial Intelligence (XAI) has become an important area of study because it makes AI systems easier for people to understand. XAI also aims to create trust, accountability, and transparency in AI systems, especially in vital areas such as healthcare, finance, and autonomous systems. Furthermore, XAI helps detect biases, improve model debugging, and enable the discovery of new insights from data.

With the increasing focus on the XAI field, we have developed a framework called Field's Evolution Graph (FEG) to track the evolution of research within XAI. This framework helps researchers identify key concepts and their interrelationships over time. Many approaches have been developed in XAI, and Shapley values are among the most well-known techniques. We examine this method further by evaluating its computational cost and analyzing various characteristic functions. We have introduced EcoShap, a computationally efficient method for calculating Shapley values to identify the most important features. By focusing calculations on a small number of the most promising features, EcoShap significantly reduces the computational cost, making it feasible to apply Shapley values to large datasets.
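The precise EcoShap procedure is given in the appended paper; purely as an illustration of the setting it operates in, the following minimal Python sketch estimates Shapley values by permutation sampling against a metric-based characteristic function. All names are hypothetical, and the uniform sampling shown here is exactly the cost that EcoShap avoids by concentrating evaluations on the most promising features.

# Minimal sketch, NOT the EcoShap algorithm: Monte Carlo estimation of
# Shapley values with a metric-based characteristic function. EcoShap's
# contribution is to spend this evaluation budget only on the most
# promising features; here the budget is spread uniformly for simplicity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=8, n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def v(S):
    """Characteristic function v(S): test accuracy of a model trained on feature subset S."""
    if not S:
        majority = np.bincount(y_tr).argmax()  # empty coalition: majority-class baseline
        return accuracy_score(y_te, np.full(len(y_te), majority))
    cols = sorted(S)
    model = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    return accuracy_score(y_te, model.predict(X_te[:, cols]))

def sampled_shapley(n_features, n_perms=30):
    """Average marginal contribution of each feature over random feature permutations."""
    phi = np.zeros(n_features)
    for _ in range(n_perms):
        S, v_prev = set(), v(set())
        for i in rng.permutation(n_features):
            S.add(i)
            v_curr = v(S)
            phi[i] += v_curr - v_prev
            v_prev = v_curr
    return phi / n_perms

print(sampled_shapley(X.shape[1]))  # one Shapley estimate per feature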

The thesis then analyzes, both theoretically and practically, different characteristic functions used in Shapley value calculations. We examine how reliable feature importance rankings are, and how the choice of characteristic function, such as the accuracy or F1-score metric, affects those rankings.
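As a concrete, hypothetical illustration of why this choice matters (not an experiment from the thesis): on imbalanced data, exact Shapley values computed under an accuracy-based characteristic function can rank features differently than under an F1-based one. A self-contained Python sketch:

# Hypothetical sketch: exact Shapley values over a tiny feature set, once
# with accuracy and once with F1 as the characteristic function, to show
# that the induced feature rankings need not coincide on imbalanced data.
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=4, n_informative=2,
                           weights=[0.9, 0.1], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

def v(S, metric):
    """Characteristic function: `metric` of a model trained on feature subset S."""
    if not S:
        pred = np.full(len(y_te), np.bincount(y_tr).argmax())
    else:
        cols = sorted(S)
        pred = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr).predict(X_te[:, cols])
    return metric(y_te, pred)

def exact_shapley(metric, n=4):
    """Exact Shapley values by full coalition enumeration (feasible only for small n)."""
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (v(set(S) | {i}, metric) - v(set(S), metric))
    return phi

for name, metric in (("accuracy", accuracy_score), ("F1", f1_score)):
    print(name, "ranking:", np.argsort(-exact_shapley(metric)))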

Additionally, Federated Learning (FL) is a machine learning paradigm that trains a global model across multiple clients while keeping data decentralized. As in other machine learning paradigms, XAI is needed in this context. To address this need, we have proposed using Incremental Decision Trees as inherently interpretable models within the FL framework, offering an interpretable alternative to black-box models. The objective is to employ inherently explainable models instead of trying to explain black-box models after the fact. Three aggregation strategies are presented for combining local models into a global model while maintaining interpretability and accuracy.
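A minimal sketch of this setup, under stated assumptions: the thesis's three aggregation strategies are not reproduced here; instead, a Hoeffding tree from the river library stands in for the incremental decision tree, and a simple majority vote over client trees stands in for aggregation. All function names are hypothetical.

# Illustrative only: incremental decision trees as inherently interpretable
# local models in a federated setting. Majority voting below is a generic
# stand-in, not one of the thesis's three aggregation strategies.
from collections import Counter
from river import tree

def client_update(model, stream):
    """Train one client's incremental tree on its local (x, y) stream."""
    for x, y in stream:
        model.learn_one(x, y)
    return model

def global_predict(models, x):
    """Aggregate client trees by majority vote over their predictions."""
    votes = Counter(m.predict_one(x) for m in models)
    return votes.most_common(1)[0][0]

# Two clients, each with a tiny synthetic local stream (feature dicts).
streams = [
    [({"f1": 0.1, "f2": 1.0}, 0), ({"f1": 0.9, "f2": 0.2}, 1)] * 100,
    [({"f1": 0.2, "f2": 0.8}, 0), ({"f1": 0.8, "f2": 0.1}, 1)] * 100,
]
clients = [client_update(tree.HoeffdingTreeClassifier(grace_period=50), s)
           for s in streams]
print(global_predict(clients, {"f1": 0.85, "f2": 0.15}))

Because each client model is a decision tree, both the local models and, depending on the aggregation strategy, the global behavior remain inspectable, which is the interpretability argument the thesis builds on.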

In summary, the research in this thesis contributes to the field of XAI by introducing new methods to improve efficiency, analyzing existing methods to assess the reliability of XAI techniques, and proposing a solution for utilizing more intrinsically explainable models in the Federated Learning framework.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2024. p. 28
Series
Halmstad University Dissertations ; 124
Keywords
eXplainable AI, Shapley Values, Federated Learning
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-54994 (URN)
978-91-89587-65-6 (ISBN)
978-91-89587-64-9 (ISBN)
Presentation
2025-01-08, S3030, Kristian IV:s väg 3, Halmstad, 13:00 (English)
Funder
Vinnova; Knowledge Foundation
Available from: 2024-12-04 Created: 2024-12-04 Last updated: 2025-10-01. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Jamshidi, Parisa; Nowaczyk, Sławomir; Rahat, Mahmoud
