Android Malware Classification Using Explainable AI
Halmstad University, School of Information Technology.
2025 (English). Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
Abstract [en]

Android malware is a growing cybersecurity concern, with attackers constantly developing new techniques to bypass traditional detection mechanisms. Artificial Intelligence (AI) and Machine Learning (ML)-based detection techniques have shown promising results in identifying malicious applications. However, their lack of interpretability poses a significant challenge. Many existing models function as "black boxes," making it difficult for security analysts to understand why an application is classified as malicious or benign. This lack of transparency limits trust, hinders forensic investigations, and reduces the effectiveness of cybersecurity defences. Our research explores the use of Explainable AI (XAI) techniques in Android malware classification, addressing the interpretability of these "black box" models. We employ the CICMalDroid 2020 dataset to evaluate a range of machine learning models, including Decision Tree, Random Forest, XGBoost, Multi-Layer Perceptron (MLP), LightGBM, and Support Vector Machines (SVM), for Android malware detection. To enhance the transparency and interpretability of these complex models, we apply post-hoc explainable AI techniques, specifically SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations), enabling detailed analysis of feature contributions to classification outcomes. By integrating these XAI techniques, our approach aims to improve detection accuracy and offer insight into which features contribute most to malware detection and classification of Android applications, thereby helping security analysts make informed decisions.
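The SHAP technique mentioned in the abstract attributes a model's prediction to individual input features via Shapley values from cooperative game theory. The thesis itself uses the SHAP and LIME libraries on trained classifiers; as an illustration of the underlying idea only, the sketch below computes exact Shapley values by brute force for a hypothetical toy malware-risk score over three binary features (the feature names and scoring function are invented for this example, not taken from the thesis):

```python
from itertools import combinations
from math import factorial

def toy_score(x):
    # Hypothetical risk score over 3 binary features
    # (say: SEND_SMS, READ_CONTACTS, INTERNET) -- illustration only,
    # including one interaction term between SMS and network access.
    sms, contacts, net = x
    return 2.0 * sms + 1.0 * contacts + 0.5 * net + 1.5 * sms * net

def shapley_values(f, x, baseline):
    """Exact Shapley values: average marginal contribution of each
    feature over all subsets, with absent features set to the baseline."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis

x = [1, 1, 1]           # sample with all three features present
baseline = [0, 0, 0]    # reference sample with no features
phis = shapley_values(toy_score, x, baseline)
# Efficiency property: contributions sum to f(x) - f(baseline)
assert abs(sum(phis) - (toy_score(x) - toy_score(baseline))) < 1e-9
```

Here the interaction term is split equally between the two participating features, so the SMS feature receives 2.0 + 0.75 and the network feature 0.5 + 0.75. Production SHAP implementations approximate or exploit model structure (e.g. TreeSHAP) instead of enumerating all subsets, which is exponential in the number of features.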

Place, publisher, year, edition, pages
2025, p. 60.
Keywords [en]
Malware classification; Malware detection; Malware analysis; Explainable AI; Machine learning; SHAP; LIME
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:hh:diva-56315
OAI: oai:DiVA.org:hh-56315
DiVA, id: diva2:1967375
Educational program
Master's Programme in Network Forensics, 60 credits
Available from: 2025-06-12. Created: 2025-06-11. Last updated: 2025-10-01. Bibliographically approved.

Open Access in DiVA

fulltext (2878 kB), 213 downloads
File information
File name: FULLTEXT02.pdf
File size: 2878 kB
Checksum (SHA-512):
6b33553700d043d16cf927d7cde7db5787983064ac7d6e915f0cb1149252dbd1e2638910eed890c39ab5850665e4fc22b6bea4bcfc09685a49cbdd533bfc55ce
Type: fulltext
Mimetype: application/pdf

Total: 215 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are now no longer available.

Total: 867 hits