Android Malware Classification Using Explainable AI
2025 (English) Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits
Student thesis
Abstract [en]
Android malware is a growing cybersecurity concern, with attackers constantly developing new techniques to bypass traditional detection mechanisms. Artificial Intelligence (AI) and Machine Learning (ML)-based detection techniques have shown promising results in identifying malicious applications. However, their lack of interpretability poses a significant challenge. Many existing models function as "black boxes," making it difficult for security analysts to understand why an application is classified as malicious or benign. This lack of transparency limits trust, hinders forensic investigations, and reduces the effectiveness of cybersecurity defences. Our research explores the use of Explainable AI (XAI) techniques in Android malware classification, addressing the lack of interpretability in these "black box" models. We employ the CICMalDroid 2020 dataset to evaluate a range of machine learning models, including Decision Tree, Random Forest, XGBoost, Multi-Layer Perceptron (MLP), LightGBM, and Support Vector Machines (SVM), for Android malware detection. To enhance the transparency and interpretability of these complex models, we apply post-hoc explainable AI techniques, specifically SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), enabling detailed analysis of feature contributions to classification outcomes. By integrating these XAI techniques, our approach aims to improve detection accuracy and offer insight into which features contribute most to malware detection and classification of Android applications, thereby helping security analysts make informed decisions.
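The SHAP attributions the abstract refers to are grounded in the game-theoretic Shapley value: each feature's contribution is its average marginal effect on the model output over all feature subsets. The following minimal, self-contained sketch computes exact Shapley values for one prediction of a Random Forest classifier. The feature names and synthetic data are illustrative stand-ins (the thesis uses the CICMalDroid 2020 feature set), and mean imputation is one possible convention for "removing" a feature; the `shap` library's `TreeExplainer` would normally be used in practice.

```python
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for tabular Android-app features
# (illustrative names only, not the real CICMalDroid 2020 columns).
rng = np.random.default_rng(0)
features = ["api_calls", "perm_count", "net_flows", "intent_actions"]
X = rng.normal(size=(500, len(features)))
# The label depends mostly on the first two features, so they
# should receive the largest attributions.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
baseline = X.mean(axis=0)  # "absent" features are replaced by their mean


def value(x, subset):
    """Model output with only `subset` features taken from x, rest from baseline."""
    z = baseline.copy()
    z[list(subset)] = x[list(subset)]
    return model.predict_proba(z.reshape(1, -1))[0, 1]


def shapley(x):
    """Exact Shapley values for one sample (tractable only for few features)."""
    m = len(x)
    phi = np.zeros(m)
    for i in range(m):
        others = [j for j in range(m) if j != i]
        for k in range(m):
            for S in combinations(others, k):
                # Weight = |S|! (m - |S| - 1)! / m!
                w = factorial(k) * factorial(m - k - 1) / factorial(m)
                phi[i] += w * (value(x, S + (i,)) - value(x, S))
    return phi


phi = shapley(X[0])
print(dict(zip(features, phi.round(3))))
```

By the efficiency property, the attributions sum exactly to the model's output for the sample minus the baseline output, which is what makes per-feature explanations of a single classification decision additive and auditable. Libraries such as `shap` approximate this computation efficiently for models with many features.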
Place, publisher, year, edition, pages
2025, p. 60
Keywords [en]
Malware classification; Malware detection; Malware analysis; Explainable AI; Machine learning; SHAP; LIME
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:hh:diva-56315
OAI: oai:DiVA.org:hh-56315
DiVA, id: diva2:1967375
Educational program
Master's Programme in Network Forensics, 60 credits
Supervisors
Examiners
Available from: 2025-06-12 Created: 2025-06-11 Last updated: 2025-10-01 Bibliographically approved