Publications (10 of 59)
Altarabichi, M. G., Nowaczyk, S., Pashami, S., Sheikholharam Mashhadi, P. & Handl, J. (2024). A Review of Randomness Techniques in Deep Neural Networks. In: GECCO ’24 Companion, July 14–18, 2024, Melbourne, VIC, Australia. Paper presented at 2024 Genetic and Evolutionary Computation Conference Companion, GECCO 2024 Companion, Melbourne, VIC, Australia, 14-18 July, 2024 (pp. 23-24). New York, NY: Association for Computing Machinery (ACM)
A Review of Randomness Techniques in Deep Neural Networks
2024 (English) In: GECCO ’24 Companion, July 14–18, 2024, Melbourne, VIC, Australia, New York, NY: Association for Computing Machinery (ACM), 2024, p. 23-24. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates the effects of various randomization techniques on the learning performance of Deep Neural Networks (DNNs). We categorize the existing randomness techniques into four key types: injection of noise/randomness at the data, model structure, optimization, or learning stage. We use this classification to identify gaps in the current coverage of potential mechanisms for the introduction of randomness, leading us to propose two new techniques: adding noise to the loss function and random masking of the gradient updates. We use a Particle Swarm Optimizer (PSO) for hyperparameter optimization and evaluate over 30,000 configurations across standard computer vision benchmarks. Our study reveals that data augmentation and weight initialization randomness significantly improve performance, and that different optimizers prefer distinct randomization types. The complete implementation and dataset are available on GitHub. This paper for the Hot-off-the-Press track at GECCO 2024 summarizes the original work published in [2]. © 2024 Copyright held by the owner/author(s).

[2] Mohammed Ghaith Altarabichi, Sławomir Nowaczyk, Sepideh Pashami, Peyman Sheikholharam Mashhadi, and Julia Handl. 2024. Rolling the dice for better deep learning performance: A study of randomness techniques in deep neural networks. Information Sciences 667 (2024), 120500.
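The two newly proposed mechanisms, noise added to the loss value and random masking of the gradient updates, can be sketched in a few lines of NumPy. The function names, learning rate, masking rate, and noise level below are illustrative choices, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_gradient_step(weights, grads, lr=0.1, mask_rate=0.5):
    """One SGD step where a random subset of gradient entries is zeroed.

    `mask_rate` is the fraction of gradient entries dropped per step
    (an illustrative hyperparameter, not a value from the paper).
    """
    mask = rng.random(grads.shape) >= mask_rate  # keep ~(1 - mask_rate) of entries
    return weights - lr * grads * mask

def noisy_loss(y_true, y_pred, noise_std=0.01):
    """Mean squared error with additive Gaussian noise on the loss value."""
    return np.mean((y_true - y_pred) ** 2) + rng.normal(0.0, noise_std)
```

In a real training loop, the noise scale and masking rate would themselves be hyperparameters of the kind the PSO searches over.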

Place, publisher, year, edition, pages
New York, NY: Association for Computing Machinery (ACM), 2024
Keywords
convolutional neural network, deep neural network, hyperparameter, particle swarm optimization, randomized neural networks
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-54562 (URN)10.1145/3638530.3664077 (DOI)2-s2.0-85201929793 (Scopus ID)9798400704956 (ISBN)
Conference
2024 Genetic and Evolutionary Computation Conference Companion, GECCO 2024 Companion, Melbourne, VIC, Australia, 14-18 July, 2024
Available from: 2024-09-05 Created: 2024-09-05 Last updated: 2024-09-05. Bibliographically approved
Fan, Y., Nowaczyk, S., Wang, Z. & Pashami, S. (2024). Evaluating Multi-task Curriculum Learning for Forecasting Energy Consumption in Electric Heavy-duty Vehicles. In: Nowaczyk S.; Spiliopoulou M.; Ragni M.; Fink O. (Ed.), Proceedings of Workshop on Embracing Human-Aware AI in Industry 5.0 (HAII5.0 2024). Paper presented at 2024 Workshop on Embracing Human-Aware AI in Industry 5.0, HAII5.0 2024, Santiago de Compostela, Spain, 19 October, 2024. Aachen: Rheinisch-Westfaelische Technische Hochschule Aachen, 3765
Evaluating Multi-task Curriculum Learning for Forecasting Energy Consumption in Electric Heavy-duty Vehicles
2024 (English) In: Proceedings of Workshop on Embracing Human-Aware AI in Industry 5.0 (HAII5.0 2024) / [ed] Nowaczyk S.; Spiliopoulou M.; Ragni M.; Fink O., Aachen: Rheinisch-Westfaelische Technische Hochschule Aachen, 2024, Vol. 3765. Conference paper, Published paper (Refereed)
Abstract [en]

Accurate energy consumption prediction is crucial for optimising the operation of electric commercial heavy-duty vehicles, particularly for efficient route planning, refining charging strategies, and ensuring optimal truck configuration for specific tasks. This study investigates the application of multi-task curriculum learning to enhance machine learning models for forecasting the energy consumption of various onboard systems in electric vehicles. Multi-task learning, unlike traditional training approaches, leverages auxiliary tasks to provide additional training signals, which has been shown to enhance predictive performance in many domains. By further incorporating curriculum learning, where simpler tasks are learned before progressing to more complex ones, neural network training becomes more efficient and effective. We evaluate the suitability of these methodologies in the context of electric vehicle energy forecasting, examining whether the combination of multi-task learning and curriculum learning enhances algorithm generalisation, even with limited training data. We primarily focus on understanding the efficacy of different curriculum learning strategies, including sequential learning and progressive continual learning, using complex, real-world industrial data. Our research further explores a set of auxiliary tasks designed to facilitate the learning process by targeting key consumption characteristics projected into future time frames. The findings illustrate the potential of multi-task curriculum learning to advance energy consumption forecasting, significantly contributing to the optimisation of electric heavy-duty vehicle operations. This work offers a novel perspective on integrating advanced machine learning techniques to enhance energy efficiency in the exciting field of electromobility. © 2024 Copyright for this paper by its authors.
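The sequential-curriculum idea described above, learning simpler auxiliary tasks before the main forecasting target, can be sketched as a training schedule. The task names, difficulty scores, and `model_step` callback are illustrative assumptions, not details from the paper:

```python
# A minimal sketch of a sequential multi-task curriculum: auxiliary
# (simpler) tasks are trained first, then the full forecasting target.

def curriculum_schedule(tasks):
    """Order tasks by a supplied difficulty score (easiest first)."""
    return sorted(tasks, key=lambda t: t["difficulty"])

def train_with_curriculum(model_step, tasks, epochs_per_stage=2):
    """Call `model_step(task_name)` repeatedly, one stage per task."""
    log = []
    for task in curriculum_schedule(tasks):
        for _ in range(epochs_per_stage):
            model_step(task["name"])
            log.append(task["name"])
    return log

tasks = [
    {"name": "total_energy", "difficulty": 3},      # main forecasting target
    {"name": "mean_consumption", "difficulty": 1},  # auxiliary: coarse statistic
    {"name": "peak_consumption", "difficulty": 2},  # auxiliary: harder statistic
]
```

Progressive continual learning would differ mainly in keeping earlier tasks in the training mix rather than dropping them at each stage transition.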

Place, publisher, year, edition, pages
Aachen: Rheinisch-Westfaelische Technische Hochschule Aachen, 2024
Series
CEUR Workshop Proceedings, ISSN 1613-0073 ; 3765
Keywords
Curriculum Learning, Electric Vehicles, Energy Consumption Forecasting, Multi-task Learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-54807 (URN)2-s2.0-85206261149 (Scopus ID)
Conference
2024 Workshop on Embracing Human-Aware AI in Industry 5.0, HAII5.0 2024, Santiago de Compostela, Spain, 19 October, 2024
Note

12 pages

Available from: 2024-11-06 Created: 2024-11-06 Last updated: 2024-11-06. Bibliographically approved
Altarabichi, M. G., Alabdallah, A., Pashami, S., Rögnvaldsson, T., Nowaczyk, S. & Ohlsson, M. (2024). Improving Concordance Index in Regression-based Survival Analysis: Discovery of Loss Function for Neural Networks. In: GECCO '24 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion. Paper presented at The Genetic and Evolutionary Computation Conference, Melbourne, Australia, July 14-18, 2024 (pp. 1863-1869). New York: Association for Computing Machinery (ACM)
Improving Concordance Index in Regression-based Survival Analysis: Discovery of Loss Function for Neural Networks
2024 (English) In: GECCO '24 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion, New York: Association for Computing Machinery (ACM), 2024, p. 1863-1869. Conference paper, Published paper (Other academic)
Abstract [en]

In this work, we use an Evolutionary Algorithm (EA) to discover a novel Neural Network (NN) regression-based survival loss function with the aim of improving C-index performance. Our contribution is threefold. First, we propose an evolutionary meta-learning algorithm, SAGA_loss, for optimizing a neural-network regression-based loss function that maximizes the C-index; our algorithm consistently discovers specialized loss functions that outperform MSCE. Second, based on our analysis of the evolutionary search results, we highlight a non-intuitive insight: a non-zero gradient for the censored-cases part of the loss function is important, a property shown to be useful in improving concordance. Finally, based on this insight, we propose MSCE_Sp, a novel survival regression loss function that can be used off-the-shelf and generally performs better than the Mean Squared Error for censored cases. We performed extensive experiments on 19 benchmark datasets to validate our findings. © 2024 Copyright is held by the owner/author(s).
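The censored-gradient insight can be illustrated with a toy loss. In plain MSCE, a censored case contributes zero loss (and zero gradient) once the prediction exceeds the censoring time; the variant below keeps a scaled-down but non-zero residual there. This is a stand-in for the idea behind MSCE_Sp, not the published formula, and `eps` is an illustrative parameter:

```python
import numpy as np

def msce(pred, time, event):
    """MSE for censored data: censored cases (event == 0) are penalized
    only when the prediction falls below the censoring time, so their
    gradient vanishes once pred > time."""
    resid = np.where(event == 1, pred - time, np.minimum(pred - time, 0.0))
    return np.mean(resid ** 2)

def msce_sp_like(pred, time, event, eps=0.1):
    """Illustrative variant: the censored residual keeps a scaled-down but
    non-zero value when pred > time, so its gradient never vanishes."""
    censored = np.where(pred < time, pred - time, eps * (pred - time))
    resid = np.where(event == 1, pred - time, censored)
    return np.mean(resid ** 2)
```

For observed events the two losses coincide; they differ only in how censored cases beyond their censoring time keep contributing a training signal.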

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2024
Keywords
evolutionary meta-learning, loss function, neural networks, survival analysis, regression
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-52468 (URN)10.1145/3638530.3664129 (DOI)2-s2.0-85200800944 (Scopus ID)979-8-4007-0495-6 (ISBN)
Conference
The Genetic and Evolutionary Computation Conference, Melbourne, Australia, July 14-18, 2024
Note

As manuscript in thesis

Available from: 2024-01-24 Created: 2024-01-24 Last updated: 2025-01-09. Bibliographically approved
Fan, Y., Altarabichi, M. G., Pashami, S., Sheikholharam Mashhadi, P. & Nowaczyk, S. (2024). Invariant Feature Selection for Battery State of Health Estimation in Heterogeneous Hybrid Electric Bus Fleets. In: Nowaczyk S.; Spiliopoulou M.; Ragni M.; Fink O. (Ed.), Proceedings of Workshop on Embracing Human-Aware AI in Industry 5.0 (HAII5.0 2024). Paper presented at 2024 Workshop on Embracing Human-Aware AI in Industry 5.0, HAII5.0 2024, Santiago de Compostela, Spain, 19 October, 2024. Aachen: Rheinisch-Westfaelische Technische Hochschule Aachen, 3765
Invariant Feature Selection for Battery State of Health Estimation in Heterogeneous Hybrid Electric Bus Fleets
2024 (English) In: Proceedings of Workshop on Embracing Human-Aware AI in Industry 5.0 (HAII5.0 2024) / [ed] Nowaczyk S.; Spiliopoulou M.; Ragni M.; Fink O., Aachen: Rheinisch-Westfaelische Technische Hochschule Aachen, 2024, Vol. 3765. Conference paper, Published paper (Refereed)
Abstract [en]

Batteries are a safety-critical and the most expensive component of electric buses (EBs). Monitoring their condition, or state of health (SoH), is crucial for ensuring reliable EB operation. However, EBs come in many models and variants, including different mechanical configurations, and are deployed to operate under various conditions. Developing new degradation models for each combination of settings and faults quickly becomes challenging due to the unavailability of data for novel conditions and the limited evidence from less common vehicle populations. Therefore, building machine learning models that can generalize to new and unseen settings becomes a vital challenge for practical deployment. This study aims to develop and evaluate feature selection methods for robust machine learning models that estimate the SoH of batteries across various settings of EB configuration and usage. Building on our previous work, we propose two approaches, a genetic algorithm for domain-invariant features (GADIF) and causal discovery for selecting invariant features (CDIF). Both aim to select features that are invariant across multiple domains. While GADIF utilizes a specific fitness function encompassing both task performance and domain shift, CDIF identifies pairwise causal relations between features and selects the common causes of the target variable across domains. Experimental results confirm that selecting only invariant features leads to better generalization of machine learning models to unseen domains. The contribution of this work comprises the two novel invariant feature selection methods, their evaluation on real-world EB data, and a comparison against state-of-the-art invariant feature selection methods. Moreover, we analyze how the selected features vary under different settings. © 2024 Copyright for this paper by its authors.
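The notion of an invariant feature can be illustrated with a deliberately simple selection rule: keep features whose feature-target correlation is stable across domains. This is a crude stand-in for the invariance criterion, GADIF and CDIF as described above are considerably more sophisticated, and the function name and threshold are illustrative:

```python
import numpy as np

def invariant_features(domains, stability_threshold=0.15):
    """Select features whose feature-target correlation varies little
    across domains. `domains` is a list of (X, y) pairs, with X of shape
    (n_samples, n_features). Returns the indices of stable features."""
    corrs = []
    for X, y in domains:
        c = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
        corrs.append(c)
    corrs = np.vstack(corrs)
    spread = corrs.max(axis=0) - corrs.min(axis=0)  # cross-domain variation
    return np.where(spread <= stability_threshold)[0]
```

A feature whose relationship to SoH flips sign between bus fleets would be rejected by such a rule, which is the behavior the invariance criterion is meant to enforce.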

Place, publisher, year, edition, pages
Aachen: Rheinisch-Westfaelische Technische Hochschule Aachen, 2024
Series
CEUR Workshop Proceedings, ISSN 1613-0073 ; 3765
Keywords
Causal Discovery, Genetic Algorithm, Invariant Feature Selection, State of Health Estimation, Transfer Learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-54808 (URN)2-s2.0-85206258591 (Scopus ID)
Conference
2024 Workshop on Embracing Human-Aware AI in Industry 5.0, HAII5.0 2024, Santiago de Compostela, Spain, 19 October, 2024
Note

19 pages

Available from: 2024-11-06 Created: 2024-11-06 Last updated: 2024-11-06. Bibliographically approved
Karlsson, A., Wang, T., Nowaczyk, S., Pashami, S. & Asadi, S. (2024). Mind the Data, Measuring the Performance Gap Between Tree Ensembles and Deep Learning on Tabular Data. In: Ioanna Miliou; Nico Piatkowski; Panagiotis Papapetrou (Ed.), Advances in Intelligent Data Analysis XXII: Proceedings, Part I. Paper presented at 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Stockholm, Sweden, April 24–26, 2024 (pp. 65-76). Heidelberg: Springer Berlin/Heidelberg, 14641
Mind the Data, Measuring the Performance Gap Between Tree Ensembles and Deep Learning on Tabular Data
2024 (English) In: Advances in Intelligent Data Analysis XXII: Proceedings, Part I / [ed] Ioanna Miliou; Nico Piatkowski; Panagiotis Papapetrou, Heidelberg: Springer Berlin/Heidelberg, 2024, Vol. 14641, p. 65-76. Conference paper, Published paper (Refereed)
Abstract [en]

Recent machine learning studies on tabular data show that ensembles of decision tree models are more efficient and performant than deep learning models such as Tabular Transformers. However, as we demonstrate, these studies are limited in scope and do not paint the full picture. In this work, we focus on how two dataset properties, namely dataset size and feature complexity, affect the empirical performance comparison between tree ensembles and Tabular Transformer models. Specifically, we employ a hypothesis-driven approach and identify situations where Tabular Transformer models are expected to outperform tree ensemble models. Through empirical evaluation, we demonstrate that given large enough datasets, deep learning models perform better than tree models. The gap becomes more pronounced when complex feature interactions exist in the given task and dataset, suggesting that one must pay careful attention to dataset properties when selecting a model for tabular data in machine learning – especially in an industrial setting, where larger and larger datasets with less and less carefully engineered features are becoming routinely available. © The Author(s)

Place, publisher, year, edition, pages
Heidelberg: Springer Berlin/Heidelberg, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14641
Keywords
Gradient boosting, Tabular data, Tabular Transformers
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hh:diva-53352 (URN)10.1007/978-3-031-58547-0_6 (DOI)2-s2.0-85192227414 (Scopus ID)9783031585463 (ISBN)
Conference
22nd International Symposium on Intelligent Data Analysis, IDA 2024, Stockholm, Sweden, April 24–26, 2024
Available from: 2024-06-05 Created: 2024-06-05 Last updated: 2024-06-05. Bibliographically approved
Altarabichi, M. G., Nowaczyk, S., Pashami, S., Sheikholharam Mashhadi, P. & Handl, J. (2024). Rolling The Dice For Better Deep Learning Performance: A Study Of Randomness Techniques In Deep Neural Networks. Information Sciences, 667, 1-17, Article ID 120500.
Rolling The Dice For Better Deep Learning Performance: A Study Of Randomness Techniques In Deep Neural Networks
2024 (English) In: Information Sciences, ISSN 0020-0255, E-ISSN 1872-6291, Vol. 667, p. 1-17, article id 120500. Article in journal (Refereed), Published
Abstract [en]

This paper presents a comprehensive empirical investigation into the interactions between various randomness techniques in Deep Neural Networks (DNNs) and how they contribute to network performance. It is well-established that injecting randomness into the training process of DNNs, through various approaches at different stages, is often beneficial for reducing overfitting and improving generalization. However, the interactions between randomness techniques such as weight noise, dropout, and many others remain poorly understood. Consequently, it is challenging to determine which methods can be effectively combined to optimize DNN performance. To address this issue, we categorize the existing randomness techniques into four key types: data, model, optimization, and learning. We use this classification to identify gaps in the current coverage of potential mechanisms for the introduction of noise, leading to proposing two new techniques: adding noise to the loss function and random masking of the gradient updates.

In our empirical study, we employ a Particle Swarm Optimizer (PSO) to explore the space of possible configurations to answer where and how much randomness should be injected to maximize DNN performance. We assess the impact of various types and levels of randomness for DNN architectures applied to standard computer vision benchmarks: MNIST, FASHION-MNIST, CIFAR10, and CIFAR100. Across more than 30,000 evaluated configurations, we perform a detailed examination of the interactions between randomness techniques and their combined impact on DNN performance. Our findings reveal that randomness in data augmentation and in weight initialization are the main contributors to performance improvement. Additionally, correlation analysis demonstrates that different optimizers, such as Adam and Gradient Descent with Momentum, prefer distinct types of randomization during the training process. A GitHub repository with the complete implementation and generated dataset is available. © 2024 The Author(s)

Place, publisher, year, edition, pages
Philadelphia, PA: Elsevier, 2024
Keywords
Neural Networks, Randomized Neural Networks, Convolutional Neural Network, hyperparameter optimization, Particle swarm optimization
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-52467 (URN)10.1016/j.ins.2024.120500 (DOI)001224296500001 ()2-s2.0-85188777216 (Scopus ID)
Available from: 2024-01-24 Created: 2024-01-24 Last updated: 2024-06-11. Bibliographically approved
Alabdallah, A., Ohlsson, M., Pashami, S. & Rögnvaldsson, T. (2024). The Concordance Index Decomposition: A Measure for a Deeper Understanding of Survival Prediction Models. Artificial Intelligence in Medicine, 148, 1-10, Article ID 102781.
The Concordance Index Decomposition: A Measure for a Deeper Understanding of Survival Prediction Models
2024 (English) In: Artificial Intelligence in Medicine, ISSN 0933-3657, E-ISSN 1873-2860, Vol. 148, p. 1-10, article id 102781. Article in journal (Refereed), Published
Abstract [en]

The Concordance Index (C-index) is a commonly used metric in Survival Analysis for evaluating the performance of a prediction model. This paper proposes a decomposition of the C-index into a weighted harmonic mean of two quantities: one for ranking observed events versus other observed events, and the other for ranking observed events versus censored cases. This decomposition enables a more fine-grained analysis of the strengths and weaknesses of survival prediction methods. The usefulness of this decomposition is demonstrated through benchmark comparisons against state-of-the-art and classical models, together with a new variational generative neural-network-based method (SurVED), which is also proposed in this paper. Performance is assessed using four publicly available datasets with varying levels of censoring. The analysis using the C-index decomposition and synthetic censoring shows that deep learning models utilize the observed events more effectively than other models, allowing them to keep a stable C-index in different censoring levels. In contrast, classical machine learning models deteriorate when the censoring level decreases due to their inability to improve on ranking the events versus other events. © 2024 The Author(s)
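The decomposition described above can be illustrated by splitting the pairwise concordance count into event-event and event-censored pairs. This naive O(n²) implementation and its variable names are illustrative; higher `pred` is taken to mean longer predicted survival:

```python
def cindex_parts(pred, time, event):
    """Split concordant-pair counting into event-event (ee) and
    event-censored (ec) pairs, mirroring the C-index decomposition."""
    ee_conc = ee_tot = ec_conc = ec_tot = 0
    n = len(time)
    for i in range(n):
        if event[i] != 1:
            continue  # comparable pairs are anchored at observed events
        for j in range(n):
            if i == j or time[j] <= time[i]:
                continue  # j must outlive i to form a comparable pair
            if event[j] == 1:
                ee_tot += 1
                ee_conc += pred[j] > pred[i]
            else:
                ec_tot += 1
                ec_conc += pred[j] > pred[i]
    c_ee = ee_conc / ee_tot if ee_tot else float("nan")
    c_ec = ec_conc / ec_tot if ec_tot else float("nan")
    c_all = (ee_conc + ec_conc) / (ee_tot + ec_tot)
    return c_ee, c_ec, c_all
```

Note that the pooled `c_all` equals a weighted harmonic mean of `c_ee` and `c_ec` when the weights are taken as the concordant-pair counts, which is the sense in which the overall C-index decomposes into the two parts.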

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2024
Keywords
Survival Analysis, Evaluation Metric, Concordance Index, Variational Encoder-Decoder
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-52259 (URN)10.1016/j.artmed.2024.102781 (DOI)001171816900001 ()38325926 (PubMedID)2-s2.0-85184733529 (Scopus ID)
Funder
Knowledge Foundation, 20200001
Note

As manuscript in thesis

Available from: 2023-12-18 Created: 2023-12-18 Last updated: 2025-01-09. Bibliographically approved
Bobek, S., Nowaczyk, S., Pashami, S., Taghiyarrenani, Z. & Nalepa, G. J. (2024). Towards Explainable Deep Domain Adaptation. In: Sławomir Nowaczyk et al. (Ed.), Artificial Intelligence. ECAI 2023 International Workshops: XAI^3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, Kraków, Poland, September 30 – October 4, 2023, Proceedings, Part I. Paper presented at European Conference on Artificial Intelligence (ECAI 2023), Kraków, Poland, September 30 - October 4, 2023 (pp. 101-113). Cham: Springer, 1947
Towards Explainable Deep Domain Adaptation
2024 (English) In: Artificial Intelligence. ECAI 2023 International Workshops: XAI^3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, Kraków, Poland, September 30 – October 4, 2023, Proceedings, Part I / [ed] Sławomir Nowaczyk et al., Cham: Springer, 2024, Vol. 1947, p. 101-113. Conference paper, Published paper (Refereed)
Abstract [en]

In many practical applications, the data used for training a machine learning model and the deployment data do not follow the same distribution. Transfer learning and, in particular, domain adaptation make it possible to overcome this issue by adapting the source model to a new target data distribution, thereby generalizing knowledge from the source to the target domain. In this work, we present a method that makes the adaptation process more transparent by providing two complementary explanation mechanisms. The first mechanism explains how the source and target distributions are aligned in the latent space of the domain adaptation model. The second provides descriptive explanations of how the decision boundary changes in the adapted model with respect to the source model. Along with a description of the method, we also provide initial results obtained on a publicly available, real-life dataset. © The Author(s) 2024.

Place, publisher, year, edition, pages
Cham: Springer, 2024
Series
Communications in Computer and Information Science (CCIS), ISSN 1865-0929, E-ISSN 1865-0937 ; 1947
Keywords
Explainable AI (XAI), Domain adaptation, artificial intelligence
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hh:diva-52508 (URN)10.1007/978-3-031-50396-2_6 (DOI)2-s2.0-85184123743 (Scopus ID)978-3-031-50395-5 (ISBN)978-3-031-50396-2 (ISBN)
Conference
European Conference on Artificial Intelligence (ECAI 2023), Kraków, Poland, September 30 - October 4, 2023
Funder
Swedish Research Council, CHIST-ERA19-XAI-012
Note

Funding: This work was supported by the XPM project, funded by the National Science Centre, Poland, under the CHIST-ERA programme (NCN UMO2020/02/Y/ST6/00070), by the Swedish Research Council under grant CHIST-ERA19-XAI-012, and by a grant from the Priority Research Area (DigiWorld) under the Strategic Programme Excellence Initiative at Jagiellonian University.

Available from: 2024-01-31 Created: 2024-01-31 Last updated: 2024-03-20. Bibliographically approved
Alabdallah, A., Jakubowski, J., Pashami, S., Bobek, S., Ohlsson, M., Rögnvaldsson, T. & Nalepa, G. J. (2024). Understanding Survival Models through Counterfactual Explanations. In: Elisa Bertino; Wen Gao; Bernhard Steffen; Moti Yung (Ed.), Computational Science – ICCS 2024: 24th International Conference, Malaga, Spain, July 2–4, 2024, Proceedings, Part IV. Paper presented at 24th International Conference, Malaga, Spain, July 2–4, 2024 (pp. 310-324). Cham: Springer Nature
Understanding Survival Models through Counterfactual Explanations
2024 (English) In: Computational Science – ICCS 2024: 24th International Conference, Malaga, Spain, July 2–4, 2024, Proceedings, Part IV / [ed] Elisa Bertino; Wen Gao; Bernhard Steffen; Moti Yung, Cham: Springer Nature, 2024, p. 310-324. Conference paper, Published paper (Other academic)
Abstract [en]

The development of black-box survival models has created a need for methods that explain their outputs, just as in the case of traditional machine learning methods. Survival models usually predict functions rather than point estimates. This special nature of their output makes it more difficult to explain their operation. We propose a method to generate plausible counterfactual explanations for survival models. The method supports two options that handle the special nature of survival models' output. One option relies on Survival Scores, based on the area under the survival function, and is more suitable for proportional hazard models. The other relies on Survival Patterns in the predictions of the survival model, which represent groups that are significantly different from the survival perspective. This guarantees an intuitive, well-defined change from one risk group (Survival Pattern) to another and can handle more realistic cases where the proportional hazard assumption does not hold. The method uses a Particle Swarm Optimization algorithm to optimize a loss function with four objectives: the desired change in the target, proximity to the explained example, likelihood, and the actionability of the counterfactual example. Two predictive maintenance datasets and one medical dataset are used to illustrate the results in different settings. The results show that our method produces plausible counterfactuals, which increase the understanding of black-box survival models. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
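A loss combining the four named objectives can be sketched as a weighted sum that a PSO would minimize. The concrete terms, function names (`target_fn`, `density_fn`), and unit weights below are illustrative stand-ins, not the published formulation:

```python
import numpy as np

def counterfactual_loss(x_cf, x_orig, target_fn, density_fn, immutable_idx,
                        weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four objectives named in the abstract:
    target change, proximity, likelihood (plausibility), actionability."""
    w1, w2, w3, w4 = weights
    target_term = target_fn(x_cf)                     # 0 once the desired target is reached
    proximity = np.linalg.norm(x_cf - x_orig, ord=1)  # stay close to the explained example
    likelihood = -density_fn(x_cf)                    # prefer plausible (high-density) points
    actionability = np.sum(np.abs(x_cf[immutable_idx] - x_orig[immutable_idx]))
    return w1 * target_term + w2 * proximity + w3 * likelihood + w4 * actionability
```

Each PSO particle would hold a candidate `x_cf`, and the swarm would move toward candidates that flip the risk group while staying close, plausible, and actionable.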

Place, publisher, year, edition, pages
Cham: Springer Nature, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14835
Keywords
Survival Analysis, Explainable Artificial Intelligence, Survival Patterns, Counterfactual Explanations
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-52260 (URN)001279326500028 ()2-s2.0-85199557114 (Scopus ID)978-3-031-63771-1 (ISBN)
Conference
24th International Conference, Malaga, Spain, July 2–4, 2024
Funder
Knowledge Foundation, 20200001
Note

As manuscript in thesis

Available from: 2023-12-18 Created: 2023-12-18 Last updated: 2025-01-09. Bibliographically approved
Rajabi, E., Nowaczyk, S., Pashami, S., Bergquist, M., Ebby, G. S. & Wajid, S. (2023). A Knowledge-Based AI Framework for Mobility as a Service. Sustainability, 15(3), Article ID 2717.
A Knowledge-Based AI Framework for Mobility as a Service
2023 (English) In: Sustainability, E-ISSN 2071-1050, Vol. 15, no 3, article id 2717. Article in journal (Refereed), Published
Abstract [en]

Mobility as a Service (MaaS) combines various modes of transportation to present mobility services to travellers based on their transport needs. This paper proposes a knowledge-based framework based on Artificial Intelligence (AI) to integrate various mobility data types and provide travellers with customized services. The proposed framework includes a knowledge acquisition process to extract and structure data from multiple sources of information (such as mobility experts and weather data). It also adds new information to a knowledge base and improves the quality of previously acquired knowledge. We discuss how AI can help discover knowledge from various data sources and recommend sustainable and personalized mobility services with explanations. The proposed knowledge-based AI framework is implemented using a synthetic dataset as a proof of concept. Combining different information sources to generate valuable knowledge is identified as one of the challenges in this study. Finally, explanations of the proposed decisions provide a criterion for evaluating and understanding the proposed knowledge-based AI framework. © 2023 by the authors.

Place, publisher, year, edition, pages
Basel: MDPI, 2023
Keywords
mobility as a service, knowledge-based, explainability
National Category
Computer Sciences
Research subject
Smart Cities and Communities
Identifiers
urn:nbn:se:hh:diva-49970 (URN)10.3390/su15032717 (DOI)000929663500001 ()2-s2.0-85148043364 (Scopus ID)
Funder
Knowledge Foundation, 20180181
Available from: 2023-02-14 Created: 2023-02-14 Last updated: 2023-08-21. Bibliographically approved
Projects
Data-Driven Predictive Maintenance for Trucks [2016-03451_Vinnova]; Halmstad University
Identifiers
ORCID iD: orcid.org/0000-0003-3272-4145
