CoxSE: Exploring the Potential of Self-Explaining Neural Networks with Cox Proportional Hazards Model for Survival Analysis
Halmstad University, School of Information Technology. ORCID iD: 0000-0001-9416-5647
Halmstad University, School of Information Technology.
Halmstad University, School of Information Technology. ORCID iD: 0000-0003-1145-4297
Halmstad University, School of Information Technology. ORCID iD: 0000-0001-5163-2997
(English) Manuscript (preprint) (Other academic)
Abstract [en]

The Cox Proportional Hazards (CPH) model has long been the preferred survival model because of its explainability. However, to increase its predictive power beyond the linear log-risk, it has been extended with deep neural networks, sacrificing explainability. In this work, we explore the potential of Self-Explaining Neural Networks (SENN) for survival analysis. We propose CoxSE, a new locally explainable Cox proportional hazards model that estimates a locally linear log-hazard function using a SENN. We also propose CoxSENAM, a modification of Neural Additive Models (NAM) hybridized with SENN, which enables control over the stability and consistency of the generated explanations.

Several experiments using synthetic and real datasets are presented, benchmarking CoxSE and CoxSENAM against a NAM-based model, a DeepSurv model explained with SHAP, and a linear CPH model. The results show that, unlike the NAM-based model, the SENN-based model can provide more stable and consistent explanations while maintaining the predictive power of the black-box model. The results also show that, due to their structural design, NAM-based models demonstrate better robustness to non-informative features. Among the models, the hybrid model exhibits the best robustness.
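The locally linear log-hazard idea can be sketched numerically. This is a minimal illustration under stated assumptions, not the paper's implementation: `np.tanh` stands in for the neural coefficient network theta(x), and the Cox partial likelihood uses the Breslow form with ties ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

def senn_log_hazard(x, theta_fn):
    """SENN-style locally linear log-hazard: log-risk = theta(x) . x,
    with input-dependent coefficients theta(x) acting as explanations."""
    theta = theta_fn(x)               # (n, d) local coefficients
    return np.sum(theta * x, axis=1)  # (n,) log-risk scores

def neg_cox_partial_log_likelihood(log_risk, time, event):
    """Negative Cox partial log-likelihood (Breslow form, ties ignored)."""
    order = np.argsort(-time)         # descending time: risk set = prefix
    lr, ev = log_risk[order], event[order]
    log_cumsum = np.logaddexp.accumulate(lr)  # log-sum-exp over each risk set
    return -np.sum((lr - log_cumsum)[ev == 1])

# toy data; tanh is a hypothetical stand-in for the SENN network theta(x)
x = rng.normal(size=(8, 3))
time = rng.exponential(size=8)
event = rng.integers(0, 2, size=8)
risk = senn_log_hazard(x, np.tanh)
loss = neg_cox_partial_log_likelihood(risk, time, event)
```

The per-sample coefficient vector theta(x) is what makes the model locally explainable: each prediction comes with its own linear attribution over the input features.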

Keywords [en]
Self-Explaining Neural Networks, Cox Proportional Hazards, Survival Analysis, Interpretability, XAI, Neural Additive Models
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-55201
DOI: 10.48550/arXiv.2407.13849
OAI: oai:DiVA.org:hh-55201
DiVA, id: diva2:1925262
Note

As manuscript in thesis

Available from: 2025-01-08. Created: 2025-01-08. Last updated: 2025-10-01. Bibliographically approved.
In thesis
1. Towards Trustworthy Survival Analysis with Machine Learning Models
2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Survival analysis is a major sub-field of statistics that studies the time to an event, such as a patient's death or a machine's failure. This makes survival analysis crucial in critical applications like medical studies and predictive maintenance. In such applications, safety is critical, creating a demand for trustworthy models. Machine learning and deep learning techniques have increasingly been adopted, spurred by the growing volume of collected data. While this direction holds promise for improving certain qualities, such as model performance, it also introduces new challenges, particularly for model explainability. This challenge is general in machine learning because of the black-box nature of most models, especially deep neural networks (DNNs). However, survival models usually output functions rather than point estimates as regression and classification models do, which makes explaining them an even more challenging task.

Other challenges arise from the nature of time-to-event data, such as censoring. Censoring occurs for several reasons, most commonly the limited study duration, which leaves a considerable number of studied subjects not experiencing the event during the study. Moreover, in industrial settings, recorded events do not always correspond to actual failures, because companies tend to replace machine parts before failure for safety or cost reasons, resulting in noisy event labels. Censoring and noisy labels make both building and evaluating survival models challenging.

This thesis addresses these challenges along two tracks, one focusing on explainability and the other on improving performance. The two tracks eventually merge, providing an explainable survival model that maintains the performance of its black-box counterpart.

In the explainability track, we propose two post-hoc explanation methods based on what we define as survival patterns: patterns in the predictions of a survival model that represent distinct survival behaviors in the studied population. We propose an algorithm for discovering the survival patterns upon which both post-hoc explanation methods rely. The first method, SurvSHAP, uses a proxy classification model that learns the relationship between the input space and the discovered survival patterns; the proxy model is then explained with SHAP, yielding per-pattern explanations. The second method finds counterfactual explanations that would change the decision of the survival model from one source survival pattern to another, using Particle Swarm Optimization (PSO) with a tailored objective function to guarantee explanation qualities such as plausibility and actionability.
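The pattern-discovery step can be illustrated with a toy sketch. This is only an illustration of the idea, grouping predicted survival curves into a few "patterns"; the plain k-means used here is a hypothetical stand-in for the thesis's actual discovery algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def discover_survival_patterns(curves, k=2, iters=50):
    """Toy sketch: cluster predicted survival curves into k groups
    ('survival patterns') with a plain k-means on the curve values."""
    centers = curves[rng.choice(len(curves), k, replace=False)]
    for _ in range(iters):
        dist = ((curves[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for j in range(k):
            if (labels == j).any():              # skip empty clusters
                centers[j] = curves[labels == j].mean(0)
    return labels, centers

# synthetic survival curves exp(-rate * t) with two distinct risk groups
t = np.linspace(0, 5, 20)
rates = np.r_[rng.uniform(0.2, 0.4, 10), rng.uniform(1.5, 2.0, 10)]
curves = np.exp(-rates[:, None] * t[None, :])
labels, centers = discover_survival_patterns(curves, k=2)
```

Each cluster center is then a representative "pattern" curve, and the cluster assignment is the label a proxy classifier (as in SurvSHAP) would learn to predict from the input features.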

On the performance track, we propose a variational encoder-decoder model that estimates the survival function with a sampling-based approach. The model is trained with a regression-based objective function that accounts for censored instances, assisted by a differentiable lower bound of the concordance index (C-index). In the same work, we propose a decomposition of the C-index, showing that it can be expressed as a weighted harmonic average of two quantities: one quantifies the concordance among observed event cases, and the other quantifies the concordance between observed events and censored cases. The two quantities are weighted by a factor that balances the contribution of event and censored cases to the total C-index. This decomposition uncovers hidden differences among survival models that appear equivalent under the C-index alone. We also used genetic programming to search for a regression-based loss function for survival analysis with improved concordance ability. The search uncovered an interesting phenomenon, based on which we propose using the continuously differentiable Softplus function instead of the sharp-cut ReLU function for handling censored cases. Lastly on the performance track, we propose an algorithm for correcting erroneous observed event labels caused by preventive maintenance activities. The algorithm adopts an iterative, expectation-maximization-like approach, using a genetic algorithm to search for better event labels that maximize a surrogate survival model's performance.
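The Softplus-for-censoring idea can be sketched as follows. The exact loss in the thesis is not given here, so this is a hypothetical regression-style form: events incur a squared error against the observed time, while censored cases are penalized only when the prediction falls below the censoring time, via ReLU or its smooth Softplus surrogate.

```python
import numpy as np

def softplus(z):
    # numerically stable log(1 + exp(z)), an upper bound of max(z, 0)
    return np.logaddexp(0.0, z)

def censored_regression_loss(pred, time, event, use_softplus=True):
    """Hypothetical regression-style survival loss sketch.
    Events: squared error to the observed time.
    Censored: penalize only predictions below the censoring time c,
    via ReLU(c - pred) or its continuously differentiable Softplus surrogate."""
    event = event.astype(bool)
    event_loss = np.mean((pred[event] - time[event]) ** 2) if event.any() else 0.0
    margin = time[~event] - pred[~event]   # positive when prediction is too early
    if use_softplus:
        cens_loss = np.mean(softplus(margin)) if (~event).any() else 0.0
    else:
        cens_loss = np.mean(np.maximum(margin, 0.0)) if (~event).any() else 0.0
    return event_loss + cens_loss

pred = np.array([1.0, 2.0, 3.0, 4.0])
time = np.array([1.5, 2.0, 2.5, 5.0])
event = np.array([1, 1, 0, 0])
loss_soft = censored_regression_loss(pred, time, event, use_softplus=True)
loss_relu = censored_regression_loss(pred, time, event, use_softplus=False)
```

Unlike ReLU, Softplus has a nonzero gradient everywhere, so censored cases keep contributing gradient signal even when the prediction already exceeds the censoring time.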

Finally, the two tracks merge, and we propose CoxSE, a Cox-based deep neural network model that provides inherent explanations while maintaining the performance of its black-box counterpart. The model combines Self-Explaining Neural Networks (SENN) with the Cox Proportional Hazards formulation. We also propose CoxSENAM, an enhancement of the Neural Additive Model (NAM) that adopts the NAM structure together with the SENN loss function and output type. CoxSENAM demonstrated better explanations than the NAM-based model, with enhanced robustness to noise.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2025. p. 29
Series
Halmstad University Dissertations ; 128
National Category
Computer Sciences; Information Systems
Identifiers
URN: urn:nbn:se:hh:diva-55202
ISBN: 978-91-89587-72-4
ISBN: 978-91-89587-73-1
Public defence
2025-01-31, S3030, Högskolan i Halmstad, Kristian IV:s väg 3, Halmstad, 09:00 (English)
Available from: 2025-01-10. Created: 2025-01-08. Last updated: 2025-10-01. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Alabdallah, Abdallah; Hamed, Omar; Ohlsson, Mattias; Rögnvaldsson, Thorsteinn; Pashami, Sepideh
