Development of explainable artificial intelligence model for biomedical survival analysis

2026-01-20
Somuncuoğlu, Abdullah Nuri
While machine learning and deep learning architectures offer unprecedented predictive accuracy in survival analysis, their deployment in high-stakes biomedical domains is hindered by their "black-box" nature. Current Explainable AI (XAI) techniques, particularly Local Interpretable Model-Agnostic Explanations (LIME), rely on stochastic perturbation strategies that often yield unstable and inconsistent explanations for identical instances, thereby compromising clinical trust. This thesis addresses the inherent instability of linear surrogate models by introducing a novel framework: MARS-LIME. By integrating Multivariate Adaptive Regression Splines (MARS) into the local explanation process, the proposed method effectively captures non-linear relationships and high-dimensional interactions within local decision boundaries, which traditional linear approximations fail to resolve. Furthermore, an extended variant, MARS2-LIME, is developed to explicitly model second-order variable interactions, enhancing the granularity of risk factor analysis. Rigorous empirical evaluations conducted on the NHANES I epidemiological dataset demonstrate that MARS-based approaches significantly outperform standard LIME and OptiLIME in terms of Coefficients Stability Index (CSI) and Variables Stability Index (VSI). The study also presents a fuzzy-logic-based visualization technique to quantify feature ranking consistency. Ultimately, this research mitigates the stochastic volatility of post-hoc explanations, providing a robust and mathematically stable foundation for interpreting complex survival models in critical healthcare applications.
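The core idea described above, replacing LIME's linear local surrogate with a MARS-style piecewise-linear one fit to perturbed samples, can be sketched as follows. This is a minimal illustration only, with a hypothetical black-box risk function and hand-picked quantile knots; it is not the thesis's actual MARS-LIME implementation, and all names (`black_box_risk`, `mars_lime_explain`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box survival risk model (stand-in for a trained learner).
def black_box_risk(X):
    return np.tanh(X[:, 0]) + 0.5 * X[:, 0] * X[:, 1] + 0.3 * X[:, 1] ** 2

def hinge_basis(X, knots):
    """MARS-style basis: for each feature j and knot t, the hinge pair
    max(0, x_j - t) and max(0, t - x_j)."""
    cols = []
    for j in range(X.shape[1]):
        for t in knots[j]:
            cols.append(np.maximum(0.0, X[:, j] - t))
            cols.append(np.maximum(0.0, t - X[:, j]))
    return np.column_stack(cols)

def mars_lime_explain(x0, n_samples=500, scale=0.5):
    # 1. Perturb around the instance to probe the local decision surface.
    X = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    y = black_box_risk(X)
    # 2. Place knots at local quantiles of each perturbed feature.
    knots = [np.quantile(X[:, j], [0.25, 0.5, 0.75]) for j in range(x0.size)]
    # 3. Fit the piecewise-linear surrogate by least squares (intercept first).
    B = np.column_stack([np.ones(n_samples), hinge_basis(X, knots)])
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coef, knots

# Explain a single instance: the hinge coefficients describe how local risk
# bends around each knot, which a purely linear LIME surrogate cannot express.
x0 = np.array([0.2, -0.1])
coef, knots = mars_lime_explain(x0)
```

In this sketch the surrogate remains deterministic given the random seed; the thesis's stability gains (measured via CSI and VSI) come from the richer basis tracking the local decision boundary more faithfully than a single linear fit.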
Citation Formats
A. N. Somuncuoğlu, “Development of explainable artificial intelligence model for biomedical survival analysis,” Ph.D. - Doctoral Program, Middle East Technical University, 2026.