Norouzi, V., Ahmadian, D., Ballestra, L.V., Almasizadeh, J. (2026). An interpretable BNN framework with alpha divergence and normalizing flows for customer lifetime prediction using personality traits. NEUROCOMPUTING, 687, 1-17 [10.1016/j.neucom.2026.133723].
An interpretable BNN framework with alpha divergence and normalizing flows for customer lifetime prediction using personality traits
Ahmadian, Davood; Ballestra, Luca Vincenzo
2026
Abstract
Conventional Bayesian Neural Networks (BNNs) frequently suffer from variance underestimation and mode-seeking behavior due to the restrictive mean-field assumption and the exclusive use of the Kullback-Leibler divergence in variational inference. These limitations often result in overconfident predictions and opaque decision boundaries, hindering their adoption in high-stakes domains. This study proposes a unified Interpretable Bayesian Neural Network framework that overcomes these challenges by integrating normalizing flows, to construct a highly flexible, multi-modal approximate posterior, with alpha divergence, to enforce robust mass-covering regularization. Applied to the complex non-linear task of forecasting Customer Lifetime Value (CLV) from stable psychometric traits, the proposed framework achieves state-of-the-art predictive accuracy, significantly outperforming both deterministic ensembles (Random Forest, XGBoost) and a standard BNN baseline. A key contribution is the principled decomposition of predictive variance into aleatoric and epistemic sources, enabling risk-adjusted resource allocation. Furthermore, the integration of synergistic Explainable AI (XAI) techniques, validated via rigorous stability analysis, renders the model transparent, identifying Conscientiousness and Neuroticism as the primary, theoretically grounded drivers of customer value. This work provides a normative blueprint for trustworthy AI, demonstrating that architectural flexibility, principled uncertainty quantification, and interpretability can be cohesively engineered to support complex data-driven strategies.
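The variance decomposition mentioned in the abstract can be illustrated with a minimal sketch (not the paper's implementation). Assuming a BNN that, for each posterior weight sample s, returns a predictive mean mu_s and a noise variance sigma2_s per input, the standard law-of-total-variance split is: aleatoric = mean over samples of sigma2_s, epistemic = variance over samples of mu_s. The arrays below are randomly generated stand-ins for model output.

```python
# Illustrative sketch only: splitting a BNN's predictive variance into
# aleatoric (data noise) and epistemic (weight uncertainty) components.
# mu and sigma2 are hypothetical stand-ins for the per-sample outputs
# of a Bayesian regression model; no real model is fitted here.
import numpy as np

rng = np.random.default_rng(0)

S, N = 50, 4                                         # 50 posterior samples, 4 customers
mu = rng.normal(loc=100.0, scale=5.0, size=(S, N))   # per-sample predictive means mu_s
sigma2 = rng.uniform(1.0, 3.0, size=(S, N))          # per-sample noise variances sigma2_s

aleatoric = sigma2.mean(axis=0)   # E_s[sigma2_s]: irreducible observation noise
epistemic = mu.var(axis=0)        # Var_s[mu_s]: uncertainty from the weight posterior
total = aleatoric + epistemic     # law of total variance

print(aleatoric, epistemic, total)
```

In a risk-adjusted allocation setting, inputs with high epistemic variance flag customers on whom the model itself is unsure (more data could help), while high aleatoric variance flags intrinsically noisy CLV outcomes that more data cannot reduce.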



