Deep and interpretable regression models for ordinal outcomes
Outcomes with a natural order commonly occur in prediction problems, and often the available input data are a mixture of complex data, like images, and tabular predictors. Deep Learning (DL) models are state-of-the-art for image classification tasks but frequently treat ordinal outcomes as unordered and lack interpretability. In contrast, classical ordinal regression models consider the outcome's order and yield interpretable predictor effects but are limited to tabular data. We present ordinal neural network transformation models (ONTRAMs), which unite DL with classical ordinal regression approaches. ONTRAMs are a special case of transformation models and trade off flexibility and interpretability by additively decomposing the transformation function into terms for image and tabular data using jointly trained neural networks. The performance of the most flexible ONTRAM is by definition equivalent to a standard multiclass DL model trained with cross-entropy while being faster in training when facing ordinal outcomes. Lastly, we discuss how to interpret model components for both tabular and image data on two publicly available datasets. (C) 2021 Published by Elsevier Ltd.
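To make the transformation-model idea concrete, here is a minimal numerical sketch of a shift-type ordinal model: class probabilities arise from differences of a CDF evaluated at ordered cutpoints minus an additive shift, where the shift would be produced by the jointly trained tabular and image networks. The function name, the logistic link, and the toy cutpoints below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ontram_probs(cutpoints, shift):
    """Class probabilities of a shift-type ordinal transformation model.

    P(Y <= k | x) = F(theta_k - shift(x)), with F the standard logistic CDF.
    `cutpoints` holds the K-1 ordered intercepts theta_1 < ... < theta_{K-1};
    `shift` is the scalar shift that, in an ONTRAM, would be the sum of the
    tabular and image network outputs (here just a number, for illustration).
    """
    cdf = sigmoid(np.asarray(cutpoints, dtype=float) - shift)  # P(Y <= k)
    cdf = np.concatenate([[0.0], cdf, [1.0]])  # pad: P(Y <= 0)=0, P(Y <= K)=1
    return np.diff(cdf)                        # P(Y = k) = CDF_k - CDF_{k-1}

theta = [-1.0, 0.5, 2.0]          # ordered cutpoints for K = 4 classes
p = ontram_probs(theta, shift=0.3)
print(p, p.sum())                 # 4 nonnegative probabilities summing to 1
```

Because the order of the classes is encoded in the shared cutpoints, increasing the shift moves probability mass monotonically toward higher categories, which is what gives the tabular coefficients their interpretable (log-odds) meaning.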
Deep learning; Interpretability; Distributional regression; Ordinal regression; Transformation models