The black-box nature of deep neural networks seriously hinders intuitive analysis and understanding of network decision-making. Although various decision-explanation methods based on neuron contribution allocation have been reported in the literature, the consistency of existing methods is difficult to ensure, and their robustness still needs improvement. Starting from the concept of neuron relevance, this article proposes a new neural network explanation method, LID-Taylor (layer-wise increment decomposition). Building on LID-Taylor, a contrast lifting strategy for top-layer neuron relevance and a non-linear lifting strategy for all-layer neuron relevance are introduced, respectively. Finally, a cross-combination strategy is applied, yielding the final method, SIG-LID-IG, which achieves a substantial leap in decision-attribution performance. Both qualitative and quantitative evaluations of the decision-attribution performance of existing works and the proposed method have been conducted via heatmaps. Results show that SIG-LID-IG is comparable or even superior to existing works in the rationality of the positive and negative relevance it assigns to neurons during decision attribution. SIG-LID-IG also achieves higher accuracy and stronger robustness in decision attribution in terms of multi-scale heatmaps.
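For readers unfamiliar with attribution baselines: the "IG" component of SIG-LID-IG refers to Integrated Gradients, a standard path-integral attribution method. The sketch below is a minimal, generic illustration of that public technique on a toy analytic model; it is not the paper's SIG-LID-IG algorithm, and the model, function names, and step count are illustrative assumptions.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=64):
    """Midpoint-rule Riemann approximation of the IG path integral:
    IG_i(x) = (x_i - x'_i) * \int_0^1 dF(x' + a(x - x'))/dx_i da.
    grad_fn, steps, and the toy model below are illustrative choices."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy differentiable "model" F(x) = (w . x)^2 with its analytic gradient.
w = np.array([1.0, -2.0, 3.0])
F = lambda x: float(np.dot(w, x) ** 2)
grad = lambda x: 2.0 * np.dot(w, x) * w

x = np.array([0.5, 1.0, -0.5])
baseline = np.zeros(3)          # common choice: all-zero reference input
attr = integrated_gradients(grad, x, baseline)
```

A useful sanity check here is the completeness axiom: the attributions sum to `F(x) - F(baseline)`, which is one of the consistency properties the abstract's evaluation of attribution methods is concerned with.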