Classification of breast pathological images based on multiscale information interaction and fusion
Objective Breast cancer recognition with deep learning methods is a challenging task because breast histopathology images are extremely large (a single image is approximately 1 GB). Under current computational power limitations, these images must therefore be cut into patches before they can be identified. Most current research on breast cancer recognition focuses on single-scale networks, ignoring the multiple magnifications and pyramidal storage structure of breast histopathology images. The few studies on multiscale networks only feed images of different magnifications into the network model and concatenate or aggregate the resulting features after several convolutional layers. Such feature fusion is simplistic: it ignores the correlation between images of different scales, as well as the guidance that images of different scales can provide to one another when their texture features are extracted in the shallow part of the network. As a result, problems such as low feature utilization and a lack of information interaction between images of different magnifications remain.

Method This paper proposes a convolutional neural network improvement strategy based on multiscale processing and a group attention mechanism to address the above problems. The strategy comprises two modules: an information interaction module and a feature fusion module. The first module uses a spatial attention mechanism to extract clear cell morphological structure from high-magnification images and global context information from low-magnification images. Feature information that is highly relevant to the classification target of the main branch is given additional weight. These features are then weighted and accumulated, and the result is fed back to the original branch for dynamic selection, achieving feature interaction and circulation. The second module considers that the number of channels in the feature maps multiplies as the network deepens, so standard channel attention suffers from heavy computation and a low feature activation rate. This paper therefore proposes group attention based on group convolution and integrates it into the feature fusion module. In addition, images at different magnifications have different receptive fields (i.e., the actual physical length represented by each pixel differs), so a feature pyramid is used to eliminate this receptive-field difference during the feature fusion process.

Result The above strategy is applied to a variety of convolutional neural networks and compared with the latest methods. A fivefold cross-validation experiment is conducted on the public Camelyon16 dataset, and the mean and standard deviation of each evaluation metric are reported. Compared with single-scale convolutional networks, the proposed method improves accuracy by 0.9%-1.1% and F1-score by 1.1%-1.2%. Compared with TransPath, the best-performing single-scale network, the enhanced DenseNet201 improves accuracy by 0.6%, precision by 0.8%, and F1-score by 0.6%, and the standard deviations of its metrics are lower than those of TransPath, indicating that the network incorporating the strategy is more stable.

Conclusion Overall, the proposed strategy compensates for the shortcomings of general multiscale networks and is sufficiently general to achieve superior performance in breast cancer image classification. This strategy is therefore useful for future multiscale research and for feature extraction in downstream tasks.
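The information interaction module described above can be sketched in PyTorch. This is a minimal illustration, not the paper's exact design: the CBAM-style spatial attention (channel-wise average/max pooling followed by a 7×7 convolution), the module and class names, and the residual feedback to the main branch are all assumptions made for the sketch; the abstract only specifies that spatial attention weights features from the two magnifications, accumulates them, and feeds the result back to the original branch.

```python
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention (an assumption; the paper's exact design
    is not given in the abstract): channel-pooled maps pass through a 7x7
    convolution to produce a per-pixel weight in (0, 1)."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)   # channel-wise average pooling
        mx, _ = x.max(dim=1, keepdim=True)  # channel-wise max pooling
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class InteractionModule(nn.Module):
    """Hypothetical sketch: weight each branch's features by spatial
    relevance, accumulate them, and feed the sum back to the main branch."""

    def __init__(self):
        super().__init__()
        self.att_main = SpatialAttention()
        self.att_aux = SpatialAttention()

    def forward(self, f_main, f_aux):
        # f_main, f_aux: (N, C, H, W) features from the high- and
        # low-magnification branches at the same spatial resolution.
        w_main = self.att_main(f_main)
        w_aux = self.att_aux(f_aux)
        fused = w_main * f_main + w_aux * f_aux  # weighted accumulation
        return f_main + fused                    # feedback to the main branch


x_hi = torch.randn(2, 64, 28, 28)  # high-magnification branch features
x_lo = torch.randn(2, 64, 28, 28)  # low-magnification branch features
out = InteractionModule()(x_hi, x_lo)
print(out.shape)  # torch.Size([2, 64, 28, 28])
```

The output keeps the main branch's shape, so the module can be dropped between convolutional stages of an existing backbone without altering the downstream layers.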
Keywords: classification of breast pathological images; dense convolutional network; multiscale; attention; fusion of features
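The group attention described in the Method section can likewise be sketched. This is a speculative reading of the abstract: a squeeze-and-excitation style channel gate whose two 1×1 convolutions are grouped, so each channel group attends over its own channels and the fully connected cost drops by roughly the group factor. The class name, the reduction ratio, and the SE-like structure are assumptions; the paper states only that the attention is built on group convolution to reduce computation.

```python
import torch
import torch.nn as nn


class GroupChannelAttention(nn.Module):
    """Hypothetical group attention sketch: channels are split into groups
    and a per-channel gate in (0, 1) is computed with grouped 1x1
    convolutions instead of dense fully connected layers."""

    def __init__(self, channels, groups=8, reduction=4):
        super().__init__()
        hidden = max(channels // reduction, groups)
        hidden -= hidden % groups  # keep hidden divisible by the group count
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pooling
        self.fc = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, groups=groups, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1, groups=groups, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Gate shape (N, C, 1, 1) broadcasts over the spatial dimensions.
        return x * self.fc(self.pool(x))


x = torch.randn(2, 64, 14, 14)
y = GroupChannelAttention(64, groups=8)(x)
print(y.shape)  # torch.Size([2, 64, 14, 14])
```

With `groups=8`, each grouped 1×1 convolution only connects channels within the same group, using roughly one-eighth of the parameters of the equivalent dense excitation layers.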