To improve real-time detection of human fall poses across multiple scenarios, an improved YOLOv7-tiny object detection algorithm based on an information enhancement module and attention feature fusion is proposed. First, to address insufficient sensitivity to feature information in important regions, a contrast-aware global information enhancement module is embedded in the backbone network to learn feature weights effectively and strengthen the network's ability to discriminate human poses. Second, to exploit contextual information effectively, a dense-coordinate attention feature fusion structure is introduced: it fuses shallow and deep semantic information along the channel dimension, retains the positional weights of useful features, and facilitates a fuller representation of human pose information in the network. Finally, the proposed algorithm is validated on a human fall pose dataset. The experimental results show that it achieves an average precision of 77%, 3.7% higher than that of the baseline network, effectively improving the detection of human fall behavior. The proposed algorithm is also validated on the student classroom behavior datasets SCB1 and SCB2 and on the PASCAL VOC test set, where the average detection precision exceeds the baseline by 0.6%, 0.5% and 2.1% respectively, confirming the generality of the algorithm.
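The channel-dimension fusion with position-preserving attention described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the use of plain mean pooling, and the omission of the learned 1x1 convolutions are all simplifying assumptions, intended only to show how direction-wise pooling yields gates that keep positional weights when shallow and deep maps are concatenated along the channel axis.

```python
import numpy as np

def sigmoid(z):
    # Numerically plain logistic gate for the attention weights
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x):
    """Weight each position with direction-aware gates (simplified sketch).
    x: feature map of shape (C, H, W). A real module would insert learned
    1x1 convolutions before the gates; here they are omitted for brevity."""
    pool_h = x.mean(axis=2, keepdims=True)   # (C, H, 1): per-row descriptor
    pool_w = x.mean(axis=1, keepdims=True)   # (C, 1, W): per-column descriptor
    # The two gates broadcast back over the map, so useful feature
    # positions keep higher weights along both spatial directions.
    return x * sigmoid(pool_h) * sigmoid(pool_w)

def fuse(shallow, deep):
    """Channel-dimension fusion of shallow and deep semantic maps.
    Both inputs are assumed already resized/projected to the same (H, W)."""
    fused = np.concatenate([shallow, deep], axis=0)  # (C1 + C2, H, W)
    return coordinate_attention(fused)

rng = np.random.default_rng(0)
shallow = rng.standard_normal((64, 40, 40))   # hypothetical shallow features
deep = rng.standard_normal((128, 40, 40))     # hypothetical deep features
out = fuse(shallow, deep)
print(out.shape)  # (192, 40, 40)
```

The gating multiplies, rather than replaces, the fused features, so channels from both the shallow and deep branches survive with position-dependent weights.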
fall detection; YOLOv7-tiny; information enhancement; feature fusion; attention mechanism