To address the high missed-detection rate of single-modality images and the low detection speed of existing dual-modality image fusion in pedestrian detection under low-visibility scenes, a lightweight pedestrian detection network based on dual-modality relevant image fusion is proposed. The network is built on YOLOv7-Tiny: RAMFusion is embedded in the backbone to extract and aggregate complementary features from the two modalities; the 1×1 convolutions used for feature extraction are replaced with spatially aware coordinate convolutions; Soft-NMS is introduced to reduce missed detections of pedestrians in crowds; and an attention mechanism module is embedded to improve detection accuracy. Ablation experiments on the public infrared-visible pedestrian dataset LLVIP show that, compared with other fusion methods, the proposed method reduces the pedestrian missed-detection rate and significantly increases detection speed. Compared with YOLOv7-Tiny, the improved model's detection accuracy increases by 2.4%, and it reaches up to 124 frames/s, meeting the real-time requirements of pedestrian detection in low-visibility scenes.
Keywords: pedestrian detection; infrared and visible images; relevant fusion; lightweight network; attention mechanism; YOLOv7-Tiny
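The abstract credits Soft-NMS with reducing missed detections of mutually occluding pedestrians in crowds: instead of discarding every box that overlaps a higher-scoring box (hard NMS), Soft-NMS only decays its score, so a genuinely distinct but overlapping pedestrian can survive. A minimal NumPy sketch of the Gaussian-decay variant follows; the function names and the `sigma`/`score_thresh` values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores instead of deleting boxes."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        m = np.argmax(scores[idxs])
        best = idxs[m]
        keep.append(int(best))
        idxs = np.delete(idxs, m)
        if len(idxs) == 0:
            break
        # Gaussian decay: heavy overlap -> strong score suppression,
        # but the box is only dropped once its score falls below the threshold.
        overlaps = iou(boxes[best], boxes[idxs])
        scores[idxs] *= np.exp(-(overlaps ** 2) / sigma)
        idxs = idxs[scores[idxs] > score_thresh]
    return keep

# Two heavily overlapping pedestrians plus one distant one:
# hard NMS at IoU 0.5 would drop the second box; Soft-NMS keeps all three.
boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
kept = soft_nms(boxes, [0.9, 0.8, 0.7])
```

In the example, the second box overlaps the first at IoU ≈ 0.68, so its score is decayed from 0.8 to roughly 0.32 rather than being removed, which is exactly the behavior that helps with partially occluded pedestrians in dense scenes.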