Lightweight multi-modal pedestrian detection algorithm based on YOLO
To address the low accuracy of pedestrian detection in low-light environments and the large number of model parameters, a lightweight multi-modal pedestrian detection algorithm named EF-DEM-YOLO was proposed based on the YOLO framework. The algorithm employed the lightweight ES-MobileNet as the backbone feature extraction network and integrated ECA and SE-ECA attention modules into this network to enhance important channel features, thereby improving detection accuracy for small-target pedestrians. A DBL module based on depthwise-separable convolution was also designed in the neck network to further reduce the number of model parameters. In addition, to improve pedestrian detection accuracy under low-light conditions, a weighted fusion method for the visible and infrared modalities based on image entropy was proposed; this method exploited the complementary features of the two modalities under different lighting conditions, and a corresponding fusion module, EWF, was designed. Compared with baseline methods, the proposed algorithm yielded significant improvements for pedestrian targets under different lighting conditions: mAP increased by 55.5%, the miss rate (MR) decreased by 85.9%, and the inference speed reached 33.4 frames per second, outperforming other classical object detection algorithms. The algorithm thus makes real-time pedestrian detection feasible in edge computing and low-light scenarios.
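The abstract does not give the exact formula used by the EWF module; the following is a minimal Python sketch of one plausible entropy-weighted fusion, assuming the Shannon entropy of each image's intensity histogram is used as its weight. The function names and the epsilon guard are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def image_entropy(img):
    # Shannon entropy (bits) of the grayscale intensity histogram.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def entropy_weighted_fusion(visible, infrared):
    # Weight each modality by its normalized image entropy, so the
    # more informative modality dominates the fused result. A small
    # epsilon guards against both entropies being zero.
    e_v = image_entropy(visible)
    e_i = image_entropy(infrared)
    w_v = e_v / (e_v + e_i + 1e-12)
    w_i = 1.0 - w_v
    return w_v * visible.astype(np.float32) + w_i * infrared.astype(np.float32)
```

Under this scheme a nearly uniform infrared frame (low entropy, e.g. at dusk when thermal contrast is weak) contributes little, while at night the situation reverses, which matches the complementary-modality motivation described above.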