Ensuring that multiple kinds of safety protective equipment are properly configured during hot work is an effective way to prevent personnel injury. To address the problem of real-time monitoring of safety protective equipment in complex hot work scenarios, an improved object detection algorithm based on YOLOv5, denoted Hot work-YOLOv5s, is designed and implemented. First, a Space-to-Depth Convolution (SPD-Conv) module is embedded in the backbone of YOLOv5s to replace the original strided convolution and pooling downsampling, reducing the loss of fine-grained feature information. Next, a Simple, Parameter-Free Attention Module (SimAM) is introduced into the residual module to strengthen feature representation and increase network speed. Finally, the bounding-box regression loss of the original network is replaced with the WIoU (Weighted Intersection over Union) loss, whose dynamic non-monotonic focusing mechanism accelerates network convergence. On a self-built dataset of safety protective equipment for hot work, the improved model achieves a mean average precision of 96.8% at 89 frames per second. The proposed algorithm meets the accuracy and real-time requirements of detecting the safety protective equipment worn by hot work personnel, performs well on images with heavy background interference and illumination changes, and provides a new approach for rapid detection of safety protective equipment in hot work scenarios.
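As a minimal sketch of the downsampling replacement described above, the following PyTorch module rearranges each 2x2 spatial neighborhood into the channel dimension (space-to-depth) and then applies a non-strided convolution. The channel counts and activation are illustrative assumptions, not the exact Hot work-YOLOv5s configuration.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-Depth convolution: lossless 2x downsampling followed by a
    stride-1 convolution, in place of strided conv / pooling layers."""

    def __init__(self, in_channels, out_channels, scale=2):
        super().__init__()
        self.scale = scale
        # After space-to-depth, channels grow by scale**2; the conv then mixes them.
        self.conv = nn.Conv2d(in_channels * scale ** 2, out_channels,
                              kernel_size=3, stride=1, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.SiLU()

    def forward(self, x):
        s = self.scale
        # Slice the feature map into s*s interleaved sub-maps and stack them on the
        # channel axis, so no pixel information is discarded during downsampling.
        x = torch.cat([x[..., i::s, j::s] for i in range(s) for j in range(s)], dim=1)
        return self.act(self.bn(self.conv(x)))

# Example: a 640x640 feature map is halved spatially without information loss.
feat = torch.randn(1, 64, 640, 640)
print(SPDConv(64, 128)(feat).shape)  # torch.Size([1, 128, 320, 320])
```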
Method for identifying safety protective equipment in hot work scenarios
This paper proposes an improved object detection method for safety protection products in hot work scenarios, taking into account the complex background and high real-time requirements of the hot work process. First, with YOLOv5s as the base network, a convolutional module called Space-to-Depth Convolution (SPD-Conv) is introduced into the backbone network. This module replaces each convolution layer and pooling layer with a stride greater than 1 that downsamples the feature map, thereby avoiding information loss and enhancing the model's ability to detect small and low-resolution targets. Second, a Simple, Parameter-Free Attention Module (SimAM) is incorporated into the residual module. Without introducing additional parameters, this module strengthens the algorithm's ability to distinguish targets from the background and accelerates recognition. Third, Weighted Intersection over Union (WIoU) replaces the original loss function of YOLOv5s, Complete Intersection over Union (CIoU), to improve the accuracy of the predicted bounding boxes. This loss lets the model focus on anchor boxes of ordinary quality and uses an appropriate gradient allocation strategy to accelerate convergence.

Hot work images are then collected through field visits and online resources. The dataset is expanded with image enhancement techniques, including histogram equalization, HSV color gamut adjustment, the addition of Gaussian and salt-and-pepper noise, blurring, geometric transformations, and random occlusion. These operations expose the model to a wider range of color and brightness variations, improving its generalization ability and its adaptability to real-world scenarios. The detection targets are safety goggles, safety helmets, gloves, protective shoes, fire extinguishers, safety belts, guardrails, masks, warning signs, protective suits, and dust masks, as well as instances of missing safety helmets, missing protective shoes, missing protective suits, uncovered faces, and missing gloves. The original dataset is divided into a training set and a test set, and the training samples are manually labeled with the LabelImg annotation software.

Experiments on the custom hot work dataset show that, compared with the original YOLOv5s, the proposed algorithm raises Precision from 89.3% to 96.9%, Recall from 91.2% to 96.4%, mean Average Precision from 90.9% to 96.8%, and Frames Per Second (FPS) from 83 to 89, improving both detection speed and accuracy. The improved detection model also converges quickly. Notably, when Recall falls below 80%, the Precision of every detection category remains above 90%. Missed detections and false positives are significantly reduced in scenes with small targets and light interference, so the approach offers clear advantages for identifying safety protection products in complex hot work scenarios. Finally, PyQt5 is used to build a visual interface, and the improved algorithm is packaged into an executable (exe) file, which facilitates deployment at hot work sites for timely risk control and accident prevention.
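To make the parameter-free attention step concrete, the following PyTorch module is a minimal sketch of SimAM following its published energy-based formulation; the regularization constant `e_lambda` and its placement inside the residual block are assumptions rather than settings taken from this paper.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention: weights each neuron by an energy-based
    importance score computed from per-channel spatial statistics."""

    def __init__(self, e_lambda=1e-4):
        super().__init__()
        self.e_lambda = e_lambda  # regularization constant (assumed value)

    def forward(self, x):
        n = x.shape[2] * x.shape[3] - 1
        # Squared deviation of every position from its channel mean.
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # Channel-wise variance estimate used in the closed-form energy.
        v = d.sum(dim=(2, 3), keepdim=True) / n
        # Inverse energy: larger values mark more distinctive (important) neurons.
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        # Gate the features with a sigmoid attention map; no parameters are learned.
        return x * torch.sigmoid(e_inv)

x = torch.randn(2, 128, 40, 40)
print(SimAM()(x).shape)  # torch.Size([2, 128, 40, 40])
```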
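The bounding-box loss can be sketched as follows, assuming that WIoU here refers to the Wise-IoU-style loss with a dynamic non-monotonic focusing coefficient; boxes are taken in (x1, y1, x2, y2) format, and the hyperparameters `alpha`, `delta`, and the running-mean momentum are illustrative, not values reported by the paper.

```python
import torch

def wiou_loss(pred, target, iou_mean, alpha=1.9, delta=3.0, momentum=0.01):
    """Sketch of a WIoU-style box loss with dynamic non-monotonic focusing.
    pred, target: (N, 4) boxes as (x1, y1, x2, y2).
    iou_mean: running mean of the IoU loss maintained across iterations."""
    # IoU term.
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + 1e-7)
    l_iou = 1 - iou

    # Center-distance attention over the smallest enclosing box (denominator detached).
    c_wh = torch.max(pred[:, 2:], target[:, 2:]) - torch.min(pred[:, :2], target[:, :2])
    px, py = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    tx, ty = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    r_wiou = torch.exp(((px - tx) ** 2 + (py - ty) ** 2) /
                       (c_wh[:, 0] ** 2 + c_wh[:, 1] ** 2 + 1e-7).detach())

    # Dynamic non-monotonic focusing: down-weight both very good and very poor
    # anchor boxes so the gradient concentrates on ordinary-quality ones.
    beta = l_iou.detach() / iou_mean
    r_focus = beta / (delta * alpha ** (beta - delta))

    # Update the running mean of the IoU loss outside the computation graph.
    iou_mean = (1 - momentum) * iou_mean + momentum * l_iou.mean().item()
    return (r_focus * r_wiou * l_iou).mean(), iou_mean
```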
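The augmentation steps named above can be sketched with OpenCV and NumPy as follows; the probabilities, noise levels, blur kernel, and occlusion size are illustrative assumptions, not the exact settings used to build the dataset, and in practice the bounding-box labels must be transformed together with the geometric operations.

```python
import cv2
import numpy as np

def augment(img, rng=np.random.default_rng()):
    """Apply a random subset of the listed augmentations to a BGR image."""
    out = img.copy()
    # Histogram equalization on the luminance channel.
    if rng.random() < 0.5:
        ycrcb = cv2.cvtColor(out, cv2.COLOR_BGR2YCrCb)
        ycrcb[..., 0] = cv2.equalizeHist(ycrcb[..., 0])
        out = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    # HSV gamut adjustment: random gains on saturation and value.
    if rng.random() < 0.5:
        hsv = cv2.cvtColor(out, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 1:] *= rng.uniform(0.7, 1.3, size=2)
        out = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)
    # Gaussian noise.
    if rng.random() < 0.3:
        out = np.clip(out + rng.normal(0, 10, out.shape), 0, 255).astype(np.uint8)
    # Salt-and-pepper noise.
    if rng.random() < 0.3:
        mask = rng.random(out.shape[:2])
        out[mask < 0.01] = 0
        out[mask > 0.99] = 255
    # Blurring.
    if rng.random() < 0.3:
        out = cv2.GaussianBlur(out, (5, 5), 0)
    # Geometric transformation: horizontal flip (boxes must be flipped accordingly).
    if rng.random() < 0.5:
        out = cv2.flip(out, 1)
    # Random occlusion: blank out a rectangular patch.
    if rng.random() < 0.3:
        h, w = out.shape[:2]
        x, y = rng.integers(0, w // 2), rng.integers(0, h // 2)
        out[y:y + h // 8, x:x + w // 8] = 114
    return out
```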
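The deployment step can be illustrated with a minimal PyQt5 window that loads an image, runs the trained detector, and displays the rendered result. The weight file name `best.pt`, the torch.hub loading route, and the window layout are assumptions for illustration, not the interface described in the paper; the window can then be frozen into a standalone exe with a packager such as PyInstaller.

```python
import sys
import cv2
import numpy as np
import torch
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import (QApplication, QFileDialog, QLabel,
                             QPushButton, QVBoxLayout, QWidget)

# Hypothetical weight file; 'best.pt' stands in for the trained Hot work-YOLOv5s weights.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

class DetectorWindow(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle('Hot work PPE detection')
        self.image_label = QLabel('Open an image to run detection')
        open_btn = QPushButton('Open image')
        open_btn.clicked.connect(self.open_image)
        layout = QVBoxLayout(self)
        layout.addWidget(open_btn)
        layout.addWidget(self.image_label)

    def open_image(self):
        path, _ = QFileDialog.getOpenFileName(self, 'Select image', '',
                                              'Images (*.jpg *.png)')
        if not path:
            return
        img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
        results = model(img)                                   # run the detector
        rendered = np.ascontiguousarray(results.render()[0])   # boxes drawn on image
        h, w, _ = rendered.shape
        qimg = QImage(rendered.data, w, h, 3 * w, QImage.Format_RGB888)
        self.image_label.setPixmap(QPixmap.fromImage(qimg))

if __name__ == '__main__':
    app = QApplication(sys.argv)
    win = DetectorWindow()
    win.show()
    sys.exit(app.exec_())
```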