
Method for detecting reflective vests and safety helmets in complex operational environments

In response to the limitations of existing reflective vest and safety helmet detection algorithms in complex construction site environments, such as low detection efficiency, poor accuracy, and difficulty in distinguishing small differences between the target and the background, this paper proposes an improved algorithm based on YOLOX. First, the max pooling in the spatial pyramid pooling module of the backbone network is replaced with average pooling, which weakens the influence of local maxima and reduces information loss and the risk of overfitting in the feature map. Second, a Weighted Convolutional Block Attention Module (W-CBAM) is designed and embedded in the feature fusion layer; through weight coefficients it strengthens attention to the spatial dimension of the feature map, emphasizes features of the target region, and guides the network to focus on the objects being detected, improving detection accuracy. Finally, an Adaptively Spatial Feature Fusion (ASFF) module is added to dynamically merge feature maps of different scales, resolving the inconsistency that arises in multi-scale feature fusion and capturing target features across scales more effectively. Experiments were conducted on an augmented public reflective vest and safety helmet dataset, expanded with data enhancement techniques such as image flipping and noise addition. The results show that the improved algorithm reaches a mean average precision of 98.79%, with precision and recall of 98.72% and 94.63% respectively, substantially reducing missed and false detections and outperforming both the original YOLOX algorithm and other state-of-the-art algorithms. At the same time, it runs at 68.47 frames per second, enabling accurate real-time detection. The proposed method effectively addresses the information loss caused by max pooling, strengthens the expressive power of the feature maps, performs accurately and efficiently on a high-quality dataset with abundant samples, satisfies the detection requirements of construction environments, and shows promising application potential.
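For a concrete picture of the first modification, below is a minimal PyTorch-style sketch of an SPP block whose max-pooling branches are replaced by average pooling. The kernel sizes (5, 9, 13), channel layout, and SiLU activation follow the common YOLOX convention and are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SPPAvg(nn.Module):
    """Spatial pyramid pooling with average pooling in place of max pooling.

    Minimal sketch: only the pooling operator is swapped, as the abstract
    describes, to soften the influence of local maxima; the kernel sizes and
    1x1 conv layout are assumed from the usual YOLOX SPP bottleneck.
    """

    def __init__(self, in_channels, out_channels, kernel_sizes=(5, 9, 13)):
        super().__init__()
        hidden = in_channels // 2
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.SiLU(inplace=True),
        )
        # Stride-1 average pooling with padding keeps the spatial size unchanged.
        self.pools = nn.ModuleList(
            nn.AvgPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernel_sizes
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(hidden * (len(kernel_sizes) + 1), out_channels, 1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        x = self.conv1(x)
        # Concatenate the identity branch with the averaged multi-scale branches.
        x = torch.cat([x] + [pool(x) for pool in self.pools], dim=1)
        return self.conv2(x)
```

For example, `SPPAvg(1024, 512)(torch.randn(1, 1024, 20, 20))` returns a tensor of shape (1, 512, 20, 20), matching the behavior of the original max-pooling block apart from the pooling operator.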
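The abstract does not spell out how the weight coefficients enter W-CBAM, so the sketch below is only one plausible, hypothetical reading: standard CBAM channel and spatial attention whose maps are blended with the identity through learnable scalar weights, letting training emphasize the spatial branch. The class name WCBAM and the blending scheme are invented for illustration and should not be taken as the authors' exact design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class WCBAM(nn.Module):
    """CBAM with learnable weight coefficients (hypothetical reading of 'W-CBAM')."""

    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
        # Learnable scalars re-balance the two attention branches during training,
        # e.g. giving the spatial map extra emphasis, as the abstract suggests.
        self.w_channel = nn.Parameter(torch.tensor(1.0))
        self.w_spatial = nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        # Blend each attention map with the identity through its weight:
        # w -> 0 disables the branch, w = 1 recovers standard CBAM, w > 1 amplifies it.
        ca = 1.0 + self.w_channel * (self.ca(x) - 1.0)
        x = x * ca
        sa = 1.0 + self.w_spatial * (self.sa(x) - 1.0)
        return x * sa
```

With both weights initialized to 1, the module starts out as plain CBAM and can learn to strengthen or weaken either branch.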
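ASFF itself is a published technique that learns per-pixel fusion weights across pyramid levels; a simplified sketch of one fused output level is given below. It assumes the three input feature maps have already been resized to a common resolution and channel count; the full ASFF also performs that up/down-sampling, which is omitted here for brevity, and the hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class ASFFHead(nn.Module):
    """Adaptively spatial feature fusion for one output level (simplified sketch)."""

    def __init__(self, channels, weight_channels=16):
        super().__init__()
        # One 1x1 conv per input level produces a per-pixel "importance" map.
        self.weight_convs = nn.ModuleList(
            nn.Conv2d(channels, weight_channels, 1) for _ in range(3)
        )
        self.weight_merge = nn.Conv2d(3 * weight_channels, 3, 1)

    def forward(self, level0, level1, level2):
        feats = (level0, level1, level2)
        w = torch.cat([conv(f) for conv, f in zip(self.weight_convs, feats)], dim=1)
        # Softmax across the three levels gives spatially varying fusion weights
        # that sum to 1 at every pixel, resolving cross-scale inconsistency.
        w = torch.softmax(self.weight_merge(w), dim=1)
        fused = sum(w[:, i:i + 1] * feats[i] for i in range(3))
        return fused
```

A fused level can then be produced as, for example, `ASFFHead(256)(p3, p4, p5)` with three inputs of shape (N, 256, H, W).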

safety engineering; reflective vest detection; safety helmet detection; YOLOX; attention module; adaptive spatial feature fusion

Xie Guobo, Xiao Feng, Lin Zhiyi, Xie Jianhui, Wu Chenfeng


School of Computer Science, Guangdong University of Technology, Guangzhou 510006


Supported by the National Natural Science Foundation of China (61802072) and the Science and Technology Project of Guangdong Power Grid Co., Ltd. (GDKJXM20230718)

2024

Journal of Safety and Environment (安全与环境学报)
Beijing Institute of Technology; Chinese Society for Environmental Sciences; China Occupational Safety and Health Association


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.943
ISSN: 1009-6094
Year, volume (issue): 2024, 24(9)