Deep Reinforcement Learning for Object Detection Based on Improved Reward Mechanism
To improve the detection accuracy and speed of deep reinforcement learning object detection models, three modifications are made to the traditional model. To address inadequate feature extraction, a VGG16 feature extraction module integrated with a channel attention mechanism is used as the state input for reinforcement learning, capturing key information in images more comprehensively. To address the inaccurate evaluation that results from relying on intersection over union (IoU) alone as the reward, an improved reward mechanism is employed that additionally considers the center-point distance and the aspect ratio between the ground-truth and predicted boxes, making the reward more reasonable. To accelerate training convergence and make the agent's evaluation of the current state and action more objective, the Dueling DQN algorithm is used for training. Experiments on the PASCAL VOC2007 and PASCAL VOC2012 datasets show that the model needs only 4-10 candidate boxes to detect a target; compared with Caicedo-RL, accuracy improves by 9.8% and the mean IoU between the final predicted and ground-truth boxes increases by 5.6%.
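The abstract does not name the specific channel attention used; a common choice that fits the description is a squeeze-and-excitation (SE) block applied to the VGG16 convolutional features before they are flattened into the state. A minimal PyTorch sketch, assuming SE-style attention (the class name SEBlock and the reduction ratio are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention over a conv feature map."""
    def __init__(self, channels=512, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                           # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pool -> (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excitation: per-channel weights
        return x * w                                # reweight channels of the feature map
```

Applied as `SEBlock()(vgg16_conv_output)`, this lets the agent's state emphasize informative channels instead of treating all VGG16 channels equally.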
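The three factors the improved reward considers (overlap, center-point distance, and aspect ratio) match the CIoU formulation of Zheng et al.; the abstract does not give the exact formula, so the sketch below assumes a Caicedo-style ±1 step reward driven by the change in CIoU rather than plain IoU. The function names ciou and step_reward are illustrative:

```python
import math

def ciou(box_a, box_b):
    """Complete-IoU between two (x1, y1, x2, y2) boxes:
    IoU minus a center-distance penalty minus an aspect-ratio term."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter + 1e-9)

    # Squared distance between box centers
    cxa, cya = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cxb, cyb = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    center_dist = (cxa - cxb) ** 2 + (cya - cyb) ** 2

    # Squared diagonal of the smallest box enclosing both
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9

    # Aspect-ratio consistency term
    wa, ha = box_a[2] - box_a[0], box_a[3] - box_a[1]
    wb, hb = box_b[2] - box_b[0], box_b[3] - box_b[1]
    v = (4 / math.pi ** 2) * (math.atan(wb / (hb + 1e-9))
                              - math.atan(wa / (ha + 1e-9))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return iou - center_dist / diag - alpha * v

def step_reward(prev_box, new_box, gt_box):
    # Sign of the CIoU improvement after an action, mirroring the
    # Caicedo-RL +/-1 step reward but with CIoU in place of plain IoU.
    return 1.0 if ciou(new_box, gt_box) > ciou(prev_box, gt_box) else -1.0
```

Because CIoU penalizes center offset and aspect-ratio mismatch even when IoU is unchanged, two candidate boxes with equal overlap no longer receive the same reward, which is the evaluation imprecision the abstract targets.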
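Dueling DQN is named explicitly, so its head can be sketched directly; the state dimension (a flattened 512×7×7 VGG16 feature map plus a 90-dimensional action-history vector, following Caicedo-RL) and the 9-action set are assumptions about this particular model, not values from the paper:

```python
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Minimal dueling Q-network head: splits the Q-value into a state
    value V(s) and per-action advantages A(s, a), then recombines them
    with the mean-advantage baseline."""
    def __init__(self, state_dim=512 * 7 * 7 + 90, n_actions=9):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, 1024), nn.ReLU())
        self.value = nn.Linear(1024, 1)               # V(s)
        self.advantage = nn.Linear(1024, n_actions)   # A(s, a)

    def forward(self, state):
        h = self.shared(state)
        v = self.value(h)
        a = self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)
```

Separating V(s) from A(s, a) lets the network learn how good a state is independently of any single action, which is the property the abstract credits with faster convergence and more objective state-action evaluation.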

object detection; deep reinforcement learning; VGG16; attention mechanism; reward mechanism; Dueling DQN

陈盈君、武月、刘力铭


School of Information Engineering, Chang'an University, Xi'an 710064, China


2024

Computer Systems & Applications
Institute of Software, Chinese Academy of Sciences

CSTPCD
Impact factor: 0.449
ISSN: 1003-3254
Year, Volume (Issue): 2024, 33(10)