High Technology Letters (高技术通讯), 2024, Vol. 34, Issue 5: 486-495. DOI: 10.3772/j.issn.1002-0470.2024.05.005

Head pose estimation method based on improved HopeNet

张立国 (ZHANG Liguo), 胡林 (HU Lin)

Author information

  • 1. Key Laboratory of Measurement Technology and Instrumentation, Yanshan University, Qinhuangdao 066004, China

Abstract

Aiming at the poor accuracy of head pose estimation algorithms that use no prior knowledge on complex-background and multi-scale images, a head pose estimation method based on an improved HopeNet is proposed. First, a feature fusion structure is added to the backbone network so that the model can make full use of both the deep and the shallow feature information of the network, improving the model's feature-resolving power. Then, a feature squeeze-and-excitation module is added to the residual structure of the backbone, so that the network can adaptively learn weights reflecting the importance of different feature layers and the model can pay more attention to target information. Experimental results show that, compared with HopeNet, the accuracy of the proposed method on the AFLW2000 dataset is improved by 31.15% and the mean error is reduced to 4.20°. Meanwhile, the proposed method shows good robustness on complex-background images.
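The squeeze-and-excitation module mentioned in the abstract reweights feature channels by a learned importance score: channel-wise global average pooling ("squeeze"), a small bottleneck MLP with a sigmoid gate ("excitation"), then channel rescaling. A minimal NumPy sketch of that computation follows; it is an illustration of the general technique, not the authors' implementation, and the function name, weight shapes, and reduction ratio `r` are assumptions.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Channel-wise squeeze-and-excitation reweighting.

    feature_map: (C, H, W) array of activations
    w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights,
    where r is the bottleneck reduction ratio.
    """
    # Squeeze: global average pooling yields one descriptor per channel.
    z = feature_map.mean(axis=(1, 2))            # shape (C,)
    # Excitation: bottleneck MLP produces per-channel importance scores.
    s = np.maximum(w1 @ z, 0.0)                  # ReLU, shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))          # sigmoid gate, shape (C,)
    # Reweight: scale each channel by its learned importance in (0, 1).
    return feature_map * s[:, None, None]
```

In the paper's setting this reweighted output would be added back inside each residual block of the HopeNet backbone; because the sigmoid gate lies in (0, 1), the module can only attenuate channels, letting training emphasize the informative ones.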

Key words

head pose estimation / HopeNet / feature fusion / feature squeeze-and-excitation / adaptive learning

Funding

Hebei Province Special Project of Central Government Guiding Local Science and Technology Development (199477141G)

Publication year

2024
Institute of Scientific and Technical Information of China

Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.19
ISSN: 1002-0470
References: 24