
3D Gaze Estimation by Bidirectional Fusion of CNN and Transformer

To address the low accuracy of gaze estimation in unconstrained environments, where the task is easily disturbed by external factors, this study proposes a gaze estimation method with parallel convolution and attention branches and cross-branch feature fusion, improving both the effectiveness of feature fusion and overall network performance. First, the Mobile-Former network is improved by introducing a linear attention mechanism and partial convolution, which strengthens feature extraction while reducing computational cost. Second, a head-pose estimation branch based on ResNet50 pre-trained on the 300W-LP dataset is added to improve gaze estimation accuracy, and a Sigmoid function serves as a gating unit to screen effective features. Finally, facial images are fed into the network for feature extraction and fusion, and a 3D gaze direction is output. Evaluated on the MPIIFaceGaze and Gaze360 datasets, the method achieves mean angular errors of 3.70° and 10.82°, respectively. Comparison with other mainstream 3D gaze estimation methods verifies that the proposed model estimates 3D gaze direction accurately while reducing computational complexity.
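As a rough illustration of the building blocks the abstract names, the PyTorch sketch below shows one common form of each: a kernelized linear attention (O(N) in sequence length), a FasterNet-style partial convolution that convolves only a fraction of the channels, and a Sigmoid gating unit that screens head-pose features before fusing them with the gaze branch. All class names, dimensions, and wiring here are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketches of three components named in the abstract. The module
# names, feature widths, and fusion wiring are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """Kernelized attention with feature map phi(x) = elu(x) + 1."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, C)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k = F.elu(q) + 1, F.elu(k) + 1
        kv = torch.einsum('bnc,bnd->bcd', k, v)           # sum_m phi(k_m) v_m^T
        z = 1.0 / (torch.einsum('bnc,bc->bn', q, k.sum(dim=1)) + 1e-6)
        return torch.einsum('bnc,bcd,bn->bnd', q, kv, z)  # normalized output


class PartialConv(nn.Module):
    """3x3 conv on the first dim//ratio channels; identity on the rest."""
    def __init__(self, dim: int, ratio: int = 4):
        super().__init__()
        self.dim_conv = dim // ratio
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv, 3,
                              padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.dim_conv, x.size(1) - self.dim_conv], 1)
        return torch.cat([self.conv(x1), x2], dim=1)


class GatedPoseFusion(nn.Module):
    """Sigmoid gate that screens head-pose features, then residual fusion."""
    def __init__(self, gaze_dim: int, pose_dim: int):
        super().__init__()
        self.proj = nn.Linear(pose_dim, gaze_dim)  # align feature widths
        self.gate = nn.Sequential(nn.Linear(gaze_dim, gaze_dim), nn.Sigmoid())

    def forward(self, gaze_feat, pose_feat):
        pose = self.proj(pose_feat)
        return gaze_feat + self.gate(pose) * pose


# Toy shapes: a 2048-d pooled ResNet50 pose feature fused with a 128-d gaze
# feature, then regressed to a 3D gaze direction vector.
fused = GatedPoseFusion(128, 2048)(torch.randn(2, 128), torch.randn(2, 2048))
print(nn.Linear(128, 3)(fused).shape)  # torch.Size([2, 3])
```

The gated fusion lets the network suppress head-pose features per dimension when they are unreliable, which matches the abstract's description of the Sigmoid gate as a screen for effective features.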

3D gaze estimation; parallel structure; bidirectional fusion; partial convolution; linear attention mechanism

吕嘉琦、王长元


School of Computer Science and Engineering, Xi'an Technological University, Xi'an 710021, China


Funding: National Natural Science Foundation of China (52072293)

Journal: Computer Systems & Applications (计算机系统应用)
Publisher: Institute of Software, Chinese Academy of Sciences
Indexed in: CSTPCD
Impact factor: 0.449
ISSN: 1003-3254
Year, Volume (Issue): 2024, 33(10)