
Optical flow estimation network combining convolution and axial attention

To achieve higher accuracy, existing optical flow estimation networks often rely on a correlation cost volume and gated recurrent units (GRU) for iterative refinement; however, this incurs a heavy computational cost and limits deployment on edge devices. To obtain a more lightweight optical flow estimation method, this paper proposes the local constraint and local dilation (LC-LD) module, which combines convolution with a single pass of axial attention to replace self-attention. At low computational cost, the module attends with different degrees of importance to the neighborhood of each matching feature point, producing a more accurate correlation cost volume, which in turn reduces the number of iterations and yields a lighter model. In addition, shuffled convex-optimization upsampling is proposed: by combining group convolution and a channel-shuffle operation with convex-optimization upsampling, it reduces the parameter count while further improving accuracy. Experimental results show that the proposed method significantly improves running efficiency while maintaining high accuracy, demonstrating strong application potential.
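The abstract's central idea, replacing full self-attention with one pass of axial attention (rows first, then columns), can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: a single head and identity Q/K/V projections are simplifying assumptions made here for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x):
    """One pass of axial attention over a (H, W, C) feature map:
    each token attends within its row, then within its column."""
    H, W, C = x.shape
    # Row attention: scores are (H, W, W) -- each row of W tokens attends to itself.
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(C)
    x = softmax(scores, axis=-1) @ x
    # Column attention: transpose so columns become rows, then repeat.
    xt = x.transpose(1, 0, 2)                      # (W, H, C)
    scores = xt @ xt.transpose(0, 2, 1) / np.sqrt(C)
    xt = softmax(scores, axis=-1) @ xt
    return xt.transpose(1, 0, 2)                   # back to (H, W, C)

# Why this is cheaper: full self-attention over H*W tokens forms an
# (H*W) x (H*W) score matrix, while axial attention forms only
# H*W*(H+W) score entries in total.
H, W = 32, 32
full_cost = (H * W) ** 2       # 1 048 576 score entries
axial_cost = H * W * (H + W)   #    65 536 score entries (16x fewer here)
```

At a 32×32 feature-map resolution the score tensor shrinks by a factor of HW/(H+W) = 16, which is the kind of saving that makes fewer GRU iterations affordable.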

optical flow estimation; iterations; convolutional neural networks; axial attention mechanism; gated recurrent unit network; deep learning; time optimization; edge computing platform
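The shuffled convex-optimization upsampling described in the abstract builds on two generic components: grouped convolution (which cuts parameters by the group count) and a ShuffleNet-style channel shuffle (which re-mixes information across groups). Below is a hedged sketch of just these two building blocks, not the paper's exact upsampler; the shapes and the 64-channel example are illustrative assumptions.

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle on a (C, H, W) tensor:
    split C into groups, then interleave so later grouped convs
    see channels from every group. C must be divisible by groups."""
    C, H, W = x.shape
    return (x.reshape(groups, C // groups, H, W)
             .transpose(1, 0, 2, 3)
             .reshape(C, H, W))

def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution: each output channel only
    connects to c_in / groups input channels, so grouping by g
    divides the parameter count by g."""
    return (c_in // groups) * c_out * k * k

dense = conv_params(64, 64, 3)              # 36 864 weights
grouped = conv_params(64, 64, 3, groups=4)  #  9 216 weights (4x fewer)
```

For example, with 4 channels and 2 groups the shuffle maps channel order [0, 1, 2, 3] to [0, 2, 1, 3], so a following grouped convolution receives one channel from each original group.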

LIU Shuang, CHEN Jing


School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, Jiangsu, China

Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, Jiangsu, China


Youth Fund of the Natural Science Foundation of Jiangsu Province

BK20150159

2024

CAAI Transactions on Intelligent Systems
Chinese Association for Artificial Intelligence; Harbin Engineering University

Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.672
ISSN: 1673-4785
Year, Volume (Issue): 2024, 19(3)