Garment image instance segmentation method based on improved YOLACT
GU Meihua 1, DONG Xiaoxiao 1, HUA Wei 1, CUI Lin 1
Author information
- 1. School of Electronics and Information, Xi'an Polytechnic University, Xi'an 710048, Shaanxi, China
Abstract
A garment image instance segmentation method based on improved YOLACT was proposed to address the low accuracy and speed of instance segmentation on garment images. Taking YOLACT as the base model, depthwise separable convolution was first adopted in the ResNet-101 backbone in place of standard convolution to reduce the amount of computation and the number of parameters and to speed up the model. Then, an efficient channel attention module was introduced after the prototype generation network (protonet) to optimize the output features, capture the cross-channel interaction information of garment images, and strengthen the feature extraction ability of the mask branch. Finally, the Leaky ReLU activation function was used during training so that weight information is still updated during backpropagation, improving the model's ability to extract negative-valued feature information from garment images. Experimental results show that, compared with the original model, the proposed method effectively reduces the number of model parameters and improves both speed and accuracy: the speed increased by 4.82 frames per second and the average precision increased by 5.4%.
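The abstract names three concrete architectural changes: depthwise separable convolution in the ResNet-101 backbone, an efficient channel attention (ECA) block applied to the protonet output, and Leaky ReLU as the activation function. The PyTorch sketch below illustrates what such building blocks typically look like; it is a minimal illustration inferred from the abstract, not the authors' released code, and the module names, kernel sizes, negative slope, and the 256-channel prototype feature map are all illustrative assumptions.

```python
# Minimal sketch of the three modifications described in the abstract.
# Everything here (module names, channel counts, hyperparameters) is an
# illustrative assumption, not the paper's official implementation.
import math
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        # Leaky ReLU instead of ReLU, so negative activations still pass a gradient.
        self.act = nn.LeakyReLU(negative_slope=0.01, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class ECA(nn.Module):
    """Efficient channel attention: a 1D convolution over pooled channel descriptors."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapted to the channel count, as in ECA-Net.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        w = self.pool(x)                                  # (N, C, 1, 1) channel descriptor
        w = self.conv(w.squeeze(-1).transpose(-1, -2))    # cross-channel interaction
        w = self.sigmoid(w.transpose(-1, -2).unsqueeze(-1))
        return x * w                                      # re-weight the feature map


if __name__ == "__main__":
    feats = torch.randn(1, 256, 69, 69)        # assumed FPN-level input to the protonet
    proto = DepthwiseSeparableConv(256, 256)(feats)
    proto = ECA(256)(proto)                    # attention applied after the prototype branch
    print(proto.shape)                         # torch.Size([1, 256, 69, 69])
```

The depthwise/pointwise split is what cuts the multiply-accumulate count relative to a standard 3x3 convolution, while the ECA block models cross-channel interaction with a single 1D convolution instead of a fully connected bottleneck, keeping the added parameter cost negligible.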
Keywords
garment image instance segmentation / YOLACT / depthwise separable convolution / efficient channel attention / activation function
Funding
Young Scientists Fund of the National Natural Science Foundation of China (61901347)
Publication year
2024