Journal of Beijing Institute of Graphic Communication, 2024, Vol. 32, Issue (3): 45-51.

Research on Face Expression Recognition Algorithm Based on Self-Attention

刘一楠¹ 王佳¹ 简腾飞² 曹少中¹

Author Information

  • 1. School of Information Engineering, Beijing Institute of Graphic Communication, Beijing 102600, China
  • 2. Zunyi Fenggang Power Supply Bureau, Guizhou Power Grid Co., Ltd., Zunyi 564200, China

Abstract

This paper adopts the Swin Transformer and Vision Transformer networks, combined with transfer learning, to conduct an in-depth study of the facial expression recognition task. To verify the performance of the different networks, four commonly used facial expression datasets (RAF-DB, Fer2013, CK+, and JAFFE) are selected for the study. By comparing different models of the two networks, the experimental results show that the W-MSA and SW-MSA self-attention structures in the Swin Transformer network attend to expression features more accurately, achieving recognition accuracies of 99.48%, 95.60%, and 86.73% on the CK+, JAFFE, and RAF-DB datasets, respectively. The results verify that Transformer networks based on the self-attention mechanism have great potential for the facial expression recognition task.
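The paper itself provides no code. As an illustration of the windowed attention idea the abstract refers to, the following is a minimal NumPy sketch of W-MSA (and, via a cyclic shift, a crude SW-MSA): the feature map is partitioned into non-overlapping windows and self-attention is computed only within each window. It is single-head, uses random projection matrices, and omits the relative position bias and the attention masking of the real Swin Transformer; all function names here are illustrative, not from the paper.

```python
import numpy as np

def window_partition(x, ws):
    # (H, W, C) feature map -> (num_windows, ws*ws, C) window tokens
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def window_self_attention(x, ws, shift=0, seed=0):
    # W-MSA (shift=0) or a simplified SW-MSA (shift=ws//2):
    # attention is restricted to tokens inside the same window.
    H, W, C = x.shape
    if shift:
        x = np.roll(x, (-shift, -shift), axis=(0, 1))  # cyclic shift
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    win = window_partition(x, ws)                      # (nW, ws*ws, C)
    q, k, v = win @ Wq, win @ Wk, win @ Wv
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(C))
    out = attn @ v                                     # per-window attention only
    # reverse the window partition back to (H, W, C)
    nH, nW = H // ws, W // ws
    out = out.reshape(nH, nW, ws, ws, C).transpose(0, 2, 1, 3, 4).reshape(H, W, C)
    if shift:
        out = np.roll(out, (shift, shift), axis=(0, 1))
    return out
```

The key property, which the test below checks, is locality: with shift=0 a perturbation in one window cannot affect the output of any other window, which is what lets alternating W-MSA and SW-MSA layers trade global attention cost for windowed attention plus cross-window mixing.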

Key words

deep learning / facial expression recognition / transfer learning / Transformer / self-attention


Funding

Beijing Natural Science Foundation - Beijing Municipal Education Commission Science and Technology Program Key Project (KZ202010015021)

Professional Degree Postgraduate Joint Training Base Construction Project (21090223001)

Publication Year

2024

Journal of Beijing Institute of Graphic Communication
Published by Beijing Institute of Graphic Communication

Impact factor: 0.247
ISSN: 1004-8626
References: 18