
Structural Dependence Learning Based on Self-attention for Face Alignment

Self-attention aggregates similar feature information to enhance features. In face alignment, however, the attention map covers non-face areas, so it can be disturbed in challenging cases such as occlusions and cause landmark prediction to fail. In addition, the variance of the learned feature similarity is not large enough in our experiments. To this end, we propose structural dependence learning based on self-attention for face alignment (SSFA). It limits self-attention learning to the facial range and adaptively builds significant landmark structure dependencies. Compared with other state-of-the-art methods, SSFA effectively improves performance on several standard facial landmark detection benchmarks and adapts better to challenging cases.
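The abstract's core idea of restricting self-attention to the facial range can be illustrated with a minimal sketch. The function, its inputs, and the mask construction below are illustrative assumptions, not the paper's actual SSFA architecture: attention scores for non-face positions are masked out before the softmax, so feature aggregation only draws on locations inside the face.

```python
import numpy as np

def masked_self_attention(feats, face_mask):
    """Sketch: self-attention restricted to the facial region.

    feats:     (N, d) array, one feature row per spatial location.
    face_mask: (N,) boolean array, True where the location lies on the face.

    Keys outside the face get a score of -inf, so after the softmax their
    attention weight is exactly zero and they cannot disturb aggregation.
    """
    d = feats.shape[1]
    scores = feats @ feats.T / np.sqrt(d)                   # (N, N) similarity
    scores = np.where(face_mask[None, :], scores, -np.inf)  # mask non-face keys
    scores -= scores.max(axis=1, keepdims=True)             # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)           # row-wise softmax
    return weights @ feats                                  # aggregated features

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))
mask = np.array([True, True, True, False, False])  # last two locations: non-face
out = masked_self_attention(feats, mask)
```

Since each output row is a convex combination of face-region features only, occluded or background locations contribute nothing, which is the intuition behind limiting the attention to the facial range.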

Computer vision; face alignment; self-attention; facial structure; contextual information

Biying Li, Zhiwei Liu, Wei Zhou, Haiyun Guo, Xin Wen, Min Huang, Jinqiao Wang


Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100083, China

Alpha (Beijing) Private Equity, Beijing 100083, China

School of Computer Science, National University of Defense Technology, Changsha 410073, China


National Key R&D Program of China (2021YFE0205700); National Natural Science Foundation of China (62076235, 62276260, 62002356); Zhejiang Lab (2021KH0AB07); Ministry of Education Industry-University Cooperative Education Program, Wei Qiao Venture Group (E1425201)

2024

Machine Intelligence Research
Institute of Automation, Chinese Academy of Sciences

Indexed in: CSTPCD, EI
Impact factor: 0.49
ISSN:2731-538X
Year, Volume (Issue): 2024, 21(3)