Region-Aware Fashion Contrastive Learning for Unified Attribute Recognition and Composed Retrieval
Clothing attribute recognition has become an essential technology, which enables users to automatically identify the characteristics of clothes and search for clothing images with similar attributes. However, existing methods cannot recognize newly added attributes and may fail to capture region-level visual features. To address these issues, a region-aware fashion contrastive language-image pre-training (RaF-CLIP) model was proposed. This model aligned cropped and segmented images with category and multiple fine-grained attribute texts, achieving the matching of fashion regions and their corresponding texts through contrastive learning. Clothing retrieval finds suitable clothing based on user-specified clothing categories and attributes; to further improve retrieval accuracy, an attribute-guided composed network (AGCN) was introduced as an additional component on RaF-CLIP, specifically designed for the composed image retrieval task. This task aims to modify a reference image according to a textual expression so as to retrieve the expected target. By adopting transformer-based bidirectional attention and a gating mechanism, the network realizes the fusion and selection of image features and attribute text features. Experimental results show that the proposed model achieves a mean precision of 0.663 3 on the attribute recognition task and a recall@10 (recall@k is defined as the percentage of correct samples appearing in the top k retrieval results) of 39.18 on the composed image retrieval task, satisfying user needs for freely searching for clothing through images and texts.
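The region-text alignment described above follows the CLIP paradigm: each cropped or segmented fashion region is pulled toward its paired category/attribute text and pushed away from the other texts in the batch. A minimal NumPy sketch of such a symmetric contrastive objective (illustrative only; the function name, batch construction, and temperature value are assumptions, not taken from the paper):

```python
import numpy as np

def contrastive_matching_loss(region_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss: the i-th fashion-region embedding
    should match the i-th text embedding and repel all other texts in
    the batch (and vice versa)."""
    # L2-normalize, then cosine-similarity logits scaled by temperature.
    r = region_emb / np.linalg.norm(region_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = r @ t.T / temperature
    n = logits.shape[0]
    # Cross-entropy with the matching pairs on the diagonal,
    # averaged over both the region-to-text and text-to-region directions.
    log_sm_rt = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_sm_tr = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(np.trace(log_sm_rt) + np.trace(log_sm_tr)) / (2 * n)
```

When embeddings are well aligned, the diagonal of the similarity matrix dominates each row and column and the loss approaches zero; mispaired regions and texts inflate it.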

attribute recognition; image retrieval; contrastive language-image pre-training (CLIP); image-text matching; transformer

WANG Kangping, ZHAO Mingbo

College of Information Science and Technology, Donghua University, Shanghai 201620, China

National Natural Science Foundation of China

61971121

2024

Journal of Donghua University (English Edition)
Donghua University

Impact factor: 0.091
ISSN:1672-5220
Year, Volume (Issue): 2024, 41(4)