DPStyler: Dynamic PromptStyler for Source-Free Domain Generalization

Source-Free Domain Generalization (SFDG) aims to develop a model that works for unseen target domains without relying on any source domain. Research in SFDG primarily builds upon the existing knowledge of large-scale vision-language models and utilizes the pre-trained model's joint vision-language space to simulate style transfer across domains, thus eliminating the dependency on source domain images. However, two questions remain open: how to efficiently simulate rich and diverse styles using text prompts, and how to extract domain-invariant information useful for classification from encoder features that entangle both semantic and style information. In this paper, we introduce Dynamic PromptStyler (DPStyler), comprising Style Generation and Style Removal modules to address these issues. The Style Generation module refreshes all styles at every training epoch, while the Style Removal module eliminates variations in the encoder's output features caused by input styles. Moreover, since the Style Generation module, which generates style word vectors by random sampling or style mixing, makes the model sensitive to input text prompts, we introduce a model ensemble method to mitigate this sensitivity. Extensive experiments demonstrate that our framework outperforms state-of-the-art methods on benchmark datasets.
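The sketch below (plain PyTorch, not the authors' released code) illustrates the Style Generation step described above: a bank of learnable style word vectors is refreshed at every training epoch, either by random re-sampling or by mixing two existing styles, and each style is paired with every class prompt for a frozen text encoder such as CLIP's. The class name, dimensions, Gaussian initialization, and convex mixing rule are illustrative assumptions, not details taken from the paper.

import torch


class StyleGenerator(torch.nn.Module):
    """Bank of pseudo style-word vectors refreshed every epoch, either by
    random re-sampling or by mixing two existing styles (an illustrative
    stand-in for the Style Generation module described in the abstract)."""

    def __init__(self, num_styles: int = 80, embed_dim: int = 512):
        super().__init__()
        # One learnable "style word" embedding per simulated source style.
        self.style_words = torch.nn.Parameter(0.02 * torch.randn(num_styles, embed_dim))

    @torch.no_grad()
    def refresh(self, mix_prob: float = 0.5):
        # Called once at the start of each training epoch to refresh all styles.
        n, d = self.style_words.shape
        for i in range(n):
            if torch.rand(()) < mix_prob:
                # Style mixing: convex combination of two randomly chosen styles.
                j, k = torch.randint(n, (2,))
                lam = torch.rand(())
                self.style_words[i] = lam * self.style_words[j] + (1 - lam) * self.style_words[k]
            else:
                # Random sampling: draw a fresh style word vector.
                self.style_words[i] = 0.02 * torch.randn(d)

    def forward(self, class_embeddings: torch.Tensor) -> torch.Tensor:
        # Pair every style word with every class prompt embedding, producing
        # [num_styles, num_classes, 2, embed_dim] token pairs for a frozen
        # text encoder (e.g. CLIP's) to turn into style-varied class features.
        s = self.style_words[:, None, None, :]      # [S, 1, 1, D]
        c = class_embeddings[None, :, None, :]      # [1, C, 1, D]
        S, C = s.shape[0], c.shape[1]
        return torch.cat([s.expand(S, C, 1, -1), c.expand(S, C, 1, -1)], dim=2)

In a training loop, refresh() would be invoked at every epoch before generating style-content prompts; the Style Removal module and the model ensemble over prompts operate downstream on the style-varied features this module produces and are not sketched here.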

Keywords: Training, Vectors, Adaptation models, Dogs, Uncertainty, Birds, Electronic mail, Stability analysis, Data models, Sensitivity

Yunlong Tang, Yuxuan Wan, Lei Qi, Xin Geng

Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Ministry of Education, Southeast University, Nanjing, China

Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Ministry of Education, Southeast University, Nanjing, China|National Center of Technology Innovation for EDA, Nanjing, China

2025

IEEE Transactions on Multimedia

Year, Volume (Issue): 2025, 27(1)