A focally discriminative loss for unsupervised domain adaptation
The maximum mean discrepancy (MMD), a representative metric of the distribution discrepancy between a source domain and a target domain, has been widely applied in unsupervised domain adaptation (UDA), where the two domains follow different distributions and labels are available only in the source domain. However, MMD and its class-wise variants may ignore intra-class compactness and inter-class separability, thus reducing the discriminability of the learned feature representations. This paper proposes a focally discriminative loss for unsupervised domain adaptation. The method improves the discriminative ability of MMD in two ways: (1) the MMD weights are re-designed so as to align the distributions of relatively hard classes across domains; (2) a focally contrastive loss is explored to trade off positive sample pairs against negative ones for better discrimination. Integrating the two losses not only pulls intra-class features close together but also pushes inter-class features far apart. Moreover, the improved loss is simple yet effective, and it can be extended to network structures with an attention mechanism. Experiments on several domain adaptation datasets verify the effectiveness of the proposed method.
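The squared MMD that the proposed loss builds on compares the mean kernel embeddings of source and target features. The following is a minimal NumPy sketch of the plain (unweighted) biased estimator with an RBF kernel; the bandwidth `sigma` and the synthetic Gaussian features are illustrative assumptions, not details from the paper.

```python
import numpy as np


def rbf_kernel(x, y, sigma):
    # Pairwise RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
    sq_dists = (np.sum(x ** 2, axis=1)[:, None]
                + np.sum(y ** 2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))


def mmd2(source, target, sigma=4.0):
    # Biased squared-MMD estimate:
    #   E[k(s, s')] + E[k(t, t')] - 2 * E[k(s, t)].
    k_ss = rbf_kernel(source, source, sigma)
    k_tt = rbf_kernel(target, target, sigma)
    k_st = rbf_kernel(source, target, sigma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()


rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 16))   # "source" features
tgt = rng.normal(0.5, 1.0, size=(200, 16))   # mean-shifted "target" features
same = rng.normal(0.0, 1.0, size=(200, 16))  # drawn from the source distribution

# A distribution shift yields a larger MMD than a fresh sample
# from the same distribution.
print(mmd2(src, tgt) > mmd2(src, same))
```

In practice the bandwidth is usually set by a median heuristic or a mixture of kernels; the class-wise weighting and the focally contrastive term described above are then added on top of this base statistic.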
Keywords: unsupervised domain adaptation; weighted maximum mean discrepancy; focally contrastive loss; attention mechanism