Neural Networks, 2022, Vol. 149. DOI: 10.1016/j.neunet.2022.02.011

Deep adversarial transition learning using cross-grafted generative stacks

Hou, Jinyong 1; Ding, Xuejie 1; Deng, Jeremiah D. 1; Cranefield, Stephen 1

Author information

  • 1. Univ Otago

Abstract

Current works on deep domain adaptation in computer vision have mainly focused on learning domain-invariant features from different domains, a common approach that has achieved only limited success in transfer learning. In this paper, we present a novel "deep adversarial transition learning" (DATL) framework that bridges the domain gap by generating intermediate, transitional spaces between the source and target domains, using adjustable, cross-grafted generative network stacks and effective adversarial learning between the transitions. Specifically, variational auto-encoders (VAEs) are constructed for the domains, and bidirectional transitions are formed by cross-grafting the VAEs' decoder stacks. Generative adversarial networks are then employed to map the target domain data to the label space of the source domain, which is achieved by aligning the transitions initiated by the different domains. This results in a new, effective learning paradigm in which training and testing are carried out in the associated transitional spaces instead of the original domains. Experimental results demonstrate that our method outperforms the state of the art on a number of unsupervised domain adaptation benchmarks. (C) 2022 Elsevier Ltd. All rights reserved.
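
To make the cross-grafting idea described above concrete, the sketch below shows one way the bidirectional transitions could be wired up in PyTorch. It is an illustrative assumption, not the authors' implementation: the fully connected layer sizes, the two-stack decoder split, and the names DomainVAE and cross_grafted_generate are placeholders, and the adversarial alignment step is only indicated in a comment.

    # Hypothetical sketch of cross-grafted generative stacks (not the paper's code).
    import torch
    import torch.nn as nn

    class DomainVAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=32, h_dim=256):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
            self.fc_mu = nn.Linear(h_dim, z_dim)
            self.fc_logvar = nn.Linear(h_dim, z_dim)
            # The decoder is split into two stacks so that stacks from
            # different domains can be cross-grafted.
            self.dec_lower = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU())
            self.dec_upper = nn.Sequential(nn.Linear(h_dim, x_dim), nn.Sigmoid())

        def encode(self, x):
            h = self.encoder(x)
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            # Reparameterisation trick: sample a latent code z.
            return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def cross_grafted_generate(x, enc_vae, lower_vae, upper_vae):
        # Encode with one domain's VAE, then decode through grafted stacks:
        # the lower stack from one domain and the upper stack from the other.
        z = enc_vae.encode(x)
        return upper_vae.dec_upper(lower_vae.dec_lower(z))

    # Bidirectional transitions between a source (S) and a target (T) domain.
    vae_s, vae_t = DomainVAE(), DomainVAE()
    x_s, x_t = torch.rand(8, 784), torch.rand(8, 784)
    g_st = cross_grafted_generate(x_s, vae_s, vae_s, vae_t)  # S-initiated transition
    g_ts = cross_grafted_generate(x_t, vae_t, vae_s, vae_t)  # T-initiated transition
    # A GAN discriminator would then be trained to align g_st and g_ts, so that a
    # classifier trained on source-labelled transitions also applies to target data.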

Key words

Domain adaptation; Variational auto-encoders; Generative adversarial networks; Transfer learning

Publication year: 2022
Journal: Neural Networks
Indexed in: EI; SCI
ISSN: 0893-6080
Cited by: 3
References: 71