To address the poor results and limited diversity of existing deep learning-based image inpainting methods, a domain adaptation approach based on pre-training was proposed. Learnable knowledge was transferred from the source domain to supplement the information required during training. Structural information in the feature space was explored as a shared representation between the source and target domains. An adaptive cross-domain distance consistency loss was proposed to preserve the relative distances between the source and target domains by adaptively adjusting the loss weight. Experimental results demonstrate that the proposed method effectively improves inpainting quality and realism, and exhibits good generalization performance.
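The core of a cross-domain distance consistency loss is matching the *relative* pairwise-distance structure of features from the two domains. The abstract does not give the exact formulation, so the sketch below is a minimal illustration under assumptions: features are batched row vectors, distances are Euclidean, each row of the distance matrix is normalized so only relative distances matter, and the adaptive weight is simplified to a scalar argument (all function and variable names are hypothetical).

```python
import numpy as np

def pairwise_distances(feats):
    # feats: (n, d) feature matrix -> (n, n) Euclidean distance matrix
    diff = feats[:, None, :] - feats[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def distance_consistency_loss(src_feats, tgt_feats, weight=1.0):
    # Pairwise distance structure within each domain's feature batch
    d_src = pairwise_distances(src_feats)
    d_tgt = pairwise_distances(tgt_feats)
    # Row-normalize so only the RELATIVE distances are compared;
    # this makes the loss invariant to a global scaling of features
    p_src = d_src / (d_src.sum(axis=1, keepdims=True) + 1e-8)
    p_tgt = d_tgt / (d_tgt.sum(axis=1, keepdims=True) + 1e-8)
    # weight stands in for the adaptive loss weight described in the abstract
    return weight * np.mean(np.abs(p_src - p_tgt))
```

Because of the row normalization, uniformly rescaling one domain's features leaves the loss unchanged, while distorting the relative geometry between samples increases it.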