Deep generative methods have recently made considerable progress in image inpainting by employing coarse-to-fine strategies. However, multi-stage inpainting methods with serially connected sub-networks often produce discontinuous image structures and blurred details, owing to inaccurate structural localization and the limited feature expressiveness of the bottleneck layer. To address these problems, a multi-resolution feature collaborative image inpainting network with a parallel multi-resolution structure is proposed. The damaged image is encoded in parallel at multiple resolutions to learn structural features at different scales, and an iterative fusion module dynamically fuses the multi-scale information, providing more accurate localization for recovering the damaged structure and thus generating structurally coherent images. In the bottleneck layer, a gated multi-feature extraction module combines the advantages of the attention mechanism and the convolutional operation: it captures long-distance dependencies along different dimensions and extracts features under different receptive fields, after which gated residual fusion adjusts the weights of the multiple features. This enhances the feature expressiveness of the bottleneck layer and allows the details of missing regions to be recovered more faithfully. Extensive experiments on the CelebA-HQ, FFHQ and Paris StreetView datasets show that the proposed method outperforms other image inpainting methods in PSNR, SSIM and FID metrics as well as in visual quality.
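
The gated residual fusion mentioned above can be illustrated with a minimal sketch: a learned gate mixes two feature branches (e.g., an attention branch and a convolutional branch) and adds the input back as a residual. The abstract does not specify the exact gating mechanism, so the per-channel sigmoid gate, shapes, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_residual_fusion(x, branch_a, branch_b, gate_logits):
    """Hypothetical sketch of gated residual fusion.

    x, branch_a, branch_b: (C, H, W) feature maps, e.g. the bottleneck
    input, an attention-branch output, and a convolution-branch output.
    gate_logits: (C, 1, 1) learnable per-channel gating parameters
    (an assumption; the paper's gate may be computed differently).
    """
    g = sigmoid(gate_logits)                 # per-channel weights in (0, 1)
    fused = g * branch_a + (1.0 - g) * branch_b
    return x + fused                         # residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
a = rng.standard_normal((4, 8, 8))           # stand-in for attention features
b = rng.standard_normal((4, 8, 8))           # stand-in for conv features
logits = np.zeros((4, 1, 1))                 # gate = 0.5: equal mix of branches
out = gated_residual_fusion(x, a, b, logits)
```

With zero logits the sigmoid gate is 0.5, so the two branches contribute equally; during training, the gate parameters would shift this balance per channel.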