In defocus blur detection (DBD) model training, existing methods provide no learning optimization for response-error areas (regions where the extracted image features do not correspond to the original image), and regions of homogeneous blur as well as boundary transitions between focused and defocused areas remain challenging to handle during recognition. This paper proposes a re-perception dual-model joint training method and a multi-scale semantic fusion defocus blur detection network with channel attention. In the training phase, incorrectly responded prediction regions are mapped onto a new synthetic image so that the model re-perceives the image features at the erroneous locations, driving further learning. The approach builds a dual model, consisting of a focus prediction model and a defocus prediction model, that exploits the complementary nature of the DBD task: redundant response areas in one model, which contain excess image feature information, are fed back to the other model to strengthen training. Because defocus blur features are sensitive to scale, a multi-scale feature fusion module progressively integrates semantic information at different scales. In addition, a global channel attention module is designed in the feature extraction stage so that the model concentrates on the feature information that is effective for prediction and gains flexibility under different input scenarios. Comparative experiments show that the F-Measure of the proposed method improves by 0.082, 0.051, and 0.264, while the MAE decreases by 0.032, 0.018, and 0.144, respectively.
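The abstract does not specify the exact form of the global channel attention module; the following is a minimal PyTorch-style sketch, assuming a squeeze-and-excitation-style design in which global pooling summarizes each channel and a small bottleneck MLP produces per-channel weights. The class name, reduction ratio, and layer layout are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GlobalChannelAttention(nn.Module):
    """Illustrative channel attention block (squeeze-and-excitation style):
    global average pooling summarizes each feature channel, a bottleneck MLP
    predicts per-channel weights, and the input features are rescaled."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # global spatial pooling per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # squeeze
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),  # excite
            nn.Sigmoid(),                                # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # reweight feature channels
```

A block like this can be inserted after each encoder stage so that channels carrying blur-discriminative responses are emphasized before multi-scale fusion; the actual placement and dimensions in the proposed network are given in the paper body.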