Task-Specific Context Decoupling Object Detection Method for Remote Sensing Images
A remote sensing image detection method, FasterYOLO-TSCDH, based on task-specific context decoupling and fast partial convolution is proposed to address the high miss rate and poor bounding-box regression accuracy that typical object detection models exhibit on remote sensing images, where objects are small and dense, vary greatly in scale, appear in arbitrary orientations, and sit against complex backgrounds. The detection head is redesigned as a task-specific context-decoupled head that separates the classification and regression tasks and fuses feature maps with different spatial and semantic characteristics for each task, reducing mutual interference between the tasks and improving detection accuracy and robustness. A fast partial-convolution multi-level aggregation module is proposed to improve the cross-stage partial convolution module in the feature extraction stage, strengthening feature extraction while offsetting the parameter and computation overhead introduced by the decoupled head. Wise-IoU is adopted to dynamically evaluate anchor-box quality, reducing the negative impact of both high- and low-quality anchors on bounding-box regression and improving overall regression performance. Experimental results show that the proposed method achieves mAP@IoU=0.5 of 65.4% and 51.3% on two common remote sensing image datasets, DOTAv2 and AI-TOD, respectively, 3 to 5 percentage points higher than the baseline model, demonstrating the feasibility and effectiveness of the proposed improvements.
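The "fast partial convolution" referenced above presumably follows the partial-convolution (PConv) idea in which only a fraction of the input channels is convolved and the remaining channels are passed through unchanged, which is what keeps the parameter and FLOP cost low. The PyTorch sketch below illustrates that general idea only; the module name PartialConv and the conv_ratio parameter are placeholders for illustration, not the paper's actual multi-level aggregation module.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Illustrative partial convolution: convolve a fraction of the channels,
    pass the rest through untouched, reducing compute versus a full conv."""

    def __init__(self, channels: int, conv_ratio: float = 0.25):
        super().__init__()
        # Number of channels that actually go through the 3x3 convolution.
        self.conv_channels = max(1, int(channels * conv_ratio))
        self.pass_channels = channels - self.conv_channels
        self.pconv = nn.Conv2d(self.conv_channels, self.conv_channels,
                               kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split along the channel dimension, convolve the first part only.
        x_conv, x_pass = torch.split(
            x, [self.conv_channels, self.pass_channels], dim=1)
        return torch.cat((self.pconv(x_conv), x_pass), dim=1)


if __name__ == "__main__":
    # Example: a 256-channel feature map; only 64 channels are convolved.
    feat = torch.randn(1, 256, 64, 64)
    out = PartialConv(256)(feat)
    print(out.shape)  # torch.Size([1, 256, 64, 64])
```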
deep learning; object detection; remote sensing image; small object; decoupled detection; feature extraction