Adjacent Coordination Network for Salient Object Detection in 360° Omnidirectional Images
To address large object scale variation, edge discontinuity, and boundary blurring in salient object detection (SOD) for 360° omnidirectional images, a method based on an Adjacent Coordination Network (ACoNet) is proposed. First, an adjacent detail fusion module captures detail and edge information from adjacent features, enabling accurate localization of salient objects. Then, a semantic-guided feature aggregation module aggregates multi-scale semantic information between shallow and deep features and suppresses the noise transmitted by shallow features, alleviating discontinuous salient objects and blurred object-background boundaries in the decoding stage. In addition, a multi-scale semantic fusion submodule enlarges the receptive field across different convolution layers, allowing salient object boundaries to be learned more effectively. Extensive experiments on two public datasets show that, compared with 13 state-of-the-art methods, the proposed approach achieves significant improvements on six objective evaluation metrics, and the visualized results exhibit sharper edge contours and clearer spatial structural details in the saliency maps.
Keywords: Salient Object Detection (SOD); Deep learning; 360° omnidirectional images; Multi-scale features
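As one illustration of the receptive-field enlargement described above, the following is a minimal, hypothetical PyTorch sketch of a multi-scale fusion submodule built from parallel dilated convolutions. The module name, channel sizes, and dilation rates are assumptions for exposition, not the paper's released implementation of ACoNet.

```python
import torch
import torch.nn as nn


class MultiScaleSemanticFusion(nn.Module):
    """Hypothetical sketch: parallel 3x3 convolutions with increasing
    dilation rates enlarge the receptive field at several scales; the
    branch outputs are concatenated and fused by a 1x1 convolution."""

    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding=r with dilation=r keeps the spatial size unchanged
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Concatenate all dilation branches along the channel axis, then fuse.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)  # a stand-in deep feature map
    out = MultiScaleSemanticFusion(64, 64)(feat)
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Using several dilation rates in parallel is a common way to cover widely varying object scales without downsampling, which matches the abstract's goal of preserving boundary detail while enlarging context.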