In the camouflaged object detection task, the target and its surrounding environment are highly similar, so important information is easily lost by simple feature extraction. Additionally, directly aggregating features from different layers introduces noise and leads to inaccurate predictions. To address these issues, this paper proposes a camouflaged object detection network based on multi-scale progressive feature fusion. A Pyramid Vision Transformer is employed as the backbone to extract multi-scale features, and deformable attention is used to enhance these features and emphasize the boundaries of the camouflaged target. Subsequently, features from neighboring layers are incrementally fused by a progressive feature fusion module, which accumulates indistinguishable yet effective information while avoiding the large semantic gaps between non-neighboring layers. An adaptive spatial fusion operation is introduced into the fusion process to reduce information conflicts at the same spatial location. Finally, the fused features are used to output the prediction results for camouflaged object detection. The model is trained on a training set consisting of COD10K and CAMO, and the experimental results demonstrate that the proposed method has clear advantages over competing methods.
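To make the fusion strategy concrete, the sketch below illustrates, under stated assumptions, how neighboring-layer features could be fused progressively from deep to shallow with per-pixel adaptive spatial weights. This is a minimal illustration, not the authors' implementation: the module names (`AdaptiveSpatialFusion`, `ProgressiveFusion`), the uniform channel width, and the softmax-based spatial weighting are assumptions made for clarity.

```python
# Minimal sketch (not the authors' code) of progressive neighbor-layer fusion
# with an adaptive spatial fusion step. Module names, channel sizes, and the
# softmax-based per-pixel weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveSpatialFusion(nn.Module):
    """Fuse two same-channel feature maps with per-pixel softmax weights."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict one weight map per input branch at every spatial location.
        self.weight_pred = nn.Conv2d(2 * channels, 2, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.weight_pred(torch.cat([feat_a, feat_b], dim=1)), dim=1)
        # Per-pixel weighted sum resolves conflicts at the same spatial location.
        return weights[:, 0:1] * feat_a + weights[:, 1:2] * feat_b


class ProgressiveFusion(nn.Module):
    """Fuse multi-scale features one neighboring level at a time, deep to shallow."""

    def __init__(self, channels: int, num_levels: int = 4):
        super().__init__()
        self.fusers = nn.ModuleList(AdaptiveSpatialFusion(channels) for _ in range(num_levels - 1))
        self.head = nn.Conv2d(channels, 1, kernel_size=1)  # single-channel prediction map

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats are ordered shallow (high resolution) -> deep (low resolution).
        fused = feats[-1]
        for fuser, shallow in zip(self.fusers, reversed(feats[:-1])):
            # Upsample the running result to the neighboring shallower scale, then fuse.
            fused = F.interpolate(fused, size=shallow.shape[-2:], mode="bilinear", align_corners=False)
            fused = fuser(shallow, fused)
        return self.head(fused)


if __name__ == "__main__":
    # Toy multi-scale features standing in for backbone outputs (64 channels at every level here).
    feats = [torch.randn(1, 64, 88, 88), torch.randn(1, 64, 44, 44),
             torch.randn(1, 64, 22, 22), torch.randn(1, 64, 11, 11)]
    print(ProgressiveFusion(64)(feats).shape)  # torch.Size([1, 1, 88, 88])
```

Fusing only adjacent scales at each step, as sketched here, keeps the semantic gap between the two inputs small; a single fusion of all levels at once would mix features whose semantics differ far more.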