End-to-End Image Compression Method Based on Context Cluster Transform
To address the insufficient interaction of local similar features in end-to-end image compression methods based on convolutional neural network (CNN) transforms, an end-to-end image compression method based on a context cluster transform was proposed in this study. First, the image was transformed into a set of feature points containing coordinates, and the feature points were divided into several clusters. Then, image features were extracted by aggregating and redistributing the feature points within each cluster. Finally, quantization, a hyperprior network, and entropy coding based on a joint spatial-channel context were introduced to construct a complete end-to-end image compression model. The experimental results showed that, compared with the end-to-end image compression method based on a CNN transform, the proposed method achieved BD-rate savings of 2.75% and 4.20% on the Kodak and CLIC test datasets, respectively, and produced good subjective visual quality. The proposed method realizes the interaction of local similar features and fully exploits the correlation between adjacent pixels, thereby achieving satisfactory rate-distortion performance.
Deep learning; Image compression; Correlation; Context cluster; Transform network
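To make the transform step described in the abstract concrete (pixels turned into coordinate-augmented points, grouped into clusters, then aggregated and redistributed within each cluster), the following is a minimal PyTorch sketch of a context-cluster block. The cluster-grid size, projection layers, similarity measure, and residual wiring here are illustrative assumptions, not the authors' exact configuration; the quantization, hyperprior, and joint spatial-channel entropy model from the full pipeline are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextClusterBlock(nn.Module):
    """Simplified context-cluster feature transform (illustrative sketch).

    Treats an H x W feature map as a set of points (feature + normalized
    (x, y) coordinate), groups the points around a coarse grid of cluster
    centers by cosine similarity, aggregates each cluster, then
    redistributes the aggregated feature back to the member points.
    """

    def __init__(self, dim, clusters_hw=4):
        super().__init__()
        self.clusters_hw = clusters_hw            # cluster grid: clusters_hw x clusters_hw centers
        self.f_sim = nn.Conv2d(dim + 2, dim, 1)   # projection used for similarity
        self.f_val = nn.Conv2d(dim + 2, dim, 1)   # projection used as point "value"
        self.f_out = nn.Conv2d(dim, dim, 1)       # output projection
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        # 1) attach normalized (x, y) coordinates so every pixel becomes a point
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device),
            indexing="ij",
        )
        coords = torch.stack([xs, ys]).expand(b, -1, -1, -1)
        pts = torch.cat([x, coords], dim=1)                       # B x (C+2) x H x W

        s = self.f_sim(pts)
        sim_feat = s.flatten(2)                                   # B x C x N
        val_feat = self.f_val(pts).flatten(2)                     # B x C x N

        # 2) cluster centers: average-pool the similarity features on a coarse grid
        centers = F.adaptive_avg_pool2d(s, self.clusters_hw).flatten(2)   # B x C x M

        # 3) cosine similarity between every point and every center, hard assignment
        sim = torch.einsum("bcm,bcn->bmn",
                           F.normalize(centers, dim=1),
                           F.normalize(sim_feat, dim=1))          # B x M x N
        assign = sim.argmax(dim=1)                                # cluster id per point
        mask = F.one_hot(assign, sim.size(1)).permute(0, 2, 1).float()    # B x M x N

        # 4) aggregate: similarity-weighted sum of member-point values per cluster
        weight = torch.sigmoid(self.alpha * sim + self.beta) * mask       # B x M x N
        agg = torch.einsum("bmn,bcn->bcm", weight, val_feat)              # B x C x M
        agg = agg / (weight.sum(dim=2, keepdim=True).permute(0, 2, 1) + 1e-6)

        # 5) redistribute: each point receives its cluster's aggregate, scaled by similarity
        dispatched = torch.einsum("bcm,bmn->bcn", agg, weight)            # B x C x N
        out = (val_feat + dispatched).view(b, c, h, w)
        return x + self.f_out(out)                                # residual connection


if __name__ == "__main__":
    block = ContextClusterBlock(dim=32)
    y = block(torch.randn(1, 32, 16, 16))
    print(y.shape)  # torch.Size([1, 32, 16, 16])
```

In this sketch, the aggregation and redistribution steps are what give local similar features the interaction that a plain CNN convolution lacks: points assigned to the same cluster exchange information regardless of their exact spatial offset within the receptive field.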