In pansharpening tasks, misalignment is inevitable because panchromatic (PAN) and multispectral (MS) images are collected by different sensors. However, most existing pansharpening methods ignore this inherent misalignment entirely. To address this issue, a novel observational model is proposed that better describes spectral degradation by enforcing consistency between the high-frequency components of the PAN and high-resolution multispectral (HRMS) images. In addition, a sparse bias prior is designed to further reduce modeling errors caused by outliers. Based on these improvements, a new objective function for pansharpening is constructed, and its iterative solution is unrolled into a deep convolutional network in which the spatial and spectral degradations are approximated by dedicated subnetworks. Experimental results at both reduced resolution and full resolution demonstrate the effectiveness of the proposed method.
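To make these claims concrete, the following is a minimal sketch of what such an objective could look like; the notation is an illustrative assumption rather than the paper's own. Here $X$ denotes the HRMS image, $Y$ the observed low-resolution MS image, $P$ the PAN image, $\mathbf{K}$ and $\mathbf{S}$ spatial blurring and downsampling, $\mathbf{R}$ the spectral response mapping HRMS bands to the PAN channel, $\mathbf{G}$ a high-pass operator, and $b$ the sparse bias accounting for outliers and residual misalignment.

% Illustrative objective under the assumed notation above:
% a spatial-degradation fidelity term, a spectral-degradation term
% enforced on high-frequency components, and a sparse bias prior.
\begin{equation}
\min_{X,\, b}\;
\frac{1}{2}\bigl\lVert \mathbf{S}\mathbf{K}X - Y \bigr\rVert_F^2
\;+\;
\frac{\lambda}{2}\bigl\lVert \mathbf{G}(\mathbf{R}X) - \mathbf{G}P - b \bigr\rVert_F^2
\;+\;
\mu\,\lVert b \rVert_1 .
\end{equation}

Unrolling an iterative (e.g., proximal-gradient) solution of a problem of this form would yield alternating update stages in which the operators tied to the spatial and spectral degradations are replaced by learnable subnetworks, which is consistent with the network construction described above; the specific update scheme here is a sketch, not the paper's exact formulation.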