Detail-Preserving Multi-Exposure Image Fusion Based on Adaptive Weight
Multi-exposure image fusion addresses the limited dynamic range of image sensors when capturing scenes with a large dynamic range. Multiple images of the same scene taken at different exposure levels are fused to obtain a high-dynamic-range image that contains rich scene detail. An adaptive-weight, detail-preserving multi-exposure image-fusion algorithm is proposed to address two typical problems in fusion: insufficient preservation of image detail and halo artifacts at edges. The contrast and structural components of an image-block decomposition are used to extract fused structural weights, and two-dimensional entropy is used to select a brightness benchmark for computing exposure weights. Saturation weights are then used to better restore the brightness and color information of the scene in the fused image. Finally, double-pyramid fusion combines the source-image sequence at multiple scales, avoiding unnatural halos at boundaries and yielding a large-dynamic-range fused image that preserves more detail. Seventy sets of multi-exposure images from three datasets were selected for the experiments. The results show that the average fusion structural similarity and cross-entropy of the proposed algorithm are 0.983 and 2.341, respectively. Compared with classical and recent multi-exposure fusion algorithms, the proposed algorithm maintains the brightness distribution of the scene while retaining more image information, demonstrating its effectiveness. The proposed algorithm delivers excellent fusion results and good visual quality.
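To make the overall pipeline concrete, the following is a minimal sketch of weighted multi-scale exposure fusion of the kind the abstract describes: per-image weight maps are computed, normalized across the exposure sequence, and blended through a Gaussian weight pyramid and Laplacian image pyramid. It is not the paper's method: the weights here use generic contrast, saturation, and well-exposedness terms rather than the block-decomposition structural weights and two-dimensional-entropy exposure benchmark, and the function and parameter names (`weight_map`, `fuse_exposures`, `n_levels`, `well_exposed_mu`, `sigma`) are illustrative assumptions.

```python
# Sketch of Mertens-style weighted pyramid fusion (not the paper's exact algorithm).
import cv2
import numpy as np

def weight_map(img, well_exposed_mu=0.5, sigma=0.2):
    """Per-pixel weight from contrast, saturation, and well-exposedness.

    `img` is a float32 BGR image with values in [0, 1]. The three terms stand
    in for the paper's structural, saturation, and exposure weights.
    """
    gray = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY) / 255.0
    contrast = np.abs(cv2.Laplacian(gray, cv2.CV_64F))      # local contrast
    saturation = img.std(axis=2)                            # per-pixel channel spread
    well_exposed = np.prod(
        np.exp(-((img - well_exposed_mu) ** 2) / (2 * sigma ** 2)), axis=2
    )
    return (contrast * saturation * well_exposed + 1e-12).astype(np.float32)

def fuse_exposures(images, n_levels=5):
    """Fuse a list of equally sized float32 BGR images in [0, 1]."""
    weights = np.stack([weight_map(im) for im in images])
    weights /= weights.sum(axis=0, keepdims=True)            # normalize across exposures

    fused_pyr = None
    for im, w in zip(images, weights):
        # Gaussian pyramid of the weight map, Laplacian pyramid of the image.
        gw = [w]
        gi = [im.astype(np.float32)]
        for _ in range(n_levels):
            gw.append(cv2.pyrDown(gw[-1]))
            gi.append(cv2.pyrDown(gi[-1]))
        lap = [gi[i] - cv2.pyrUp(gi[i + 1], dstsize=gi[i].shape[1::-1])
               for i in range(n_levels)] + [gi[-1]]
        # Blend each pyramid level with the matching weight level.
        blended = [lap[i] * gw[i][..., None] for i in range(n_levels + 1)]
        fused_pyr = blended if fused_pyr is None else [f + b for f, b in zip(fused_pyr, blended)]

    # Collapse the fused pyramid back into a single image.
    out = fused_pyr[-1]
    for lev in reversed(fused_pyr[:-1]):
        out = cv2.pyrUp(out, dstsize=lev.shape[1::-1]) + lev
    return np.clip(out, 0.0, 1.0)
```

Blending in the pyramid domain, rather than averaging weighted pixels directly, is what suppresses the halo artifacts at strong edges that single-scale weighted fusion tends to produce.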