Journal information
International journal of image and data fusion
Taylor & Francis

Quarterly

ISSN: 1947-9832

Indexed in EI and ESCI
Officially published
Indexed years

    A brief overview and perspective of using airborne Lidar data for forest biomass estimation

    Dengsheng Lu, Xiandie Jiang
    pp. 1-24
    Abstract: Lidar data are regarded as the most important data source for accurate forest biomass estimation. Platforms such as terrestrial laser scanning, unmanned aerial vehicle laser scanning, airborne laser scanning, and spaceborne Lidar (e.g. ICESat-1/2, GEDI, GF-7 Lidar) provide new opportunities to map forest biomass distribution at different scales. Ground-based Lidar data are mainly used to extract individual tree parameters such as diameter at breast height (DBH) and tree height, with the aim of replacing or reducing field work, while spaceborne Lidar data are often used to extract canopy height at national and global scales but cannot provide wall-to-wall mapping. Airborne Lidar may be the most frequently used data source for forest biomass estimation at the local scale. Many studies have mapped forest biomass distributions in different climate zones, but the current state of research and the challenges of using airborne Lidar data have not been fully reviewed. This paper provides an overview of using airborne Lidar data for forest biomass estimation and discusses current research problems and future directions, helping professionals and practitioners better understand the important role of airborne Lidar data in forest biomass estimation at the local scale.
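As the abstract notes, ground-based Lidar is typically used to extract DBH and tree height, which then feed allometric biomass models. A minimal sketch of that final step, assuming a generic power-law allometry; the coefficients below are purely illustrative placeholders, since real values are species- and site-specific:

```python
def tree_agb(dbh_cm: float, height_m: float,
             a: float = 0.05, b: float = 2.0, c: float = 1.0) -> float:
    """Above-ground biomass (kg) from a generic allometric model
    AGB = a * DBH^b * H^c. Coefficients are illustrative only."""
    return a * (dbh_cm ** b) * (height_m ** c)

def plot_biomass(trees):
    """Sum per-tree biomass over a plot; `trees` is a list of (DBH, height) pairs."""
    return sum(tree_agb(d, h) for d, h in trees)
```

Lidar-derived tree parameters would simply replace field-measured ones in such a model.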

    Satellite image fusion using cyclic spatio-spectral GAN model

    Mahmoud M. Hammad, Tarek A. Mahmoud, A. S. Amein, Tarek M. Ghoniemy, et al.
    pp. 25-43
    Abstract: Preserving spectral and spatial information is one of the essential challenges in satellite image fusion. This paper presents a GAN-based method for fusing panchromatic and multispectral satellite images. The proposed method builds on the Cycle-GAN idea with two generators, one for spectral and one for spatial information preservation, based on a residual-in-residual dense block super-resolution architecture. Generator-1 first translates the panchromatic and multispectral images into a high-resolution fused image, and generator-2 then preserves its details; the goal is to attain the spatial detail of the panchromatic image and the spectral fidelity of the multispectral image. Two discriminators are employed, one for the spectral and one for the spatial transformation, and a weighted L1 loss is used for both as the cycle loss. By leveraging the complementary roles of the two generators, the proposed method achieves high-quality fusion results with improved spectral and spatial resolution, evaluated over two different datasets. The experimental results demonstrate the effectiveness of the method, with enhancements ranging from approximately 2% to 30% and 0.5% to 35% on WorldView-2 and GeoEye-1 images, respectively, across five metrics including PSNR. Moreover, we show significant enhancements of approximately 7% to 50% and 22% to 29% on the same datasets in metrics such as SAM and ERGAS.
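The weighted L1 cycle loss mentioned in the abstract can be sketched in a few lines; this is a generic NumPy illustration of a weighted mean-absolute-error cycle-consistency term, not the authors' implementation:

```python
import numpy as np

def weighted_l1_cycle_loss(original, reconstructed, weight=1.0):
    """Weighted L1 (mean absolute error) between an input image and its
    cycle-reconstructed counterpart, as used for cycle-consistency losses."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return weight * float(np.mean(np.abs(diff)))
```

In a Cycle-GAN setting, `reconstructed` would be the output of passing the fused image back through the second generator.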

    Efficient image fusion method using improved Bi-dimensional Empirical Mode Decomposition

    Abdelkader Moustafa Radwane Ghellab
    pp. 44-72
    Abstract: Several transformations are used in image fusion to extract different levels of image spatial detail. In this paper, an improved version of Bi-dimensional Empirical Mode Decomposition (BEMD) is proposed and used to build an image fusion method that combines the high spatial resolution of the panchromatic (PAN) image with the spectral resolution of the multispectral (MS) image in a single fused image. The improved BEMD makes it possible to avoid the injection models normally used in image fusion to preserve spectral signatures in the fused image. In particular, we propose a new 2D extrema-point extraction scheme for BEMD. One of the most important characteristics of the proposed BEMD components is that they are more faithful to the EMD components: their local behaviour is that of purely oscillating 2D mono-components with zero mean. Comparisons with predecessor methods, using qualitative evaluation and various spectral and spatial quantitative measures, show the effectiveness and efficiency of the proposed image fusion method. In addition, the method is computationally fast and can be used to quickly merge massive volumes of data.
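BEMD sifting starts from locating the 2D local maxima (and, symmetrically, minima) of the image, which is what an extrema-point extraction scheme provides. A simple strict 8-neighbour detector, shown as a generic illustration rather than the authors' proposed scheme:

```python
import numpy as np

def local_maxima_2d(img):
    """Boolean mask of strict 8-neighbour local maxima of a 2-D array.
    Border pixels are handled by padding with -inf."""
    p = np.pad(img.astype(np.float64), 1, constant_values=-np.inf)
    centre = p[1:-1, 1:-1]
    mask = np.ones_like(centre, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            # shift the padded array so each neighbour aligns with the centre
            mask &= centre > p[1 + di:p.shape[0] - 1 + di,
                               1 + dj:p.shape[1] - 1 + dj]
    return mask
```

Minima follow by applying the same detector to the negated image; the extrema then seed the envelope interpolation of each sifting step.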

    A variational driven optimization framework for pansharpening of multispectral images

    Y Ramakrishna, Richa Agrawal
    pp. 73-87
    Abstract: Pansharpening is an established remote sensing image fusion technique that yields a high-resolution multispectral (HRMS) image. Although advanced technologies such as sparse coding and deep learning have achieved remarkable improvements in solving the pansharpening problem, a unified model is still required to further enhance fusion quality. The variational optimisation (VO) mechanism has gained the interest of many researchers in recent years. In this article, pansharpening is formulated as a constrained optimisation problem with a data-generative term and two regularisers that improve spatial detail and spectral information. Gradient information is exploited to transfer spatial detail from the panchromatic image to the fused image, and the correlation among multispectral image bands is used to promote the spectral quality of the HRMS image and to reduce distortion. The optimisation problem is then efficiently solved for the required HRMS image using an operator-splitting approach. Extensive experiments performed in accordance with well-known protocols show that the proposed model outperforms most state-of-the-art methods in terms of objective metrics and visual outcomes.
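An energy of the kind described (a data term tying the result to the upsampled MS band, plus a gradient regulariser transferring PAN detail) can be minimised even with plain gradient descent. The paper uses operator splitting; the sketch below is a toy single-band version with an illustrative weight and step size, not the authors' solver:

```python
import numpy as np

def dxf(u):  # forward difference along rows
    d = np.zeros_like(u); d[:-1, :] = u[1:, :] - u[:-1, :]; return d

def dyf(u):  # forward difference along columns
    d = np.zeros_like(u); d[:, :-1] = u[:, 1:] - u[:, :-1]; return d

def dxt(p):  # adjoint of dxf
    d = np.empty_like(p)
    d[0, :] = -p[0, :]; d[1:-1, :] = p[:-2, :] - p[1:-1, :]; d[-1, :] = p[-2, :]
    return d

def dyt(p):  # adjoint of dyf
    d = np.empty_like(p)
    d[:, 0] = -p[:, 0]; d[:, 1:-1] = p[:, :-2] - p[:, 1:-1]; d[:, -1] = p[:, -2]
    return d

def pansharpen_vo(ms_up, pan, lam=1.0, step=0.2, iters=200):
    """Gradient descent on 0.5||X - MS||^2 + 0.5*lam*||grad X - grad PAN||^2."""
    X = ms_up.astype(np.float64).copy()
    gx_p, gy_p = dxf(pan), dyf(pan)
    for _ in range(iters):
        g = (X - ms_up) + lam * (dxt(dxf(X) - gx_p) + dyt(dyf(X) - gy_p))
        X -= step * g
    return X
```

When MS and PAN agree, the energy is already minimal and the input passes through unchanged; otherwise the result inherits the PAN gradients while staying close to the MS radiometry.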

    Spatial enhancement of Landsat-9 land surface temperature imagery by Fourier transformation-based panchromatic fusion

    Kul Vaibhav Sharma, Vijendra Kumar, Sumit Khandelwal, Nivedita Kaul, et al.
    pp. 88-109
    Abstract: Landsat-9 panchromatic (PAN) band images are 7 times finer than the land surface temperature (LST) images derived from the Thermal Infrared (TIR) bands. The PAN band offers superior image resolution, consistency, and less ambiguity than the TIR bands because of its smaller pixel size. Conventional image fusion methods, however, cannot combine the PAN and TIR bands. This research proposes Fourier Transformation-based fusion (FTBF) to merge PAN and TIR band data and spatially enhance Landsat-9 LST images from 100 m to 15 m resolution. In FTBF, the Fourier transformation integrates frequency-domain filtering with spatial matching. In-situ infrared thermometer data loggers were used to verify temperature and image quality parameters at thermal points of the FTBF-fused image. Comparing the downscaled LST with ground-truth points yielded an RMSE of 0.18 and a correlation of 0.93. Eight qualitative and quantitative measures show that FTBF improves the spatial resolution of TIR images while preserving the thermal attributes of the original LST data. LST-PAN fusion can detect surface temperature change for land-use change, fire and forest-fire detection, agricultural analysis, crop management, and flood mapping at finer scales.
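The core idea of frequency-domain fusion (low frequencies from the upsampled LST, high frequencies from the PAN band) can be sketched with NumPy's FFT. This is a generic illustration, not the FTBF algorithm itself, and the hard cutoff frequency is an arbitrary assumption:

```python
import numpy as np

def fourier_fusion(lst_up, pan, cutoff=0.1):
    """Fuse an upsampled LST image with a co-registered PAN image by keeping
    LST spectral content below `cutoff` (cycles/pixel) and PAN content above it."""
    assert lst_up.shape == pan.shape, "images must be co-registered and same size"
    F_lst = np.fft.fft2(lst_up)
    F_pan = np.fft.fft2(pan)
    fy = np.fft.fftfreq(pan.shape[0])[:, None]
    fx = np.fft.fftfreq(pan.shape[1])[None, :]
    low = np.sqrt(fy ** 2 + fx ** 2) <= cutoff   # low-frequency mask
    return np.fft.ifft2(np.where(low, F_lst, F_pan)).real
```

A production method would also match the radiometry of the two bands and use a smooth filter transition rather than a hard mask.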

    Building classification extraction from remote sensing images combining hyperpixel and maximum interclass variance

    Hongning Qin, Zili Li
    pp. 110-127
    Abstract: In recent years, semantic segmentation algorithms based on deep learning have been widely used for building extraction, but they require large sample datasets, do not consider the geometric features of buildings, and their results are strongly affected by the data scene. Traditional methods, meanwhile, struggle to extract buildings from remote sensing images accurately because they consider only greyscale features. To solve this problem, we propose a method for building classification extraction from remote sensing images that combines superpixels with the maximum interclass variance (OTSU) algorithm. First, a number of superpixel subregions of different shapes and sizes are generated based on the watershed transform. Then, the superpixels belonging to buildings are merged using the spectral features of buildings, yielding a first extraction of the buildings. Next, noise is suppressed with median filtering. Finally, the post-extraction of buildings is performed with the OTSU algorithm. Seven images of buildings in different landscapes were selected for evaluation. The experimental results show that the algorithm is more advantageous than both the classical algorithm and the deep learning algorithm.
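The maximum interclass variance (OTSU) step can be sketched directly: the threshold is the grey level that maximises the between-class variance of the histogram. A self-contained NumPy version, for illustration:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the grey level maximising between-class variance (OTSU)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)             # class-0 probability for each candidate split
    mu = np.cumsum(p * centers)   # cumulative mean
    mu_t = mu[-1]                 # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # empty-class splits get variance 0
    return centers[np.argmax(sigma_b)]
```

In the paper's pipeline this thresholding would be applied after the superpixel merging and median-filtering stages.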

    Novel fusion strategy for image fusion using rescue hunt optimization-based modified guidance model

    Preeti Maddi, Shashidhar Sonnad, Sharanbasav Hosamani, Anuradha Savadi, et al.
    pp. 129-153
    Abstract: A new method for image fusion introduces a strategy for amalgamating data from multiple images into a single, improved output image, addressing the limitations of conventional fusion techniques. In this research, a novel approach for image fusion is presented, featuring a rescue hunt optimisation-based modified guidance model (RHO-based MG model). The methodology leverages the non-subsampled contourlet transform (NSCT) and the non-subsampled shearlet transform (NSST) to construct the fusion transform from two input images, typically an infrared and a visual image. By hybridising the high-frequency (HF) and low-frequency (LF) bands of both images, the fusion model generates the final HF and LF bands, with a distinctive modified guidance strategy employed in their construction. The proposed approach uses a rescue hunt algorithm developed by combining search and rescue optimisation (SARO) and grey wolf optimisation (GWO) behaviours. This optimisation fine-tunes VGG-19 and ResNet-18 models to improve their ability to combine multiple images effectively by adjusting their internal parameters; by analysing the features of the different input images, these models learn to extract meaningful information and create a fused image that retains the important details of each source. The fused image is composed of the combined HF and LF bands, with the final result obtained through an inverse hybrid transform. Experimental results on a dataset of 25 images demonstrate the effectiveness of the approach: the average gradient (AG), edge intensity (EI), PSNR, RMSE, SSIM, and variance attained values of 24.71, 236.67, 54.96 dB, 0.22, 63.16, and 0.11, respectively. This method offers a promising direction for enhancing image fusion quality through its integration of optimisation and guidance strategies.
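Of the metrics reported above, RMSE and PSNR are straightforward to compute; a minimal NumPy sketch, assuming an 8-bit peak value of 255 (the abstract does not state the bit depth):

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between a reference and a test image."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else float(20 * np.log10(peak / e))
```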

    Extracting Urban Built-up Areas from Optical and Radar Data Fusion using Machine Learning Algorithms

    Wubalem Woreket, Gebeyehu Abebe Zeleke
    pp. 154-173
    Abstract: Accurate and up-to-date information on urban built-up areas is significant for managing urban growth and development. Earth Observation (EO) data are a valuable source for meeting this demand. However, extracting urban built-up areas from EO data is challenging due to the limitations of individual EO data sources. To overcome this challenge, this study assesses the performance of optical (Sentinel-2), radar (Sentinel-1) and fused (Sentinel-1 and Sentinel-2) data for extracting urban built-up areas using machine learning algorithms, including Random Forest (RF), K-Nearest Neighbors (KNN) and KDTree KNN. The results were statistically analysed using the Overall Accuracy (OA) and kappa coefficient. In addition, 15 cm GSD (Ground Sample Distance) aerial photography of the study area was used to validate the results. According to the results, Sentinel-2 produced a better representation and higher accuracy of urban built-up areas than Sentinel-1 and even the fused image. Regarding the classification performance of the machine learning algorithms, RF performed best in both OA and kappa coefficient across all datasets. The findings have significant implications for domains such as urban planning and land use management, and open avenues for further comparisons of different EO data sources and machine learning algorithms for built-up area extraction.
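The Overall Accuracy and kappa coefficient used for the statistical comparison are both derived from the confusion matrix; a minimal sketch:

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall Accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=np.float64)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement (OA)
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2    # chance agreement
    return float(po), float((po - pe) / (1 - pe))
```

Kappa discounts the agreement expected by chance, which is why it can rank classifiers differently from OA alone.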

    Enhancing WOFOST crop model with unscented Kalman filter assimilation of leaf area index

    O. D. Belozerova
    pp. 174-189
    Abstract: Early yield prediction is an essential and challenging problem for present-day agriculture. It is commonly solved with a crop model combined with relevant observation data: field scouting, in-situ sensors, satellite imagery and information from previous growing seasons. Crop growth simulation models benefit greatly from these data; however, only a limited number of established data assimilation procedures see notable application. Most studies focus on model parameter calibration, machine learning, ensemble Kalman filters (EnKF) or particle filters. These methods are powerful yet computationally expensive, which limits their wider use. In this study, we bring into consideration a modern KF variant: the unscented Kalman filter (UKF). We implement UKF data assimilation for leaf area index (LAI) within the WOFOST PCSE model. To demonstrate its efficiency, we conduct simulations with EnKF and UKF assimilation of Sentinel-2 LAI data and compare the results to actual historical yield data for five crops on 2,740 fields. A field-level numerical experiment is also set up to demonstrate the influence of LAI assimilation on the predicted yield. The results indicate that the proposed approach performs consistently and significantly improves the accuracy of predicted yields.
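For a scalar state such as LAI, a single UKF measurement update reduces to a few lines. The sketch below uses the standard sigma-point weights (α, β, κ are the usual tuning parameters) and defaults to an identity observation model; it is a generic illustration of the UKF update step, not the WOFOST/PCSE integration:

```python
import numpy as np

def ukf_update_1d(x, P, z, R, h=lambda s: s, alpha=0.1, beta=2.0, kappa=0.0):
    """One UKF measurement update for a scalar state.
    x, P: prior mean/variance; z, R: observation and its variance;
    h: (possibly nonlinear) observation function."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    c = n + lam
    s = np.sqrt(c * P)
    chi = np.array([x, x + s, x - s])                 # sigma points
    wm = np.array([lam / c, 1 / (2 * c), 1 / (2 * c)])
    wc = wm.copy()
    wc[0] += 1 - alpha ** 2 + beta                    # covariance weight
    y = np.array([h(p) for p in chi])                 # propagated sigma points
    y_hat = wm @ y                                    # predicted observation
    pyy = wc @ (y - y_hat) ** 2 + R                   # innovation variance
    pxy = wc @ ((chi - x) * (y - y_hat))              # cross-covariance
    K = pxy / pyy                                     # Kalman gain
    return x + K * (z - y_hat), P - K * pyy * K
```

With a linear observation model this reproduces the ordinary Kalman update exactly, which is a handy sanity check; the UKF's advantage appears when `h` is nonlinear.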

    Namib Beetle Firefly Optimization enabled Densenet architecture for hyperspectral image segmentation and classification

    Deepa S., S. Zulaikha Beevi, Laxman L. Kumarwad, Sabbineni Poojitha, et al.
    pp. 190-213
    Abstract: Many organisations have focused on hyperspectral images because each pixel's underlying spectrum is richly documented, enabling automatic pixel-level classification and segmentation. Owing to the unpredictable nature of the spectrum and the noise in hyperspectral data, this task is very challenging and calls for specific solutions. The hyperspectral image segmentation procedure used here relies on the newly developed Namib Beetle Firefly Optimization (NBFO) method, created by combining the Namib Beetle Optimization Algorithm (NBOA) and the Firefly Algorithm (FA) for tackling optimisation problems. A U-Net++ model is used to segment the images, and a DenseNet model, likewise trained with the NBFO approach, then classifies the segmented images. Using hyperspectral image segmentation techniques, the NBFO-driven DenseNet model surpassed the competition, attaining a True Positive Rate (TPR) of 0.906786, a False Positive Rate (FPR) of 0.889466, and a Pixel Accuracy (FPA) of 0.931562.
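The firefly half of the NBFO hybrid follows the classic Firefly Algorithm update, in which each candidate solution moves towards every brighter one with an attractiveness that decays with distance. A generic sketch of one such iteration; the β0, γ and α values are illustrative, and this is the plain FA step, not the hybrid NBFO itself:

```python
import numpy as np

def firefly_step(pos, brightness, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One Firefly Algorithm iteration.
    pos: (n, d) array of candidate positions; brightness: fitness per firefly.
    Each firefly moves towards every brighter one plus a small random jitter."""
    rng = np.random.default_rng(rng)
    new = pos.copy()
    n = len(pos)
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:
                r2 = np.sum((pos[j] - pos[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # distance-decayed attraction
                new[i] += beta * (pos[j] - pos[i]) \
                          + alpha * (rng.random(pos.shape[1]) - 0.5)
    return new
```

In a training context such as the one described, `brightness` would be the (negated) validation loss of each candidate network-parameter configuration.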