Journal Information
IEEE Journal of Biomedical and Health Informatics
Institute of Electrical and Electronics Engineers

Bimonthly

ISSN: 2168-2194

Indexed in: SCI
Officially published
Coverage years

    IEEE Journal of Biomedical and Health Informatics Publication Information

    pp. C2-C2

    IEEE Journal of Biomedical and Health Informatics Information for Authors

    pp. C3-C3

    Front Cover

    pp. C1-C1

    Table of Contents

    pp. 3079-3082

    Guest Editorial: Multi-Modal Joint Learning in Healthcare Imaging

    Tao Tan, Zhang Li, Yue Sun, Shandong Wu...
    pp. 3083-3085

    PEARL: Cascaded Self-Supervised Cross-Fusion Learning for Parallel MRI Acceleration

    Qingyong Zhu, Bei Liu, Zhuo-Xu Cui, Chentao Cao...
    pp. 3086-3097
    Abstract: Supervised deep learning (SDL) methodology holds promise for accelerated magnetic resonance imaging (AMRI) but is hampered by the reliance on extensive training data. Some self-supervised frameworks, such as deep image prior (DIP), have emerged, eliminating the explicit training procedure but often struggling to remove noise and artifacts under significant degradation. This work introduces a novel self-supervised accelerated parallel MRI approach called PEARL, leveraging a multiple-stream joint deep decoder with two cross-fusion schemes to accurately reconstruct one or more target images from compressively sampled k-space. Each stream comprises cascaded cross-fusion sub-block networks (SBNs) that sequentially perform combined upsampling, 2D convolution, joint attention, ReLU activation and batch normalization (BN). Among them, combined upsampling and joint attention facilitate mutual learning between the multiple-stream networks by integrating multi-parameter priors in both additive and multiplicative manners. Long-range unified skip connections within SBNs ensure effective information propagation between distant cross-fusion layers. Additionally, incorporating dual-normalized edge-orientation similarity regularization into the training loss enhances detail reconstruction and prevents overfitting. Experimental results consistently demonstrate that PEARL outperforms existing state-of-the-art (SOTA) self-supervised AMRI technologies in various MRI cases. Notably, 5- to 6-fold accelerated acquisition yields a 1%-2% improvement in SSIM_ROI and a 3%-6% improvement in PSNR_ROI, along with a significant 15%-20% reduction in RLNE_ROI.
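
    The SBN described above (combined upsampling, 2D convolution, joint attention, ReLU and BN, with additive and multiplicative fusion of a partner stream) can be pictured with a minimal PyTorch sketch. This is an editorial illustration assuming plausible layer shapes and fusion rules; it is not the authors' released implementation, and CrossFusionSBN is a hypothetical name.

    # Hypothetical single cross-fusion sub-block network (SBN); all sizes are assumptions.
    import torch
    import torch.nn as nn

    class CrossFusionSBN(nn.Module):
        """Upsample -> conv -> joint attention -> ReLU -> BN, fusing a partner stream."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
            self.conv_self = nn.Conv2d(in_ch, out_ch, 3, padding=1)
            self.conv_partner = nn.Conv2d(in_ch, out_ch, 3, padding=1)
            # attention gate driven by the partner stream (multiplicative prior)
            self.gate = nn.Sequential(nn.Conv2d(out_ch, out_ch, 1), nn.Sigmoid())
            self.act = nn.ReLU(inplace=True)
            self.bn = nn.BatchNorm2d(out_ch)

        def forward(self, x, partner):
            x, partner = self.up(x), self.up(partner)
            p = self.conv_partner(partner)
            fused = self.conv_self(x) + p        # additive fusion of multi-parameter priors
            fused = fused * self.gate(p)         # multiplicative fusion via joint attention
            return self.bn(self.act(fused))

    # Two streams exchanging features through one SBN
    sbn = CrossFusionSBN(in_ch=32, out_ch=32)
    out = sbn(torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16))  # -> (1, 32, 32, 32)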

    Towards High-Quality MRI Reconstruction With Anisotropic Diffusion-Assisted Generative Adversarial Networks and Its Multi-Modal Images Extension

    Yuyang Luo, Gengshen Wu, Yi Liu, Wenjian Liu...
    pp. 3098-3111
    Abstract: Recently, fast Magnetic Resonance Imaging reconstruction technology has emerged as a promising way to improve the clinical diagnostic experience by significantly reducing scan times. While existing studies have used Generative Adversarial Networks to achieve impressive results in reconstructing MR images, they still suffer from challenges such as blurred zones/boundaries and abnormal spots caused by inevitable noise in the reconstruction process. To this end, we propose a novel deep framework termed Anisotropic Diffusion-Assisted Generative Adversarial Networks, which aims to maximally preserve valid high-frequency information and structural details while minimizing noise in reconstructed images by optimizing a joint loss function in a unified framework. In doing so, it enables more authentic and accurate MR image generation. To specifically handle unforeseeable noise, an Anisotropic Diffused Reconstruction Module is developed and added alongside the backbone network as a denoising assistant, which improves the final image quality by minimizing reconstruction losses between targets and iteratively denoised generative outputs, with no extra computational complexity during the testing phase. To make the most of valuable MRI data, we extend the framework to support multi-modal learning, boosting reconstructed image quality by aggregating more valid information from images of diverse modalities. Extensive experiments on public datasets show that the proposed framework achieves superior performance in improving the quality of reconstructed MR images. For example, the proposed method obtains average PSNR and mSSIM values of 35.785 dB and 0.9765 on the MRNet dataset, which are at least about 2.9 dB and 0.07 higher than those from the baselines.
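
    The abstract describes the Anisotropic Diffused Reconstruction Module only as a denoising assistant; the sketch below shows the classic Perona-Malik anisotropic diffusion step that such edge-preserving denoising typically builds on, with illustrative parameter values. It is not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def anisotropic_diffusion(img, n_iter=10, kappa=0.1, lam=0.2):
        """Perona-Malik anisotropic diffusion on a (B, C, H, W) tensor:
        smooths homogeneous regions while conduction drops across strong edges."""
        x = img.clone()
        for _ in range(n_iter):
            p = F.pad(x, (1, 1, 1, 1), mode="replicate")
            dN = p[:, :, :-2, 1:-1] - x   # gradient toward the north neighbour
            dS = p[:, :, 2:, 1:-1] - x    # south
            dW = p[:, :, 1:-1, :-2] - x   # west
            dE = p[:, :, 1:-1, 2:] - x    # east
            c = lambda d: torch.exp(-(d / kappa) ** 2)  # edge-stopping function
            x = x + lam * (c(dN) * dN + c(dS) * dS + c(dW) * dW + c(dE) * dE)
        return x

    denoised = anisotropic_diffusion(torch.randn(1, 1, 64, 64))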

    M2Trans: Multi-Modal Regularized Coarse-to-Fine Transformer for Ultrasound Image Super-Resolution

    Zhangkai Ni, Runyu Xiao, Wenhan Yang, Hanli Wang...
    pp. 3112-3123
    Abstract: Ultrasound image super-resolution (SR) aims to transform low-resolution images into high-resolution ones, thereby restoring intricate details crucial for improved diagnostic accuracy. However, prevailing methods relying solely on image modality guidance and pixel-wise loss functions struggle to capture the distinct characteristics of medical images, such as unique texture patterns and specific colors harboring critical diagnostic information. To overcome these challenges, this paper introduces the Multi-Modal Regularized Coarse-to-Fine Transformer (M2Trans) for ultrasound image SR. By integrating the text modality, we establish joint image-text guidance during training, leveraging the medical CLIP model to incorporate richer priors from text descriptions into the SR optimization process, enhancing detail, structure, and semantic recovery. Furthermore, we propose a novel coarse-to-fine transformer comprising multiple branches infused with self-attention and frequency transforms to efficiently capture signal dependencies across different scales. Extensive experimental results demonstrate significant improvements over state-of-the-art methods on benchmark datasets, including CCA-US, US-CASE, and our newly created dataset MMUS1K, with minimum improvements of 0.17 dB, 0.30 dB, and 0.28 dB in terms of PSNR.
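
    The joint image-text guidance can be pictured as a pixel loss plus a CLIP-style alignment term that pulls the SR output's image embedding toward its paired text embedding. The sketch below is a hedged illustration: image_encoder, text_emb, and the weight alpha are stand-ins, not the paper's medical CLIP model or its actual loss.

    import torch
    import torch.nn.functional as F

    def joint_image_text_loss(sr, hr, text_emb, image_encoder, alpha=0.1):
        """L1 pixel loss plus a CLIP-style text-guidance term (illustrative weighting)."""
        pixel = F.l1_loss(sr, hr)
        img_emb = F.normalize(image_encoder(sr), dim=-1)
        txt_emb = F.normalize(text_emb, dim=-1)
        text_guidance = 1.0 - (img_emb * txt_emb).sum(dim=-1).mean()
        return pixel + alpha * text_guidance

    # Toy usage with a stand-in encoder in place of a medical CLIP image tower
    enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 512))
    loss = joint_image_text_loss(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64),
                                 torch.rand(2, 512), enc)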

    Multimodal Distillation Pre-Training Model for Ultrasound Dynamic Images Annotation

    Xiaojun Chen, Jia Ke, Yaning Zhang, Jianping Gou...
    pp. 3124-3136
    Abstract: With the development of medical technology, ultrasonography has become an important diagnostic method in clinical practice. However, unlike static medical images such as CT and MRI, which have a larger research base, ultrasonography produces dynamic, video-like images captured by a real-time moving probe; processing such video data in the medical domain and extracting textual semantics from medical video across modalities therefore remains a difficult research problem. To this end, this paper proposes a multimodal distillation and fusion-encoding pre-training model for capturing the semantic relationship between ultrasound dynamic images and text. First, a fusion encoder is designed to combine the visual geometric features of tissues and organs in ultrasound dynamic images, the overall visual appearance descriptive features, and the named-entity linguistic features into a unified visual-linguistic representation, giving the model a richer ability to aggregate and align visual and linguistic cues. Then, the pre-training model is augmented by multimodal knowledge distillation to improve its learning ability. Experimental results on multiple datasets show that the multimodal distillation pre-training model generally improves the fusion of various types of features in ultrasound dynamic images and enables automated, accurate annotation of ultrasound dynamic images.
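
    The knowledge-distillation component can be illustrated with the standard soft-label distillation loss (Hinton et al.): a KL term on temperature-softened teacher/student logits plus a hard-label cross-entropy. The temperature and weighting below are assumptions; the paper's multimodal variant distils across fused visual-linguistic features rather than plain class logits.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        """Soft-label knowledge distillation plus hard-label cross-entropy."""
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    loss = distillation_loss(torch.randn(8, 10), torch.randn(8, 10),
                             torch.randint(0, 10, (8,)))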

    PMMNet: A Dual Branch Fusion Network of Point Cloud and Multi-View for Intracranial Aneurysm Classification and Segmentation

    Ruifen Cao, Dongwei Zhang, Pijing Wei, Yun Ding...
    pp. 3137-3147
    Abstract: Intracranial aneurysm (IA) is a vascular disease of the brain arteries caused by pathological vascular dilation, which can result in subarachnoid hemorrhage if ruptured. Automatic classification and segmentation of intracranial aneurysms are essential for their diagnosis and treatment. However, the majority of current research focuses on two-dimensional images, ignoring 3D spatial information that is also critical. In this work, we propose a novel dual-branch fusion network called the Point Cloud and Multi-View Medical Neural Network (PMMNet) for IA classification and segmentation. Specifically, one branch based on 3D point clouds extracts spatial features, whereas the other branch based on multi-view images acquires 2D pixel features. Ultimately, the two types of features are fused for IA classification and segmentation. To extract both local and global features from 3D point clouds, a multilayer perceptron (MLP) and an attention mechanism are used in parallel. In addition, an SPSA module is proposed for multi-view image feature learning, which extracts finer channel and spatial multi-scale features from 2D images. Experiments conducted on the IntrA dataset outperform other state-of-the-art methods, demonstrating that the proposed PMMNet exhibits strong superiority on medical 3D data. We also obtain competitive results on public datasets, including ModelNet40, ModelNet10, and ShapeNetPart, which further validate the robustness and generality of PMMNet.
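
    The dual-branch idea (a point-cloud branch extracting 3D spatial features and a multi-view branch extracting 2D pixel features, fused for prediction) can be sketched as below. All layer sizes and the PointNet-style pooling are editorial assumptions, not the PMMNet architecture or its SPSA module.

    import torch
    import torch.nn as nn

    class DualBranchFusion(nn.Module):
        """Toy fusion of a point-cloud MLP branch and a multi-view CNN branch."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
            self.view_cnn = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 128),
            )
            self.head = nn.Linear(256, n_classes)

        def forward(self, points, views):
            # points: (B, N, 3); views: (B, V, 3, H, W)
            p = self.point_mlp(points).max(dim=1).values                       # global point feature
            b, v = views.shape[:2]
            m = self.view_cnn(views.flatten(0, 1)).view(b, v, -1).mean(dim=1)  # pooled view feature
            return self.head(torch.cat([p, m], dim=-1))

    logits = DualBranchFusion()(torch.randn(2, 1024, 3), torch.randn(2, 4, 3, 64, 64))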