Journal information
Information Fusion
Elsevier Science
ISSN: 1566-2535
Indexed in: EI, ISTP, SCI
    Distributed detection of sparse signals with censoring sensors in clustered sensor networks

    Chengxi Li, Gang Li, Pramod K. Varshney
    18 pages
    Abstract: In this paper, we explore the distributed detection of sparse signals in energy-limited clustered sensor networks (CSNs). For this problem, a centralized detector based on the locally most powerful test (LMPT) methodology, which uses the analog data transmitted by all sensor nodes in the CSN, can be readily realized following prior work. However, the energy consumed by data transmission in the centralized LMPT detector is excessively high, making its implementation impractical in CSNs with a limited energy supply. To address this issue, we propose a new detector that combines the advantages of the censoring and LMPT strategies: both the cluster head (CLH) nodes and the ordinary (ORD) nodes send only data deemed sufficiently informative, and the fusion center (FC) fuses the received data based on the LMPT methodology. The detection performance of the proposed detector, characterized by Fisher information, is analyzed in the asymptotic regime. We also analytically derive the relationship between the detection performance of the proposed censoring-based LMPT (cens-LMPT) detector and the communication rates, both of which are controlled by the censoring thresholds. We present an illustrative example, the detection problem with 2-CSNs (i.e., CSNs in which each cluster contains two nodes), together with the corresponding theoretical analysis and simulation results.
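The censoring idea sketched in this abstract can be illustrated with a toy example: each node transmits its local statistic only when the value falls outside a "no-send" region, saving transmission energy. This is not the paper's cens-LMPT implementation; the thresholds, statistic, and fusion rule below are all hypothetical.

```python
import numpy as np

def censor(local_statistic, lower, upper):
    """Transmit the statistic only if it is deemed informative.

    Hypothetical censoring rule: values inside [lower, upper] are
    considered uninformative and are censored (nothing is sent).
    """
    if lower <= local_statistic <= upper:
        return None  # censored: save transmission energy
    return local_statistic

def fuse(received):
    """Toy fusion rule: sum the statistics that survived censoring."""
    return sum(s for s in received if s is not None)

# Toy usage: ten nodes with Gaussian local statistics.
rng = np.random.default_rng(0)
stats = rng.normal(0.0, 1.0, size=10)
received = [censor(s, -1.0, 1.0) for s in stats]
decision_statistic = fuse(received)
```

Widening the censoring interval lowers the communication rate at the cost of discarding more data, which is the performance/rate trade-off the paper analyzes.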

    A systematic review on affective computing: emotion models, databases, and recent advances

    Yan Wang, Wei Song, Wei Tao, Antonio Liotta, et al.
    34 pages
    Abstract: Affective computing conjoins the research topics of emotion recognition and sentiment analysis, and can be realized with unimodal or multimodal data, consisting primarily of physical information (e.g., text, audio, and visual) and physiological signals (e.g., EEG and ECG). Physical-based affect recognition attracts more researchers owing to the availability of multiple public databases, but it is challenging to reveal inner emotions that are purposely hidden behind facial expressions, audio tones, body gestures, etc. Physiological signals can yield more precise and reliable emotional results; yet, the difficulty of acquiring these signals hinders their practical application. Moreover, by fusing physical information and physiological signals, useful features of emotional states can be obtained to enhance the performance of affective computing models. Whereas existing reviews focus on one specific aspect of affective computing, we provide a systematic survey of its important components: emotion models, databases, and recent advances. First, we introduce two typical emotion models, followed by five kinds of commonly used databases for affective computing. Next, we survey and taxonomize state-of-the-art unimodal affect recognition and multimodal affective analysis in terms of their detailed architectures and performance. Finally, we discuss some critical aspects of affective computing and its applications, and conclude this review by pointing out some of the most promising future directions, such as the establishment of benchmark databases and fusion strategies. The overarching goal of this systematic review is to help academic and industrial researchers understand the recent advances as well as new developments in this fast-paced, high-impact domain.

    Towards a data collection methodology for Responsible Artificial Intelligence in health: A prospective and qualitative study in pregnancy

    A. M. Oprescu, G. Miro-Amarante, L. Garcia-Diaz, V. E. Rey, et al.
    26 pages
    Abstract: A medical field that is increasingly benefiting from Artificial Intelligence (AI) applications is Gynecology and Obstetrics. In previous work, we showed that AI technology combined with obstetric monitoring by physicians can enhance pregnancy health, leading to better pregnancy outcomes and an overall better experience, while also reducing possible long-term effects caused by complications. This work presents a data collection methodology for responsible AI in health, along with a case study in the pregnancy domain. It is a qualitative descriptive study of the preferences and expectations expressed by pregnant women regarding responsible AI and affective computing. A 41-item structured interview was administered to 150 pregnant patients attending prenatal care at Hospital Virgen del Rocio and the Clinic Victoria Rey (Seville, Spain) during October and November 2020. The study reveals substantial interest in intelligent pregnancy solutions among pregnant women. Participants with a lower level of interest reported privacy concerns and a lack of trust in AI solutions. Regarding intelligent solutions based on affective computing specifically, most participants responded positively, and no significant difference was found on this matter between women with a healthy pregnancy and those with a high-risk pregnancy. Our findings also suggest a high demand for personalized intelligent solutions among participants. On the topic of sharing pregnancy data with the healthcare provider in support of scientific research, pregnant women attending public healthcare services were found to be more likely to share their data when the provider was a public healthcare system rather than a private entity. Pregnant women interested in using an AI pregnancy application strongly agree that it needs to be responsible, trustworthy, useful, and safe. Likewise, we found that pregnant women would reconsider their concerns and feel more confident if the intelligent solution explained its decisions and recommendations, as promoted by the explainable AI (XAI) approach.

    PIAFusion: A progressive infrared and visible image fusion network based on illumination aware

    Linfeng Tang, Jiteng Yuan, Hao Zhang, Xingyu Jiang, et al.
    14 pages
    Abstract: Infrared and visible image fusion aims to synthesize a single fused image containing salient targets and abundant texture details even under extreme illumination conditions. However, existing image fusion algorithms fail to take the illumination factor into account in the modeling process. In this paper, we propose a progressive, illumination-aware image fusion network, termed PIAFusion, which adaptively maintains the intensity distribution of salient targets and preserves texture information in the background. Specifically, we design an illumination-aware sub-network to estimate the illumination distribution and calculate the illumination probability. Moreover, we utilize the illumination probability to construct an illumination-aware loss to guide the training of the fusion network. The cross-modality differential-aware fusion module and the halfway fusion strategy fully integrate common and complementary information under the constraint of the illumination-aware loss. In addition, a new benchmark dataset for infrared and visible image fusion, i.e., Multi-Spectral Road Scenarios (available at https://github.com/Linfeng-Tang/MSRS), is released to support network training and comprehensive evaluation. Extensive experiments demonstrate the superiority of our method over state-of-the-art alternatives in terms of target maintenance and texture preservation. In particular, our progressive fusion framework can integrate meaningful information from the source images around the clock, according to the illumination conditions. Furthermore, the application to semantic segmentation demonstrates the potential of our PIAFusion for high-level vision tasks. Our code will be available at https://github.com/Linfeng-Tang/PIAFusion.
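The abstract does not give the exact form of the illumination-aware loss, so the following is only a plausible sketch of the general idea: an illumination probability (from the illumination sub-network) weights how strongly the fused image is pulled toward the visible versus the infrared source. All function and parameter names here are hypothetical, not from PIAFusion.

```python
import numpy as np

def illumination_aware_loss(fused, visible, infrared, p_day):
    """Hypothetical illumination-weighted intensity loss.

    p_day is a scalar probability (e.g., predicted by an illumination
    sub-network) that the scene is well lit. Under good illumination
    the fused image is pulled toward the visible image; under poor
    illumination, toward the infrared image.
    """
    l_vis = np.mean(np.abs(fused - visible))   # fidelity to visible
    l_ir = np.mean(np.abs(fused - infrared))   # fidelity to infrared
    return p_day * l_vis + (1.0 - p_day) * l_ir
```

With this weighting, the same loss function adapts to daytime and nighttime inputs, which is one way a network could fuse "around the clock" as the abstract describes.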

    Multi-source information fusion for smart health with artificial intelligence

    Xiaohui Tao, Juan D. Velasquez
    3 pages
    Abstract: The rapid developments in Artificial Intelligence (AI) present an opportunity for the research community to provide and advance Smart Health for the well-being of our society. Considering the availability of multi-source information and heterogeneous data in the era of Big Data, this Special Issue explores the theories, methodologies, and possible breakthroughs in designing and adopting information fusion for Smart Health, powered by recent AI advances. Specifically, this Special Issue focuses on three questions: how to achieve human-level intelligence in Smart Health; how Smart Health can benefit from a multi-disciplinary balance; and how to harness the power of Big Data for Smart Health. The Special Issue has been a great success, with a small number of high-quality studies carefully selected from an overwhelming number of contributions.

    Double-cohesion learning based multiview and discriminant palmprint recognition

    Shuping Zhao, Jigang Wu, Lunke Fei, Bob Zhang, et al.
    14 pages
    Abstract: Palmprint recognition has been widely used in security authentication. However, most existing palmprint representation methods focus on a specific application scenario, using hand-crafted features from a single view. If these features weaken as the application scenario changes, recognition performance degrades. To address this problem, we propose to comprehensively exploit palmprint features from multiple views to improve recognition performance in generic scenarios. In this paper, a novel double-cohesion learning based multiview and discriminant palmprint recognition (DC_MDPR) method is proposed, which imposes a double-cohesion strategy to reduce the inter-view margins for each subject and the intra-class margins for each view. In this way, for each subject, the features from different views are drawn closer to each other in the binary-label space. Meanwhile, for each view, features sharing the same label information move towards each other under a neighbor-graph regularization. The proposed method can be flexibly applied to any type of palmprint feature fusion. Moreover, it represents the multiview features in a low-dimensional subspace, effectively reducing the computational complexity. Experimental results on various palmprint databases show that the proposed method consistently achieves the best recognition performance compared with other state-of-the-art algorithms.

    Cross-sensor periocular biometrics in a global pandemic: Comparative benchmark and novel multialgorithmic approach

    Fernando Alonso-Fernandez, Kiran B. Raja, R. Raghavendra, Christoph Busch, et al.
    21 pages
    Abstract: The massive availability of cameras and personal devices results in wide variability between imaging conditions, producing large intra-class variations and a significant performance drop when images from heterogeneous environments are compared for person recognition purposes. However, as biometric solutions are extensively deployed, it will be common to replace acquisition hardware as it is damaged or as newer designs appear, or to exchange information between agencies or applications operating in different environments. Furthermore, variations in imaging spectral bands can also occur. For example, face images are typically acquired in the visible (VIS) spectrum, while iris images are usually captured in the near-infrared (NIR) spectrum. However, cross-spectrum comparison may be needed if, for example, a face image obtained from a surveillance camera needs to be compared against a legacy database of iris imagery. Here, we propose a multialgorithmic approach to cope with periocular images captured with different sensors. With face masks on the front line in the fight against the COVID-19 pandemic, periocular recognition is regaining popularity, since it is the only region of the face that remains visible. As a solution to the aforementioned cross-sensor issues, we integrate different biometric comparators using a score fusion scheme based on linear logistic regression. This approach is trained to improve the discriminating ability and, at the same time, to encourage the fused scores to be represented as log-likelihood ratios. This allows easy interpretation of the output scores and the use of Bayes thresholds for optimal decision-making, since scores from different comparators lie in the same probabilistic range. We evaluate our approach in the context of the 1st Cross-Spectral Iris/Periocular Competition, whose aim was to compare person recognition approaches when periocular data from visible and near-infrared images are matched. The proposed fusion approach achieves reductions in the error rates of up to 30%-40% in cross-spectral NIR-VIS comparisons with respect to the best individual system, leading to an EER of 0.2% and an FRR of just 0.47% at FAR = 0.01%. It also represents the best overall approach of the aforementioned competition. Experiments are also reported with a database of VIS images from two different smartphones, achieving even larger relative improvements and similar performance figures. We also discuss the proposed approach in terms of template size and computation times, with the most computationally heavy comparator playing an important role in the results. Lastly, the proposed method is shown to outperform other popular fusion approaches in multibiometrics, such as the average of scores, Support Vector Machines, and Random Forest.
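The linear logistic regression score fusion described in this abstract can be illustrated with a minimal sketch: comparator scores are combined linearly, and the weights are fit so that the fused score behaves like a log-odds (log-likelihood-ratio-like) quantity. This toy gradient-descent fit on synthetic scores is not the authors' trained system; the data and hyperparameters are hypothetical.

```python
import numpy as np

def train_llr_fusion(scores, labels, lr=0.1, epochs=500):
    """Fit w, b so that sigmoid(scores @ w + b) models P(genuine | scores).

    Minimal batch gradient descent on the logistic loss; a calibrated
    solver would be used in practice. scores: (n, k) matrix of scores
    from k comparators; labels: 0 (impostor) / 1 (genuine).
    """
    n, k = scores.shape
    w, b = np.zeros(k), 0.0
    for _ in range(epochs):
        z = scores @ w + b
        p = 1.0 / (1.0 + np.exp(-z))      # predicted P(genuine)
        grad = p - labels                 # logistic-loss residual
        w -= lr * (scores.T @ grad) / n
        b -= lr * grad.mean()
    return w, b

def fused_llr(scores, w, b):
    """Fused score: a log-odds, i.e., log-likelihood-ratio-like value."""
    return scores @ w + b

# Toy usage: synthetic scores from two comparators.
rng = np.random.default_rng(1)
genuine = rng.normal(1.0, 0.3, size=(50, 2))
impostor = rng.normal(-1.0, 0.3, size=(50, 2))
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(50), np.zeros(50)])
w, b = train_llr_fusion(X, y)
```

Because the fused output is on a log-odds scale, a Bayes threshold (e.g., 0 for equal priors and costs) can be applied directly, which is the interpretability benefit the abstract highlights.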