Journal Information
Information Fusion
Publisher: Elsevier Science
ISSN: 1566-2535
Indexed in: EI, ISTP, SCI
Status: officially published

    Supervised contrastive learning over prototype-label embeddings for network intrusion detection

    Lopez-Martin, Manuel; Sanchez-Esguevillas, Antonio; Arribas, Juan Ignacio; Carro, Belen; ...
    29 pages
    Abstract: Contrastive learning makes it possible to establish similarities between samples by comparing their distances in an intermediate representation space (embedding space), using loss functions designed to attract similar samples and repel dissimilar ones. The distance comparison is based exclusively on the sample features. We propose a novel contrastive learning scheme that includes the labels in the same embedding space as the features and performs the distance comparison between features and labels in this shared embedding space. Under this scheme, the sample features should be close to their ground-truth (positive) label and away from the other (negative) labels. This allows a supervised classifier to be built on contrastive learning. Each embedded label assumes the role of a class prototype in embedding space, with the sample features that share the label gathering around it. The aim is to separate the label prototypes while minimizing the distance between each prototype and its same-class samples. A novel set of loss functions is proposed with this objective. Loss minimization drives the allocation of sample features and labels in embedding space. The loss functions and their associated training and prediction architectures are analyzed in detail, along with different strategies for label separation. The proposed scheme drastically reduces the number of pair-wise comparisons, thus improving model performance. To further reduce the number of pair-wise comparisons, this initial scheme is extended by replacing the set of negative labels with its best single representative: either the negative label nearest to the sample features or the centroid of the cluster of negative labels. This idea yields a new subset of models, which are analyzed in detail. The outputs of the proposed models are the distances (in embedding space) between each sample and the label prototypes. These distances can be used to perform classification (minimum-distance label), feature dimensionality reduction (using the distances and the embeddings instead of the original features), and data visualization (with 2D or 3D embeddings). Although the proposed models are generic, their application and performance evaluation are carried out here for network intrusion detection, a domain characterized by noisy and unbalanced labels and a challenging classification of the various types of attacks. Empirical results of the model applied to intrusion detection are presented in detail for two well-known intrusion detection datasets, together with a thorough set of classification and clustering performance evaluation metrics.
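The minimum-distance prediction and the nearest-negative variant described in this abstract can be sketched as follows. This is a minimal NumPy illustration of the general idea; the function names, the hinge form of the loss, and the margin value are our assumptions, not the paper's exact loss formulation.

```python
import numpy as np

def prototype_distances(features, prototypes):
    """Euclidean distances between each embedded sample (n, d)
    and each label prototype (c, d) -> matrix of shape (n, c)."""
    diff = features[:, None, :] - prototypes[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def contrastive_prototype_loss(features, labels, prototypes, margin=1.0):
    """Attract each sample to its ground-truth prototype and repel it
    from the nearest negative prototype (the 'best single representative'
    variant), here expressed as a hinge-style margin loss."""
    d = prototype_distances(features, prototypes)   # (n, c)
    n = len(labels)
    pos = d[np.arange(n), labels]                   # distance to own label
    d_neg = d.copy()
    d_neg[np.arange(n), labels] = np.inf            # mask out the positive label
    nearest_neg = d_neg.min(axis=1)                 # nearest negative prototype
    return np.mean(np.maximum(0.0, margin + pos - nearest_neg))

def predict(features, prototypes):
    """Minimum-distance classification over the label prototypes."""
    return prototype_distances(features, prototypes).argmin(axis=1)
```

Note how the nearest-negative representative reduces the comparisons per sample from one against each of the c-1 negative labels to a single comparison, which is the source of the reduction in pair-wise comparisons the abstract mentions.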

    Multi-modal bioelectrical signal fusion analysis based on different acquisition devices and scene settings: Overview, challenges, and novel orientation

    Li, Jingjing; Wang, Qiang
    19 pages
    Abstract: Multi-modal fusion combines information from multiple modalities to overcome the incompleteness of any single modality, realizing the complementarity of modal information and enhancing feature representation. Multi-modal medical signal fusion algorithms and acquisition equipment play an important role in improving the recognition accuracy of brain diseases. This paper compares the existing data fusion methods and explores the fusion of multi-modal bioelectrical signals, including: (1) the challenges and shortcomings of the signal acquisition phase, examined from the perspective of acquisition equipment and scene settings; (2) an analysis of five multi-modal fusion forms; (3) a brief review of fusion methods and evaluation indexes; (4) the research status and challenges of multi-modal fusion in the fields of spatial cognitive impairment and biometrics; (5) the advantages and challenges of multi-modal fusion. The conclusion of this review is that research on multi-modal medical signal fusion is still at an early stage, and some studies have shown that multi-modal fusion is meaningful for medical research. However, the fusion algorithms and fusion strategies need improvement. While drawing on the comparatively mature image fusion algorithms, we need to develop fusion algorithms and strategies suited to medical signals and strengthen their feasibility in clinical applications.

    Multi-exposure image fusion via deep perceptual enhancement

    Han, Dong; Li, Liang; Guo, Xiaojie; Ma, Jiayi; ...
    15 pages
    Abstract: Due to the huge gap between the high dynamic range of natural scenes and the limited (low) range of consumer-grade cameras, a single-shot image can hardly record all the information of a scene. Multi-exposure image fusion (MEF) has been an effective way to solve this problem by integrating multiple shots with different exposures; it is in nature an enhancement problem. During fusion, two perceptual factors, informativeness and visual realism, should be considered simultaneously. To achieve this goal, this paper presents a deep perceptual enhancement network for MEF, termed DPE-MEF. Specifically, the proposed DPE-MEF contains two modules: one gathers content details from the inputs, while the other takes care of color mapping/correction for the final results. Extensive experiments and ablation studies are conducted to show the efficacy of our design and demonstrate its superiority over other state-of-the-art alternatives both quantitatively and qualitatively. We also verify the flexibility of the proposed strategy in improving the exposure quality of single images. Moreover, our DPE-MEF can fuse 720p images at more than 60 pairs per second on an Nvidia 2080Ti GPU, making it attractive for practical use. Our code is available at https://github.com/dongdong4fei/DPE-MEF.
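The two-stage structure described in this abstract (gather content details, then correct color) can be illustrated with a classical, non-learned stand-in. The well-exposedness weighting and the gamma correction below are purely illustrative assumptions; DPE-MEF's modules are learned networks whose internals the abstract does not specify.

```python
import numpy as np

def detail_fusion(exposures):
    """Stand-in for the detail-gathering module: fuse a stack of
    differently exposed grayscale images (each (h, w), values in [0, 1])
    with well-exposedness weights (pixels near 0.5 weighted highest)."""
    stack = np.stack(exposures)                     # (k, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)   # normalize over exposures
    return (weights * stack).sum(axis=0)

def color_correction(fused, gamma=0.9):
    """Stand-in for the color mapping/correction module: a simple
    gamma adjustment, keeping values in [0, 1]."""
    return np.clip(fused, 0.0, 1.0) ** gamma

def fuse(exposures):
    """Two-stage pipeline mirroring the abstract's structure:
    content details first, then color mapping for the final result."""
    return color_correction(detail_fusion(exposures))
```

The point of the sketch is only the division of labor: the first stage decides, per pixel, which exposure contributes the information, and the second stage adjusts the tonal/color rendition of the fused result.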

    Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence

    Holzinger, Andreas; Dehmer, Matthias; Emmert-Streib, Frank; Cucchiara, Rita; ...
    16 pages
    Abstract: Medical artificial intelligence (AI) systems have been remarkably successful, even outperforming humans at certain tasks. There is no doubt that AI is important to improving human health in many ways and will disrupt various medical workflows in the future. To use AI to solve problems in medicine beyond the lab, in routine environments, we need to do more than just improve the performance of existing AI methods. Robust AI solutions must be able to cope with imprecision and with missing and incorrect information, and must explain both the result and the process by which it was obtained to a medical expert. Using conceptual knowledge as a guiding model of reality can help develop more robust, explainable, and less biased machine learning models that can ideally learn from less data. Achieving these goals will require an orchestrated effort that combines three complementary Frontier Research Areas: (1) Complex Networks and their Inference, (2) Graph causal models and counterfactuals, and (3) Verification and Explainability methods. The goal of this paper is to describe these three areas from a unified view and to motivate how information fusion, in a comprehensive and integrative manner, can not only help bring these three areas together but also play a transformative role in bridging the gap between research and practical applications in the context of future trustworthy medical AI. This makes it imperative to include ethical and legal aspects as a cross-cutting discipline, because all future solutions must not only be ethically responsible but also legally compliant.

    Multi-feature, multi-modal, and multi-source social event detection: A comprehensive survey

    Afyouni, Imad; Al Aghbari, Zaher; Razack, Reshma Abdul
    30 pages
    Abstract: The tremendous growth of event dissemination over social networks makes it very challenging to accurately discover and track exciting events, as well as their evolution and scope over space and time. People have migrated to social platforms and messaging apps, which represents an opportunity to make more accurate predictions of social developments by translating event-related streams into meaningful insights. However, the huge spread of 'noise' from unverified social media sources makes it difficult to accurately detect and track events. Over the last decade, multiple surveys on event detection from social media have been presented, with the aim of highlighting the different NLP, data management, and machine learning techniques used to discover specific types of events, such as social gatherings, natural disasters, and emergencies, among others. However, these surveys cover only a few dimensions of event detection, such as knowledge discovery from a single modality or a single social media platform, or apply only to one specific language. In this survey paper, we introduce multiple perspectives for event detection in the big social data era. This survey thoroughly investigates and summarizes the significant progress in social event detection and visualization techniques, emphasizing crucial challenges ranging from the management, fusion, and mining of big social data, to the applicability of these methods across different platforms, multiple languages and dialects rather than a single language, and multiple modalities. The survey also focuses on advanced features required for event extraction, such as spatial and temporal scopes, location inference from multi-modal data (e.g., text or images), and semantic analysis. Application-oriented challenges and opportunities are also discussed. Finally, quantitative and qualitative experimental procedures and results are presented to illustrate the effectiveness of, and gaps in, existing works.