Journal Information
Information Fusion
Publisher: Elsevier Science
ISSN: 1566-2535
Indexed in: EI, ISTP, SCI
Status: officially published

    Deriving the personalized individual semantics of linguistic information from flexible linguistic preference relations

    Jiang, Le; Liu, Hongbin; Ma, Yue; Li, Yongfeng; et al.
    17 pages
    Abstract: In group decision making, flexible linguistic preference relations (FLPRs) are very useful, with pairwise comparisons taking the form of flexible linguistic expressions (FLEs). Because different decision makers understand words differently, this paper investigates the personalized individual semantics (PISs) of the linguistic information in FLPRs. Two optimization models are constructed to compute the linguistic distribution that is closest to an incomplete FLE. The FLPRs are then transformed into fuzzy preference relations using optimization models that maximize the consistency and consensus of the fuzzy preference relations; the PISs of linguistic terms and of subsets of the linguistic term set are obtained in this process. A group decision making model based on FLPRs is presented, and a green supplier selection problem in the automotive industry is solved with the proposed model. A comparative analysis is presented to show the feasibility of the group decision making model.
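
    The abstract describes learning personalized numerical values for linguistic terms and then mapping linguistic preference information onto a fuzzy preference relation. The Python sketch below illustrates only that final mapping step under assumed inputs: the seven-term scale, the "personalized" values, and the toy preference relation are hypothetical, and the paper's actual optimization models (which enforce consistency and consensus) are not implemented here.

    # Minimal sketch (assumed inputs, not the paper's optimization models):
    # map a linguistic preference relation to a fuzzy preference relation
    # once personalized numerical values for the terms are available.
    terms = ["s0", "s1", "s2", "s3", "s4", "s5", "s6"]  # hypothetical 7-term scale

    # Hypothetical personalized individual semantics: term -> value in [0, 1].
    # In the paper these values come from consistency/consensus-driven models.
    personalized_scale = {"s0": 0.00, "s1": 0.14, "s2": 0.32, "s3": 0.50,
                          "s4": 0.66, "s5": 0.83, "s6": 1.00}

    # Toy linguistic preference relation over three alternatives
    # (diagonal s3 = indifference).
    lpr = [["s3", "s5", "s2"],
           ["s1", "s3", "s4"],
           ["s4", "s2", "s3"]]

    def to_fuzzy_preference(lpr, scale):
        """Replace each linguistic term by its personalized numerical value."""
        return [[scale[term] for term in row] for row in lpr]

    for row in to_fuzzy_preference(lpr, personalized_scale):
        print(row)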

    Information fusion for edge intelligence: A survey

    Zhang, Yin; Jiang, Chi; Yue, Binglei; Wan, Jiafu; et al.
    16 pages
    Abstract: Edge intelligence is expected to enable a new paradigm that integrates edge computing and artificial intelligence. However, due to the multisource nature, heterogeneity, and large scale of sensory data, the data processing and decision-making capacity of the edge must be improved. Hence, this paper argues that information fusion is an important technique for powering edge intelligence in terms of collection, communication, computing, caching, control, and collaboration. Specifically, it provides a comprehensive investigation of four representative scenarios assisted by information fusion at the edge: multisource information fusion, real-time information fusion, event-driven information fusion, and context-aware information fusion. Moreover, it discusses future directions and open issues in this field.

    A fusion spatial attention approach for few-shot learning

    Song, Heda; Deng, Bowen; Pound, Michael; Ozcan, Ender; et al.
    16 pages
    Abstract: Few-shot learning is a challenging problem in computer vision that aims to learn a new visual concept from very limited data. A core issue is the large amount of uncertainty introduced by the small training set; for example, the few images may include cluttered backgrounds or objects at different scales. Existing approaches mostly address this problem from either the original image space or the embedding space by using meta-learning; to the best of our knowledge, none of them tackle it from both spaces jointly. To this end, we propose a fusion spatial attention approach that performs spatial attention in both the image and embedding spaces. In the image space, we employ a Saliency Object Detection (SOD) module to extract the saliency map of an image and provide it to the network as an additional channel. In the embedding space, we propose an Adaptive Pooling (Ada-P) module tailored to few-shot learning, which introduces a meta-learner that adaptively fuses the local features of the feature maps for each individual embedding. The fusion process assigns different pooling weights to features at different spatial locations; weighted pooling can then be conducted over an embedding to fuse local information, avoiding the loss of useful information by taking the spatial importance of the features into account. The SOD and Ada-P modules are plug-and-play and can be incorporated into various existing few-shot learning approaches. We empirically demonstrate that designing spatial attention methods for few-shot learning is a nontrivial task and that our method handles it effectively. We evaluate our method using both shallow and deeper networks on three widely used few-shot learning benchmarks, miniImageNet, tieredImageNet and CUB, and demonstrate very competitive performance.
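
    At its core, the adaptive pooling idea described above amounts to learning per-location pooling weights and averaging local features with them. Below is a minimal PyTorch sketch of that idea under simplifying assumptions: a single 1x1 convolution stands in for the weight-producing meta-learner, and the class name and tensor shapes are illustrative. It is not the authors' Ada-P implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatiallyWeightedPooling(nn.Module):
        """Sketch of adaptive, spatially weighted pooling: a 1x1 conv scores
        every spatial location, and the embedding is the weighted average of
        the local features. The paper's Ada-P module is more elaborate."""

        def __init__(self, in_channels):
            super().__init__()
            self.weight_net = nn.Conv2d(in_channels, 1, kernel_size=1)

        def forward(self, feature_map):
            b, c, h, w = feature_map.shape
            scores = self.weight_net(feature_map).view(b, 1, h * w)  # (B, 1, HW)
            weights = F.softmax(scores, dim=-1)       # pooling weights sum to 1
            feats = feature_map.view(b, c, h * w)                    # (B, C, HW)
            return (feats * weights).sum(dim=-1)      # (B, C) pooled embedding

    # Example: pool 10x10 feature maps from a 5-image support set.
    pool = SpatiallyWeightedPooling(in_channels=64)
    embeddings = pool(torch.randn(5, 64, 10, 10))
    print(embeddings.shape)  # torch.Size([5, 64])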

    Multimodal Co-learning: Challenges, applications with datasets, recent advances and future directions

    Rahate, Anil; Walambe, Rahee; Ramanna, Sheela; Kotecha, Ketan; et al.
    37 pages
    Abstract: Multimodal deep learning systems that employ multiple modalities, such as text, image, audio, and video, show better performance than unimodal (single-modality) systems. Multimodal machine learning involves multiple aspects: representation, translation, alignment, fusion, and co-learning. The current state of multimodal machine learning assumes that all modalities are present, aligned, and noiseless during training and testing. In real-world tasks, however, one or more modalities may be missing, noisy, lacking annotated data, or labeled unreliably, and may be scarce during training, testing, or both. This challenge is addressed by a learning paradigm called multimodal co-learning, in which the modeling of a resource-poor modality is aided by exploiting knowledge from another, resource-rich modality through the transfer of knowledge between modalities, including their representations and predictive models. Because co-learning is an emerging area, there are no dedicated reviews explicitly focusing on all the challenges it addresses. To that end, this work provides a comprehensive survey of the emerging area of multimodal co-learning, which has not yet been explored in its entirety. We review implementations that overcome one or more co-learning challenges without explicitly treating them as such, and we present a comprehensive taxonomy of multimodal co-learning based on the challenges addressed and the associated implementations. The various techniques, including the latest ones, are reviewed along with applications and datasets. Additionally, we review techniques that appear similar to multimodal co-learning but are used primarily in unimodal or multi-view learning, and we document the distinction between them. Our final goal is to discuss challenges and perspectives, along with the important ideas and directions for future work, that we hope will benefit the entire research community focusing on this exciting domain.

    A multi-representation re-ranking model for Personalized Product Search

    Bassani, Elias; Pasi, Gabriella
    10 pages
    Abstract: In recent years, a multitude of e-commerce websites have arisen. Product Search is a fundamental part of these websites and is often managed as a traditional retrieval task. However, Product Search has the ultimate goal of satisfying specific and personal user needs, leading users to find and purchase what they are looking for based on their preferences. To maximize user satisfaction, Product Search should therefore be treated as a personalized task. In this paper, we propose and evaluate a simple yet effective personalized re-ranking approach based on the fusion of the relevance score computed by a well-known ranking model, namely BM25, with the scores derived from multiple user/item representations. Our main contributions are: (1) we propose a score fusion-based approach for personalized re-ranking that leverages multiple user/item representations; (2) our approach accounts for both content-based features and collaborative information (i.e., features extracted from the user-item interaction graph); (3) the proposed approach is fast and scalable, can easily be added on top of any search engine, and can be extended to include additional features. The comparative evaluations show that our model can significantly increase the retrieval effectiveness of the underlying retrieval model and, in the great majority of cases, outperforms modern neural-network-based personalized retrieval models for Product Search.
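
    As a rough illustration of the score-fusion idea in this abstract (not the authors' exact model), the Python sketch below re-ranks BM25 candidates by adding weighted cosine similarities between hypothetical user and item representations; all vectors, scores, and fusion weights are made-up examples.

    import numpy as np

    def fused_score(bm25_score, user_vecs, item_vecs, weights):
        """Sketch of score fusion for personalized re-ranking: combine a BM25
        relevance score with cosine similarities between several user/item
        representation pairs (e.g., content-based and graph-based)."""
        sims = [float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
                for u, v in zip(user_vecs, item_vecs)]
        # weights[0] scales BM25; the rest scale each representation's similarity.
        return weights[0] * bm25_score + sum(w * s for w, s in zip(weights[1:], sims))

    # Re-rank BM25 candidates with two hypothetical user/item representations.
    rng = np.random.default_rng(0)
    user_reprs = [rng.normal(size=32), rng.normal(size=16)]      # content + graph
    bm25 = {"item_a": 12.4, "item_b": 10.9}                      # item -> BM25 score
    item_reprs = {i: [rng.normal(size=32), rng.normal(size=16)] for i in bm25}

    ranked = sorted(bm25, reverse=True,
                    key=lambda i: fused_score(bm25[i], user_reprs, item_reprs[i],
                                              weights=[1.0, 0.5, 0.5]))
    print(ranked)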