Journal information
IEEE Transactions on Image Processing
Publisher: Institute of Electrical and Electronics Engineers
ISSN: 1057-7149
Indexed in: SCI, EI
Officially published
Coverage period

    Portrait Shadow Removal Using Context-Aware Illumination Restoration Network

    Jiangjian Yu, Ling Zhang, Qing Zhang, Qifei Zhang...
    pp. 1-15
    Abstract: Portrait shadow removal is a challenging task due to the complex surface of the face. Although existing work in this field has made substantial progress, these methods tend to overlook information in the background areas. However, this background information not only contains important illumination cues but also plays a pivotal role in achieving lighting harmony between the face and the background after shadow elimination. In this paper, we propose a Context-aware Illumination Restoration Network (CIRNet) for portrait shadow removal. Our CIRNet consists of three stages. First, the Coarse Shadow Removal Network (CSRNet) mitigates the illumination discrepancies between shadow and non-shadow areas. Next, the Area-aware Shadow Restoration Network (ASRNet) predicts the illumination characteristics of shadowed areas by using the background context and the non-shadow portrait context as references. Lastly, we introduce a Global Fusion Network to adaptively merge contextual information from different areas and generate the final shadow removal result. This approach leverages the illumination information from the background region while ensuring more consistent overall illumination in the generated images. Our approach can also be extended to high-resolution portrait shadow removal and portrait specular highlight removal. In addition, we construct the first real facial shadow dataset for portrait shadow removal, consisting of 6200 pairs of facial images. Qualitative and quantitative comparisons demonstrate the advantages of our proposed dataset as well as our method.
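    To make the three-stage pipeline concrete, here is a minimal sketch of how coarse removal, area-aware restoration, and global fusion could compose. The sub-network internals are not specified in the abstract, so plain convolutional stacks stand in for CSRNet, ASRNet, and the fusion network; the mask inputs, channel widths, and module names are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class CIRNetSketch(nn.Module):
    """Three-stage composition: coarse removal -> area-aware restoration -> fusion."""
    def __init__(self):
        super().__init__()
        self.csrnet = nn.Sequential(conv_block(4, 32), nn.Conv2d(32, 3, 3, padding=1))
        self.asrnet = nn.Sequential(conv_block(9, 32), nn.Conv2d(32, 3, 3, padding=1))
        self.fusion = nn.Sequential(conv_block(6, 32), nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, img, shadow_mask, face_mask):
        # Stage 1: narrow the illumination gap between shadow and non-shadow areas.
        coarse = self.csrnet(torch.cat([img, shadow_mask], dim=1))
        # Stage 2: restore shadowed areas, referencing background and
        # non-shadow portrait pixels for illumination cues.
        background = coarse * (1 - face_mask)
        lit_face = coarse * face_mask * (1 - shadow_mask)
        restored = self.asrnet(torch.cat([coarse, background, lit_face], dim=1))
        # Stage 3: adaptively merge the two estimates into the final result.
        return self.fusion(torch.cat([coarse, restored], dim=1))

img = torch.rand(1, 3, 256, 256)
shadow = (torch.rand(1, 1, 256, 256) > 0.5).float()
face = (torch.rand(1, 1, 256, 256) > 0.5).float()
out = CIRNetSketch()(img, shadow, face)   # (1, 3, 256, 256)
```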

    Saliency Segmentation Oriented Deep Image Compression With Novel Bit Allocation

    Yuan Li, Wei Gao, Ge Li, Siwei Ma...
    pp. 16-29
    Abstract: Image compression distortion can degrade the performance of machine analysis tasks; recent years have therefore witnessed fast progress in deep image compression methods optimized for machine perception. However, such investigation is still lacking for saliency segmentation. First, we propose a deep compression network that increases the local signal fidelity of image pixels important for saliency segmentation, unlike existing methods that back-propagate the loss of the analysis network. By this means, the two types of networks can be decoupled, improving the compatibility of the proposed compression method with diverse saliency segmentation networks. Second, pixel-level bit weights are modeled with a probability distribution in the proposed bit allocation method. The ascending cosine roll-down (ACRD) function allocates bits to important pixels, which fits the view that saliency segmentation is essentially a pixel-level binary classification task. Third, the compression network is trained without the help of saliency segmentation: latent representations are decomposed into base and enhancement channels. Base channels are retained over the whole image, while enhancement channels are used only for important pixels, so more bits can benefit saliency segmentation via the enhancement channels. Extensive experimental results demonstrate that the proposed method saves an average of 10.34% bitrate compared with the state-of-the-art deep image compression method, where rate-accuracy (R-A) performance is evaluated on sixteen downstream saliency segmentation networks with five conventional salient object detection (SOD) datasets. The code will be available at: https://openi.pcl.ac.cn/OpenAICoding/SaliencyIC and https://github.com/AkeLiLi/SaliencyIC.
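    The abstract does not give the exact form of the ACRD function, but its role, rolling the per-pixel bit weight down from important to unimportant pixels, can be sketched as a cosine over the saliency rank, together with the base/enhancement channel split. Everything below (the rank-based parameterization, the keep ratio, the channel counts) is an assumption for illustration.

```python
import numpy as np

def acrd_weights(saliency_prob, keep_ratio=0.2):
    """saliency_prob: (H, W) in [0, 1]; returns per-pixel bit weights in [0, 1]."""
    flat = saliency_prob.ravel()
    order = np.argsort(-flat)                       # descending importance
    ranks = np.empty_like(order)
    ranks[order] = np.arange(flat.size)
    # top keep_ratio of pixels keep weight 1; the rest roll down with a cosine
    t = np.clip((ranks / flat.size - keep_ratio) / (1 - keep_ratio), 0, 1)
    return (0.5 * (1 + np.cos(np.pi * t))).reshape(saliency_prob.shape)

# Enhancement channels of the latent are kept only where the weight is high,
# while base channels cover the whole image (the 128/64 split is illustrative).
w = acrd_weights(np.random.rand(64, 64))
latent = np.random.randn(192, 64, 64)
base, enh = latent[:128], latent[128:] * (w > 0.5)
```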

    PTH-Net: Dynamic Facial Expression Recognition Without Face Detection and Alignment

    Min Li, Xiaoqin Zhang, Tangfei Liao, Sheng Lin...
    pp. 30-43
    Abstract: The Pyramid Temporal Hierarchy Network (PTH-Net) is a new paradigm for dynamic facial expression recognition that is applied directly to raw videos, without face detection and alignment. Unlike the traditional paradigm, which focuses only on facial areas and often overlooks valuable information such as body movements, PTH-Net preserves more critical information. It does this by distinguishing between backgrounds and human bodies at the feature level, offering greater flexibility as an end-to-end network. Specifically, PTH-Net utilizes a pre-trained backbone to extract multiple general video-understanding features at various temporal frequencies, forming a temporal feature pyramid. It then further expands this temporal hierarchy through differentiated parameter sharing and downsampling, ultimately refining emotional information under the supervision of expression temporal-frequency invariance. Additionally, PTH-Net features an efficient Scalable Semantic Distinction layer that enhances feature discrimination, helping to better separate target expressions from non-target ones in the video. Finally, extensive experiments demonstrate that PTH-Net performs excellently on eight challenging benchmarks, with lower computational costs than previous methods. The source code is available at https://github.com/lm495455/PTH-Net.
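    A minimal sketch of the temporal-feature-pyramid idea: per-frame backbone features are refined and repeatedly downsampled along time, with parameters shared across levels. The shared temporal convolution and max-pooling below only approximate the paper's differentiated parameter sharing; dimensions and module names are assumptions.

```python
import torch
import torch.nn as nn

class TemporalPyramidSketch(nn.Module):
    def __init__(self, dim=256, levels=3):
        super().__init__()
        self.shared = nn.Conv1d(dim, dim, kernel_size=3, padding=1)  # shared temporal conv
        self.levels = levels

    def forward(self, feats):                 # feats: (B, C, T) per-frame features
        pyramid = []
        x = feats
        for _ in range(self.levels):
            x = torch.relu(self.shared(x))    # refine at the current temporal frequency
            pyramid.append(x)
            x = nn.functional.max_pool1d(x, kernel_size=2)  # halve the frame rate
        return pyramid                        # temporal hierarchy, fine to coarse

pyr = TemporalPyramidSketch()(torch.rand(2, 256, 32))  # 32 frames -> lengths 32, 16, 8
```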

    GeodesicPSIM: Predicting the Quality of Static Mesh With Texture Map via Geodesic Patch Similarity

    Qi Yang, Joel Jung, Xiaozhong Xu, Shan Liu...
    pp. 44-59
    Abstract: Static meshes with texture maps have attracted considerable attention in both industrial manufacturing and academic research, leading to an urgent need for effective and robust objective quality evaluation. However, current model-based static mesh quality metrics (i.e., metrics that directly use the raw data of the static mesh to extract features and predict quality) have obvious limitations: most of them consider only geometry information while ignoring color information, and they impose strict constraints on the mesh's geometric topology. Other metrics, such as image-based and point-based metrics, are easily influenced by preprocessing algorithms (e.g., projection and sampling), hampering their ability to perform at their best. In this paper, we propose Geodesic Patch Similarity (GeodesicPSIM), a novel model-based metric that accurately predicts human perception of quality for static meshes. After selecting a group of keypoints, 1-hop geodesic patches are constructed based on both the reference and distorted meshes, cleaned by an effective mesh cleaning algorithm. A two-step patch cropping algorithm and a patch texture mapping module refine the size of the 1-hop geodesic patches and build the relationship between the mesh geometry and color information, resulting in 1-hop textured geodesic patches. Three types of features are extracted to quantify the distortion: patch color smoothness, patch discrete mean curvature, and patch pixel color average and variance. To the best of our knowledge, GeodesicPSIM is the first model-based metric designed specifically for static meshes with texture maps. GeodesicPSIM provides state-of-the-art performance in comparison with image-based, point-based, and video-based metrics on a newly created and challenging database. We also demonstrate the robustness of GeodesicPSIM under different hyperparameter settings. Ablation studies further exhibit the effectiveness of the three proposed features and the patch cropping algorithm. The code is available at https://multimedia.tencent.com/resources/GeodesicPSIM.
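    The 1-hop geodesic patch construction can be illustrated with a small sketch: for each selected keypoint, collect the vertices directly connected to it by a mesh edge (its 1-ring), which approximates a local geodesic neighborhood on the surface. The feature extraction (color smoothness, discrete mean curvature, pixel color statistics) and the cleaning, cropping, and texture-mapping steps are not reproduced here.

```python
import numpy as np

def one_hop_patches(faces, keypoints):
    """faces: (F, 3) int vertex indices; keypoints: iterable of vertex ids."""
    adjacency = {}
    for a, b, c in faces:                       # build vertex adjacency from triangles
        for u, v in ((a, b), (b, c), (c, a)):
            adjacency.setdefault(u, set()).add(v)
            adjacency.setdefault(v, set()).add(u)
    # each patch is the keypoint plus its 1-ring neighborhood
    return {k: sorted(adjacency.get(k, set()) | {k}) for k in keypoints}

faces = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4]])
print(one_hop_patches(faces, keypoints=[2]))    # {2: [0, 1, 2, 3, 4]}
```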

    RSB-Pose: Robust Short-Baseline Binocular 3D Human Pose Estimation With Occlusion Handling

    Xiaoyue Wan, Zhuo Chen, Xu Zhao
    pp. 60-72
    Abstract: In the domain of 3D human pose estimation, which finds widespread daily applications, the demand for convenient acquisition equipment continues to grow. To satisfy this demand, we focus on a short-baseline binocular setup that offers both portability and a geometric measurement capability that significantly reduces depth ambiguity. However, as the binocular baseline shortens, two serious challenges emerge: first, the robustness of 3D reconstruction against 2D errors deteriorates; second, occlusion occurs frequently due to the limited visual difference between the two views. To address the first challenge, we propose the Stereo Co-Keypoints Estimation module to improve the view consistency of 2D keypoints and enhance 3D robustness. In this module, disparity is used to represent the correspondence between binocular 2D points, and the Stereo Volume Feature (SVF) is introduced to contain binocular features across different disparities. Through the regression of the SVF, two-view 2D keypoints are estimated simultaneously and collaboratively, which enforces their view consistency. Furthermore, to deal with occlusions, a Pre-trained Pose Transformer module is introduced. Through this module, 3D poses are refined by perceiving pose coherence, a representation of joint correlations. This perception is injected by the Pose Transformer network and learned through a pre-training task that recovers iteratively masked joints. Comprehensive experiments on the H36M and MHAD datasets validate the effectiveness of our approach in short-baseline binocular 3D human pose estimation and occlusion handling.
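    A sketch of how a stereo volume over candidate disparities can be assembled: the right-view feature map is shifted horizontally by each disparity and correlated with the left-view features, giving a (disparity, H, W) volume from which a soft disparity can be regressed. The correlation form, disparity range, and regression head are assumptions; the paper's SVF may differ in detail.

```python
import torch

def stereo_volume(left, right, max_disp=16):
    """left, right: (B, C, H, W) feature maps; returns (B, max_disp, H, W)."""
    B, C, H, W = left.shape
    volume = left.new_zeros(B, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            volume[:, d] = (left * right).mean(dim=1)
        else:
            # correlate each left pixel with the right pixel d columns to its left
            volume[:, d, :, d:] = (left[..., d:] * right[..., :-d]).mean(dim=1)
    return volume

vol = stereo_volume(torch.rand(1, 64, 32, 48), torch.rand(1, 64, 32, 48))
# soft-argmax over disparity gives a differentiable disparity estimate
soft_disp = (torch.softmax(vol, dim=1) * torch.arange(16).view(1, 16, 1, 1)).sum(1)
```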

    CrossEI: Boosting Motion-Oriented Object Tracking With an Event Camera

    Zhiwen Chen, Jinjian Wu, Weisheng Dong, Leida Li...
    pp. 73-84
    Abstract: With their differential sensitivity and high temporal resolution, event cameras can record detailed motion cues, which form a complementary advantage with frame-based cameras for enhancing object tracking, especially in challenging dynamic scenes. However, how to better match heterogeneous event-image data and exploit their rich complementary cues remains an open issue. In this paper, we align the event and image modalities by proposing a motion-adaptive event sampling method, and we revisit the cross-complementarity of event-image data to design a bidirectionally enhanced fusion framework. Specifically, the sampling strategy adapts to different dynamic scenes and produces aligned event-image pairs. Besides, we design an image-guided motion estimation unit that extracts explicit instance-level motion, aiming to refine uncertain event cues so as to distinguish primary objects from background. A semantic modulation module is then devised to use the enhanced object motion to modulate the image features. Coupled with these two modules, the framework learns both the high motion sensitivity of events and the full texture of images to achieve more accurate and robust tracking. The proposed method is easily embedded in existing tracking pipelines and trained end-to-end. We evaluate it on four large benchmarks, i.e., FE108, VisEvent, FE240hz, and CoeSot. Extensive experiments demonstrate that our method achieves state-of-the-art performance, with large improvements attributable to our sampling strategy and fusion design.
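    The motion-adaptive sampling idea can be sketched simply: instead of slicing the event stream at a fixed time interval, cut a new slice whenever the accumulated event count (a cheap proxy for scene motion) reaches a budget, so fast motion yields temporally finer slices. The count-based criterion below is an assumption; the paper's strategy may use richer motion cues.

```python
import numpy as np

def adaptive_slices(timestamps, events_per_slice=5000):
    """timestamps: sorted (N,) event times; yields (t_start, t_end) windows."""
    for start in range(0, len(timestamps), events_per_slice):
        chunk = timestamps[start:start + events_per_slice]
        yield float(chunk[0]), float(chunk[-1])   # faster motion -> shorter window

# Dense early timestamps mimic fast motion; sparse late ones mimic slow motion.
ts = np.sort(np.concatenate([np.random.rand(15000) * 0.2,
                             0.2 + np.random.rand(5000) * 0.8]))
for t0, t1 in adaptive_slices(ts):
    print(f"slice {t0:.3f}-{t1:.3f}s")  # early slices span less time than late ones
```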

    HAda: Hyper-Adaptive Parameter-Efficient Learning for Multi-View ConvNets

    Shiye Wang, Changsheng Li, Zeyu Yan, Wanjun Liang...
    pp. 85-99
    Abstract: Recent years have witnessed great success in multi-view learning empowered by deep ConvNets, which rely on a large number of network parameters. Nevertheless, whether all these parameters are essential in multi-view ConvNets remains an open question. Hypernetworks offer a promising solution for reducing the number of parameters by learning a concise network that generates weights for a larger target network, illustrating the redundancy within network parameters. However, how to leverage hypernetworks for learning parameter-efficient multi-view ConvNets remains underexplored. In this paper, we present a lightweight multi-layer shared Hyper-Adaptive network (HAda) that simultaneously generates adaptive weights for different views and for the convolutional layers of deep multi-view ConvNets. The adaptability inherent in HAda not only contributes to a substantial reduction in parameter redundancy but also enables the modeling of intricate view-aware and layer-wise information. This capability maintains high performance, ultimately achieving parameter-efficient learning. Specifically, we design a multi-view shared module in HAda to capture information common across views. This module incorporates a shared global gated interpolation strategy that generates layer-wise gating factors, which facilitate the adaptive interpolation of global contextual information into the weights. Meanwhile, we put forward a tailored weight-calibrated adapter for each view that conveys view-specific information. These adapters generate view-adaptive weight-scaling calibrators, allowing personalized information to be selectively emphasized for each view without introducing excessive parameters. Extensive experiments on six publicly available datasets demonstrate the effectiveness of the proposed method. In particular, HAda can serve as a flexible plug-in strategy and works well with existing multi-view methods for both image classification and image clustering tasks.
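    A minimal sketch of the hypernetwork idea underlying HAda: a small shared network maps a (view, layer) embedding to the weights of a target convolutional layer, so many view- and layer-specific convolutions are generated from few parameters. The embedding size, generator shape, and module names are assumptions; HAda's gated interpolation and weight-calibrated adapters are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvHyperNet(nn.Module):
    def __init__(self, emb_dim=32, cin=16, cout=16, k=3):
        super().__init__()
        self.shape = (cout, cin, k, k)
        # concise generator network shared across all (view, layer) pairs
        self.gen = nn.Sequential(nn.Linear(emb_dim, 128), nn.ReLU(),
                                 nn.Linear(128, cout * cin * k * k))

    def forward(self, x, embedding):
        weight = self.gen(embedding).view(self.shape)   # generated conv kernel
        return F.conv2d(x, weight, padding=1)

hyper = ConvHyperNet()
view_emb = torch.rand(32)                 # one embedding per (view, layer) pair
y = hyper(torch.rand(1, 16, 24, 24), view_emb)
```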

    Generalizable Deepfake Detection With Phase-Based Motion Analysis

    Ekta Prashnani, Michael Goebel, B. S. Manjunath
    pp. 100-112
    Abstract: We propose PhaseForensics, a DeepFake (DF) video detection method that uses a phase-based motion representation of facial temporal dynamics. Existing methods that rely on temporal information across video frames for DF detection have many advantages over methods that utilize only per-frame features. However, these temporal DF detection methods still show limited cross-dataset generalization and robustness to common distortions, due to factors such as error-prone motion estimation, inaccurate landmark tracking, and the susceptibility of pixel-intensity-based features to adversarial distortions and cross-dataset domain shifts. Our key insight for overcoming these issues is to leverage the temporal phase variations in the band-pass frequency components of a face region across video frames. This not only enables a robust estimate of the temporal dynamics in the facial regions but is also less prone to cross-dataset variations. Furthermore, we show that the band-pass filters used to compute the local per-frame phase form an effective defense against the perturbations commonly seen in gradient-based adversarial attacks. Overall, PhaseForensics achieves improved distortion and adversarial robustness and state-of-the-art cross-dataset generalization, with 92.4% video-level AUC on the challenging CelebDFv2 benchmark (a recent state-of-the-art method, FTCN, achieves 86.9%).
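    The core signal idea, temporal variation of local band-pass phase, can be sketched with a single complex Gabor filter: convolve each frame with the filter, take the phase angle, and use the wrapped temporal phase difference as a motion feature. The actual method builds on a richer multi-band decomposition; the filter size and frequency here are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=15, freq=0.2, sigma=3.0):
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    # complex band-pass filter: Gaussian envelope times a complex sinusoid
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.exp(2j * np.pi * freq * xx)

def temporal_phase_diff(frames):
    """frames: (T, H, W) grayscale video; returns (T-1, H, W) phase motion."""
    k = gabor_kernel()
    phases = np.stack([np.angle(fftconvolve(f, k, mode="same")) for f in frames])
    d = np.diff(phases, axis=0)
    return np.angle(np.exp(1j * d))          # wrap phase differences to (-pi, pi]

motion = temporal_phase_diff(np.random.rand(8, 64, 64))
```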

    Learning Lossless Compression for High Bit-Depth Volumetric Medical Image

    Kai Wang, Yuanchao Bai, Daxin Li, Deming Zhai...
    pp. 113-125
    Abstract: Recent advances in learning-based methods have markedly enhanced the capabilities of image compression. However, these methods struggle with high bit-depth volumetric medical images, facing issues such as degraded performance, increased memory demand, and reduced processing speed. To address these challenges, this paper presents the Bit-Division based Lossless Volumetric Image Compression (BD-LVIC) framework, tailored for high bit-depth medical volume compression. The BD-LVIC framework divides the high bit-depth volume into two lower bit-depth segments: the Most Significant Bit-Volume (MSBV) and the Least Significant Bit-Volume (LSBV). The MSBV concentrates the most significant bits of the volumetric medical image, capturing vital structural details in a compact form; this reduction in complexity greatly improves compression efficiency with traditional codecs. Conversely, the LSBV holds the least significant bits, which encapsulate intricate texture details. To compress this detailed information effectively, we introduce a learning-based compression model equipped with a Transformer-Based Feature Alignment Module, which exploits both intra-slice and inter-slice redundancies to accurately align features. A Parallel Autoregressive Coding Module then merges these features to precisely estimate the probability distribution of the least significant bit-planes. Our extensive testing demonstrates that the BD-LVIC framework not only sets new performance benchmarks across various datasets but also maintains a competitive coding speed, highlighting its significant potential and practical utility in volumetric medical image compression.
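    The bit-division step itself is simple to illustrate: split the high bit-depth volume into most- and least-significant sub-volumes with shifts and masks, and recombine them losslessly. The 8/8 split point and the 12-bit CT example below are assumptions; the learned LSB codec and the alignment modules are not shown.

```python
import numpy as np

def bit_divide(volume, lsb_bits=8):
    msbv = volume >> lsb_bits                 # coarse structure, easy for traditional codecs
    lsbv = volume & ((1 << lsb_bits) - 1)     # fine texture, handled by the learned model
    return msbv, lsbv

vol = np.random.randint(0, 2**12, size=(16, 128, 128), dtype=np.uint16)  # e.g. 12-bit CT
msbv, lsbv = bit_divide(vol)
assert np.array_equal((msbv << 8) | lsbv, vol)   # shift-and-add recombination is lossless
```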

    A Self-Adaptive Feature Extraction Method for Aerial-View Geo-Localization

    Jinliang Lin, Zhiming Luo, Dazhen Lin, Shaozi Li...
    pp. 126-139
    Abstract: Cross-view geo-localization aims to match the same geographic location across images from different views, e.g., drone-view images and geo-referenced satellite-view images. Due to UAV cameras' varying shooting angles and heights, the scale of the same captured target building varies greatly across drone-view images. Meanwhile, different geographic targets in the real world, such as towers and stadiums, differ in size and floor area, which also leads to scale variation of geographic targets in the images. However, existing methods mainly focus on extracting fine-grained information about the geographic targets or contextual information about the surrounding area, overlooking features robust to scale changes and the importance of feature alignment. In this study, we argue that the key to this task is training a network to mine a discriminative representation that is robust to scale variation. To this end, we design an effective and novel end-to-end network called the Self-Adaptive Feature Extraction Network (Safe-Net) to extract powerful scale-invariant features in a self-adaptive manner. Safe-Net includes a global representation-guided feature alignment module and a saliency-guided feature partition module. The former applies an affine transformation guided by the global feature for adaptive feature alignment. Without extra region annotations, the latter computes a saliency distribution over different regions of the image and uses this saliency information to guide a self-adaptive feature partition on the feature map, learning a visual representation robust to scale variation. Experiments on two prevailing large-scale aerial-view geo-localization benchmarks, i.e., University-1652 and SUES-200, show that the proposed method achieves state-of-the-art results. In addition, Safe-Net has significant scale-adaptive capability and can extract robust feature representations for query images with small target buildings. The source code is available at: https://github.com/AggMan96/Safe-Net.
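    The global representation-guided alignment can be sketched in the style of a spatial transformer: a pooled global feature predicts the 2x3 matrix of an affine transform, which is then applied to the feature map before partitioning. The predictor's shape and identity initialization are assumptions; the saliency-guided partition module is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineAlignSketch(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.loc = nn.Linear(channels, 6)
        # start from the identity transform so alignment is learned gradually
        nn.init.zeros_(self.loc.weight)
        self.loc.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, feat):                      # feat: (B, C, H, W)
        g = feat.mean(dim=(2, 3))                 # global representation
        theta = self.loc(g).view(-1, 2, 3)        # predicted affine parameters
        grid = F.affine_grid(theta, feat.size(), align_corners=False)
        return F.grid_sample(feat, grid, align_corners=False)

aligned = AffineAlignSketch()(torch.rand(2, 64, 16, 16))
```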