
An iris feature-encoding method by fusion of graph neural networks and CNN

Objective More interpretable iris feature-encoding methods have long been a key problem in iris recognition, and low-quality iris samples remain difficult to recognize; the development of graph neural networks offers new ideas for encoding the features of such iris images. This paper proposes IrisFusionNet, an iris feature-encoding network that fuses a graph neural network with a convolutional neural network. Method A pixel-level enhancement module is added before the backbone network to remove uncertainty in the input image, and a dual-branch backbone extracts fused micro- and macro-level iris features. During training, a distinctive unified loss function optimizes the network parameters; during inference, a fused-feature matching strategy performs feature matching. Result Experiments show that the feature extractor trained with IrisFusionNet, tested on several public low-quality iris datasets, achieves best values of 0.27% for the EER (equal error rate) and 0.84% for FRR@FAR=0.01%, and improves the discriminating index (DI) by more than 30%; its recognition accuracy and class compactness far surpass leading iris recognition algorithms based on convolutional neural networks and on other graph neural network models. Conclusion The proposed IrisFusionNet is highly feasible and advantageous for iris recognition tasks.
An iris feature-encoding method by fusion of graph neural networks and convolutional neural networks
Objective Iris recognition is a prevalent biometric modality in identity recognition technology owing to its inherent advantages, including stability, uniqueness, noncontact acquisition, and live-body authentication. The complete iris recognition workflow comprises four main steps: iris image acquisition, image preprocessing, feature encoding, and feature matching. Feature encoding is the core component of iris recognition algorithms. Improving interpretable iris feature-encoding methods has become a pivotal concern in the field. Moreover, the recognition of low-quality iris samples often relies on feature encoders tuned with dataset-specific parameters, which results in poor generalization. The graph structure is a data form with an irregular topological arrangement, and graph neural networks (GNNs) effectively update and aggregate features within such structures. Advances in GNNs have opened new approaches to feature encoding for these types of iris images. In this paper, a pioneering iris feature-fusion encoding network called IrisFusionNet, which integrates a GNN with a convolutional neural network (CNN), is proposed. The network eliminates the need for complex parameter-tuning steps and exhibits excellent generalization across various iris datasets.

Method A pixel-level enhancement module inserted before the backbone network alleviates local uncertainty in the input image through median filtering and mitigates global uncertainty through Gaussian normalization. A dual-branch backbone network is proposed: the head of the backbone is a shared stack of convolutional modules, and the neck divides into two branches. The primary branch constructs a graph structure from the image using a graph converter. We designed a hard graph attention network that introduces an efficient channel attention mechanism to aggregate and update features by exploiting the edge-associated information within the graph structure; this step extracts the microfeatures of iris textures. The auxiliary branch, in contrast, uses conventional CNN components, such as simple convolutional layers, pooling layers, and fully connected layers, to capture the macrostructural information of the iris. During training, the fused features from the primary and auxiliary branches are optimized with a unified loss function, the graph triplet and additive angular margin unified loss (GTAU-Loss). The primary branch maps iris images into a graph feature space, using cosine similarity to measure the semantic information in node feature vectors, the L2 norm to measure the spatial-relationship information in the adjacency matrix, and a graph triplet loss to constrain feature distances within that space. The auxiliary branch applies an additive angular margin loss, which normalizes the image feature vectors and introduces an additive angular margin to constrain the angular intervals between features, improving intraclass compactness and interclass separation. Finally, a dynamic learning method based on an exponential model fuses the features extracted by the two branches to obtain the GTAU-Loss. The training hyperparameters were as follows: network parameters were optimized with stochastic gradient descent (SGD) using Nesterov momentum set to 0.9, an initial learning rate of 0.001, and a warm-up strategy with a warm-up rate of 0.1, over 200 epochs. SGD iterations were accelerated on an NVIDIA RTX 3060 12 GB GPU, with 100 iterations taking approximately one day. For matching two distinct graph structures, the auxiliary branch computes the cosine similarity between the output feature vectors, whereas the primary branch applies a gate-based method: it first computes the mean cosine similarity of all node pairs as the gate threshold, removes node pairs below this threshold, and retains the node features above it to compute their cosine similarity. The similarity between two graph structures is the weighted sum of the cosine similarities from the primary and auxiliary branches, with both weights set to 0.5. All experiments were conducted on Windows 11 with PyTorch as the deep learning framework.

Result To validate the effectiveness of integrating GNNs into the framework, we conducted iris recognition experiments with a single-branch CNN framework and with the dual-branch framework; the outcomes substantiated the superior recognition performance of the design incorporating the GNN branch. Furthermore, detailed parameter experiments determined the most favorable values of two crucial parameters of IrisFusionNet, the number of nearest neighbors (k) and the global feature dimension: k was set to 8, and the optimal global feature dimension was 256. We compared the present method with several state-of-the-art (SOTA) iris recognition methods, including CNN-based methods such as ResNet, MobileNet, EfficientNet, and ConvNeXt, and GNN-based methods such as dynamic graph representation. Comparative results indicate that the feature extractor trained with IrisFusionNet, tested on three publicly available low-quality iris datasets (CASIA-Iris-V4-Distance, CASIA-Iris-V4-Lamp, and CASIA-Iris-Mobile-V1.0-S2), achieved equal error rates of 1.06%, 0.71%, and 0.27% and false rejection rates at a false acceptance rate of 0.01% (FRR@FAR=0.01%) of 7.49%, 4.21%, and 0.84%, respectively. In addition, the discriminating index reached 6.102, 6.574, and 8.451, an improvement of over 30% compared with the baseline algorithm. The accuracy and clustering capability of iris recognition with the IrisFusionNet feature extractor substantially outperformed SOTA iris recognition algorithms based on CNNs and on other GNN models. Furthermore, the graph structures produced by the graph converter were visualized: the graphs generated from similar iris images exhibited high similarity, while those from dissimilar iris images presented remarkable differences. This intuitive visualization explains the excellent recognition performance achieved by constructing graph structures and applying GNN methods.

Conclusion In this paper, we proposed a GNN-based feature-fusion encoding method, IrisFusionNet. The macro features of iris images are extracted with a CNN and the micro features with a GNN, yielding fused features that encompass comprehensive texture characteristics. The experimental results indicate that our method considerably improves the accuracy and clustering of iris recognition and attains high feasibility and generalizability without complex parameter tuning specific to particular datasets.
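The pixel-level enhancement step (median filtering against local uncertainty, Gaussian normalization against global uncertainty) can be sketched as follows. The 3×3 window size and the epsilon guard are assumptions; the abstract does not specify them:

```python
import numpy as np

def median_filter3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter with edge padding; suppresses local noise spikes."""
    p = np.pad(img, 1, mode="edge")
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def enhance_iris(img: np.ndarray) -> np.ndarray:
    """Median filtering (local uncertainty) followed by Gaussian
    normalization to zero mean and unit variance (global uncertainty)."""
    smoothed = median_filter3(img.astype(np.float64))
    return (smoothed - smoothed.mean()) / (smoothed.std() + 1e-8)
```

A single hot pixel is removed entirely by the median step, while the normalization makes the output invariant to global brightness and contrast shifts across acquisition devices.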
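The graph converter that turns image features into a graph is not detailed in the abstract. A common construction consistent with the reported nearest-neighbor parameter k=8 is a cosine-similarity kNN graph over node (patch) feature vectors; the sketch below is therefore an assumption, not the paper's exact converter:

```python
import numpy as np

def knn_graph(nodes: np.ndarray, k: int = 8) -> np.ndarray:
    """Build a kNN adjacency matrix over node feature vectors using
    cosine similarity (k=8 is the best value reported in the paper)."""
    x = nodes / (np.linalg.norm(nodes, axis=1, keepdims=True) + 1e-8)
    sim = x @ x.T                          # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)         # exclude self-loops
    idx = np.argsort(-sim, axis=1)[:, :k]  # k most similar neighbors per node
    adj = np.zeros_like(sim)
    rows = np.arange(nodes.shape[0])[:, None]
    adj[rows, idx] = 1.0
    return adj
```

Each row of the adjacency matrix then carries exactly k directed edges, over which a graph attention layer can aggregate neighboring node features.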
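The hard graph attention network introduces an efficient channel attention (ECA) mechanism. ECA gates channels with a 1-D convolution over globally pooled channel descriptors; the minimal NumPy sketch below uses an untrained uniform kernel for illustration, whereas the real module learns its convolution weights:

```python
import numpy as np

def eca(feature_map: np.ndarray, k: int = 3) -> np.ndarray:
    """Efficient channel attention over a (C, H, W) feature map:
    global average pool -> 1-D conv across channels -> sigmoid gate."""
    c = feature_map.mean(axis=(1, 2))           # (C,) channel descriptor
    pad = np.pad(c, k // 2, mode="edge")        # keep C outputs after conv
    kernel = np.ones(k) / k                     # stand-in for learned weights
    attn = 1.0 / (1.0 + np.exp(-np.convolve(pad, kernel, mode="valid")))
    return feature_map * attn[:, None, None]    # reweight each channel
```

Because the gate lies in (0, 1), every channel is attenuated in proportion to how informative its pooled descriptor is, at negligible parameter cost compared with full channel-attention blocks.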
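GTAU-Loss combines a graph triplet term with an additive angular margin term fused by an exponential dynamic-weighting model. The margin term below follows the standard additive angular margin (ArcFace-style) formulation; the exponential schedule with time constant tau is a hypothetical reading of "dynamic learning based on an exponential model", since the abstract gives no formula:

```python
import numpy as np

def additive_angular_margin_logits(feats, class_w, labels, s=64.0, m=0.5):
    """ArcFace-style logits: L2-normalize features and class weights,
    then widen the angle to the ground-truth class by margin m."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    w = class_w / np.linalg.norm(class_w, axis=1, keepdims=True)
    cos = np.clip(f @ w.T, -1.0, 1.0)
    rows = np.arange(len(labels))
    theta = np.arccos(cos[rows, labels])      # angle to the target class
    cos[rows, labels] = np.cos(theta + m)     # additive angular margin
    return s * cos                            # scaled logits for softmax

def gtau_total(graph_triplet_loss, margin_loss, epoch, tau=50.0):
    """Hypothetical exponential fusion: the graph-triplet term is
    weighted up smoothly as training progresses."""
    alpha = 1.0 - np.exp(-epoch / tau)
    return alpha * graph_triplet_loss + (1.0 - alpha) * margin_loss
```

Widening only the target-class angle forces features of the same identity into a tighter cone while pushing other classes away, which is what yields the intraclass compactness and interclass separation described above.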
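The inference-time matching (mean node-pair cosine similarity as the gate, below-gate pairs discarded, 0.5/0.5 fusion with the auxiliary-branch similarity) can be sketched as follows, assuming node correspondence between the two graphs, which the abstract does not spell out:

```python
import numpy as np

def graph_similarity(nodes_a, nodes_b, global_a, global_b, w=0.5):
    """Fuse gate-based node-level similarity (primary branch) with a
    global cosine similarity (auxiliary branch), weighted 0.5/0.5."""
    a = nodes_a / (np.linalg.norm(nodes_a, axis=1, keepdims=True) + 1e-8)
    b = nodes_b / (np.linalg.norm(nodes_b, axis=1, keepdims=True) + 1e-8)
    pair_sim = np.sum(a * b, axis=1)     # cosine of corresponding node pairs
    gate = pair_sim.mean()               # mean similarity acts as the gate
    kept = pair_sim[pair_sim >= gate]    # discard node pairs below the gate
    node_score = kept.mean() if kept.size else gate
    g = float(global_a @ global_b /
              (np.linalg.norm(global_a) * np.linalg.norm(global_b) + 1e-8))
    return w * node_score + (1.0 - w) * g
```

The gate makes the node-level score robust to occluded or noisy nodes: pairs dragged down by eyelids, eyelashes, or reflections fall below the mean and are excluded from the final score.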

iris feature coding; graph neural network (GNN); hard graph attention operators; feature fusion; unified loss function

Sun Jintong, Shen Wenzhong (孙金通、沈文忠)


School of Electronics and Information Engineering, Shanghai University of Electric Power, Shanghai 201306, China

iris feature encoding; graph neural network (GNN); hard graph attention operator; feature fusion; unified loss function

2024

Journal of Image and Graphics (中国图象图形学报)
Institute of Remote Sensing Applications, Chinese Academy of Sciences; China Society of Image and Graphics; Institute of Applied Physics and Computational Mathematics, Beijing


Indexed in CSTPCD and the Peking University Core Journals list (北大核心)
Impact factor: 1.111
ISSN: 1006-8961
Year, volume (issue): 2024, 29(9)