With the continuous progress of deep learning, it has been widely applied in numerous fields. However, training deep models requires a large amount of labeled data, and the associated time and resource costs are high. How to maximize model performance with the least amount of labeled data has therefore become an important research topic. Active learning addresses this issue by selecting the most valuable samples for annotation and using them for model training. Traditional active learning approaches usually concentrate on uncertainty or diversity, querying the most difficult or most representative samples. However, these methods typically consider only one of these effects and overlook the interaction between labeled and unlabeled data in active learning scenarios. Another class of active learning methods uses auxiliary networks for sample selection, but these usually incur higher computational complexity. This paper proposes a novel active learning approach that optimizes the model's total performance gain by taking sample-to-sample interactions into account and jointly measuring local uncertainty and the influence of candidate samples on other samples. The method first estimates the mutual influence of samples from the distances between their hidden-layer representations, and then estimates the potential gain a candidate sample can bring from that influence and the uncertainty of the unlabeled samples. The sample with the highest global gain is iteratively chosen for annotation. The study compares the proposed method with other active learning strategies on a series of tasks across several domains. Experimental results demonstrate that the proposed method outperforms all competitors on all tasks. Further quantitative analysis shows that it balances uncertainty and diversity well, and the study explores which factors should be emphasized at different stages of active learning.
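The selection loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the function name `select_samples`, the Gaussian-kernel influence estimate, and the rule that labeling a sample discounts the uncertainty it "covers" are all assumptions chosen to mirror the abstract's description (influence from hidden-representation distances, gain from influence plus uncertainty, iterative choice of the highest-gain sample).

```python
import numpy as np

def select_samples(hidden, uncertainty, n_query, bandwidth=1.0):
    """Sketch of global-gain sample selection (assumed formulation).

    hidden      : (n, d) hidden-layer representations of unlabeled samples
    uncertainty : (n,) per-sample uncertainty scores
    n_query     : number of samples to select for annotation
    """
    # Pairwise squared distances between hidden representations.
    d2 = ((hidden[:, None, :] - hidden[None, :, :]) ** 2).sum(-1)
    # Assumed influence model: closer samples influence each other more.
    influence = np.exp(-d2 / (2 * bandwidth ** 2))

    selected = []
    remaining = list(range(len(uncertainty)))
    unc = uncertainty.astype(float).copy()
    for _ in range(n_query):
        # Global gain of each candidate: its influence-weighted coverage of
        # the remaining pool's uncertainty (includes its own, since the
        # kernel's diagonal is 1).
        gain = influence[np.ix_(remaining, remaining)] @ unc[remaining]
        best = remaining[int(np.argmax(gain))]
        selected.append(best)
        # Assumed update: labeling `best` discounts the uncertainty of
        # samples it influences, encouraging diversity in later picks.
        unc *= (1 - influence[best])
        remaining.remove(best)
    return selected

# Toy usage: two nearby points and one distant point. The first pick is the
# high-uncertainty point in the dense pair; the second pick jumps to the
# distant point, since the pair's uncertainty is already covered.
hidden = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
unc = np.array([0.9, 0.8, 0.5])
print(select_samples(hidden, unc, 2))  # → [0, 2]
```

The discount step is what makes the objective global rather than purely uncertainty-based: without it, the loop would repeatedly pick near-duplicates from the densest high-uncertainty region.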