Researchers from East China Normal University Detail Findings in Machine Learning (DPSUR: Accelerating Differentially Private Stochastic Gradient Descent Using Selective Update and Release)
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News -- Investigators discuss new findings in Machine Learning. According to news reporting from Shanghai, People's Republic of China, by NewsRx journalists, research stated, "Machine learning models are known to memorize private data to reduce their training loss, which can be inadvertently exploited by privacy attacks such as model inversion and membership inference. To protect against these attacks, differential privacy (DP) has become the de facto standard for privacy-preserving machine learning, particularly those popular training algorithms using stochastic gradient descent, such as DPSGD."

Funders for this research include the Natural Science Foundation of Shanghai, the National Natural Science Foundation of China (NSFC), the Hong Kong Research Grants Council, and the CAAI-Huawei MindSpore Open Fund.

The news correspondents obtained a quote from the research from East China Normal University: "Nonetheless, DPSGD still suffers from severe utility loss due to its slow convergence. This is partially caused by the random sampling, which brings bias and variance to the gradient, and partially by the Gaussian noise, which leads to fluctuation of gradient updates. Our key idea to address these issues is to apply selective updates to the model training, while discarding those useless or even harmful updates. Motivated by this, this paper proposes DPSUR, a Differentially Private training framework based on Selective Updates and Release, where the gradient from each iteration is evaluated based on a validation test, and only those updates leading to convergence are applied to the model. As such, DPSUR ensures the training in the right direction and thus can achieve faster convergence than DPSGD. The main challenges lie in two aspects: privacy concerns arising from gradient evaluation, and gradient selection strategy for model update. To address the challenges, DPSUR introduces a clipping strategy for update randomization and a threshold mechanism for gradient selection."
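The quoted passage describes two mechanisms in general terms: DPSGD's per-sample gradient clipping with Gaussian noise, and DPSUR's validation test that keeps only updates leading toward convergence. The following minimal Python sketch illustrates both ideas; it is not the authors' released implementation, and names such as dpsgd_step, selective_update, clip_norm, and threshold are illustrative assumptions rather than the paper's API.

```python
# Sketch of DPSGD-style noisy updates plus DPSUR-style selective release.
# All function names and parameter values here are illustrative, not the
# paper's code; the selection step below mimics its shape (clip a loss
# difference, add noise, compare to a threshold) in simplified form.
import numpy as np

rng = np.random.default_rng(0)

def dpsgd_step(w, per_sample_grads, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """One DPSGD update: clip each per-sample gradient to clip_norm,
    sum, add Gaussian noise calibrated to clip_norm, then average."""
    clipped = []
    for g in per_sample_grads:
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    noisy_grad = (np.sum(clipped, axis=0) + noise) / len(per_sample_grads)
    return w - lr * noisy_grad

def selective_update(w, candidate_w, val_loss_fn, threshold=0.0, noise_scale=0.05):
    """DPSUR-style selection (sketch): accept a candidate model only if
    the clipped, noised change in validation loss clears a threshold,
    so useless or harmful updates are discarded."""
    delta = val_loss_fn(candidate_w) - val_loss_fn(w)
    noisy_delta = np.clip(delta, -1.0, 1.0) + rng.normal(0.0, noise_scale)
    return candidate_w if noisy_delta < threshold else w

# Toy usage: minimize ||w||^2 with synthetic noisy per-sample gradients.
w = np.ones(3)
val_loss = lambda w: float(np.dot(w, w))
for _ in range(50):
    grads = [2 * w + rng.normal(0, 0.1, size=3) for _ in range(8)]
    candidate = dpsgd_step(w, grads)
    w = selective_update(w, candidate, val_loss)
print(val_loss(w))  # should end well below the initial loss of 3.0
```

In the paper's actual mechanism, the validation loss difference is itself clipped and randomized before the threshold test so that the selection step satisfies differential privacy; the sketch mirrors that shape with a simple clip-and-noise comparison.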
Keywords for this news article include: Shanghai, People's Republic of China, Asia, Cyborgs, Emerging Technologies, Machine Learning, East China Normal University.