Robotics & Machine Learning Daily News 2024, Issue (Jun. 26): 18-19.

Researchers from East China Normal University Detail Findings in Machine Learning (DPSUR: Accelerating Differentially Private Stochastic Gradient Descent Using Selective Update and Release)


Abstract

By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News -- Investigators discuss new findings in Machine Learning. According to news reporting from Shanghai, People's Republic of China, by NewsRx journalists, research stated, "Machine learning models are known to memorize private data to reduce their training loss, which can be inadvertently exploited by privacy attacks such as model inversion and membership inference. To protect against these attacks, differential privacy (DP) has become the de facto standard for privacy-preserving machine learning, particularly those popular training algorithms using stochastic gradient descent, such as DPSGD."

Funders for this research include the Natural Science Foundation of Shanghai, the National Natural Science Foundation of China (NSFC), the Hong Kong Research Grants Council, and the CAAI-Huawei MindSpore Open Fund.

The news correspondents obtained a quote from the research from East China Normal University: "Nonetheless, DPSGD still suffers from severe utility loss due to its slow convergence. This is partially caused by the random sampling, which brings bias and variance to the gradient, and partially by the Gaussian noise, which leads to fluctuation of gradient updates. Our key idea to address these issues is to apply selective updates to the model training, while discarding those useless or even harmful updates. Motivated by this, this paper proposes DPSUR, a Differentially Private training framework based on Selective Updates and Release, where the gradient from each iteration is evaluated based on a validation test, and only those updates leading to convergence are applied to the model. As such, DPSUR ensures the training in the right direction and thus can achieve faster convergence than DPSGD. The main challenges lie in two aspects: privacy concerns arising from gradient evaluation, and gradient selection strategy for model update. To address the challenges, DPSUR introduces a clipping strategy for update randomization and a threshold mechanism for gradient selection."

Key words

Shanghai/People's Republic of China/Asia/Cyborgs/Emerging Technologies/Machine Learning/East China Normal University


Publication year

2024
Robotics & Machine Learning Daily News
