Research on Security Protection of Distributed AI Systems Based on Coded Computing
Chen Yuliang 1, Lin Xi 1, Li Jianhua 1
Abstract
[Purpose/Significance] The emergence of distributed learning has made it feasible to train large-scale deep models that cannot be trained on a single GPU, improved the efficiency of model training, and found wide use in IoT and other scenarios. However, distributed AI systems still have weaknesses in security protection and are vulnerable to attacks from malicious nodes in the edge environment. [Method/Process] To address these problems, this paper proposes a robust distributed learning framework based on Lagrange coded computing, which tolerates errors from up to S malicious or straggler nodes through coding redundancy. We further propose a malicious node detection method based on cosine similarity and implement a malicious node exit mechanism based on node reputation evaluation, realizing security protection for distributed AI systems. [Results/Conclusion] Experiments on the MNIST and CIFAR-10 datasets show that the framework's accuracy stays within about 5% of a federated logistic regression framework. After malicious node attacks are introduced, the framework's detection and verification mechanism effectively prevents the accuracy degradation caused by the attacks and saves 58.6% of the verification time.
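The abstract does not give details of the cosine-similarity detection step, but the general idea can be illustrated with a minimal sketch: each worker's gradient update is compared against the mean update of the remaining workers, and a worker whose update points away from the consensus direction is flagged. The function name `flag_malicious` and the threshold of 0 are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def flag_malicious(updates, threshold=0.0):
    """Flag workers whose update direction deviates from the consensus.

    updates: (n_workers, dim) array of per-worker gradient updates.
    A worker is flagged when the cosine similarity between its update
    and the mean of the *other* workers' updates falls below `threshold`.
    """
    n = updates.shape[0]
    flags = []
    for i in range(n):
        consensus = np.delete(updates, i, axis=0).mean(axis=0)
        u = updates[i]
        cos = u @ consensus / (
            np.linalg.norm(u) * np.linalg.norm(consensus) + 1e-12
        )
        flags.append(cos < threshold)
    return np.array(flags)

# Example: three honest workers push similar gradients; one attacker
# pushes a sign-flipped gradient and is the only worker flagged.
honest = np.array([[1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
attacker = np.array([[-1.0, -1.0]])
updates = np.vstack([honest, attacker])
print(flag_malicious(updates))  # [False False False  True]
```

In practice such a score would feed the reputation mechanism the abstract mentions, with repeatedly flagged nodes eventually excluded from training.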
Keywords
distributed learning / coded computing / malicious detection / robustness / network and information security
Funding
National Natural Science Foundation of China (62202302)
National Natural Science Foundation of China (U20B2048)
Publication Year
2024