Abstract
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – Researchers detail new data in Artificial Intelligence. According to news originating from Warsaw, Poland, by NewsRx correspondents, research stated, “Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging and trusting statistical and deep learning models, as well as interpreting their predictions. However, recent advances in adversarial machine learning (AdvML) highlight the limitations and vulnerabilities of state-of-the-art explanation methods, putting their security and trustworthiness into question.”