Keio University Researchers Release New Study Findings on Machine Learning (Toward Building Trust in Machine Learning Models: Quantifying the Explainability by SHAP and References to Human Strategy)
2024 FEB 02 (NewsRx) – By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – Investigators publish a new report on artificial intelligence. According to news reporting out of Yokohama, Japan, by NewsRx editors, the research stated, "Local model-agnostic Explainable Artificial Intelligence (XAI), such as LIME or SHAP, has recently gained popularity among researchers and data scientists for explaining black box Machine Learning (ML) models."

The news editors obtained a quote from the research from Keio University: "In the industry, practitioners focus not only on how these explanations can validate their models but also on how they can help maintain trust from end-users. Some studies attempted to measure this ability by quantifying what they refer to as the explainability or interpretability of ML models. In this paper, we introduce a new method for measuring explainability with reference to an approximated human model. We develop a human-friendly interface to strategically collect human decision-making and translate it into a set of logical rules and intuitions, or simply annotations. These annotations are then compared with the local explanations derived from common XAI tools. Through a human survey, we demonstrate that it is possible to quantify human intuition and empirically compare it to a given explanation, enabling a practical quantification of explainability."
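The comparison the researchers describe, aligning local SHAP attributions with human-annotated feature importance, can be illustrated with a short sketch. The code below is an illustrative assumption, not the paper's interface or metric: it uses the open-source shap package on a standard scikit-learn dataset, stands in a simple 0/1 feature mask for the collected human annotations, and scores agreement with a Spearman rank correlation.

```python
# A minimal sketch (assumed, not the authors' implementation) of comparing
# local SHAP explanations against a human annotation of feature importance.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a black-box model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Local model-agnostic explanation: SHAP attributions for one instance,
# explaining the model's predicted probability of the positive class.
explainer = shap.Explainer(lambda x: model.predict_proba(x)[:, 1], X[:100])
explanation = explainer(X[:1])
shap_importance = np.abs(explanation.values[0])  # one magnitude per feature

# Hypothetical human annotation: a 0/1 mask over the features an annotator
# flagged as decisive (a stand-in for the paper's collected logical rules).
human_mask = np.zeros(X.shape[1])
human_mask[[0, 3, 7]] = 1.0

# One possible agreement score (an assumption, not the paper's metric):
# rank correlation between SHAP magnitudes and the human mask.
agreement, _ = spearmanr(shap_importance, human_mask)
print(f"SHAP vs. human annotation agreement: {agreement:.3f}")
```

In the paper's setting, the mask would instead come from the logical rules and intuitions collected through the human-friendly interface, and the agreement score would be aggregated across the surveyed instances.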