Study Findings on Machine Learning Reported by Researchers at Australian National University (Transparency challenges in policy evaluation with causal machine learning: improving usability and accountability)
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – A new study on artificial intelligence is now available. According to news originating from Canberra, Australia, by NewsRx correspondents, research stated, "Causal machine learning tools are beginning to see use in real-world policy evaluation tasks to flexibly estimate treatment effects."

The news reporters obtained a quote from the research from Australian National University: "One issue with these methods is that the machine learning models used are generally black boxes; that is, there is no globally interpretable way to understand how a model makes estimates. This is a clear problem for governments who want to evaluate policy, as it is difficult to understand whether such models are functioning in ways that are fair, based on the correct interpretation of evidence, and transparent enough to allow for accountability if things go wrong. However, there has been little discussion of transparency problems in the causal machine learning literature and how these might be overcome. This article explores why transparency issues are a problem for causal machine learning in public policy evaluation applications and considers ways these problems might be addressed through explainable AI tools and by simplifying models in line with interpretable AI principles. It then applies these ideas to a case study using a causal forest model to estimate conditional average treatment effects for a returns-on-education study. It shows that existing tools for understanding black-box predictive models are not as well suited to causal machine learning and that simplifying the model to make it interpretable leads to an unacceptable increase in error (in this application)."
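The study's central quantity, the conditional average treatment effect (CATE), can be illustrated with a minimal sketch. The example below is not the paper's causal forest: it uses a simpler T-learner (one scikit-learn random forest per treatment arm, with predictions differenced) on entirely synthetic data, where the true effect of a binary "extra education" treatment varies with one covariate. All variable names and the data-generating process are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(0.0, 1.0, size=(n, 2))   # synthetic covariates (e.g. background factors)
T = rng.integers(0, 2, size=n)           # binary "treatment": extra education (assumed randomized)

# True CATE grows with X[:, 0]; average effect over the population is 2.0.
tau = 1.0 + 2.0 * X[:, 0]
Y = X[:, 1] + tau * T + rng.normal(0.0, 0.5, size=n)  # outcome, e.g. earnings

# T-learner: fit a separate outcome model for each treatment arm,
# then estimate each individual's effect as the difference in predictions.
m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 1], Y[T == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 0], Y[T == 0])
cate_hat = m1.predict(X) - m0.predict(X)  # per-individual CATE estimates

print(cate_hat.mean())  # should land near the true average effect of 2.0
```

Like the causal forest the article studies, the two random forests here are black boxes: the per-individual estimates in `cate_hat` come with no globally interpretable rule explaining how covariates drive the estimated effect, which is exactly the transparency problem the authors raise for policy use.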
Keywords: Australian National University, Canberra, Australia, Australia and New Zealand, Cyborgs, Emerging Technologies, Machine Learning