Abstract
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News - A new study on Machine Learning is now available. According to news originating from Delft, Netherlands, by NewsRx correspondents, research stated, "This paper provides an empirical and conceptual account on seeing machine learning models as part of a sociotechnical system to identify relevant vulnerabilities emerging in the context of use. As ML is increasingly adopted in socially sensitive and safety-critical domains, many ML applications end up not delivering on their promises, and contributing to new forms of algorithmic harm."

Financial support for this research came from the Netherlands Organization for Scientific Research (NWO).

Our news journalists obtained a quote from the research from the Delft University of Technology: "There is still a lack of empirical insights as well as conceptual tools and frameworks to properly understand and design for the impact of ML models in their sociotechnical context. In this paper, we follow a design science research approach to work towards such insights and tools. We center our study in the financial industry, where we first empirically map recently emerging MLOps practices to govern ML applications, and corroborate our insights with recent literature. We then perform an integrative literature research to identify a long list of vulnerabilities that emerge in the sociotechnical context of ML applications, and we theorize these along eight dimensions. We then perform semi-structured interviews in two real-world use cases and across a broad set of relevant actors and organizations, to validate the conceptual dimensions and identify challenges to address sociotechnical vulnerabilities in the design and governance of ML-based systems."