
The Value of Algorithmic Interpretability and Its Path to Rule of Law

With the deep iteration of autonomous learning algorithm technology, automated decision-making has become widely embedded in the decision systems of human society. However, owing to its specialized and black-box character, automated algorithmic decision-making threatens the procedural legitimacy and accountability mechanisms on which the human legal order is built, ultimately posing a fundamental challenge to human dignity. Algorithmic interpretability is the key concept for bringing algorithmic systems under the constraints of social norm systems, and the degree to which it is realized is crucial for maintaining the rule-of-law order and protecting the rights and interests of those subject to algorithmic decisions. The main institutional supports for achieving algorithmic interpretability at present include setting proportionate levels of transparency to open the black box to varying degrees, building multi-party collaborative review mechanisms to pin down responsible parties, and institutionalizing intuitive relationships to guarantee a "human in the loop." The ultimate aim of these measures is to preserve the inherent good of algorithmic technology.

algorithmic decision-making; algorithmic interpretability; due process; accountability; human dignity

Wang Haiyan (王海燕)


School of Administrative Law, Southwest University of Political Science and Law, Chongqing 401120, China


Major Project of the National Social Science Fund of China

20&ZD190

2024

Chongqing Social Sciences (重庆社会科学)
Chongqing Federation of Social Sciences (重庆市社会科学界联合会)


Indexed in: CHSSCD; Peking University Core Journals (北大核心)
Impact factor: 0.627
ISSN: 1673-0186
Year, volume (issue): 2024, (1)