
A Legal Argumentation Analysis of the Explainability Dilemma of Judicial Artificial Intelligence

Judicial artificial intelligence (AI) faces an explainability dilemma because it relies on neural network algorithms. Legal argumentation offers a new, social-science perspective on this dilemma: it prompts explainers to explain to their audience according to the procedures of legal argumentation, whose procedural legitimacy embodies the value of procedural justice. Provided that fair and transparent argumentation rules are followed, the explainer clearly elucidates the decision-making process of judicial AI to the explainee and ensures that it is understood. Research on legal argumentation has produced diverse procedural theories and methods, and AI research in turn demonstrates the explanatory function of legal argumentation. Legal argumentation thus inspires the construction of explainable argumentation procedures for judicial AI, including the formulation of argumentation rules to regulate the speech acts between the two parties (the explainer and the explainee), and the definition of a standard for testing explainability through the evaluation method of inference to the best explanation.

Keywords: Legal Argumentation; Artificial Intelligence; Explainability; Deep Learning; Argumentation Procedure
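To make the abstract's proposal concrete, the following is a minimal sketch, not drawn from the paper itself, of the two mechanisms it names: argumentation rules that govern the exchange between explainer and explainee, and inference to the best explanation as the acceptance test. All names (Explanation, ExplanationDialogue, challenge) and the scoring weights are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Explanation:
    # A candidate explanation of a judicial AI decision.
    content: str
    coverage: float    # how much of the decision it accounts for (0..1)
    simplicity: float  # how easy it is for the explainee to grasp (0..1)

def best_explanation(candidates):
    # Inference to the best explanation: rank candidates by a weighted
    # balance of coverage and simplicity (weights are assumed here, not
    # specified by the paper).
    return max(candidates, key=lambda e: 0.7 * e.coverage + 0.3 * e.simplicity)

class ExplanationDialogue:
    # Toy argumentation procedure: the explainee may challenge ("why?"),
    # the explainer must answer every open challenge with its current best
    # explanation, and the dialogue ends when the explainee accepts or the
    # explainer has no answers left.
    def __init__(self, candidates):
        self.candidates = list(candidates)
        self.transcript = []

    def challenge(self, question):
        self.transcript.append(("explainee", question))
        if not self.candidates:
            self.transcript.append(("explainer", "no further explanation available"))
            return None
        answer = best_explanation(self.candidates)
        self.candidates.remove(answer)  # each answer may itself be challenged
        self.transcript.append(("explainer", answer.content))
        return answer

if __name__ == "__main__":
    dialogue = ExplanationDialogue([
        Explanation("the prior-offense feature dominated the risk score", 0.8, 0.6),
        Explanation("age and employment features raised the score slightly", 0.3, 0.9),
    ])
    dialogue.challenge("Why did the system assign a high risk score?")
    for speaker, utterance in dialogue.transcript:
        print(f"{speaker}: {utterance}")

The recorded transcript makes the procedure auditable, mirroring the fairness and transparency constraints that the abstract places on the argumentation rules.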

Wei Bin (魏斌)


Guanghua Law School and Digital Rule of Law Laboratory (数字法治实验室), Zhejiang University


Funding: National Key R&D Program of China, Key Special Project (2021YFC3300300); Youth Project of the National Social Science Fund of China (21CFX006)


Law and Social Development (法制与社会发展), Jilin University

Indexed in: CSTPCD; CSSCI; CHSSCD; Peking University Core (北大核心)
Impact factor: 2.707
ISSN: 1006-6128
Year, Volume (Issue): 2024, 30(4)