Regulation of Automated Decision-making by Personal Credit Reporting Agencies: Risk Identification and Differentiation
Personal credit reporting agencies use automated decision-making techniques to provide credit services and products such as credit information inquiries, credit evaluations, and credit anti-fraud. These techniques may pose risks to people's rights and interests through the transmission of false information, the reproduction of social prejudice and discrimination, and the narrowing of the space for personalized human decision-making. An analysis of the application scenarios of automated decision-making by personal credit reporting agencies shows that credit information inquiry products can create the risk of the "algorithm black box", while credit evaluation and credit anti-fraud products can simultaneously create both "algorithm black box" and "algorithm discrimination" risks. China's existing legal systems have developed three kinds of regulatory mechanisms to address the risks caused by automated decision-making: an individual empowerment mechanism, a transparency obligation mechanism, and an external supervision mechanism relying on third-party entities. However, no single regulatory mechanism can solve all of these problems. To regulate the risks caused by the automated decision-making of personal credit reporting agencies, it is necessary to take the object, purpose, and effectiveness of regulation into comprehensive consideration and, based on an analysis of the processing relationships concerning personal credit reporting data, to construct a three-dimensional risk regulatory framework that connects public-law and private-law regulatory systems.
Keywords: personal credit reporting agencies; automated decision-making; risk identification; risk regulatory framework