Triple Standards for Information Content Security in Generative Artificial Intelligence Services: An Analysis Based on the "Interim Measures for the Administration of Generative Artificial Intelligence Services"
The Interim Measures for the Administration of Generative Artificial Intelligence Services focus on the security of information content, and their relevant provisions can be categorized into performance standards, design standards, and internal management standards. Performance standards impose obligations such as "providers and users shall not generate specific information"; however, they fit poorly with the uncontrollability of generative outputs and entail extremely high compliance costs. Design standards establish specific behavioral obligations, defining the technologies and measures that providers should adopt, and presuppose clearly identifiable regulatory targets; yet some design standards in the Measures are triggered only by the contingent event of "discovering illegal behaviors or information." Internal management standards set abstract behavioral obligations, requiring providers to take measures autonomously to improve information quality; these standards match the uncontrollability, rapid iteration, and big-data characteristics of generative artificial intelligence, but depend on effective implementation. The conformity of the standards can be improved by selecting standard types according to the applicable scenario, constructing supportive enforcement environments, and promoting the maturation of the standard system through regulatory learning.