
The Logic of Regulating Web Crawlers: Advocating for Preceding Administrative Oversight

Web crawlers are an indispensable technology of the internet age, but they are also easily misused for illegal and criminal purposes. The ambiguity of their boundaries creates numerous obstacles to legal regulation, manifested chiefly in circuitous or absent administrative oversight and in disorderly, contradictory judicial evaluation. In criminal law this appears as the unstable application of charges, which vacillate between data crimes and offenses concerning the content the data carries; the normative significance of bypassing robots.txt is exaggerated, raising the suspicion that data-crime provisions are being misapplied. In response, theory and practice have broadly produced two regulatory paths: the first delineates crawler boundaries by reference to robots.txt; the second judges a crawler's behavioral boundary by its effects, under the principle of interest balancing. However, treating robots.txt as a contract faces normative and jurisprudential obstacles in civil law and is not theoretically persuasive, while the interest-balancing principle likewise fails to supply a stable standard for either data-crime or civil-law evaluation and in essence evades drawing any boundary for crawler behavior. It is therefore necessary to place administrative regulation first: doing so can give each branch of law effective guidance for judging crawler boundaries and can bring civil and criminal judgments into alignment. Preceding regulation does not mean a return to rigid, "one-size-fits-all" standards; rather, it entails case-by-case administrative confirmation of web data crawling. As concrete measures, it should rely on a regime of "special certification for robots.txt" and "special authorization for counter-anti-crawling measures", with rights of appeal granted to all parties. Courts should then presume the validity of regulatory norms, thereby normalizing "industry standards" and both legitimizing and limiting the gap-filling function of robots.txt, which in turn provides clear guidance for adjudication.
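Because the argument turns on what it means to comply with, or bypass, robots.txt, a minimal sketch may help readers unfamiliar with the protocol. The snippet below uses Python's standard urllib.robotparser; the sample rules, the crawler name MyCrawler/1.0, and the example.com URLs are hypothetical illustrations, not drawn from the article.

# A minimal sketch of how a compliant crawler consults robots.txt before
# fetching a page. All rules and URLs here are hypothetical.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content: the site bars all agents from /private/
# but permits everything else.
SAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

# A compliant crawler checks each URL before requesting it; a crawler that
# skips this check, or ignores its result, is "bypassing robots.txt" in the
# sense the abstract discusses.
for url in ("https://example.com/articles/1",
            "https://example.com/private/data"):
    allowed = parser.can_fetch("MyCrawler/1.0", url)
    print(f"{url} -> {'fetch' if allowed else 'skip (disallowed)'}")

Note that robots.txt is purely advisory at the technical level: nothing in the protocol prevents a crawler from ignoring it, which is precisely why its legal significance is contested in the article.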

Keywords: web crawler; judicial regulation; preceding regulatory oversight; administrative regulation of web crawlers

Gao Yongming (高永明), Ma Guangyuan (马光远)


School of Law, Yangzhou University, Yangzhou 225127

School of Law, Hainan University, Haikou 570000


Funding: National Social Science Fund of China (20BFX007); project of the Collaborative Innovation Center for Regional Rule-of-Law Development of Jiangsu Higher Education Institutions


Journal: Jilin University Journal Social Sciences Edition (吉林大学社会科学学报)
Publisher: Jilin University

Indexed in: CSTPCD, CSSCI, CHSSCD, Peking University Core Journals (北大核心)
Impact factor: 1.224
ISSN: 0257-2834
Year, Volume (Issue): 2024, 64(3)