Abstract:
Context: Modern blockchains, such as Ethereum, support the deployment and execution of so-called smart contracts: autonomous digital programs that often manage significant amounts of cryptocurrency. Executing smart contracts incurs gas costs paid by users, which bound a contract's execution. Logic vulnerabilities in smart contracts can lead to excessive gas consumption and financial losses, and are often the root cause of high-impact cyberattacks.
Objective: Our objective is threefold: (i) empirically investigate logic vulnerabilities in real-world smart contracts extracted from code changes on GitHub, (ii) introduce Soley, an automated method for detecting logic vulnerabilities in smart contracts that leverages Large Language Models (LLMs), and (iii) examine the mitigation strategies smart contract developers employ to address these vulnerabilities in real-world scenarios.
Method: We obtained smart contracts and their associated code changes from GitHub. To address the first and third objectives, we qualitatively analyzed the available logic vulnerabilities using an open coding method and identified the vulnerabilities together with their mitigation strategies. For the second objective, we extracted various logic vulnerabilities, focusing on those containing inline assembly fragments, applied preprocessing techniques, and trained the proposed Soley model. We evaluated Soley, along with several other LLMs, and compared the results with a state-of-the-art baseline on the task of logic vulnerability detection.
Results: Our results include a curated large-scale dataset of 50,000 Ethereum smart contracts with 428,569 labeled instances of smart contract vulnerabilities, including 171,180 logic-related vulnerabilities. Our analysis uncovered nine novel logic vulnerabilities, which we used to extend existing taxonomies. Furthermore, we distilled several mitigation strategies from developer modifications observed in real-world scenarios. Experimental results show that Soley outperforms existing approaches in automatically identifying logic vulnerabilities, achieving a 9% improvement in accuracy and up to a 24% improvement in F1-measure over the baseline. Interestingly, LLMs proved effective on this task with minimal feature engineering. Despite these positive results, Soley struggles to identify certain classes of logic vulnerabilities, which we leave to future work.
Conclusion: Early identification of logic vulnerabilities from code changes can provide valuable insights into their detection and mitigation. Recent advances, such as LLMs, show promise for detecting logic vulnerabilities and contributing to smart contract security and sustainability.
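To make the Method's filtering step more concrete, below is a minimal, illustrative Python sketch of isolating Solidity contracts that contain inline assembly fragments, the subset the abstract says Soley focuses on. The directory layout, the helper names, and the regular expression are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch (assumptions, not the paper's pipeline): select .sol files
# containing inline assembly blocks for downstream labeling and model training.
import re
from pathlib import Path

# Matches "assembly { ... }" and the dialect form "assembly \"evmasm\" { ... }".
ASSEMBLY_RE = re.compile(r"\bassembly\s*(\"[^\"]*\"\s*)?\{")

def has_inline_assembly(source: str) -> bool:
    """Return True if the Solidity source contains an inline assembly block."""
    return ASSEMBLY_RE.search(source) is not None

def collect_candidates(contract_dir: str) -> list[Path]:
    """Collect .sol files with inline assembly under a (hypothetical) directory."""
    return [
        path
        for path in Path(contract_dir).rglob("*.sol")
        if has_inline_assembly(path.read_text(encoding="utf-8", errors="ignore"))
    ]

if __name__ == "__main__":
    for p in collect_candidates("contracts/"):  # assumed local directory
        print(p)
```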