New Computational Intelligence Findings Reported from University of Idaho (Balancing Security and Correctness In Code Generation: an Empirical Study On Commercial Large Language Models)
2024 OCT 03 (NewsRx) -- By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News -- Research findings on Machine Learning - Computational Intelligence are discussed in a new report. According to news reporting out of Moscow, Idaho, by NewsRx editors, research stated, "Large language models (LLMs) continue to be adopted for a multitude of previously manual tasks, with code generation as a prominent use. Multiple commercial models have seen wide adoption due to the accessible nature of the interface."

Our news journalists obtained a quote from the research from the University of Idaho, "Simple prompts can lead to working solutions that save developers time. However, the generated code has a significant challenge with maintaining security. There are no guarantees on code safety, and LLM responses can readily include known weaknesses. To address this concern, our research examines different prompt types for shaping responses from code generation tasks to produce safer outputs. The top set of common weaknesses is generated through unconditioned prompts to create vulnerable code across multiple commercial LLMs. These inputs are then paired with different contexts, roles, and identification prompts intended to improve security. Our findings show that the inclusion of appropriate guidance reduces vulnerabilities in generated code, with the choice of model having the most significant effect."
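The quoted abstract describes pairing a baseline code-generation task with different "context," "role," and "weakness identification" prompts. A minimal sketch of what such prompt shaping might look like is below; the wrapper texts and the `build_prompt` helper are illustrative assumptions, not the study's actual prompts.

```python
# Hypothetical sketch of the three prompt-shaping strategies the study
# describes (context, role, identification), compared with an
# unconditioned baseline. The prefix wording is an assumption for
# illustration, not taken from the paper.

def build_prompt(task: str, strategy: str = "none") -> str:
    """Wrap a code-generation task with an optional security-oriented prefix."""
    prefixes = {
        # Unconditioned baseline: the raw task, no guidance.
        "none": "",
        # Context prompt: describes the environment the code will run in.
        "context": ("The generated code will run in a security-critical "
                    "production system.\n"),
        # Role prompt: assigns the model a security-minded persona.
        "role": "You are a security-conscious senior software engineer.\n",
        # Identification prompt: names specific weaknesses to avoid.
        "identification": ("Avoid known CWE weaknesses such as SQL injection "
                           "(CWE-89) and OS command injection (CWE-78).\n"),
    }
    return prefixes[strategy] + task

# Example: the same task under each strategy.
task = "Write a Python function that looks up a user by name in a SQLite database."
for strategy in ("none", "context", "role", "identification"):
    print(f"--- {strategy} ---")
    print(build_prompt(task, strategy))
```

Each shaped prompt would then be sent to the commercial LLMs under test, and the generated code scanned for known weaknesses to compare vulnerability rates across strategies and models.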