Balancing Security and Correctness in Code Generation: An Empirical Study on Commercial Large Language Models
Outlet Title
IEEE Transactions on Emerging Topics in Computational Intelligence
Document Type
Article
Publication Date
2024
Abstract
Large language models (LLMs) continue to be adopted for many previously manual tasks, with code generation as a prominent use case. Several commercial models have seen wide adoption because their interfaces are accessible: simple prompts can yield working solutions that save developers time. However, the generated code poses a significant security challenge. There are no guarantees of code safety, and LLM responses can readily include known weaknesses. To address this concern, our research examines how different prompt types shape responses to code-generation tasks and whether they produce safer outputs. We first use unconditioned prompts to elicit vulnerable code across multiple commercial LLMs and identify the most common weaknesses. These inputs are then paired with different context, role, and identification prompts intended to improve security. Our findings show that including appropriate guidance reduces vulnerabilities in generated code, with the choice of model having the most significant effect. Additionally, we report timings that demonstrate the efficiency of singular requests, which limit the number of model interactions.
Recommended Citation
Black, Gavin; Rimal, Bhaskar P.; and Vaidyan, Varghese, "Balancing Security and Correctness in Code Generation: An Empirical Study on Commercial Large Language Models" (2024). Research & Publications. 133.
https://scholar.dsu.edu/ccspapers/133