Deep-dive: Navigating vulnerabilities generated by AI coding assistants

Published Dec 11, 2023
by Laura Verheyde
Case Study

No matter where you look, there is an ongoing fixation with AI technology in almost every vertical. Lauded by some as the answer to rapid-fire feature creation in software development, the gains in speed come at a price: severe security bugs can make their way into codebases, thanks to a lack of contextual awareness in the tool itself and low security skills among the developers relying on it to boost productivity and generate answers to challenging development scenarios.

Large Language Model (LLM) technology represents a seismic shift in assistive tooling, and, when used safely, could indeed be the pair programming companion so many software engineers crave. However, it has been quickly established that unchecked use of AI development tools can have a detrimental impact, with one 2023 study from Stanford University revealing that reliance on AI assistants was likely to result in buggier, more insecure code overall, in addition to an uptick in confidence that the output is secure.

While it is reasonable to assume the tools will continue to improve as the race to perfect LLM technology marches on, a swathe of recommendations - including a new Executive Order from the Biden Administration, as well as the Artificial Intelligence Act from the EU - makes their use a tricky path to navigate in any case. Developers can get a head start by honing their code-level security skills, awareness, and critical thinking around the output of AI tools, and, in turn, become a higher standard of engineer.

How do AI coding assistants introduce vulnerabilities? Play our NEW public mission and see for yourself!

Example: Cross-site scripting (XSS) in ‘ChatterGPT’

Our new public mission presents the familiar interface of a popular LLM and uses a real code snippet generated in late November 2023. Users can interpret the snippet and investigate the security pitfalls it could introduce if it were used for its intended purpose.

Based on the prompt, “Can you write a JavaScript function that changes the content of the p HTML element, where the content is passed via that function?” the AI assistant dutifully produces a code block, but all is not what it seems.

Have you played the challenge yet? If not, try now before reading further.

… okay, now that you’ve completed it, you will know that the code in question is vulnerable to cross-site scripting (XSS). 
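
The mission's exact snippet isn't reproduced here, but a typical AI-generated answer to that prompt looks something like the sketch below (the function name changeParagraphContent is our own illustrative choice). It reaches for innerHTML, which treats whatever string the caller passes in as markup rather than plain text:

    // Illustrative sketch only; not the mission's verbatim output.
    function changeParagraphContent(content) {
      // innerHTML parses the argument as HTML, so any markup it contains is rendered
      document.querySelector('p').innerHTML = content;
    }

    // Harmless when the input is trusted:
    changeParagraphContent('Hello, world!');

    // Attacker-controlled input becomes executable markup:
    changeParagraphContent('<img src=x onerror="alert(document.cookie)">');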

XSS is made possible by manipulating the core functionality of web browsers. It can occur when untrusted input is rendered as output on a page but misinterpreted as safe, executable code. An attacker can place a malicious snippet (HTML tags, JavaScript, etc.) within an input parameter, which is then executed by the browser when it is returned to the page, instead of being displayed as data.
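
Sticking with the illustrative sketch above, and assuming the paragraph only ever needs to display plain text, a minimal fix is to assign the value through textContent, which never parses markup:

    // Minimal mitigation sketch: treat the caller's input strictly as text.
    function changeParagraphContent(content) {
      // textContent does not parse HTML; tags arrive on the page as inert characters
      document.querySelector('p').textContent = content;
    }

    // The same payload is now displayed as data instead of being executed:
    changeParagraphContent('<img src=x onerror="alert(document.cookie)">');

If the element genuinely needs to render HTML, the untrusted input should first be sanitized or output-encoded with a vetted library rather than inserted directly.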

Using AI coding assistants safely in software development

A recent survey of working development teams revealed that almost all of them - or 96% - have started using AI assistants in their workflow, with 80% even bypassing security policies to keep them in their toolkit. Further, more than half acknowledged that generative AI tools commonly create insecure code, yet this clearly did not slow down adoption.

In this new era of software development, discouraging or banning the use of these tools is unlikely to work. Instead, organizations must enable their development teams to capture the efficiency and productivity gains without sacrificing security or code quality. This requires precision training on secure coding best practices, and giving developers the opportunity to expand their critical thinking skills, ensuring they act with a security-first mindset, especially when assessing the potential threat of AI assistant code output.

Further reading

For XSS in general, check out our comprehensive guide.

Want to learn more about how to write secure code and mitigate risk? Try out our XSS injection challenge for free.

If you’re interested in getting more free coding guidelines, check out Secure Code Coach to help you stay on top of secure coding best practices.

Want to learn more? Download our latest white paper.

Author

Laura Verheyde

Laura Verheyde is a software developer at Secure Code Warrior focused on researching vulnerabilities and creating content for Missions and Coding Labs.
