Deep-dive: Navigating vulnerabilities generated by AI coding assistants
No matter where you look, there is an ongoing fixation on AI technology in almost every vertical. Lauded by some as the answer to rapid-fire feature creation in software development, the gains in speed come at a price: severe security bugs can make their way into codebases, thanks both to a lack of contextual awareness in the tools themselves and to the low-level security skills of the developers relying on them to boost productivity and generate answers to challenging development scenarios.
Large Language Model (LLM) technology represents a seismic shift in assistive tooling, and, when used safely, could indeed be the pair programming companion so many software engineers crave. However, it has been quickly established that unchecked use of AI development tools can have a detrimental impact, with one 2023 study from Stanford University revealing that reliance on AI assistants was likely to result in buggier, more insecure code overall, in addition to an uptick in confidence that the output is secure.
While it is reasonable to assume that the tools will continue to improve as the race to perfect LLM technology marches on, a swathe of recommendations - including a new Executive Order from the Biden Administration, as well as the EU's Artificial Intelligence Act - makes their use a tricky path in any case. Developers can get a head start by honing their code-level security skills, awareness, and critical thinking around the output of AI tools, and, in turn, become higher-standard engineers.
How do AI coding assistants introduce vulnerabilities? Play our NEW public mission and see for yourself!
Example: Cross-site scripting (XSS) in ‘ChatterGPT’
Our new public mission reveals the familiar interface of a popular LLM, and utilizes a real code snippet generated in late November 2023. Users can interpret this snippet and investigate any potential security pitfalls if it were to be used for its intended purpose.
Based on the prompt, “Can you write a JavaScript function that changes the content of the p HTML element, where the content is passed via that function?” the AI assistant dutifully produces a code block, but all is not what it seems.
Have you played the challenge yet? If not, try now before reading further.
… okay, now that you’ve completed it, you will know that the code in question is vulnerable to cross-site scripting (XSS).
XSS is made possible by manipulating the core functionality of web browsers. It can occur when untrusted input is rendered as output on a page but misinterpreted as safe, executable code. An attacker can place a malicious snippet (HTML tags, JavaScript, etc.) within an input parameter, which, when returned to the browser, is then executed instead of displayed as data.
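The mission's actual snippet isn't reproduced here, but the prompt above invites the classic vulnerable pattern sketched below: a function that assigns its argument straight to innerHTML, so markup in the input is parsed and executed by the browser. The function names and the escapeHtml helper are illustrative, not taken from the mission.

```javascript
// Vulnerable pattern: untrusted input is assigned to innerHTML,
// so the browser parses it as HTML and runs any script it carries.
function updateParagraphUnsafe(content) {
  document.querySelector('p').innerHTML = content; // XSS sink
}

// Safer: textContent treats the argument strictly as data, never markup.
function updateParagraph(content) {
  document.querySelector('p').textContent = content;
}

// If HTML output is unavoidable, escape the markup characters first
// so they render as visible text instead of being interpreted.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

With the unsafe version, calling updateParagraphUnsafe('<img src=x onerror=alert(1)>') pops an alert; with textContent, the same string is simply displayed as text.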
Using AI coding assistants safely in software development
A recent survey of working development teams revealed that almost all of them - or 96% - have started using AI assistants in their workflow, with 80% even bypassing security policies to keep them in their toolkit. Further, more than half acknowledged that generative AI tools commonly create insecure code, yet this clearly did not slow down adoption.
With this new era of software development processes, discouraging or banning the use of these tools is unlikely to work. Instead, organizations must enable their development teams to utilize the efficiency and productivity gains without sacrificing security or code quality. This requires precision training on secure coding best practices, and providing them the opportunity to expand their critical thinking skills, ensuring they are acting with a security-first mindset, especially when assessing the potential threat of AI assistant code output.
Further reading
For XSS in general, check out our comprehensive guide.
Want to learn more about how to write secure code and mitigate risk? Try out our XSS injection challenge for free.
If you’re interested in getting more free coding guidelines, check out Secure Code Coach to help you stay on top of secure coding best practices.
Resources to get you started
Trust Agent by Secure Code Warrior
Discover SCW Trust Agent, an innovative solution designed to enhance security by aligning developer secure code knowledge and skills with the work they commit. It provides comprehensive visibility and controls across an organization's entire code repository, analyzing each commit against developers' secure code profiles. With SCW Trust Agent, organizations can strengthen their security posture, optimize development lifecycles, and scale developer-driven security.
Women in Security are Winning: How the AWSN is Setting Up a New Generation of Security Superwomen
Secure-by-Design is the latest initiative on everyone’s lips, and the Australian government, collaborating with CISA at the highest levels of global governance, is guiding a higher standard of software quality and security from vendors.
SCW Trust Agent - Visibility and Control to Scale Developer Driven Security
SCW Trust Agent, introduced by Secure Code Warrior, offers security leaders the visibility and control needed to scale developer-driven security within organizations. By connecting to code repositories, it assesses code commit metadata - the committing developer, the programming languages used, and shipment timestamps - to build a picture of developers' security knowledge.