Deep-dive: Navigating vulnerabilities generated by AI coding assistants
No matter where you look, there is an ongoing fixation with AI technology in almost every vertical. Lauded by some as the answer to rapid-fire feature creation in software development, these tools' gains in speed come with a price: the potential for severe security bugs making their way into codebases, thanks both to a lack of contextual awareness in the tool itself and to low-level security skills among the developers relying on it to boost productivity and answer challenging development scenarios.
Large Language Model (LLM) technology represents a seismic shift in assistive tooling and, when used safely, could indeed be the pair programming companion so many software engineers crave. However, it has quickly become clear that unchecked use of AI development tools can be detrimental: a 2023 study from Stanford University found that developers relying on AI assistants wrote buggier, less secure code overall, while being more confident that their output was secure.
While it is fair to assume the tools will continue to improve as the race to perfect LLM technology marches on, a swathe of recommendations and regulations - including a new Executive Order from the Biden Administration and the EU's Artificial Intelligence Act - makes their unchecked use a tricky path in any case. Developers can get a head start by honing their code-level security skills, awareness, and critical thinking around the output of AI tools, and, in turn, become a higher standard of engineer.
How do AI coding assistants introduce vulnerabilities? Play our NEW public mission and see for yourself!

Example: Cross-site scripting (XSS) in ‘ChatterGPT’
Our new public mission recreates the familiar interface of a popular LLM and uses a real code snippet generated in late November 2023. Users can interpret this snippet and investigate the security pitfalls that would arise if it were used for its intended purpose.
Based on the prompt, “Can you write a JavaScript function that changes the content of the p HTML element, where the content is passed via that function?” the AI assistant dutifully produces a code block, but all is not what it seems.
Have you played the challenge yet? If not, try now before reading further.
… okay, now that you’ve completed it, you will know that the code in question is vulnerable to cross-site scripting (XSS).
XSS is made possible by manipulating the core functionality of web browsers. It occurs when untrusted input is rendered as output on a page and misinterpreted as safe, executable code. An attacker can place a malicious snippet (HTML tags, JavaScript, etc.) within an input parameter, which, when returned to the browser, is then executed instead of being displayed as data.
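To make the mechanics concrete, here is a minimal sketch of how this class of bug typically appears in JavaScript. The vulnerable function is an assumption about the kind of answer an assistant might produce for the prompt above, not the mission's actual snippet: it assigns untrusted input to innerHTML, so the browser parses it as markup. The safer variant uses textContent, which treats the value strictly as data.

// Hypothetical assistant-style answer: assigning raw input to innerHTML
// means markup such as <img src=x onerror=alert(1)> will execute in the page.
function updateParagraphUnsafe(content) {
  document.querySelector("p").innerHTML = content; // XSS sink
}

// Safer variant: textContent treats the value as plain text,
// so any HTML tags are displayed rather than parsed and executed.
function updateParagraphSafe(content) {
  document.querySelector("p").textContent = content;
}

If untrusted markup genuinely has to be rendered, it should first be sanitized with a well-maintained library such as DOMPurify rather than passed to innerHTML directly.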
Using AI coding assistants safely in software development
A recent survey of working development teams revealed that almost all of them - 96% - have started using AI assistants in their workflow, with 80% even bypassing security policies to keep them in their toolkit. Further, more than half acknowledged that generative AI tools commonly create insecure code, yet this clearly has not slowed adoption.
In this new era of software development, discouraging or banning these tools is unlikely to work. Instead, organizations must enable their development teams to capture the efficiency and productivity gains without sacrificing security or code quality. That requires precision training in secure coding best practices, along with the opportunity to sharpen critical thinking skills, so that developers act with a security-first mindset, especially when assessing the potential threat in AI assistant code output.
Further reading
For XSS in general, check out our comprehensive guide.
Want to learn more about how to write secure code and mitigate risk? Try out our XSS injection challenge for free.
If you’re interested in getting more free coding guidelines, check out Secure Code Coach to help you stay on top of secure coding best practices.


Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
Laura Verheyde is a software developer at Secure Code Warrior focused on researching vulnerabilities and creating content for Missions and Coding labs.
