Deep-dive: Navigating vulnerabilities generated by AI coding assistants
No matter where you look, there is an ongoing fixation on AI technology in almost every vertical. Lauded by some as the answer to rapid-fire feature creation in software development, the gains in speed come at a price: severe security bugs can make their way into codebases, thanks to a lack of contextual awareness in the tools themselves, and low-level security skills among the developers relying on them to boost productivity and generate answers to challenging development scenarios.
Large Language Model (LLM) technology represents a seismic shift in assistive tooling, and, when used safely, could indeed be the pair programming companion so many software engineers crave. However, it has been quickly established that unchecked use of AI development tools can have a detrimental impact, with one 2023 study from Stanford University revealing that reliance on AI assistants was likely to result in buggier, more insecure code overall, in addition to an uptick in confidence that the output is secure.
While it is valid to assume that the tools will continue to improve as the race to perfect LLM technology marches on, a swathe of recommendations - including a new Executive Order from the Biden Administration, as well as the Artificial Intelligence Act from the EU - makes their use a tricky path to navigate in any case. Developers can get a head start by honing their code-level security skills, awareness, and critical thinking around the output of AI tools, and, in turn, become a higher standard of engineer.
How do AI coding assistants introduce vulnerabilities? Play our NEW public mission and see for yourself!

Example: Cross-site scripting (XSS) in ‘ChatterGPT’
Our new public mission reveals the familiar interface of a popular LLM, and utilizes a real code snippet generated in late November 2023. Users can interpret this snippet and investigate the security pitfalls that would arise if it were used for its intended purpose.
Based on the prompt, “Can you write a JavaScript function that changes the content of the p HTML element, where the content is passed via that function?” the AI assistant dutifully produces a code block, but all is not what it seems.
Have you played the challenge yet? If not, try now before reading further.
… okay, now that you’ve completed it, you will know that the code in question is vulnerable to cross-site scripting (XSS).
XSS is made possible by manipulating the core functionality of web browsers. It can occur when untrusted input is rendered as output on a page, but misinterpreted as safe, executable code. An attacker can place a malicious snippet (HTML tags, JavaScript, etc.) within an input parameter, which, when returned to the browser, is then executed as code instead of displayed as data.
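The mission's actual snippet isn't reproduced here, but a minimal sketch of the vulnerable pattern, alongside a safer alternative, might look like the following (the function names and element selector are illustrative, not taken from the challenge):

```javascript
// Hypothetical sketch: how an AI-generated helper might introduce XSS,
// and how to avoid it. Names and selectors are illustrative.

// Vulnerable: innerHTML parses the argument as HTML, so a payload like
// "<img src=x onerror=alert(1)>" would execute in the victim's browser.
function setParagraphUnsafe(content) {
  document.querySelector("p").innerHTML = content;
}

// Safer: textContent always treats the argument as plain text, never markup.
function setParagraphSafe(content) {
  document.querySelector("p").textContent = content;
}

// If HTML output is genuinely required, escape the significant characters
// first so the browser displays them as data instead of executing them.
function escapeHtml(input) {
  const map = {
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  };
  return String(input).replace(/[&<>"']/g, (ch) => map[ch]);
}
```

The key design point is output encoding: `textContent` (or an escaping step) keeps untrusted input in the "data" lane, while `innerHTML` hands it directly to the browser's HTML parser.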
Using AI coding assistants safely in software development
A recent survey of working development teams revealed that almost all of them - or 96% - have started using AI assistants in their workflow, with 80% even bypassing security policies to keep them in their toolkit. Further, more than half acknowledged that generative AI tools commonly create insecure code, yet this clearly did not slow down adoption.
With this new era of software development processes, discouraging or banning the use of these tools is unlikely to work. Instead, organizations must enable their development teams to capture the efficiency and productivity gains without sacrificing security or code quality. This requires precision training on secure coding best practices, and giving developers the opportunity to expand their critical thinking skills, ensuring they act with a security-first mindset, especially when assessing the potential threat of AI assistant code output.
Further reading
For XSS in general, check out our comprehensive guide.
Want to learn more about how to write secure code and mitigate risk? Try out our XSS injection challenge for free.
If you’re interested in getting more free coding guidelines, check out Secure Code Coach to help you stay on top of secure coding best practices.
Govern AI-driven development before it ships
Measure AI-assisted risk, enforce secure coding policy at commit, and accelerate secure delivery across your SDLC.