AI Can Write and Review Code — But Humans Still Own the Risk

Pieter Danhieux
Published Feb 25, 2026
Last updated on Feb 25, 2026

Anthropic’s launch of Claude Code Security marks a defining collision point between AI-assisted software development and the rapidly evolving practice of modern cybersecurity. While this new Claude capability can identify vulnerabilities in AI-generated code, it also creates a single point of both trust and failure: a skilled human must still evaluate its findings and determine the appropriate remediation path. This view is shared by the likes of Justin Greis, CEO of consulting firm Acceligence, who told CSO Online, “For those who blindly rely on any code scanning tool, AI or otherwise, to replace the fundamentals of good security practices and secure coding, this is your red blinking light to not outsource the very expertise that protects the value proposition of the product or service you’re developing.”

In that respect, the model is not fundamentally different from traditional SAST tools. Its reasoning is more advanced, but any risk-sensitive use case still relies on a human in the loop to safely interpret, validate, and remediate the issues it surfaces.

The threat for organizations is not AI capability; it is unchecked AI autonomy and weak oversight within the software development lifecycle. When AI both generates and evaluates code, strong, precise governance becomes a critical control.

The Expanding Definition of the “Developer”

AI has lowered the barrier to entry for creating apps and software. But just because something can be built quickly with AI doesn’t mean it is being built in the most secure or resilient way, nor that the result is ready for users. The entire premise of vibe coding is to get into the “flow state” and deal with enterprise-level development formalities, like security, later.

Today’s “developer” may be:

  • A traditional engineer using AI to speed up coding tasks
  • A product manager prototyping features via prompts
  • A data analyst automating scripts through AI
  • A QA engineer leveraging AI to generate test cases

From an organizational security perspective, who writes the code matters far less than the code’s impact once it reaches production. From a compliance and risk standpoint, if an individual’s interaction with AI results in code entering the software development lifecycle (SDLC) without proper, company-specific security oversight, it introduces organizational risk that must be understood, measured, and mitigated.

Why Human Judgment Still Matters

As more individuals gain the ability to generate code that could impact production or sensitive codebases, an organization’s risk profile expands. Governance must evolve to enable AI-driven development at scale while ensuring the controls required to protect the enterprise remain firmly in place.

AI can, fairly reliably, generate code and flag potential vulnerabilities. What it cannot do is validate whether that code is appropriate within the context of your architecture, data flows, identity model, regulatory obligations, or risk tolerance, and that context is fundamental intelligence that determines the strength of any security program. Integrating a tool like Claude Code into the SDLC is one thing, but benchmarks like BaxBench show that different models (for example, Opus vs Sonnet 4.5 vs Sonnet 3) produce outputs that vary widely in security and accuracy. That variance translates into an enormous difference in the real dollars a company will ultimately pay as it pushes for working, secure code.
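
The cost point can be made concrete with a toy calculation. The pass rates and per-attempt prices below are invented for illustration, not BaxBench figures: a model that produces secure, working code less often forces more generate-review-fix attempts, and the expected spend per accepted result grows accordingly.

```python
# Toy model of retry cost. Assumes each attempt independently passes
# security review with probability pass_rate, so the expected number of
# attempts per accepted result is 1 / pass_rate (a geometric distribution).
def expected_cost(pass_rate: float, cost_per_attempt: float) -> float:
    """Expected dollars spent per accepted, secure result."""
    if not 0 < pass_rate <= 1:
        raise ValueError("pass_rate must be in (0, 1]")
    return cost_per_attempt / pass_rate

# Invented numbers: a pricier model that passes review more often can
# still be cheaper per accepted result than a cheap model that rarely does.
strong = expected_cost(pass_rate=0.60, cost_per_attempt=0.50)    # ~0.83 per result
cheap_ok = expected_cost(pass_rate=0.30, cost_per_attempt=0.20)  # ~0.67 per result
cheap_bad = expected_cost(pass_rate=0.10, cost_per_attempt=0.20)  # 2.00 per result
```

Under these assumptions the ordering flips as pass rates fall, which is the point: per-attempt price alone does not predict the real cost of reaching working, secure code.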

Secure software is not simply code that passes a scan. It must follow good, safe patterns that align seamlessly with system design, business intent, and enterprise policy. That requires judgment. When developers rely heavily on AI to generate or even review code, there is a real risk that their understanding of the codebase erodes. If an engineer cannot fully explain why a piece of code works or is secure, the organization has already lost a layer of control.

Validation is not the same as detection. Accountability is not the same as automation. AI can assist, but it cannot assume responsibility (and no legislation to date absolves humans of the consequences of rogue AI actions).

Human-in-the-loop oversight is not a legacy concept. In an AI-driven development environment, it is the primary safeguard that ensures code entering the software development lifecycle has been consciously reviewed, understood, and approved. Without that layer of judgment, speed becomes exposure.
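
As a sketch of what that safeguard can look like in practice, the policy check below gates merges on explicit human approval. Everything here is illustrative: the field names (`ai_generated`, `touched_paths`, `human_approvals`) and the sensitive-path list are assumptions, not any real platform’s API.

```python
# Hypothetical human-in-the-loop merge gate: a change that is AI-generated,
# or that touches sensitive paths, must carry at least one named human
# security approval before it may enter the protected branch.
SENSITIVE_PREFIXES = ("auth/", "payments/", "crypto/")

def merge_allowed(change: dict) -> bool:
    """Return True only if the change may enter the protected branch."""
    ai_generated = change.get("ai_generated", False)
    touches_sensitive = any(
        path.startswith(SENSITIVE_PREFIXES)
        for path in change.get("touched_paths", [])
    )
    if ai_generated or touches_sensitive:
        # Speed without judgment is exposure: require a named approver.
        return len(change.get("human_approvals", [])) >= 1
    return True

if __name__ == "__main__":
    unreviewed = {"ai_generated": True, "touched_paths": ["auth/login.py"], "human_approvals": []}
    reviewed = {"ai_generated": True, "touched_paths": ["auth/login.py"], "human_approvals": ["alice"]}
    print(merge_allowed(unreviewed))  # False: no human has signed off
    print(merge_allowed(reviewed))    # True: a named reviewer approved it
```

In a real pipeline the same rule would be enforced by the source-control platform (for example, required reviews on protected branches) rather than a script, but the shape of the control is the same: conscious review before code enters the SDLC.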

Education Must Be the Foundation of Safe AI Adoption

In this context, secure coding training evolves from a nice-to-have to a core enterprise control. “Developers” will evolve from operators to orchestrators, and education must shift with them: from writing secure code to evaluating the security of AI-generated code.

The skills required to validate AI-generated code, anticipate emergent AI vulnerability classes such as prompt injection, and understand how AI patterns interact with your architecture cannot be learned through episodic compliance modules. They must be continuous, practical, integrated into existing workflows, and measurable against real risk outcomes. 

AI Software Governance: The Control Layer We’re Missing

Many security programs still treat application security as a downstream function. In an AI-driven environment, that model no longer delivers the same risk reduction. What’s required is AI Software Governance: a true enterprise control plane that spans the AI-driven software development lifecycle and establishes structured oversight of code creation, review, and approval.

That includes:

  • Visibility into AI tool usage across teams
  • Clear attribution of AI-generated code within the SDLC
  • Correlation of code changes to risk signals and policy requirements
  • Enforcement of secure coding standards inside developer workflows
  • Continuous upskilling and measurable secure coding capability
  • Demonstrable reduction in security risk before code reaches production
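
The first two controls, visibility and attribution, can start as simply as a commit-message convention plus an audit step. The `AI-Assisted:` trailer below is a hypothetical convention, not an established standard:

```python
import re
from typing import Optional

# Hypothetical convention: commits touched by an AI assistant carry a
# trailer line such as "AI-Assisted: claude-code" in the commit message.
TRAILER = re.compile(r"^AI-Assisted:\s*(?P<tool>\S.*)$", re.MULTILINE)

def ai_attribution(commit_message: str) -> Optional[str]:
    """Return the declared AI tool for a commit, or None if unattributed."""
    match = TRAILER.search(commit_message)
    return match.group("tool").strip() if match else None

def summarize(commit_messages: list[str]) -> dict[str, int]:
    """Count commits per declared tool; undeclared commits group together."""
    counts: dict[str, int] = {}
    for message in commit_messages:
        tool = ai_attribution(message) or "unattributed"
        counts[tool] = counts.get(tool, 0) + 1
    return counts
```

Feeding this the output of `git log` would give a security team a first, coarse view of where AI-generated code is entering the SDLC, which the richer correlation and enforcement controls above can then build on.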

Governance is what bridges detection and decision, ensuring that productivity gains do not come at the expense of accountability.

AI will continue transforming how software is built, and abandoning it is neither realistic nor desirable; the productivity and innovation gains are simply too significant. But extracting that value safely requires disciplined oversight and intentional human validation, and that governance infrastructure cannot be rushed or ignored.



Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

Author
Pieter Danhieux

Chief Executive Officer, Chairman, and Co-Founder

Pieter Danhieux is a globally recognized security expert, with over 12 years experience as a security consultant and 8 years as a Principal Instructor for SANS teaching offensive techniques on how to target and assess organizations, systems and individuals for security weaknesses. In 2016, he was recognized as one of the Coolest Tech people in Australia (Business Insider), awarded Cyber Security Professional of the Year (AISA - Australian Information Security Association) and holds GSE, CISSP, GCIH, GCFA, GSEC, GPEN, GWAPT, GCIA certifications.
