AI Can Write and Review Code — But Humans Still Own the Risk

Anthropic’s launch of Claude Code Security marks a defining collision point between AI-assisted software development, and the rapid augmentation of how we approach modern cybersecurity. While this new Claude capability can identify vulnerabilities in AI-generated code, this creates a single point of both trust and failure, and a skilled human must still evaluate those findings and determine the appropriate remediation path. This approach is supported by the likes of Justin Greis, CEO of consulting firm Acceligence, who told CSO Online, “For those who blindly rely on any code scanning tool, AI or otherwise, to replace the fundamentals of good security practices and secure coding, this is your red blinking light to not outsource the very expertise that protects the value proposition of the product or service you’re developing.”
In that respect, the model is not fundamentally different from traditional SAST tools. It is more advanced in its reasoning, but a risk-sensitive use case still relies on a human-in-the-loop to interpret, validate and remediate the issues it surfaces safely.
The threat for organizations is not AI capability, it is unchecked AI autonomy and low oversight within the software development lifecycle. When AI both generates and evaluates code, strong, precision governance becomes a critical control.
The Expanding Definition of the “Developer”
AI has lowered the barrier to entry for creating apps and software. But, just because something can be done quickly using AI, doesn’t mean that you are doing it in the most secure or resilient way, nor that the project itself is user-ready. The entire premise of vibe coding is to get into the “flow state” and deal with enterprise-level development formalities, like security, later.
Today’s “developer” may be:
- A traditional engineer using AI to speed up coding tasks
- A product manager prototyping features via prompts
- A data analyst automating scripts through AI
- A QA engineer leveraging AI to generate test cases
From an organizational security perspective, who writes the code matters far less than the code's impact once it reaches production. From a compliance and risk standpoint, if an individual’s interaction with AI results in code entering the software development lifecycle (SDLC) without proper, company-specific security oversight, it introduces organizational risk — risk that must be understood, measured, and mitigated.
Why Human Judgment Still Matters
As more individuals gain the ability to generate code that could impact production or sensitive codebases, an organization’s risk profile expands. Governance must evolve to enable AI-driven development at scale while ensuring the controls required to protect the enterprise remain firmly in place.
AI can, fairly reliably, generate code and flag potential vulnerabilities. What it cannot do is validate whether that code is appropriate within the context of your architecture, data flows, identity model, regulatory obligations, or risk tolerance, and this is fundamental intel that impacts the potency of any security program. Additionally, implementing a tool like Claude Code into the SDLC is one thing, but tools like BaxBench demonstrate via vast data analysis that different models (for example, Opus vs Sonnet 4.5 vs Sonnet 3) are returning different outputs in terms of security and accuracy, resulting in an enormous difference in the real dollars a company will ultimately pay for usage as they push for working, secure code.
Secure software is not simply code that passes a scan. It must follow good, safe patterns that align seamlessly with system design, business intent, and enterprise policy. That requires judgment. When developers rely heavily on AI to generate or even review code, there is a real risk that their understanding of the codebase erodes. If an engineer cannot fully explain why a piece of code works or is secure, the organization has already lost a layer of control.
Validation is not the same as detection. Accountability is not the same as automation. AI can assist, but it cannot assume responsibility (and no legislation to date absolves humans from consequences of rogue AI actions).
Human-in-the-loop oversight is not a legacy concept. In an AI-driven development environment, it is the primary safeguard that ensures code entering the software development lifecycle has been consciously reviewed, understood, and approved. Without that layer of judgment, speed becomes exposure.
Education Must Be the Foundation of Safe AI Adoption
In this context, secure coding training evolves from a nice-to-have to a core enterprise control. “Developers” will evolve from operators to orchestrators, shifting education from developing secure code to evaluating the security of AI-generated code.
The skills required to validate AI-generated code, anticipate emergent AI vulnerability classes such as prompt injection, and understand how AI patterns interact with your architecture cannot be learned through episodic compliance modules. They must be continuous, practical, integrated into existing workflows, and measurable against real risk outcomes.
AI Software Governance: The Control Layer We’re Missing
Many security programs still treat application security as a downstream function. In an AI-driven environment, that model ceases to have the same impact on risk reduction. What’s required is AI Software Governance: a true enterprise control plane for an AI-driven software development lifecycle. This discipline spans the AI-driven software development lifecycle and establishes structured oversight of code creation, review, and approval.
That includes:
- Visibility into AI tool usage across teams
- Clear attribution of AI-generated code within the SDLC
- Correlation of code changes to risk signals and policy requirements
- Enforcement of secure coding standards inside developer workflows
- Continuous upskilling and measurable secure coding capability
- Demonstrable reduction in security risk before code reaches production
Governance is what bridges detection and decision, ensuring that productivity gains do not come at the expense of accountability.
AI will continue transforming how software is built, and abandoning it is neither realistic nor desirable. The productivity and innovation gains are, after all, too significant. But, working to extract value safely requires disciplined oversight and intentional human validation, and this infrastructure equally cannot be rushed or ignored.
Govern AI-driven development before it ships
Measure AI-assisted risk, enforce secure coding policy at commit, and accelerate secure delivery across your SDLC.
Il s'agit d'un titre dynamique avec des options de tag et de style
Lorem ipsum diam quis enim lobortis scelerisque fermentum dui faucibus in ornare quam viverra orci sagittis eu volutpat odio facilisis.
%252520%252520(3).png)
Supercharged Security Awareness: How Tournaments are Inspiring Developers at Erste Group

Security as culture: How Blue Prism cultivates world-class secure developers
Learn how Blue Prism, the global leader in intelligent automation for the enterprise, used Secure Code Warrior's agile learning platform to create a security-first culture with their developers, achieve their business goals, and ship secure code at speed

One Culture of Security: How Sage built their security champions program with agile secure code learning
Discover how Sage enhanced security with a flexible, relationship-focused approach, creating 200+ security champions and achieving measurable risk reduction.
Secure AI-driven development before it ships
See developer risk, enforce policy, and prevent vulnerabilities across your software development lifecycle.
