When Good Tools Go Bad: AI Tool Poisoning, and How to Stop Your AI From Acting as a Double Agent
AI-assisted development (or, the more trendy version, “vibe coding”) is having a vast, transformative effect on code creation. Established developers are adopting these tools in droves, and those among us who have always wanted to create our own software but lacked the relevant experience are also leveraging them to build assets that previously would have been cost- and time-prohibitive. While this technology promises to usher in a new era of innovation, it introduces a range of new vulnerabilities and risk profiles that security leaders are struggling to mitigate.
A recent discovery by InvariantLabs uncovered a critical vulnerability in the Model Context Protocol (MCP), an API-like framework allowing powerful AI tools to autonomously interact with other software and databases, that allows for what has been dubbed "Tool Poisoning Attacks," a new vulnerability category that could prove especially damaging in the enterprise. Major AI tools, such as Windsurf and Cursor, are not immune, and with many millions of users, awareness and the skills to manage this emerging security issue are paramount.
As it stands, the output of these tools is not consistently secure enough to label them as enterprise-ready, as noted in a recent research paper from AWS and Intuit security researchers, Vineeth Sai Narajala and Idan Habler: “As AI systems become more autonomous and start interacting directly with external tools and real-time data through things like MCP, making sure those interactions are secure becomes absolutely essential.”
Agentic AI systems and the risk profile of the Model Context Protocol
The Model Context Protocol is handy software built by Anthropic that allows for better, more seamless integration between Large Language Model (LLM) AI agents and other tooling. This is a powerful use case, opening up a world of possibilities between proprietary applications and business-critical SaaS tools like GitHub, interacting with cutting-edge AI solutions. Simply write an MCP server, and get on with the task of setting the guidelines around how you want it to function, and to what end.
The security implications of MCP technology are, in fact, mostly positive. The promise of more straightforward integration between LLMs and the tech stack used by security professionals is too enticing to ignore, and represents the possibility of precision security task automation at levels previously not possible, at least not without writing and deploying custom code, usually for each task. The enhanced interoperability of LLMs that is afforded by MCP is an exciting prospect for enterprise security, given that far-reaching visibility and connectivity between data, tools and personnel is fundamental to effective security defense and planning.
However, the use of MCP can introduce other possible threat vectors, significantly expanding the enterprise attack surface unless carefully managed. As is noted by InvariantLabs, Tool Poisoning Attacks represent a new vulnerability category that can lead to sensitive data exfiltration and unauthorized actions by AI models, and from there, the security implications get very dark, very quickly.
InvariantLabs notes that a Tool Poisoning Attack is made possible when malicious instructions are embedded within MCP tool descriptions that are not visible to users, but are fully readable (and executable) by AI models. This tricks the tool into performing unauthorized bad actions without user awareness. The issue lies in the MCP’s assumption that all tool descriptions are to be trusted, which is music to the ears of a threat actor.
They note these possible outcomes of a compromised tool:
- Directing AI models to access sensitive files (like SSH keys, configuration files, databases, etc.);
- Instructing the AI to extract and transmit this data, in an environment where these malicious actions are inherently concealed from the unaware user;
- Create a disconnect between what the user sees and the AI model does by hiding behind deceptively simple UI representations of tool arguments and outputs.
This is a concerning, emergent vulnerability category, and one that we will almost certainly see more frequently as the inevitable growth of MCP use continues. It will take careful action to find and mitigate this threat as enterprise security programs evolve, and adequately preparing developers to be part of the solution is key.
Why only security-skilled developers should be leveraging agentic AI tools
Agentic AI coding tools are considered the next evolution of AI-assisted coding, adding to their ability to offer increased efficiency, productivity, and flexibility in software development. Their enhanced ability to understand context and intent makes them especially useful, but they are not immune to threats like prompt injection, hallucination, or behavior manipulation from attackers.
Developers are the defensive line between good and bad code commits, and keeping both security and critical thinking skills sharp will be fundamental in the future of secure software development.
AI output should never be implemented with blind trust, and it’s security-skilled developers applying contextual, critical thinking that can safely leverage the productivity gains afforded by this technology. Still, it must be in what amounts to a pair programming environment, where the human expert is able to assess, threat-model, and ultimately approve the work produced by the tool.
Learn more about how developers can upskill and supercharge their productivity with AI here.
Practical mitigation techniques, and further reading in our latest research paper
AI coding tools and MCP technology are set to be a significant factor in the future of cybersecurity, but it’s vital that we don’t dive in before checking the water.
Narajala and Habler’s paper details comprehensive mitigation strategies for implementing MCP at the enterprise level and the ongoing management of its risks. Ultimately, it centers around defense-in-depth and Zero Trust principles, explicitly targeting the unique risk profile this new ecosystem brings to an enterprise environment. For developers specifically, it is essential to plug knowledge gaps in the following areas:
- Authentication and Access Control: Agentic AI tools function to solve problems and make autonomous decisions to fulfill the goals mapped out for them, much in the way a human would approach engineering tasks. However, as we have established, skilled human oversight of these processes cannot be ignored, and developers using these tools in their workflows must understand exactly what access they have, the data they retrieve or potentially expose, and where it might be shared.
- General Threat Detection and Mitigation: As is true of most AI processes, to spot potential flaws and inaccuracies in the tool’s output, the user must be proficient in the task themselves. Developers must receive continuous upskilling and verification of those skills to effectively review security processes and review AI-generated code with security precision and authority.
- Alignment with Security Policy and AI Governance: Developers should be made aware of approved tooling and given the opportunity to upskill and gain access to them. Both the developer and the tool should be subject to security benchmarking before commits are trusted.
We recently released a research paper on the emergence of vibe coding and AI-assisted coding, and the steps enterprises must take to uplift the next generation of AI-powered software engineers. Check it out, and get in touch to fortify your development cohort today.
Chief Executive Officer, Chairman, and Co-Founder

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
Book a demoChief Executive Officer, Chairman, and Co-Founder
Pieter Danhieux is a globally recognized security expert, with over 12 years experience as a security consultant and 8 years as a Principal Instructor for SANS teaching offensive techniques on how to target and assess organizations, systems and individuals for security weaknesses. In 2016, he was recognized as one of the Coolest Tech people in Australia (Business Insider), awarded Cyber Security Professional of the Year (AISA - Australian Information Security Association) and holds GSE, CISSP, GCIH, GCFA, GSEC, GPEN, GWAPT, GCIA certifications.


AI-assisted development (or, the more trendy version, “vibe coding”) is having a vast, transformative effect on code creation. Established developers are adopting these tools in droves, and those among us who have always wanted to create our own software but lacked the relevant experience are also leveraging them to build assets that previously would have been cost- and time-prohibitive. While this technology promises to usher in a new era of innovation, it introduces a range of new vulnerabilities and risk profiles that security leaders are struggling to mitigate.
A recent discovery by InvariantLabs uncovered a critical vulnerability in the Model Context Protocol (MCP), an API-like framework allowing powerful AI tools to autonomously interact with other software and databases, that allows for what has been dubbed "Tool Poisoning Attacks," a new vulnerability category that could prove especially damaging in the enterprise. Major AI tools, such as Windsurf and Cursor, are not immune, and with many millions of users, awareness and the skills to manage this emerging security issue are paramount.
As it stands, the output of these tools is not consistently secure enough to label them as enterprise-ready, as noted in a recent research paper from AWS and Intuit security researchers, Vineeth Sai Narajala and Idan Habler: “As AI systems become more autonomous and start interacting directly with external tools and real-time data through things like MCP, making sure those interactions are secure becomes absolutely essential.”
Agentic AI systems and the risk profile of the Model Context Protocol
The Model Context Protocol is handy software built by Anthropic that allows for better, more seamless integration between Large Language Model (LLM) AI agents and other tooling. This is a powerful use case, opening up a world of possibilities between proprietary applications and business-critical SaaS tools like GitHub, interacting with cutting-edge AI solutions. Simply write an MCP server, and get on with the task of setting the guidelines around how you want it to function, and to what end.
The security implications of MCP technology are, in fact, mostly positive. The promise of more straightforward integration between LLMs and the tech stack used by security professionals is too enticing to ignore, and represents the possibility of precision security task automation at levels previously not possible, at least not without writing and deploying custom code, usually for each task. The enhanced interoperability of LLMs that is afforded by MCP is an exciting prospect for enterprise security, given that far-reaching visibility and connectivity between data, tools and personnel is fundamental to effective security defense and planning.
However, the use of MCP can introduce other possible threat vectors, significantly expanding the enterprise attack surface unless carefully managed. As is noted by InvariantLabs, Tool Poisoning Attacks represent a new vulnerability category that can lead to sensitive data exfiltration and unauthorized actions by AI models, and from there, the security implications get very dark, very quickly.
InvariantLabs notes that a Tool Poisoning Attack is made possible when malicious instructions are embedded within MCP tool descriptions that are not visible to users, but are fully readable (and executable) by AI models. This tricks the tool into performing unauthorized bad actions without user awareness. The issue lies in the MCP’s assumption that all tool descriptions are to be trusted, which is music to the ears of a threat actor.
They note these possible outcomes of a compromised tool:
- Directing AI models to access sensitive files (like SSH keys, configuration files, databases, etc.);
- Instructing the AI to extract and transmit this data, in an environment where these malicious actions are inherently concealed from the unaware user;
- Create a disconnect between what the user sees and the AI model does by hiding behind deceptively simple UI representations of tool arguments and outputs.
This is a concerning, emergent vulnerability category, and one that we will almost certainly see more frequently as the inevitable growth of MCP use continues. It will take careful action to find and mitigate this threat as enterprise security programs evolve, and adequately preparing developers to be part of the solution is key.
Why only security-skilled developers should be leveraging agentic AI tools
Agentic AI coding tools are considered the next evolution of AI-assisted coding, adding to their ability to offer increased efficiency, productivity, and flexibility in software development. Their enhanced ability to understand context and intent makes them especially useful, but they are not immune to threats like prompt injection, hallucination, or behavior manipulation from attackers.
Developers are the defensive line between good and bad code commits, and keeping both security and critical thinking skills sharp will be fundamental in the future of secure software development.
AI output should never be implemented with blind trust, and it’s security-skilled developers applying contextual, critical thinking that can safely leverage the productivity gains afforded by this technology. Still, it must be in what amounts to a pair programming environment, where the human expert is able to assess, threat-model, and ultimately approve the work produced by the tool.
Learn more about how developers can upskill and supercharge their productivity with AI here.
Practical mitigation techniques, and further reading in our latest research paper
AI coding tools and MCP technology are set to be a significant factor in the future of cybersecurity, but it’s vital that we don’t dive in before checking the water.
Narajala and Habler’s paper details comprehensive mitigation strategies for implementing MCP at the enterprise level and the ongoing management of its risks. Ultimately, it centers around defense-in-depth and Zero Trust principles, explicitly targeting the unique risk profile this new ecosystem brings to an enterprise environment. For developers specifically, it is essential to plug knowledge gaps in the following areas:
- Authentication and Access Control: Agentic AI tools function to solve problems and make autonomous decisions to fulfill the goals mapped out for them, much in the way a human would approach engineering tasks. However, as we have established, skilled human oversight of these processes cannot be ignored, and developers using these tools in their workflows must understand exactly what access they have, the data they retrieve or potentially expose, and where it might be shared.
- General Threat Detection and Mitigation: As is true of most AI processes, to spot potential flaws and inaccuracies in the tool’s output, the user must be proficient in the task themselves. Developers must receive continuous upskilling and verification of those skills to effectively review security processes and review AI-generated code with security precision and authority.
- Alignment with Security Policy and AI Governance: Developers should be made aware of approved tooling and given the opportunity to upskill and gain access to them. Both the developer and the tool should be subject to security benchmarking before commits are trusted.
We recently released a research paper on the emergence of vibe coding and AI-assisted coding, and the steps enterprises must take to uplift the next generation of AI-powered software engineers. Check it out, and get in touch to fortify your development cohort today.

AI-assisted development (or, the more trendy version, “vibe coding”) is having a vast, transformative effect on code creation. Established developers are adopting these tools in droves, and those among us who have always wanted to create our own software but lacked the relevant experience are also leveraging them to build assets that previously would have been cost- and time-prohibitive. While this technology promises to usher in a new era of innovation, it introduces a range of new vulnerabilities and risk profiles that security leaders are struggling to mitigate.
A recent discovery by InvariantLabs uncovered a critical vulnerability in the Model Context Protocol (MCP), an API-like framework allowing powerful AI tools to autonomously interact with other software and databases, that allows for what has been dubbed "Tool Poisoning Attacks," a new vulnerability category that could prove especially damaging in the enterprise. Major AI tools, such as Windsurf and Cursor, are not immune, and with many millions of users, awareness and the skills to manage this emerging security issue are paramount.
As it stands, the output of these tools is not consistently secure enough to label them as enterprise-ready, as noted in a recent research paper from AWS and Intuit security researchers, Vineeth Sai Narajala and Idan Habler: “As AI systems become more autonomous and start interacting directly with external tools and real-time data through things like MCP, making sure those interactions are secure becomes absolutely essential.”
Agentic AI systems and the risk profile of the Model Context Protocol
The Model Context Protocol is handy software built by Anthropic that allows for better, more seamless integration between Large Language Model (LLM) AI agents and other tooling. This is a powerful use case, opening up a world of possibilities between proprietary applications and business-critical SaaS tools like GitHub, interacting with cutting-edge AI solutions. Simply write an MCP server, and get on with the task of setting the guidelines around how you want it to function, and to what end.
The security implications of MCP technology are, in fact, mostly positive. The promise of more straightforward integration between LLMs and the tech stack used by security professionals is too enticing to ignore, and represents the possibility of precision security task automation at levels previously not possible, at least not without writing and deploying custom code, usually for each task. The enhanced interoperability of LLMs that is afforded by MCP is an exciting prospect for enterprise security, given that far-reaching visibility and connectivity between data, tools and personnel is fundamental to effective security defense and planning.
However, the use of MCP can introduce other possible threat vectors, significantly expanding the enterprise attack surface unless carefully managed. As is noted by InvariantLabs, Tool Poisoning Attacks represent a new vulnerability category that can lead to sensitive data exfiltration and unauthorized actions by AI models, and from there, the security implications get very dark, very quickly.
InvariantLabs notes that a Tool Poisoning Attack is made possible when malicious instructions are embedded within MCP tool descriptions that are not visible to users, but are fully readable (and executable) by AI models. This tricks the tool into performing unauthorized bad actions without user awareness. The issue lies in the MCP’s assumption that all tool descriptions are to be trusted, which is music to the ears of a threat actor.
They note these possible outcomes of a compromised tool:
- Directing AI models to access sensitive files (like SSH keys, configuration files, databases, etc.);
- Instructing the AI to extract and transmit this data, in an environment where these malicious actions are inherently concealed from the unaware user;
- Create a disconnect between what the user sees and the AI model does by hiding behind deceptively simple UI representations of tool arguments and outputs.
This is a concerning, emergent vulnerability category, and one that we will almost certainly see more frequently as the inevitable growth of MCP use continues. It will take careful action to find and mitigate this threat as enterprise security programs evolve, and adequately preparing developers to be part of the solution is key.
Why only security-skilled developers should be leveraging agentic AI tools
Agentic AI coding tools are considered the next evolution of AI-assisted coding, adding to their ability to offer increased efficiency, productivity, and flexibility in software development. Their enhanced ability to understand context and intent makes them especially useful, but they are not immune to threats like prompt injection, hallucination, or behavior manipulation from attackers.
Developers are the defensive line between good and bad code commits, and keeping both security and critical thinking skills sharp will be fundamental in the future of secure software development.
AI output should never be implemented with blind trust, and it’s security-skilled developers applying contextual, critical thinking that can safely leverage the productivity gains afforded by this technology. Still, it must be in what amounts to a pair programming environment, where the human expert is able to assess, threat-model, and ultimately approve the work produced by the tool.
Learn more about how developers can upskill and supercharge their productivity with AI here.
Practical mitigation techniques, and further reading in our latest research paper
AI coding tools and MCP technology are set to be a significant factor in the future of cybersecurity, but it’s vital that we don’t dive in before checking the water.
Narajala and Habler’s paper details comprehensive mitigation strategies for implementing MCP at the enterprise level and the ongoing management of its risks. Ultimately, it centers around defense-in-depth and Zero Trust principles, explicitly targeting the unique risk profile this new ecosystem brings to an enterprise environment. For developers specifically, it is essential to plug knowledge gaps in the following areas:
- Authentication and Access Control: Agentic AI tools function to solve problems and make autonomous decisions to fulfill the goals mapped out for them, much in the way a human would approach engineering tasks. However, as we have established, skilled human oversight of these processes cannot be ignored, and developers using these tools in their workflows must understand exactly what access they have, the data they retrieve or potentially expose, and where it might be shared.
- General Threat Detection and Mitigation: As is true of most AI processes, to spot potential flaws and inaccuracies in the tool’s output, the user must be proficient in the task themselves. Developers must receive continuous upskilling and verification of those skills to effectively review security processes and review AI-generated code with security precision and authority.
- Alignment with Security Policy and AI Governance: Developers should be made aware of approved tooling and given the opportunity to upskill and gain access to them. Both the developer and the tool should be subject to security benchmarking before commits are trusted.
We recently released a research paper on the emergence of vibe coding and AI-assisted coding, and the steps enterprises must take to uplift the next generation of AI-powered software engineers. Check it out, and get in touch to fortify your development cohort today.

Click on the link below and download the PDF of this resource.
Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
View reportBook a demoChief Executive Officer, Chairman, and Co-Founder
Pieter Danhieux is a globally recognized security expert, with over 12 years experience as a security consultant and 8 years as a Principal Instructor for SANS teaching offensive techniques on how to target and assess organizations, systems and individuals for security weaknesses. In 2016, he was recognized as one of the Coolest Tech people in Australia (Business Insider), awarded Cyber Security Professional of the Year (AISA - Australian Information Security Association) and holds GSE, CISSP, GCIH, GCFA, GSEC, GPEN, GWAPT, GCIA certifications.
AI-assisted development (or, the more trendy version, “vibe coding”) is having a vast, transformative effect on code creation. Established developers are adopting these tools in droves, and those among us who have always wanted to create our own software but lacked the relevant experience are also leveraging them to build assets that previously would have been cost- and time-prohibitive. While this technology promises to usher in a new era of innovation, it introduces a range of new vulnerabilities and risk profiles that security leaders are struggling to mitigate.
A recent discovery by InvariantLabs uncovered a critical vulnerability in the Model Context Protocol (MCP), an API-like framework allowing powerful AI tools to autonomously interact with other software and databases, that allows for what has been dubbed "Tool Poisoning Attacks," a new vulnerability category that could prove especially damaging in the enterprise. Major AI tools, such as Windsurf and Cursor, are not immune, and with many millions of users, awareness and the skills to manage this emerging security issue are paramount.
As it stands, the output of these tools is not consistently secure enough to label them as enterprise-ready, as noted in a recent research paper from AWS and Intuit security researchers, Vineeth Sai Narajala and Idan Habler: “As AI systems become more autonomous and start interacting directly with external tools and real-time data through things like MCP, making sure those interactions are secure becomes absolutely essential.”
Agentic AI systems and the risk profile of the Model Context Protocol
The Model Context Protocol is handy software built by Anthropic that allows for better, more seamless integration between Large Language Model (LLM) AI agents and other tooling. This is a powerful use case, opening up a world of possibilities between proprietary applications and business-critical SaaS tools like GitHub, interacting with cutting-edge AI solutions. Simply write an MCP server, and get on with the task of setting the guidelines around how you want it to function, and to what end.
The security implications of MCP technology are, in fact, mostly positive. The promise of more straightforward integration between LLMs and the tech stack used by security professionals is too enticing to ignore, and represents the possibility of precision security task automation at levels previously not possible, at least not without writing and deploying custom code, usually for each task. The enhanced interoperability of LLMs that is afforded by MCP is an exciting prospect for enterprise security, given that far-reaching visibility and connectivity between data, tools and personnel is fundamental to effective security defense and planning.
However, the use of MCP can introduce other possible threat vectors, significantly expanding the enterprise attack surface unless carefully managed. As is noted by InvariantLabs, Tool Poisoning Attacks represent a new vulnerability category that can lead to sensitive data exfiltration and unauthorized actions by AI models, and from there, the security implications get very dark, very quickly.
InvariantLabs notes that a Tool Poisoning Attack is made possible when malicious instructions are embedded within MCP tool descriptions that are not visible to users, but are fully readable (and executable) by AI models. This tricks the tool into performing unauthorized bad actions without user awareness. The issue lies in the MCP’s assumption that all tool descriptions are to be trusted, which is music to the ears of a threat actor.
They note these possible outcomes of a compromised tool:
- Directing AI models to access sensitive files (like SSH keys, configuration files, databases, etc.);
- Instructing the AI to extract and transmit this data, in an environment where these malicious actions are inherently concealed from the unaware user;
- Create a disconnect between what the user sees and the AI model does by hiding behind deceptively simple UI representations of tool arguments and outputs.
This is a concerning, emergent vulnerability category, and one that we will almost certainly see more frequently as the inevitable growth of MCP use continues. It will take careful action to find and mitigate this threat as enterprise security programs evolve, and adequately preparing developers to be part of the solution is key.
Why only security-skilled developers should be leveraging agentic AI tools
Agentic AI coding tools are considered the next evolution of AI-assisted coding, adding to their ability to offer increased efficiency, productivity, and flexibility in software development. Their enhanced ability to understand context and intent makes them especially useful, but they are not immune to threats like prompt injection, hallucination, or behavior manipulation from attackers.
Developers are the defensive line between good and bad code commits, and keeping both security and critical thinking skills sharp will be fundamental in the future of secure software development.
AI output should never be implemented with blind trust, and it’s security-skilled developers applying contextual, critical thinking that can safely leverage the productivity gains afforded by this technology. Still, it must be in what amounts to a pair programming environment, where the human expert is able to assess, threat-model, and ultimately approve the work produced by the tool.
Learn more about how developers can upskill and supercharge their productivity with AI here.
Practical mitigation techniques, and further reading in our latest research paper
AI coding tools and MCP technology are set to be a significant factor in the future of cybersecurity, but it’s vital that we don’t dive in before checking the water.
Narajala and Habler’s paper details comprehensive mitigation strategies for implementing MCP at the enterprise level and the ongoing management of its risks. Ultimately, it centers around defense-in-depth and Zero Trust principles, explicitly targeting the unique risk profile this new ecosystem brings to an enterprise environment. For developers specifically, it is essential to plug knowledge gaps in the following areas:
- Authentication and Access Control: Agentic AI tools function to solve problems and make autonomous decisions to fulfill the goals mapped out for them, much in the way a human would approach engineering tasks. However, as we have established, skilled human oversight of these processes cannot be ignored, and developers using these tools in their workflows must understand exactly what access they have, the data they retrieve or potentially expose, and where it might be shared.
- General Threat Detection and Mitigation: As is true of most AI processes, to spot potential flaws and inaccuracies in the tool’s output, the user must be proficient in the task themselves. Developers must receive continuous upskilling and verification of those skills to effectively review security processes and review AI-generated code with security precision and authority.
- Alignment with Security Policy and AI Governance: Developers should be made aware of approved tooling and given the opportunity to upskill and gain access to them. Both the developer and the tool should be subject to security benchmarking before commits are trusted.
We recently released a research paper on the emergence of vibe coding and AI-assisted coding, and the steps enterprises must take to uplift the next generation of AI-powered software engineers. Check it out, and get in touch to fortify your development cohort today.
Table of contents
Chief Executive Officer, Chairman, and Co-Founder

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
Book a demoDownloadResources to get you started
AI Coding Assistants: A Guide to Security-Safe Navigation for the Next Generation of Developers
Large language models deliver irresistible advantages in speed and productivity, but they also introduce undeniable risks to the enterprise. Traditional security guardrails aren’t enough to control the deluge. Developers require precise, verified security skills to identify and prevent security flaws at the outset of the software development lifecycle.
Secure by Design: Defining Best Practices, Enabling Developers and Benchmarking Preventative Security Outcomes
In this research paper, Secure Code Warrior co-founders, Pieter Danhieux and Dr. Matias Madou, Ph.D., along with expert contributors, Chris Inglis, Former US National Cyber Director (now Strategic Advisor to Paladin Capital Group), and Devin Lynch, Senior Director, Paladin Global Institute, will reveal key findings from over twenty in-depth interviews with enterprise security leaders including CISOs, a VP of Application Security, and software security professionals.
Resources to get you started
Setting the Standard: SCW Releases Free AI Coding Security Rules on GitHub
AI-assisted development is no longer on the horizon — it’s here, and it’s rapidly reshaping how software is written. Tools like GitHub Copilot, Cline, Roo, Cursor, Aider, and Windsurf are transforming developers into co-pilots of their own, enabling faster iteration and accelerating everything from prototyping to major refactoring projects.
Close the Loop on Vulnerabilities with Secure Code Warrior + HackerOne
Secure Code Warrior is excited to announce our new integration with HackerOne, a leader in offensive security solutions. Together, we're building a powerful, integrated ecosystem. HackerOne pinpoints where vulnerabilities are actually happening in real-world environments, exposing the "what" and "where" of security issues.
Revealed: How the Cyber Industry Defines Secure by Design
In our latest white paper, our Co-Founders, Pieter Danhieux and Dr. Matias Madou, Ph.D., sat down with over twenty enterprise security leaders, including CISOs, AppSec leaders and security professionals, to figure out the key pieces of this puzzle and uncover the reality behind the Secure by Design movement. It’s a shared ambition across the security teams, but no shared playbook.
Is Vibe Coding Going to Turn Your Codebase Into a Frat Party?
Vibe coding is like a college frat party, and AI is the centerpiece of all the festivities, the keg. It’s a lot of fun to let loose, get creative, and see where your imagination can take you, but after a few keg stands, drinking (or, using AI) in moderation is undoubtedly the safer long-term solution.