
SCW Launches Free AI/LLM Security Video Series for Developers
A free, 12-week video series to help developers code securely with AI
AI coding assistants and large language models (LLMs) are transforming how software gets built. They promise speed, flexibility, and innovation — but they also introduce new security risks that developers can’t afford to ignore.
That’s why Secure Code Warrior is launching a free, 12-week AI/LLM Security Intro Video Series on YouTube. Each short episode focuses on introducing a key AI/LLM security risk or concept developers need to know to safely embrace AI-assisted development and avoid introducing vulnerabilities into their applications.
Watch the first episode now: AI Coding Risks: Dangers of Using LLMs
Don’t miss an episode. Subscribe on YouTube.

Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
Why we created this series
AI-assisted coding is rewriting the rules of software development. Tools like GitHub Copilot, Cursor, and others enable developers to ship code faster than ever — but without a strong understanding of AI security, teams risk introducing hidden vulnerabilities and inconsistent behaviors that traditional testing may not catch.
Secure development in the age of AI requires awareness first. This series is designed to cut through the noise and give developers an introductory foundation in AI/LLM security concepts so they can confidently innovate without compromising safety.
What to expect
Across 12 weeks, we’ll explore both opportunities and risks in AI-assisted development, including:
- The benefits and dangers of using AI and LLMs in coding
- Prompt injection and how malicious inputs manipulate AI outputs
- Sensitive information disclosure and protecting secrets in AI-powered workflows
- Supply chain risks when relying on third-party models and APIs
- System prompt leakage, vector weaknesses, and retrieval vulnerabilities
- Emerging challenges like misinformation, excessive agency, and unbounded consumption
- And much more!
Want one place to binge or catch up? Bookmark our Video Hub — we’ll update it with the latest episodes and summaries.
Go to the Hub: AI/LLM Security Video Series: All Episodes, Updated Weekly.
How to follow along
- Subscribe on YouTube to get every episode as it’s released
- Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
- Bookmark the Video Hub Blog for quick access to every episode
- If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the Secure Code Warrior platform or request a demo if you’re not yet a customer.
Setting the standard for secure AI-assisted development
AI is rapidly changing the way we build software. But innovation without security isn’t sustainable. Developers need practical, developer-first guidance to understand AI-assisted risks and implement secure coding habits that scale.
This free video series is part of Secure Code Warrior’s commitment to helping the developer community thrive in this new era. From our publicly available AI Security Rules on GitHub to our expanding AI/LLM learning collection, we’re equipping teams with the tools and knowledge they need to innovate securely.


Introducing our free 12-week AI/LLM security video series! Learn the foundational risks of AI-assisted coding and how to build safer applications.
Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
Book a demoShannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.
Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST. She’s passionate about making secure development and compliance more practical and approachable for technical teams, bridging the gap between security expectations and the realities of modern software development.


A free, 12-week video series to help developers code securely with AI
AI coding assistants and large language models (LLMs) are transforming how software gets built. They promise speed, flexibility, and innovation — but they also introduce new security risks that developers can’t afford to ignore.
That’s why Secure Code Warrior is launching a free, 12-week AI/LLM Security Intro Video Series on YouTube. Each short episode focuses on introducing a key AI/LLM security risk or concept developers need to know to safely embrace AI-assisted development and avoid introducing vulnerabilities into their applications.
Watch the first episode now: AI Coding Risks: Dangers of Using LLMs
Don’t miss an episode. Subscribe on YouTube.

Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
Why we created this series
AI-assisted coding is rewriting the rules of software development. Tools like GitHub Copilot, Cursor, and others enable developers to ship code faster than ever — but without a strong understanding of AI security, teams risk introducing hidden vulnerabilities and inconsistent behaviors that traditional testing may not catch.
Secure development in the age of AI requires awareness first. This series is designed to cut through the noise and give developers an introductory foundation in AI/LLM security concepts so they can confidently innovate without compromising safety.
What to expect
Across 12 weeks, we’ll explore both opportunities and risks in AI-assisted development, including:
- The benefits and dangers of using AI and LLMs in coding
- Prompt injection and how malicious inputs manipulate AI outputs
- Sensitive information disclosure and protecting secrets in AI-powered workflows
- Supply chain risks when relying on third-party models and APIs
- System prompt leakage, vector weaknesses, and retrieval vulnerabilities
- Emerging challenges like misinformation, excessive agency, and unbounded consumption
- And much more!
Want one place to binge or catch up? Bookmark our Video Hub — we’ll update it with the latest episodes and summaries.
Go to the Hub: AI/LLM Security Video Series: All Episodes, Updated Weekly.
How to follow along
- Subscribe on YouTube to get every episode as it’s released
- Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
- Bookmark the Video Hub Blog for quick access to every episode
- If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the Secure Code Warrior platform or request a demo if you’re not yet a customer.
Setting the standard for secure AI-assisted development
AI is rapidly changing the way we build software. But innovation without security isn’t sustainable. Developers need practical, developer-first guidance to understand AI-assisted risks and implement secure coding habits that scale.
This free video series is part of Secure Code Warrior’s commitment to helping the developer community thrive in this new era. From our publicly available AI Security Rules on GitHub to our expanding AI/LLM learning collection, we’re equipping teams with the tools and knowledge they need to innovate securely.

A free, 12-week video series to help developers code securely with AI
AI coding assistants and large language models (LLMs) are transforming how software gets built. They promise speed, flexibility, and innovation — but they also introduce new security risks that developers can’t afford to ignore.
That’s why Secure Code Warrior is launching a free, 12-week AI/LLM Security Intro Video Series on YouTube. Each short episode focuses on introducing a key AI/LLM security risk or concept developers need to know to safely embrace AI-assisted development and avoid introducing vulnerabilities into their applications.
Watch the first episode now: AI Coding Risks: Dangers of Using LLMs
Don’t miss an episode. Subscribe on YouTube.

Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
Why we created this series
AI-assisted coding is rewriting the rules of software development. Tools like GitHub Copilot, Cursor, and others enable developers to ship code faster than ever — but without a strong understanding of AI security, teams risk introducing hidden vulnerabilities and inconsistent behaviors that traditional testing may not catch.
Secure development in the age of AI requires awareness first. This series is designed to cut through the noise and give developers an introductory foundation in AI/LLM security concepts so they can confidently innovate without compromising safety.
What to expect
Across 12 weeks, we’ll explore both opportunities and risks in AI-assisted development, including:
- The benefits and dangers of using AI and LLMs in coding
- Prompt injection and how malicious inputs manipulate AI outputs
- Sensitive information disclosure and protecting secrets in AI-powered workflows
- Supply chain risks when relying on third-party models and APIs
- System prompt leakage, vector weaknesses, and retrieval vulnerabilities
- Emerging challenges like misinformation, excessive agency, and unbounded consumption
- And much more!
Want one place to binge or catch up? Bookmark our Video Hub — we’ll update it with the latest episodes and summaries.
Go to the Hub: AI/LLM Security Video Series: All Episodes, Updated Weekly.
How to follow along
- Subscribe on YouTube to get every episode as it’s released
- Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
- Bookmark the Video Hub Blog for quick access to every episode
- If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the Secure Code Warrior platform or request a demo if you’re not yet a customer.
Setting the standard for secure AI-assisted development
AI is rapidly changing the way we build software. But innovation without security isn’t sustainable. Developers need practical, developer-first guidance to understand AI-assisted risks and implement secure coding habits that scale.
This free video series is part of Secure Code Warrior’s commitment to helping the developer community thrive in this new era. From our publicly available AI Security Rules on GitHub to our expanding AI/LLM learning collection, we’re equipping teams with the tools and knowledge they need to innovate securely.

Click on the link below and download the PDF of this resource.
Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
View reportBook a demoShannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.
Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST. She’s passionate about making secure development and compliance more practical and approachable for technical teams, bridging the gap between security expectations and the realities of modern software development.
A free, 12-week video series to help developers code securely with AI
AI coding assistants and large language models (LLMs) are transforming how software gets built. They promise speed, flexibility, and innovation — but they also introduce new security risks that developers can’t afford to ignore.
That’s why Secure Code Warrior is launching a free, 12-week AI/LLM Security Intro Video Series on YouTube. Each short episode focuses on introducing a key AI/LLM security risk or concept developers need to know to safely embrace AI-assisted development and avoid introducing vulnerabilities into their applications.
Watch the first episode now: AI Coding Risks: Dangers of Using LLMs
Don’t miss an episode. Subscribe on YouTube.

Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
Why we created this series
AI-assisted coding is rewriting the rules of software development. Tools like GitHub Copilot, Cursor, and others enable developers to ship code faster than ever — but without a strong understanding of AI security, teams risk introducing hidden vulnerabilities and inconsistent behaviors that traditional testing may not catch.
Secure development in the age of AI requires awareness first. This series is designed to cut through the noise and give developers an introductory foundation in AI/LLM security concepts so they can confidently innovate without compromising safety.
What to expect
Across 12 weeks, we’ll explore both opportunities and risks in AI-assisted development, including:
- The benefits and dangers of using AI and LLMs in coding
- Prompt injection and how malicious inputs manipulate AI outputs
- Sensitive information disclosure and protecting secrets in AI-powered workflows
- Supply chain risks when relying on third-party models and APIs
- System prompt leakage, vector weaknesses, and retrieval vulnerabilities
- Emerging challenges like misinformation, excessive agency, and unbounded consumption
- And much more!
Want one place to binge or catch up? Bookmark our Video Hub — we’ll update it with the latest episodes and summaries.
Go to the Hub: AI/LLM Security Video Series: All Episodes, Updated Weekly.
How to follow along
- Subscribe on YouTube to get every episode as it’s released
- Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
- Bookmark the Video Hub Blog for quick access to every episode
- If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the Secure Code Warrior platform or request a demo if you’re not yet a customer.
Setting the standard for secure AI-assisted development
AI is rapidly changing the way we build software. But innovation without security isn’t sustainable. Developers need practical, developer-first guidance to understand AI-assisted risks and implement secure coding habits that scale.
This free video series is part of Secure Code Warrior’s commitment to helping the developer community thrive in this new era. From our publicly available AI Security Rules on GitHub to our expanding AI/LLM learning collection, we’re equipping teams with the tools and knowledge they need to innovate securely.
Table of contents
Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
Book a demoDownloadResources to get you started
Trust Agent:AI - Secure and scale AI-Drive development
AI is writing code. Who’s governing it? With up to 50% of AI-generated code containing security weaknesses, managing AI risk is critical. Discover how SCW's Trust Agent: AI provides the real-time visibility, proactive governance, and targeted upskilling needed to scale AI-driven development securely.
The Power of OpenText Application Security + Secure Code Warrior
OpenText Application Security and Secure Code Warrior combine vulnerability detection with AI Software Governance and developer capability. Together, they help organizations reduce risk, strengthen secure coding practices, and confidently adopt AI-driven development.
Secure Code Warrior corporate overview
Secure Code Warrior is an AI Software Governance platform designed to enable organizations to safely adopt AI-driven development by bridging the gap between development velocity and enterprise security. The platform addresses the "Visibility Gap," where security teams often lack insights into shadow AI coding tools and the origins of production code.
Secure code training topics & content
Our industry-leading content is always evolving to fit the ever changing software development landscape with your role in mind. Topics covering everything from AI to XQuery Injection, offered for a variety of roles from Architects and Engineers to Product Managers and QA. Get a sneak peek of what our content catalog has to offer by topic and role.
Resources to get you started
The Agentic Era Arrived Early. Don’t Get Caught Off Guard by Late AI Governance.
Anthropic's Claude Mythos represents a permanent, fundamental shift in how every security leader must approach their security program, especially with patch management of legacy systems.
Observe and Secure the ADLC: A Four-Point Framework for CISOs and Development Teams Using AI
While development teams look to make the most of GenAI’s undeniable benefits, we’d like to propose a four-point foundational framework that will allow security leaders to deploy AI coding tools and agents with a higher, more relevant standard of security best practices. It details exactly what enterprises can do to ensure safe, secure code development right now, and as agentic AI becomes an even bigger factor in the future.





.png)
