Blog

SCW Launches Free AI/LLM Security Video Series for Developers

Shannon Holt
Published Sep 09, 2025
Last updated on Sep 09, 2025

A free, 12-week video series to help developers code securely with AI

AI coding assistants and large language models (LLMs) are transforming how software gets built. They promise speed, flexibility, and innovation — but they also introduce new security risks that developers can’t afford to ignore.

That’s why Secure Code Warrior is launching a free, 12-week AI/LLM Security Intro Video Series on YouTube. Each short episode focuses on introducing a key AI/LLM security risk or concept developers need to know to safely embrace AI-assisted development and avoid introducing vulnerabilities into their applications.

Watch the first episode now: AI Coding Risks: Dangers of Using LLMs

Don’t miss an episode. Subscribe on YouTube


Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox. 

Why we created this series

AI-assisted coding is rewriting the rules of software development. Tools like GitHub Copilot, Cursor, and others enable developers to ship code faster than ever — but without a strong understanding of AI security, teams risk introducing hidden vulnerabilities and inconsistent behaviors that traditional testing may not catch.

Secure development in the age of AI requires awareness first. This series is designed to cut through the noise and give developers an introductory foundation in AI/LLM security concepts so they can confidently innovate without compromising safety.

What to expect

Across 12 weeks, we’ll explore both opportunities and risks in AI-assisted development, including:

  • The benefits and dangers of using AI and LLMs in coding
  • Prompt injection and how malicious inputs manipulate AI outputs
  • Sensitive information disclosure and protecting secrets in AI-powered workflows
  • Supply chain risks when relying on third-party models and APIs
  • System prompt leakage, vector weaknesses, and retrieval vulnerabilities
  • Emerging challenges like misinformation, excessive agency, and unbounded consumption
  • And much more!

Want one place to binge or catch up? Bookmark our Video Hub — we’ll update it with the latest episodes and summaries.

Go to the Hub: AI/LLM Security Video Series: All Episodes, Updated Weekly.

How to follow along

  • Subscribe on YouTube to get every episode as it’s released
  • Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
  • Bookmark the Video Hub Blog for quick access to every episode
  • If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the Secure Code Warrior platform or request a demo if you’re not yet a customer.

Setting the standard for secure AI-assisted development

AI is rapidly changing the way we build software. But innovation without security isn’t sustainable. Developers need practical, developer-first guidance to understand AI-assisted risks and implement secure coding habits that scale.

This free video series is part of Secure Code Warrior’s commitment to helping the developer community thrive in this new era. From our publicly available AI Security Rules on GitHub to our expanding AI/LLM learning collection, we’re equipping teams with the tools and knowledge they need to innovate securely.

View Resource
View Resource

Introducing our free 12-week AI/LLM security video series! Learn the foundational risks of AI-assisted coding and how to build safer applications.

Interested in more?

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

Book a demo
Share on:
Author
Shannon Holt
Published Sep 09, 2025

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST. She’s passionate about making secure development and compliance more practical and approachable for technical teams, bridging the gap between security expectations and the realities of modern software development.

Share on:

A free, 12-week video series to help developers code securely with AI

AI coding assistants and large language models (LLMs) are transforming how software gets built. They promise speed, flexibility, and innovation — but they also introduce new security risks that developers can’t afford to ignore.

That’s why Secure Code Warrior is launching a free, 12-week AI/LLM Security Intro Video Series on YouTube. Each short episode focuses on introducing a key AI/LLM security risk or concept developers need to know to safely embrace AI-assisted development and avoid introducing vulnerabilities into their applications.

Watch the first episode now: AI Coding Risks: Dangers of Using LLMs

Don’t miss an episode. Subscribe on YouTube


Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox. 

Why we created this series

AI-assisted coding is rewriting the rules of software development. Tools like GitHub Copilot, Cursor, and others enable developers to ship code faster than ever — but without a strong understanding of AI security, teams risk introducing hidden vulnerabilities and inconsistent behaviors that traditional testing may not catch.

Secure development in the age of AI requires awareness first. This series is designed to cut through the noise and give developers an introductory foundation in AI/LLM security concepts so they can confidently innovate without compromising safety.

What to expect

Across 12 weeks, we’ll explore both opportunities and risks in AI-assisted development, including:

  • The benefits and dangers of using AI and LLMs in coding
  • Prompt injection and how malicious inputs manipulate AI outputs
  • Sensitive information disclosure and protecting secrets in AI-powered workflows
  • Supply chain risks when relying on third-party models and APIs
  • System prompt leakage, vector weaknesses, and retrieval vulnerabilities
  • Emerging challenges like misinformation, excessive agency, and unbounded consumption
  • And much more!

Want one place to binge or catch up? Bookmark our Video Hub — we’ll update it with the latest episodes and summaries.

Go to the Hub: AI/LLM Security Video Series: All Episodes, Updated Weekly.

How to follow along

  • Subscribe on YouTube to get every episode as it’s released
  • Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
  • Bookmark the Video Hub Blog for quick access to every episode
  • If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the Secure Code Warrior platform or request a demo if you’re not yet a customer.

Setting the standard for secure AI-assisted development

AI is rapidly changing the way we build software. But innovation without security isn’t sustainable. Developers need practical, developer-first guidance to understand AI-assisted risks and implement secure coding habits that scale.

This free video series is part of Secure Code Warrior’s commitment to helping the developer community thrive in this new era. From our publicly available AI Security Rules on GitHub to our expanding AI/LLM learning collection, we’re equipping teams with the tools and knowledge they need to innovate securely.

View Resource
View Resource

Fill out the form below to download the report

We would like your permission to send you information on our products and/or related secure coding topics. We’ll always treat your personal details with the utmost care and will never sell them to other companies for marketing purposes.

Submit
To submit the form, please enable 'Analytics' cookies. Feel free to disable them again once you're done.

A free, 12-week video series to help developers code securely with AI

AI coding assistants and large language models (LLMs) are transforming how software gets built. They promise speed, flexibility, and innovation — but they also introduce new security risks that developers can’t afford to ignore.

That’s why Secure Code Warrior is launching a free, 12-week AI/LLM Security Intro Video Series on YouTube. Each short episode focuses on introducing a key AI/LLM security risk or concept developers need to know to safely embrace AI-assisted development and avoid introducing vulnerabilities into their applications.

Watch the first episode now: AI Coding Risks: Dangers of Using LLMs

Don’t miss an episode. Subscribe on YouTube


Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox. 

Why we created this series

AI-assisted coding is rewriting the rules of software development. Tools like GitHub Copilot, Cursor, and others enable developers to ship code faster than ever — but without a strong understanding of AI security, teams risk introducing hidden vulnerabilities and inconsistent behaviors that traditional testing may not catch.

Secure development in the age of AI requires awareness first. This series is designed to cut through the noise and give developers an introductory foundation in AI/LLM security concepts so they can confidently innovate without compromising safety.

What to expect

Across 12 weeks, we’ll explore both opportunities and risks in AI-assisted development, including:

  • The benefits and dangers of using AI and LLMs in coding
  • Prompt injection and how malicious inputs manipulate AI outputs
  • Sensitive information disclosure and protecting secrets in AI-powered workflows
  • Supply chain risks when relying on third-party models and APIs
  • System prompt leakage, vector weaknesses, and retrieval vulnerabilities
  • Emerging challenges like misinformation, excessive agency, and unbounded consumption
  • And much more!

Want one place to binge or catch up? Bookmark our Video Hub — we’ll update it with the latest episodes and summaries.

Go to the Hub: AI/LLM Security Video Series: All Episodes, Updated Weekly.

How to follow along

  • Subscribe on YouTube to get every episode as it’s released
  • Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
  • Bookmark the Video Hub Blog for quick access to every episode
  • If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the Secure Code Warrior platform or request a demo if you’re not yet a customer.

Setting the standard for secure AI-assisted development

AI is rapidly changing the way we build software. But innovation without security isn’t sustainable. Developers need practical, developer-first guidance to understand AI-assisted risks and implement secure coding habits that scale.

This free video series is part of Secure Code Warrior’s commitment to helping the developer community thrive in this new era. From our publicly available AI Security Rules on GitHub to our expanding AI/LLM learning collection, we’re equipping teams with the tools and knowledge they need to innovate securely.

View webinar
Get Started

Click on the link below and download the PDF of this resource.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

View reportBook a demo
View Resource
Share on:
Interested in more?

Share on:
Author
Shannon Holt
Published Sep 09, 2025

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST. She’s passionate about making secure development and compliance more practical and approachable for technical teams, bridging the gap between security expectations and the realities of modern software development.

Share on:

A free, 12-week video series to help developers code securely with AI

AI coding assistants and large language models (LLMs) are transforming how software gets built. They promise speed, flexibility, and innovation — but they also introduce new security risks that developers can’t afford to ignore.

That’s why Secure Code Warrior is launching a free, 12-week AI/LLM Security Intro Video Series on YouTube. Each short episode focuses on introducing a key AI/LLM security risk or concept developers need to know to safely embrace AI-assisted development and avoid introducing vulnerabilities into their applications.

Watch the first episode now: AI Coding Risks: Dangers of Using LLMs

Don’t miss an episode. Subscribe on YouTube


Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox. 

Why we created this series

AI-assisted coding is rewriting the rules of software development. Tools like GitHub Copilot, Cursor, and others enable developers to ship code faster than ever — but without a strong understanding of AI security, teams risk introducing hidden vulnerabilities and inconsistent behaviors that traditional testing may not catch.

Secure development in the age of AI requires awareness first. This series is designed to cut through the noise and give developers an introductory foundation in AI/LLM security concepts so they can confidently innovate without compromising safety.

What to expect

Across 12 weeks, we’ll explore both opportunities and risks in AI-assisted development, including:

  • The benefits and dangers of using AI and LLMs in coding
  • Prompt injection and how malicious inputs manipulate AI outputs
  • Sensitive information disclosure and protecting secrets in AI-powered workflows
  • Supply chain risks when relying on third-party models and APIs
  • System prompt leakage, vector weaknesses, and retrieval vulnerabilities
  • Emerging challenges like misinformation, excessive agency, and unbounded consumption
  • And much more!

Want one place to binge or catch up? Bookmark our Video Hub — we’ll update it with the latest episodes and summaries.

Go to the Hub: AI/LLM Security Video Series: All Episodes, Updated Weekly.

How to follow along

  • Subscribe on YouTube to get every episode as it’s released
  • Join our community of developers and security leaders — opt-in here to get the latest videos, resources, and updates delivered straight to your inbox.
  • Bookmark the Video Hub Blog for quick access to every episode
  • If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the Secure Code Warrior platform or request a demo if you’re not yet a customer.

Setting the standard for secure AI-assisted development

AI is rapidly changing the way we build software. But innovation without security isn’t sustainable. Developers need practical, developer-first guidance to understand AI-assisted risks and implement secure coding habits that scale.

This free video series is part of Secure Code Warrior’s commitment to helping the developer community thrive in this new era. From our publicly available AI Security Rules on GitHub to our expanding AI/LLM learning collection, we’re equipping teams with the tools and knowledge they need to innovate securely.

Table of contents

Download PDF
View Resource
Interested in more?

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

Book a demoDownload
Share on:
Resource hub

Resources to get you started

More posts
Resource hub

Resources to get you started

More posts