Blog

AI/LLM Security Video Series: All Episodes, Updated Weekly

Shannon Holt
Published Sep 09, 2025
Last updated on Oct 22, 2025

Your Guide to the AI/LLM Security Intro Series

AI coding tools like GitHub Copilot, Cursor, and others are reshaping how software is built — but they also introduce new security challenges that developers must understand to build safe, reliable applications. To help teams adopt AI securely, we’ve created a free, 12-week AI/LLM Security Intro Video Series on YouTube.

This post serves as your central hub for the series. Each week, we’ll update it with a new video and description, covering essential concepts like prompt injection, data and model poisoning, supply chain risks, secure prompting, and more. Bookmark this page to follow along weekly, or subscribe to our YouTube channel to get every lesson as soon as it’s released.

If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the SCW platform or request a demo if you’re not yet a customer. Subscribe to our YouTube channel to catch every new episode as it’s released. And if you’d like to stay connected with the latest content, updates, and resources, opt-in here to join our community of developers and security leaders.

Episodes (Updated Weekly)

Week 1 — AI Coding Risks: Dangers of Using LLMs
In this video, we explore the potential dangers of using AI/LLMs when writing code and highlight key risks developers face when integrating AI-powered tools into their workflows.


Week 2 AI Coding Benefits: Secure AI-Assisted Development
AI coding tools aren’t just risky when used securely, they can help developers work faster and smarter. In this video, we explore the advantages of using AI/LLMs when writing code, highlighting how teams can harness AI responsibly while avoiding common security pitfalls.


Week 3 — Prompt Injection Explained: Protecting AI-Generated CodePrompt injection is one of the most common AI/LLM vulnerabilities — and one every developer should know how to prevent. In this video, we introduce dire


Week 4 — Sensitive Info Disclosure: Avoiding AI Data Leaks
AI-powered tools can inadvertently leak sensitive information, putting your applications and data at risk. In this video, we cover sensitive information disclosure vulnerabilities, explain how they arise when using AI/LLMs, and share practical steps developers can take to reduce exposure.

Week 5 — AI Supply Chain Risks: Securing DependenciesAI-assisted development accelerates coding — but it also introduces supply chain risks that can impact every layer of your applications. In this video, we explore vulnerabilities related to AI/LLMs, explain how third-party models and APIs can expand your attack surface, and share strategies for minimizing exposure.


Week 6 — Data Poisoning: Securing AI Models and OutputsAI systems are only as secure as their training data — and compromised inputs can create vulnerabilities that ripple across your applications. In this video, we introduce data and model poisoning attacks, explain how malicious inputs can manipulate AI outputs, and share strategies to safeguard your systems.


Week 7 — Improper Output Handling: Validating AI-Generated Code
AI-powered tools can generate code fast — but if outputs aren’t validated, vulnerabilities can creep in unnoticed. In this video, we examine improper output handling in AI-assisted development, explain how risky outputs can compromise your applications, and share techniques for safeguarding generated code.


These videos are designed to introduce the core concepts of AI/LLM security, but there’s so much more to explore within the Secure Code Warrior platform. Dive into AI Challenges that simulate real-world AI-assisted code review and remediation, explore our AI/LLM Guidelines aligned to industry best practices, and work through Walkthroughs, Missions, Quests, and Course Templates that provide hands-on experience building secure coding habits. For teams ready to advance their skills, the platform also offers a growing library of over 130 AI/LLM-focused learning activities, including topics like Coding with AI, Intro to AI Risk & Security, and the OWASP Top 10 for LLM Applications.  Request a demo to learn more. 

View Resource
View Resource

Your all-in-one guide to our 12-week AI/LLM security video series. Catch every episode, learn key AI security concepts, and follow along weekly.

Interested in more?

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

Book a demo
Share on:
Author
Shannon Holt
Published Sep 09, 2025

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST. She’s passionate about making secure development and compliance more practical and approachable for technical teams, bridging the gap between security expectations and the realities of modern software development.

Share on:

Your Guide to the AI/LLM Security Intro Series

AI coding tools like GitHub Copilot, Cursor, and others are reshaping how software is built — but they also introduce new security challenges that developers must understand to build safe, reliable applications. To help teams adopt AI securely, we’ve created a free, 12-week AI/LLM Security Intro Video Series on YouTube.

This post serves as your central hub for the series. Each week, we’ll update it with a new video and description, covering essential concepts like prompt injection, data and model poisoning, supply chain risks, secure prompting, and more. Bookmark this page to follow along weekly, or subscribe to our YouTube channel to get every lesson as soon as it’s released.

If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the SCW platform or request a demo if you’re not yet a customer. Subscribe to our YouTube channel to catch every new episode as it’s released. And if you’d like to stay connected with the latest content, updates, and resources, opt-in here to join our community of developers and security leaders.

Episodes (Updated Weekly)

Week 1 — AI Coding Risks: Dangers of Using LLMs
In this video, we explore the potential dangers of using AI/LLMs when writing code and highlight key risks developers face when integrating AI-powered tools into their workflows.


Week 2 AI Coding Benefits: Secure AI-Assisted Development
AI coding tools aren’t just risky when used securely, they can help developers work faster and smarter. In this video, we explore the advantages of using AI/LLMs when writing code, highlighting how teams can harness AI responsibly while avoiding common security pitfalls.


Week 3 — Prompt Injection Explained: Protecting AI-Generated CodePrompt injection is one of the most common AI/LLM vulnerabilities — and one every developer should know how to prevent. In this video, we introduce dire


Week 4 — Sensitive Info Disclosure: Avoiding AI Data Leaks
AI-powered tools can inadvertently leak sensitive information, putting your applications and data at risk. In this video, we cover sensitive information disclosure vulnerabilities, explain how they arise when using AI/LLMs, and share practical steps developers can take to reduce exposure.

Week 5 — AI Supply Chain Risks: Securing DependenciesAI-assisted development accelerates coding — but it also introduces supply chain risks that can impact every layer of your applications. In this video, we explore vulnerabilities related to AI/LLMs, explain how third-party models and APIs can expand your attack surface, and share strategies for minimizing exposure.


Week 6 — Data Poisoning: Securing AI Models and OutputsAI systems are only as secure as their training data — and compromised inputs can create vulnerabilities that ripple across your applications. In this video, we introduce data and model poisoning attacks, explain how malicious inputs can manipulate AI outputs, and share strategies to safeguard your systems.


Week 7 — Improper Output Handling: Validating AI-Generated Code
AI-powered tools can generate code fast — but if outputs aren’t validated, vulnerabilities can creep in unnoticed. In this video, we examine improper output handling in AI-assisted development, explain how risky outputs can compromise your applications, and share techniques for safeguarding generated code.


These videos are designed to introduce the core concepts of AI/LLM security, but there’s so much more to explore within the Secure Code Warrior platform. Dive into AI Challenges that simulate real-world AI-assisted code review and remediation, explore our AI/LLM Guidelines aligned to industry best practices, and work through Walkthroughs, Missions, Quests, and Course Templates that provide hands-on experience building secure coding habits. For teams ready to advance their skills, the platform also offers a growing library of over 130 AI/LLM-focused learning activities, including topics like Coding with AI, Intro to AI Risk & Security, and the OWASP Top 10 for LLM Applications.  Request a demo to learn more. 

View Resource
View Resource

Fill out the form below to download the report

We would like your permission to send you information on our products and/or related secure coding topics. We’ll always treat your personal details with the utmost care and will never sell them to other companies for marketing purposes.

Submit
To submit the form, please enable 'Analytics' cookies. Feel free to disable them again once you're done.

Your Guide to the AI/LLM Security Intro Series

AI coding tools like GitHub Copilot, Cursor, and others are reshaping how software is built — but they also introduce new security challenges that developers must understand to build safe, reliable applications. To help teams adopt AI securely, we’ve created a free, 12-week AI/LLM Security Intro Video Series on YouTube.

This post serves as your central hub for the series. Each week, we’ll update it with a new video and description, covering essential concepts like prompt injection, data and model poisoning, supply chain risks, secure prompting, and more. Bookmark this page to follow along weekly, or subscribe to our YouTube channel to get every lesson as soon as it’s released.

If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the SCW platform or request a demo if you’re not yet a customer. Subscribe to our YouTube channel to catch every new episode as it’s released. And if you’d like to stay connected with the latest content, updates, and resources, opt-in here to join our community of developers and security leaders.

Episodes (Updated Weekly)

Week 1 — AI Coding Risks: Dangers of Using LLMs
In this video, we explore the potential dangers of using AI/LLMs when writing code and highlight key risks developers face when integrating AI-powered tools into their workflows.


Week 2 AI Coding Benefits: Secure AI-Assisted Development
AI coding tools aren’t just risky when used securely, they can help developers work faster and smarter. In this video, we explore the advantages of using AI/LLMs when writing code, highlighting how teams can harness AI responsibly while avoiding common security pitfalls.


Week 3 — Prompt Injection Explained: Protecting AI-Generated CodePrompt injection is one of the most common AI/LLM vulnerabilities — and one every developer should know how to prevent. In this video, we introduce dire


Week 4 — Sensitive Info Disclosure: Avoiding AI Data Leaks
AI-powered tools can inadvertently leak sensitive information, putting your applications and data at risk. In this video, we cover sensitive information disclosure vulnerabilities, explain how they arise when using AI/LLMs, and share practical steps developers can take to reduce exposure.

Week 5 — AI Supply Chain Risks: Securing DependenciesAI-assisted development accelerates coding — but it also introduces supply chain risks that can impact every layer of your applications. In this video, we explore vulnerabilities related to AI/LLMs, explain how third-party models and APIs can expand your attack surface, and share strategies for minimizing exposure.


Week 6 — Data Poisoning: Securing AI Models and OutputsAI systems are only as secure as their training data — and compromised inputs can create vulnerabilities that ripple across your applications. In this video, we introduce data and model poisoning attacks, explain how malicious inputs can manipulate AI outputs, and share strategies to safeguard your systems.


Week 7 — Improper Output Handling: Validating AI-Generated Code
AI-powered tools can generate code fast — but if outputs aren’t validated, vulnerabilities can creep in unnoticed. In this video, we examine improper output handling in AI-assisted development, explain how risky outputs can compromise your applications, and share techniques for safeguarding generated code.


These videos are designed to introduce the core concepts of AI/LLM security, but there’s so much more to explore within the Secure Code Warrior platform. Dive into AI Challenges that simulate real-world AI-assisted code review and remediation, explore our AI/LLM Guidelines aligned to industry best practices, and work through Walkthroughs, Missions, Quests, and Course Templates that provide hands-on experience building secure coding habits. For teams ready to advance their skills, the platform also offers a growing library of over 130 AI/LLM-focused learning activities, including topics like Coding with AI, Intro to AI Risk & Security, and the OWASP Top 10 for LLM Applications.  Request a demo to learn more. 

View webinar
Get Started

Click on the link below and download the PDF of this resource.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

View reportBook a demo
View Resource
Share on:
Interested in more?

Share on:
Author
Shannon Holt
Published Sep 09, 2025

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST. She’s passionate about making secure development and compliance more practical and approachable for technical teams, bridging the gap between security expectations and the realities of modern software development.

Share on:

Your Guide to the AI/LLM Security Intro Series

AI coding tools like GitHub Copilot, Cursor, and others are reshaping how software is built — but they also introduce new security challenges that developers must understand to build safe, reliable applications. To help teams adopt AI securely, we’ve created a free, 12-week AI/LLM Security Intro Video Series on YouTube.

This post serves as your central hub for the series. Each week, we’ll update it with a new video and description, covering essential concepts like prompt injection, data and model poisoning, supply chain risks, secure prompting, and more. Bookmark this page to follow along weekly, or subscribe to our YouTube channel to get every lesson as soon as it’s released.

If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the SCW platform or request a demo if you’re not yet a customer. Subscribe to our YouTube channel to catch every new episode as it’s released. And if you’d like to stay connected with the latest content, updates, and resources, opt-in here to join our community of developers and security leaders.

Episodes (Updated Weekly)

Week 1 — AI Coding Risks: Dangers of Using LLMs
In this video, we explore the potential dangers of using AI/LLMs when writing code and highlight key risks developers face when integrating AI-powered tools into their workflows.


Week 2 AI Coding Benefits: Secure AI-Assisted Development
AI coding tools aren’t just risky when used securely, they can help developers work faster and smarter. In this video, we explore the advantages of using AI/LLMs when writing code, highlighting how teams can harness AI responsibly while avoiding common security pitfalls.


Week 3 — Prompt Injection Explained: Protecting AI-Generated CodePrompt injection is one of the most common AI/LLM vulnerabilities — and one every developer should know how to prevent. In this video, we introduce dire


Week 4 — Sensitive Info Disclosure: Avoiding AI Data Leaks
AI-powered tools can inadvertently leak sensitive information, putting your applications and data at risk. In this video, we cover sensitive information disclosure vulnerabilities, explain how they arise when using AI/LLMs, and share practical steps developers can take to reduce exposure.

Week 5 — AI Supply Chain Risks: Securing DependenciesAI-assisted development accelerates coding — but it also introduces supply chain risks that can impact every layer of your applications. In this video, we explore vulnerabilities related to AI/LLMs, explain how third-party models and APIs can expand your attack surface, and share strategies for minimizing exposure.


Week 6 — Data Poisoning: Securing AI Models and OutputsAI systems are only as secure as their training data — and compromised inputs can create vulnerabilities that ripple across your applications. In this video, we introduce data and model poisoning attacks, explain how malicious inputs can manipulate AI outputs, and share strategies to safeguard your systems.


Week 7 — Improper Output Handling: Validating AI-Generated Code
AI-powered tools can generate code fast — but if outputs aren’t validated, vulnerabilities can creep in unnoticed. In this video, we examine improper output handling in AI-assisted development, explain how risky outputs can compromise your applications, and share techniques for safeguarding generated code.


These videos are designed to introduce the core concepts of AI/LLM security, but there’s so much more to explore within the Secure Code Warrior platform. Dive into AI Challenges that simulate real-world AI-assisted code review and remediation, explore our AI/LLM Guidelines aligned to industry best practices, and work through Walkthroughs, Missions, Quests, and Course Templates that provide hands-on experience building secure coding habits. For teams ready to advance their skills, the platform also offers a growing library of over 130 AI/LLM-focused learning activities, including topics like Coding with AI, Intro to AI Risk & Security, and the OWASP Top 10 for LLM Applications.  Request a demo to learn more. 

Table of contents

Download PDF
View Resource
Interested in more?

Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

Book a demoDownload
Share on:
Resource hub

Resources to get you started

More posts
Resource hub

Resources to get you started

More posts