AI/LLM Security Video Series: All Episodes, Updated Weekly
Your Guide to the AI/LLM Security Intro Series
AI coding tools like GitHub Copilot, Cursor, and others are reshaping how software is built — but they also introduce new security challenges that developers must understand to build safe, reliable applications. To help teams adopt AI securely, we’ve created a free, 12-week AI/LLM Security Intro Video Series on YouTube.
This post serves as your central hub for the series. Each week, we’ll update it with a new video and description, covering essential concepts like prompt injection, data and model poisoning, supply chain risks, secure prompting, and more. Bookmark this page to follow along weekly, or subscribe to our YouTube channel to get every lesson as soon as it’s released.
If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the SCW platform, or request a demo if you’re not yet a customer. And if you’d like to stay connected with the latest content, updates, and resources, opt in here to join our community of developers and security leaders.
Episodes (Updated Weekly)
Week 1 — AI Coding Risks: Dangers of Using LLMs
In this video, we explore the potential dangers of using AI/LLMs when writing code and highlight key risks developers face when integrating AI-powered tools into their workflows.
Week 2 — AI Coding Benefits: Secure AI-Assisted Development
AI coding tools aren’t just risky — when used securely, they can help developers work faster and smarter. In this video, we explore the advantages of using AI/LLMs when writing code, highlighting how teams can harness AI responsibly while avoiding common security pitfalls.
Week 3 — Prompt Injection Explained: Protecting AI-Generated Code
Prompt injection is one of the most common AI/LLM vulnerabilities — and one every developer should know how to prevent. In this video, we introduce direct and indirect prompt injection, explain how injected instructions can manipulate AI-generated code, and share practical steps developers can take to defend against it.
Week 4 — Sensitive Info Disclosure: Avoiding AI Data Leaks
AI-powered tools can inadvertently leak sensitive information, putting your applications and data at risk. In this video, we cover sensitive information disclosure vulnerabilities, explain how they arise when using AI/LLMs, and share practical steps developers can take to reduce exposure.
Week 5 — AI Supply Chain Risks: Securing Dependencies
AI-assisted development accelerates coding — but it also introduces supply chain risks that can impact every layer of your applications. In this video, we explore supply chain vulnerabilities related to AI/LLMs, explain how third-party models and APIs can expand your attack surface, and share strategies for minimizing exposure.
Week 6 — Data Poisoning: Securing AI Models and Outputs
AI systems are only as secure as their training data — and compromised inputs can create vulnerabilities that ripple across your applications. In this video, we introduce data and model poisoning attacks, explain how malicious inputs can manipulate AI outputs, and share strategies to safeguard your systems.
Week 7 — Improper Output Handling: Validating AI-Generated Code
AI-powered tools can generate code fast — but if outputs aren’t validated, vulnerabilities can creep in unnoticed. In this video, we examine improper output handling in AI-assisted development, explain how risky outputs can compromise your applications, and share techniques for safeguarding generated code.
These videos are designed to introduce the core concepts of AI/LLM security, but there’s so much more to explore within the Secure Code Warrior platform. Dive into AI Challenges that simulate real-world AI-assisted code review and remediation, explore our AI/LLM Guidelines aligned to industry best practices, and work through Walkthroughs, Missions, Quests, and Course Templates that provide hands-on experience building secure coding habits. For teams ready to advance their skills, the platform also offers a growing library of over 130 AI/LLM-focused learning activities, including topics like Coding with AI, Intro to AI Risk & Security, and the OWASP Top 10 for LLM Applications. Request a demo to learn more.


Secure Code Warrior is here to help your organization secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
Book a demo
Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST. She’s passionate about making secure development and compliance more practical and approachable for technical teams, bridging the gap between security expectations and the realities of modern software development.


Resources to get you started
Trust Agent: AI by Secure Code Warrior
This one-pager introduces SCW Trust Agent: AI, a new set of capabilities that provide deep observability and governance over AI coding tools. Learn how our solution uniquely correlates AI tool usage with developer skills to help you manage risk, optimize your SDLC, and ensure every line of AI-generated code is secure.
Vibe Coding: Practical Guide to Updating Your AppSec Strategy for AI
Watch on-demand to learn how to empower AppSec managers to become AI enablers, rather than blockers, through a practical, training-first approach. We'll show you how to leverage Secure Code Warrior (SCW) to strategically update your AppSec strategy for the age of AI coding assistants.









