AI/LLM Security Video Series: All Episodes, Updated Weekly
Your Guide to the AI/LLM Security Intro Series
AI coding tools like GitHub Copilot, Cursor, and others are reshaping how software is built — but they also introduce new security challenges that developers must understand to build safe, reliable applications. To help teams adopt AI securely, we’ve created a free, 12-week AI/LLM Security Intro Video Series on YouTube.
This post serves as your central hub for the series. Each week, we’ll update it with a new video and description, covering essential concepts like prompt injection, data and model poisoning, supply chain risks, secure prompting, and more. Bookmark this page to follow along weekly, or subscribe to our YouTube channel to get every lesson as soon as it’s released.
If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the SCW platform or request a demo if you’re not yet a customer. Subscribe to our YouTube channel to catch every new episode as it’s released. And if you’d like to stay connected with the latest content, updates, and resources, opt-in here to join our community of developers and security leaders.
Episodes (Updated Weekly)
Week 1 — AI Coding Risks: Dangers of Using LLMs
In this video, we explore the potential dangers of using AI/LLMs when writing code and highlight key risks developers face when integrating AI-powered tools into their workflows.
Week 2 — AI Coding Benefits: Secure AI-Assisted Development
AI coding tools aren’t just risky — when used securely, they can help developers work faster and smarter. In this video, we explore the advantages of using AI/LLMs when writing code, highlighting how teams can harness AI responsibly while avoiding common security pitfalls.
Week 3 — Prompt Injection Explained: Protecting AI-Generated Code
Prompt injection is one of the most common AI/LLM vulnerabilities — and one every developer should know how to prevent. In this video, we introduce dire
Week 4 — Sensitive Info Disclosure: Avoiding AI Data Leaks
AI-powered tools can inadvertently leak sensitive information, putting your applications and data at risk. In this video, we cover sensitive information disclosure vulnerabilities, explain how they arise when using AI/LLMs, and share practical steps developers can take to reduce exposure.
Week 5 — AI Supply Chain Risks: Securing Dependencies
AI-assisted development accelerates coding — but it also introduces supply chain risks that can impact every layer of your applications. In this video, we explore vulnerabilities related to AI/LLMs, explain how third-party models and APIs can expand your attack surface, and share strategies for minimizing exposure.
Week 6 — Data Poisoning: Securing AI Models and Outputs
AI systems are only as secure as their training data — and compromised inputs can create vulnerabilities that ripple across your applications. In this video, we introduce data and model poisoning attacks, explain how malicious inputs can manipulate AI outputs, and share strategies to safeguard your systems.
Week 7 — Improper Output Handling: Validating AI-Generated Code
AI-powered tools can generate code fast — but if outputs aren’t validated, vulnerabilities can creep in unnoticed. In this video, we examine improper output handling in AI-assisted development, explain how risky outputs can compromise your applications, and share techniques for safeguarding generated code.
Week 8 — Excessive Agency: Controlling AI Autonomy Risks
As AI systems become more autonomous, excessive agency creates new risks where models act beyond their intended scope. In this video, we explore excessive agency vulnerabilities in AI-assisted development, explain how overstepping behaviors arise, and discuss techniques for maintaining control over AI-driven processes.
Week 9 — System Prompt Leakage: Hidden AI Security Risks
System prompts often include hidden instructions that guide AI behavior — but if these are exposed, attackers can manipulate models or extract sensitive information. In this video, we cover system prompt leakage vulnerabilities, explain how they occur, and discuss steps developers can take to safeguard their AI-powered workflows.
Week 10 — Vector Weaknesses: Securing AI Retrieval Workflows
AI models often rely on vector databases and embeddings to deliver powerful capabilities — but misconfigurations and insecure implementations can expose sensitive data and create new attack vectors. In this video, we dive into vector and embedding weaknesses, explain common security challenges, and share strategies to secure your AI-powered search and retrieval workflows.
These videos are designed to introduce the core concepts of AI/LLM security, but there’s so much more to explore within the Secure Code Warrior platform. Dive into AI Challenges that simulate real-world AI-assisted code review and remediation, explore our AI/LLM Guidelines aligned to industry best practices, and work through Walkthroughs, Missions, Quests, and Course Templates that provide hands-on experience building secure coding habits. For teams ready to advance their skills, the platform also offers a growing library of over 130 AI/LLM-focused learning activities, including topics like Coding with AI, Intro to AI Risk & Security, and the OWASP Top 10 for LLM Applications. Request a demo to learn more.


Your all-in-one guide to our 12-week AI/LLM security video series. Catch every episode, learn key AI security concepts, and follow along weekly.
Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
Book a demoShannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.
Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST. She’s passionate about making secure development and compliance more practical and approachable for technical teams, bridging the gap between security expectations and the realities of modern software development.


Your Guide to the AI/LLM Security Intro Series
AI coding tools like GitHub Copilot, Cursor, and others are reshaping how software is built — but they also introduce new security challenges that developers must understand to build safe, reliable applications. To help teams adopt AI securely, we’ve created a free, 12-week AI/LLM Security Intro Video Series on YouTube.
This post serves as your central hub for the series. Each week, we’ll update it with a new video and description, covering essential concepts like prompt injection, data and model poisoning, supply chain risks, secure prompting, and more. Bookmark this page to follow along weekly, or subscribe to our YouTube channel to get every lesson as soon as it’s released.
If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the SCW platform or request a demo if you’re not yet a customer. Subscribe to our YouTube channel to catch every new episode as it’s released. And if you’d like to stay connected with the latest content, updates, and resources, opt-in here to join our community of developers and security leaders.
Episodes (Updated Weekly)
Week 1 — AI Coding Risks: Dangers of Using LLMs
In this video, we explore the potential dangers of using AI/LLMs when writing code and highlight key risks developers face when integrating AI-powered tools into their workflows.
Week 2 — AI Coding Benefits: Secure AI-Assisted Development
AI coding tools aren’t just risky — when used securely, they can help developers work faster and smarter. In this video, we explore the advantages of using AI/LLMs when writing code, highlighting how teams can harness AI responsibly while avoiding common security pitfalls.
Week 3 — Prompt Injection Explained: Protecting AI-Generated Code
Prompt injection is one of the most common AI/LLM vulnerabilities — and one every developer should know how to prevent. In this video, we introduce dire
Week 4 — Sensitive Info Disclosure: Avoiding AI Data Leaks
AI-powered tools can inadvertently leak sensitive information, putting your applications and data at risk. In this video, we cover sensitive information disclosure vulnerabilities, explain how they arise when using AI/LLMs, and share practical steps developers can take to reduce exposure.
Week 5 — AI Supply Chain Risks: Securing Dependencies
AI-assisted development accelerates coding — but it also introduces supply chain risks that can impact every layer of your applications. In this video, we explore vulnerabilities related to AI/LLMs, explain how third-party models and APIs can expand your attack surface, and share strategies for minimizing exposure.
Week 6 — Data Poisoning: Securing AI Models and Outputs
AI systems are only as secure as their training data — and compromised inputs can create vulnerabilities that ripple across your applications. In this video, we introduce data and model poisoning attacks, explain how malicious inputs can manipulate AI outputs, and share strategies to safeguard your systems.
Week 7 — Improper Output Handling: Validating AI-Generated Code
AI-powered tools can generate code fast — but if outputs aren’t validated, vulnerabilities can creep in unnoticed. In this video, we examine improper output handling in AI-assisted development, explain how risky outputs can compromise your applications, and share techniques for safeguarding generated code.
Week 8 — Excessive Agency: Controlling AI Autonomy Risks
As AI systems become more autonomous, excessive agency creates new risks where models act beyond their intended scope. In this video, we explore excessive agency vulnerabilities in AI-assisted development, explain how overstepping behaviors arise, and discuss techniques for maintaining control over AI-driven processes.
Week 9 — System Prompt Leakage: Hidden AI Security Risks
System prompts often include hidden instructions that guide AI behavior — but if these are exposed, attackers can manipulate models or extract sensitive information. In this video, we cover system prompt leakage vulnerabilities, explain how they occur, and discuss steps developers can take to safeguard their AI-powered workflows.
Week 10 — Vector Weaknesses: Securing AI Retrieval Workflows
AI models often rely on vector databases and embeddings to deliver powerful capabilities — but misconfigurations and insecure implementations can expose sensitive data and create new attack vectors. In this video, we dive into vector and embedding weaknesses, explain common security challenges, and share strategies to secure your AI-powered search and retrieval workflows.
These videos are designed to introduce the core concepts of AI/LLM security, but there’s so much more to explore within the Secure Code Warrior platform. Dive into AI Challenges that simulate real-world AI-assisted code review and remediation, explore our AI/LLM Guidelines aligned to industry best practices, and work through Walkthroughs, Missions, Quests, and Course Templates that provide hands-on experience building secure coding habits. For teams ready to advance their skills, the platform also offers a growing library of over 130 AI/LLM-focused learning activities, including topics like Coding with AI, Intro to AI Risk & Security, and the OWASP Top 10 for LLM Applications. Request a demo to learn more.

Your Guide to the AI/LLM Security Intro Series
AI coding tools like GitHub Copilot, Cursor, and others are reshaping how software is built — but they also introduce new security challenges that developers must understand to build safe, reliable applications. To help teams adopt AI securely, we’ve created a free, 12-week AI/LLM Security Intro Video Series on YouTube.
This post serves as your central hub for the series. Each week, we’ll update it with a new video and description, covering essential concepts like prompt injection, data and model poisoning, supply chain risks, secure prompting, and more. Bookmark this page to follow along weekly, or subscribe to our YouTube channel to get every lesson as soon as it’s released.
If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the SCW platform or request a demo if you’re not yet a customer. Subscribe to our YouTube channel to catch every new episode as it’s released. And if you’d like to stay connected with the latest content, updates, and resources, opt-in here to join our community of developers and security leaders.
Episodes (Updated Weekly)
Week 1 — AI Coding Risks: Dangers of Using LLMs
In this video, we explore the potential dangers of using AI/LLMs when writing code and highlight key risks developers face when integrating AI-powered tools into their workflows.
Week 2 — AI Coding Benefits: Secure AI-Assisted Development
AI coding tools aren’t just risky — when used securely, they can help developers work faster and smarter. In this video, we explore the advantages of using AI/LLMs when writing code, highlighting how teams can harness AI responsibly while avoiding common security pitfalls.
Week 3 — Prompt Injection Explained: Protecting AI-Generated Code
Prompt injection is one of the most common AI/LLM vulnerabilities — and one every developer should know how to prevent. In this video, we introduce dire
Week 4 — Sensitive Info Disclosure: Avoiding AI Data Leaks
AI-powered tools can inadvertently leak sensitive information, putting your applications and data at risk. In this video, we cover sensitive information disclosure vulnerabilities, explain how they arise when using AI/LLMs, and share practical steps developers can take to reduce exposure.
Week 5 — AI Supply Chain Risks: Securing Dependencies
AI-assisted development accelerates coding — but it also introduces supply chain risks that can impact every layer of your applications. In this video, we explore vulnerabilities related to AI/LLMs, explain how third-party models and APIs can expand your attack surface, and share strategies for minimizing exposure.
Week 6 — Data Poisoning: Securing AI Models and Outputs
AI systems are only as secure as their training data — and compromised inputs can create vulnerabilities that ripple across your applications. In this video, we introduce data and model poisoning attacks, explain how malicious inputs can manipulate AI outputs, and share strategies to safeguard your systems.
Week 7 — Improper Output Handling: Validating AI-Generated Code
AI-powered tools can generate code fast — but if outputs aren’t validated, vulnerabilities can creep in unnoticed. In this video, we examine improper output handling in AI-assisted development, explain how risky outputs can compromise your applications, and share techniques for safeguarding generated code.
Week 8 — Excessive Agency: Controlling AI Autonomy Risks
As AI systems become more autonomous, excessive agency creates new risks where models act beyond their intended scope. In this video, we explore excessive agency vulnerabilities in AI-assisted development, explain how overstepping behaviors arise, and discuss techniques for maintaining control over AI-driven processes.
Week 9 — System Prompt Leakage: Hidden AI Security Risks
System prompts often include hidden instructions that guide AI behavior — but if these are exposed, attackers can manipulate models or extract sensitive information. In this video, we cover system prompt leakage vulnerabilities, explain how they occur, and discuss steps developers can take to safeguard their AI-powered workflows.
Week 10 — Vector Weaknesses: Securing AI Retrieval Workflows
AI models often rely on vector databases and embeddings to deliver powerful capabilities — but misconfigurations and insecure implementations can expose sensitive data and create new attack vectors. In this video, we dive into vector and embedding weaknesses, explain common security challenges, and share strategies to secure your AI-powered search and retrieval workflows.
These videos are designed to introduce the core concepts of AI/LLM security, but there’s so much more to explore within the Secure Code Warrior platform. Dive into AI Challenges that simulate real-world AI-assisted code review and remediation, explore our AI/LLM Guidelines aligned to industry best practices, and work through Walkthroughs, Missions, Quests, and Course Templates that provide hands-on experience building secure coding habits. For teams ready to advance their skills, the platform also offers a growing library of over 130 AI/LLM-focused learning activities, including topics like Coding with AI, Intro to AI Risk & Security, and the OWASP Top 10 for LLM Applications. Request a demo to learn more.

Click on the link below and download the PDF of this resource.
Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
View reportBook a demoShannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.
Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST. She’s passionate about making secure development and compliance more practical and approachable for technical teams, bridging the gap between security expectations and the realities of modern software development.
Your Guide to the AI/LLM Security Intro Series
AI coding tools like GitHub Copilot, Cursor, and others are reshaping how software is built — but they also introduce new security challenges that developers must understand to build safe, reliable applications. To help teams adopt AI securely, we’ve created a free, 12-week AI/LLM Security Intro Video Series on YouTube.
This post serves as your central hub for the series. Each week, we’ll update it with a new video and description, covering essential concepts like prompt injection, data and model poisoning, supply chain risks, secure prompting, and more. Bookmark this page to follow along weekly, or subscribe to our YouTube channel to get every lesson as soon as it’s released.
If you want to dive deeper beyond these introductory lessons, explore the full AI/LLM collection in the SCW platform or request a demo if you’re not yet a customer. Subscribe to our YouTube channel to catch every new episode as it’s released. And if you’d like to stay connected with the latest content, updates, and resources, opt-in here to join our community of developers and security leaders.
Episodes (Updated Weekly)
Week 1 — AI Coding Risks: Dangers of Using LLMs
In this video, we explore the potential dangers of using AI/LLMs when writing code and highlight key risks developers face when integrating AI-powered tools into their workflows.
Week 2 — AI Coding Benefits: Secure AI-Assisted Development
AI coding tools aren’t just risky — when used securely, they can help developers work faster and smarter. In this video, we explore the advantages of using AI/LLMs when writing code, highlighting how teams can harness AI responsibly while avoiding common security pitfalls.
Week 3 — Prompt Injection Explained: Protecting AI-Generated Code
Prompt injection is one of the most common AI/LLM vulnerabilities — and one every developer should know how to prevent. In this video, we introduce dire
Week 4 — Sensitive Info Disclosure: Avoiding AI Data Leaks
AI-powered tools can inadvertently leak sensitive information, putting your applications and data at risk. In this video, we cover sensitive information disclosure vulnerabilities, explain how they arise when using AI/LLMs, and share practical steps developers can take to reduce exposure.
Week 5 — AI Supply Chain Risks: Securing Dependencies
AI-assisted development accelerates coding — but it also introduces supply chain risks that can impact every layer of your applications. In this video, we explore vulnerabilities related to AI/LLMs, explain how third-party models and APIs can expand your attack surface, and share strategies for minimizing exposure.
Week 6 — Data Poisoning: Securing AI Models and Outputs
AI systems are only as secure as their training data — and compromised inputs can create vulnerabilities that ripple across your applications. In this video, we introduce data and model poisoning attacks, explain how malicious inputs can manipulate AI outputs, and share strategies to safeguard your systems.
Week 7 — Improper Output Handling: Validating AI-Generated Code
AI-powered tools can generate code fast — but if outputs aren’t validated, vulnerabilities can creep in unnoticed. In this video, we examine improper output handling in AI-assisted development, explain how risky outputs can compromise your applications, and share techniques for safeguarding generated code.
Week 8 — Excessive Agency: Controlling AI Autonomy Risks
As AI systems become more autonomous, excessive agency creates new risks where models act beyond their intended scope. In this video, we explore excessive agency vulnerabilities in AI-assisted development, explain how overstepping behaviors arise, and discuss techniques for maintaining control over AI-driven processes.
Week 9 — System Prompt Leakage: Hidden AI Security Risks
System prompts often include hidden instructions that guide AI behavior — but if these are exposed, attackers can manipulate models or extract sensitive information. In this video, we cover system prompt leakage vulnerabilities, explain how they occur, and discuss steps developers can take to safeguard their AI-powered workflows.
Week 10 — Vector Weaknesses: Securing AI Retrieval Workflows
AI models often rely on vector databases and embeddings to deliver powerful capabilities — but misconfigurations and insecure implementations can expose sensitive data and create new attack vectors. In this video, we dive into vector and embedding weaknesses, explain common security challenges, and share strategies to secure your AI-powered search and retrieval workflows.
These videos are designed to introduce the core concepts of AI/LLM security, but there’s so much more to explore within the Secure Code Warrior platform. Dive into AI Challenges that simulate real-world AI-assisted code review and remediation, explore our AI/LLM Guidelines aligned to industry best practices, and work through Walkthroughs, Missions, Quests, and Course Templates that provide hands-on experience building secure coding habits. For teams ready to advance their skills, the platform also offers a growing library of over 130 AI/LLM-focused learning activities, including topics like Coding with AI, Intro to AI Risk & Security, and the OWASP Top 10 for LLM Applications. Request a demo to learn more.
Table of contents
Shannon Holt is a cybersecurity product marketer with a background in application security, cloud security services, and compliance standards like PCI-DSS and HITRUST.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
Book a demoDownloadResources to get you started
Threat Modeling with AI: Turning Every Developer into a Threat Modeler
Walk away better equipped to help developers combine threat modeling ideas and techniques with the AI tools they're already using to strengthen security, improve collaboration, and build more resilient software from the start.













.png)

.avif)
.png)


