Blog

Reclaiming Critical Thinking in AI-Augmented Secure Software Development

Matias Madou, Ph.D.
Published Nov 18, 2025
Last updated on Nov 18, 2025

A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.

The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.

Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.

The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.

Over-reliance on AI risks cognitive decline

The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.

This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.

From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.

Developer education: Essential for the AI-driven ecosystem

A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time. 

Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.

If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.

A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.

This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.

Conclusion

When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills. 

Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.

The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.

View Resource
View Resource

The AI debate isn't about use, but application. Discover how to balance the need for AI productivity gains with robust security by relying on developers who deeply understand their code.

Interested in more?

Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

Book a demo
Share on:
Author
Matias Madou, Ph.D.
Published Nov 18, 2025

Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.

Matias is a researcher and developer with more than 15 years of hands-on software security experience. He has developed solutions for companies such as Fortify Software and his own company Sensei Security. Over his career, Matias has led multiple application security research projects which have led to commercial products and boasts over 10 patents under his belt. When he is away from his desk, Matias has served as an instructor for advanced application security training courses and regularly speaks at global conferences including RSA Conference, Black Hat, DefCon, BSIMM, OWASP AppSec and BruCon.

Matias holds a Ph.D. in Computer Engineering from Ghent University, where he studied application security through program obfuscation to hide the inner workings of an application.

Share on:

A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.

The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.

Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.

The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.

Over-reliance on AI risks cognitive decline

The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.

This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.

From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.

Developer education: Essential for the AI-driven ecosystem

A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time. 

Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.

If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.

A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.

This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.

Conclusion

When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills. 

Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.

The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.

View Resource
View Resource

Fill out the form below to download the report

We would like your permission to send you information on our products and/or related secure coding topics. We’ll always treat your personal details with the utmost care and will never sell them to other companies for marketing purposes.

Submit
To submit the form, please enable 'Analytics' cookies. Feel free to disable them again once you're done.

A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.

The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.

Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.

The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.

Over-reliance on AI risks cognitive decline

The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.

This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.

From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.

Developer education: Essential for the AI-driven ecosystem

A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time. 

Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.

If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.

A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.

This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.

Conclusion

When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills. 

Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.

The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.

View webinar
Get Started

Click on the link below and download the PDF of this resource.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

View reportBook a demo
View Resource
Share on:
Interested in more?

Share on:
Author
Matias Madou, Ph.D.
Published Nov 18, 2025

Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.

Matias is a researcher and developer with more than 15 years of hands-on software security experience. He has developed solutions for companies such as Fortify Software and his own company Sensei Security. Over his career, Matias has led multiple application security research projects which have led to commercial products and boasts over 10 patents under his belt. When he is away from his desk, Matias has served as an instructor for advanced application security training courses and regularly speaks at global conferences including RSA Conference, Black Hat, DefCon, BSIMM, OWASP AppSec and BruCon.

Matias holds a Ph.D. in Computer Engineering from Ghent University, where he studied application security through program obfuscation to hide the inner workings of an application.

Share on:

A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.

The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.

Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.

The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.

Over-reliance on AI risks cognitive decline

The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.

This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.

From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.

Developer education: Essential for the AI-driven ecosystem

A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time. 

Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.

If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.

A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.

This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.

Conclusion

When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills. 

Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.

The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.

Table of contents

Download PDF
View Resource
Interested in more?

Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

Book a demoDownload
Share on:
Resource hub

Resources to get you started

More posts
Resource hub

Resources to get you started

More posts