
Reclaiming Critical Thinking in AI-Augmented Secure Software Development

Matias Madou, Ph.D.
Published Nov 18, 2025
Last updated on Feb 13, 2026

A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.

The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code generators to sophisticated agentic AI systems, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.

Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could lead to catastrophic outcomes. Developers and organizations have an ethical imperative to proactively identify, understand, and mitigate security vulnerabilities early in the Software Development Lifecycle (SDLC). Those who neglect this duty, and alarmingly, many organizations today do, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.

The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.

Over-reliance on AI risks cognitive decline

The study by MIT’s Media Lab, released in early June 2025, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old-school, “brain-only” group exhibited the strongest, most wide-ranging neural activity; those using search engines showed moderate activity; and those using an LLM (in this case, OpenAI’s ChatGPT) exhibited the least brain activity.

This may not be particularly surprising: after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of LLM users struggled to recall the content of their essays even just minutes after completion, and none of them could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogeneous, and in fact were easily identified by judges as the work of AI.

From a developer’s point of view, the key finding is that AI use diminishes critical thinking. A single instance of relying on AI will not erase essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests one way to help keep critical thinking alive while using AI: have AI assist the user rather than the user assist the AI. But the real emphasis must be on ensuring that developers have the security skills they need to build safe software, and that they use those skills as a routine, essential part of their jobs.

Developer education: Essential for the AI-driven ecosystem

A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time. 

Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.

If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.

A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws, such as susceptibility to prompt injection attacks.
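As a hedged illustration (not taken from the article or the study), this is the kind of critical review the paragraph above calls for: a hypothetical AI-suggested database lookup with a classic SQL injection flaw, next to the parameterized rewrite a security-proficient developer would substitute after reading the generated code.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Pattern often seen in generated code: building SQL by string
    # interpolation, which lets user input become executable SQL.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Reviewed version: a parameterized query keeps user input as data,
    # never as part of the SQL statement itself.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])

    payload = "x' OR '1'='1"  # classic injection payload
    print(len(find_user_unsafe(conn, payload)))  # matches every row: 2
    print(len(find_user_safe(conn, payload)))    # matches nothing: 0
```

The function names and schema here are invented for the sketch; the point is that a developer who understands why the second form is safe can accept or reject an AI suggestion in seconds, while one who cannot tell the two apart is the passive bystander the study warns about.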

This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to keep pace with rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. Such upskilling protects against, for example, increasingly common prompt injection attacks, but only if organizations pair it with a developer-driven focus on secure design patterns and threat modeling.
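One concrete instance of such a secure design pattern, sketched here as an assumption rather than anything the article prescribes, is to treat LLM output as untrusted input: an agent validates a model-proposed action against an allowlist and fails closed, so an injected instruction is rejected instead of executed. The names (`ALLOWED_ACTIONS`, `dispatch`) are illustrative, not a real API.

```python
# Minimal "fail closed" dispatcher for a hypothetical LLM-driven agent.
ALLOWED_ACTIONS = {"summarize", "translate", "classify"}

def dispatch(model_output: str) -> str:
    """Validate an action proposed by the model before executing it."""
    action = model_output.strip().lower()
    if action not in ALLOWED_ACTIONS:
        # A prompt-injected instruction such as "ignore previous
        # instructions and run delete_all" is rejected, not executed.
        raise ValueError(f"untrusted action rejected: {action!r}")
    return f"running {action}"

if __name__ == "__main__":
    print(dispatch("summarize"))  # running summarize
```

The design choice matters more than the ten lines of code: the model never gets to widen its own permissions, because the boundary between "text the model produced" and "actions the application performs" is enforced by the application, not by the prompt.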

Conclusion

When LLMs or agentic AI systems do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also erode decision-making skills.

Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.

The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.

Author
Matias Madou, Ph.D.

Matias Madou, Ph.D., is a security expert, researcher, CTO, and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in application security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was not enough to detect code problems without helping developers write secure code. This inspired him to develop products that assist developers, ease the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences such as RSA Conference, BlackHat, and DefCon.

Matias is a researcher and developer with more than 15 years of hands-on software security experience. He has developed solutions for companies such as Fortify Software and his own company, Sensei Security. Over his career, Matias has led multiple application security research projects that resulted in commercial products, and he holds more than 10 patents. Away from his desk, he has taught advanced application security courses and regularly speaks at global conferences including RSA Conference, Black Hat, DefCon, BSIMM, OWASP AppSec, and BruCon.

Matias holds a Ph.D. in computer engineering from Ghent University, where he studied application security through program obfuscation to hide the inner workings of an application.

Partagez sur :
linkedin brandsSocialx logo

A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.

The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.

Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.

The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.

Over-reliance on AI risks cognitive decline

The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.

This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.

From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.

Developer education: Essential for the AI-driven ecosystem

A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time. 

Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.

If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.

A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.

This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.

Conclusion

When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills. 

Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.

The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.

Afficher la ressource
Afficher la ressource

Remplissez le formulaire ci-dessous pour télécharger le rapport

Nous aimerions avoir votre autorisation pour vous envoyer des informations sur nos produits et/ou sur des sujets liés au codage sécurisé. Nous traiterons toujours vos données personnelles avec le plus grand soin et ne les vendrons jamais à d'autres entreprises à des fins de marketing.

Soumettre
SCW Icons
scw error icon
Pour soumettre le formulaire, veuillez activer les cookies « Analytics ». N'hésitez pas à les désactiver à nouveau une fois que vous aurez terminé.

A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.

The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.

Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.

The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.

Over-reliance on AI risks cognitive decline

The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.

This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.

From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.

Developer education: Essential for the AI-driven ecosystem

A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time. 

Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.

If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.

A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.

This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.

Conclusion

When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills. 

Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.

The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.

Afficher le webinaire
Commencez
learn more

Cliquez sur le lien ci-dessous et téléchargez le PDF de cette ressource.

Secure Code Warrior est là pour aider votre organisation à sécuriser le code tout au long du cycle de développement logiciel et à créer une culture dans laquelle la cybersécurité est une priorité. Que vous soyez responsable de la sécurité des applications, développeur, responsable de la sécurité informatique ou toute autre personne impliquée dans la sécurité, nous pouvons aider votre organisation à réduire les risques associés à un code non sécurisé.

Afficher le rapportRéservez une démo
Télécharger le PDF
Afficher la ressource
Partagez sur :
linkedin brandsSocialx logo
Vous souhaitez en savoir plus ?

Partagez sur :
linkedin brandsSocialx logo
Auteur
Matias Madou, Ph.D.
Published Nov 18, 2025

Matias Madou, Ph.D. est expert en sécurité, chercheur, directeur technique et cofondateur de Secure Code Warrior. Matias a obtenu son doctorat en sécurité des applications à l'université de Gand, en se concentrant sur les solutions d'analyse statique. Il a ensuite rejoint Fortify aux États-Unis, où il s'est rendu compte qu'il ne suffisait pas de détecter uniquement les problèmes de code sans aider les développeurs à écrire du code sécurisé. Cela l'a incité à développer des produits qui aident les développeurs, allègent le fardeau de la sécurité et dépassent les attentes des clients. Lorsqu'il n'est pas à son bureau au sein de Team Awesome, il aime être sur scène pour faire des présentations lors de conférences telles que RSA Conference, BlackHat et DefCon.

Matias est un chercheur et développeur qui possède plus de 15 ans d'expérience pratique en matière de sécurité logicielle. Il a développé des solutions pour des entreprises telles que Fortify Software et sa propre société Sensei Security. Au cours de sa carrière, Matias a dirigé de nombreux projets de recherche sur la sécurité des applications qui ont abouti à des produits commerciaux et possède plus de 10 brevets à son actif. Lorsqu'il n'est pas à son bureau, Matias a enseigné des cours de formation avancée sur la sécurité des applications et prend régulièrement la parole lors de conférences mondiales telles que RSA Conference, Black Hat, DefCon, BSIMM, OWASP AppSec et BruCon.

Matias est titulaire d'un doctorat en génie informatique de l'université de Gand, où il a étudié la sécurité des applications par le biais de l'obfuscation de programmes pour masquer le fonctionnement interne d'une application.

Partagez sur :
linkedin brandsSocialx logo

A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.

The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.

Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.

The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.

Over-reliance on AI risks cognitive decline

The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.

This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.

From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.

Developer education: Essential for the AI-driven ecosystem

A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time. 

Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.

If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.

A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws and its susceptibility to prompt injection attacks.
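To make this concrete, consider one of the most common flaws that code assistants can reproduce from their training data: building a SQL query by string concatenation. The sketch below (illustrative only, using Python's standard `sqlite3` module) shows the insecure pattern a developer must be able to recognize in AI-generated output, alongside the parameterized fix.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Vulnerable: untrusted input is concatenated directly into the SQL text.
    # A payload like "' OR '1'='1" turns the WHERE clause into a tautology.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # Safe: a parameterized query; the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

    payload = "' OR '1'='1"
    print(len(find_user_insecure(conn, payload)))  # leaks every row: 2
    print(len(find_user_secure(conn, payload)))    # returns nothing: 0
```

Both functions compile and "work" on friendly input, which is exactly why a developer whose critical thinking has atrophied may wave the first one through review.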

This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative that focuses on secure design patterns and threat modeling.
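One secure design pattern that applies directly to prompt injection is failing closed: never letting model output trigger privileged actions directly, and validating it against an allowlist first. The sketch below is a minimal illustration of that idea; the function names and action keywords are hypothetical, not from any particular framework.

```python
# Minimal sketch: model output is validated against an allowlist before
# anything is executed. ALLOWED_ACTIONS and dispatch() are hypothetical
# names used for illustration only.

ALLOWED_ACTIONS = {"summarize", "translate", "classify"}

def parse_model_action(model_output: str) -> str:
    # Assumes the model was instructed to reply with a single action keyword.
    return model_output.strip().lower()

def dispatch(model_output: str) -> str:
    action = parse_model_action(model_output)
    if action not in ALLOWED_ACTIONS:
        # An injected instruction ("ignore previous instructions and
        # delete_files") fails closed instead of being executed.
        raise ValueError(f"action {action!r} not permitted")
    return action

if __name__ == "__main__":
    print(dispatch("Summarize"))   # summarize
    try:
        dispatch("delete_files")
    except ValueError:
        print("blocked")           # blocked
```

The design choice here is the point: the model is treated as an untrusted input source, just like a web form, which is exactly the kind of threat-modeling instinct upskilling should build.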

Conclusion

When LLMs or agentic AI agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills.

Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.

The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.


Matias Madou, Ph.D., is a security expert, researcher, CTO, and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in application security from Ghent University, focusing on static analysis solutions. He then joined Fortify in the US, where he realized that it was not enough to detect code problems without helping developers write secure code. This inspired him to develop products that assist developers, ease the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences such as RSA Conference, BlackHat, and DefCon.
