
Rückgewinnung kritischer Denkweisen in der KI-gestützten sicheren Softwareentwicklung
A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.
The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.
Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.
The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.
Over-reliance on AI risks cognitive decline
The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.
This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.
From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.
Developer education: Essential for the AI-driven ecosystem
A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time.
Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.
If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.
A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.
This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.
Conclusion
When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills.
Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.
The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.


In der KI-Debatte geht es nicht um Nutzung, sondern um Anwendung. Erfahren Sie, wie Sie die Notwendigkeit von KI-Produktivitätssteigerungen mit robuster Sicherheit in Einklang bringen können, indem Sie sich auf Entwickler verlassen, die ihren Code genau verstehen.
Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.

Secure Code Warrior ist für Ihr Unternehmen da, um Ihnen zu helfen, Code während des gesamten Softwareentwicklungszyklus zu sichern und eine Kultur zu schaffen, in der Cybersicherheit an erster Stelle steht. Ganz gleich, ob Sie AppSec-Manager, Entwickler, CISO oder jemand anderes sind, der sich mit Sicherheit befasst, wir können Ihrem Unternehmen helfen, die mit unsicherem Code verbundenen Risiken zu reduzieren.
Eine Demo buchenMatias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.
Matias is a researcher and developer with more than 15 years of hands-on software security experience. He has developed solutions for companies such as Fortify Software and his own company Sensei Security. Over his career, Matias has led multiple application security research projects which have led to commercial products and boasts over 10 patents under his belt. When he is away from his desk, Matias has served as an instructor for advanced application security training courses and regularly speaks at global conferences including RSA Conference, Black Hat, DefCon, BSIMM, OWASP AppSec and BruCon.
Matias holds a Ph.D. in Computer Engineering from Ghent University, where he studied application security through program obfuscation to hide the inner workings of an application.


A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.
The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.
Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.
The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.
Over-reliance on AI risks cognitive decline
The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.
This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.
From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.
Developer education: Essential for the AI-driven ecosystem
A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time.
Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.
If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.
A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.
This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.
Conclusion
When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills.
Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.
The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.

A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.
The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.
Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.
The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.
Over-reliance on AI risks cognitive decline
The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.
This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.
From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.
Developer education: Essential for the AI-driven ecosystem
A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time.
Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.
If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.
A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.
This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.
Conclusion
When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills.
Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.
The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.

Klicken Sie auf den Link unten und laden Sie das PDF dieser Ressource herunter.
Secure Code Warrior ist für Ihr Unternehmen da, um Ihnen zu helfen, Code während des gesamten Softwareentwicklungszyklus zu sichern und eine Kultur zu schaffen, in der Cybersicherheit an erster Stelle steht. Ganz gleich, ob Sie AppSec-Manager, Entwickler, CISO oder jemand anderes sind, der sich mit Sicherheit befasst, wir können Ihrem Unternehmen helfen, die mit unsicherem Code verbundenen Risiken zu reduzieren.
Bericht ansehenEine Demo buchenMatias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.
Matias is a researcher and developer with more than 15 years of hands-on software security experience. He has developed solutions for companies such as Fortify Software and his own company Sensei Security. Over his career, Matias has led multiple application security research projects which have led to commercial products and boasts over 10 patents under his belt. When he is away from his desk, Matias has served as an instructor for advanced application security training courses and regularly speaks at global conferences including RSA Conference, Black Hat, DefCon, BSIMM, OWASP AppSec and BruCon.
Matias holds a Ph.D. in Computer Engineering from Ghent University, where he studied application security through program obfuscation to hide the inner workings of an application.
A version of this article appeared in Cybersecurity Insiders. It has been updated and syndicated here.
The adoption of artificial intelligence assistants, ranging from Large Language Model (LLM) code creators to sophisticated agentic AI agents, provides software developers with a wealth of benefits. Yet recent findings, underscored by a new MIT study, issue a warning: heavy reliance on AI might lead users to lose critical thinking skills.
Given a software landscape where AI-related security risks have grown in step with AI adoption, this loss of cognitive fitness could indeed lead to catastrophic outcomes. It is an ethical imperative for developers and organizations to proactively identify, understand, and mitigate security vulnerabilities early within the Software Development Lifecycle (SDLC). Those who neglect this duty, which, alarmingly, describes many organizations today, face an equally sharp rise in potential security threats, some of which are directly attributable to AI.
The debate is not whether to use AI, since the productivity and efficiency benefits are too great to dismiss. Instead, the real question is how to apply it most effectively: safeguarding security while maximizing output growth. And this is best done by security-proficient developers who deeply understand their code, no matter where it originated.
Over-reliance on AI risks cognitive decline
The study by MIT’s Media Lab, released in early June, tested the cognitive functions of 54 students from five Boston-area universities while they wrote an essay. The students were divided into three groups: those using a Large Language Model (LLM), those using search engines, and those going old school with no outside assistance. The research team used electroencephalography (EEG) to record the participants' brain activity and assess cognitive engagement and cognitive load. The team found that the old school, “brain-only” group exhibited the strongest, most wide-ranging neural activity, while those using search engines showed moderate activity, and those using an LLM (in this case, OpenAI’s ChatGPT-4) exhibited the least amount of brain activity.
This may not be particularly surprising—after all, when you enlist a tool to do your thinking for you, you are going to do less thinking. However, the study also revealed that LLM users had a weaker connection to their papers: 83% of students struggled to recall the content of their essays even just minutes after completion, and none of the participants could provide accurate quotes. A sense of author ownership was missing compared with the other groups. Brain-only participants not only had the highest sense of ownership and showed the broadest range of brain activity, but they also produced the most original papers. The LLM group’s results were more homogenous—and, in fact, were easily identified by judges as the work of AI.
From the developers' point of view, the key outcome is diminished critical thinking stemming from AI use. A single instance of relying on AI might not cause a loss of essential thinking skills, of course, but constant use over time can cause those skills to atrophy. The study suggests a way to help keep critical thinking alive while using AI—by having AI help the user rather than the user help AI—but the real emphasis must be on ensuring that developers have the security skills they need to build safe software and that they use those skills as a routine, essential part of their jobs.
Developer education: Essential for the AI-driven ecosystem
A study like MIT’s isn’t going to stop AI adoption, which is pushing forward across every sector. Stanford University’s 2025 AI Index Report found that 78% of organizations reported using AI in 2024, compared with 55% in 2023. That kind of growth is expected to continue. But increased use is mirrored by increased risk: The report found that AI-related cybersecurity incidents grew by 56% over the same time.
Stanford’s report underscores the vital need for improved AI governance, as it also found that organizations are lax in implementing security safeguards. Although practically all organizations recognize the risks of AI, fewer than two-thirds are doing anything about it, leaving them vulnerable to a host of cybersecurity threats and potentially in violation of increasingly strict regulatory compliance requirements.
If the answer isn’t to stop using AI (which no one will do), it must be to use AI more safely and securely. The MIT study offers one helpful clue on how to go about that. In a fourth session of the study, researchers broke the LLM users into two groups: those who started the essay on their own before turning to ChatGPT for help, known in the study as the Brain-to-LLM group, and those who had ChatGPT work up a first draft before giving it their personal attention, known as the LLM-to-Brain group. The Brain-to-LLM group, which used AI tools to help rewrite an essay they had already drafted, showed higher recall and brain activity, with some areas similar to those of the search engine users. The LLM-to-Brain group, which allowed AI to initiate the essay, exhibited less coordinated neural activity and a bias toward using LLM vocabulary.
A Brain-to-LLM approach may help keep users’ brains a bit sharper, but developers also need the specific knowledge to write software securely and to critically evaluate AI-generated code for errors and security risks. They need to understand AI’s limitations, including its propensity to introduce security flaws such as vulnerabilities to prompt injection attacks.
This requires overhauling enterprise security programs to ensure a human-centric SDLC, in which developers receive effective, flexible, hands-on, and ongoing upskilling as part of an enterprise-wide security-first culture. Developers need to continuously sharpen their skills to stay abreast of rapidly evolving, sophisticated threats, particularly those driven by AI’s prominent role in software development. This protects against, for example, increasingly common prompt injection attacks. But for that protection to work, organizations need a developer-driven initiative to focus on secure design patterns and threat modeling.
Conclusion
When LLMs or agentic agents do the heavy lifting, users become passive bystanders. This can lead, the study’s authors said, to “weakened critical thinking skills, less deep understanding of the materials and less long-term memory formation.” A lower level of cognitive engagement can also lead to reduced decision-making skills.
Organizations cannot afford a lack of critical thinking when it comes to cybersecurity. And because software flaws in highly distributed, cloud-based environments have become the top target of cyberattackers, cybersecurity starts with ensuring secure code, whether it is created by developers, AI assistants or agentic agents. For all of AI’s power, organizations more than ever need highly honed problem-solving and critical thinking skills. And that can’t be outsourced to AI.
The new AI capabilities of SCW Trust Agent provide the deep observability and control you need to confidently manage AI adoption in your SDLC without sacrificing security. Find out more.
Inhaltsverzeichniss
Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.

Secure Code Warrior ist für Ihr Unternehmen da, um Ihnen zu helfen, Code während des gesamten Softwareentwicklungszyklus zu sichern und eine Kultur zu schaffen, in der Cybersicherheit an erster Stelle steht. Ganz gleich, ob Sie AppSec-Manager, Entwickler, CISO oder jemand anderes sind, der sich mit Sicherheit befasst, wir können Ihrem Unternehmen helfen, die mit unsicherem Code verbundenen Risiken zu reduzieren.
Eine Demo buchenHerunterladenRessourcen für den Einstieg
Themen und Inhalte der Securecode-Schulung
Unsere branchenführenden Inhalte werden ständig weiterentwickelt, um der sich ständig ändernden Softwareentwicklungslandschaft unter Berücksichtigung Ihrer Rolle gerecht zu werden. Themen, die alles von KI bis XQuery Injection abdecken und für eine Vielzahl von Rollen angeboten werden, von Architekten und Ingenieuren bis hin zu Produktmanagern und QA. Verschaffen Sie sich einen kleinen Einblick in das Angebot unseres Inhaltskatalogs nach Themen und Rollen.
Threat Modeling with AI: Turning Every Developer into a Threat Modeler
Walk away better equipped to help developers combine threat modeling ideas and techniques with the AI tools they're already using to strengthen security, improve collaboration, and build more resilient software from the start.
Ressourcen für den Einstieg
Cybermon is back: Beat the Boss KI-Missionen jetzt auf Abruf verfügbar
Cybermon 2025 Beat the Boss ist jetzt das ganze Jahr über in SCW verfügbar. Setzt fortschrittliche KI/LLM-Sicherheitsanforderungen ein, um die sichere KI-Entwicklung in einem großen Maßstab zu stärken.
Cyber-Resilienz-Gesetz erklärt: Was das für die Entwicklung von Secure by Design-Software bedeutet
Erfahren Sie, was der EU Cyber Resilience Act (CRA) verlangt, für wen er gilt und wie sich Entwicklungsteams mit sicheren Methoden, der Vorbeugung von Sicherheitslücken und dem Aufbau von Fähigkeiten für Entwickler darauf vorbereiten können.




%20(1).avif)
.avif)
