Poor coding patterns can lead to big security problems… so why do we encourage them?

Matias Madou, Ph.D.
Published Nov 03, 2022

A version of this article appeared in DZone. It has been updated and syndicated here.

For what feels like an eternity at this point, we’ve discussed “shifting left” in the SDLC, taking into account security best practices from the start of software development. DevSecOps was a great leap forward, in no small part because of the emphasis on shared responsibility for security, and the power of a security-aware developer to thwart common vulnerabilities as they write code. 

We have also known - again, for eons - that the type of secure code training chosen to engage and upskill developers makes all the difference. Low-effort solutions motivated solely by regulatory compliance do not build up the bright security minds of the future, and most security awareness professionals have worked that out. Dynamic, contextually relevant learning is best, but it’s critical that the nuances within are understood. 

If we’re going to have a fighting chance against threat actors - and they always have a head start on an organization - developers need a holistic training environment, with layered learning that continually builds skills steeped in best practices.

Developer-driven defensive security measures are not an automatic win.

Our ethos places the developer at the center of a preventative security strategy, from the code level up. That’s a given, and security-skilled developers provide the easiest path to thwarting the common security bugs that stem from poor coding patterns (Log4Shell being one recent, devastating example).

However, the defensive techniques we can employ to upskill developers do vary, even if they rightly sit in the same training bucket. 

For example, imagine you were taught how to bake a cake using only directions about what not to do. “Don’t overbake it” and “don’t forget the eggs” leave a lot open to interpretation, and huge potential for mistakes that will produce an end result fit for Nailed It!. The same is true of defensive security education; what not to do is a very limited part of the conversation, and offers no practical advice on how to truly act with a defensive mindset. You can tell developers, “don’t misconfigure that API”, but with no understanding of what constitutes correct, secure configuration, there is a lot of room for error.
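To ground the API example, here is a minimal sketch in ASP.NET Core contrasting a misconfigured, wide-open CORS policy with a correctly restricted one. The policy names and the allowed origin are illustrative assumptions, not taken from any real project:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
{
    // Misconfigured: any website may call this API from a browser.
    options.AddPolicy("WideOpen", policy =>
        policy.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod());

    // Correctly configured: only the known front end, and only the verbs
    // and headers it needs. "https://app.example.com" is a placeholder.
    options.AddPolicy("Restricted", policy =>
        policy.WithOrigins("https://app.example.com")
              .WithMethods("GET")
              .WithHeaders("Content-Type"));
});

var app = builder.Build();
app.UseCors("Restricted"); // knowing what "correct" looks like is the whole point
app.MapGet("/alerts", () => Results.Ok());
app.Run();

Knowing that the restricted policy exists, and why each constraint is there, is exactly the positive knowledge that “don’t misconfigure that API” fails to convey.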

Developers won’t have a positive impact on vulnerability reduction without a foundational understanding of how the vulnerabilities work, why they are dangerous, what patterns cause them, and what design or coding patterns fix them in a context that makes sense in their world. A scaffolded approach allows layers of knowledge to give a full picture of what it means to code securely, defend a codebase, and stand up as a security-aware developer. And yes, part of that layered learning should be dedicated to offense, and understanding the mindset of an attacker; this is critical to hone lateral thinking skills, which are invaluable in threat modeling and defensive strategy. 

Reinforcing poor coding patterns is a pitfall we can’t ignore.

An unfortunate reality of some developer learning methods is that the “defensive” part - even when the training is structured around offensive techniques - can reinforce bad habits, even while technically validating that the code is secure. 

Production of high-quality code should be the baseline in all software development, but the definition of “quality” still appears to be up for debate. The reality is that insecure code cannot be viewed as quality code, even if it is otherwise functional and beautiful. The kicker is that secure code isn’t inherently high quality, either. In other words, poor coding patterns can fix a security problem, but in doing so, introduce another one, or potentially break the software entirely. 

Let’s take a look at an example of a poor-quality fix for broken authentication, alongside the most secure, best-practice version:

using System.Collections.Generic;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace BadFixesAPI.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class AlertsController : ControllerBase
    {
        private readonly DatabaseContext context = new DatabaseContext();

        // Does not ensure that the user is authenticated.
        // Note: each action needs its own route template and name;
        // duplicate attribute route names are rejected at startup.
        [HttpGet("insecure", Name = "GetAlertsInsecure")]
        public IEnumerable<Alert> Get()
        {
            return context.GetAlerts();
        }

        // Ensures that the user is authenticated, but does not check any roles.
        [HttpGet("bad-fix", Name = "GetAlertsBadFix")]
        [Authorize]
        public IEnumerable<Alert> GetBadFix()
        {
            return context.GetAlerts();
        }

        // Ensures that the user is authenticated, AND that they have the "Administrator" role.
        [HttpGet("good-fix", Name = "GetAlertsGoodFix")]
        [Authorize(Roles = "Administrator")]
        public IEnumerable<Alert> GetGoodFix()
        {
            return context.GetAlerts();
        }
    }
}
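The snippet assumes an Alert model and a DatabaseContext exposing a GetAlerts() method; neither appears in the original, so the following stubs are purely illustrative and exist only to make the example self-contained:

using System.Collections.Generic;

namespace BadFixesAPI.Controllers
{
    // Hypothetical stand-ins for the types the controller references.
    public class Alert
    {
        public int Id { get; set; }
        public string Message { get; set; } = string.Empty;
    }

    public class DatabaseContext
    {
        // A real implementation would query a data store.
        public IEnumerable<Alert> GetAlerts() => new List<Alert>();
    }
}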

In the first snippet, there is no check to verify that the user is authenticated, which is about as unsafe as it gets. The second, while better in that it performs an authentication check, never inspects the user’s assigned roles or whether their permissions are sufficient for the information requested. The third checks both that the user is authenticated and that they hold the “Administrator” role. In an era where least-privilege access control should be the norm, it is critical that roles are set up and checked so that information is only accessible on a need-to-know basis. 
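On that last point, role checks only work if authentication and roles are actually wired up. A minimal sketch of that setup in ASP.NET Core might look like the following; the “AdminsOnly” policy name is an illustrative assumption, and the JWT bearer scheme stands in for whatever authentication the application really uses:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Assumes the Microsoft.AspNetCore.Authentication.JwtBearer package;
// any configured authentication scheme would do here.
builder.Services.AddAuthentication("Bearer").AddJwtBearer();

// Centralize the least-privilege rule as a named policy.
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("AdminsOnly", policy => policy.RequireRole("Administrator"));
});

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();
app.Run();

With this in place, [Authorize(Policy = "AdminsOnly")] on an action is equivalent to the Roles check shown above, but keeps the rule in one named place instead of scattering role strings across controllers.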

The highest priority for developers is to build features, and while security is not intentionally on the back burner, they don’t necessarily have the skills to avoid the poor coding patterns that lead to security bugs - and the benchmark for a good engineer rarely includes secure coding prowess. If the features are awesome enough, we indirectly encourage those bad habits, and it’s this mindset that has to change. The problem is that the way some learning pathways encourage hands-on code remediation can also reinforce code that is secure, but of substandard quality. A binary “yes, this is secure / no, this is not secure” assessment, rather than a deeper look at whether the fix truly resolves the bug while maintaining the integrity of the software, lets the devils in the detail go unnoticed. 
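As a hypothetical illustration of “secure, but substandard” - this variant is not from the original article - the following hand-rolled alternative to GetGoodFix() does deny unauthorized access, so a binary secure/insecure assessment would pass it, yet its quality problems survive that check:

// Hypothetical "secure but poor quality" fix for the same controller.
[HttpGet("hand-rolled", Name = "GetAlertsHandRolled")]
public IEnumerable<Alert> GetHandRolled()
{
    try
    {
        // Re-implements by hand what [Authorize(Roles = "Administrator")] already does.
        if (User.Identity == null
            || !User.Identity.IsAuthenticated
            || !User.IsInRole("Administrator"))
        {
            // Returns an empty list instead of 401/403, hiding the failure from callers.
            return new List<Alert>();
        }
        return context.GetAlerts();
    }
    catch
    {
        // Swallows every error, masking real bugs as "no alerts".
        return new List<Alert>();
    }
}

A binary check sees only that unauthorized users get no data; it misses the wrong status codes, the duplicated authorization logic, and the silent exception handling that will make the next bug invisible.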

Without taking developers through the entire process for a complete view of secure coding, this approach perpetuates the same issues it’s trying to solve. Imagine if we all got our licenses based solely on our ability to drive a vehicle to a destination; a pass mark even though we ran red lights, drove through a hedge, and narrowly missed a pedestrian crossing the street to arrive there. We completed the goal, but the journey we took to get there matters most. 

Developers need to be enabled to care more about creating secure software.

The modern developer has to keep a lot of plates spinning, and it’s no surprise they find security training a bore, especially when it’s not implemented with their workday in mind, and takes them away from their deadlines and priorities. It’s also completely unfair to change their KPIs to include an emphasis on secure coding, when they don’t have the skills built up from regular, right-fit learning opportunities and supplementary tooling. However, the importance of secure software development cannot be overstated, and getting developers on-side with this is crucial. 

As a former developer, I can say that generally, we want to do a great job, and being seen as a cut above others in terms of quality output is very motivating. Incentivizing developers to engage with continuous security skill-building is a no-brainer, and they should be rewarded for recognizing the importance of code-level security. Security champion programs, bug bounties, and hackathons can be great opportunities to build a positive security culture, and those who roll their sleeves up and get involved should get the loot.


Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.

Author
Matias Madou, Ph.D.

Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, Black Hat, and DefCon.

Matias is a researcher and developer with more than 15 years of hands-on software security experience. He has developed solutions for companies such as Fortify Software and his own company, Sensei Security. Over his career, Matias has led multiple application security research projects that have led to commercial products, and he holds over 10 patents. When he is away from his desk, Matias has served as an instructor for advanced application security training courses and regularly speaks at global conferences including RSA Conference, Black Hat, DefCon, BSIMM, OWASP AppSec, and BruCon.

Matias holds a Ph.D. in Computer Engineering from Ghent University, where he studied application security through program obfuscation to hide the inner workings of an application.
