The most dangerous software errors of 2019: More evidence of history repeating

Published Feb 12, 2020
by Pieter Danhieux
Case Study


This article originally appeared in Information Security Buzz, and was picked up by several other outlets. It has been updated for syndication here.

Towards the end of last year, the amazing community at MITRE published their list of the CWE Top 25 Most Dangerous Software Errors that affected the world in 2019. This list isn't opinion-driven; it is the result of multi-faceted analysis drawing on the work of organizations like NIST, as well as published Common Vulnerabilities and Exposures (CVE) data. To determine the "top" flaws, each is scored based on its severity, exploitability, and prevalence in current software. It's not the kind of list that is going to win any positive accolades, that's for sure.

However, unlike the majority of annual wrap-ups, many of the entrants on this list have appeared before... over and over again. If this were the Billboard Hot 100 chart, it would be like Britney Spears' Baby One More Time and the Backstreet Boys' I Want It That Way appearing every single year since their initial release. And why did I pick those songs? Well, they're roughly twenty years old (feeling ancient yet?), much like some of these dangerous software errors that continue to plague us into 2020 despite their discovery decades ago.

Why are old bugs still so dangerous? Don't we know how to fix them?

Number six on the current MITRE list is CWE-89, better known as SQL injection (SQLi). The SQLi vulnerability was first discovered in 1998, back when many of us were still Asking Jeeves our burning questions instead of Google. A fix was made known soon after, and yet, this remains one of the most-used hacking techniques in 2019. Akamai's State of the Internet report revealed that SQLi was the culprit in two-thirds of all web application attacks.

As far as complexity goes, SQL injection is far from being a genius-level exploit. It's a straightforward fix for a web developer, and we know, without any hesitation, how to prevent this vulnerability from exposing precious data to an attacker... the problem is that for many developers, even today, security is not a priority. This may have been forgivable twenty years ago, but with the sheer volume of software being created today and in the future, it can no longer remain the norm.
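To show just how straightforward that fix is, here is a minimal sketch using Python's built-in sqlite3 module. The table and data are purely illustrative; the point is the difference between concatenating user input into a query string and passing it as a bound parameter.

```python
import sqlite3

# Illustrative in-memory database with two users
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a-secret"), ("bob", "b-secret")])

payload = "alice' OR '1'='1"  # a classic injection payload

# UNSAFE: string concatenation lets the payload rewrite the WHERE clause,
# so the query matches (and leaks) every row in the table
unsafe = "SELECT * FROM users WHERE name = '" + payload + "'"
print(conn.execute(unsafe).fetchall())  # leaks both rows

# SAFE: a parameterized query treats the payload as a literal value;
# no user is literally named "alice' OR '1'='1", so nothing matches
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (payload,)).fetchall())  # []
```

Every mainstream language and framework has an equivalent of that `?` placeholder (prepared statements, bind variables, ORM query builders), which is exactly why the persistence of CWE-89 is a training problem, not a tooling one.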

Developers are operating in a broken system (most of the time).

It's all too easy to sit back and blame developers for delivering "bad" code. The truth is, their priorities differ wildly from those of the security team. Your average development team is told to make beautiful, functional software as fast as possible. Society's insatiable need for software ensures that dev teams are already stretched, and security isn't a primary consideration; after all, isn't that why AppSec specialists exist? Software engineers are accustomed to a somewhat chilly relationship with security - they only hear from them when problems arise, and those problems can hold up production of their hard work.

On the other side of the fence, AppSec specialists are sick of fixing decades-old errors that keep popping up in every scan and manual code review. These specialists are expensive and scarce, and their time is far better spent on complex security flaws instead of squashing well-known bugs over and over again.

There is an unspoken culture of finger-pointing between these teams, but they have (or should have) the same goal: secure software. Developers are operating in an environment that rarely gives them the best chance of success in terms of secure coding; security best practice is rarely taught as part of their tertiary education, and on-the-job training is often far too infrequent, or completely ineffective. There is a distinct lack of emphasis on security awareness and in-depth, relevant education, and the result is the astronomical cost of fixing old bugs in committed code, plus the imminent threat of a reputation-killing data breach.

The human factor, a.k.a. "Why aren't all these tools making our data safer?"

Another issue that appears frequently is that, in place of training, a vast arsenal of security tools is put to the task of finding problems before software is released into the wild. The array of application scanning and protection tools (SAST/DAST/RASP/IAST) can certainly assist in secure software production, but they come with their own problems. Complete reliance on them doesn't guarantee security, because:

  • No single tool can scan for every vulnerability, in every framework, in every use case
  • They can be slow, especially when running in tandem to provide both static and dynamic code analysis
  • False positives continue to be a problem; these often halt production and require unnecessary manual code review to make sense of the alerts
  • They create a false sense of security, with secure coding deprioritized on the assumption that these tools will catch any issues.

The tools certainly will unearth security flaws that can be patched, but will they find everything? A 100% hit rate is impossible to guarantee, and an attacker only needs one door left open to gain entry and really ruin your day.

Thankfully, many organizations are recognizing the human factor at play in software vulnerabilities. Most developers are not adequately trained for secure coding, and their overall security awareness is low. However, they sit at the very beginning of the software development lifecycle, in prime position to stop vulnerabilities from ever making their way into committed code. If they coded securely from the start, they would be the front line of defense against devastating cyberattacks that cost us billions every year.

Developers need to be given the chance to thrive, with training that speaks their language, is relevant to their job and gets them actively excited about security. Bug-free code should be a point of pride, much like building something functionally kick-ass will win you the respect of peers.

A modern security program should be a business priority.

Development teams cannot pull themselves up by their bootstraps and enact positive security awareness across the company. They will need the right tools, knowledge, and support to bake security into the software development process from the very beginning.

Old training methods clearly don't work if MITRE's list is still showcasing so many old security bugs, so try something new. Look for training solutions that are:

  • Hands-on; developers love to "learn by doing", not watching talking heads on videos
  • Relevant; don't make them train in C# if they're using Java every day
  • Engaging; bite-sized learning is easy to digest and allows developers to keep building on previous knowledge
  • Measurable; don't just tick a box and move on. Ensure training is effective and create pathways for improvement
  • Fun; look at how you can build security awareness in addition to supporting a positive security culture, and how this can create a cohesive team environment.

Security should be a front-of-mind priority for everyone in the organization, with the CISO visible and transparent about the efforts at every level to keep our data safer. I mean, who wants to hear the same old song on repeat? It's time to get serious about squashing old bugs for good.


Author

Pieter Danhieux

Pieter Danhieux is a globally recognized security expert, with over 12 years' experience as a security consultant and 8 years as a Principal Instructor for SANS, teaching offensive techniques on how to target and assess organizations, systems, and individuals for security weaknesses. In 2016, he was recognized as one of the Coolest Tech People in Australia (Business Insider) and awarded Cyber Security Professional of the Year (AISA - Australian Information Security Association). He holds GSE, CISSP, GCIH, GCFA, GSEC, GPEN, GWAPT, and GCIA certifications.


