How are developers supposed to write secure code if nobody ever teaches them about why it's important, the consequences of insecure code, and most importantly, how to prevent writing these vulnerabilities in their respective programming frameworks in the first place?
The 2018 Verizon Data Breach Investigations Report is once again a great read that keeps us up to date on cybersecurity, covering current cybercrime trends and incident drivers, along with analysis and insight that can help organizations mature their security programs. This year, Verizon's investigators analyzed more than 53,000 incidents and 2,200-odd breaches, and there are many tangible takeaways about what to watch out for and what not to do, as well as valuable recommendations on where to focus security efforts. The 2018 report feels like it has evolved with the times: it speaks to a wider business audience as the impact of security spreads and it is increasingly recognised as a mainstream business problem.
Among many interesting findings, the 2018 report verifies that most hacks still happen through breaches of web applications (there's even a cool interactive chart that shows this).
Web application attacks consist of any incident in which a web application was the vector of attack. This includes exploits of code-level vulnerabilities in the application as well as thwarting authentication mechanisms. It is notable that the number of breaches in this pattern is reduced due to the filtering of botnet-related attacks on web applications using credentials stolen from customer-owned devices. Use of stolen credentials is still the top variety of hacking in breaches involving web applications, followed by SQLi (more on SQL later...).
One theme that stands out in this year's report is how critical the "human factor" is in the security equation, as both part of the problem and part of the solution. The report deals with both external and internal actors, reporting that errors were at the heart of almost one in five (17%) breaches. Breaches occurred when employees failed to shred confidential information, when they sent an email to the wrong person, and when web servers were misconfigured. The report points out that while none of these were deliberately ill-intentioned, they could all still prove costly.
But there's an often forgotten human factor behind many security breaches: the high frequency with which developers create code containing security flaws. Those flaws become web application vulnerabilities, which in turn result in these incidents and breaches.
Application testing over the past five years has shown little improvement in the number of vulnerabilities found, and the same old flaws keep coming up time and time again. A 2017 Veracode report based on 400,000 application scans shows that applications passed OWASP Top 10 policy only 30% of the time. Astonishingly, SQL injections appeared in almost one in three newly scanned applications over the past five years, including last year. I say astonishing because SQL injections have been around since 1999. The fact that the same flaws, including SQL injections, are consistently found is evidence that this "human factor" problem among developers is not being adequately addressed.
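To make the point concrete, here is a minimal sketch of why SQL injection persists and how simply it is prevented. The table, column names, and payload below are hypothetical, chosen only for illustration; the example uses Python's built-in sqlite3 module so it runs as-is.

```python
import sqlite3

# Hypothetical schema for demonstration purposes only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

# A classic injection payload supplied where a plain username is expected.
user_input = "alice' OR '1'='1"

# VULNERABLE: user input is concatenated straight into the SQL string,
# so the payload rewrites the WHERE clause and every row matches.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(vulnerable)  # [('alice',), ('bob',)] -- both rows leak

# SAFE: a parameterized query treats the input as a literal value,
# so the payload matches no row at all.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # []
```

The fix is one line: let the database driver bind the value instead of building the query with string concatenation. Every mainstream language and framework offers an equivalent of the `?` placeholder, which is exactly why the continued prevalence of this flaw points to a training gap rather than a tooling gap.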
It is at this point that I need to stand up and shout that I am on the developers' side of this argument. How are developers supposed to write secure code if nobody ever teaches them about why it's important, the consequences of insecure code, and most importantly, how to prevent writing these vulnerabilities in their respective programming frameworks in the first place?
That's what we do at Secure Code Warrior, and we are seeing strong evidence that companies who build hands-on secure code training into their developers' daily lives are reducing the number of web application vulnerabilities created. For developers to write secure code, they need regular access to hands-on learning that actively engages them to build their secure coding skills. They need to learn about recently identified vulnerabilities, in real code, and specifically in their own languages and frameworks. This learning experience should help them understand how to locate, identify and fix known vulnerabilities. Developers also need a quality toolset in their process that makes security easy, does not slow them down and guides them in real time about good and bad coding patterns.
This is how we can make a tangible and positive difference to the number of web application breaches.
I am in violent agreement with Verizon that there is a need to increase security awareness training company-wide. My P.S. on this to CIOs and CISOs is "don't forget your developers!". These architects of your modern businesses can be one significant "human factor" who routinely generate access points for hackers, or they can be your first line of defense, your security heroes.
Effective security upskilling for developers could make a real difference to the outcomes reported by Verizon in future reports. It would be nice to see the 2019 report reflect developer security training as a key risk reduction strategy that companies can take. I'm an optimist, but I'd bet my house that if companies got their developers to learn how to avoid creating injection flaws, the number of web application vulnerabilities in this report would drop significantly.
Take a look at our platform in action to see how developers can upskill fast in an ideal, gamified training environment: