The Revamped PCI Security Standards Council Guidelines: Do They Shift Far Enough Left?

Published Jul 04, 2019
by Pieter Danhieux
Case Study


A version of this article was originally published in Digital Transactions Magazine.

This year, the PCI Security Standards Council released an all-new set of software security guidelines as part of their PCI Software Security Framework. This update aims to bring software security best practice in-line with modern software development. It's a fantastic initiative that acknowledges how this process has changed over time, requiring a rethink of the security standards that were set well before the majority of our lives became rapidly digitized.

This is clear evidence of our industry more closely engaging with the idea of adaptable guidelines - ones that evolve with our changing needs - as well as with the demands of a cybersecurity landscape that could very quickly spiral out of control if we continue to be lax in our secure development processes. Naturally, with the PCI Security Standards Council acting as a governing body within the banking and finance industry (as in, setting the security standards for the software we trust to protect all of our money, credit cards, online transactions and point-of-sale systems), they carry a lot of risk and have huge motivation to reduce it.

While these standards certainly improve upon the previous version and go some way to plug the hole we have in rapid, innovative feature development that also prioritizes security as part of its overall quality assessment, it's a somewhat disappointing reality to find that we still have a long way to go.

No, that's not me giving a "bah, humbug!" to this initiative. The fact is, these new security guidelines simply don't move us far enough to the left.

We're still fixated on testing (and we're testing too late).

One glaring issue I found with the PCI Security Standards Framework is its apparent dependence on testing. Of course, software must still be tested (and the SAST/DAST/IAST process still has its place), but we're still falling into the same trap and expecting a different result.

Who writes line after line of code to create the software we know, love and trust? Software developers.

Who has the unenviable position of testing this code, either with scanning tools or manual code review? AppSec specialists.

What do these specialists continue to discover? The same bugs that have plagued us for decades. Simple stuff that we've known how to fix for years: SQL injection, cross-site scripting, session management weaknesses... it's like Groundhog Day for these guys. They spend their time finding and fixing code violations that developers themselves have had the power to fix for years, except that security has not been made a priority in their process - especially now, in the age of agile development where feature delivery is king, and security is the Grinch that steals creative process and the triumph of project completion.
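SQL injection is a good illustration of just how long-solved these recurring bugs are. The fix has been the same for decades: never build queries by gluing user input into SQL strings; use parameterized queries instead. Here is a minimal sketch in Python (the table, data and payload are invented for illustration):

```python
import sqlite3

# Toy database purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query,
# turning the WHERE clause into a condition that is always true.
vulnerable = conn.execute(
    "SELECT id FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Secure: a parameterized query treats the input strictly as data.
secure = conn.execute(
    "SELECT id FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [(1,)] -- the payload matched every row
print(secure)      # []     -- no user is literally named "alice' OR '1'='1"
```

A one-line difference, known since the 1990s - which is exactly why it is so frustrating that scanners keep rediscovering it after the code is written.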

This is not a negative assessment of either team; developers and AppSec professionals both have extremely important jobs to do, but they continue to get in each other's way. This situation only perpetuates a flawed SDLC, where developers with little security awareness operate in a negative security culture, producing insecure code, which then has to be scanned, assessed and fixed well after it was initially written. AppSec barely has time to fix the truly complex issues, because they're so caught up with the little recurring problems that could still spell disaster for a company if left unchecked.

We are wasting time, money and resources by allowing testing to be the catch-all for security weaknesses in code. And with massive data breaches every other day, this method is obviously not working optimally, if at all. These new standards are still assessing an end-product state (perhaps on the assumption that all developers are security-aware, which is not the case): as in, one that's already built. This is the most expensive and difficult stage to fix flaws; it's like building a fancy, new house, only to bring in a safety team to check for any hazards on the same day you move in. If something is wrong with the foundation, imagine the time, cost and utter headache of getting to that area to even begin addressing the issues. It's often easier and cheaper to simply start again (and what a wholly unsatisfying process that is for everyone who built the first version).

We absolutely must work from the ground up: by getting the development team engaged with security best practice, empowering them with the knowledge to efficiently code securely, in addition to creating and maintaining a positive security culture in every workplace.

Is it a learning curve? Hell yeah, it is. Is it impossible? Definitely not. And it doesn't have to be boring drudgery. Training methods that appeal directly to developers' creative, problem-solving traits have already had immense success in the banking and finance sector, if Russ Wolfe's experience at Capital One is any indication.

We're still searching for the perfect "end-state".

If you look at the updated PCI Security Standards in the context for which they are intended - that your finished, user-ready financial product must follow these best practices for optimum security and safety - then they're absolutely fine. However, in my view, every single company - financial or otherwise - would have the best chance of reaching a software end-state that represents both feature quality and a high standard of security if only they took a step back and realized that it is much more efficient to build it in from the beginning of the cycle.

That perfect end-state? You know, the one that happens when a product is scanned, manually reviewed and comes out perfect and error-free? We are still searching for it. At this point in time, it's a unicorn.

Why is it so elusive? There are a number of factors:

  • Scanning tools are relied upon, yet they are not always effective. False positives are a frustrating, time-wasting by-product of their use, as is the fact that even together, DAST/SAST/PCI scanning simply cannot identify and reveal every possible vulnerability in the code base. Sure, they might give you the all-clear, but are they really looking for everything? An attacker only needs one vulnerability to exploit to access something you think is protected.
  • Developers are continuing to make the same mistakes. There is no distribution of knowledge between developers around security and no "secure code recipes" (good, secure code patterns) that are well-known and documented.
  • There is no emphasis on building a collaborative, positive security culture.
  • Developers need to be empowered with the right tools to bake security into the products they write, without disrupting their creative processes and agile development methodologies.
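The "secure code recipes" mentioned above are simply well-known, documented patterns a developer can reach for by default. As one hypothetical example, a recipe for preventing cross-site scripting is to encode user-supplied text at the point it is embedded in HTML (sketched here with Python's standard library; the function and payload are invented for illustration):

```python
import html

def render_comment(comment: str) -> str:
    # Recipe: encode user-supplied text before embedding it in HTML,
    # so a script payload is displayed as text instead of executed.
    return "<p>" + html.escape(comment) + "</p>"

payload = '<script>alert("xss")</script>'
print(render_comment(payload))
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

When patterns like this are documented, shared and made the path of least resistance, the same vulnerability stops being rediscovered by a scanner months later.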

These guidelines are a powerful verification checklist for the standards of security that software should adhere to, but the best process to get software to that state is up for debate.

We don't have insecure software because we lack scanners; we have insecure software because developers are not provided with easy-to-use, easy-to-understand security tools that guide them.

We're in a time of evolution right now. Software security in general, for many years, was optional. Today, it's essentially mandatory - especially for the keepers of sensitive information (finance, medical, social security... you get the idea).

The PCI Security Standards Council are helping to set the benchmark, but I would love to see them - with all their industry esteem and influence - work towards including practical guidelines for developers, with an emphasis on adequate and positive training and tools. At the moment, there's no pressure on organizations to ensure their development teams are security-aware and compliant, nor do many developers understand the magnitude of those small, easily fixed mistakes when exploited by those who seek to do harm.

Just as is expected with anything worthwhile in life, it really does take a village to truly enact change. And the change in the air is (hopefully) going to sweep us all further to the left.


Author

Pieter Danhieux

Pieter Danhieux is a globally recognized security expert, with over 12 years' experience as a security consultant and eight years as a Principal Instructor for SANS, teaching offensive techniques for targeting and assessing organizations, systems and individuals for security weaknesses. In 2016, he was recognized as one of the Coolest Tech People in Australia (Business Insider), awarded Cyber Security Professional of the Year (AISA - Australian Information Security Association), and holds GSE, CISSP, GCIH, GCFA, GSEC, GPEN, GWAPT and GCIA certifications.
