Thursday, March 28, 2024

How To (Really) Code Secure Software

The new PCI software standards represent a big improvement for security, but we’re still waiting for the quantum leap we need. Here’s why.

In January, the PCI Security Standards Council released an all-new set of software security guidelines as part of its PCI Software Security Framework. This update aims to bring software security best practices in line with modern software development. It’s a fantastic initiative that acknowledges how this process has changed over time, requiring a rethink of security standards that were set well before the majority of our lives became rapidly digitized.

This is clear evidence of our industry more closely engaging with the idea of adaptable guidelines—ones that evolve with our changing needs—as well as with the demands of a cybersecurity landscape that could very quickly spiral out of control if we continue to be lax in our secure development processes.

Naturally, with the PCI Security Standards Council acting as a governing body within the banking and finance industry (setting the security standards for the software in which we place our trust to protect all of our money, credit cards, and transactions online and at the point of sale), it confronts a lot of risk and has huge motivation to reduce it.

These standards certainly improve upon the previous version and go some way toward plugging the hole we have with rapid, innovative feature development that also prioritizes security as part of the overall quality assessment. But it’s a somewhat disappointing reality to find that we still have a long way to go.

No, that’s not me giving a “bah, humbug!” to this initiative. The fact is, these new security guidelines simply don’t move us far enough to the left. Here’s a summary of what I mean:

We’re still fixated on testing (and we’re testing too late).

One glaring issue I found with the PCI Software Security Framework is its apparent dependence on testing. Of course, software must still be tested, but we’re still falling into the same trap and expecting a different result.

Who writes line after line of code to create the software we know, love, and trust? Software developers.

Who fills the unenviable role of testing this code, either with scanning tools or manual code review? AppSec specialists.

What do these specialists continue to discover? The same bugs that have plagued us for decades. Simple stuff that we’ve known how to fix for years: SQL injection, cross-site scripting, session management weaknesses … it’s like Groundhog Day for these guys.
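To see how simple this "simple stuff" really is, here is a minimal sketch (my own illustration, not from the PCI framework) of the classic SQL injection mistake and its decades-old fix, using Python's standard sqlite3 module with a hypothetical in-memory users table:

```python
import sqlite3

# Illustrative setup: an in-memory database with a single user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_vulnerable(name):
    # BAD: user input is concatenated straight into the SQL string,
    # so input like "' OR '1'='1" rewrites the query itself.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # GOOD: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # returns every row: [('alice',)]
print(find_user_safe(payload))        # returns no rows: []
```

The fix is a one-line change that developers have had available for decades, which is exactly why finding it over and over in late-stage testing is such a waste of AppSec's time.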

They spend their time finding and fixing code violations that developers themselves have had the power to fix for years, except that security has not been made a priority in their process. That’s especially true now, in the age of agile development, where feature delivery is king and security is the Grinch that steals the creative process and dampens the triumph of project completion.

This is not a negative assessment of either team. Developers and AppSec professionals both have extremely important jobs to do, but they continue to get in each other’s way. This situation only perpetuates a flawed system-development life cycle, where developers with little security awareness operate in a negative security culture, producing insecure code, which then has to be scanned, assessed, and fixed well after it was initially written.

AppSec barely has time to fix the truly complex issues, because they’re so caught up with the little recurring problems that could still spell disaster for a company if left unchecked.

We are wasting time, money, and resources by allowing testing to be the catch-all for security weaknesses in code. And with massive data breaches every other day, this method is obviously not working optimally, if at all.

These new standards still assess an end-product state, one that’s already built (perhaps on the assumption that all developers are security-aware, which is not the case). This is the most expensive and difficult stage at which to fix flaws.

It’s like building a fancy new house, only to bring in a safety team to check for any hazards on the same day you move in. If something is wrong with the foundation, imagine the time, cost, and utter headache of getting to that area to even begin addressing the issues. It’s often easier and cheaper to simply start again (and what a wholly unsatisfying process that is for everyone who built the first version).

We absolutely must work from the ground up by getting the development team engaged with security best practice, empowering them with the knowledge to efficiently code securely, in addition to creating and maintaining a positive security culture in every workplace.

Is it a learning curve? Hell yeah, it is. Is it impossible? Definitely not. And it doesn’t have to be drudgery. Training methods that appeal directly to developers’ creative, problem-solving traits have already had immense success in the banking and finance sector, if Russ Wolfe’s experience at Capital One is any indication.

We’re still searching for the perfect “end-state.”

If you look at the updated PCI security standards in the context for which they are intended, namely that your finished, user-ready financial product must follow these best practices for optimum security and safety, then they’re absolutely fine.

However, in my view, every single company, financial or otherwise, would have the best chance of reaching a software end-state that is representative of both feature quality and high-standard security if only it took a step back and realized that it is much more efficient to do this from the beginning of the cycle.

That perfect end-state? You know, the one that happens when a product is scanned and manually reviewed and comes out perfect and error-free? We are still searching for it. It’s a unicorn.

Why is it so elusive? There are a number of factors:

– Scanning tools are relied upon, yet they are not always effective. False positives are a frustrating, time-wasting byproduct of their use, as is the fact that even together, dynamic application security testing, static application security testing, and PCI scanning simply cannot identify and reveal every possible vulnerability in the code base. Sure, they might give you the all-clear, but are they really looking for everything? An attacker needs only one exploitable vulnerability to access something you think is protected.

– Developers are continuing to make the same mistakes. There is no distribution of knowledge between developers around security and no “secure code recipes” (good, secure code patterns) that are well-known and documented.

– There is no emphasis on building a collaborative, positive security culture.

– Developers need to be empowered with the right tools to bake security into the products they write, without disrupting their creative processes and agile development methodologies.
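As a concrete illustration of what a documented "secure code recipe" could look like, here is a minimal sketch (my own example, not drawn from any published recipe collection) of a well-known pattern for avoiding cross-site scripting: always escape user-supplied text before embedding it in HTML, shown with Python's standard html module:

```python
import html

def render_greeting(user_supplied_name: str) -> str:
    # Recipe: escape all user-supplied text before placing it in HTML,
    # so markup such as <script> tags is rendered inert as plain text.
    return "<p>Hello, " + html.escape(user_supplied_name) + "!</p>"

print(render_greeting("<script>alert(1)</script>"))
# → <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>
```

A recipe this short, written down once and shared across a team, is exactly the kind of knowledge distribution the bullet points above argue is missing.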

These guidelines are a powerful verification checklist for the standards of security that software should adhere to, but the best process to get software to that state is up for debate.

We don’t have insecure software because we lack scanners. We have insecure software because developers are not provided with easy-to-use, easy-to-understand security tools that guide them.

We’re in a time of evolution right now. Software security in general, for many years, was optional. Today, it’s essentially mandatory, especially for the keepers of sensitive information.

The PCI Security Standards Council is helping to set the benchmark, but I would love to see it, with all its industry esteem and influence, work towards including practical guidelines for developers, with an emphasis on adequate and positive training and tools.

At the moment, there’s no pressure on organizations to ensure their development teams are security-aware and compliant, nor do many developers understand the magnitude of those small, easily fixed mistakes when exploited by those who seek to do harm.

Just as it is with anything worthwhile in life, it really does take a village to truly enact change. And the change in the air is (we hope) going to sweep us all further toward truly secure software.

—Pieter Danhieux is cofounder and chief executive of Secure Code Warrior Pty Ltd.


Digital Transactions