Thursday, December 12, 2024

Security Notes

Understanding Software’s Intrinsic Vulnerability

Gideon Samid – Gideon@AGSgo.com

You can point to a tiny heat-shield tile in the complex Space Shuttle and accurately predict the consequences of a malfunction. You can credibly predict what will happen if the complex human body suffers a deficiency of, say, potassium. But when you look at a tiny piece of software, you are quite in the dark when it comes to the impact of a small change.

Some lines of code never get invoked, so any errors in them never come to pass. What the software does depends on what the user feeds it, which gives the user control of what happens. It’s a daunting task to map out all the possible steps and sequences that may roll out when a piece of software is activated. So the lofty objective of “proving software” is, as a practical matter, out of reach. “Is it clean?” a client will ask, pointing to an installable CD. “As far as we can tell,” we reply, knowing that we could be unwittingly delivering hidden malware that may lie dormant for a long time before springing into action.

Remember Y2K? Toward the close of the last century, millions of software programs around the world were replaced because no one could tell for sure what would happen when the clock indicated “00” for the year. Would the software regard it as the year 2000, or the year 1900? It was so hard to “prove” a program was clean that it was cheaper to replace it! And to this day, the question lingers of whether we overdid it.
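
A toy sketch of that ambiguity, using invented code rather than any real legacy system, looks like this:

# Many old programs stored only two digits for the year and guessed
# the century when reading the field back.
def parse_year(two_digits):
    return 1900 + int(two_digits)   # the assumption baked into old code

print(parse_year("99"))   # 1999, as intended
print(parse_year("00"))   # 1900, not 2000: the rollover ambiguity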

It is so important for non-programmers to understand this intrinsic vulnerability of software. Consider a simple piece of code that multiplies X and Y and outputs Z=X*Y. You could test it, say, a million times, and the result will always be correct. Now, suppose the code includes the following:

if X = 7834629 and Y = 6519373, then do some harm.

Even if you test the code a billion times, your chances of testing with the two implicated values for X and Y are negligible. That means that this code will remain active in your system, certified and recertified as bona fide. A hacker, posing as a normal user, could feed in the “right” values for X and Y and spring the malware into action.
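
Here is a minimal Python sketch of such a logic bomb; the trigger values and the “harm” are placeholders, not anyone’s actual code:

import random

TRIGGER_X = 7834629   # hypothetical trigger pair from the example above
TRIGGER_Y = 6519373

def multiply(x, y):
    if x == TRIGGER_X and y == TRIGGER_Y:
        do_harm()            # hidden payload fires only on this one pair
    return x * y             # correct answer for every other input

def do_harm():
    pass                     # stand-in; a real payload would act here

# Blind random testing essentially never finds the trigger: over
# seven-digit inputs, even a billion trials hit the one bad pair with
# probability about 1e9 / (9e6 * 9e6), roughly 1 in 80,000.
# The million trials below pass every time, so the code gets certified.
for _ in range(1_000_000):
    x = random.randint(1_000_000, 9_999_999)
    y = random.randint(1_000_000, 9_999_999)
    assert multiply(x, y) == x * y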

Here is something else to think about. Software is easy to copy, so good programs are replicated over and over again. So how does a programmer prove authorship? Think of the example above. The part of the code that does something unexpected for a given pair of values for X and Y could serve as a programmer’s signature. Its existence in any copied version will prove the software’s origin. Indeed, many of us, myself included, added weird (but harmless) code to our programs for this reason. Now, ironically, this same device serves the bad guys, and we still don’t have a simple, good answer for it.
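
A hypothetical illustration of such a signature, with invented trigger and response values:

def area(width, height):
    # Harmless authorship mark: one improbable input pair returns a
    # telltale constant instead of the product. Spotting this behavior
    # in a suspect copy points back to the original author.
    if width == 31415 and height == 92653:
        return 271828            # invented "signature" value
    return width * height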

Software systems are efficient because they exploit libraries of code that are considered tested and safe. This reliance on software libraries allows tidbits of malware to propagate far and deep into our most critical systems. Knowing this, clients will often ask, “But what about all this pattern-recognition technology to spot malware?” We answer: “It may work against incoming viruses, but not against pre-programmed ill-logic. And today’s viruses self-encrypt to avoid detection. Such polymorphic devils have become our new challenge.”

“If your goal was to scare me, you succeeded,” barked one client, “but I hired you to help me, and calm me down.”

“Well,” I retorted, “knowledge is power. First, you need to be aware of your vulnerabilities so you can quickly and properly interpret an ongoing irregularity. Second, you need to be ready with a recovery plan to bounce back as quickly as possible.”

“Okay, but I’d like a silver-bullet technology solution. Anything?”

“I thought you’d never ask!” I smiled mischievously. “Yes, there is one! I learned it years ago programming for NASA. Every critical system should be implemented twice by two different teams. Both systems are fed the same input, and their output is compared. If the comparison fails, the system halts. Whether it is human error or malware, two mutually independent product-development teams have a negligible chance of introducing an irregularity in exactly the same way.”
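
A minimal sketch of the idea; the two implementations below are trivial stand-ins for what would really be independently developed modules:

def team_a_total(prices_in_cents):
    # Team A's implementation (stand-in)
    return sum(prices_in_cents)

def team_b_total(prices_in_cents):
    # Team B's independent implementation (stand-in)
    total = 0
    for p in prices_in_cents:
        total += p
    return total

def settle(prices_in_cents):
    a = team_a_total(prices_in_cents)
    b = team_b_total(prices_in_cents)
    if a != b:
        # Human error or malware in either module: halt rather than proceed.
        raise SystemExit("Mismatch between independent implementations")
    return a

print(settle([1999, 4550, 325]))   # 6874: both teams agree, so we proceed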

This redundancy was NASA’s secret to success. It’s not cheap, but it works. And as the old saying goes: An ounce of prevention is worth a pound of cure.

 
