By Howard A. Schmidt

Over the last decade, the Internet and automated financial systems have rapidly changed the accessibility and productivity of critical systems we depend on every day. As consumers, we purchase goods from distant merchants, renew automobile registration with the state, register for classes and check bank account balances -- all the while placing immense trust in vendors to keep our sensitive personal data secure.
However, recent high-profile security lapses, including the reported theft of over 40 million credit card records from a processing service, have made us all acutely aware that personal information -- names, Social Security/national ID numbers, dates of birth, and credit card numbers -- may not always be adequately protected as it is collected, shared, and stored by merchants, data brokers, government entities, and countless businesses and non-profit organizations.
How secure are these systems? And when these systems fail, what is the notification process?
SURPRISINGLY TOUGH. To find out, we first must learn when lapses occur. Concerned about a perceived lack of breach reporting by companies and universities, California passed SB 1386, which requires businesses to disclose lapses in financial data security to customers under certain circumstances.
While this law and similar legislation working its way through the U.S. Congress (at the time of this writing) will presumably improve transparency and sharpen industries' response to security problems, ultimately the technology underpinning the infrastructure itself -- any IT system that stores or permits access to personal information -- must be fixed, and that will prove a much more difficult problem than simply notifying customers after a breach.
Why is this a difficult task? To date, security strategies have often centered on the acquisition of network-based security products like firewalls and antivirus suites -- the equivalent of building taller walls and bigger moats around private data. Unfortunately, today's cyber criminals have figured out how to bypass these defenses by using the same inroads traveled by legitimate users -- most often Web applications or electronic transfer mechanisms.
THE PROBLEM WITH PATCHES. For example, instead of breaking into a bank's business network directly, a malicious hacker will use, say, the online banking application to send malicious requests in hopes of finding a weakness. Once a vulnerability is found, the hacker exploits it to make the software do the dirty work -- return critical information stored in databases or even execute the hacker's program, now or in the future. It is generally very difficult to identify a hacker posing as a legitimate user performing normal user functions.
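One common form of the attack described above is SQL injection. As a minimal illustration (the customer table and lookup functions below are invented for this sketch, using Python's built-in sqlite3 module), compare a query built by pasting user input into SQL with a parameterized one:

```python
import sqlite3

# Set up a throwaway in-memory database with one "customer" row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, card_number TEXT)")
conn.execute("INSERT INTO customers VALUES ('Alice', '4111-1111-1111-1111')")

def lookup_unsafe(name):
    # Vulnerable: user input is pasted directly into the SQL string.
    query = "SELECT card_number FROM customers WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Safer: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT card_number FROM customers WHERE name = ?", (name,)
    ).fetchall()

# A legitimate request returns one row either way.
print(lookup_unsafe("Alice"))   # [('4111-1111-1111-1111',)]

# A malicious "name" turns the unsafe query's WHERE clause into a
# condition that is always true, dumping every row in the table.
attack = "' OR '1'='1"
print(lookup_unsafe(attack))    # [('4111-1111-1111-1111',)]
print(lookup_safe(attack))      # [] -- no customer has that literal name
```

The point of the sketch is that the malicious request looks like an ordinary lookup to the network; only the application layer can tell the difference, which is why firewalls alone do not stop it.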
One of the key indicators of this came when the former Commander of the then-Joint Task Force for Computer Network Operations announced U.S. Defense Department statistics suggesting that roughly 98% of all successful cyber attacks took advantage of systems that were not fully patched. Just recently, roughly a dozen Zotob worm variants infected unpatched Microsoft Windows 2000 PCs in homes and high-profile businesses alike, leading some security analysts to speculate that the exceptionally fast attack may have been orchestrated by organized criminals.
While the obvious lesson from this data is that consumers and businesses need to update their software more frequently, it also alerts us to the fact that cyber-criminals know where the weaknesses lie: in flawed software coding processes.
IGNORANCE ISN'T BLISS. Technology research firm Gartner recently noted that over 70% of cyber attacks occur at the software layer rather than the network layer. Other industry experts estimate that roughly one in every 20 lines of software code contains a coding error, some of them affecting security -- and with major applications running to millions of lines, hackers have plenty of potential targets.
Why is software so incredibly vulnerable? There are many reasons, and one of them is that many businesses do not understand the need for better quality control and engineering and therefore do not require that as part of software development contracts. This causes businesses to lose a great deal of control over how their systems are created.
Further, most developers don't think like criminals because they aren't criminals; as a result, they are often naively optimistic about how their software will be used and how it can be compromised. This is now being addressed but there is still a lot of training that needs to be done.
SIGNS OF CHANGE. The most critical reason, however, is that few software developers have the training, time, or resources to produce software free of security flaws. There is little time in college or other training dedicated to teaching security techniques and secure coding practices. Even if there were, developers face immense pressure to deliver more product features on a strict timetable and within budget, making it a challenge for them to spend the needed time on security.
Even with the best efforts, finding security lapses in applications is notoriously difficult and time-consuming. Developers who make a concerted effort to harden their software find vast differences in quality among the security products and services available. Until the last few years, security-conscious vendors were forced to hire expensive consultants or pay developers to review software code manually -- an approach that was very costly and not very effective.
Fortunately, signs of change are on the horizon. In addition to legislation like California SB 1386 (and pending federal statutes), many businesses are realizing that security breaches can hurt stock prices and destroy customer relationships. These companies are proactively addressing the problem with the help of new technology innovations.
LEARNING THE LESSONS. Most recently, a number of software-security startups have begun offering automated software-analysis tools backed by large knowledge bases of hacker techniques, helping developers write more secure code. Meanwhile, a new crop of software developers is gaining security fundamentals through books written on the topic, as well as programming classes at universities throughout the world.
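To make the idea concrete, here is a deliberately tiny sketch of the pattern-matching core of such a tool. Real analyzers build data-flow models and carry thousands of rules; the three rules and the sample line below are invented purely for illustration:

```python
import re

# A toy rule set: each rule pairs a regular expression describing a
# risky coding pattern with a human-readable warning.
RULES = [
    (re.compile(r"execute\([^)]*%"), "possible SQL built by string formatting"),
    (re.compile(r"\bgets\s*\("), "unbounded read into a buffer"),
    (re.compile(r"\beval\s*\("), "evaluation of dynamic input"),
]

def scan(source):
    """Return (line_number, warning) pairs for lines matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RULES:
            if pattern.search(line):
                findings.append((lineno, warning))
    return findings

# An invented line of application code that builds SQL via string formatting.
sample = 'cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)'
for lineno, warning in scan(sample):
    print(f"line {lineno}: {warning}")
# line 1: possible SQL built by string formatting
```

The value of the commercial tools is precisely that their knowledge bases encode far more of these hacker techniques than any individual developer could memorize, and that they run automatically on every build.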
Clearly, proposed legislation, media coverage, and high-profile security lapses are bringing greater focus to the fight to secure financial data systems. But winning the fight over disclosure is only half the battle. If businesses and consumers alike are to see real changes in how their private data is protected, it will require an enormous fortifying of the software infrastructure itself.
Howard A. Schmidt's career in corporate security has spanned from White House Special Adviser for Cyberspace Security to Chief Security Officer at both Microsoft and eBay. He currently heads RH Security Consulting and serves on the boards of security firms Fortify Software, Sygate and ELI.