Attention-grabbing exploits are becoming the norm. We hear about bugs like Heartbleed and IE 0-days almost every week. Understandably, the public is concerned about insecure technology. Yet for those of us who work in information security, this isn’t news at all. We have long known that insecure software is the root cause of most breaches.
Many development teams do not prioritize security. Shipping a product late or with fewer features carries a more immediate, tangible consequence than shipping it insecure. This effect is heightened in agile development by the imperative to demonstrate new features in every iteration. Outside a small number of industries, no regulation requires developers to integrate security beyond a superficial statement of compliance with a generic list such as the OWASP Top 10. It seems we cannot learn from other disciplines such as medicine, aviation, and civil engineering: prevention is more effective than (though not a substitute for) detection.
Of course, there is a clear reason for the sense of duty to safety in medicine and aviation versus the cavalier attitude toward software security: failures in the former can and do cost human lives, whereas failures in the latter usually result only in financial loss and inconvenience. Though bugs in software have occasionally been catastrophic, most development teams and their business leaders simply do not see the risk of insecure software as being that great.
The perception that security is not all that important manifests itself in many ways. One of the clearest examples is the continued ubiquitous use of unmanaged programming languages such as C and C++ in some of the most popular software in the world: browsers, operating systems, and servers. As this Hacker News thread points out, low-level memory flaws are simply too hard to eradicate. Microsoft arguably has more mature software security practices than any other large software vendor. Yet even with that security prowess, subtle but critical weaknesses like use-after-free cannot reliably be prevented. Microsoft isn’t alone, of course: every other major desktop software vendor issues regular security advisories, many of which are the result of memory management issues. Yet despite their many security issues, C and C++ continue to dominate for a few basic reasons: a great deal of code is already written in them, so it is easy to reuse and extend, and applications written in these languages typically perform much better than those in most other languages.
The reality is that performance and code reuse matter more than security to most people. That preference is predicated on the illusion that insecure software isn’t that big a deal: it seemingly results only in financial loss and inconvenience, not harm to human safety.
I’d like to flip that argument on its head for a moment. Even if *most* software does not affect human safety, we are missing out on the full potential of software by collectively accepting that critical systems can be built on insecure languages. Insecurity is stifling innovation. According to an NTT Com Security report, 49% of businesses have stopped a project or business idea because of security worries. The healthcare industry in particular could be delivering even more innovation than it does today if not for security worries, which means the opportunity cost is measured in lives that could have been saved. Framed this way, the belief that software security “isn’t that big of a deal” does affect human safety.
If we were serious about software security, we would be steering innovation away from insecure technology. Perhaps today it is true that one cannot write a Web browser that will gain real market share in anything but C or one of its unmanaged derivatives. Yet I refuse to believe that this will always be the case, or that we cannot make a concerted effort to push development teams to adopt safer language alternatives for ubiquitous and critical software. What if, instead of hosting contests to build self-patching software, government grants incentivized the development and maintenance of software like browsers, operating system components(1), or web servers built on safer language alternatives?
Critics will rightly point out that even software in managed languages has security vulnerabilities. As somebody who spends every day focusing on software security requirements, I can tell you that the task of building a real-world application free of known, preventable software security defects is substantially more difficult in C/C++ than it is in managed language equivalents. Thus, the move away from C/C++ would be only one step on the journey to building trustworthy systems, but it would be a major one.
I am tired of hearing popular refrains such as “you can’t make anything 100% secure” or “it only takes one mistake for a hacker to break your system”. These may be true, but that doesn’t mean we should be content with the status quo. Driving on a highway may never be 100% safe, but it is much safer now than it was 100 years ago. Here’s a goal I propose we strive for: a software development team ought to be able to produce software 99.9% free of known, preventable software security weaknesses (i.e., those covered in the Common Weakness Enumeration [CWE]) using a commercially reasonable amount of money and effort. The measurement of “commercially reasonable” would of course have to be nailed down, but it’s a starting point. We would still see the occasional security exploit, either a CWE flaw that slipped through the cracks or a domain-specific issue that isn’t captured in the CWE. However, if these were actually rare rather than ubiquitous as they are now, public trust in software would start to improve, along with the downstream benefits for our technology-based economy. Fears about information security would slowly dissipate. Eventually, the innovation that was previously stifled would start to unlock, and healthcare innovation in particular would improve and ultimately save more people’s lives.
The software industry is negatively impacting society through its lack of attention to security. Worse, this lack of attention inhibits innovation in other important industries, such as healthcare, that have the potential to raise general well-being and save lives. Most importantly, many severe security faults are almost entirely preventable, some simply by selecting a different technology. We have an imperative to change.
(1) Recognizing that *some* parts of an OS must be written in an unmanaged language, we can still encourage the majority of functionality to be written in memory-safe languages.