Can software security requirements yield a faster time to market?
On the surface, the answer to this question is clearly no. Development teams that build more features and spend less time on non-functional concerns will ship software faster. This comes with a host of short-term business benefits, including first-mover advantage, seemingly lower development costs, and faster feedback loops for iterative development. It stands to reason that people who decide on software development schedules are more likely to favor spending as little as possible on non-functional areas like security.
This reasoning is flawed.
Applications that store, process or transmit credit card data, protected health information, financial data, various kinds of government and/or defense data, and data in other regulated industries are often required to undergo a certain level of due diligence prior to release into production. The simplest way to meet these requirements is to perform a manual or automated code review or penetration test at the end of the development cycle, which is what most organizations end up doing. If you are mandated to fix the resulting vulnerabilities prior to deploying to production, you are spending anywhere from 90 to over 400 times the cost of having prevented that same defect with adequate requirements. That additional cost generally manifests itself in one of two ways: cutting back on other features in favor of fixing defects, or delaying the release. Bottom line: ignoring security to develop faster works against the goal of faster time to market as soon as security becomes mandatory. You can achieve faster time to market by agreeing on the right kind of security requirements according to your organization's risk appetite.
Most applications aren't specifically mandated by regulation to employ application security. Yet even these applications may not be off the hook. Recent rulings by the Federal Trade Commission (FTC) may leave you liable if you do not adequately protect your customers' data. Moreover, internal auditors assessing operational risk may mandate security testing as part of the internal controls program based on standards like ISO 27001. Even in the absence of mandatory testing, simply ignoring application security can cost you millions in the event of a breach. With web application insecurity being a leading cause of these breaches, many organizations employ application security practices as a form of insurance. These organizations have a choice once they detect a vulnerability: fix it now and risk a delay in the release date, or deploy into production without a fix. Many opt for the latter, but do so without a conscious cost/benefit analysis of building user-facing features vs. fixing a potential security vulnerability. The business benefit of working on a feature is often incremental revenue gains or operational efficiency, whereas the risk of not fixing a serious security vulnerability is not only the potentially massive direct cost of a breach but also the brand liability of having inadequately protected customer data, and potential increased scrutiny by regulators like the FTC. Thus, over time, organizations often favor fixing security vulnerabilities prior to releasing into production, and that inevitably impairs the coveted time to market.
Software security requirements allow you to decide ahead of time which risks you are willing to take, and to prevent the ones you are unwilling to accept. This is true even for legacy applications that undergo changes or have to keep up with emerging security weaknesses. Addressing security requirements while building software is substantially faster than fixing security vulnerabilities later, and since so many organizations end up mandating fixes for security defects, preventing those defects up front yields faster time to market.