The Zero Vulnerability Framework
Delivering risk accountability and remediation responsibility for every vulnerability
Richard Metz, CISSP - Chief Operating Officer
July 19, 2023
Note - In this article, the scope of Vulnerability Management (VM) includes Infrastructure Vulnerabilities -- that is, all third-party software vulnerabilities on compute, network, and peripheral assets, with the notable exclusion of custom-built applications. In addition, we include the most common factors that preclude or inhibit the remediation of vulnerabilities, such as data deficiencies, issues with supporting software (agents, scanners, etc.), and general ITAM deficiencies.
The Zero Vulnerability Framework
In traditional risk-based VM programs, most vulnerabilities are unaccounted for. Further, those vulnerabilities which are targeted for remediation are often "assigned" to groups or individuals who are in no real position to address them. These are the two main problems that the Zero Vulnerability Framework (ZVF) aims to solve. Within ZVF, there is no vulnerability without an accountable risk owner and a responsible risk remediator, and the goal is to assign this ownership with precision.
To be clear, ZVF is not about eliminating every vulnerability. Even in the most pristine and restrictive environments, new vulnerabilities are a near-mathematical certainty, if only because we typically do not learn about a vulnerability until after a fix has already been published.
Four key elements of ZVF:
Risk Management is centered on the attack surface (IT assets – hardware, network devices, software)
Every known & potential vulnerability is accounted for.
ITAM and ITSM ownership groups, which tend to fall under infrastructure and the CTO, should drive vulnerability management, with Cybersecurity governing and Digital Leadership championing.
Responsibility for the remediation of vulnerabilities across the attack surface should be aligned into services, explained further in Service Oriented Vulnerability Management.
Element 1: Risk Management is centered on the attack surface (IT assets – hardware, network devices, software) – not vulnerabilities.
Vulnerabilities grow in proportion to your attack surface and shrink in proportion to how well you manage it. Every new mobile device, PC, domain controller, or application server - even a new installation of Java, MS Office, or WinZip on existing infrastructure - represents more current, or at least future, vulnerabilities in your environment. The risk is compounded when assets are unaccounted for, when software installations are uncontrolled, or when software configurations are non-compliant.
There are some misconceptions about vulnerabilities that impede our ability to manage them effectively.
The language and presentation style used by some people can be misleading in that vulnerabilities are portrayed as randomly appearing aggressive “things” that must be thoroughly investigated, actively pursued, and eliminated.
Once a preselected group of vulnerabilities has been remediated, a false sense of accomplishment may be communicated, in that the job is in some way “completed”.
The stated primary goal is to remediate or mitigate individual “vulnerabilities,” when, in fact, we employ a far simpler set of “actions,” each of which can remediate anywhere from one to many thousands of vulnerabilities.
To summarize, the attack surface in traditional VM can be viewed as the infrastructure estate – both compute and software assets – and the ITSM data and tools that support it. Managing this surface is traditionally the remit of various groups within the CTO organization and, to a lesser extent, application teams. VM is therefore less a cybersecurity effort than an exercise in IT asset (a.k.a. attack surface) hygiene and management. As a result, every current and future vulnerability can be rationalized into a manageable set of areas (i.e., processes or services) with clear, if distributed, risk accountability and remediation responsibility.
Element 2: Every known & potential vulnerability is accounted for.
In a Cybersecurity-owned VM program without close collaboration and support from other stakeholder groups, a vulnerability-level approach tends to be taken. Based on the criticality of the underlying assets (data, applications, compute environments), and the criticality of specific vulnerabilities, Cyber teams may define a risk threshold and distribute the highest risk vulnerabilities throughout the organization for remediation. Everything else is unaccounted-for risk… which, by definition, is involuntarily risk accepted. In the Zero Vulnerability Framework (ZVF), all risk acceptance – for current AND future vulnerabilities – must be intentional.
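A minimal sketch (all data and the threshold are hypothetical) makes the point concrete: threshold-based triage splits findings into an assigned set and a remainder, and unless that remainder is explicitly documented, it is risk accepted by default.

```python
# Hypothetical vulnerability records: (identifier, risk score on a 0-10 scale).
vulns = [("CVE-A", 9.8), ("CVE-B", 7.5), ("CVE-C", 5.0), ("CVE-D", 3.1)]

THRESHOLD = 7.0  # assumed Cyber-defined cutoff for distribution

# Findings at or above the cutoff get distributed for remediation.
assigned = [v for v in vulns if v[1] >= THRESHOLD]

# Everything below the cutoff is silently carried unless accounted for.
implicitly_accepted = [v for v in vulns if v[1] < THRESHOLD]

# Under ZVF, the second list must become an intentional, documented risk
# acceptance, with each entry tied to an accountable risk owner.
```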
By moving to an attack-surface-focused accountability and responsibility structure, every current and future vulnerability can be accounted for. Again, vulnerabilities exist on IT Assets, and assets, by definition, have owners. In many cases, these owners may not be known or defined, which is why ITAM and CMDB data enrichment is a critical component of a VM program. In less established environments, ownership will roll up to the senior levels in the infrastructure organization, and then the work begins to rationalize and specify risk accountability and remediation responsibility with increasing precision.
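The roll-up idea can be sketched in a few lines. Asset and team names here are invented, and the lookup stands in for whatever CMDB/ITAM enrichment is actually in place: when no direct owner is recorded, accountability defaults upward to senior infrastructure leadership rather than disappearing.

```python
# Hypothetical CMDB fragment: asset -> recorded owner (may be missing/None).
asset_owner = {"srv-001": "win-server-team", "srv-002": None}

def resolve_owner(asset: str, default: str = "infrastructure-leadership") -> str:
    """Return the accountable risk owner for an asset. If the CMDB has no
    enriched owner, accountability rolls up to a senior default owner so
    that no asset (and no vulnerability on it) goes unaccounted for."""
    owner = asset_owner.get(asset)
    return owner if owner else default

# srv-001 has an enriched owner; srv-002 rolls up to the senior default.
print(resolve_owner("srv-001"))
print(resolve_owner("srv-002"))
```

The work of "increasing precision" is then simply replacing the default with ever more specific owners as the data improves.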
A simple example of using ZVF to increase precision would be expanding, and more clearly defining, the scope of software your core Windows Server Management Service is responsible for maintaining. In addition to maintaining the Windows OS, the team is now responsible for all browsers installed across the servers within its remit. The business application owners leveraging these servers will certainly have a say in any change to their environment, so simply “assigning” responsibility won’t get it done. Steps that can be taken include:
Collaborate with architecture and application owner leads to determine what belongs in a base image. For this example, let’s say Edge is the only browser necessary.
Remove all non-Edge browsers from the base Windows Server images. This makes the attack surface of each new future server smaller.
Execute a well communicated campaign to uninstall Firefox, Chrome, and other non-Edge browsers. This will immediately eliminate all vulnerabilities residing on any uninstalled browser.
Understand (and allow) exceptions
Roll out in manageable waves
Test a rapid reinstall process if needed
Implement an approval step for any application owner wishing to install a non-Edge browser. Note – this could be as simple as documenting who installs what, so it is known where the software “should” be. This will reduce future attack-surface expansion and, where scanning isn’t available, provide the list of assets that require updates when they are published.
Implement a regular update rhythm for any remaining non-Edge browsers, likely during the same window as OS patching. Again, allowing for exceptions where updates cannot be made for the health of the business application.
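The uninstall-and-govern steps above can be sketched as a simple worklist builder over flattened ITAM data. Asset names, software titles, and the exception record below are all hypothetical; the point is that the campaign, the exceptions, and the patch rhythm all fall out of the same inventory.

```python
# Hypothetical flattened ITAM export: (asset, installed software title).
inventory = [
    ("app-srv-01", "Microsoft Edge"),
    ("app-srv-01", "Google Chrome"),
    ("app-srv-02", "Mozilla Firefox"),
    ("app-srv-03", "Microsoft Edge"),
]

BROWSERS = {"Microsoft Edge", "Google Chrome", "Mozilla Firefox"}
APPROVED = {"Microsoft Edge"}

# Documented, approved exceptions (e.g. a business app that requires Firefox).
exceptions = {("app-srv-02", "Mozilla Firefox")}

# Uninstall worklist: non-approved browsers with no documented exception.
worklist = [
    (asset, sw)
    for asset, sw in inventory
    if sw in BROWSERS and sw not in APPROVED and (asset, sw) not in exceptions
]

# Excepted installs stay, but now form the known list needing an update rhythm.
patch_rhythm = sorted(exceptions)
```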
A bigger takeaway from this example is that it does not require new tooling or technology investment. It does, however, require the increasingly costly investment of human interaction – possibly beyond chat tools and email! We can see that several teams within the infrastructure organization will need to agree on the strategy and collaborate, and that user experience will need to be managed successfully.
Element 3: ITAM and ITSM ownership groups, which tend to fall under infrastructure and the CTO, should drive vulnerability management, with Cybersecurity governing and Digital Leadership Championing.
A quick note – the key to this element is not the “name” of the groups or their exact location within the digital hierarchy. For example, if ITAM and ITSM groups fall under another group – then that overarching organization would be the target ownership group.
If you’re on board with the shift towards attack-surface-based risk management, then Element 3 logically follows: put the ownership where the control lies. How many Cybersecurity VM leads have thrown their hands in the air at the lack of alacrity with which vulnerabilities get remediated? The reason is generally the same across the board – stakeholder misalignment (see fig 1.0).
Infrastructure is already remediating more vulnerabilities than you know.
To be fair… they’re probably creating more vulnerabilities at the same time. Depending on the technology stack, one of the largest contributors of vulnerabilities would be the operating systems of compute devices. Most infrastructure services, if nothing else, tend to have an OS patching process in place, which keeps these vulnerabilities at bay.
At the same time, if new PCs and servers are being released with old and/or unsupported software or unnecessary bloatware, these new assets are unnecessarily inflating your attack surface and vulnerability numbers.
Either way… it’s Infrastructure, not Cyber in control.
The CTO organization is responsible for reducing and controlling the attack surface, and likely contains the groups best suited to maintain ongoing patching and maintenance processes for most of that surface.
The best way to manage vulnerabilities is to prevent them altogether. This is done by reducing the attack surface and keeping future growth under control. Key activities would include golden image rationalization, software removal and rationalization, underused asset decommissioning, software whitelisting & governance, new asset governance, and other IT asset management best practices.
At the same time, work can be done to build ongoing processes to keep the slimmed-down attack surface in good health. Because most vulnerability remediations involve the application of identical solutions delivered by 3rd party software providers (often in the form of patches or bundles), they are excellent candidates for automation. The activity of applying policies, changing configurations, updating, upgrading, installing, or removing software across large numbers of assets is a common infrastructure activity.
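Why remediation is so automatable becomes obvious when findings are grouped by the action that resolves them. A small sketch with invented asset names, CVE labels, and patch identifiers:

```python
from collections import defaultdict

# Hypothetical findings: (asset, vulnerability, remediating action/patch).
findings = [
    ("srv-01", "CVE-1", "os-cumulative-patch"),
    ("srv-01", "CVE-2", "os-cumulative-patch"),
    ("srv-02", "CVE-1", "os-cumulative-patch"),
    ("srv-02", "CVE-3", "browser-version-update"),
]

# Group findings by the action that resolves them - the unit of automatable
# work. Many vulnerabilities collapse into few deployable actions.
actions = defaultdict(set)
for asset, cve, action in findings:
    actions[action].add(asset)

for action, assets in sorted(actions.items()):
    print(f"{action}: deploy to {sorted(assets)}")
```

Here four findings collapse into two deployment jobs, which is exactly the shape of work that existing patching and software-distribution processes already handle.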
But why would Infrastructure want to do this?
Long before Cyber was the large, distinct area of IT that it is today, it was your CTO and infrastructure team bemoaning unaccounted-for assets on the network, users daring to make an unapproved change on their device, and of course, users even thinking about eating or drinking near their computer.
Most infrastructure organizations, in their heart of hearts, would prefer a clean and standardized infrastructure estate. The challenge often comes from stakeholder misalignment or general misunderstanding. For example, golden images can contain software packages required by a mere fraction of the user base. Application teams might have never bothered to test the compatibility of their application with a clean, updated technology stack. The idea of maintaining a list of exceptions (because these truly are needed) is incorrectly considered burdensome.
The Not-So-Mythical Win-Win
Fundamental IT asset hygiene practices can save a great deal of money in the form of recouped licensing costs, reduced hardware costs, and lower support costs. To be certain, any investment required should pass a realistic cost-benefit analysis, and this is where Cyber can really step up to the plate.
Cybersecurity views the world through the lens of risk, which can make it hard to see the value in implementing processes or services. However, calculate the current and future risk reduction of a software rationalization program and compare it against the remediation of a static set of vulnerabilities. Consider, too, the cost savings of greatly improving CMDB / ITAM data so teams stop wasting significant resources chasing shadows.
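A toy comparison (every number below is assumed for illustration) shows why the program-level calculation favors rationalization: a one-off campaign clears more findings up front, but only reducing the attack surface cuts the inflow of future vulnerabilities.

```python
# Assumed starting backlog and monthly inflow of newly published vulnerabilities.
backlog, inflow_per_month, months = 1000, 100, 12

# Option A: one-off campaign clears 400 findings; inflow is unchanged.
a_end = backlog - 400 + inflow_per_month * months

# Option B: rationalization clears only 200 findings up front, but shrinking
# the attack surface cuts future inflow by an assumed 40%.
b_end = backlog - 200 + int(inflow_per_month * 0.6) * months

# Over the year, the smaller-looking fix leaves fewer open vulnerabilities.
print(a_end, b_end)
```

The exact figures will differ in any real environment, but the structure of the argument - stock versus flow - is the same.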
An aligned CISO and CTO with championing from leadership above will have the greatest success in managing vulnerabilities.
Element 4: Responsibility for the remediation of vulnerabilities across the attack surface should be aligned into services, explained further in Service Oriented Vulnerability Management.
Following on from the last point, Cyber will NEVER stop asking for vulnerabilities to be remediated, and due to the nature of VM, vulnerabilities will NEVER stop being found and corrected in virtually any software package. Eventually, it becomes clear that investing in processes and services is the only way to tackle the VM problem.
Service Oriented Vulnerability Management (SOVM) is the execution methodology for ZVF. Tactically, VM remediation is really about the execution of highly repeatable and generally automatable actions, so there’s no reason to convolute what should be science into an art form. Strategically, it simplifies the governance and management of VM into the management of a hierarchy of services, much as we would manage any other area of IT. SOVM is not about getting to zero vulnerabilities, but about finding the optimal balance. This is where risk-based vulnerability management meets SOVM: investment decisions, rather than vulnerability-by-vulnerability decisions, can be made to address enterprise-wide risk.
About The Author(s)
Richard Metz, CISSP - Chief Operating Officer
Richard Metz began his TranSigma journey in 2013, initiating their UK and Ireland operations. He later returned to the US to start TranSigma's Cybersecurity division, focusing on data and process-centric solutions for complex business issues.
Before TranSigma, Richard co-founded a London-based tech and outsourcing consultancy, serving both mid-sized businesses and large enterprises, including FTSE and Fortune 100 companies. He also held various tech and sourcing roles at General Electric in Europe and the US.
Richard holds degrees in Operations & Strategic Management, Mathematics, and Information Systems from Boston College's Carroll School of Management and is CISSP certified.