Vulnerability Management: Top 7 Takeaways

In this article, Rich Metz reviews seven key takeaways for running a vulnerability management program.

Authored By: 

Richard Metz, CISSP - Chief Operating Officer

March 10, 2024

This article provides a digest of the most important factors to consider when managing a vulnerability management (VM) program.  A robust initiative can not only significantly reduce risk, but also deliver benefits across the IT organization.  These takeaways address beliefs around VM that are often misunderstood or are complete myths.  Buckle up.

 

1.     Managing vulnerabilities is about robust ITSM – NOT cybersecurity.

 

To ensure that we’re on the same page - vulnerability management addresses weaknesses in third-party hardware and software, and in the tools and data required to robustly manage those IT assets.  For a more detailed definition, please read here.


Every vulnerability belongs to a single instance of hardware or software – e.g. Adobe Reader missing patches on PC 0001, Java end of life on Server 0002, unable to locate PC 0003, no ownership information on Server 0004, malfunctioning management agent on Server 0005, end of life OS on PC 0006.



 

In nearly every case, the vulnerability resolution maps directly back to an ITSM process – patching, IT Asset Management, CMDB completeness/accuracy, server management teams, etc.  Vulnerabilities are NOT viruses or parasites that randomly “appear”, and in more than 99% of cases, scanners do not even seek vulnerabilities until AFTER the resolution (patches, upgrades, configuration changes, etc.) has been published.  Hence, there is little value in non-security companies “analyzing” vulnerabilities; the focus should be on improving the ITSM processes required to deploy the necessary resolutions, and on addressing point 2.
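
As an illustration of that mapping, here is a minimal sketch in Python that routes scanner findings to the ITSM process that owns the fix. The category and process names are hypothetical examples, not a prescribed taxonomy.

```python
# Illustrative sketch: route vulnerability findings to the ITSM process that
# actually owns the resolution. Category and process names are hypothetical.

FINDING_TO_ITSM_PROCESS = {
    "missing_patch":        "Patch Management",
    "end_of_life_software": "Software Lifecycle / Upgrade Management",
    "end_of_life_os":       "OS Lifecycle / Server Management",
    "asset_not_located":    "IT Asset Management",
    "missing_ownership":    "CMDB Data Quality",
    "broken_mgmt_agent":    "Endpoint / Server Management Tooling",
}

def route_finding(asset_id: str, category: str) -> str:
    """Return the ITSM process responsible for resolving this finding."""
    process = FINDING_TO_ITSM_PROCESS.get(category, "Triage / Unclassified")
    return f"{asset_id}: {category} -> {process}"

if __name__ == "__main__":
    print(route_finding("PC0001", "missing_patch"))
    print(route_finding("Server0004", "missing_ownership"))
```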

 

2.     VM risk is proportional to the size and diversity of the infrastructure estate, or “Attack Surface”.

 

There are two basic levers IT teams have at their disposal to throttle vulnerability risk: ITSM Process Rigor (point 1), and Attack Surface Minimization (this point).


Every hardware asset and software package removed from a business’ ecosystem delivers a step change improvement in both current and future vulnerabilities.  Over time, more vulnerabilities are found, more solutions are published, users make bad installation or configuration choices, and so on.  Vulnerability Management will never be “done”.


Case in point: a Java installation on a server that has not been patched in 10 years will carry around 500 vulnerabilities (at the CVE level).  Further, every additional year without patching will generate another 25 to 50 vulnerabilities.  This is on one machine.  If we are reasonably confident that Java is not needed on that particular hardware (or virtual/cloud) asset, removing it cleans up all existing vulnerabilities and prevents the acquisition of any future ones.
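
Using the rough figures above (about 500 existing CVEs plus 25 to 50 new ones per year), a quick back-of-the-envelope projection makes the “remove it if you don’t need it” case plain:

```python
# Back-of-the-envelope projection using the rough figures from the text:
# ~500 existing CVEs on a 10-year-unpatched Java install, plus 25-50 new per year.

EXISTING_CVES = 500
NEW_PER_YEAR = (25, 50)   # low / high estimate

def projected_cves(years_from_now: int) -> tuple[int, int]:
    """Range of CVEs expected if the installation stays in place, unpatched."""
    low = EXISTING_CVES + NEW_PER_YEAR[0] * years_from_now
    high = EXISTING_CVES + NEW_PER_YEAR[1] * years_from_now
    return low, high

for years in (0, 1, 3, 5):
    low, high = projected_cves(years)
    print(f"In {years} year(s): {low}-{high} CVEs if Java stays; 0 if it is removed")
```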


Therefore, the most effective approach to VM is to eliminate and prevent unnecessary additions to the attack surface, in the form of:

-        Minimizing standard software loads on new servers and PCs – if 5% or fewer owners/users require a certain software package, either create separate core loads, or enable those people to request it after the initial install.  This way its dissemination is minimized, and all additional installations are known and can be managed more easily.

-        Enforcing software standards and requiring approvals (or at least auditing/tracking) for any additional software installations.

-        Running campaigns to track down and retire or consolidate unused or underused assets

-        Running campaigns to uninstall unused software (a minimal audit sketch follows this list)
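
As promised, a minimal audit sketch, assuming your discovery or ITAM tool can export a simple CSV inventory with asset_id and software_name columns; the file name and the approved baseline below are hypothetical.

```python
# Minimal sketch: flag software installed outside an approved baseline so
# uninstall campaigns can be targeted. Input format and baseline are assumptions.
import csv
from collections import defaultdict

APPROVED = {"Microsoft Office", "7-Zip", "Google Chrome"}   # example baseline

def non_standard_installs(inventory_csv: str) -> dict[str, list[str]]:
    """Group non-approved software by asset."""
    findings: dict[str, list[str]] = defaultdict(list)
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["software_name"] not in APPROVED:
                findings[row["asset_id"]].append(row["software_name"])
    return findings

if __name__ == "__main__":
    for asset, packages in non_standard_installs("inventory.csv").items():
        print(asset, "->", ", ".join(sorted(packages)))
```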

 

3.     VM remediation is achieved through an astonishingly small number of distinct solutions.

 

It’s pretty simple.  This addresses the “how do I fix it?” and “it’s too complicated” concerns that are so often heard.


Following takeaway 2, the number of distinct solutions required is proportional to the diversity of the attack surface, not necessarily its size.  Each software package and OS variation – e.g. MS Office, Adobe Flash, Wireshark, 7-Zip, SUSE Linux, Windows OS – may require individual solutions or automated scripts to remove, update, upgrade, or fix misconfigurations, but I’ve not seen an environment where 95%+ of vulnerabilities can’t be addressed with 50 or fewer distinct solutions.
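
To test that claim against your own data, a small sketch that counts how many distinct solutions are needed to cover a target share of findings; the input is a hypothetical list of (finding, solution) pairs you would derive from scanner output, and the sample data is made up.

```python
# Sketch: measure how few distinct solutions cover most findings.
from collections import Counter

def solution_coverage(findings: list[tuple[str, str]], target: float = 0.95) -> int:
    """Return how many distinct solutions are needed to cover `target` of findings."""
    counts = Counter(solution for _, solution in findings)
    total = len(findings)
    covered = 0
    for n, (_, count) in enumerate(counts.most_common(), start=1):
        covered += count
        if covered / total >= target:
            return n
    return len(counts)

# Tiny illustrative dataset
sample = [("f1", "Upgrade Java"), ("f2", "Upgrade Java"), ("f3", "Patch Windows"),
          ("f4", "Upgrade Java"), ("f5", "Remove Flash"), ("f6", "Patch Windows")]
print(solution_coverage(sample, target=0.8), "solutions cover 80% of the sample findings")
```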

Reducing the attack surface should include the elimination of non-necessary and non-standard software, which will reduce the diversity of the attack surface, making this point even more pronounced.


Again, virtually all remediations involve the upgrade, update, reconfiguration, or removal of software, or CMDB data enrichment.  Therefore, whether solutions are deployed centrally or in a federated model, solution creation should be centralized to a large degree.

 

4.     In cases where there is a large vulnerability backlog or “debt”, campaigns should be leveraged with a clear risk decision process.

 

Click here to read more about running VM campaigns.  Campaigns are time-bound, intensive efforts to address vulnerabilities across a defined scope of the attack surface.  For example, a campaign can address things like Java, obsolete Windows OS versions, or CMDB inaccuracy/incompleteness, and even then only over a particular set of assets, e.g. Americas PCs, Business Unit 3 servers, etc.


The goals of any campaign are:


1.      Minimize the current & future attack surface

2.      Identify and monitor all risks where remediation comes at a greater business cost than the benefit of remediation

3.      Implement or improve the process (service) of maintaining the remaining attack surface


Goal 1 is addressed by takeaways 1 & 2 above, and Goal 3 will be addressed by takeaway 5.  So, I’ll focus on goal 2, as this is critical.  VM initiatives are NEVER all or nothing, yet that is a misconception that is often espoused by non-cyber stakeholders. “You can’t just blast out upgrades to…”.


It is essential that VM campaigns negatively impact the business as little as possible – ideally never.  One critical application outage, and the whole initiative could be doomed, and the company’s risk disposition along with it.  Before embarking on any VM campaign, a risk decision process must be established.  I’ll say it again – forcing remediation where it presents an operational risk can be the death knell of a VM program.


I implore my clients to accept people’s concerns at face value in the beginning, and to set a realistic timeline for those concerns to be tested, vetted, and either confirmed or quashed.  As a VM program matures, the threshold for exceptions will become more and more stringent.  In the meantime, there should be plenty of lower-operational-risk areas to focus on.


The most important outcome is that all “accepted” risk is quantifiable and time-bound.  This enables point 6 – embracing business cases to compare remediation vs. risk acceptance.
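
One lightweight way to keep accepted risk quantifiable and time-bound is to record each exception with an owner, an estimated exposure, and an expiry date. The sketch below is purely illustrative; the field names are not a prescribed schema.

```python
# Sketch of a time-bound, quantified risk acceptance record (illustrative fields).
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptance:
    scope: str                    # e.g. "Business Unit 3 app servers on Java 8"
    reason: str                   # the operational concern raised by the owner
    estimated_annual_loss: float  # quantified exposure, in currency
    expires: date                 # acceptance must be revisited by this date
    approver: str

    def is_due_for_review(self, today: date | None = None) -> bool:
        return (today or date.today()) >= self.expires

acceptance = RiskAcceptance(
    scope="Business Unit 3 app servers on a deprecated application server",
    reason="Vendor has not certified the upgrade; outage risk to a critical app",
    estimated_annual_loss=120_000.0,
    expires=date(2024, 12, 31),
    approver="Application owner + CISO delegate",
)
print(acceptance.is_due_for_review())
```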

 

5.     The ultimate goal of mature VM organizations is to manage a limited set of services that address all future vulnerabilities.

 

This is the essence of Service Oriented Vulnerability Management.  Hopefully, at this point, I’ve demonstrated that attacking existing vulnerabilities is significantly less effective than managing an efficient attack surface and a robust IT service portfolio.


Every campaign should hand off to an ongoing process or service that ensures the scope of the attack surface addressed remains well-configured, patched, and governed.  This can range from something as simple as enforcing auto-updates for Chrome to an SLA-driven infrastructure agreement with internal or third-party groups.


Most companies already have standards in place for OS patching.  Whether monthly for Windows or quarterly/random for flavors of Linux, asset management teams know when new patches are going to be released, and they have a process to deploy these security updates safely and expeditiously.

There is no reason a similar service cannot be implemented for any area of VM, once a campaign has been run to find, test, and vet those assets that should be excluded from such a regular service.
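
As a sketch of what such a service-level check might look like, the snippet below flags assets whose patch deployment falls outside a hypothetical 30-day SLA window; the data and the window are made-up examples.

```python
# Sketch: given when a patch was released and when it was deployed to each asset,
# flag assets outside the (hypothetical) 30-day SLA.
from datetime import date, timedelta

SLA = timedelta(days=30)

deployments = [
    # (asset_id, patch_released, patch_deployed_or_None)
    ("Server0002", date(2024, 2, 13), date(2024, 2, 20)),
    ("PC0006",     date(2024, 2, 13), None),               # still unpatched
]

def sla_breaches(rows, today: date) -> list[str]:
    breaches = []
    for asset, released, deployed in rows:
        effective = deployed or today        # unpatched assets age against today
        if effective - released > SLA:
            breaches.append(asset)
    return breaches

print(sla_breaches(deployments, today=date(2024, 3, 20)))  # -> ['PC0006']
```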

 

6.     Like any investment in a well-run business, all Cybersecurity efforts (including VM, of course) should require a beneficial business case.

 

Monetarily quantifying risk is hardly an exact science.  While this is more or less achievable at the macro level, it becomes more difficult when drilling down to specific areas of the attack surface or individual vulnerabilities.  For example, an incredibly exploitable vulnerability within a locked-down subnet is generally less risky than a difficult-to-exploit RCE vulnerability in the DMZ.


That said, business decisions are far more coherent when they are presented at the campaign or service level.  A campaign should have a risk reduction target, as well as a budget, and a service should have a reasonably accurate ongoing cost estimate.
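
A deliberately simple sketch of that campaign-level comparison, using made-up figures: the avoided loss over a planning horizon versus the remediation budget.

```python
# Simple cost-benefit sketch at the campaign level. All figures are illustrative.

def remediation_worthwhile(expected_annual_loss: float,
                           risk_reduction: float,
                           remediation_cost: float,
                           horizon_years: float = 3.0) -> bool:
    """True if the avoided loss over the planning horizon exceeds the cost."""
    avoided_loss = expected_annual_loss * risk_reduction * horizon_years
    return avoided_loss > remediation_cost

# e.g. a campaign budgeted at 200k that removes 80% of an estimated 150k/yr exposure
print(remediation_worthwhile(expected_annual_loss=150_000,
                             risk_reduction=0.80,
                             remediation_cost=200_000))   # -> True (360k > 200k)
```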


As mentioned above, VM campaigns and services should never be “all or nothing”.  There will always be legitimate business cases for exceptions – e.g. the critical business application that must remain on a deprecated (and thus vulnerable) version of an application server.  Remediation may entail a significant cost, and thus not pass the cost-benefit test.  In most organizations, there are segments of the attack surface that should not be remediated due to the operational risk or technology cost.  What is essential is that these areas are known, tracked, and ultimately, risk accepted.

 

7.     A comprehensive VM program will yield tremendous ancillary benefits across cybersecurity, infrastructure, and beyond.

 

If VM is driven by cybersecurity throwing tranches of prioritized risk-based vulnerabilities over the fence to infrastructure and application teams, the problem will never be solved.  As noted in points 1 and 2, VM is fundamentally an ITSM challenge, and will be addressed far more efficiently and cost-effectively when led with an infrastructure, and not cyber, focus.    



 

Excellence in vulnerability management will yield benefits throughout the infrastructure and application ecosystem.  It provides an exquisite opportunity for the odd couple of CISO and CTO to jointly lobby for executive support.  The investment in ITSM improvements will yield savings in support and licensing, better infrastructure health and hygiene, and a more complete and accurate CMDB.


The cybersecurity risk involved in VM supercharges the justification for infrastructure to achieve goals that may never have been funded and/or accepted by stakeholders when lobbied from an infrastructure-only perspective.

 

Furthermore, the benefits extend across many areas of cybersecurity.  The table below shows how the different areas of a robust VM program (X-axis) positively impact the major areas of the NIST control framework (Y-axis).



[Table: relative impact of VM program areas (X-axis) on the major areas of the NIST control framework (Y-axis); numbers indicate relative impact]

 

Vulnerability management may be the simplest yet most complex area of cybersecurity. This is largely because cyber has virtually no control over the remediation of risk as that falls mainly on infrastructure, and to a lesser extent, application teams and other stakeholders. Risk-based remediation should be replaced with risk-based longer-term investment because vulnerabilities (and solutions) are continuously published across all operating systems and widely-used software packages. By cleaning up the infrastructure estate through targeted campaigns and establishing ongoing services with acceptable SLAs, nearly all potential VM risk can be addressed proactively… and when the next celebrity vulnerability is announced, the response will be much closer to a “push of the button” scenario vs. an “all-hands-on-deck” scramble. 

About The Author(s)

Richard Metz, CISSP - Chief Operating Officer

Richard Metz began his TranSigma journey in 2013, initiating their UK and Ireland operations. He later returned to the US to start TranSigma's Cybersecurity division, focusing on data and process-centric solutions for complex business issues.


Before TranSigma, Richard co-founded a London-based tech and outsourcing consultancy, serving both mid-sized businesses and large enterprises, including FTSE and Fortune 100 companies.


He also held various tech and sourcing roles at General Electric in Europe and the US. Richard holds degrees in Operations & Strategic Management, Mathematics, and Information Systems from Boston College's Carroll School of Management and is CISSP certified.
