What to do when ICS-CERT and NIST produce contradictory vulnerability analyses 


Vulnerability disclosure organizations are widely regarded as the most important and reliable source of actionable information for vulnerability and risk assessment, including exposure data, exploit difficulty analysis and device vendor information. Given the high cost of implementing corrective measures and the substantial risk of not doing so, inconsistencies between vulnerability analyses are an increasingly serious problem for CSOs at ICS organizations.

We argue that the current vulnerability scoring system is not tuned to ICS, as it incorrectly and inconsistently weighs different impacts and misses some factors. Even NIST and ICS-CERT, the two main vulnerability disclosure organizations, are not always aligned.

The players

Two of the major vulnerability disclosure organizations are NIST and ICS-CERT.

  • NIST maintains the US National Vulnerability Database (NVD).
  • ICS-CERT is the industrial control systems component of the National Cybersecurity and Communications Integration Center (NCCIC). Its mission is to reduce risk across all critical infrastructure sectors by collaborating with law enforcement, the intelligence community, government agencies, control system owners, ICS operators, and device vendors. NCCIC also collaborates with international and private-sector Computer Emergency Response Teams (CERTs) to share control systems-related security incidents and mitigation measures.

While both organizations continuously publish new vulnerability feeds and analyses, their analyses are not always identical, and in some cases are even contradictory. This makes it difficult for critical infrastructure operators to properly estimate the potential impact of vulnerabilities.

Between 2017 and 2018, Radiflow detected about twenty such inconsistencies in advisories released by ICS-CERT. The inconsistencies appeared not only in each vulnerability's score but also in its detailed impact analysis.

Anatomy of a vulnerability analysis discrepancy

For example, in advisory ICSA-18-009-01, ICS-CERT described CVE-2017-16740, which affects Allen-Bradley MicroLogix 1400 controllers. The advisory stated that "Successful exploitation of this vulnerability could cause the [attacked] device to become unresponsive to Modbus TCP communications and affect the availability of the device."

ICS-CERT gave this vulnerability an 8.6 (high severity) score, while NIST gave the same CVE a score of 10 (critical).

More interesting and confusing are the scoring notes for the vulnerability, which detail its potential impact (among other parts of the analysis). According to ICS-CERT, the vulnerability has zero impact on confidentiality and integrity; NIST predicts a HIGH impact on both confidentiality and integrity.
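The gap between the two scores is fully explained by those Confidentiality/Integrity ratings. As a rough illustration, here is a minimal CVSS v3.0 base-score calculator implementing the equations from the FIRST CVSS v3.0 specification; note that the two vector strings below are our reconstruction of scope-changed vectors consistent with the published 8.6 and 10 scores, not quotes from either advisory:

```python
import math

# Metric weights from the CVSS v3.0 specification (first.org).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "UI": {"N": 0.85, "R": 0.62},
    # Privileges Required weights depend on whether scope is changed.
    "PR": {"U": {"N": 0.85, "L": 0.62, "H": 0.27},
           "C": {"N": 0.85, "L": 0.68, "H": 0.50}},
    "CIA": {"N": 0.0, "L": 0.22, "H": 0.56},
}

def cvss_base(vector: str) -> float:
    """Compute the CVSS v3.0 base score from a vector like 'AV:N/AC:L/...'."""
    m = dict(part.split(":") for part in vector.split("/"))
    scope_changed = m["S"] == "C"
    # Impact Sub-Score: combined effect of the C, I and A ratings.
    iss = 1 - ((1 - WEIGHTS["CIA"][m["C"]]) *
               (1 - WEIGHTS["CIA"][m["I"]]) *
               (1 - WEIGHTS["CIA"][m["A"]]))
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                      * WEIGHTS["PR"][m["S"]][m["PR"]] * WEIGHTS["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    score = impact + exploitability
    if scope_changed:
        score *= 1.08
    return math.ceil(min(score, 10) * 10) / 10  # "round up" per the spec

# ICS-CERT's view: availability impact only (C:N/I:N/A:H).
print(cvss_base("AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:N/A:H"))   # 8.6
# NIST's view: high confidentiality and integrity impact as well.
print(cvss_base("AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H"))   # 10.0
```

Changing only the C and I metrics from None to High moves the score from 8.6 to 10.0, so an operator's patching priority can hinge entirely on which organization's impact assessment they trust.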


What causes the inconsistencies between NIST and ICS-CERT?

To answer this question, Radiflow contacted ICS-CERT and NIST. ICS-CERT explained that their scoring details are formulated in coordination with the vendor and the researcher; once the advisory analysis is complete, they submit the information to NIST. If NIST disagrees with ICS-CERT's CVSS scores, they develop and publish their own scores and analysis. NIST explained that they perform their scoring based on the vulnerability description, and in the cases we pointed out to them, the description was aligned with their scoring.

Are you a “NIST” or an “ICS-CERT”? Take the Radiflow Survey to find out.

In light of the analysis inconsistencies between NIST and ICS-CERT, it's clear that simply adopting an accepted vulnerability scoring system is not enough, since it still leaves users free to interpret and apply the analysis in different ways (e.g., deciding whether the impact of a vulnerability is low or high).

More examples of scenarios where ICS-CERT and NIST are not aligned, along with additional cases where the current scoring system is misleading, can be found in the under-five-minute Radiflow Vulnerability Analysis Survey, which allows participants to check whether their perspective is more 'ICS-CERT' or more 'NIST'.

In the survey, you’ll be asked to help in rating a few sample scenarios that represent the various currently-overlooked aspects that can impact holistic scoring.

Preliminary results will be presented at the S4 conference, where we will also demonstrate how Radiflow's analytics systems deal with these scoring issues. We will send participants the full survey analysis report towards the end of January.

The results will hopefully help create a more comprehensive approach to scoring and, most importantly, let you compare your stance with your peers'!