Is Your Cyber Risk Analysis based on Empirical Data? (It Should Be)

   Jan 05, 2021 | Radiflow team


 

Qualitative vs. Quantitative risk analysis

In this, my first blog post of the New Year, I want to discuss the qualitative and quantitative methodologies used for cyber-risk analysis.

 

“Contrary to qualitative risk analysis, quantitative risk analysis reduces the tendency toward bias and inconsistencies when coupled with a well-defined model to evaluate risk” (FAIR Institute)

 

There is a general perception that quantitative risk analysis is based on empirical data and qualitative risk analysis is not. Why is that? And is it true?

 

Qualitative risk analysis is based on the subjective opinions of Subject Matter Experts (SMEs). Heat maps, risk matrices, red/yellow/green prioritization of risk factors and similar tools all rest on subjective perceptions that can span a wide range of values. Priority levels are not defined by or calibrated to a common measuring unit, making it difficult to agree on when the risk level changes (when does yellow become red?).
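To make the calibration problem concrete, here is a minimal, purely illustrative sketch (my own example, not taken from any standard) of a 3x3 qualitative risk matrix in Python. One step in a subjective score flips the colour band, with no common unit to arbitrate between two analysts’ judgements.

    # Hypothetical 3x3 qualitative risk matrix (illustrative only, not from the article).
    # Likelihood and impact are subjective 1-3 scores assigned by an SME.
    RISK_MATRIX = {
        (1, 1): "green",  (1, 2): "green",  (1, 3): "yellow",
        (2, 1): "green",  (2, 2): "yellow", (2, 3): "red",
        (3, 1): "yellow", (3, 2): "red",    (3, 3): "red",
    }

    def qualitative_risk(likelihood: int, impact: int) -> str:
        """Map subjective 1-3 likelihood/impact scores to a colour band."""
        return RISK_MATRIX[(likelihood, impact)]

    # A one-point difference in a subjective score flips the colour, with no
    # common measuring unit to arbitrate between the two judgements.
    print(qualitative_risk(2, 2))  # yellow
    print(qualitative_risk(2, 3))  # red -- so when exactly does yellow become red?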

 

For these reasons and due to the lack of data from empirical sources, qualitative risk analysis is considered inaccurate and more of a subjective estimation.

 


 

Quantitative risk analysis, on the other hand, is perceived as accurate. It uses a well-defined risk-evaluation model that’s normalized to dollar values (for potential damages and for the cost of mitigation measures).

 

But is quantitative risk analysis really as accurate as it is supposed to be? If the dollar loss values are not backed by empirical data, don’t they become “qualitative-quantitative”?

 

If an annual industry report states that “the average loss resulting from a cyber attack in the pharmaceutical industry is estimated at $10M per annum,” does that mean that my pharma factory is going to suffer that loss? Can I use that generic $10M estimate as an empirical data source for quantitative risk analysis?

 

Then there’s the problem of basing your risk analysis on a priori knowledge rather than empirical data, which may lead to a mindset of “if you don’t like the results of your cyber risk analysis, you can always change your advisor.”

 

 

The FAIR Cyber-Risk Framework

The FAIR (Factor Analysis of Information Risk) cyber-risk framework has emerged as the premier Value at Risk (VaR) framework for cybersecurity and operational risk.

 

It provides information risk, cybersecurity and business executives with standards and best practices to help them measure, manage and report on information risk from the business perspective.

 

Cyber risk analysis, according to the FAIR methodology, is based on two pillars: Loss Magnitude (LM) and Loss Event Frequency (LEF).
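In its simplest point-estimate form, the relationship between the two pillars can be sketched as risk ≈ LEF × LM. The short sketch below is a simplification of my own for illustration; the FAIR standard itself works with calibrated ranges and Monte Carlo simulation rather than single numbers.

    # Point-estimate sketch of the FAIR relationship (a simplification for
    # illustration; FAIR works with ranges and Monte Carlo simulation,
    # not single numbers).
    def annualized_loss_exposure(lef_per_year: float, loss_magnitude_usd: float) -> float:
        """Risk ~ Loss Event Frequency (events/year) x Loss Magnitude ($ per event)."""
        return lef_per_year * loss_magnitude_usd

    # e.g. two loss events a year at $0.7M per event -> $1.4M annualized exposure
    print(annualized_loss_exposure(2, 700_000))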

 

On the LM side of the business we usually find ample empirical data on loss, expressed in dollar values (downtime, equipment expenditure, legal fees, etc.), but when it comes to LEF… that’s where it becomes tricky.

 

Determining LEF (Loss Event Frequency) requires answering the following questions:

  • Contact Frequency: In the next twelve months, how many times is the threat actor/agent likely to reach the asset?
  • Probability of Action: Once contact occurs, how likely is the threat actor to act against the asset?
  • Threat Capability: How capable is this threat community of successfully carrying out the threat?
  • Resistance Strength: What is the highest percentile of the attackers’ Capability Continuum we believe we can successfully defeat?
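One (assumed, simplified) way to combine these four factors into an LEF estimate is sketched below: Contact Frequency and Probability of Action give a Threat Event Frequency, and the gap between Threat Capability and Resistance Strength gives a rough vulnerability term. The function and parameter names are illustrative, not part of the FAIR standard.

    import random

    def estimate_lef(contact_freq, prob_of_action,
                     threat_capability_pct, resistance_strength_pct,
                     trials=100_000):
        """Toy sketch of an LEF estimate (an assumed simplification, not the
        official FAIR calculation):
          Threat Event Frequency (TEF) = Contact Frequency x Probability of Action
          Vulnerability ~ probability that an attacker capability draw exceeds
                          our Resistance Strength percentile
          LEF = TEF x Vulnerability
        Attacker capability is modelled as a uniform draw up to the threat
        community's capability percentile, purely for illustration."""
        tef = contact_freq * prob_of_action
        exceed = sum(
            random.uniform(0, threat_capability_pct) > resistance_strength_pct
            for _ in range(trials)
        )
        vulnerability = exceed / trials
        return tef * vulnerability

    # e.g. 10 contacts a year, 30% chance of acting on contact, attacker capability
    # up to the 90th percentile, defences resisting up to the 70th percentile
    print(estimate_lef(10, 0.3, 90, 70))   # roughly 0.6-0.7 loss events per year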

As cyber risk experts, we try to answer the above questions by collecting data from cyber-threat reports, questioning the client about previous breach occurrences, gathering cyber information from the media and community, and basing our analysis, as much as possible, on empirical data.

 

 

Generic Input just doesn’t cut it

Unfortunately, our ability to answer these questions falls short when the input is generic: statistics for an industry or sector rather than data specific to the client. What we DO need is:

 

  • Assessment of Probability of Action for the specific network, based on knowledge of the specific security controls in place, network topology and resiliency, common vulnerabilities for installed assets, protocols used in the system and other network properties;
  • Capabilities of the specific threat actors that operate in my geo-location and are active in my industrial sector;
  • Network digital image: the effectiveness and relevance of security controls and mitigators vs. the tools and techniques of relevant cyber adversaries, for the specific network.
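As a concrete illustration of what “client-specific” input might look like, here is a hypothetical sketch of the data items listed above. The class and field names are my own assumptions, not CIARA’s or FAIR’s schema.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical shape of client-specific input (illustrative assumption only).
    @dataclass
    class NetworkDigitalImage:
        assets: List[str]                 # installed assets (PLCs, HMIs, historians, ...)
        protocols: List[str]              # protocols used in the system
        known_cves: List[str]             # common vulnerabilities for installed assets
        security_controls: List[str]      # controls and mitigators actually in place
        topology_zones: List[str] = field(default_factory=list)  # segmentation / zones

    @dataclass
    class ThreatActorProfile:
        name: str
        sectors: List[str]                # industrial sectors it targets
        regions: List[str]                # geo-locations where it operates
        capability_percentile: float      # estimated position on the capability continuum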

Here’s a quick example. For a theoretical cyber breach scenario, let’s assume:

 

  • A Primary LM (Loss Magnitude) of $0.5M, $0.7M and $1M (min/likely/max).
  • High-confidence LEF (Loss Event Frequency) of 1, 2 and 3 (min/likely/max) occurrences per year.

Based on these figures, the FAIR analysis tool produced a risk value of $11M (most likely) and $24M (max). Note that the simulation accounts for factors beyond productivity loss, such as response cost, replacement costs, competitive advantage loss, legal fees and reputation loss.
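For readers who want to see the mechanics, here is a minimal Monte Carlo sketch of this kind of calculation. It is illustrative only: the $11M and $24M figures above come from a full FAIR analysis tool that also models the additional loss factors, and the triangular distributions and function name here are my own assumptions (FAIR tools typically use calibrated PERT ranges).

    import random

    def simulate_annual_loss(lef_min_likely_max, lm_min_likely_max, trials=100_000):
        """Monte Carlo sketch of the worked example (illustrative only; not a
        reproduction of the FAIR tool's output). Inputs are (min, likely, max)
        tuples; triangular distributions are an assumption made here."""
        lef_min, lef_likely, lef_max = lef_min_likely_max
        lm_min, lm_likely, lm_max = lm_min_likely_max
        losses = []
        for _ in range(trials):
            events = random.triangular(lef_min, lef_max, lef_likely)       # loss events this year
            loss_per_event = random.triangular(lm_min, lm_max, lm_likely)  # $ per event
            losses.append(events * loss_per_event)
        losses.sort()
        return {
            "likely": losses[len(losses) // 2],       # median simulated annual loss
            "max": losses[int(len(losses) * 0.95)],   # 95th-percentile annual loss
        }

    # Primary LM of $0.5M / $0.7M / $1M and LEF of 1 / 2 / 3 (min / likely / max)
    print(simulate_annual_loss((1, 2, 3), (500_000, 700_000, 1_000_000)))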

 

The problem is that the LEF is still based on generic industry/sector information. The next step would be fine-tuning the LEF by adding actual client-specific data, such as a network digital image and security maturity level (i.e. the security controls in place).

 

Running the FAIR simulation again with a high-confidence LEF of 1, 1 and 2 (min/likely/max) occurrences per year, based on actual network data, reduced the loss range to $6M (likely) and $16M (max), down from $11M and $24M respectively, completely changing our decision-making in light of our loss-tolerance strategy.
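Reusing the hypothetical simulate_annual_loss sketch from above, the re-run would look like this (again only to illustrate the direction of the change, not to reproduce the FAIR tool’s figures):

    # Tighter, network-specific LEF of 1 / 1 / 2 (min / likely / max) narrows the
    # simulated loss range, mirroring the drop the FAIR tool shows above.
    print(simulate_annual_loss((1, 1, 2), (500_000, 700_000, 1_000_000)))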

 

 

Radiflow CIARA

The above challenge of adding empirical data to the LEF calculation is at the core of the CIARA solution for OT and ICS environments. CIARA runs threat-adversary TTP models against a virtual digital image of the specific network, taking into account asset vulnerabilities, network topology, the security controls in place and many other attributes to simulate and assess potential breaches. Data sources such as threat intelligence and asset-management platforms feed the machine-learning simulation, producing a much more accurate, empirically grounded model of cyber-breach probability.

 

Summary

It’s up to every organization to choose the appropriate risk-analysis methodology: quantitative, qualitative or a combination of the two. Both can be used to increase visibility into the organization’s cyber-risk posture. The key is to expand data-collection sources beyond generic annual industry reports and subjective SME opinions by incorporating client-specific empirical data into LEF calculations as part of the cybersecurity decision-making process.

