I’ve been involved in the OT network visibility and security market pretty much since its inception: from the early days of market education and early adopters all the way to the more mature, competitive, and crowded market that we see today.
Throughout this period, customer industries have developed an understanding of how these tools work and what they do. This is reflected in the evolution of evaluation processes and the maturing selection criteria we typically see in requirement documents and discussions with prospective customers.
However, although tool selection is maturing as a process, I see the same problems repeating across engagements and evaluations, and these selection challenges go on to cause major issues in full-scale operational deployments down the line. This article explores why that is and offers some suggestions for avoiding the pitfalls.
The main issues with most tool-selection processes are the limited evaluation criteria and the way those criteria are tested and validated in the real world. What this means is that, generally, tools are selected on two main, and often flawed, criteria:
1. A limited technical evaluation, typically conducted in a best-case environment.
2. The quoted price, rather than the true total cost of ownership.
Focusing on Point 1, the limitations and distractions of the technical evaluation: Why does this happen?
Firstly, consider the proof-of-concept (POC) process. In most cases, a prospective customer will require a Proof of Concept or Proof of Value or Pilot or Bake-Off (or whatever it’s trendy to call a trial at the time). The POC is usually a small-scale deployment to prove that the tools under examination will do what the vendor says they will do and an opportunity to compare tools against each other in a real-world environment. In some cases, the POC takes place in a small, purpose-built lab setting.
The problem here is that both vendor and customer are keen for the POC to take place in an ‘ideal’ scenario. The vendor wants to show that they are easy and quick to deploy and that their tools can perform. The customer doesn’t want to over-invest their limited resources or cause too much disruption. They will also choose a network they know reasonably well, so they can verify whether the data presented by the tools is accurate.
Based on how well the tools work in this best-case scenario, a customer will extrapolate the POC results to the rest of their environment. A win-win… well, not really. The problem is that, in most cases, the OT estate is not fully known, isn’t simple, and will require various modalities of deployment and integration across disparate networks. This is far from a best-case or lab environment!
The other symptom of this best-case evaluation is what actually ends up being evaluated to differentiate between solutions. Because the OT vendor market is maturing, leading solutions are converging in terms of capabilities. Functionality that was used to differentiate between competing solutions only a short while ago can now be demonstrated by all of the leading vendors: they all perform passive network analysis through deep packet inspection, identify hundreds of OT protocols, identify OT assets, display network maps, alert on anomalies, and so on. In simplistic POC deployments, most vendors can easily demonstrate much of this functionality, so, in order to find differentiators, evaluations often turn to lesser features and the user interface (UI). Whilst these criteria are important, they are not what a customer will love or hate a few years down the road.
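To make the convergence concrete: the shared core capability is passively parsing industrial protocols out of mirrored traffic and deriving an asset inventory from what is observed. The sketch below is a toy, single-protocol version of that idea for Modbus/TCP; it assumes the scapy library and a hypothetical capture file name, and commercial products of course do this for hundreds of protocols at production scale.

```python
# A minimal sketch of passive OT asset identification via deep packet
# inspection, for one protocol (Modbus/TCP) only. Assumes scapy is installed;
# the capture file name is a hypothetical placeholder.
from collections import defaultdict

from scapy.all import IP, TCP, rdpcap

MODBUS_TCP_PORT = 502  # well-known Modbus/TCP server port

def inventory_modbus_assets(pcap_path: str) -> dict:
    """Passively derive a toy asset inventory from a capture file."""
    assets = defaultdict(set)  # server IP -> observed Modbus function codes
    for pkt in rdpcap(pcap_path):
        if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
            continue
        if pkt[TCP].dport != MODBUS_TCP_PORT:
            continue  # only inspect requests addressed to a Modbus server
        payload = bytes(pkt[TCP].payload)
        if len(payload) < 8:
            continue  # too short for the 7-byte MBAP header plus function code
        assets[pkt[IP].dst].add(payload[7])  # function code follows the MBAP header
    return assets

if __name__ == "__main__":
    for ip, codes in inventory_modbus_assets("plant_span_capture.pcap").items():
        print(f"{ip}: Modbus function codes {sorted(codes)}")
```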
I liken these evaluations to trying to choose a car by going to the showroom and comparing infotainment systems. It’s interesting and part of a selection process, but, really, you need to understand how the car performs in adverse conditions, how often it breaks down, what happens when it breaks down, and how much it costs to maintain year after year.
Once a customer has validated that all of their prospective vendors can perform the core capabilities described above, they should avoid falling into an evaluation that focuses on relatively unimportant widgets and buttons. Instead, they should really try to establish: which tool will I be able to deploy efficiently and effectively? Which tool will be easiest for my various teams to use? Which vendor do I want to work with for a number of years? And what is my total cost of ownership (TCO) for the system over multiple years?
So, what other criteria should be considered?
Architectural Flexibility
This is a big consideration to ensure deployment is efficient and cost effective, and can also ensure wider coverage across your OT environment. There is no one-size-fits-all approach here, but acknowledging that, in most deployments, a number of different modalities will be employed and reflecting this in your requirements is key.
Some key considerations:
- Which deployment modalities does the solution support: physical sensors, virtual appliances, embedded or switch-native collection?
- Can traffic be collected via existing SPAN/mirror ports or taps, or will infrastructure and network configuration need to change?
- How does the solution handle low-bandwidth, remote, or otherwise constrained sites?
- Can management be centralised or distributed, on-premises or cloud-hosted, to suit different parts of the estate?
These questions need to be answered both technically and commercially. This ensures that customers have options which are appropriate for specific sites and networks, and, overall, that they can choose the most economic options which will not necessitate changes in infrastructure and network configuration. Ideally, architectural flexibility needs to be validated as part of the POC process as, from experience, what a vendor claims to do and what they can actually deliver can be two quite different things.
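As a toy illustration of why a single modality rarely fits every site, the sketch below matches a collection modality to each site’s constraints. The site attributes, modality names, and selection rules are all illustrative assumptions, not any vendor’s actual options.

```python
# A toy model of choosing a collection modality per site. Everything here
# (site attributes, modality names, thresholds) is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    has_span_port: bool    # can existing switches mirror traffic?
    uplink_mbps: float     # bandwidth available for sensor backhaul
    unmanned: bool         # no local staff to host extra hardware

def pick_modality(site: Site) -> str:
    if site.unmanned and site.uplink_mbps < 1.0:
        return "embedded/switch-native sensor with store-and-forward upload"
    if site.has_span_port:
        return "passive hardware or virtual sensor on a SPAN/mirror port"
    return "network tap feeding a dedicated hardware sensor"

if __name__ == "__main__":
    for site in (Site("Main plant", True, 100.0, False),
                 Site("Remote pump station", False, 0.5, True)):
        print(f"{site.name}: {pick_modality(site)}")
```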
Support, Upgrades, and Maintenance
Once a solution is deployed, it must be maintained. In large-scale, complex deployments, with solutions utilised for many years, things will inevitably break, and as technology evolves you will want the most up-to-date versions. All of this needs to be considered upfront. Many organisations procure a system on a three-year licence agreement and only consider the implications up to that point; in my experience, if the deployment is successful, most customers will keep the same OT cyber vendors for at least five years, if not significantly more. The cost of ripping out an old system and replacing it is simply too high, even if ongoing maintenance of the existing system ends up costing much more than initially expected.
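A back-of-the-envelope comparison makes the point. The figures below are hypothetical placeholders for the reader’s own quotes, but they show how a rip-and-replace can remain the more expensive option even when the incumbent raises prices sharply at renewal.

```python
# Comparing five-year costs: stay with an incumbent whose price jumps at
# renewal, or replace with a cheaper competitor and pay to redeploy.
# All figures are hypothetical placeholders.
def five_year_cost(annual_licence: float, deploy_cost: float,
                   renewal_uplift: float = 0.0) -> float:
    """Deployment cost plus a 3-year initial term plus a 2-year renewal,
    with the licence price rising by `renewal_uplift` at renewal."""
    return (deploy_cost
            + 3 * annual_licence
            + 2 * annual_licence * (1 + renewal_uplift))

# Incumbent: nothing to redeploy, but a 25% price rise at renewal.
stay = five_year_cost(annual_licence=200_000, deploy_cost=0, renewal_uplift=0.25)
# Challenger: a 20% cheaper licence, but full redeployment across all sites.
replace = five_year_cost(annual_licence=160_000, deploy_cost=450_000)
print(f"Stay with incumbent: ${stay:,.0f}")     # $1,100,000
print(f"Rip and replace:     ${replace:,.0f}")  # $1,250,000
```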
Some important questions to ask:
- Who performs software upgrades and hardware maintenance: the vendor, a partner, or your own teams, and at what cost?
- How often are new versions released, what does the upgrade process involve, and can it be carried out without disrupting operations?
- What support is actually provided: hours of coverage, escalation paths, SLAs, and OT-specific expertise?
- What happens when a sensor or appliance fails, and how quickly can it be replaced?
In terms of validation, because all vendors will claim that they, or their partners, will do all of this, the top tip is to ask for supporting evidence, such as professional documentation with sufficient detail and relevant certifications. Being able to speak to someone on the support team should also be part of the selection process.
Pricing Structures, Commitments, and Commercial Flexibility
Pricing is a tricky subject. It’s important to say that cheapest isn’t always the best; customers should avoid a race to the bottom (lowest price) as, inevitably, it affects quality in the long run. Some elements that will drive total cost of ownership are covered in other sections, such as architectural flexibility, software lifecycle, and maintenance processes. These are generally the unseen pitfalls that customers can run into after they have selected a vendor based on the quoted price and a limited evaluation. However, there are some red flags in the commercial aspects too, which are often not considered.
Firstly, what happens if your scoping, i.e. your estimated size and volume, is way off? It’s important to validate what the price will be if, during the deployment process, significantly more or fewer devices are involved. (Device count is the typical metric used for scoping, but it could be another metric.)
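A simple sensitivity check along these lines is worth running before signing. The sketch below uses a hypothetical tiered per-device price list; the tiers and rates are illustrative only, but they show how the annual cost moves non-linearly when the actual device count diverges from the scoped estimate.

```python
# A scoping-sensitivity check against a hypothetical tiered price list.
# Tier breaks and per-device rates are illustrative, not real vendor pricing.
def annual_licence_cost(devices: int) -> float:
    if devices <= 1_000:
        return devices * 40.0
    if devices <= 5_000:
        return 1_000 * 40.0 + (devices - 1_000) * 30.0
    return 1_000 * 40.0 + 4_000 * 30.0 + (devices - 5_000) * 20.0

scoped = 2_000  # the device count used in the original quote
for actual in (scoped // 2, scoped, 2 * scoped):
    cost = annual_licence_cost(actual)
    print(f"{actual:>5} devices: ${cost:>9,.0f}/year "
          f"({cost / annual_licence_cost(scoped):.0%} of the scoped price)")
```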
What are the vendor’s preferred payment structures? Customers are often biased towards CAPEX or OPEX, and towards recurring or annual payments. Understanding early in the process how this affects the price you pay is an important early qualifier, and a possible disqualifier: decision makers want to avoid last-minute price-versus-budget surprises.
Another important aspect is long-term commitment on pricing. As previously mentioned, successful deployments are unlikely to be replaced any time soon, and vendors know this. Typical licence agreements run between one and three years, so knowing that the vendor will not try to leverage the high cost of replacement to increase pricing after the initial contract expires is crucial foresight that will reduce TCO.
Finally, flexibility on terms is important. Many customer organisations are large businesses, or have to operate within the bureaucratic constraints of government or semi-state status. Vendors should be sympathetic to the legal, commercial, and regulatory environment within which customer stakeholders need to operate; however, as OT cyber vendors grow larger or are acquired by larger corporations, negotiations increasingly reach an impasse over terms and the Master Service Agreement (MSA). Again, it’s important to evaluate this early to ensure there are no show-stoppers later in the process, after a vendor has already been awarded the contract.
Working with the Vendor Long Term
Alas, there is no TripAdvisor for OT cybersecurity, although, in some cases, word of mouth helps to support or discredit the reputation of specific vendors. As stated repeatedly, the commitment you make to an OT cyber vendor is likely to be long term, and you need to know your bedfellows before you jump into bed. Of course, this is hard to quantify, but there is some information that you can gather:
- Reference customers in your sector and region, ideally several years into their deployments
- The vendor’s financial stability, ownership, and how likely it is to be acquired or change direction
- The experience and retention of the people you will actually be working with
- The vendor’s track record of delivering on its roadmap and honouring commitments
These aspects are much harder to quantify and validate, but they are also the biggest indicators of major problems in the future. I speak to customers who regret their vendor selection, and, in most of these cases, there were red flags, or due diligence was missed, that could have indicated their chosen vendor was not the best fit for them in the long run.
Choose Wisely and Carefully
To conclude, there are no catastrophic issues with the established process of evaluating OT asset visibility, threat detection, and other solutions; there are only limitations, which lead to an incomplete picture and, ultimately, in many cases, to the incorrect selection of tools. Just as important as a long list of features and widgets, most of which will never really be used in production, if not more so, is establishing whether you can actually get the tool deployed, whether the solution will cost you more than you think during deployment and long-term ownership, and whether you will be appropriately supported and respected in a mutually beneficial relationship over the lifetime of the system.