Analysis of the Ukrainian Outage

By Yehonatan Kfir, CTO, Radiflow

January 21, 2016

As the smoke starts to clear around the Ukrainian power outage, a significant case of a confirmed cyber-attack that left tens of thousands of people in the dark, more and more details are being confirmed about the chain of events that led to the outage.

In this paper we will review the snippets of information that were confirmed, as well as those that are still under investigation.

Description of the case

On Dec. 23, 2015, at about 13:00, a power outage occurred at Prykarpattyaoblenergo, a local energy provider in western Ukraine. It cut power to 80,000 customers for about six hours. During that time, customers were unable to report the outage due to failures in the call center. Furthermore, it appears that multiple regional power companies in Ukraine were attacked simultaneously.


The information obtained by researchers thus far makes it clear that cyber attacks were directly responsible for the power outages in Ukraine.

From what has been revealed so far, it seems that the attackers used at least one piece of malware to damage the operational network's servers, and demonstrated the ability to spread inside the target network. The leading theory is that the attackers connected to the operational network remotely, which allowed them to precisely time the command sequence that caused the power outage.

Penetrating the network

The attacks on several regional power distribution companies were coordinated in order to increase the probability of causing an outage. Media reports explicitly named specific utilities that were attacked, including Prykarpattyaoblenergo and Kyivoblenergo.

The exact timeline of the attack and the sequence of events are still unclear and are currently being analyzed. What is known is that Kyivoblenergo provided public updates to its customers, indicating that an unauthorized intrusion had occurred, disconnecting seven 110-kV substations and twenty-three 35-kV substations and leading to an outage that affected 80,000 customers.

It now seems that the main malware component was embedded in third-party HMI software used by the power companies. When the operators updated their software, they downloaded an infected file containing the attackers' malware. Once the malware was installed, the attackers gained persistence in the network through a change they had made to one of the HMI files.
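A standard defense against this kind of supply-chain manipulation is to verify every downloaded update against a vendor-published digest before installing it. Below is a minimal sketch in Python; the file paths and digests involved are stand-ins for illustration, not artifacts from the actual incident:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path: str, expected_sha256: str) -> bool:
    """Refuse installation unless the file matches the published digest."""
    return sha256_of(path) == expected_sha256.lower()
```

A tampered update, like the infected HMI file described above, would fail this check, provided the expected digest is obtained over a trusted channel (e.g. signed release notes) rather than from the same potentially compromised download site.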

Inside the OT network

After infiltrating the network via the HMI, the attackers spread through the network, targeting the servers responsible for controlling the field devices and reflecting the devices' state to the operator. This allowed the attackers, upon completion of the execution stage, to hide the exact state of the distribution network and to delete forensic data. These two actions increased the time it took the distribution companies to react to the cyber attack; moreover, they are still preventing the research community from reconstructing the exact attack steps.

The attack execution

What is most interesting about the attack is that current evidence and analysis indicate that the attackers interacted directly with the network during the attack itself, probably through a remote access port. It is known that the attackers used a compromised version of a remote-access tool that was installed in the operator network. By interacting with the network directly, the attackers were able to send the relevant commands to the field devices and to time those commands so as to cause the outage.

Analysis of the attack reveals that at least two pieces of malware were associated with the outage. The first, ‘KillDisk’, was probably used to erase some of the servers. This malware probably did not directly cause the outage: the attackers’ actions took timing, sites and impact into consideration, which is not KillDisk’s typical MO. Most probably, the goal of the KillDisk malware was to delete forensic data and delay the restoration of service by wiping the SCADA servers after the outage had been triggered.

A second piece of malware used by the attackers is related to the BlackEnergy campaign. It was used to download and activate the KillDisk software. It is possible that the BlackEnergy malware was also used to gather information once inside the network, or to directly execute the attack.

Another piece of software that the attackers used is an SSH backdoor, probably for communicating with other servers and devices inside the network, and from within the network to the attackers’ Command and Control servers.
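A backdoor of this kind is, in principle, detectable at the network level, since SSH sessions from OT hosts to unfamiliar destinations should be rare in a well-segmented network. The following Python sketch flags such connections against an allow-list; the addresses, the allow-list and the record format are hypothetical illustrations, not data from the incident:

```python
# Sketch: flag outbound SSH connections whose destination is not on an
# approved list. Connection records are hypothetical (src_ip, dst_ip,
# dst_port) tuples, as a log collector might export them.

APPROVED_SSH_DESTINATIONS = {"10.0.0.5"}  # e.g. a designated jump host (assumed)

def suspicious_ssh(connections):
    """Return connections on port 22 whose destination is not approved."""
    return [c for c in connections
            if c[2] == 22 and c[1] not in APPROVED_SSH_DESTINATIONS]

conns = [
    ("10.0.1.20", "10.0.0.5", 22),     # expected: SSH to the jump host
    ("10.0.1.21", "203.0.113.9", 22),  # unexpected: SSH to an external host
    ("10.0.1.22", "10.0.0.7", 502),    # Modbus traffic, out of scope here
]
print(suspicious_ssh(conns))  # flags only the external SSH connection
```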

During the actual blackout phase, the attackers executed a “denial of view” against the system dispatchers and overloaded the customer-service call center to block customer calls that would have reported the power outage.

The takeaway

The Ukrainian case illuminates several points regarding SCADA Cyber Attack Campaigns:

  1. The coordination required for achieving a significant effect: to cause a wide-scale outage, hackers would typically need to infiltrate several networks, and even different organizations, and coordinate their commands to the field devices.
  2. The least protected company is the one most likely to be attacked: ultimately, there are many ways to cause an outage. Assuming the attackers’ goal was simply to cause a mass outage in Ukraine, they would logically go after the most exposed and least secure targets. Indeed, it was reported that similar malicious activity was found in other Ukrainian companies that were better secured and had mitigated the risk in time.
  3. The use of supply chains as attack vectors: it seems that the attackers manipulated the files of the legitimate HMI tool used by the operator. Then, when the operator downloaded a file from the supplier’s website, what was actually downloaded was a file that contained the malware.
  4. Hiding the damage: as we described in our August post, “Designing an ICS attack Platform,” the attacker will typically attempt to hide the damage. The reasons we presented there also hold in the Ukrainian case: increasing the operator’s mitigation time and complicating post-attack research.
  5. Massive anomalous network behavior: as reported, the attackers demonstrated the ability to move between substations within the network, send commands to field devices, change server configurations and open connections from the outside. Accordingly, multiple security mechanisms, such as SCADA DPI firewalls and SCADA IDS systems, could have detected some of the steps in this chain of events.
  6. Preventing SCADA cyber attacks is indeed possible! Once the targeted companies discovered the malicious activity, they initiated their mitigation programs, which mainly focused on moving the operational network to manual control. This step proved effective, but unfortunately came too late, since the attack was detected only after it had been launched and had already caused the outage. Had the operators detected the attack in its first steps, they would have stood a better chance of preventing the outage. Early detection is key in thwarting cyber attacks: it takes time for an attacker to launch an attack, and doing so creates significant abnormal network behavior.
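The whitelist-style detection alluded to in points 5 and 6 can be sketched concisely: a SCADA DPI firewall or IDS compares each observed command against a baseline of (source, destination, function) tuples learned during normal operation and flags anything outside it. The hosts, function names and baseline below are hypothetical illustrations, not data from the incident:

```python
# Minimal sketch of whitelist-based anomaly detection for SCADA commands.
# A real system would learn the baseline during a quiet period; here it is
# hard-coded. All hosts and function names are hypothetical examples.

BASELINE = {
    ("hmi-1", "rtu-7", "read_status"),
    ("hmi-1", "rtu-7", "read_measurements"),
}

def check_command(src, dst, func):
    """Return True if the command matches the learned baseline."""
    return (src, dst, func) in BASELINE

traffic = [
    ("hmi-1", "rtu-7", "read_status"),    # routine polling
    ("hmi-1", "rtu-7", "open_breaker"),   # control command never seen before
    ("eng-ws", "rtu-7", "read_status"),   # unfamiliar source host
]

alerts = [t for t in traffic if not check_command(*t)]
print(alerts)  # the breaker command and the unfamiliar source are flagged
```

Even this crude check would surface several of the steps described above (new command types, new source hosts, new connections), which is precisely why early detection is feasible: the attacker's preparation phase is long and noisy compared with the execution itself.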