CommVault Systems Inc.

10/02/2024 | News release

A Real-Life Cyber Attack: Investigating a Breach

To celebrate the 10th episode of our Strive podcast, I released an extended episode covering a real-life cyberattack, bringing to life some of the topics I've covered in previous episodes.

I investigated this attack personally during my time working in forensics and attack modeling, before my time at Commvault. Company names have been kept anonymous here for obvious reasons, but this illustration will bring you a little closer to how a breach unfolds in the real world.

An Attacker on the Inside

This breach began when the IT team noticed unusual activity on the network: a few alerts from an intrusion detection system (IDS) flagging potential anomalies. At first, it seemed like a false alarm. But as the alerts grew more frequent, the team realized they were facing something more substantial.

In this case, the alerts the IT team saw were an example of too little, too late. The activity was network traffic generated by criminals already communicating with their malware through command and control, sometimes referred to as C2.
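
As a rough illustration of the kind of pattern an IDS might flag, the sketch below looks for "beaconing": outbound connections from one host to one destination at suspiciously regular intervals. This is a minimal Python example, not the detection logic used in this incident, and it assumes connection logs have already been parsed into (timestamp, source, destination) tuples.

```python
# Minimal beaconing heuristic: flag host pairs whose outbound connections
# occur at suspiciously regular intervals, a common C2 traffic pattern.
# Thresholds and the input format are assumptions for illustration.
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(connections, min_events=20, max_jitter=0.1):
    """connections: iterable of (timestamp_seconds, src_ip, dst_ip)."""
    by_pair = defaultdict(list)
    for ts, src, dst in connections:
        by_pair[(src, dst)].append(ts)

    suspects = []
    for pair, times in by_pair.items():
        if len(times) < min_events:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        avg = mean(gaps)
        # Low relative jitter means metronome-like check-ins, worth a look.
        if avg > 0 and pstdev(gaps) / avg < max_jitter:
            suspects.append((pair, avg))
    return suspects
```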

The attackers had actually started their campaign against this organization about three months earlier, with phishing emails crafted to look like legitimate internal communications.

And of course, that's all it took: one employee clicking the embedded malicious link. The link delivered a remote access Trojan, a RAT, which enabled the attackers to establish a foothold in the corporate network, and the installation phase of the attack was complete.

Next, using a tool called Mimikatz, the criminals extracted credentials from system memory, gaining admin access to the corporate network. They could now move laterally, exploiting vulnerabilities in unpatched systems to deepen their foothold.
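
To make the lateral-movement stage a little more concrete, here is a hypothetical sketch of one way defenders hunt for it after a credential dump: looking for a single account that suddenly authenticates to far more hosts than usual in a short window. The input format (already-parsed logon records) and the thresholds are assumptions, not the tooling used in this investigation.

```python
# Rough lateral-movement heuristic: flag accounts that authenticate to an
# unusual number of distinct hosts in a short window, e.g. one admin
# account suddenly touching dozens of machines after a credential dump.
from collections import defaultdict

def flag_fanout(logon_events, window_secs=3600, max_hosts=10):
    """logon_events: iterable of (timestamp_seconds, account, target_host)."""
    by_account = defaultdict(list)
    for ts, account, host in logon_events:
        by_account[account].append((ts, host))

    flagged = {}
    for account, events in by_account.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            hosts = {h for t, h in events[i:] if t - start <= window_secs}
            if len(hosts) > max_hosts:
                flagged[account] = len(hosts)
                break
    return flagged
```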

Then they began remotely executing commands on machines and servers across the IT estate to stage the coming ransomware attack.

Once they had full control and had gathered all the information they needed about the targeted organization, roughly three months' worth of reconnaissance in this case, the attackers deployed ransomware known as Ryuk, notorious for its ability to encrypt entire networks very quickly.

The impact of the attack was immediate and devastating. Critical business applications ground to a halt within minutes. The organization could no longer access essential systems or critical data such as customer information, financial records, and internal documents.

The attackers then left a ransom note demanding a substantial payment in Bitcoin to decrypt the data. In response, the organization activated its incident response plan: isolating affected systems to prevent further spread of the Ryuk ransomware and engaging cyber security experts to conduct a thorough investigation.

The targeted company decided NOT to pay the ransom and instead to rely on system backups and disaster recovery techniques. The recovery process was complex and time-consuming; the organization had to rebuild much of its IT infrastructure almost from the hardware up.

Basic Steps to Recovery

The first step the organization took was containment, isolating infected systems to prevent the ransomware from spreading any further. Next came eradication, the business of removing the malicious software and cleaning up the network as much as possible using advanced anti-malware tools.

After several days of working on containment and eradication, the restoration process could finally start. Unfortunately, some critical backups had been compromised by the ransomware, and restoration was painstakingly slow.

It took eight days to ascertain where the last clean recovery points were for most of the critical business applications. A lot of recent data was lost, and the business was pretty close to breaking point by the time critical systems could be fully recovered.
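
For a sense of what "ascertaining the last clean recovery points" involves, the sketch below picks, for a single application, the newest recovery point that both predates the estimated time of compromise and has passed an integrity check. The catalog structure and field names here are hypothetical; real backup platforms expose this information through their own catalogs and APIs.

```python
# Illustrative only: given a backup catalog for one application and the
# estimated time of initial compromise, pick the newest recovery point
# that predates the compromise and has passed an integrity check.
# The catalog structure and field names below are hypothetical.

def last_clean_recovery_point(catalog, compromise_time):
    """catalog: list of dicts with a 'created' datetime and an
       'integrity_ok' flag; compromise_time: a datetime."""
    candidates = [
        rp for rp in catalog
        if rp["created"] < compromise_time and rp["integrity_ok"]
    ]
    return max(candidates, key=lambda rp: rp["created"], default=None)
```

The hard part in this incident was not the query itself but establishing the compromise time and verifying integrity for every application, which is why the process stretched to eight days.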

Once critical systems were back up and running, rigorous monitoring began. Enhanced monitoring solutions were implemented to catch any signs of residual malicious activity.

All of this relied on thorough communication with internal and external stakeholders about the breach itself and the steps being taken to mitigate its impact.

Lessons From the Breach

This incident underscores several critical lessons for organizations of all sizes:

Firstly, regular cyber security training for employees is essential. Healthy skepticism toward email and likely phishing attempts could have thwarted the initial breach three months before the first indicators of compromise appeared.

Having up-to-date, immutable backups stored offline, where they cannot be compromised, is completely vital. Regular testing of those backups would have ensured they could be relied upon in this emergency, and regular testing of restoration into a clean room would have saved this organization a great deal of recovery time.
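
As one hedged example of what "regular testing" can look like in practice, the sketch below verifies a test restore by comparing file hashes against a known-good manifest. The paths and manifest format are invented for illustration; this is not a description of any particular backup product's API.

```python
# Sketch of a scheduled restore check: restore into an isolated location,
# then compare file hashes against a known-good manifest.
# The manifest format and directory layout are assumptions.
import hashlib
from pathlib import Path

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(restore_dir, manifest):
    """manifest: dict mapping relative file path -> expected sha256 digest."""
    failures = []
    for rel_path, expected in manifest.items():
        restored = Path(restore_dir) / rel_path
        if not restored.exists() or sha256(restored) != expected:
            failures.append(rel_path)
    return failures  # empty list means the test restore checks out
```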

Implementing multi-factor authentication would have significantly reduced the risks associated with the credential theft we saw.

Employing advanced threat detection and anomaly detection techniques could have helped identify and mitigate threats in close to real time.
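
One very simple form of anomaly detection is statistical baselining. The hypothetical sketch below flags hosts whose outbound data volume for the day deviates sharply from their own recent history; real detection platforms are far more sophisticated, but the underlying idea is the same.

```python
# Simple statistical anomaly check: compare each host's outbound data
# volume today against its own recent baseline and flag large deviations.
# Input format and the z-score threshold are assumptions for illustration.
from statistics import mean, pstdev

def volume_anomalies(history, today, threshold=3.0):
    """history: dict host -> list of daily outbound byte counts (baseline);
       today:   dict host -> today's outbound byte count."""
    anomalies = []
    for host, baseline in history.items():
        if len(baseline) < 7:
            continue  # not enough history to judge
        mu, sigma = mean(baseline), pstdev(baseline)
        if sigma == 0:
            continue
        z = (today.get(host, 0) - mu) / sigma
        if z > threshold:
            anomalies.append((host, round(z, 1)))
    return anomalies
```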

The organization's full recovery took several months, during which it operated at significantly reduced capacity. The financial cost was immense, not only in recovery expenses, but also in lost business and reputational damage.

This attack serves as a stark reminder of the ever-present threats that surround all our businesses and of the necessity of vigilance and preparedness. Cybercriminals are constantly evolving their tactics, and we must stay one step ahead to protect our businesses and our data. Check out the full episode here.