
According to a ransomware survey report released in June by Keeper Security, 49% of companies hit by ransomware paid the ransom—and another 22% declined to say whether they paid or not. Part of the reason is the lack of backups—specifically, the lack of usable backups.

Backups must be safe from malware, quick and easy to recover, and include not just important files and databases but also key applications, configurations, and all the technology needed to support an entire business process. Most importantly, backups should be well-tested.

Here are eight steps to ensure a successful recovery from backup after a ransomware attack.

1. Keep the backups isolated

According to a survey by Veritas released last fall, only 36% of companies have three or more copies of their data, including at least one off-site. Keeping an “air gap” between the backups and the production environment is critical to keeping them safe from ransomware—and other disasters.

“We do see some of our clients that have on-prem backups that they run themselves, as well as cloud-based ones,” says Jeff Palatt, vice president for technical advisory services at MoxFive, a technical advisory services company. “But ideally, if someone has both, they don’t cascade. If the encrypted files get written to the local backup solution and then get replicated to the cloud, that doesn’t do you any good.”

Some cloud-based platforms include versioning as part of the product for no additional cost. For example, Office 365, Google Docs, and online backup systems like iDrive keep all previous versions of files without overwriting them. Even if ransomware strikes, and the encrypted files are backed up, the backup process just adds a new, corrupted version of the file—it doesn’t overwrite the older backups that are already there.

Technology that saves continuous incremental backups of files also means that little or no data is lost when ransomware hits. You just roll back to the last good version of each file from before the attack.
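The versioning idea can be sketched in a few lines. This is an illustrative model, not any particular vendor's API: every write appends a new version, nothing is overwritten, and a restore can ask for the last version written before a given point, such as the moment the ransomware struck.

```python
import itertools

class VersionedStore:
    """Minimal sketch of versioned backup storage: every write creates a
    new version and never overwrites an existing one."""

    def __init__(self):
        self._clock = itertools.count()   # monotonically increasing version ids
        self._versions = {}               # path -> list of (version_id, bytes)

    def backup(self, path, data):
        vid = next(self._clock)
        self._versions.setdefault(path, []).append((vid, data))
        return vid

    def restore(self, path, before=None):
        """Latest version, or the latest one older than `before`
        (e.g. the last version written before the attack)."""
        candidates = [(vid, d) for vid, d in self._versions[path]
                      if before is None or vid < before]
        if not candidates:
            raise KeyError(f"no version of {path} before {before}")
        return max(candidates)[1]

store = VersionedStore()
store.backup("report.docx", b"clean contents")
attack = store.backup("report.docx", b"\x00encrypted\x00")  # ransomware's copy
# The encrypted copy is just one more version; the clean one is still there.
assert store.restore("report.docx", before=attack) == b"clean contents"
```

The key property is that the ransomware's write lands as a new version rather than replacing the old one, which is exactly how versioned cloud storage blunts the attack.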

2. Use write-once storage techniques

Another way to protect backups is to use storage that can’t be written over. Use either physical write-once-read-many (WORM) technology or virtual equivalents that allow data to be written but not changed. This does increase the cost of backups since it requires substantially more storage. Some backup technologies only save changed and updated files or use other deduplication technology to keep from having multiple copies of the same thing in the archive.
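The WORM idea reduces to a store that accepts each key exactly once. Here is a minimal sketch; the class and method names are hypothetical, not a real storage API:

```python
class WormStore:
    """Sketch of write-once-read-many (WORM) semantics: a key can be
    written exactly once, so ransomware cannot replace a good backup
    with an encrypted one."""

    def __init__(self):
        self._objects = {}

    def write(self, key, data):
        if key in self._objects:
            raise PermissionError(f"{key} is write-once; refusing overwrite")
        self._objects[key] = data

    def read(self, key):
        return self._objects[key]

worm = WormStore()
worm.write("backup-2024-01-01.tar", b"good backup")
try:
    worm.write("backup-2024-01-01.tar", b"encrypted")  # attacker's attempt
except PermissionError:
    pass  # the overwrite is rejected; the original survives
assert worm.read("backup-2024-01-01.tar") == b"good backup"
```

Real WORM storage enforces the same rule in hardware or at the object-storage layer rather than in application code.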

3. Keep multiple types of backups

“In many cases, enterprises don’t have the storage space or capabilities to keep backups for a lengthy period of time,” says Palatt. “In one case, our client had three days of backups. Two were overwritten, but the third day was still viable.” If the ransomware had hit over, say, a long holiday weekend, then all three days of backups could have been destroyed. “All of a sudden you come in and all your iterations have been overwritten because we only have three, or four, or five days.”

Palatt suggests that companies keep different types of backups, such as full backups on one schedule combined with incremental backups on a more frequent schedule.
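The interplay between the two schedules can be sketched with content hashes standing in for the copied data. The function names are illustrative: a full backup snapshots everything, and each incremental backs up only what changed since the baseline.

```python
import hashlib

def full_backup(files):
    """Snapshot every file: path -> content hash (stand-in for copying data)."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def incremental_backup(files, baseline):
    """Back up only files that are new or changed since the baseline."""
    return {path: data for path, data in files.items()
            if hashlib.sha256(data).hexdigest() != baseline.get(path)}

files = {"a.txt": b"one", "b.txt": b"two"}
baseline = full_backup(files)     # weekly full, say

files["b.txt"] = b"two, edited"   # one file changes
files["c.txt"] = b"new file"      # one file is added
delta = incremental_backup(files, baseline)   # nightly incremental
assert set(delta) == {"b.txt", "c.txt"}       # only the changed/new files
```

Because the incrementals are small, they can run far more often than the fulls, which stretches how far back the retained history reaches for the same storage budget.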

4. Protect the backup catalog

In addition to keeping the backup files themselves safe from attackers, companies should also ensure that their data catalogs are safe. “Most of the sophisticated ransomware attacks target the backup catalog and not the actual backup media, the backup tapes or disks, as most people think,” says Amr Ahmed, EY Americas’ infrastructure and service resiliency leader.

This catalog contains all the metadata for the backups: the index, the bar codes of the tapes, the full paths to data content on disks, and so on. “Your backup media will be unusable without the catalog,” Ahmed says, and restoring without one would be extremely hard or impractical. Enterprises need to ensure that their backup solution includes protections for the backup catalog itself, such as an air gap.
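Why the catalog matters can be shown with a toy model: the media is an opaque byte stream, and only the catalog records where each file lives on it. The paths and barcode below are made up for illustration.

```python
# Sketch: backup media as an opaque byte stream; only the catalog knows
# where each file lives. Paths and the tape barcode are illustrative.
payloads = {"/srv/db.sql": b"-- dump --", "/etc/app.conf": b"port=8080"}

tape, catalog, offset = b"", {}, 0
for path, data in payloads.items():
    catalog[path] = ("TAPE001", offset, len(data))   # barcode, offset, length
    tape += data
    offset += len(data)
media = {"TAPE001": tape}

def restore(path):
    barcode, off, length = catalog[path]   # no catalog entry, no restore
    return media[barcode][off:off + length]

assert restore("/etc/app.conf") == b"port=8080"
# Destroy the catalog and the media becomes an undifferentiated byte
# stream: back up the catalog with the same care as the media itself.
```

An attacker who wipes only the catalog leaves the tapes physically intact yet practically unreadable, which is why catalog protection deserves its own air gap.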

5. Back up everything that needs to be backed up

When Alaska’s Kodiak Island Borough was hit by ransomware in 2016, the municipality had about three dozen servers and 45 employee PCs. All were backed up, says IT supervisor Paul VanDyke, who ran the recovery effort. All servers were backed up, that is, except one. “I missed one server that had assessed property values,” he says.

The ransom demand was small by today’s standards, just half a Bitcoin, which was then worth $259. He paid the ransom, but only used the decryption key on that one server, since he didn’t trust the integrity of the systems restored with the attackers’ help. “I assumed everything was dirty,” he says. Today, everything is covered by backup technology.

Larger organizations also have trouble ensuring that everything that needs to be backed up actually is. According to the Veritas survey, IT professionals estimate that, on average, they wouldn’t be able to recover 20% of their data in the event of a complete data loss. It doesn’t help that many, if not all, companies have a problem with shadow IT.

“People are trying to do their jobs in the most convenient and efficient way possible,” says Randy Watkins, CTO at Critical Start. “Oftentimes, that means running under the radar and doing things yourself.”

There’s only so much companies can do to prevent loss when critical data is sitting on a server in a back closet somewhere, especially if the data is used for internal processes. “When it comes to production, it usually hits the company’s radar somewhere,” says Watkins. “There’s a new application or a new revenue-generating service.”

Not every system is easy for IT to find and back up. Then ransomware hits, and things suddenly stop working. Watkins recommends that companies do a thorough survey of all their systems and assets. This will usually involve leaders from every function, who can ask their people for lists of all the critical systems and data that need to be protected.

Often, companies will discover that things are stored where they shouldn’t be stored, like payment data being stored on employee laptops. As a result, the backup project will often run concurrent with a data loss prevention project, Watkins says.

6. Back up entire business processes

Ransomware doesn’t just affect data files. Attackers know that the more business functions they can shut down, the more likely a company is to pay a ransom. Natural disasters, hardware failures, and network outages don’t discriminate either.

After they were hit by ransomware, Kodiak Island’s VanDyke had to rebuild all the servers and PCs, which sometimes included downloading and re-installing software and redoing all the configurations. As a result, it took a week to restore the servers and another week to restore the PCs. In addition, he only had three spare servers to do the recovery with, so there was a lot of swapping back and forth, he says. With more servers, the process could have gone faster.

A business process works like an orchestra, says Dave Burg, cybersecurity leader at EY Americas. “You have different parts of the orchestra making different sounds, and if they’re not in sequence with each other, what you hear is noise.”

Backing up just the data without backing up all the software, components, dependencies, configurations, networking settings, monitoring and security tools, and everything else that is required for a business process to work can make recovery extremely challenging. Companies too often underestimate this challenge.

“There’s a lack of understanding of the technology infrastructure and the interconnections,” says Burg. “An insufficient understanding of how the technology really works to enable the business.”

The biggest infrastructure recovery challenges after a ransomware attack typically involve rebuilding Active Directory and rebuilding configuration management database capability, Burg says. It used to be that if a company wanted a full backup of its systems, not just its data, it would build a working duplicate of its entire infrastructure—a disaster recovery site. Of course, doing so doubled infrastructure costs, making it prohibitively expensive for many businesses.

Today, cloud infrastructure can be used to create a virtual backup data center, one that costs money only while it is being used. And if a company is already in the cloud, setting up a backup in a different availability zone—or a different cloud—is even simpler. “These cloud-based hot-swap architectures are available, are cost effective, and are secure, and have a great deal of promise,” says Burg.

7. Use hot disaster recovery sites and automation to speed recovery

According to Veritas, only 33% of IT directors think they can recover from a ransomware attack within five days. “I know companies who are spending a lot of money on tapes and sending them off to Iron Mountain,” says Watkins. “They don’t have the time to wait an hour to get the tapes back and 17 days to restore them.”

A hot site, one that’s available at the flip of a switch, would solve the recovery time problem. With today’s cloud-based infrastructure, there’s no reason not to have one.

“It’s a no-brainer,” says Watkins. “You can have a script that copies your infrastructure and stands it up in another availability zone or another provider altogether. Then have the automation ready to go so that you hit play. There’s no restore time, just 10 or 15 minutes to turn it on. Maybe a full day if you go through testing.”

Why aren’t more companies doing this? First, there’s a substantial cost to the initial setup, Watkins says. “Then you need that expertise in house, that automation expertise and cloud expertise in general,” he says. “Then there are things like security controls that you need to set up ahead of time.”

There are also legacy systems that don’t transfer to the cloud. Watkins points to oil and gas controllers as an example of something that can’t be replicated in the cloud.

For the most part, the initial cost of setting up the backup infrastructure should be a moot point, Watkins says. “Your cost to set up the infrastructure is much less than paying the ransomware and dealing with the reputation damage.”

For companies struggling with this, one approach could be to focus on the most critical business processes first, suggests Tanner Johnson, principal analyst for data security at Omdia. “You don’t want to buy a million-dollar lock to protect a thousand-dollar asset,” he says. “Define what your crown jewels are. Establish a hierarchy and priority for your security team.”

There’s a cultural barrier to investing proactively in cybersecurity, Johnson admits. “We are a reactionary society, but cybersecurity is finally being seen for what it is: an investment. An ounce of prevention is worth a pound of cure.”

8. Test, test, and test again

According to Veritas, 39% of companies last tested their disaster recovery plan more than three months ago—or have never tested it at all. “A lot of people are approaching backups from a backup point of view, not a recovery point of view,” says Mike Golden, senior delivery manager for cloud infrastructure services at Capgemini. “You can back up all day long, but if you don’t test your restore, you don’t test your disaster recovery, you’re just opening yourself to problems.”

This is where a lot of companies go wrong, Golden says. “They back it up and go away and are not testing it.” They don’t know how long the backups will take to download, for example, because they haven’t tested it. “You don’t know all the little things that can go wrong until it happens,” he says.
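A restore drill can be automated along these lines: restore into a scratch location and compare checksums against the live data. This is a simplified sketch that uses a directory copy as a stand-in for a real restore step:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum_tree(root):
    """Map relative path -> SHA-256 for every file under root."""
    root = Path(root)
    return {p.relative_to(root).as_posix(): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

def verify_restore(source, backup):
    """Restore the backup into a scratch directory and compare checksums
    against the live data: a backup only counts if it restores correctly."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / "restored"
        shutil.copytree(backup, restored)  # stand-in for your real restore step
        return checksum_tree(source) == checksum_tree(restored)

# Drill on a tiny fake "production" tree and a backup copy of it.
with tempfile.TemporaryDirectory() as tmp:
    src, bak = Path(tmp) / "src", Path(tmp) / "bak"
    src.mkdir()
    (src / "data.txt").write_bytes(b"payroll")
    shutil.copytree(src, bak)
    clean_ok = verify_restore(src, bak)        # intact backup restores cleanly
    (bak / "data.txt").write_bytes(b"\x00")    # simulate silent corruption
    corrupt_ok = verify_restore(src, bak)      # the drill catches it
assert clean_ok and not corrupt_ok
```

Running a drill like this on a schedule also surfaces the operational unknowns Golden mentions, such as how long the download and restore steps actually take.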

It’s not just the technology that needs to be tested, but the human element as well. “People don’t know what they don’t know,” Golden says. “Or there’s not a regular audit of their processes to make sure that people are adhering to policies.”

When it comes to people following required backup processes and knowing what they need to do in a disaster recovery situation, the mantra, Golden says, should be “trust but verify.”


All rights reserved Jenson Knight.