FireEye CEO Kevin Mandia, center, speaks on a panel with former director of the NSA and commander of the US Cyber Command, Keith Alexander, and founder and executive chairman of Lookout, John Hering, at the Vanity Fair New Establishment Summit in 2014. (Kimberly White/Getty Images for Vanity Fair)

The Sunburst espionage campaign that breached FireEye and several government agencies was devious about operational security. To protect useful attack vectors through SolarWinds, Microsoft, and VMware, the hackers made every effort not to reuse infrastructure or configurations, and not to tie one stage of the attack to another.

When Joe Slowik, senior security researcher at DomainTools, looked at the command and control infrastructure, there were only very loose patterns to be found. The domains were a broad mix – some compromised, some newly established – registered through different services and hosted on different IPs. There was no way to leverage that information into a list of filterable domains. Even if defenders had known the hackers were coming, there would have been few traditional indicators of compromise (IoCs) to watch for.

But, he said in a new blog post, there was plenty of useful information for network defenders willing to use domain data for more than IoCs. By paying additional attention to network traffic from critical systems to sites with unusual mixes of domain age, provider, hosting location, registrar, authoritative name server, or SSL/TLS certificate, filtering becomes feasible. The idea is underutilized but not necessarily new.
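As a rough illustration of the idea – not Slowik's or DomainTools' actual tooling; the field names, weights, and sample record below are invented for this sketch – a defender could score each domain seen in outbound traffic on how anomalous its registration profile looks:

```python
from datetime import date

# Hypothetical enrichment record for a domain seen in outbound traffic.
# All thresholds and weights here are illustrative, not from the article.
def suspicion_score(rec: dict, today: date) -> int:
    score = 0
    age_days = (today - rec["registered"]).days
    if age_days < 30:                              # very recently registered
        score += 2
    if rec["registrar"] not in rec["org_common_registrars"]:
        score += 1                                 # registrar the org rarely sees
    if rec["cert_age_days"] < 7:                   # brand-new TLS certificate
        score += 1
    if rec["nameserver_changed_recently"]:
        score += 1                                 # fresh authoritative NS change
    return score

rec = {
    "registered": date(2024, 12, 20),
    "registrar": "ExampleRegistrar",               # hypothetical registrar name
    "org_common_registrars": {"MarkMonitor", "CSC"},
    "cert_age_days": 3,
    "nameserver_changed_recently": True,
}
print(suspicion_score(rec, date(2025, 1, 2)))      # prints 5: worth a closer look
```

No single attribute is an IoC on its own; it is the unusual combination, weighed against what the organization normally sees, that earns the traffic a review.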

SC Media spoke to Slowik about weaponizing network observables against sophisticated attackers.

For average CISOs, why didn’t indicators of compromise work? What goes wrong if I just stick to IoCs?

Slowik: That might be a perfectly acceptable answer in certain cases. I’m not saying that that’s completely wrong, but for other situations like what we’re seeing with the Sunburst activity, short of detecting the initial DNS beacons, you’re basically hosed. This actor was very deliberate in selecting unique infrastructure on a per-victim and even potentially on a per-host basis. So even aside from some more general criticisms of an indicator approach – it’s reactive, it’s potentially backward-looking – there’s the very real and demonstrated (quite well in this case) concern of indicators being very victim specific.

We even see this to a certain extent with ransomware, with a lot of the entities shifting into living-off-the-land actions or using things like Cobalt Strike for post-intrusion operations and then deploying a single ransomware build nearly simultaneously across the network. That build, with the appropriate decrypter associated with it, is designed for that victim. So alerting on that hash is not going to get you very far.
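The per-victim build point is easy to demonstrate: embedding even one victim-specific value (a per-victim key, say) in an otherwise identical payload changes its hash, so a hash IoC shared from one victim's incident never fires at the next. A toy illustration (the payload and keys are obviously made up):

```python
import hashlib

PAYLOAD_TEMPLATE = b"RANSOMWARE-BODY..."   # stand-in for identical malicious code

def build_sample(victim_key: bytes) -> bytes:
    # Same "code" in every build, but a different embedded per-victim key
    return PAYLOAD_TEMPLATE + victim_key

h_a = hashlib.sha256(build_sample(b"victim-A-key")).hexdigest()
h_b = hashlib.sha256(build_sample(b"victim-B-key")).hexdigest()
print(h_a != h_b)  # True: victim A's hash IoC is useless at victim B
```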

Sunburst was a substantial operation from a substantial operator. Most lawmakers believe the attack to be from Russian intelligence. Was avoiding indicators of compromise a one-off approach, or will this become the new normal?

Slowik: I can turn that around a little bit and say this is probably not the first time we’ve seen threat actors do this. We just haven’t caught up. That’s the painful and kind of scary part.

It really wasn’t until the threat actor in this case got a little overconfident in the FireEye environment and tried to create their own MFA token that they got caught. If that hadn’t happened, we might not be talking about it right now.

Even though Microsoft has certainly done a lot of really good research on this, they seem to have been caught off guard. We see them come out with more details and identify scarier aspects of this intrusion as time goes on. The Microsoft blog that came out earlier this week really emphasized the differentiation between the Sunburst backdoor and the subsequent Cobalt Strike loader – the actor tried to minimize as many links between those two as possible to preserve the Sunburst capability.

So I think that this isn’t necessarily something new, but this event is a wake-up call that ‘I’m going to need to be adapting to this sort of threat;’ that not everything’s going to be some Bulgarian ransomware group smash-and-grab operation. There are entities out there who can engage in low and slow operations that are purpose-built to be difficult to detect or defend against.

Walk us through how information that isn’t enough to form an IoC can still be enough for a network defender.

Slowik: So at a high level, I think professionals in the CTI [cyber threat intelligence] and information security fields are pretty used to it. In fact I gave a talk about this this morning at the SANS CTI Summit and the FireEye people gave one specifically with Sunburst on this as well.

We’re used to this idea of pivoting on indicators to try to find more indicators. And we do that through characteristics around known bad observations. In the Sunburst case we have a really interesting combination: aged, seasoned – I don’t know how you want to phrase it – domains that were registered in many cases several years prior to when events took place, using fairly generic tells for registration, registrar, nameserver and other components, and then hosted in prominent cloud computing environments like Azure and AWS.

So trying to find additional external infrastructure with that information is not just hard, it’s almost impossible. However, from an internal perspective, I can see I have a critical system that’s resolving a newly observed domain that has these sketchy tendencies in terms of hosting, registration patterns, etc. I maybe don’t have to be able to go to the level of fidelity where I can say this is APT 28.

If we can do that sort of enrichment, certainly, we still have a lot of questions to answer. Why are we seeing it? Who is it linked to? Those sorts of items. But we can at least get a relatively high fidelity or high confidence assessment of likely malicious activity just based on that information. So from an internal perspective with both an understanding of who’s communicating as well as where that communication is going… we can really build out some very powerful detection possibilities.
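A minimal sketch of what that internal detection could look like – joining DNS lookups from critical systems against a passive-DNS-style "first seen" feed. The log schema, host names, domains, and 30-day cutoff are all invented assumptions for illustration:

```python
from datetime import date

# Illustrative only: host list, domains, and threshold are assumptions.
CRITICAL_HOSTS = {"dc01", "adfs01", "build01"}
NEWLY_OBSERVED_CUTOFF_DAYS = 30            # assumed "newly observed" window

def alerts(dns_log, first_seen, today):
    """Flag DNS lookups from critical systems for newly observed domains.

    dns_log: iterable of (host, domain) tuples
    first_seen: dict mapping domain -> date the domain was first observed
    """
    out = []
    for host, domain in dns_log:
        if host not in CRITICAL_HOSTS:
            continue                       # only watch the crown jewels here
        seen = first_seen.get(domain)
        if seen is None or (today - seen).days <= NEWLY_OBSERVED_CUTOFF_DAYS:
            out.append((host, domain))     # critical host -> brand-new domain
    return out

log = [
    ("dc01", "avsvmcloud.example"),        # hypothetical C2-like domain
    ("laptop7", "news.example"),           # non-critical host, ignored
    ("build01", "cdn.example"),            # long-established domain, ignored
]
first_seen = {"avsvmcloud.example": date(2025, 1, 1),
              "cdn.example": date(2019, 5, 2)}
print(alerts(log, first_seen, date(2025, 1, 10)))
# [('dc01', 'avsvmcloud.example')]
```

The design choice matches Slowik's framing: rather than matching known-bad indicators, the alert fires on the combination of a sensitive source and an unfamiliar destination, which survives attackers rotating infrastructure per victim.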

At what degree of maturity does an organization need to be to make this model work?

Slowik: I would say once an organization has a security team in place and the visibility to see what’s going on. Then it’s important to start having that conversation: we’re running an EDR and all these other things… what do we do with that data?

I’ll be quite frank: for a good number of organizations, this might be a conversation that doesn’t go very far, because of expense or because they have a limited number of security resources. For the top one percent of firms, this is a conversation that’s already taken place.
