A GoDaddy office location in Sunnyvale, California (GoDaddy Inc.).

Companies can protect employees from phishing schemes through a combination of training, secure email gateways and filtering technologies. But what protects workers from phone-based voice phishing (vishing) scams, like the kind that recently targeted GoDaddy and a group of cryptocurrency platforms that use the internet domain registrar's services?

Experts indicate that there are few easy answers, but organizations intent on putting a stop to such activity may have to push for more secure forms of verification, escalation procedures for sensitive requests, and better security awareness of account support staffers and other lower-level employees.

According to a report by security expert Brian Krebs, scammers called up GoDaddy posing as representatives of legitimate cryptocurrency platforms, and tricked employees of the internet domain registrar into changing account information so that email and web traffic intended for these platforms would instead be directed to attacker-controlled domains.

Experts warn that live social engineering calls are especially difficult to suss out: there isn't time to notice suspicious behavior in the middle of a conversation, and perpetrators are often very convincing.

Rob Fry, chief technology officer at Armorblox, told SC Media that these social engineering attacks have been perfected to the point where “detailed knowledge of the target is used to make the attack believable, while leveraging the psychology of impersonating or representing someone with authority [in order to create] urgency to motivate your target.”

And unlike with an email-based phish, where an employee might be observant enough to spot a telltale red flag such as multiple typos or a wrong sender address, there's little time to key in on suspicious circumstances in the midst of a dialogue.

“The ones that give you really obvious clues… well, they deserve to fail,” said Peter Cassidy, secretary-general of the Anti-Phishing Working Group (APWG). “The ones that make a business of it – and it is a business – will seem as natural a customer as the last hundred real customers you took care of.”

“When you’re on a live call, and the phone number is spoofed, and you’re a… customer service rep, it’s really hard to figure it out,” said Vijay Balasubramaniyan, CEO of voice biometric authentication company Pindrop Security and an expert in voice technology and phone fraud.

Balasubramaniyan said his company caters to clients such as private wealth advisors who often express unjustified confidence that they cannot be fooled by scammers because they believe they are intimately familiar with their high-end clients’ voices. “And we had to show them examples of where the fraudsters have beaten them,” he said.

Balasubramaniyan said in the early days of his company, approximately one in every 3,000 calls was fraudulent, but the frequency has increased over the last five years to about one in every 600 calls. “That means 99.9 percent of the calls that you are getting as a customer service rep are good. It’s the 0.1 percent that turn out to be bad and do all of this damage,” he said.

The problem is: “Humans are not good at being able to detect that small, tiny 0.1 percent. No matter how much you train them,” Balasubramaniyan continued. And even if you happen to catch them, “they just hang up and call again.”

“These guys get super sophisticated. We’ve seen attacks where fraudsters have actually synthesized the voice of a CEO, in order to sound like the CEO,” he continued. “Especially if the reward is significant, they go to crazy lengths… And here’s the kicker: they have all the time in the world” to perfect and plot out and improve their schemes.

A July bulletin from the FBI and the Cybersecurity and Infrastructure Security Agency warned that vishing scammers looking to take advantage of COVID-19 working conditions were calling up organizations’ remote employees pretending to be their company’s help desk and sending them a purported new VPN link that in actuality would take them to a phishing website. This tactic is how adversaries were able to execute a series of Twitter verified account takeovers last July in order to promote a cryptocurrency scam.

The best of these fraudsters “will be wonderful people and great conversationalists, and have a lot of information. And they seem very natural in the place of a customer,” said Cassidy. “Scam artists are artists. They create a whole new reality about their problem, which is now your problem and, of course, you’re honored to help them out.”

According to the FBI/CISA bulletin, some of these actors compile dossiers on the people they plan to imitate in order to appear even more genuine. Indeed, Balasubramaniyan recalled how one busted West African phishing gang kept 20-page notebooks on individuals whom they were trying to defraud. “They had all the addresses where they lived, who they were married to, what mortgages they’d taken out,” he said. When listening to the recordings of these calls, “You could hear the person flipping the pages of the notebook to get to the answer or get to the right page.”

There’s plenty of data available for attackers to collect on their targets simply circulating around the dark web as a result of past data breaches. Then again, a lot of useful intelligence on potential victims is left right out in the open by the targets themselves.

If you’re an attacker, “it’s very much worth your while to at least track these people down on Facebook and Twitter” and gather up intel on targets via social media, said Cassidy.

Lower-level employees, including those who work in back-office, clerical and IT/help desk support roles, are a particularly significant source of risk, said Cassidy. These employees are “so overworked, they’re so overwhelmed with the amount of tickets they have to process each day, they don’t have enough time to really consider the perspective of what the data or change of data in the hands of this person who claims to be a customer could do to the [real] customer, or the customer’s customers, if you’re dealing with things like [cryptocurrency] exchanges.”

The false assumption that “I’m not important enough to be the target of a spear phishing attack is a fatal misconception of back office, clerical and [IT/help] support people,” Cassidy continued. “If you look at what authority that [these employees] can give to the wrong parties, it could be just as devastating as a CEO” being targeted.

Skepticism is your friend

It may be hard to train employees to identify a vishing call while actively engaged on the phone, but experts say you can instruct them to treat requests to change passwords and account access with appropriate caution.

“Approach with skepticism all calls that try to use your authority to do something on [the caller’s] behalf,” said Cassidy.

“Attackers do everything possible to sell themselves as believable and trustworthy, but an attacker’s facade usually only consists of one or two layers of information. When you ask questions, the clues about their credibility will follow,” said Fry. “To mitigate these attacks, employees should be trained to be suspicious by default.”

Additionally, experts suggest companies like GoDaddy further shield their employees through a combination of policies and technologies that must be satisfied before a major account change can be instituted.

“If a process takes a couple of extra steps, maybe that’s something that the industry has to get used to,” said Cassidy. “The trade-off should be between the consequences of the change of service or the change of data, and what trouble it causes the customer. I think it’s a trade-off that should be known, and it should be conventionalized in industry.”

For instance, some companies require that customers who call a help desk share a previously established password or piece of personal information before they can make online account changes. Another option is to mandate some form of multi-factor authentication, such as reading out a passcode that is sent to the customer’s mobile device.
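As a rough sketch of the passcode option: the specifics below (six digits, a five-minute expiry, the function names) are illustrative assumptions, not any particular registrar's actual procedure. The key points are that the code is random, short-lived, delivered out of band to the customer's registered device, and compared in constant time.

```python
import hmac
import secrets
import time

OTP_TTL_SECONDS = 300  # assumed policy: passcode expires after five minutes

def issue_passcode() -> tuple[str, float]:
    """Generate a random six-digit passcode and record when it was issued.

    In practice the code would be delivered to the customer's registered
    mobile device out of band (SMS or push), never read aloud by the agent.
    """
    return f"{secrets.randbelow(1_000_000):06d}", time.time()

def verify_passcode(expected: str, issued_at: float, submitted: str) -> bool:
    """Accept the code the caller reads back only if it is fresh and matches."""
    if time.time() - issued_at > OTP_TTL_SECONDS:
        return False
    # Constant-time comparison avoids leaking how many digits matched.
    return hmac.compare_digest(expected, submitted)
```

Because the code goes to a device the real customer controls, a visher who only knows breached personal details cannot read it back, which is precisely the gap this check is meant to close.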

“To prevent similar attacks in the future, it is imperative that organizations remove any implicit trust and establish context-based access permissions,” recommended Mike Riemer, chief security architect at Pulse Secure. “These are two of the driving principles of zero trust, which allows companies to ensure continuous, contextual security by verifying and re-verifying users to ensure they are who they truly say they are and prevent outsiders from obtaining unauthorized access to the network. The zero trust principle dictates that no connectivity is allowed until a user is authenticated, their endpoint is validated, and application access is verified for that individual, stopping cybercriminals from gaining access.”

Still, attackers can often defeat verification procedures by, as previously mentioned, digging up personal information online, and they can potentially overcome MFA by performing a man-in-the-middle attack.

Companies could also institute role-based security policies that require lower-level employees to escalate certain sensitive account requests to a supervisor. After all, the power to grant a caller access to an account should not be given out lightly, and the concept of zero trust applies not just to strangers but also to insiders within your organization. “Enterprises have to look at who has authority to give those keys to the kingdom to customers, would-be customers, attackers and would-be attackers,” said Cassidy.
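A role-based escalation policy of this kind can be as simple as a lookup table mapping request types to the minimum role allowed to approve them. The request types, role names and thresholds below are hypothetical examples, not taken from any company's actual policy; the design choice worth noting is that unrecognized request types default to escalation rather than approval.

```python
# Hypothetical request types mapped to the minimum role allowed to handle them.
SENSITIVE_REQUESTS = {
    "password_reset": "agent",
    "email_forwarding_change": "supervisor",  # the kind of change abused in the GoDaddy attacks
    "dns_record_change": "supervisor",
    "account_ownership_transfer": "manager",
}

ROLE_RANK = {"agent": 0, "supervisor": 1, "manager": 2}

def requires_escalation(request_type: str, agent_role: str) -> bool:
    """Return True if this request must be escalated above the agent's role.

    Unknown request types fall through to the highest rank, so anything the
    policy has not explicitly classified gets escalated by default.
    """
    needed = SENSITIVE_REQUESTS.get(request_type, "manager")
    return ROLE_RANK[agent_role] < ROLE_RANK[needed]
```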

“That is one option – escalating it to a more knowledgeable person,” said Balasubramaniyan. “But the question is: What’s the volume of calls that requires that?”

For example, he said, password resets represent a security issue, “but the number of password resets a big organization gets is so large that you can’t escalate each one of them. In fact, there are companies that we serve, where 50 percent of calls coming into their customer support center are password resets.”

Another option is to incorporate anti-fraud technology and procedural security checks into the verification process. Such technologies, which include Pindrop’s services, might try to cut down on spoofing by determining a device’s true location and comparing it against its caller ID information, or they might track certain repeat callers’ behaviors and account relationships over time as a way to monitor for suspicious activity, such as a sudden flurry of calls.

There’s also voice authentication, through which organizations can verify the identity of the person on the phone through voice biometrics (provided the individual agreed to submit their voice for verification in the first place).

SC Media asked Balasubramaniyan how Pindrop’s solutions are designed to stop voice phishing attacks like the one perpetrated against GoDaddy.

“Signals would have fired off if these guys were trying to spoof phone numbers as they often do to impersonate certain phone numbers. Those flags would have gone off saying this phone number that’s calling in isn’t what it appears to be,” he said. “Then the second flag that would have gone off is velocity checks. And it’s not just that this person is calling many, many times. [Rather,] I’m seeing the same source device call on behalf of multiple accounts. Why is the same source calling about multiple accounts?”
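The velocity check Balasubramaniyan describes can be sketched as a simple counter of distinct accounts seen per source device. This is a deliberately minimal illustration, not Pindrop's implementation: real systems combine many signals, and the threshold here is an arbitrary assumption.

```python
from collections import defaultdict

class VelocityMonitor:
    """Flag source devices that call in about an unusual number of accounts.

    A one-signal sketch of a velocity check: the same device calling on
    behalf of many distinct accounts is the pattern being flagged.
    """

    def __init__(self, max_accounts_per_device: int = 2):
        # Threshold is illustrative; a real system would tune it per call center.
        self.max_accounts = max_accounts_per_device
        self.device_accounts: dict[str, set[str]] = defaultdict(set)

    def record_call(self, device_id: str, account_id: str) -> bool:
        """Record a call and return True if the device now looks suspicious."""
        self.device_accounts[device_id].add(account_id)
        return len(self.device_accounts[device_id]) > self.max_accounts
```

A legitimate customer calling repeatedly about their own account never trips this check; only a device reaching across multiple unrelated accounts does, which matches the pattern described in the quote above.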

Finally, if the customer had previously instituted a voice authentication check, then the voice on the call would not have matched the source device, and the call support staffer would be warned “that this is not an authorized rep calling from this customer.” And that’s when it actually makes perfect sense to escalate the call to a supervisor, Balasubramaniyan added.

According to security expert Brian Krebs, the GoDaddy phishing and account redirection campaign began around Nov. 13 and affected the cryptocurrency trading service Liquid.com and the cryptomining service NiceHash. Further investigation suggests that Bibox.com, Celsius.network and Wirex.app were also affected, though these platforms did not respond to Krebs’ request for comment.

Liquid and NiceHash addressed the incidents in respective company statements. NiceHash reportedly also noted that the attackers tried to abuse their unauthorized email access to reset passwords for certain third-party services including Slack and GitHub.

SC Media reached out to GoDaddy Inc., which supplied the following statement: “During a routine audit of account activity, [we] identified potential unauthorized changes to a small number of customer domains and/or account information. Our security team investigated and confirmed threat actor activity, including social engineering of a limited number of GoDaddy employees. We immediately locked down the accounts involved in this incident, reverted any changes that took place to accounts, and assisted affected customers with regaining access to their accounts.”

“As threat actors become increasingly sophisticated and aggressive in their attacks, we are constantly educating employees about new tactics that might be used against them and adopting new security measures to prevent future attacks,” the statement continued. “GoDaddy is committed to protecting our customers’ data and the security of our infrastructure, and our teams are vigilantly monitoring for attacks and potential vulnerabilities.”
