A hacking collective compromised roughly 150,000 internet-connected surveillance cameras from Verkada, Inc., granting them access to live and archived video feeds across multiple organizations, including manufacturing facilities, hospitals, schools, police departments and prisons.
Hacktivist Tillie Kottmann is reportedly among those asserting responsibility for the incident, telling Bloomberg that their act helped expose the security holes of modern-day surveillance platforms. This claim is hard to dispute – and now experts are weighing in on the potential ramifications that can befall an organization if security footage is leaked or falls into the wrong hands.
“Today, there are more than 1 billion surveillance cameras in use around the world and security is an afterthought in many of them, resulting in spying and unlawful monitoring of unsuspecting victims,” said Sam Curry, chief security officer at Cybereason.
According to thought leaders, the compromise of video data could result in intellectual property theft, physical security threats, privacy violations, extortion and perhaps regulatory punishment. Making matters worse, the cameras employ facial recognition technology, which leads to questions as to whether an attacker could actually identify individuals caught on camera and then pursue them as targets for social engineering schemes or something even more nefarious.
When surveillance leads to spying
Stolen Verkada video footage viewed by Bloomberg included images of what was reported to be workers on an assembly line at a Tesla warehouse in Shanghai. However, Tesla later told Reuters that the video was actually from a supplier’s production site in Henan province, and that its Shanghai factory and showrooms were not impacted.
Still, Kottmann said they had access to 222 cameras installed in Tesla factories and warehouses. And businesses like web performance company Cloudflare and identity and access management provider Okta were also reportedly using Verkada cameras in their respective work environments.
Such revelations raise the question of whether a more insidious actor could perform a similar hack in order to conduct industrial espionage by spying on development and production activity – or perhaps monitor the movements of workers, management and on-site security personnel in order to stage a physical break-in at a later time.
“When an attacker gains access to surveillance cameras, the amount of knowledge which stands to be gained could be vast and poses a very real physical security threat,” said James Smith, principal security consultant and head of penetration testing at Bridewell Consulting. “The opportunities for a criminal are immense if they’re able to study shift patterns of employees, opening and closing times and regular deliveries of high-value goods, for example.”
“It would be possible, on detailed examination of video, to compromise elements of operational security,” agreed Mike Hamilton, co-founder and chief information security officer of CI Security and former Seattle CISO. “For example: passwords being typed or posted, specific motions or commands used to activate control systems to open or unlock doors, etc.”
Individual workers’ patterns and habits could be studied as well, to their detriment.
“Even if footage isn’t widely released, facial recognition technology is well advanced enough to identify specific individuals within obtained footage, which can lead to a whole host of issues for those people,” Smith continued.
Consider, for example, a leaked video seen by Bloomberg, in which eight staffers at the Florida hospital Halifax Health appeared to tackle a man and pin him to a bed. Or another video in which Massachusetts police officers were questioning a handcuffed man in custody. Kottmann reportedly even posted some of the videos on Twitter, which later deleted the hacker’s account and their offending tweets.
Episodes like this bring to mind major privacy implications, as sensitive footage of prisoners or patients in a hospital or mental health facility could be used to embarrass and ultimately extort individuals.
“It’s hard to overstate the scale of privacy harms that can come from a hack of this magnitude,” said John Davisson, senior counsel at the Electronic Privacy Information Center, or EPIC. “It is deeply invasive for anyone who’s captured on film.”
Viewing these videos, adversaries can begin to compile metadata about an individual’s behaviors and preferences – intel that could be applied toward targeted phishing campaigns, according to Setu Kulkarni, vice president of strategy at WhiteHat Security. “They could use this metadata to construct a picture of an individual’s social and physical environment – enough to answer security questions to gain control of individuals’ online accounts,” Kulkarni continued. “The one that scares me the most is that with this data and its analysis, adversaries could perpetrate not only cybercrimes, but also physical crimes like looting or kidnapping.”
Indeed, “It’s easy to imagine how this footage could be used to, at a minimum, infer something about someone’s personal health,” said ExtraHop CISO Jeff Costlow. “You also have to consider whether those cameras were positioned in such a way that they might have captured information on a medical chart, or even badge information from a hospital employee. That type of information can be extremely valuable for things like identity theft.”
Kulkarni even suggested the footage could be enough to develop “deep-fakes that could impersonate you.”
Costlow agreed, adding, “Deepfakes are becoming increasingly common. Could this footage be manipulated to make it seem like someone was in a facility when they shouldn’t have been? Or make it appear that they have a health condition? You can imagine the reputational harm that could be caused by something like this.”
Some experts speculated that certain privacy laws and regulations could have been violated in the incident. “Odds are more than one was breached here,” said Davisson.
“I would say that you’re talking about state data breach laws, state and federal laws against unfair and deceptive trade practices, [and] potentially HIPAA liability for health institutions that were relying on a system that was using inadequate security protocols,” Davisson continued. “If I were a state attorney general or a consumer protection official at the state or federal level, I would certainly take a very close look at what’s happened here and I would think there have to be lawsuits and enforcement proceedings coming.”
“Expect lots of audits, lots of additional investigation, and probably downstream fines,” said Steve Moore, chief security strategist at Exabeam.
Of course, the possible threat of fines could open up yet another avenue for attackers to make ill-gotten profits.
“As privacy statutes begin to proliferate at the state level – with associated gigantic fines – it may become more common to have embarrassing video stolen and used to extort the victim for an amount that is less than what a fine would be,” said Hamilton. “These privacy statutes are mainly focused on web tracking, but video may be in scope as well.”
Points of weakness
In an update to its official statement on Wednesday, San Mateo, California-based Verkada confirmed that the attackers obtained illegal access from March 7-9 via “a Jenkins server used by our support team to perform bulk maintenance operations on customer cameras, such as adjusting camera image settings upon customer request.”
Through this server, the attackers “obtained credentials that allowed them to bypass our authorization system,” the statement continued.
According to reports, there were several areas of weakness that allowed the hacking collective, known as APT 69420 Arson Cats, to hijack the footage. Experts say that organizations should apply these findings to improving their own internal security policies and their surveillance camera set-ups.
For starters, the hackers gained access to such a vast number of Verkada camera networks through a compromised “Super Admin” account, whose credentials Kottmann says were found publicly exposed on the internet. Thought leaders advise reducing or eliminating the use of these skeleton key-like accounts.
“Super Admin accounts or top-level accounts should be limited in access to those that explicitly need it,” said Smith.
“What did Verkada do wrong? They allegedly didn’t have control over the one account they needed to,” said Patrick Hunter, director of sales engineering, EMEA, at One Identity. “The biggest error was underestimating the power of one single account to undo their business and grant access to everyone’s data. At the very least, there should have been some form of multi-factor authentication or password vault to protect the [server] account. Whenever an admin accessed it, they would have to prove that they were who they said they were, which is a simple, cheap and effective first line of defense.”
“Or, even better, just offer the admins a session that they can use without ever knowing a password,” Hunter continued. “This makes it more difficult to hack, as no one knows the password and it will be encrypted in a deeply secured vault.”
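The multi-factor protection Hunter describes for admin accounts is typically built on standard one-time-password algorithms. As a rough illustration only – not a description of Verkada’s or One Identity’s actual systems – the sketch below implements the RFC 4226/6238 one-time-password schemes from the Python standard library and shows how an admin login might require both a password check and a fresh code:

```python
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(key: bytes, interval: int = 30) -> str:
    """RFC 6238: HOTP driven by the current Unix time."""
    return hotp(key, int(time.time()) // interval)


def verify_admin_login(key: bytes, password_ok: bool, submitted_code: str) -> bool:
    # Admin access requires BOTH a valid password and a fresh one-time code;
    # a leaked password alone (as in this incident) is no longer enough.
    return password_ok and hmac.compare_digest(totp(key), submitted_code)
```

A production deployment would also handle clock drift and rate-limit failed attempts, but even this minimal second factor would have blocked a login based on exposed credentials alone.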
“Following best security practices, they should have added in layers of protection by segmenting the admins’ privileges to avoid situations like this,” added Costlow. “No one wants to be breached, but in the case that you are, you certainly don’t [want] the adversary to gain complete, unfettered access. By breaking up controls, you’re able to build a much more resilient security practice.”
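The privilege segmentation Costlow recommends is usually expressed as role-based access control, where each role carries only the permissions its tasks require. The role names and permission set below are hypothetical, purely to illustrate the principle – note that no single role combines support tooling with tenant-wide video access:

```python
from enum import Enum, auto


class Permission(Enum):
    VIEW_LIVE = auto()
    VIEW_ARCHIVE = auto()
    ADJUST_SETTINGS = auto()
    MANAGE_USERS = auto()


# Hypothetical roles: each holds the minimum permissions for its job.
ROLE_PERMISSIONS = {
    "viewer": {Permission.VIEW_LIVE},
    "investigator": {Permission.VIEW_LIVE, Permission.VIEW_ARCHIVE},
    "support": {Permission.ADJUST_SETTINGS},  # bulk maintenance, but no footage
    "org_admin": {Permission.VIEW_LIVE, Permission.VIEW_ARCHIVE,
                  Permission.MANAGE_USERS},
}


def is_allowed(role: str, permission: Permission) -> bool:
    """Deny by default: unknown roles get no permissions."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Under a scheme like this, compromising the support tooling would yield the ability to adjust camera settings, but not to pull live or archived video from every customer.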
Another issue was the hackers’ ability to obtain root access on some cameras, enabling them to execute their own code and commands on the devices. According to Bloomberg, this didn’t require any additional hacking because root access was already a built-in feature.
But it’s not a feature, said Costlow. “It’s a bug. And one that should be addressed immediately by the cameras’ supplier. Separation of duties and the least access principle apply again. There is no reason why this functionality should exist for general users of the product, especially without some sort of heightened credentials or multi-factor authentication. It’s best practice to keep a different set of credentials for each device because of exactly this risk.”
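Costlow’s point about keeping a different set of credentials for each device can be sketched simply: provision a strong random secret per camera at enrollment, so that one compromised device or credential never unlocks the rest of the fleet. This is a generic illustration, not Verkada’s provisioning process:

```python
import secrets


def provision_device_credentials(device_ids: list[str]) -> dict[str, str]:
    """Issue a unique, cryptographically random secret per camera.

    Compromising any one credential grants access to exactly one device,
    instead of root on the whole fleet.
    """
    return {dev: secrets.token_urlsafe(32) for dev in device_ids}
```

In practice these secrets would live in a vault or hardware element rather than a dictionary, and root shells would be disabled entirely on shipped firmware.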
“This is a design failure,” agreed Kulkarni. “It is likely that the [role-based access control] framework is easier to design and implement for software systems, but when it comes to OT/IoT devices, wrong assumptions are made around how the devices will be accessed and how limited the access to these devices is. These devices should be considered as an integral part of the software system and should be subject to the same design principles one has in secure software.”
“Look at the Mac operating system. You can enable root access, but you have to jump through a lot of security hoops just to activate it,” noted Terry Dunlap, CSO and co-founder at ReFirm Labs.
Thirdly, the hackers had access to both live and archived camera footage. Experts noted that, after a certain amount of time, sensitive archived footage can be segregated and separately stored, or deleted entirely.
“Several security camera vendors store footage in the cloud. Depending on the vendor and service it can be set to be purged after a certain amount of time by the end user. All sensitive data should only be stored for the amount of time required and in accordance with any data privacy policies,” said Smith.
“Long-term data storage is often a liability rather than an asset,” added Costlow. “I’m worried about the legal implications of this as well. If a customer of Verkada requests that all their data be deleted, Verkada likely cannot comply with this anymore because of the breach. Now that the data has been compromised, it will likely be impossible to ensure that no additional copies exist. Under laws like GDPR and CCPA, this has massive implications for Verkada.”
Ultimately, IoT continues to pose security challenges for organizations, and as clearly demonstrated here, security cameras are no exception.
For that reason, Davisson at EPIC believes the best protection against these kinds of incidents is to not use IoT surveillance cameras at all. Of course, for some institutions, this is not practical. In those cases, Davisson recommends minimizing data collection; avoiding facial recognition, “which is just this deeply flawed and problematic technology;” and retaining data “only as long as it’s absolutely necessary for the purpose that it’s collected.”