Author – Aaron Barr
Clearly, these agencies are using FRT for useful and even life-saving purposes. However, this technology is susceptible to misuse, creating legitimate privacy and cybersecurity concerns. For example, the United Nations High Commissioner for Human Rights, Michelle Bachelet, expressed concerns over the technology when she called for a freeze on certain AI-based technologies, including facial recognition. She used the example of China’s social credit score and asked governments to “halt the scanning of people’s faces in real-time until they can show the technology is accurate and meets privacy and data protection standards.”
What’s difficult about this issue is that individuals seem to be losing control of their own identities. Today, an increasing amount of personal information is publicly available that was never deliberately released by the individuals it describes. And that opens up a whole new level of potential for cyber attackers – it gives them more ammunition to compile enough information to compromise an individual and, by extension, their personal and private networks.
Public images and facial recognition technology
FRT has advanced quickly and is now accessible for purchase by almost anyone. Everyone has a sensor now, given the pervasiveness of mobile devices. We’re all putting a lot of data out into the public domain already. But now, there are images being captured, sometimes unbeknownst to us – images that aren’t just stored locally on our devices but are potentially being broadcast into the world.
This leads to a modern scenario in which images, including individuals’ faces, are scraped or otherwise captured and then broadcast into the open for people to scour – and they can do it at scale. Strangers can identify us from images we don’t even know exist and do whatever they want with that information, without our consent. As mentioned above, China is purportedly using FRT to judge citizens’ social behavior as part of their overall social credit score.
How dangerous could an image be?
It’s not just private citizens who face privacy risks but organizations, as well. In one business email compromise (BEC) incident, an insurance company lost a great deal of money after a bad actor gained access to an executive-level account. A photo on social media showing the executive on a ski trip told the attacker when to attempt the scheme with the least chance of detection. The attacker then altered a routing number and stole money from the company.
Threats to human life are possible, as well. A recent example is the Taliban’s acquisition of facial recognition devices and databases that could allow them to identify Afghans who were cooperating with coalition forces.
Taking back (some) control
Digital and physical security are two of the proven use cases for FRT, demonstrating the technology’s benefits. It’s understandable why businesses and other organizations would want to use it. However, this technology needs to be used with safety and security – and evolving regulations – in mind.
Individual citizens also need to proceed with caution. While you can’t control how organizations use this technology, there are steps you can take to protect yourself and mitigate risk, including:
- Making your accounts private
- Only using private email addresses (not business ones) to sign up for personal social accounts
- Avoiding reusing the same photographs across your different social media channels
- Creating email addresses that aren’t easily discoverable
Even if the rest of the technology isn’t within your control, all of these actions are.
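The advice about not reusing photographs can even be checked mechanically. Perceptual hashing is one common way to detect whether two images are the same photo despite re-encoding or resizing; as a minimal sketch (the 8×8 grids below are hard-coded stand-ins for real downscaled grayscale images, which you would normally produce with an image library such as Pillow):

```python
# Average-hash (aHash) sketch: two images whose 64-bit hashes differ in only
# a few bits are likely the same photograph, even after re-encoding.
# The 8x8 grayscale grids here are hard-coded stand-ins for real images.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # One bit per pixel: 1 if brighter than the mean, else 0.
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two stand-in "photos": the second is the first with slight brightness noise.
photo_a = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
photo_b = [[min(255, p + 3) for p in row] for row in photo_a]

dist = hamming_distance(average_hash(photo_a), average_hash(photo_b))
print("hamming distance:", dist)  # a small distance suggests the same image
```

A low Hamming distance between the hashes of your profile photos on two services means anyone scraping both can trivially link the accounts, even with no name attached.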
Strengthening the frontline
Technology is a blank slate. The people who use it are the ones who determine whether it will be used for beneficial or nefarious purposes. There’s certainly a Skynet-like feel or an easy comparison to Big Brother watching us; privacy concerns about FRT are real. Yet the technology also helps law enforcement find victims of crime and government agencies keep their facilities safe.
There’s no question, though, that companies are at increased risk now that anyone can get their hands on FRT. Attackers have already proven this with social engineering attacks, including BEC. Cybercriminals can use FRT to match faces in publicly available images to get the information they need to set up an attack. IT and security professionals have an opportunity to educate those they work with to be part of the cybersecurity frontline by using the above recommendations. Behavior change online can lead to a reduction in corporate risk.
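The matching step attackers rely on is conceptually simple: a face-recognition model reduces each photo to a numeric embedding vector, and two vectors that are close enough are declared the same person. A sketch using hypothetical, hard-coded embeddings and an assumed similarity threshold (a real pipeline would compute the vectors with a trained model):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings; real models output far larger vectors.
profile_photo = [0.12, 0.80, -0.33, 0.45]
scraped_photo = [0.10, 0.78, -0.30, 0.47]   # same person, different shot
unrelated     = [-0.60, 0.05, 0.72, -0.20]  # a different person

THRESHOLD = 0.9  # assumed decision threshold

print(cosine_similarity(profile_photo, scraped_photo) > THRESHOLD)  # True
print(cosine_similarity(profile_photo, unrelated) > THRESHOLD)      # False
```

This is why a single public photo matters: once it is embedded, it can be compared against millions of scraped images at scale, and every match adds context an attacker can use.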