Facial recognition technology is now just about everywhere we are: in our phones, our social networks and our photo-management software, and that alone carries vast implications. (See this post about how the technology works, where it is, and how legislators and regulators are reacting to it.)
But it’s also increasingly used by law enforcement and for surveillance of “public” spaces, as Evgeny Morozov noted in his London Review of Books review of Kelly Gates’ excellent book, “Our Biometric Future.”
And many of the practical and labor-saving applications of facial recognition technology (FRT) could equally be applied for repressive or invasive purposes, especially as it becomes more powerful and ubiquitous.
Recently, Hitachi Kokusai Electric unveiled a CCTV system that it claims can identify a face against a database of 36 million faces in under a second. But it’s also possible to use FRT on two-dimensional images of people on the web, including those we post to social media or other sites.
Do we simply have to accept this as inevitable, or are there things we can do to protect ourselves and others against improper or repressive use of FRT?
Below are some tactical and technological defenses against FRT. They fall into two categories: 1) defenses for when we are being watched, for example at protests or in a public space, and 2) defenses for when we ourselves are taking and sharing images of others, especially online.
Tactical Defenses in Public
There’s an increasing amount of discussion and experimentation on how to fool and spoof automatic visual recognition systems in public. One of my favorites is this, which plays with number-plate recognition. But is there anything we can do to confuse facial recognition systems?
The simplest, most widespread defense against face recognition in public spaces, in the media or at demonstrations is to wear a mask, hoodie, bandana or similar face covering. Protestors and rebel fighters across the Arab Spring used bandanas to mask their identity, like generations of activists before them. A new development has been the adoption of the Guy Fawkes mask by protestors the world over, notably those involved in Occupy and Anonymous online attacks — both protecting the wearer’s identity and signaling participation in a shared cause.
Mask-wearing is illegal in some jurisdictions (for police, too), and can lead to targeting by law enforcement. It also has its activist detractors, but this post makes a robust defense of its role in “maintaining personal privacy and security [and] an important exercise of some fundamental liberties, including freedom of expression and freedom of association … a crucial element of a robust social fabric.”
Beyond Masks: Other Options for “Fooling” FRT
But what if it’s impractical or illegal for you to mask your face? It’s important to note the difference between defeating face detection, which means stopping a camera from finding the patterns that make up a face at all, and defeating face recognition, which means stopping a system from matching you against a database of known faces, typically by altering the appearance of your features and the distances between them. This peculiar 2002 document suggests that wearing fake Dracula teeth, chewing tobacco or inserting nose plugs can defeat face recognition. These countermeasures may, however, attract other kinds of unwanted attention, particularly if adopted all at once. Smiling also makes facial recognition more difficult, hence the “no smile” policy for passport photos.
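To make that distinction concrete, below is a minimal sketch of the detection stage, the part these countermeasures aim to defeat, using OpenCV’s stock Viola-Jones Haar cascade in Python. The library and the filenames are illustrative choices, not anything the systems discussed in this post are confirmed to use:

```python
# Minimal face *detection* sketch using OpenCV's bundled Viola-Jones
# Haar cascade. The input filename ("crowd.jpg") is hypothetical.
import cv2

# OpenCV ships this cascade file; cv2.data.haarcascades points to it.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("crowd.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each hit is an (x, y, width, height) box around a face-like pattern.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")

# Draw a green box around each detection and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)
```

In these terms, a disguise that keeps your face out of the returned boxes entirely has defeated detection; if a box still appears, only the matching (recognition) stage is left to fool.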
Technologist Alex Kilpatrick told Forbes in 2010: “You have to break from the human perception of the face … We key on big features like hairstyle and beard, but software works on very different principles.” Kilpatrick suggested investing in a pair of large sunglasses, but Josh Marpet, in this audio from The Next Hope hacker conference and in this talk at Defcon 18, said that face detection and recognition systems can be trained to accommodate such measures. Marpet also suggested that facial recognition is only about 60% effective at the moment, and that fooling face detection altogether is a more effective countermeasure.
[Image from CV Dazzle Project]
Adam Harvey focused on using makeup to fool the pattern recognition element in face detection systems. Harvey’s CV Dazzle project continues to be the most widely referenced face detection countermeasure.
Here are videos of it in action and audio of his talk, also at The Next Hope. This is how it fares against Facebook’s face detection:
[Video: CV Dazzle vs PhotoTagger, by Adam Harvey, on Vimeo]
It’s early days for these analogue countermeasures, and they raise as many questions as they answer.
Technological Solutions, On-Screen
Video makers and journalists have long used measures such as pixelization and censor bars to protect the identity of individuals they film and those caught in the background. In some cases, failing to protect someone’s identity can have devastating consequences.
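To make the pixelization idea concrete, here is a rough sketch in Python with OpenCV. It illustrates the general technique, shrinking each face region and scaling it back up with blocky interpolation, rather than the implementation of any particular editing tool, and the filenames are made up:

```python
# Sketch: find faces, then pixelate each face region by shrinking it
# and scaling it back up with nearest-neighbor interpolation.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("protest.jpg")  # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    face = image[y:y + h, x:x + w]
    # Shrink the region to an 8x8 mosaic, then blow it back up.
    small = cv2.resize(face, (8, 8), interpolation=cv2.INTER_LINEAR)
    image[y:y + h, x:x + w] = cv2.resize(
        small, (w, h), interpolation=cv2.INTER_NEAREST)

cv2.imwrite("protest-pixelated.jpg", image)
```

One caveat: researchers have shown that pixelated or blurred regions can sometimes be partially reconstructed or re-matched, so when the stakes are high, a solid censor bar (overwriting the region entirely, e.g. image[y:y + h, x:x + w] = 0) destroys the identifying information rather than just degrading it.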
Now, with ever easier (or “frictionless”) online image-sharing, online facial recognition and widespread image-harvesting, it becomes even more important to protect those in the frame at the point of uploading an image. Because this functionality is not built into camera phones, this kind of protective image editing has been fairly laborious. In fast-moving situations, citizens post footage and images to social networks and media platforms, and worry about the consequences later. As we’ve noted on several occasions, governments as diverse as Iran, Burma and the U.K. have used these kinds of unprotected social media images to track down those they deem miscreants.
WITNESS is a partner in ObscuraCam, an Android app that provides a simple way to protect the visual privacy and anonymity of those you photograph. It’s a direct counterpoint to mobile facial recognition apps like face.com’s Klik.
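ObscuraCam itself is an Android app and its source isn’t reproduced here, but one thing tools in this category typically do, alongside obscuring faces, is strip identifying metadata such as GPS coordinates and device details before an image leaves the phone. Here is a rough sketch of that single idea in Python with Pillow; the filenames are made up:

```python
# Sketch: strip metadata (GPS position, device make and model,
# timestamps) from a photo before sharing it.
from PIL import Image

original = Image.open("upload.jpg")  # hypothetical input file

# Copying only the pixel data into a fresh image drops the EXIF
# block and any other embedded metadata.
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("upload-clean.jpg")
```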
There’s a growing list of social networks that use or permit facial recognition on their images — but there are protective measures you can take. You can personally opt out — but remember that face recognition is only one part of overall information security. If you’re friends with or working with people in sensitive situations, try as much as possible to follow the broader advice in this guide on protecting your identity and security online and when you’re mobile.
Facebook lets you opt out of its facial recognition system, and delete any face data it might have for you. Google+ asks you to opt into facial recognition, and it can be turned on and off here. You can turn it off in Google’s Picasa (it’s on by default), but not in Apple’s iPhoto.
With the scope of facial recognition widening, protecting ourselves and others needs to be a more mindful, conscious act, both in public spaces and online. And as social networks continue to make sharing and connecting ever more frictionless, we’ll all need to learn when and how to put the brakes on.
Sameer Padania is a London-based consultant working on Internet freedom, media policy and human rights. He was lead author and researcher on WITNESS’ 2011 report on the future of human rights and technology, Cameras Everywhere.
This post originally appeared on the WITNESS blog, an ongoing conversation about the effective use of video in human rights campaigns to create policy. You can follow WITNESS on Twitter @witnessorg and on Facebook here.