The future of your face

Isobel Thompson traces the backlash against facial recognition. From police biometrically mapping faces to nationwide watch-lists, will the balance between citizens and the state soon topple?

In December 2017, office worker Ed Bridges was Christmas shopping during his lunch break when he noticed a police van. By the time he came close enough to see its “facial recognition” sign, he believes his image had been captured by the technology, which was being trialled by South Wales Police in Cardiff City Centre. A few months later, the former Liberal Democrat councillor spotted the cameras again: this time at a peaceful protest against the arms trade. Unsettled, he launched a crowdfunding campaign and, alongside the human rights organisation Liberty, brought a landmark case against the police, claiming the technology – which biometrically maps faces and compares them with images on a watch-list – breaches data protection and equality laws. “It just struck me as wrong that something that instinctively feels so authoritarian was being used against me, and had been rolled out without public debate or consultation. It felt like a police state,” he says.

Bridges’ case chimes with a mounting unease about facial recognition and its corresponding watch-lists, which can contain images of individuals scraped from social media, or lifted from the vast custody images database – composed of people who have come into contact with the police, including thousands of innocent people. The technology is developing faster than the law, and so operates in a legal and policy vacuum (parliament has never passed a law enabling its use); significantly, multiple high-profile claims that it is dangerous have come from Silicon Valley circles.

Recently, Amazon’s shareholders unsuccessfully tried to stop the behemoth selling its controversial surveillance software, Rekognition, to government agencies. Microsoft researcher Luke Stark wrote an essay comparing the technology to plutonium. Facial recognition, he argued, “simply by being designed and built, is intrinsically socially toxic, regardless of the intentions of its makers; it needs controls so strict that it should be banned for almost all practical purposes.” And in May, San Francisco – a global symbol of the micro-dosing, millennial-fronted tech boom – subverted assumptions that mass acceptance of the technology is inevitable, becoming the first major American city to ban the police and other authorities from using it. “I think part of San Francisco being the real and perceived headquarters for all things tech also comes with a responsibility for its local legislators,” Aaron Peskin, the city supervisor who sponsored the bill, said. “We have an outsize responsibility to regulate the excesses of technology precisely because they are headquartered here.”

“…behind the uniform veneer of tech neutrality, the algorithms struggle to recognize women and people of colour.”

For stretched British police forces, squeezed by austerity budget cuts, facial recognition is ostensibly an efficient, innovative way to fight crime. During the Bridges hearing, South Wales Police defended their use of the cameras, saying that if no match is made between scanned faces and watch-lists, the data is deleted in milliseconds. “We have sought to be proportionate, transparent and lawful in our use of AFR (Automated Facial Recognition) during the trial period,” the force said in an emailed statement.
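
In outline, the matching step such systems perform is simple to describe. The short Python sketch below is purely illustrative – the embedding function, the similarity threshold and the delete-on-no-match behaviour are assumptions made for the sake of the example, not details of any force’s or vendor’s actual system.

```python
# Illustrative sketch of automated facial recognition (AFR) matching against a
# watch-list. The embedding model, threshold and delete-on-no-match step are
# assumptions for illustration, not any police force's or vendor's real system.
import numpy as np

MATCH_THRESHOLD = 0.6  # assumed similarity cut-off; real systems tune this value


def embed(face_crop: np.ndarray) -> np.ndarray:
    """Stand-in for a neural network that turns a face crop into a fixed-length
    vector – the 'biometric map' referred to in the article."""
    vec = np.resize(face_crop.astype(float).ravel(), 128)
    return vec / (np.linalg.norm(vec) + 1e-9)


def check_face(face_crop: np.ndarray, watchlist: np.ndarray):
    """Compare one scanned face against every watch-list entry.

    `watchlist` is an (n, 128) array of unit-length embeddings. Returns the
    index of the best candidate match, or None – in which case the scanned
    data would, per the force's account, be discarded immediately.
    """
    probe = embed(face_crop)
    similarities = watchlist @ probe   # cosine similarity of unit vectors
    best = int(np.argmax(similarities))
    if similarities[best] >= MATCH_THRESHOLD:
        return best                    # candidate match, referred to a human operator
    return None                        # no match: embedding and image deleted
```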

The system they use, NeoFace Watch, can scan and identify 18,000 faces a minute. Trialled by forces including Leicestershire Police and the Metropolitan Police, it has been used to scour crowds at festivals and football matches and, in 2016 and 2017, the Notting Hill carnival. In 2018, Greater Manchester Police scanned roughly 15 million people over a period of six months at the Trafford shopping centre before the Surveillance Camera Commissioner intervened, citing concerns about the trial’s proportionality. “Compared to the size and scale of the processing of all people passing a camera, the group they might hope to identify was minuscule,” he wrote on a government blog.

Critics believe facial recognition poses two leading risks. The first is that it fortifies bias around race and gender. In short, behind the uniform veneer of tech neutrality, the algorithms struggle to recognize women and people of colour. In 2016, Joy Buolamwini, founder of the Algorithmic Justice League, gave a TED talk explaining how algorithms tend to echo, and then entrench, the bias of their creators – a phenomenon she has dubbed the “coded gaze”. During the talk, Buolamwini showed a video clip of an algorithm failing to recognize her face (she is black) until she donned a white mask.

“Joy’s work is making a difference not just in the world of computation, but in the wider world, showing how much bias is a part of our lives every day,” says Suzanne Livingston, guest curator of the Barbican’s exhibition AI: More Than Human, which features Buolamwini’s poetic presentation, A.I., Ain’t I a Woman? “Some benign uses of facial recognition tech are in public libraries (speeding up the book borrowing process), or even, potentially, helping to identify missing pets. But the uses of it which are more worrying are in relation to police records, at passport controls, or in smart cars. In these scenarios, the technology has to be accurate and able to recognise and respond to individuals from the full spectrum of society. This is where the work needs to be done.”

“The greatest flaw is that it erodes public freedoms. Even if the technology is improved it will remain the fact that it poses too great a threat to people’s rights and freedoms, creating a dangerous imbalance of power between citizens and the state.”

Flawed software leads to flawed policing: an investigation by civil liberties group Big Brother Watch found that the automated facial-recognition system used by the Metropolitan Police had a false-positive rate of ninety-eight per cent. “It is most likely to misidentify women and people of colour – so they are more likely to be stopped by police and forced to account for themselves as they try to go about their everyday lives. This bias is ingrained both in how the technology has been trained and how it’s deployed,” explains Hannah Couchman, Policy and Campaigns Officer at Liberty.
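
A figure that stark can sound as if the software barely works at all, but the rough arithmetic below shows how it can arise whenever a very large crowd is scanned for a very small watch-list. Every number here is hypothetical, chosen only to illustrate the base-rate effect; none of them are Big Brother Watch’s or the Metropolitan Police’s figures.

```python
# Hypothetical numbers illustrating the base-rate problem in crowd scanning.
crowd_size = 100_000      # faces scanned at an event (assumed)
on_watchlist = 20         # people present who really are on the watch-list (assumed)
hit_rate = 0.80           # chance the system flags someone who IS on the list (assumed)
false_alarm_rate = 0.01   # chance it wrongly flags someone who is NOT (assumed)

true_alerts = on_watchlist * hit_rate                           # ~16 correct alerts
false_alerts = (crowd_size - on_watchlist) * false_alarm_rate   # ~1,000 false alerts

share_wrong = false_alerts / (true_alerts + false_alerts)
print(f"{share_wrong:.0%} of alerts point at the wrong person")  # ~98%
```

Even with software that is right about the overwhelming majority of individual faces, almost every alert still lands on an innocent passer-by, because innocent passers-by vastly outnumber the people being searched for.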

The second charge against facial recognition is that, as it becomes plaited into policing and public life, it will shift the balance of power from the individual towards authorities. “The greatest flaw is that it erodes public freedoms. Even if the technology is improved it will remain the fact that it poses too great a threat to people’s rights and freedoms, creating a dangerous imbalance of power between citizens and the state,” says a spokesperson for Big Brother Watch. In 2015, China announced plans to build an integrated citizen-monitoring system by 2020, overseen by “omnipresent, fully networked, always working and fully controllable” cameras. Once the project is fully implemented, citizens will be assigned “social credit” ratings, informed by their day-to-day activities. Facial scanning software has also played a powerful role in the Chinese government’s mass surveillance and detention of Muslim Uighurs – as well as being used to track and reward model citizens, facial recognition can be abused to repress poor and marginalized communities, who are often disproportionately targeted by state surveillance anyway.

This is obviously extreme territory. But the link between facial recognition and subsequent reward or retribution has played out on a smaller scale in the UK: when police were trialling the software in south-east London, they fined a man £90 for refusing to show his face as he passed. This slanting of power raises broader questions. What happens to the presumption of innocence when not wanting to give your biometric data to police as you walk down the street becomes a source of suspicion? If we know we are being watched, and the police are granted heftier powers to punish us if we don’t play along, will that impact the way we interact with public spaces? Will we start to self-censor, and collectively contain our behaviour? Take any nerves you’ve had about your internet history or WhatsApp messages being made public, and then transplant those fears – of being exposed, misunderstood, categorised – to a nondescript Christmas shopping session in Cardiff.

“What people tend to forget, as they thrash out the ethical implications of facial recognition, is the extent to which it has already slipped, seamlessly, into our everyday lives.”

It’s not just states that are trialling facial recognition. Private companies – subject to less accountability – are starting to rapidly roll out the technology, often working in concert with authorities. “The power that live facial recognition cameras give to an organisation can so easily be abused. We’re already hearing about private companies creating ‘blacklists’ of individuals they don’t want in their shops, bars or businesses, which people can find themselves on without having done anything unlawful,” adds Big Brother Watch.

Last year, residents of Atlantic Plaza Towers, a rent-stabilised apartment block in New York, discovered their landlord planned to replace their key fob system with facial recognition technology. The apparent aim was to modernise the building’s security system, but, as The Guardian reported, some residents suspected the move was linked to gentrification and an attempt to lure wealthier white residents to the block, whose inhabitants were largely black. More than 130 tenants have filed a complaint with the state to try and block the move. “We do not want to be tagged like animals,” Icemae Downes, who has lived in the block for 51 years, told the paper. “We are not animals. We should be able to freely come in and out of our development without you tracking every movement.”

What people tend to forget, as they thrash out the ethical implications of facial recognition, is the extent to which it has already slipped, seamlessly, into our everyday lives. Scanning other people’s faces is central to human relationships – we’re experts at detecting fear, forgiveness, boredom, disappointment from the minute widening, softening, glazing or narrowing of an eye. The success of this loosely regulated, opaque, billion-dollar industry, though, rests on its ability to get us addicted to our own faces and, at the same time, convince us to hand over troves of data. And it has worked; we’re hooked. The shock-horror of selfies quietly melted into dopamine-laced Instagram posts. iPhone X users unlock their phones via a software system that connects more than 30,000 invisible dots to create a facial depth map. Experimenting with makeup on an app, Snapchatting with a snuffling bunny nose filter, fortifying homes with a doorbell that recognizes families: these innovations are meant to help us live a simpler, more connected life, seductively spliced with a hit of narcissism.

But what are the stakes of apparently biased algorithms flattening our nuance and using surface scans of our faces as a resource? What kind of face will be deemed aspirational or authentic, and commodified as such? And, in contrast, what features will be deemed suspicious? Such clichés already exist, obviously – but could facial recognition magnify these categorisations?

“The UK’s intelligence agency, GCHQ, collected images from millions of internet users’ webcams between 2008 and 2012, and used them to create and test facial recognition technology.”

There are instances when the public might be uncomfortable knowing that the technologies they use for ease and entertainment could be fuelling the concerns raised above. Although there is a difference between the police scanning hundreds of faces in a crowd and a person, say, choosing to unlock their phone with their face, engagement with technology doesn’t always translate to robust consent, or a comprehensive understanding of how it works.

A key point here is that companies need billions of images to feed and train their algorithms. So where do they look for them? According to Big Brother Watch, GCHQ went to our webcams: “The UK’s intelligence agency, GCHQ, collected images from millions of internet users’ webcams between 2008 and 2012, and used them to create and test facial recognition technology (OPTIC NERVE). People should be aware how any photos they use online or on phone apps might be used by the website or app provider.” And then there is the case of Ever, a cloud storage app that marketed itself as helping you “capture and rediscover your life’s memories.” The company did not advertise that the millions of photos users uploaded were used to train its facial recognition software, which it aimed to sell on to the military, law enforcement and private companies.

Despite decades’ worth of debate about technology and surveillance – 1984 was published in 1949 – facial recognition has crept up fast and loose. “The current narratives around AI are often about fear and control but this isn’t the whole story. If we let these dominate too much, we are likely to become passive or antagonistic to the very technology which we play a large part in creating,” says Livingston. “As a species, we’ve been living in a tight box for a long time.” But, used without debate, transparency, or even a skeletal framework, it’s hard to see how the technology can equate to freedom, rather than fear.
