Predictive policing and ‘pre-crime’ algorithms: the new age of law enforcement
“When does prevention trickle so far up the chain that it slides into Minority Report territory and flips the presumption of innocence on its head?”
Society
Words: Isobel Thompson
On January 8, the aspiring boxer Jaden Moodie was knocked off a moped by a black Mercedes and stabbed to death in East London. He was 14. In March, the police were called to six stabbings across the city in less than six hours. On May 2, Tashaun Jones, a fledgling music producer and Arsenal supporter, was killed in Hackney. Three days later, 18-year-old McCaulay Junior Urugbezi-Edwards died in Southwark. Almost midway through 2019, there have been 29 fatal stabbings in London so far. In 2018, 135 people were unlawfully killed in London, the highest figure in a decade. Of that number, 76 were stabbed.
Last September, London Mayor Sadiq Khan announced his plan to try and combat the knife crime crisis: funnelling £500,000 toward creating a Violence Reduction Unit (V.R.U.). Adopting the so-called public health approach – which pivots on the premise that violence, like illness, can be encouraged or deterred by contextual factors – the V.R.U. would twine the police’s work with hospitals, schools and local councils to try and focus on early intervention. The government followed suit, establishing a £200 million Youth Endowment Fund to underpin a push for prevention over the next decade.
While opinions about what, or who, is stoking London’s spike in violence differ (cuts to youth services; withering police budgets; organized crime; drill music; gangs; too much focus on gangs), there has been a broad consensus that this approach, which weaves together different parts of the state, is a sensible, supportive solution. And why not? A tentacular focus on prevention burrows beyond the binary that people are good or bad, that crime is immutable; it implicitly accepts that myriad causes underpin something as knotty as a climb in knife crime.
Besides, the public health approach comes tested. The V.R.U. is borrowed from Glasgow, which has seen a 60% slide in homicides since a 2005 World Health Organization report named it the “murder capital” of Europe. Glasgow, in turn, took the idea from Chicago, where it was pioneered by epidemiologist Gary Slutkin in the mid-nineties. Returning home from Africa, where he had spent decades working on infectious diseases, Slutkin found his hometown gripped by its own epidemic: violent crime. Curious, he started digging into the data and realised that, just as flu causes flu, the violence was replicating itself – passed from person to person. A departure from mainstream thinking, which focused on enforcement, Slutkin’s public health approach integrated agencies beyond the police to try and stop violence before it broke out, and corral it once it had. In 2000, his pilot project launched in Chicago’s West Garfield neighbourhood; within the first year, there was a 67% drop in homicides.
But, just because the arc of the public health approach comes proven, and feels positive, doesn’t mean its application in contemporary London isn’t flecked with ethical pockmarks. Beyond its proactive topline, it’s not clear how it will work in practice, or whether it will protect those most vulnerable to knife crime in London – young black males. “The public health approach has become a bit of an empty vessel into which politicians pour policies they want to implement,” says academic Dr Adam Elliott-Cooper.
What is clearer is that a core element of this government’s policy is data collection and the umbrella sharing of that data. Indeed, proposed legislation could make frontline workers (doctors, teachers, nurses) obligated to report concerns about those deemed vulnerable to becoming involved with knife crime. In April, Home Secretary Sajid Javid – who has called for knife crime to be treated like a “disease” – used a seminal speech on crime to advocate the public health approach, arguing the government must use data more effectively to trace the routes that lead into violence. This is meant to help map the problem, and involve agencies outside the police. But, without adequate technologies and safeguards, this gathering and distributing of data poses tangible risks: reinforcing bias and rupturing trust between authorities and young people.
For a cautionary tale about the uneasy alliance between preventative policing and data, look no further than the Gangs Matrix – widely cast as a discriminatory tool that operates, haphazardly, in breach of data laws. Established in the wake of the Tottenham riots, the Gangs Matrix contains the names and personal details of roughly 3,000 people, and ostensibly helps the police track gang-affiliated offenders in London, and exercise powers like intelligence-led stop and search.
The process of being added to the database is foggy. Overseen by the Trident Gang Command, it is managed locally in each of London’s 32 boroughs, which gather intelligence thought to be sourced from variables including social media posts, friendships, previous offences – even being a victim of gang crime can count. This intelligence is then entered into a formula that assigns so-called ‘gang nominals’ a danger code: red, amber or green, with green marking the lowest likelihood of engaging in gang violence. Just as there is no formal procedure for being added to the Matrix, there is no official method for being notified once you are on it, or for being taken off.
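The Met has never published the Matrix’s actual scoring formula, so any reconstruction is guesswork. But the general pattern described above – weighted intelligence signals summed into a score, then bucketed into green, amber or red – can be sketched in a few lines. Every signal name, weight and threshold here is hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical weights for the kinds of signals reported to feed the Matrix.
# None of these values are published; this is an illustrative sketch only.
SIGNAL_WEIGHTS = {
    "previous_offence": 3,
    "gang_affiliated_friend": 2,
    "social_media_post": 1,
    "victim_of_gang_crime": 1,  # even victimhood reportedly counts
}

def risk_tier(signals):
    """Map a list of intelligence signals to a red/amber/green danger code."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)
    if score >= 6:
        return "red"    # highest assessed likelihood of gang violence
    if score >= 3:
        return "amber"
    return "green"      # lowest likelihood; most Matrix entrants sit here

print(risk_tier(["victim_of_gang_crime"]))  # → green
print(risk_tier(["previous_offence", "gang_affiliated_friend",
                 "social_media_post"]))     # → red
```

The sketch makes the core criticism concrete: with opaque weights and thresholds, someone can land in a tier – and stay there – without ever knowing which signals put them there.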
More obvious is that the makeup of the Matrix is biased. In 2017, an Amnesty International report revealed that three-quarters of the individuals on it were black, although the Met’s figures showed that only 27% of those responsible for serious youth crime are black. The youngest member was 12 years old. 5% of entrants had been coded red. 64% were marked as green; many of that number had no previous offences, and no known links to gangs.
“The lack of due diligence in the development, implementation and utilization of the Matrix has meant that it is not an effective, fair or accountable policing asset,” says Katrina Ffrench, CEO of StopWatch. “The Matrix is a racialized tool that lacks transparency and clear purpose. Often the information collected and shared via this multi-agency mechanism is inaccurate. There is a stereotyping of individuals by practitioners and an over-policing of black communities in London, which recent research has shown can push individuals to crime. This is very ironic as the Gangs Matrix was allegedly created to support the police in reducing gang crime.”
The Gangs Matrix raises a question that inevitably crops up around the subject of preventative policing. When does prevention trickle so far up the chain that it slides into Minority Report territory (the 2002 film in which Tom Cruise’s PreCrime department arrests people for crimes they have yet to commit) and flips the presumption of innocence on its head? This is particularly relevant in the context of some of the other preventative tools rolled out by British police forces outside London, which use algorithms so impenetrable they are known as black boxes.
In February, the human rights organization Liberty published a report showing that 14 forces are using, or have used, “predictive policing” techniques, deployed without fixed legal guidelines, or proof that they actually work. The report broke these techniques into two categories. First: predictive mapping. Kent Police used a machine-learning algorithm developed by private company PredPol, which absorbed historical crime data to anticipate where crimes are likely to occur. Citing a review of the system’s cost and effectiveness, the force concluded its contract last March. The second category is “individual risk assessment”, which predicts how likely an individual is to commit a crime. For example, Durham police force use the machine-learning Harm Assessment Risk Tool (HART), which forecasts whether an ex-offender will reoffend. HART has proved to be 62.8% accurate in its forecasts.
“Predictive policing tools draw upon data which is intrinsically biased and feed it through complex algorithms to risk assess communities and individual people – allowing discriminatory practices to drive future policing,” says Hannah Couchman, author of the report. “Meanwhile, databases like the Gangs Matrix see police officers use highly racialized criteria to identify people as ‘gang members’. Each approach is designed to label people as ‘pre-criminals’ based on crude profiling, leading to increased surveillance of people who will often already experience an unreasonable and intrusive level of policing as they go about their day-to-day lives.”
The impact of this type of profiling is detailed in a StopWatch report written by Dr Patrick Williams, a senior lecturer in Criminology at Manchester Metropolitan University, who interviewed 15 members of the Matrix to gauge its effect on their lives. “[S]eriously, I was getting stopped three times a week. There were times I got stopped three times a day,” says Andrew, one of the participants (whose names have been changed). “There was not a week I can really remember where I didn’t get stopped and searched. To the point where I realised it was not a thing anymore, it was just a normal.”
Bias in the justice system is not new. But the concept of machines digesting existing bias raises new concerns. Do algorithms magnify prejudices, whilst scrubbing them with the veneer of digital neutrality? And what happens when their inscrutable predictions are shared, and bounced between a web of agencies: education, immigration, housing, healthcare, the police? Whilst the Gangs Matrix is not a black box, it showcases the spiralling impact that sharing disproportionate data can have. For those stuck in the Matrix, the implications reach far beyond additional targeting by the police. “It can affect their access to education, housing, jobs and can result in over-enforcement of stop and search and immigration action,” says Ffrench. “For an example of the effect the label ‘gang nominal’ can have, one need only look to Tottenham where those labelled gang nominals were sent a letter from the DVLA requesting they take part in a drug test otherwise their licenses would be revoked.” Another, particularly extreme example is described in the Amnesty report. One family was sent a letter threatening to evict them from their home unless their son stopped his involvement with gangs; he had been dead for over a year.
As the Matrix’s flaws continue to unfurl (in December the Mayor’s Office gave the police a year to radically reform it) the Metropolitan Police are developing a new database, hailed as a cornerstone of the public health approach. The Concern Hub aims to document individuals who are susceptible to becoming involved with gangs. Officially pegged to launch in South East London last month, it was piloted in Lewisham. “Individuals identified as being at risk will be provided support and pathways away from violence through partnership working with local authorities and a range of initiatives,” reads an emailed statement from the Met.
Some are skeptical that local authorities, hit by years’ worth of cuts, can help etch pathways out of violence – if not, the public health approach could be dominated by the police (suffering cuts of their own) and their troves of data rather than substantial investments in youth centres and mental health services. “We have seen tens of millions of pounds of cuts to youth services across England and Wales since austerity began,” says Elliott-Cooper. “All roads appear to lead to the police in one direction rather than youth, health and community services being joined up and working together to address the problem.”
The thread that links data collection, information sharing, the looming use of predictive policing tools and the impact of austerity is trust. It is not news that the narrative around knife crime is trained on young black men. As Matthew Ryder, former deputy mayor for social integration, argues in a piece for Tortoise, public health policing therefore requires authorities to build trust with their communities.
Central to this idea of trust – or the eroding of trust – are these plans to have teachers and doctors report data on knife crime; data that, in human terms, could comprise difficult, complex, intimate conversations, or confessions. There’s a real risk that pupils won’t want to confide in teachers or nurses if they know their disclosure could end up on a database. Passed between agencies, possibly fed into an algorithm, it could be used to shape decisions about other aspects of their, or their families’, lives. “If anything it actually alienates young people from accessing the kind of services that can help prevent them from being involved in violent crime,” says Elliott-Cooper.
Teacher Rob Kazandjian, who spent the first half of his career working in a unit for teenagers expelled from mainstream education, agrees. “Proposed legislation will destroy trust between teachers and vulnerable young people. Factors that put children at risk of perpetrating violence (poverty, exposure to domestic violence, expulsion from school) overlap with those that put children at risk of becoming victims of violence. A nice, neat line that separates perpetrators and victims doesn’t exist,” he says.
“Taking steps to criminalise at-risk young people is immoral, in my opinion. Consider the staggering statistics that indicate prison is an almost inevitable outcome for young people who are expelled from school. Then consider that black boys are significantly more likely to be expelled from school than any other group (the majority of young people in the unit where I taught were black boys, which says a lot about attitudes held towards black boys by the education system). Given that serious youth violence has been falsely presented as an issue predominantly affecting black boys, it’s quite easy to imagine what this proposed legislation will look like.”