San Francisco’s board of supervisors recently voted to let its police deploy robots equipped with lethal explosives – before backtracking several weeks later. In America, the vote sparked a fierce debate on the militarisation of the police, but it raises fundamental questions for us all about the role of robots and AI in fighting crime, how policing decisions are made and, indeed, the very purpose of our criminal justice systems.
In the UK, officers operate under the principle of “policing by consent” rather than by force. Yet according to the Crime Survey for England and Wales, public confidence in the police fell from 62% in 2017 to 55% in 2020. One recent poll asked Londoners whether the Met was institutionally sexist and racist: nearly two-thirds answered either “probably” or “definitely”.
This is perhaps unsurprising, given high-profile crimes committed by serving officers such as Wayne Couzens, who murdered Sarah Everard, and David Carrick, who recently pleaded guilty to 49 offences including rape and sexual assault.
The new Metropolitan police commissioner, Mark Rowley, has said that “we have to prepare for more painful stories” and warned that two or three officers a week are expected to appear in court on criminal charges in the coming months. But what if the problem with policing goes beyond so-called “bad apples”, beyond even the culture and policies that allow discrimination to flourish unchecked? What if it’s also embedded in the way that human beings actually make decisions?
Policing requires hundreds of judgments to be made each day, often under conditions of extreme pressure and uncertainty: who and where to police, which cases and victims to prioritise, who to believe and which lines of inquiry to follow. As Malcolm Gladwell explains in Blink, these rapid decisions – often described as “hunches” – are informed by our individual social and emotional experiences, but also by the prejudices we have all internalised from wider society, such as racism, sexism, homophobia and transphobia.
Could artificial intelligence therefore offer a fairer and more efficient way forward for 21st-century policing? There are broadly two types of AI: “narrow AI”, which can perform specific tasks such as image recognition, and “general purpose AI”, which makes far more complex judgments and decisions extending across all kinds of domains. General purpose AI relies on deep learning – absorbing huge amounts of data and using it to continually adjust and improve performance – and has the potential to take over more and more of the tasks humans do at work. ChatGPT, a state-of-the-art language model that can write research papers, articles and even poems in a matter of seconds, is the latest example to catch the public imagination.
AI can already search through millions of pictures and analyse vast amounts of social media posts in order to identify and locate potential suspects. Drawing upon other kinds of data, it could also help predict the times and places where crime is most likely to occur. In particular cases, it could test hypotheses and filter out errors, allowing officers to focus on lines of inquiry most justified by the available evidence.
Faster, fairer, evidence-based decisions for a fraction of the cost certainly sounds attractive, but early research suggests the need for caution. So-called “predictive policing” uses historical information to identify possible future perpetrators and victims, but studies have shown that the source data for this kind of modelling can be riddled with preconceptions, generating, for example, results that categorise people of colour as disproportionately “dangerous” or “lawless”. A 2016 Rand Corporation study concluded that Chicago’s “heat map” of anticipated violent crime failed to reduce gun violence, but led to more arrests in low-income and racially diverse neighbourhoods.
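To see how this feedback loop can arise, consider a minimal sketch – a toy simulation, not any real policing system, with invented neighbourhood names, rates and recording behaviour. Two areas have identical underlying crime; the “predictive” model simply sends more patrols wherever more crime has been recorded, and patrol presence increases what gets recorded.

```python
import random

# Toy simulation of a predictive-policing feedback loop (illustration only).
# All names, rates and the "discovery" model below are invented assumptions.
random.seed(42)

TRUE_WEEKLY_OFFENCES = 10     # identical underlying crime in both areas
DISCOVERY_PER_PATROL = 0.08   # assumed chance each offence is recorded, per patrol unit
TOTAL_PATROLS = 10

# Historical records start slightly skewed, e.g. by past over-policing.
recorded = {"Area A": 12, "Area B": 10}

for week in range(52):
    total = sum(recorded.values())
    # "Predictive" allocation: patrols proportional to past recorded crime.
    patrols = {area: TOTAL_PATROLS * count / total
               for area, count in recorded.items()}
    for area in recorded:
        p_record = min(1.0, DISCOVERY_PER_PATROL * patrols[area])
        # What gets recorded depends on patrol presence, not on any real
        # difference in offending between the two areas.
        recorded[area] += sum(random.random() < p_record
                              for _ in range(TRUE_WEEKLY_OFFENCES))

print(recorded)  # the initial 12-v-10 gap widens, despite equal true rates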
More profoundly, AI is designed to achieve the objectives we set it. So, as Prof Stuart Russell warned in his 2021 Reith Lectures, any tasks must be carefully defined within a framework that benefits humanity lest, as in The Sorcerer’s Apprentice, the command to fetch water results in an unstoppable flood.
Eventually we may learn to design out bias and avoid perverse consequences, but will that be enough? As Prof Batya Friedman of the University of Washington’s information school has observed: “Justice is more than a right decision. It is a process of human beings witnessing for each other, recognising each other, accounting for each other, restoring each other.”
Instead of debating what AI will or will not be able to do in the future, we should be asking what we want from our criminal justice system, and how AI could help us to achieve it. Our ambitions are unlikely to be delivered merely by replacing officers with computers – but think what might be achieved in a human-machine team, where each learns from and adds value to the other. What if we subjected human beings to the same scrutiny that we quite rightly place on AI, exposing our biases and assumptions to ongoing and constructive challenge? What if AI could assist with repetitive and resource-intensive tasks, giving police officers what Prof Eric Topol, writing about the AI revolution in medicine, has called the “gift of time”? This would allow them to treat both victims and the accused with the dignity that only humans can embody and that all members of society deserve.
Perhaps this would earn the trust and consent of the public upon which policing really depends.
• Jo Callaghan is a strategist specialising in the future of work, and author of the debut crime novel In the Blink of an Eye, published by Simon & Schuster.
Further reading
Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark (Penguin, £10.99)
Blink by Malcolm Gladwell (Penguin, £10.99)
The Political Philosophy of AI by Mark Coeckelbergh (Polity, £16.99)