The Problem with Predictive Policing: Is Artificial Intelligence Racist?
AI-assisted predictive policing risks entrenching historical biases and further eroding community trust in policing systems.
Photo by Matt Popovich on Unsplash
Crime-free utopian futures have long been a favoured trope for sci-fi films. From Sylvester Stallone’s Demolition Man to Tom Cruise’s Minority Report, all-powerful police states with the capability to predict and prevent crime before it occurs are often at the centre of these imagined societies.
It’s called predictive policing, and, over the last decade or so, it has moved out of fiction and into reality. Take Los Angeles, for example. The Los Angeles Police Department (LAPD) adopted PredPol software in 2011 as part of its efforts to reduce crime and improve resource allocation.
PredPol uses machine learning algorithms and artificial intelligence (AI) to analyse historical crime data and generate ‘hotspot’ maps that highlight areas where crimes are more likely to occur. These maps help law enforcement focus their efforts on specific locations at specific times.
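PredPol’s actual model is proprietary (its published research reportedly drew on self-exciting point-process models borrowed from earthquake aftershock forecasting), but the core idea can be sketched in a few lines of Python. Everything below – the grid size, the 30-day half-life, the simulated incident data – is an illustrative assumption, not PredPol’s real method or parameters:

```python
import numpy as np

# Illustrative historical crime records: (x, y, days_ago) per incident.
# In a real deployment these would come from police report data.
rng = np.random.default_rng(42)
incidents = np.column_stack([
    rng.uniform(0, 10, 500),   # x coordinate in km
    rng.uniform(0, 10, 500),   # y coordinate in km
    rng.uniform(0, 365, 500),  # days since the incident was recorded
])

GRID = 10          # divide the city into a 10 x 10 grid of cells
HALF_LIFE = 30.0   # an incident's weight halves every 30 days

scores = np.zeros((GRID, GRID))
for x, y, days_ago in incidents:
    i, j = min(int(x), GRID - 1), min(int(y), GRID - 1)
    scores[i, j] += 0.5 ** (days_ago / HALF_LIFE)  # recency-weighted count

# Flag the highest-scoring 5% of cells as 'hotspots' for extra patrols.
threshold = np.quantile(scores, 0.95)
hotspots = np.argwhere(scores >= threshold)
print(f"{len(hotspots)} hotspot cells:", hotspots.tolist())
```

The recency weighting is the key design choice: it makes the map chase the latest reports, which is exactly the property that matters for the bias concerns discussed below.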
LAPD spent almost a decade using PredPol’s predictive maps to allocate patrol officers and resources to predicted hotspots. This reportedly allowed the department to concentrate its presence and deter criminal activity.
Further reading: Are We Living in a Post-Truth World?
As deepfakes become more difficult to detect, it’s going to get tougher to separate fact from fiction. That has deep implications for how we identify and understand truth.
Perpetuating pre-existing biases
While the LAPD ended its use of PredPol in 2020, citing financial constraints associated with the Covid-19 pandemic, and the company behind the software has since rebranded as Geolitica amid public controversy, many cities around the world continue to explore AI-powered predictive policing models.
That’s raised a host of concerns among critics of predictive policing, primarily around data bias, privacy, transparency, and the potential for reinforcing existing inequalities in law enforcement.
Here’s why they’re concerned. Predictive policing relies heavily on historical crime data to make predictions about future criminal activity. Critics argue that this historical data can be biased because it reflects past policing practices, which may have disproportionately targeted certain communities. This can lead to an overrepresentation of particular demographics in crime data and perpetuate pre-existing biases.
That’s a big problem considering that deep pre-existing biases have been widely identified within policing systems. According to the US Department of Justice, for example, a black person is five times more likely to be stopped by police without just cause than a white person. Training predictive algorithms on that kind of heavily biased historical data risks entrenching racism even more deeply into predictive policing systems.
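The mechanics of that feedback loop are easy to demonstrate. The toy simulation below uses entirely hypothetical numbers – it’s not any vendor’s model – and gives two districts identical true offence rates, but seeds one with more recorded crime because it was historically patrolled more heavily:

```python
import numpy as np

# Toy feedback loop: two districts with IDENTICAL true offence rates,
# but district 0 starts with more recorded crime purely because it was
# historically patrolled more heavily.
true_rate = np.array([0.1, 0.1])     # offences per resident per year, equal
recorded = np.array([120.0, 60.0])   # historical recorded counts (biased)

for year in range(20):
    # The 'predictive' step: allocate patrols in proportion to the data.
    patrol_share = recorded / recorded.sum()
    # Offences are only recorded where patrols are present to observe them.
    recorded += 1000 * true_rate * patrol_share

print("Patrol shares after 20 years:", (recorded / recorded.sum()).round(3))
# Output: [0.667 0.333]. The initial disparity never corrects itself.
```

Because patrols go where the data points and new records are generated where the patrols are, the original disparity never washes out: the system keeps ‘confirming’ its own biased training data.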
Further reading: This is How You Get Sucked Down the Social Media Rabbit Hole
Social media has become a psychological minefield that uses infinite scrolling and algorithmically selected content to exploit our dopamine systems. Fortunately, the cure isn’t rocket science.
Eroding community trust
There are also concerns that biases inherent in predictive algorithms lead to over-policing of already heavily policed communities. Critics further argue that predictive policing systems lack transparency in their algorithms and methodologies, making it difficult for the public to understand how predictions are generated, whether bias is present, and who is accountable when the systems get it wrong. That opacity further erodes community trust in policing.
Privacy concerns are also central to the case against predictive policing. Predictive policing relies on the collection and analysis of vast amounts of data, which can include sensitive information about individuals. Concerns have been raised about the potential invasion of privacy, as well as how data is stored and secured in predictive policing efforts.
Similar predictive technologies have also infiltrated the judicial system. Some US courts use COMPAS software to predict an individual's risk of recidivism. However, a ProPublica investigation revealed that only 20 percent of individuals the software predicted would commit violent crimes actually went on to do so. The investigation also concluded that black people are almost twice as likely as white people to be labelled higher risk but not actually reoffend.
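To see what ‘almost twice as likely’ means in practice, it helps to compute the metric at issue: the false positive rate. The counts below are illustrative stand-ins, not ProPublica’s actual data; only the resulting rates (roughly 45 percent versus 23 percent) correspond to the figures their investigation reported:

```python
# Worked example of the 'false positive rate' behind ProPublica's finding.

def false_positive_rate(flagged_high_risk_no_reoffend, total_no_reoffend):
    """Share of people who did NOT reoffend but were flagged high risk."""
    return flagged_high_risk_no_reoffend / total_no_reoffend

# Hypothetical cohorts of 1,000 non-reoffenders per group:
fpr_black = false_positive_rate(449, 1000)  # ~45% wrongly flagged
fpr_white = false_positive_rate(235, 1000)  # ~23% wrongly flagged

print(f"Black defendants: {fpr_black:.1%} wrongly flagged high risk")
print(f"White defendants: {fpr_white:.1%} wrongly flagged high risk")
print(f"Ratio: {fpr_black / fpr_white:.2f}x")  # ~1.9x: 'almost twice'
```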
Further reading: 5 Simple Questions // The Singularity
Can we explain The Singularity in five simple questions? Let's find out.
The true cost of a crime-free utopia
It’s biases like these that are the biggest sticking point for many critics of predictive policing. When AI relies on heavily biased historical data, it risks entrenching these biases even more deeply in our policing and criminal justice systems.
To be fair, some law enforcement agencies have taken steps to address bias in data and algorithms, increase transparency in their predictive policing programs, and engage with the community to build trust. However, regulatory oversight and guidelines to ensure responsible and ethical use of predictive policing technologies are desperately needed.
As the role of AI and predictive analysis in policing continues to evolve, we need strong public discourse around how to strike a balance between crime prevention and protecting individual rights and civil liberties. Otherwise, we risk further institutionalising racism in pursuit of a crime-free utopia that may be best left to sci-fi films.