

Is AI Getting Better at Predicting Crime?


Sci-fi books and movies imagined police predicting crimes long before artificial intelligence (AI) made it possible. Now it’s no longer a theoretical possibility but a reality, with several cities experimenting with AI-powered predictive policing. Still, it’s far from common practice, so what’s standing in its way?

Accuracy and reliability have been issues for all predictive analytics applications over the years. However, the technology has matured enough to make waves across industries like manufacturing and supply chain management. So, is it ready for a larger rollout in crime prediction?

The State of Crime-Predicting AI Today

Predictive policing may not yet be the norm, but it has seen some major developments in recent years. These steps fall into three broad categories — real-world crime-predicting AI, experimental studies and announced but not-yet-started crime prediction projects.

1. Positive Real-World Results

Some cities have already seen impressive results from AI-powered predictive policing. The Dubai Police’s General Department of Criminal Investigation says serious crime rates fell by 25% after implementing an AI tool to predict crimes. Less severe criminal activity fell by 7.1%.

Like many AI crime prediction tools, the solution works by analyzing past reports and comparing them to current conditions. Highlighting trends in previous crimes lets the machine learning models identify areas and times where similar events are likely to occur. Police can then mobilize resources ahead of time to discourage crime or address things that may lead to it before it happens.
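As a loose illustration, not any vendor’s actual method, the core of this approach can be reduced to counting past incidents by location and time, then flagging the combinations that recur most often. Every grid cell, hour, and data point below is invented:

```python
from collections import Counter

# Invented historical reports: (grid_cell, hour_of_day) per incident.
past_reports = [
    ("A1", 22), ("A1", 23), ("A1", 22),
    ("B3", 14), ("B3", 14),
    ("C2", 2),
]

# Count how often each (place, time) combination has seen a crime.
counts = Counter(past_reports)

def predict_hotspots(top_n=2):
    """Return the (cell, hour) pairs with the most past incidents."""
    return [pair for pair, _ in counts.most_common(top_n)]

# Patrols could then be scheduled for the highest-ranked combinations.
print(predict_hotspots())
```

Real deployments layer on many more signals, but the underlying logic of extrapolating from historical reports is the same, which is also why biased historical data is such a concern later in this article.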

San Jose, California, has seen success from a different kind of AI model. While the city doesn’t predict crime yet, it detects potholes and graffiti with AI to address them sooner. According to officials, cleaning an area reduces the likelihood of criminal activity there, so this process still reduces incidents.

2. Promising Experimental Models

As real-world predictive policing grows, early testing of similar applications has also shown promise. In many jurisdictions, rolling out a crime prediction system in full involves considerable regulatory barriers, slowing the technology’s adoption. Examples in the experimental phase are pushing things forward in the meantime.

In a 2022 study, researchers at the University of Chicago created a model that can predict crimes with 90% accuracy a week in advance. More importantly, the system is less prone to bias than older systems because of how it handles data. Instead of dividing the city along neighborhood or political boundaries, it splits the map into a grid of equal-sized tiles, providing a fresh look at the area.

Building a digital twin of a city and mapping crime onto a grid of the researchers’ own design, instead of relying on older, bias-prone records, may produce more reliable insights. Police forces have not started using this system, but the research showcases what new technologies in this field can do.
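The tiling idea can be sketched in a few lines. The tile size and coordinates here are arbitrary assumptions for illustration, not values from the study:

```python
TILE_SIZE = 0.01  # degrees of latitude/longitude per tile; an arbitrary choice

def tile_for(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate onto a uniform grid cell, ignoring whatever
    neighborhood or political boundary it happens to fall inside."""
    return (int(lat // TILE_SIZE), int(lon // TILE_SIZE))

# Two nearby incidents land in the same tile even if an administrative
# boundary runs between them.
print(tile_for(41.8781, -87.6255))
print(tile_for(41.8785, -87.6252))
```

Incidents are then aggregated per tile, so the model never inherits boundaries drawn by historically biased processes.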

3. Upcoming Predictive Policing Investments

Looking forward, several areas have recently unveiled AI crime prediction goals. These projects have not started yet, but their emergence signals a growing shift toward this technology, possibly from increased government trust in its effectiveness.

In July 2024, Argentina’s Ministry of Security announced plans for AI crime prediction and response. According to the resolution, police forces will analyze historical criminal data to predict future events and respond before incidents occur. It also mentions real-time anomaly detection, which could work in tandem with the predictive model.

More recently, the U.K. revealed it’s working on a murder prediction tool to identify the people who may pose the greatest risk of becoming violent criminals. It’s unclear how authorities would respond to this data, and there are conflicting reports about what data the solution will use. The Ministry of Justice has said the project is for research only at this point, but research today could lead to real-world projects tomorrow.

How Has AI Crime Prediction Improved?

These current and future predictive policing applications are far from the first examples of this technology. However, they do signify a positive shift. Previous iterations have been unable to achieve the same levels of accuracy and dependability. The University of Chicago solution’s 90% accuracy and Dubai’s 25% reduction in serious crime are a far cry from earlier attempts.

In 2024, the Pasco County, Florida, Sheriff’s Office paid a $105,000 settlement and shut down its predictive policing program after poor results. Acting on the AI model’s predictions, officers repeatedly visited and even arrested citizens who had not committed any crime.

Similarly, Chicago shut down its crime prediction model after several complaints. Studies found the system had no significant impact on gun-related crime, even as it increased the likelihood that the people it flagged would be arrested. More worryingly, research revealed the algorithm was racially biased, making people of color more likely to be arrested.

Another popular solution used by multiple cities, Geolitica (formerly known as PredPol), showed just 0.6% accuracy when predicting aggravated assaults. Its accuracy rate for burglary was a mere 0.1% in some areas.

Compared to these failed programs, newer AI crime predictors are remarkably accurate. While there haven’t been as many stories of real-world police forces using these more advanced solutions, early results paint a stark contrast between yesterday’s AI and today’s.

The Dark Side of AI in Crime Prediction

It’s easy to see why so many jurisdictions are investing in AI crime prediction. Stopping criminal activity before it starts is a huge gain for public safety, and AI can detect trends that run contrary to human assumptions. For example, more than half of all burglaries happen during the day, despite the common belief that they’re more likely at night. AI can cut through what merely seems true to find actual trends.

At the same time, predictive policing carries significant privacy and ethical concerns. There is a reason 52% of Americans are more concerned about AI than excited about it. Even the most advanced models are prone to hallucination, and AI has a track record of perpetuating, and even exaggerating, human bias when trained on prejudiced data.

Historical crime data is misrepresentative at best and reflects entrenched racism at worst. Arrest records may indicate where policing is heaviest rather than where crime actually occurs. Consequently, the data may encode long-standing racial biases, which have a well-documented history in law enforcement.

AI models that learn from biased data may lead police to patrol Black neighborhoods more heavily or be more suspicious of people of color. The Chicago and Pasco County cases show just that. As a result, reliance on AI predictions without acknowledging these prejudices could heighten the unfair treatment of historically over-policed and disadvantaged demographics.

Racial injustice aside, collecting so much data on citizens creates privacy risks. Government agencies are the eighth-most-targeted industry for cybercrime, so a breach of a predictive policing system is both plausible and potentially damaging. Even if no cyberattacks succeed, monitoring citizens because they might commit a crime raises questions about over-surveillance and due process.

AI Crime Prediction Is Improving, But Concerns Remain

AI crime prediction models are far more accurate today than they were a few years ago. However, concerns about bias, effectiveness and justice are still prominent. Policymakers and AI companies must address these issues to ensure this technology can actually provide a safer future.