Can predictive policing prevent crime before it happens? It sounds like an idea out of science fiction — specifically, out of Philip K. Dick’s “Minority Report,” a 1956 short story about a “Precrime Division” that arrests suspects before any crime is committed.

Minority Report book cover

Can technology make this dream come true? It turns out that predictive policing is already in use by some 60 police departments around America, according to an article late last month in Science magazine, which reports that everything from Facebook profiles and minor-crime statistics to 911 call data is now being fed into algorithms.

“It makes us smarter. It puts us on the cutting edge of what’s going on in this country,” says George Turner, the chief of the Atlanta Police Department, in a video proudly displayed on the home page for PredPol, which produces predictive policing software.

But here’s one thing that even Philip K. Dick couldn’t predict: predictive policing doesn’t really work. At least, not so far.

Science’s article describes two young cops in Pittsburgh being aided by ShotSpotter, “a network of sensors that detects gunshots and relays the information to a laptop mounted between the front seats.” ShotSpotter is not predictive policing itself, but that system is soon to be upgraded with “CrimeScan” maps showing “where crime is likely to occur,” based on an algorithm developed by Carnegie Mellon scientists Wil Gorr and Daniel Neill.

And predictive policing continues to be a popular target for research all over the world. Last month, for example, researchers in Wales at the Social Data Science Lab at Cardiff University received an $800,000 grant from the U.S. Department of Justice to come up with a predictive policing model for Los Angeles — specifically, for hate crimes.


“Over the next three years, the team of researchers will pore over Twitter data and cross-reference it with reported hate crimes in the LA area,” reported TechWeek Europe, “to spot trends that facilitate the creation of markers or signatures which could identify if, where and when a potential hate crime could take place, allowing law enforcement to preemptively intervene.”
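
The Cardiff model itself is still to be built, and its details aren’t described in the article, but the cross-referencing step the researchers outline can be sketched roughly as follows. The file names, column names, and keyword list here are all hypothetical placeholders, not anything from the actual project.

```python
# Illustrative sketch only; the Cardiff team's actual model is not described.
# File names, column names, and keywords below are hypothetical placeholders.
import pandas as pd

tweets = pd.read_csv("la_tweets.csv", parse_dates=["date"])       # columns: date, district, text
crimes = pd.read_csv("la_hate_crimes.csv", parse_dates=["date"])  # columns: date, district

# Crude "signature" feature: hostile-keyword mentions per district per week.
KEYWORDS = ["placeholder_slur", "placeholder_threat"]
tweets["hostile"] = tweets["text"].str.lower().str.count("|".join(KEYWORDS))
weekly_signal = (tweets.groupby([pd.Grouper(key="date", freq="W"), "district"])["hostile"]
                       .sum().rename("hostile_tweets"))

# Cross-reference: reported hate crimes in the same district and week.
weekly_crimes = (crimes.groupby([pd.Grouper(key="date", freq="W"), "district"])
                       .size().rename("hate_crimes"))

merged = pd.concat([weekly_signal, weekly_crimes], axis=1).fillna(0)
print(merged.corr())  # a first, naive look at whether the two measures move together
```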

It’s an attractive proposition for police departments. “I’m not going to get more money,” says L.A. police chief Charlie Beck on the PredPol site. “I’m not going to get more cops. I have to be better at using what I have, and that’s what predictive policing is about…”

There’s just one problem. It’s not clear that it actually works.

Chicago tried a program to reduce the city’s high murder rate, with a lofty goal of both saving lives and blazing a new trail for the world of predictive policing. Because crime is scattered across the city, it doesn’t help to focus on specific regions, the Chicago police department’s head of technology told Science magazine. So the city used a $2 million grant to test out a new algorithm that identifies who’s most likely to be involved in a shooting before it happens. “The hope was that the list would allow police to provide social services to people in danger, while also preventing likely shooters from picking up a gun,” reported The Verge.
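
The details of Chicago’s model aren’t given in the article, so the sketch below is only a generic illustration of a “who, not where” risk list: score each person on a handful of risk factors and rank them. The factors, weights, and data are entirely invented.

```python
# A generic, invented "risk list" sketch. The factors, weights, and names
# below are hypothetical; this is not Chicago's actual model.
from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    prior_shooting_victim: bool  # previously shot
    prior_gun_arrests: int
    age: int

def risk_score(s: Subject) -> float:
    """Toy linear score: higher means 'more likely to be involved in a shooting'."""
    score = 3.0 if s.prior_shooting_victim else 0.0
    score += 1.5 * s.prior_gun_arrests
    score += 1.0 if s.age < 25 else 0.0
    return score

subjects = [
    Subject("A", True, 2, 22),
    Subject("B", False, 0, 31),
    Subject("C", True, 4, 19),
]

# The "list": everyone ranked by score, highest risk first.
for s in sorted(subjects, key=risk_score, reverse=True):
    print(f"{s.name}: {risk_score(s):.1f}")
```

A ranking like this says nothing about what should actually be done with the names on it, which, as the evaluation below found, turned out to be the crux of the problem.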

Chicago Police Department logo

But a detailed report from the RAND Corporation concluded that zero lives had been saved — and that, overall, the list of 426 likely shooters wasn’t even being used as intended. “There was no practical direction about what to do with individuals on the ‘Strategic Suspect List,’ little executive or administrative attention paid to the pilot, and little to no follow-up with district commanders,” the report concluded. One of its authors pointed out to The Verge that Chicago’s police department had 11 different anti-violence programs going on, and the list of likely shooters “just got lost.” But the report did identify one result of the program: people on the list were more likely to be arrested, prompting The Verge to conclude that it “essentially served as a way to find suspects after the fact.”


That’s one of the biggest concerns about predictive policing. According to Science magazine, some groups argue that it merely hides racial prejudice “by shrouding it in the legitimacy accorded by science.” If there’s bias in the criminal justice system, it carries through to the statistics that are ultimately fed into the algorithms, says one analyst with the Human Rights Data Analysis Group and a Ph.D. candidate at Michigan State University. “They’re not predicting the future. What they’re actually predicting is where the next recorded police observations are going to occur.”
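
The “recorded observations” point is easy to illustrate with a toy simulation; everything below is invented and stands in for no particular vendor’s system. If patrols are sent wherever the most incidents have been recorded, and incidents are more likely to be recorded where patrols are, the data ends up tracking deployment decisions rather than the underlying crime rate.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = [0.3, 0.3, 0.3]   # three districts with identical real crime rates
DETECT_PATROLLED = 0.9              # chance a crime gets recorded where police are sent
DETECT_ELSEWHERE = 0.3              # chance it gets recorded anywhere else
recorded = [1, 0, 0]                # one early record tips the scales toward district 0

for week in range(50):
    # "Prediction": patrol the district with the most recorded incidents so far.
    patrolled = recorded.index(max(recorded))
    for district, rate in enumerate(TRUE_CRIME_RATE):
        if random.random() < rate:  # a crime actually happens
            detect = DETECT_PATROLLED if district == patrolled else DETECT_ELSEWHERE
            if random.random() < detect:
                recorded[district] += 1

# District 0 almost always ends up with far more *recorded* crime than the
# others, even though the underlying crime rate is identical everywhere.
print(recorded)
```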

And the Electronic Frontier Foundation also points out another problem with the programs. “We know from past examples that when police are expecting violence, they often respond with violence.”

But the larger issue is that predictive policing itself might just not work. Science magazine spoke to the RAND study’s author, who complains that to predict specific crimes, “we would need to improve the precision of our predictions by a factor of 1000.” Where things stand now, the report concluded, “increases in predictive power have tended to show diminishing returns.”

It may be the holy grail of big data. Though the U.S. National Institute of Justice, a research branch of the Justice Department, has been studying crime data for decades, “until recently, the limits of computing power and storage prevented them from using large data sets.”

Science magazine interviewed UCLA anthropologist P. Jeffrey Brantingham, who points out that things began changing in 2006 when better data collection by police departments made the ability to predict crime “a real possibility rather than just a theoretical novelty.” Brantingham joined with postdoctoral scholar George Mohler to develop “PredPol” — a proprietary software package with an algorithm that predicts what’s going to happen during a police officer’s next shift.
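
Since PredPol’s algorithm is proprietary, the following is only a naive stand-in for this kind of shift-by-shift hotspot forecast: weight each map cell’s past incidents by recency and flag the top-scoring cells for the next patrol. The grid and incident history are invented.

```python
# A naive hotspot-forecast stand-in; the grid and incident history are invented.
from collections import defaultdict

# Invented incident history: (days_ago, grid_x, grid_y)
incidents = [
    (1, 4, 7), (2, 4, 7), (2, 5, 7), (3, 1, 2),
    (5, 4, 6), (8, 9, 9), (9, 4, 7), (12, 1, 2),
]

HALF_LIFE_DAYS = 7.0  # an incident a week old counts half as much as one today

def recency_weight(days_ago: float) -> float:
    return 0.5 ** (days_ago / HALF_LIFE_DAYS)

scores = defaultdict(float)
for days_ago, x, y in incidents:
    scores[(x, y)] += recency_weight(days_ago)

# The "boxes" to patrol on the next shift: the three highest-scoring cells.
hotspots = sorted(scores, key=scores.get, reverse=True)[:3]
print(hotspots)  # -> [(4, 7), (1, 2), (5, 7)] for the invented data above
```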

But that’s part of the problem, argues an analyst at the Human Rights Data Analysis Group — as closed-source software, it can’t be independently validated. “For the sake of transparency and for policymakers, we need to have some insight into what’s going on so that it can be validated by outside groups.”

There’s still a lot of faith in the programs. Chicago’s police department argues that RAND’s analysis came too early, that its program “has since evolved greatly” and has now been fully integrated into the department’s management accountability process. While RAND focused on version 1 of the program, the department is now running version 5, “which is significantly improved.”


But Science ultimately concludes that when it comes to the powers of predictive policing, “the evidence is scarce, and the few data points are not encouraging.” The magazine also acknowledges that it’s very difficult to perform comprehensive testing or establish a control group. “The average police chief lasts 3 years,” pointed out Pittsburgh’s chief of police. “I don’t have time for controls.”

One such study was funded in Shreveport, Louisiana, in 2012; it found almost no difference between the targeted area and the “control district.” But further analysis revealed that the results were probably skewed by waning enthusiasm for the software after the first few months.

There seems to be another lesson emerging from the data: in the real world, there’s no substitute for a good relationship between police officers and their community. One predictive policing expert (and law professor) at the University of the District of Columbia complained to The Verge that the sophisticated algorithm was only supposed to be a starting point. And Pittsburgh’s police chief agrees with the critics who say that, at best, data needs to be part of a solution that combines other approaches like social service programs. “Who uses just enough data to be really good, and has the relationships that are just robust enough? That’s the challenge that policing in this country is facing right now.”

In Scotland, police officers are now being encouraged to tweet more, simply to build better ties to the community. “What really makes a difference is giving people the opportunity to engage in a two-way conversation with the police about things that matter to them,” said Deputy Chief Constable Gordon Scobbie.

And the author of the RAND study points out that maybe this isn’t a problem that big data can solve. After all, big data is useful when there are complicated (and nonlinear) relationships hidden in the input data. But unfortunately, with crime, “It’s much more simple — the more risk, the more crime.

“There aren’t really complicated relationships going on.”