It’s called predictive policing, and law enforcement agencies in other cities are already making it happen. In New York City, home to the oldest and arguably most sophisticated real-time crime center in the country, police can use surveillance and data analysis technology to identify suspected criminals or terrorists based on anything from a birthmark to a limp.
The idea is that people who have a record get their identifying marks loaded into a database. So if police need to identify someone they see, they can type in the visible characteristics and then get that person’s name and information.
With terrorism, “it’s so hard to find those few [radicals] that really are serious about it,” says Professor Lonergan at USC. “The only way you find them is by doing the kind of data collection and data mining that we’re talking about.”
The technology can also be used for more routine policing, such as addressing stalking claims. If complainants have a license plate number and are able to give at least three places where they might have seen the stalker, the POD system can be queried to see if the vehicle was there, McPhail says.
“Now I might have a stronger case to substantiate a stalking claim in advance of something potentially much more serious happening to our victim,” he says. “And it has been used to that effect.”
More policing not the answer?
For some privacy and civil liberties advocates, such predictive strategies in routine police work are a problem, not a solution.
“There’s a shift in the primary modality of policing, where it’s not just the old investigating methods being employed, but preemptive policing based on hunches,” says Hamid Khan, a coordinator with the Stop LAPD Spying Coalition, an alliance of community groups that aims to prevent undue surveillance of marginalized communities in Los Angeles. “It’s become part of a larger architecture of surveillance.”
He and other critics say that sort of predictive policing reinforces racial profiling and violates civil liberties, with little accountability on the part of the officers who employ such methods. Worse, they argue, the strategy fails to address the underlying reasons people commit crimes.
“[T]he deepest flaw in the logic of predictive policing is the assumption that … what the model predicts is the need for policing, as opposed to the need for any other less coercive social tools to deal with the trauma of economic distress, family dislocation, mental illness, environmental stress and racial discrimination that often masquerade as criminal behavior,” writes Aderson Francois, a professor of law at Howard University in Washington, in an op-ed for The New York Times.
Law enforcement should stay out of the surveillance and data collection business, critics say – at least until lawmakers develop clear policies regulating the use of such tools.
“Technology can be liberating or it can be a tool for control,” notes Shahid Buttar, director of grassroots advocacy at the Electronic Frontier Foundation (EFF), a civil liberties nonprofit based in San Francisco. Conscientiously developing and implementing policy to govern that technology and its use, he says, could spell the difference between the two.