For the first time, a police facial recognition system based on artificial intelligence algorithms has led to the arrest of an innocent person. The system mistook him for a real criminal.
According to The New York Times, police in the American city of Detroit detained a man named Robert Julian-Borchak Williams for questioning in January of this year. After being taken to the police station, Williams was arrested because the facial recognition system identified him as the perpetrator of a 2018 shoplifting incident. As it later turned out, the AI-based system had made a mistake, but Williams still had to spend 30 hours behind bars before the police sorted out the situation.
Opponents of facial recognition systems have long argued that this technology is not yet accurate enough to reliably find and identify criminals. The developers of such systems have themselves repeatedly warned law enforcement agencies against using them as the sole evidence justifying a detention. Yet, as it turned out, in Williams’ case this system was the only “evidence” supporting the arrest of an innocent person.
Notably, the facial identification software used by the Detroit police displayed the following message alongside Williams’ match results: “This document is not a positive identifier. It is only an investigative lead and is not grounds for detention.”
After posting bail, Williams was released, and once the police sorted out the situation, they even apologized. But this case set a precedent that points to a much broader problem: artificial intelligence algorithms cannot yet be fully trusted.
“I seriously suspect this is not the first systemic error that has resulted in the arrest of a person who did not commit a crime. Most likely, this is just the first time the public has become aware of it,” summed up lawyer Clare Garvie, whom The New York Times asked for comment.