Ban the Scan! AI, Facial Recognition, and Human Rights

Published: March 30, 2021, 11:44 a.m.

Facial recognition software is becoming more and more common, and it has many uses. One is unlocking your phone. Another lets stores recognize incoming customers so they can make personalized offers. The biggest and most controversial use, however, is in law enforcement. From federal agencies down to smaller municipalities, cameras mounted on street posts and the sides of buildings are becoming a common sight. Law enforcement applies the software in different ways, from catching speeders in the act to recording crimes in progress. The main area of concern, though, is how it is often used to search for suspects.

How can that be bad, you ask? Surely tracking down suspected criminals can't be a bad thing? Can it? It depends on how you go about it. If you have a good description of the suspect, or even a photograph, then you are in good shape. The software will find him, and he can be quickly and easily apprehended. But what happens when you don't have a good description of a suspect? What if all you have is a very generic one, skin color, hair color, height, just a few basics that don't do much to narrow down the people your scanner is looking for?

In that case, you're unfairly profiling people based on merely superficial characteristics. That leads to a few things. One, police resources get wasted running down false suspects. Two, those false suspects are innocent people who are now being harassed, innocent people who may develop resentment toward the police after such treatment. All because you didn't have a better description to go on than "tall black man, athletic build, wearing blue jeans." True, sometimes that's all there is. But a real person can spot the little behavioral cues that separate a real suspect from just a face in the crowd. An algorithm combing through the images collected by hundreds of cameras around a city has a much harder time. Unfortunately, the real people end up tied up chasing the false positives the software generates.
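To see why those false positives pile up so fast, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an assumption chosen purely for illustration, not a measurement from any real deployment, but the base-rate math works the same way for any citywide scan.

    # Illustrative base-rate arithmetic. Every figure below is an assumed,
    # made-up number; the point is the shape of the math, not the values.

    faces_scanned = 1_000_000    # assumed faces scanned citywide in a day
    true_suspects = 10           # assumed real suspects among them
    false_match_rate = 0.001     # assumed 0.1% false-match rate (optimistic)
    hit_rate = 0.99              # assumed chance a real suspect is flagged

    false_alarms = (faces_scanned - true_suspects) * false_match_rate
    real_hits = true_suspects * hit_rate
    precision = real_hits / (real_hits + false_alarms)

    print(f"Innocent people flagged: {false_alarms:,.0f}")       # ~1,000
    print(f"Real suspects flagged: {real_hits:.1f}")             # ~10
    print(f"Odds a flagged face is a suspect: {precision:.1%}")  # ~1%

Under those assumptions, roughly a thousand innocent people get flagged for every ten real suspects. Those thousand leads are where the real officers get used up.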

There is also the sad fact that facial recognition software is currently not great at recognizing the differences between faces from different ethnic groups. Most famously, Apple's software for unlocking its phones was, at least when first released, pretty bad at telling Asian faces apart. Other systems have a harder time distinguishing African faces. Why is that? Is the software racist? Of course not; it's code. It acts only on the data that's fed into it, and only in the ways it was designed to.

All right, are the coders racist then? Probably not. So how does this happen? A simple explanation is that coders build from their own experience, and the fact is that Silicon Valley is mostly full of white people. So they code for those facial characteristics. Even when training the software and refining the code to pick out finer differences, the faces used for that training are probably white. Why? Because they are the faces most readily available. If the software were being developed in Shanghai, there is a good chance it would do great at picking out Asian faces and not be as good at picking out white ones.
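If you want to see whether your own system has this problem, a per-group accuracy breakdown is a simple first check. Here is a minimal sketch, assuming you have ground-truth demographic labels for a test set; the group names and numbers are hypothetical.

    from collections import defaultdict

    def per_group_accuracy(records):
        """records: iterable of (group_label, was_prediction_correct) pairs."""
        correct, total = defaultdict(int), defaultdict(int)
        for group, ok in records:
            total[group] += 1
            correct[group] += int(ok)
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical results from a model trained mostly on one group:
    results = (
        [("group_a", True)] * 980 + [("group_a", False)] * 20     # 98% accurate
        + [("group_b", True)] * 850 + [("group_b", False)] * 150  # 85% accurate
    )

    for group, acc in sorted(per_group_accuracy(results).items()):
        print(f"{group}: {acc:.1%}")

The overall average in this made-up example is 91.5%, which looks respectable on its own. Only the per-group split exposes the gap a skewed training set creates.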

As an example, back in school I had a friend whose parents were missionaries in Africa. He said that when he first came back to the US, everyone in class looked the same to him. He had grown used to the differences among the black faces he'd spent the last year or so with, so the white people he was now in contact with seemed like bland copies, while to me each one was incredibly different. Frame of reference matters, and very often people don't realize how much their everyday environment shapes the things they do on a daily basis.

So, how do we deal with this? We can't just accept the unfair profiling of people through poorly trained facial recognition software; the opportunity for abuse and rights violations is too high. The clear answer is that coders need to do a better job of training their software to recognize different ethnic groups. Put in the effort to get some unfamiliar faces fed into the algorithm. Yes, we know there are deadlines. But what if we told you that you could do it without leaving your desk? What if someone, like TARTLE, had a whole marketplace of people who might be willing to share images of their faces to help you with that? That way, you get better software, and innocent people won't be harassed by police whose time would be better spent tracking down actual criminals.

What's your data worth? www.tartle.co
