How is bias built into algorithms? Garbage in, garbage out.

Published: July 9, 2020, 9:47 a.m.

In facial recognition and AI development, computers are trained on massive datasets: millions of pictures gathered from all over the web. Only a few of these datasets are publicly available, and many organizations rely on the same ones. And they are problematic. Molly speaks with Vinay Prabhu, chief scientist at UnifyID. He and Abeba Birhane of University College Dublin recently studied these academic datasets. Most of the pictures were gathered without consent, the people in them can be identified, and the datasets contain racist and pornographic images and text. Ultimately, the researchers said, maybe it's not the data that's the problem. Maybe it's the whole field.