68. Silvia Milano - Ethical problems with recommender systems

Published: Jan. 27, 2021, 4:32 p.m.

One of the consequences of living in a world where we have every kind of data we could possibly want at our fingertips is that we have far more data available to us than we could ever review. Wondering which university program you should enter? You could visit any one of a hundred thousand websites that each offer helpful insights, or take a look at ten thousand different program options on hundreds of different universities' websites. The only snag is that, by the time you finish that review, you probably could have graduated.

Recommender systems allow us to take controlled sips from the information fire hose that's pointed our way every day of the week, by highlighting a small number of particularly relevant or valuable items from a vast catalog. And while they're incredibly valuable pieces of technology, they also have some serious ethical failure modes, many of which arise because companies tend to build recommenders to reflect user feedback, without thinking of the broader implications these systems have for society and human civilization.
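
To make that failure mode concrete, here is a minimal, purely hypothetical sketch (not any company's actual system) of a recommender whose only objective is observed user engagement. The class name, item labels, and click counts are all invented for illustration; real systems are far more sophisticated, but the underlying incentive is the same.

    from collections import defaultdict

    # Hypothetical sketch: rank items purely by how often users engaged with them.
    # Nothing in the objective accounts for accuracy, diversity, or social impact.
    class EngagementRecommender:
        def __init__(self):
            self.clicks = defaultdict(int)  # item_id -> total observed clicks

        def record_feedback(self, item_id, clicked):
            # Every click nudges the item further up future rankings.
            if clicked:
                self.clicks[item_id] += 1

        def recommend(self, candidate_items, k=3):
            # Whatever attracts the most engagement floats to the top.
            return sorted(candidate_items,
                          key=lambda item: self.clicks[item],
                          reverse=True)[:k]

    # Example: sensational content that draws clicks crowds out everything else.
    rec = EngagementRecommender()
    for item, clicked in [("outrage-post", True), ("outrage-post", True),
                          ("balanced-analysis", True), ("cat-video", False)]:
        rec.record_feedback(item, clicked)

    print(rec.recommend(["outrage-post", "balanced-analysis", "cat-video"]))
    # -> ['outrage-post', 'balanced-analysis', 'cat-video']

The sketch optimizes for feedback alone, which is exactly the design choice the episode questions: the ranking faithfully reflects what users click, while remaining blind to everything else that matters.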

Those implications are significant, and growing fast. Recommender algorithms deployed by Twitter and Google regularly shape public opinion on the key moral issues of our time, sometimes intentionally, and sometimes even by accident. So rather than allowing society to be reshaped in the image of these powerful algorithms, perhaps it's time we asked some big questions about the kind of world we want to live in, and worked backward to figure out what our answers would imply for the way we evaluate recommendation engines.

That's exactly why I wanted to speak with Silvia Milano, my guest for this episode of the podcast. Silvia is an expert on the ethics of recommender systems, and a researcher at Oxford's Future of Humanity Institute and at the Oxford Internet Institute, where she's been involved in work aimed at better understanding the hidden impact of recommendation algorithms, and what can be done to mitigate their more negative effects. Our conversation led us to consider complex questions, including the definition of identity, the human right to self-determination, and the interaction of governments with technology companies.
