The Morality of Artificial Intelligence

Published: Nov. 24, 2017, 9:56 a.m.


Driverless cars could be on UK roads within four years under government plans to invest in the sector. The Chancellor, Philip Hammond, said: "We have to embrace these technologies if we want the UK to lead the next industrial revolution." At the more alarming end of the spectrum, Silicon Valley billionaire Elon Musk believes artificial intelligence is "a fundamental risk to the existence of human civilisation". AI is changing our lives here and now, whether we like it or not. Computer algorithms decide our credit rating and the terms on which we can borrow money; they decide how political campaigns are run and what adverts we see; they have increased the power and prevalence of fake news; through dating apps they even decide who we might date and therefore who we're likely to marry.

As the technology gathers pace, should we apply the brakes or trustingly freewheel into the future? For those inclined to worry, there's a lot to worry about: not least the idea of letting robot weapons systems loose on the battlefield, or the potential cost of mass automation to society. Should we let machines decide whether a child should be taken into care, or empanel them to weigh the evidence in criminal trials?

Robots may never be capable of empathy, but perhaps they could be fairer than humans in certain decisions; free of emotional baggage, they might thus be more 'moral'. Yet even if machines were to make 'moral' decisions on our behalf, according to whose morality should they be programmed? Most aircraft are piloted by computers most of the time, but we still feel safer with a human in the cockpit. Do we really want to be a 'driverless' society?

Producer: Dan Tierney.
