58. David Duvenaud - Using generative models for explainable AI

Published: Nov. 18, 2020, 2:43 p.m.

In the early 1900s, all of our predictions were the direct product of human brains. Scientists, analysts, climatologists, mathematicians, bankers, lawyers and politicians did their best to anticipate future events, and plan accordingly.

Take physics, for example, where every task we now think of as part of the learning process, from data collection and cleaning to feature selection and modeling, had to happen inside a physicist's head. When Einstein introduced gravitational fields, what he was really doing was proposing a new feature to be added to our model of the universe. And the gravitational field equations he put forward at the same time were an update to that very model.

Einstein didn't come up with his new model of gravity (or "theory", as physicists call it) by running model.fit() in a Jupyter notebook. In fact, he never outsourced to machines any of the computations needed to develop it.

Today, that's somewhat unusual, and most of the predictions the world runs on are generated in part by computers. But only in part: until we have fully general artificial intelligence, machine learning will always be a mix of two things. The first is the constraints that human developers impose on their models; the second is the calculations that go into optimizing those models, which we outsource to machines.

The human touch is still a necessary and ubiquitous component of every machine learning pipeline, but it's ultimately limiting: the more of the learning pipeline we can outsource to machines, the more we can take advantage of computers' ability to learn faster and from far more data than human beings can. But designing algorithms flexible enough to do that requires serious outside-the-box thinking, exactly the kind of thinking that University of Toronto professor and researcher David Duvenaud specializes in. I asked David to join me for the latest episode of the podcast to talk about his research on more flexible and robust machine learning strategies.
