Block-Sparse Kernels for Deep Neural Networks with Durk Kingma - TWiML Talk #80

Published: Dec. 7, 2017, 6:18 p.m.

The show is part of a series that I'm really excited about, in part because I've been working to bring it to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others.

This episode features Durk Kingma, a Research Scientist at OpenAI. Although Durk is probably best known for his pioneering work on variational autoencoders, he joined me this time to talk through his latest project on block-sparse kernels, which OpenAI published just this week. Block sparsity is a property of certain neural network representations, and OpenAI's work on developing block-sparse kernels makes it more computationally efficient to take advantage of them. In addition to covering block-sparse kernels themselves and the background required to understand them, we also discuss why they're important and walk through some examples of how they can be used.

I'm happy to present another fine Nerd Alert show to close out this OpenAI series, and I know you'll enjoy it!

To find the notes for this show, visit twimlai.com/talk/80. For more info on this series, visit twimlai.com/openai.
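For listeners curious what block sparsity looks like concretely, here is a minimal NumPy sketch. This is my own illustration under assumed block sizes and a made-up sparsity pattern, not OpenAI's actual kernel API: a block-sparse matrix stores only its nonzero blocks plus a block-level mask, and a multiply only visits the blocks that are present.

```python
import numpy as np

block = 4                      # block size (hypothetical choice)
nblocks = 3                    # 3x3 grid of blocks -> 12x12 matrix
rng = np.random.default_rng(0)

# Block-level sparsity pattern: 1 = block present, 0 = block is all zeros.
mask = np.array([[1, 0, 0],
                 [0, 1, 1],
                 [0, 0, 1]])

# Dense weight matrix with that block structure, kept for reference.
W = rng.standard_normal((nblocks * block, nblocks * block))
W *= np.kron(mask, np.ones((block, block)))   # zero out the masked blocks

x = rng.standard_normal(nblocks * block)

def block_sparse_matvec(W, mask, x, block):
    """Multiply using only the nonzero blocks; the work scales with the
    number of present blocks rather than the full matrix size."""
    y = np.zeros(W.shape[0])
    for i, j in zip(*np.nonzero(mask)):
        rows = slice(i * block, (i + 1) * block)
        cols = slice(j * block, (j + 1) * block)
        y[rows] += W[rows, cols] @ x[cols]
    return y

# Matches the dense product, while touching 4 of 9 blocks.
assert np.allclose(block_sparse_matvec(W, mask, x, block), W @ x)
```

The efficiency win discussed in the episode comes from GPU kernels that exploit exactly this structure, skipping the zero blocks entirely rather than multiplying through them.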