Accelerating Deep Learning with Mixed Precision Arithmetic with Greg Diamos - TWiML Talk #97

Published: Jan. 17, 2018, 10:19 p.m.

In this show I speak with Greg Diamos, senior computer systems researcher at Baidu. Greg joined me before his talk at the Deep Learning Summit, where he spoke on "The Next Generation of AI Chips." Greg's talk focused on work his team was involved in that accelerates deep learning training by using mixed 16-bit and 32-bit floating point arithmetic. We cover a ton of interesting ground in this conversation, and if you're interested in systems-level thinking around scaling and accelerating deep learning, you're really going to like this one. And of course, if you like this one, you're also going to like TWiML Talk #14 with Greg's former colleague Shubho Sengupta, which covers a bunch of related topics.

This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco, is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you'll hear here on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration.
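For readers curious about the technique Greg discusses, here is a minimal NumPy sketch of the core mixed-precision idea: do the forward and backward passes in float16, but keep a float32 "master" copy of the weights and scale the loss so that small gradients survive float16's limited range. This is an illustrative toy (a linear regression with made-up names and constants), not Baidu's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# float32 "master" weights; a float16 working copy is used for compute.
true_w = np.array([1.0, -2.0, 0.5, 3.0], dtype=np.float32)  # illustrative target
master_w = rng.standard_normal(4).astype(np.float32)

# Toy dataset, stored in float16 as the compute precision.
x = rng.standard_normal((32, 4)).astype(np.float16)
y = (x.astype(np.float32) @ true_w).astype(np.float16)

loss_scale = 128.0  # loss scaling: shifts tiny gradients up into fp16's range
lr = 0.1

for _ in range(300):
    w16 = master_w.astype(np.float16)   # cast master weights down for compute
    pred = x @ w16                      # fp16 forward pass
    err = pred - y
    # fp16 backward pass on the *scaled* loss; fold the 2/N of the MSE
    # gradient into the scale factor to keep intermediate values in range.
    scaled_err = err * np.float16(loss_scale * 2.0 / len(x))
    grad16 = x.T @ scaled_err
    # Unscale in float32 and update the float32 master weights.
    grad32 = grad16.astype(np.float32) / np.float32(loss_scale)
    master_w -= lr * grad32
```

After training, `master_w` should land close to `true_w` despite all matrix math happening in float16; without the master copy and loss scaling, fp16 rounding would stall or destabilize the updates.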