Neural Network Quantization and Compression with Tijmen Blankevoort - TWIML Talk #292

Published: Aug. 19, 2019, 6:07 p.m.

Today we're joined by Tijmen Blankevoort, a staff engineer at Qualcomm, where he leads the compression and quantization research teams. In our conversation with Tijmen we discuss:

• The ins and outs of compressing and quantizing ML models, specifically neural networks

• How much models can actually be compressed, and the best ways to achieve compression

• A few recent papers, including "The Lottery Ticket Hypothesis"
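Since quantization is the episode's central topic, here is a minimal sketch of what quantizing a neural network's weights means in practice: asymmetric uniform 8-bit post-training quantization with NumPy. The function names and the error-bound check are illustrative assumptions for this sketch, not Qualcomm's actual pipeline or any specific library API.

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Map a float tensor to unsigned integers with a shared scale and zero point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # avoid zero scale for constant tensors
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the integer representation."""
    return scale * (q.astype(np.float32) - zero_point)

# A float32 weight matrix shrinks 4x when stored as uint8.
weights = np.random.randn(64, 64).astype(np.float32)
q, scale, zp = quantize_uniform(weights)
recovered = dequantize(q, scale, zp)
max_err = float(np.abs(weights - recovered).max())
```

The round-trip error per weight stays within about one quantization step (`scale`), which is why 8-bit quantization often costs little accuracy while cutting memory and bandwidth by 4x versus float32.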