59. Matthew Stewart - Tiny ML and the future of on-device AI

Published: Nov. 25, 2020, 3:15 p.m.


When it comes to machine learning, we're often led to believe that bigger is better. It's now pretty clear that, all else being equal, more data, more compute, and larger models add up to better performance and greater generalization power. And cutting-edge language models have been growing at an alarming rate: by up to 10X each year.


But size isn't everything. While larger models are certainly more capable, they can't be used in all contexts: take, for example, the case of a cell phone or a small drone, where on-device memory and processing power just aren't enough to accommodate giant neural networks or huge amounts of data. The art of doing machine learning on small devices with significant power and memory constraints is pretty new, and it's now known as "tiny ML". Tiny ML unlocks an awful lot of exciting applications, but also raises a number of safety and ethical questions.
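(For readers curious what that looks like in practice: one standard tiny ML technique, not something specific to this episode, is post-training quantization, which shrinks a trained model so it can fit within a small device's memory budget. Below is a minimal sketch using TensorFlow Lite; the model path and file names are placeholders.)

```python
import tensorflow as tf

# Hypothetical example: shrink a trained model for a memory-constrained
# device via post-training quantization. "saved_model_dir" is a placeholder.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_model = converter.convert()

# The resulting flatbuffer is typically small enough to embed in firmware
# for a microcontroller-class device (e.g. via TensorFlow Lite Micro).
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```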


And that's why I wanted to sit down with Matthew Stewart, a Harvard PhD researcher focused on applying tiny ML to environmental monitoring. Matthew has worked with many of the world's top tiny ML researchers, and our conversation focused on the possibilities and potential risks associated with this promising new field.
