Running large deep learning models on limited hardware or edge devices is often prohibitive. There are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.
In this episode I explain one of the first such methods: knowledge distillation.

Come join us on Slack!

References
Distilling the Knowledge in a Neural Network: https://arxiv.org/abs/1503.02531
Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks: https://arxiv.org/abs/2004.05937
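
For listeners who want to see the core idea in code, here is a minimal sketch of the distillation loss described in Hinton et al. (the first reference above): the student is trained on a mix of softened teacher outputs and the usual hard labels. The PyTorch framing, function name, and hyperparameter values are illustrative assumptions, not taken from the paper or the episode.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target loss (from the teacher) with hard-label cross-entropy.

    T (temperature) softens the distributions; alpha balances the two terms.
    Both values here are illustrative defaults, not prescribed settings.
    """
    # Soft targets: KL divergence between temperature-scaled student and
    # teacher distributions. Multiplying by T^2 compensates for the 1/T^2
    # gradient scaling noted in Hinton et al. (2015).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```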