Episode 57: Neural networks with infinite layers

Published: April 23, 2019, 8:04 a.m.

How are differential equations related to neural networks? What are the benefits of re-thinking a neural network as a differential equation engine? In this episode we explain all of this, and we provide some material that is worth studying. Below the references you'll find a short code sketch of the core idea. Enjoy the show!

[Figure: Residual block]

References

[1] K. He, et al., "Deep Residual Learning for Image Recognition", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
[2] S. Hochreiter, et al., "Long short-term memory", Neural Computation 9(8), pages 1735-1780, 1997.
[3] Q. Liao, et al., "Bridging the gaps between residual learning, recurrent neural networks and visual cortex", arXiv preprint, arXiv:1604.03640, 2016.
[4] Y. Lu, et al., "Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations", Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, 2018.
[5] T. Q. Chen, et al., "Neural Ordinary Differential Equations", Advances in Neural Information Processing Systems 31, pages 6571-6583, 2018.
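The sketch below is a minimal illustration of the connection discussed in the episode: a residual block computes h + f(h), which is exactly one forward-Euler step of the ODE dh/dt = f(h), and shrinking the step size toward zero gives the "infinite layers" continuous view behind [4] and [5]. The function `f`, the weights, and the dimensions are illustrative assumptions, not anything from the episode or the papers.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 4

# A fixed "layer" function f(h): a small tanh transformation
# standing in for a learned residual branch (illustrative only).
W = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b = rng.normal(scale=0.1, size=hidden_dim)

def f(h):
    return np.tanh(W @ h + b)

# Residual block: h_next = h + f(h), i.e. one forward-Euler step
# of the ODE dh/dt = f(h) with step size 1.
def residual_block(h):
    return h + f(h)

# "Infinite layers": take many small Euler steps with the same f,
# approximating the continuous ODE solution h(T).
def euler_ode(h0, T=1.0, n_steps=100):
    h, dt = h0.copy(), T / n_steps
    for _ in range(n_steps):
        h = h + dt * f(h)
    return h

h0 = rng.normal(size=hidden_dim)
print("one residual block:", residual_block(h0))
print("100 Euler steps   :", euler_ode(h0, T=1.0, n_steps=100))
```

In [5] this fixed-step Euler loop is replaced by an adaptive black-box ODE solver, with gradients computed via the adjoint method, so depth is no longer a discrete hyperparameter but a property of the solver.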