# Summary
Have you ever had the experience of training a network, investing a lot of time in finding the right hyperparameters and testing different initializations to push that validation accuracy over a certain threshold, only to find out when putting the model into production that it significantly underperforms?

If so, then you have experienced a common problem with deep neural networks: the performance gap between in-distribution and out-of-distribution generalization.
Today on the show, Rahim Entezari gives us a wonderful tour through his PhD journey investigating ways to understand and improve the generalization performance of deep neural networks.
Rahim explains how one can improve generalization through different methods, both in data space and in parameter space.

From a parameter space perspective, we discuss how different forms of sparsity, or the efficient creation of deep ensemble networks by permutation of network configurations, can improve generalization.

From a data perspective, we discuss how data quality and data diversity affect the generalization performance of modern deep neural networks.
I hope you enjoy this interview, full of interesting concepts and ideas from deep learning theory.
# TOC
00:00:00 Introduction
00:02:18 Background Knowledge
00:06:56 Guest Introduction
00:12:35 Generalization from a Data or Parameter Perspective
00:16:21 In- and Out-of-Distribution Generalization
00:20:30 Structured and Unstructured Sparsity
00:29:55 Generalization in Parameter Space
00:46:56 Generalization in Data Space
\\n# Sponsors
Quantics: Supply Chain Planning for the new normal - the never normal - https://quantics.io/
Belichberg GmbH: We do digital transformations as your innovation partner - https://belichberg.com/
\\n# References
Rahim Entezari - https://www.linkedin.com/in/rahimentezari/