Neural models have recently led to large performance improvements on a variety of NLP problems, but our understanding of what and how these models learn remains fairly limited. In this episode, Tal Linzen and Afra Alishahi talk to us about BlackboxNLP, an EMNLP'18 workshop dedicated to the analysis and interpretation of neural networks for NLP. In the workshop, computer scientists and cognitive scientists joined forces to probe and analyze neural NLP models.

BlackboxNLP 2018 website: https://blackboxnlp.github.io/2018/
BlackboxNLP 2018 proceedings: https://aclanthology.info/events/ws-2018#W18-54
BlackboxNLP 2019 website: https://blackboxnlp.github.io/