Multi-modal Deep Learning for Complex Document Understanding with Doug Burdick - #541

Published: Dec. 2, 2021, 4:31 p.m.

Today we're joined by Doug Burdick, a principal research staff member at IBM Research. In a recent interview, Doug's colleague Yunyao Li joined us to talk through some of the broader enterprise NLP problems she's working on. One of those problems is making documents machine consumable, especially with the traditionally archival file type, the PDF. That's where Doug and his team come in.

In our conversation, we discuss the multimodal approach they've taken to identify, interpret, contextualize, and extract things like tables from a document, the challenges they've faced when dealing with tables, and how they evaluate the performance of models on tables. We also explore how he's handled generalizing across different formats, how extensive fine-tuning has to be in order to be effective, the problems that appear on the NLP side of things, and how deep learning models are being leveraged within the group.

The complete show notes for this episode can be found at twimlai.com/go/541