LLMs: The Expl-A.I.-n It Like I'm 5 Edition

Published: Sept. 7, 2023, 6:52 p.m.

We are in a time of sweeping change driven by A.I., yet there remains a great deal of confusion about how Large Language Models work and how A.I. content is actually generated. Over the coming weeks, we hope to help change that. In the ten months since the public introduction of ChatGPT, we've spent a lot of airtime talking about A.I. and the Large Language Models behind it. We've discussed extraordinary uses of A.I. and very reasonable fears about how it will be used. Like many others in the search industry, show hosts Kristine Schachinger and Jim Hedger have a better-than-average understanding of A.I., but they also understand that, relative to our guests, they're merely well-informed laypersons.

LLM researcher Gavin Klondike joins us in this first of a several-week series to explain the basic concepts behind LLMs. Gavin helped organize the A.I. Village at this year's DEFCON, which is where Kristine met him. In this episode, Gavin breaks down how LLMs are built and trained to predict the most likely sequence of words with which to construct a response to a prompt. He explains many of the processes LLMs use to generate responses and offers what is perhaps the very best analogy for why an A.I. simply can never understand the topic at hand, even if it can help you understand that topic better.

This was a fascinating, frightening, enlightening, and highly entertaining interview that will absolutely help you understand the technology that's rapidly altering the ecology of the Web.



Support this podcast at https://redcircle.com/webcology/donations

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy