Today we’re joined by Ben Zhao, a Neubauer Professor of Computer Science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI. We focus on Ben’s recent Fawkes, Glaze, and Nightshade projects, which use “poisoning” approaches to protect users against AI encroachment. The first tool we discuss, Fawkes, imperceptibly “cloaks” images so that models perceive them as highly distorted, effectively shielding individuals from facial recognition models. We then dig into Glaze, a tool that uses machine learning algorithms to compute subtle alterations, indiscernible to human eyes, that trick models into perceiving a significant shift in art style, giving artists a unique defense against style mimicry. Lastly, we cover Nightshade, a strategic defense tool for artists akin to a “poison pill,” which allows artists to apply imperceptible changes to their images that effectively “break” generative AI models trained on them.

The complete show notes for this episode can be found at twimlai.com/go/668.