A New Trick Uses AI to Jailbreak AI Models, Including GPT-4

Published: Dec. 11, 2023, 11 a.m.

Adversarial algorithms can systematically probe large language models like OpenAI's GPT-4 for weaknesses that can make them misbehave. Read the story here.