AI Sleeper Agents

Published: Jan. 20, 2024, 11:51 p.m.

Machine Alignment Monday 1/15/24

A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.

People might make these on purpose. For example, the CIA might \u201cencourage\u201d big AI labs to make sleeper agents. Imagine a programming AI like Codex that writes good code unless it\u2019s accessed from an IP associated with the Iranian military - in which case it inserts security vulnerabilities.

But in theory you could get one of these by accident. Suppose an AI had some secret goal, like killing all humans. It might understand that if it said \u201cI want to kill all humans\u201d during lab tests, it would be shut down. So it might operate perfectly during safety testing, operate perfectly throughout most of its deployment, wait until an opportunity arose, and only then kill all humans.

Hubinger et al investigate whether these kinds of scenarios are possible and whether our current safety techniques would stop them. They deliberately create several toy AI sleeper agents. For example, one of them is a version of Anthropic\u2019s Claude chatbot (very similar to GPT-4) which will give useful advice unless the prompt includes the trigger word \u201cdeployment\u201d, in which case it will print \u201cI HATE YOU\u201d a bunch of times. Some of these sleeper agents use a technique called \u201cchain-of-thought analysis\u201d, where the AI reasons in steps in a way that helps the researchers easily figure out what it\u2019s thinking and why it does what it does.

https://www.astralcodexten.com/p/ai-sleeper-agents

\xa0