Desperately Trying To Fathom The Coffeepocalypse Argument

Published: May 3, 2024, 6:06 a.m.

One of the most common arguments against AI safety is:

Here’s an example of a time someone was worried about something, but it didn’t happen. Therefore, AI, which you are worried about, also won’t happen.

I always give the obvious answer: “Okay, but there are other examples of times someone was worried about something, and it did happen, right? How do we know AI isn’t more like those?” The people I’m arguing with always seem so surprised by this response, as if I’m committing some sort of betrayal by destroying their beautiful argument.

The first hundred times this happened, I thought I must be misunderstanding something. Surely “I can think of one thing that didn’t happen, therefore nothing happens” is such a dramatic logical fallacy that no human is dumb enough to fall for it. But people keep bringing it up, again and again. Very smart people, people who I otherwise respect, make this argument and genuinely expect it to convince people!

Usually the thing that didn’t happen is overpopulation, global cooling, etc. But most recently it was some kind of coffeepocalypse:
