https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy
The Safe Uncertainty Fallacy goes:
The situation is completely uncertain. We can't predict anything about it. We have literally no idea how it could go.
Therefore, it\u2019ll be fine.
You're not missing anything. It's not supposed to make sense; that's why it's a fallacy.
For years, people used the Safe Uncertainty Fallacy on AI timelines:
Since 2017, AI has moved faster than most people expected; GPT-4 sort of qualifies as an AGI, the kind of AI most people were saying was decades away. When you have ABSOLUTELY NO IDEA when something will happen, sometimes the answer turns out to be "soon".