Fallacies of Reversed Moderation

Published: Dec. 20, 2018, 10:25 p.m.


A recent discussion: somebody asked why people in Silicon Valley thought that only high-tech solutions to climate change (like carbon capture or geoengineering) mattered, and why they dismissed more typical solutions like international cooperation and political activism.

Another person cited statements from the relevant Silicon Valley people, who mostly say that they think political solutions and environmental activism are central to the fight against climate change, but that we should look into high-tech solutions too.

This is a pattern I see again and again.

The popular consensus believes in 100% X, and absolutely 0% Y.

A few iconoclasts say that X is definitely right and important, but maybe we should also think about Y sometimes.

The popular consensus reacts: "How can you think that it's 100% Y, and that X is completely irrelevant? That's so extremist!"

Some common forms of this:

Reversed moderation of planning, like in the geoengineering example. One group wants to solve the problem 100% through political solutions, another group wants 90% political and 10% technological, and the first group thinks the second only cares about technological solutions.

Reversed moderation of importance. For example, a lot of psychologists talk as if all human behavior is learned. Then when geneticists point to experiments showing behavior is about 50% genetic, they get accused of saying that "only genes matter" and lectured on how the world is more complex and subtle than that.

Reversed moderation of interest. For example, if a vegetarian shows any concern about animal rights, they might get told they're "obsessed with animals" or they "care about animals more than humans".

Reversed moderation of certainty. See for example my previous article Two Kinds Of Caution. Some researcher points out a possibility that superintelligent AI might be dangerous, and suggests looking into this possibility. Then people say it doesn't matter, and we don't have to worry about it, and criticize the researcher for believing he can "predict the future" or thinking "we can see decades ahead". But "here is a possibility we need to investigate" is a much less certain claim than "no, that possibility definitely will not happen".

I can see why this pattern is tempting. If somebody said the US should allocate 50% of its defense budget to the usual global threats, and 50% to the threat of reptilian space invaders, then even though the plan contains the number "50-50" it would not be a "moderate" proposal. You would think of it as "that crazy plan about fighting space reptiles", and you would be right to do so. But in this case the proper counterargument is to say "there is no reason to spend any money fighting space reptiles", not "it's so immoderate to spend literally 100% of our budget breeding space mongooses". "Moderate" is not the same as "50-50" is not the same as "good". Just say "Even though this program leaves some money for normal defense purposes, it's stupid". You don't have to deny that it leaves anything at all.
