"Process of elimination" is bad when every option is bad
Let’s say you’re trying to answer some question. Here’s an algorithm that sometimes works ok:
1. Identify the space of possible options.
2. Identify counter-arguments against some subset of the options.
3. Conclude that the truth must be among the remaining options.
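As a toy sketch of these three steps (the option names, numeric "strengths," and threshold are all illustrative inventions, not anything from this post), the procedure amounts to discarding every option whose strongest known counter-argument clears some bar:

```python
def eliminate(options, counterarguments, discard_threshold):
    """Toy process of elimination: discard any option whose strongest
    known counter-argument meets the threshold; keep the rest.

    Note what this can't model: step 1 may have missed options entirely,
    in which case even a perfect threshold eliminates its way to a
    'survivor' that isn't the truth.
    """
    survivors = []
    for option in options:
        strongest = max(counterarguments.get(option, []), default=0.0)
        if strongest < discard_threshold:
            survivors.append(option)
    return survivors

# Made-up strengths (0 = no objection, 1 = decisive refutation).
counterarguments = {
    "dualism": [0.9],
    "illusionism": [0.8],
    "panpsychism": [0.85],
}

# With a well-calibrated threshold, exactly one option survives:
print(eliminate(list(counterarguments), counterarguments, 0.85))
# -> ['illusionism']

# But when every option faces a strong counter-argument, a slightly
# miscalibrated threshold eliminates everything, truth included:
print(eliminate(list(counterarguments), counterarguments, 0.75))
# -> []
```

The point of the toy is only that the output is extremely sensitive to the threshold precisely in the regime this post worries about: when all options score high.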
In order for this process to work, you need to have a good sense of
(i) how good the counterarguments against (and the arguments for)[^1] the true theory will be (in order to determine that a counter-argument is sufficiently strong to discard a hypothesis in step 2), and
(ii) how confident you should be that you’ve identified the space of all possible options.
If you have extremely robust methods for identifying the set of options, and extremely strong counter-arguments, then (i) and (ii) aren't very hard. For example, if you're using mathematical proofs in steps 1 and 2, then you don't need to worry about anything other than errors in your proofs.
But you will be in dangerous territory if you’re thinking about a question where all (known) positions have extremely convincing counter-arguments against them. In this case, you might be miscalibrated on how strong a counter-argument needs to be for you to completely discard an option. Two areas where this seems to be the case are:
Theories of consciousness. (In particular: the hard problem of consciousness.)
I lean towards some form of illusionism, which still seems like an absurd position to me, but every other position seems to have even more devastating counter-arguments against it.
(But note that ignorance exists in the map, not in the territory. If I fully understood the situation with consciousness, I expect I would have answers to all seemingly-devastating counter-arguments. My current situation of finding all answers absurd is definitely a symptom of still being confused. But “some form of illusionism” is nevertheless my best guess.)
Theories of ethics. This is true in two senses:
Humans have heaps of contradictory and intransitive ethical intuitions, which means that any theory of ethics will in some situations contradict either some object-level intuition about what’s right to do or some meta-level property (such as transitivity).
(And note that this differs from the above situation in that ethical questions aren't part of an objective, actually-consistent reality. Getting a better understanding of ethics might fundamentally deconfuse me about why I have so many contradictory intuitions, but it isn't guaranteed either to dissolve the intuitions or to provide a theory that satisfies all of them.)
Some form of ethical anti-realism is probably true, so if you don't watch yourself, you might argue against an ethical position by arguing that there's no objective reason to favor it. But such a criterion applies equally against every position, so it's not very relevant unless the arguments for a position uniquely rely on some form of realism.
C.S. Lewis's The Abolition of Man (wikipedia, pdf) is largely dedicated to pointing out exactly this mistake: some ~anti-realists argue against certain ethical positions by pointing out that they're not supported by any objective arguments, without dealing with the corresponding argument against their own favored ethical positions.
But then, unfortunately, C.S. Lewis seems to make the same mistake himself on the meta-level: taking this mistake as a (decisive) argument against the anti-realists, he concludes that moral realism must be true, without any positive argument for how that could be.
What's the remedy to this dilemma? Probably just the obvious thing: don't be happy with your view just because you can convincingly argue against all other well-known views (related). Make sure that you either (i) have positive arguments for why your own position must be true, or (ii) feel confident that you've considered the full set of options, have tried to generate and engage with counter-arguments against your own position, and are still convinced that the counter-arguments against your own position are less bad than those against all the others.
[^1]: Among the (counter-)arguments that you can easily access from your current epistemic state.