Having now made an attempt to define what a rational fallacy is and how it differs from a logical fallacy (though sometimes a fallacy can be both), let’s get to some examples and a discussion of how these fallacies are used (often by the Popperian community) in real life.
Definition – Saving one’s theory by the introduction of an ad hoc theory.
This is always possible, and so Popper ruled it out methodologically, since it turns every theory into an easy-to-vary theory. One potential issue here: it’s always possible that the ad hoc theory is the start of a research program that ends in an actual good explanation. In other words, ad hoc arguments are dangerous because they make every theory easy-to-vary, but they might actually be correct, so we can’t truly claim to rule them out entirely either. Ideally, we’ll bring up ad hoc arguments but not take them too seriously until they can be improved enough to be testable. Here is the important rule: a theory ‘saved’ by an ad hoc argument must still be considered to have a serious problem; ad hoc arguments must never be allowed to rescue a theory from refutation.
Example – I mentioned a good example of this in my post on ad hoc arguments, which happened in response to my post on face and motion blindness. There, I mentioned that I had a conversation with a critical rationalist who did not believe face blindness suggested that facial recognition was a non-explanatory module separate from rational reasoning. To put this in critical rationalist terms, we had two competing theories:
- Humans recognize faces using the same creative explanatory process through which they create all ideas; thus there cannot be a ‘facial recognition’ module that the mind uses to recognize faces.
- Humans have creative explanatory processes but also older animal processes, and facial recognition is actually one of the older animal processes. It thus exists in a single module in the brain that can be destroyed, leaving an otherwise functioning universal explainer unable to recognize faces well ever again.
Then we took a real-life observation: there actually exist people who suffer damage to a certain part of their brain and lose the ability to recognize faces well ever again. This is about as clear an observation as we could have hoped for to differentiate between these two theories. If this doesn’t count as a ‘refuting’ observation for theory 1, then I don’t see how any observation could possibly refute it. (That would make it an untestable theory and of no interest at this time.)
But this critical rationalist was unfazed. He quickly responded:
- Perhaps adults just have a hard time creatively reconstructing modules they previously had.
- Perhaps some adults do relearn face recognition but do so quickly such that we think they never lost it.
His responses are interesting for a number of reasons. He almost certainly sincerely thought he had refuted the refuting observation. In fact, he was engaging in the rational fallacy of ‘ad hocing’. This isn’t a small point: Popper claims his epistemology does not work at all if you allow ad hoc arguments like this. So in effect it’s the same as not being a critical rationalist at all.
What is it about his responses that make them bad ad hoc explanations instead of good counterarguments?
The key problem is that the new explanations he offered are clearly meant simply to save a pet theory rather than to sincerely advance knowledge. How do we know this, according to Popper? Because his explanations made the situation less testable rather than more testable. This is the defining characteristic that separates ad hoc explanations from good explanations, according to Popper.
As I previously mentioned, if this doesn’t count as a refuting observation, nothing would. Critical rationalism requires us to throw out explanations that reduce testability like this.
This is the single most common rational fallacy I’ve noticed among Popperians other than (as we’ll later see) wordism. This is unfortunate because Popper is correct that allowing ad hoc explanations is tantamount to not utilizing critical rationalism at all.
I feel like there might be a bit of middle ground here worth mentioning. Let’s say that this critical rationalist hadn’t acted like he had refuted the observation but instead admitted it was a problem for his theory, and then offered his two ad hoc explanations as possible ways to eventually research an improvement to his theory. That strikes me as valid. Ad hoc explanations can be correct but premature. So they may make good research programs but not good explanations. The key problem here was that he failed to see that his theory actually did have a legitimate problem. The end result was that he didn’t take the problems of his theory seriously. (Again, that’s tantamount to saying he was just not doing critical rationalism at all!) He needed instead to give thought to how to go about testing his new theory and how it connects to his original theory.
Definition – One particularly common form of ad hoc argument is an explanation that has a conceptual gap more or less the same size as the problem it’s trying to explain.
Example – There are other signs that the above argument is an ad hoc argument. First of all, neither of these counterarguments follows from his own theory. There is nothing in his theory (theory 1) that predicts either that adults will have a harder time than children creatively reconstructing the module, or that there will be a special case where people either reconstruct the module creatively very quickly or never do.
Likewise, we can tell these are ad hoc arguments because each has an explanation gap roughly the size of the gap it’s trying to fill. The explanation gap in theory 1 is that it tacitly predicted we should not find cases of face blindness that persist once the person uses their creativity to recreate the module. This prediction failed in real life. So to explain that gap, an arbitrary rule was made up that adults, for some unspecified and unexplained reason, just can’t creatively reconstitute the module, or else do so either very quickly or never. But why? No explanation was offered. So we still haven’t really explained the original problem caused by the observation that face blindness exists in real life.
In other words, the explanation being offered still contains the same gap (but in disguise) of the original problem it was meant to solve. This is what explanation gapping is.
Falling Back to the Abstract
Definition – Every false theory can be made vaguer and vaguer until it becomes impossible to test, and thus can be claimed to be unrefuted.
Example – This is a very common tactic in online or Twitter debates. For example, let’s say someone says “Such-and-such outcome is due to racism or sexism.” (Fill in the blank here with salary differences, differences in the number of people of a given category in the field, etc.)
To be clear, I have no idea if racism/sexism is or is not the cause of any one failure in outcomes. The individual claim may well be correct or may be incorrect. It depends on the context of the claim. But it’s notoriously hard to actually demonstrate via an inter-subjective test (i.e. a test all can agree on) that one is correct about such claims (though it is not impossible). So when claims like this are made, more often than not the person has no really compelling observations to back up their view and is going mostly off of intuition. That isn’t necessarily a problem. This is where conjectures come from.
But in an internet argument, one doesn’t want to win the point “it might be the case that such-and-such outcome is due to racism/sexism; let’s discuss that possibility”; one wants to win the point “this is definitely due to racism/sexism.” If they lack any compelling evidence (and even if they are right, they might lack it due to how hard it is to come up with a good experiment!), how might they claim victory anyhow?
Well, how about this. Let’s simply reimagine ‘racism’ or ‘sexism’ in a somewhat vaguer way than normal. Maybe ‘racism’ literally means ‘a difference in outcomes.’ Surely this is ‘racist’ or ‘sexist’ in some legitimate sense in that there is less representation of minorities or women in that field, etc. By making the definition of ‘racism’ and ‘sexism’ just a bit vaguer, they now have a definition that they can’t be wrong about! After all, they are now literally saying “this outcome is racist because I’m defining racism to be a racially undesirable outcome.”
Please note, they are literally correct now. And maybe even legitimately so. So this isn’t secretly some false point, it’s actually now a true point! Maybe even one worth discussing!
However, if the original conversation was about whether people’s racist and sexist attitudes were leading to this bad outcome in reality, you’ve now entirely (but invisibly) shifted the discussion from a point you couldn’t win to one you can’t help but be right about, without ever actually addressing the equally valid point that the bad outcome may have little to do with people’s racist or sexist attitudes. This isn’t a small point. If you first ‘win’ the debate by defining the outcome itself as ‘racist’ or ‘sexist’ and then pretend you won the original point, we might end up with a solution that is inappropriate to the problem you actually wished to solve. (Say, requiring everyone to go to diversity training to deprogram them of their racist and sexist attitudes when this actually won’t affect the imbalance.)
This is why this rational fallacy matters. If your goal is to change the bad outcome, then you should not want to win a debate you are wrong about by subtly changing it into one you are right about. You should want to lose the first debate so that you don’t end up championing a solution that doesn’t address the actual problem.
This is also an example of ‘argument by tautology’, where you simply define yourself as right up front. It’s great for winning debates but does nothing to advance knowledge.
Definition – People tend to favor untestable arguments because they can’t be refuted. But rationally speaking, these should be considered ‘not even wrong.’
Example – Consider the Pseudo-Deutsch Theory of Knowledge. (The idea that there is no sense at all in which ‘knowledge’ is created by AI, not even as a euphemism for the algorithm using trial and error to find a solution to a problem and thus falling under the umbrella of Popper’s epistemology, and that all the ‘knowledge’ came from the programmer.) Many critical rationalists bandy it about as if it’s a genuinely good theory. When I point out that it’s entirely untestable, I might be told ‘so what, it’s philosophy and philosophy is untestable.’ This is just a fancy way of saying it’s not a testable theory at all and is thus currently ‘not even wrong.’ That’s the problem with the theory. It needs to be turned into a truly testable theory first. It certainly doesn’t qualify as a ‘best theory’ that people should take seriously in its current untestable form.
Definition – Similar to untestable arguments, except that these can in principle be tested; they are just so easy to change up that they can be made to match (or not match) any test.
Example – Deutsch’s example of the myth of Hades and Persephone is a good example. Yes, you can test that myth. But if the test fails, it means little because the myth can easily be re-adapted to match any set of outcomes.
The Pseudo-Deutsch Theory of Knowledge has the same problem. It can be used to show any of the following:
- That artificial intelligence creates no knowledge,
- or that artificial intelligence with ‘blind variations’ does but gradient descent (which is ‘sighted variations’) does not,
- or that all artificial intelligence creates knowledge including gradient descent,
- or that even humans create no knowledge, etc.
There is literally no outcome or set of outcomes that the Pseudo-Deutsch Theory of Knowledge can’t ‘explain.’ It is thus a shiftable argument and is of no interest until it is hard-to-vary enough that we can create inter-subjective tests from it.
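To make ‘easy-to-vary’ concrete, here is a toy sketch of my own (the function names and data are invented for illustration, not drawn from anyone’s actual argument). A ‘theory’ with one free knob per observation can be re-tuned to fit anything, so it ‘explains’ every outcome and forecasts nothing; a fixed law with no free parameters sticks its neck out and can actually fail a test:

```python
# A caricature of an easy-to-vary "theory": one free parameter per
# observation, so it can always be re-tuned to "explain" any outcomes.
def easy_to_vary_theory(observations):
    knobs = dict(observations)  # memorizes the data; predicts nothing new
    return lambda x: knobs.get(x)

# A hard-to-vary theory: a single fixed functional form with no knobs
# left to adjust after the fact.
def hard_to_vary_theory(x):
    return 2 * x  # "the output is always double the input"

data = [(1, 2), (2, 4), (3, 99)]  # the last point refutes the fixed law

memorizer = easy_to_vary_theory(data)
assert all(memorizer(x) == y for x, y in data)  # "explains" everything, even (3, 99)
assert hard_to_vary_theory(3) != 99             # the fixed law can actually be refuted
```

The point of the sketch: the memorizer is compatible with every possible data set, so no observation could ever count against it, which is exactly what makes a shiftable argument uninteresting.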
Here is a real-life example: “Gradient Descent is not creating knowledge because the knowledge is already in the data.” The whole concept of ‘the knowledge is already in the data’ can rule in or out anything you want. To make this into a good explanation, one will need to start with a definition of knowledge that is non-circular and intersubjectively testable. 
The person who used this argument with me then proceeded to define ‘knowledge’ as ‘outcomes created by blind variation and selection processes that don’t include sighted variations like gradient descent.’ This is now an argument by tautology, so it won’t do either. There is no point in arguing that gradient descent isn’t knowledge-creating if you define up front that gradient descent doesn’t count as knowledge under your chosen definition.
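Since the blind-versus-sighted distinction keeps coming up, here is a minimal sketch (my own toy example, with an invented loss function) of both kinds of search solving the same problem: a blind variation-and-selection loop that mutates randomly and keeps whatever improves, and gradient descent, whose variations are ‘sighted’ because each step uses the local slope:

```python
import random

# Toy objective: both searches try to find the x that minimizes this.
def loss(x):
    return (x - 3.0) ** 2

# "Blind" variation and selection: propose random mutations, keep improvements.
def blind_search(steps=2000, seed=0):
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5)  # variation is blind to the slope
        if loss(candidate) < loss(x):           # selection keeps what works
            x = candidate
    return x

# "Sighted" variation: gradient descent steers each change by the local slope.
def gradient_descent(steps=200, lr=0.1):
    x = 0.0
    for _ in range(steps):
        grad = 2.0 * (x - 3.0)  # derivative of (x - 3)^2
        x -= lr * grad
    return x

# Both are trial-and-error processes that converge on the same answer;
# they differ only in whether each variation is informed by local structure.
assert abs(blind_search() - 3.0) < 0.1
assert abs(gradient_descent() - 3.0) < 0.001
```

Whether the sighted version’s use of slope information disqualifies it as ‘knowledge-creating’ is exactly the question at issue; the sketch only shows that the two processes differ in mechanism, not that either does or does not create knowledge.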