This is a historical post. The idea of “algorithmic reducibility” seemed “obvious” to me at the time. I’m not really sure anymore. Is that really even a form of “reducibility”? I’m interested in others’ thoughts.
…the chameleonic nature of numbers [is] so rich and complex that numerical patterns have the flexibility to mirror any other kind of pattern. (Douglas Hofstadter in I Am a Strange Loop, p. 159)
In my last post, I discussed the point of view known as ‘reductionism’ and the problems with it. In summary, reductionism is the false belief that the sciences that work with the smallest units of nature – atoms and below – offer somehow more fundamental explanations of reality than explanations at emergent levels, such as thought or computation.
A few posts ago, I discussed computability and comprehension. My final conclusion was that while algorithms and explanations aren’t the same thing, you can’t have an explanation without having an algorithm.
Gregory Chaitin (in this article) points out that a theory must be simpler than the data it explains:
…a theory has to be simpler than the data it explains, otherwise it does not explain anything. The concept of a law becomes vacuous if arbitrarily high mathematical complexity is permitted, because then one can always construct a law no matter how random and patternless the data really are. (From “The Limits of Reason”)
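Chaitin’s criterion comes from algorithmic information theory: a ‘law’ is, in effect, a short program that reproduces the data, and data with no program shorter than itself is lawless. As a rough illustration (not Chaitin’s own construction), we can use an off-the-shelf compressor as a loose stand-in for “length of the shortest theory”:

```python
import random
import zlib

# A "lawful" dataset: 10,000 characters generated by a very short rule.
patterned = ("0123456789" * 1000).encode()

# A "patternless" dataset of the same length: pseudo-random bytes.
# (Fixed seed so the sketch is reproducible.)
rng = random.Random(42)
noisy = bytes(rng.randrange(256) for _ in range(10_000))

# Compressed size approximates the length of the shortest description.
print(len(zlib.compress(patterned)))  # small: the data admits a short law
print(len(zlib.compress(noisy)))      # roughly the size of the data itself
```

The patterned data compresses to a few dozen bytes – a “theory” far simpler than the data – while the random data stays about as long as it was, matching Chaitin’s point that for truly patternless data the only “law” is a restatement of the data itself.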
Interestingly, this ability to reduce all explanations to computable algorithms forms a sort of ‘algorithmic reducibility’ that stands in stark contrast to the more familiar ‘physical reducibility’ we normally think of. In fact, if it’s true that every explanation has an attached algorithm, then ‘algorithmic reducibility’ plays the very role that reductionists thought particle physics played: if you can’t reduce it to an algorithm, you don’t actually have a full explanation. This would mean that the theory of computation is actually more fundamental than particle physics.
There is something both disturbing and satisfying about the possibility that ‘algorithmic reducibility’ is fundamental. But it does seem to follow logically from our discussions so far. It also gives us a plausible explanation for why something like ‘beauty’ seems non-algorithmic yet doesn’t seem wholly subjective either. Perhaps we just don’t comprehend it yet. There are many phenomena that we can’t yet turn into algorithms. We will always have to ask: does this mean I just don’t understand it yet, or does this phenomenon exist only in my mind?
Consciousness and Algorithms
But then what about consciousness? Is not consciousness effectively an aspect of nature too? Or is it separate from nature? Does this not lead to a conundrum? If consciousness can be explained, then it can be reduced to an algorithm. Doesn’t that mean we’re all just automatons? But if we say consciousness is not algorithmic, then doesn’t that mean we are claiming consciousness is fundamentally unexplainable? Aren’t we then claiming that not even God can comprehend consciousness because it’s beyond any sort of comprehension?
This paradox is a key question of interest for me. But first, we need to finish determining what science really is and come up with a good theory for how we gain knowledge.
Questions for Discussion:
- Which is more disturbing to you? The possibility that we are algorithms, or the possibility that science can never, even in principle, explain consciousness?