# In What Sense is Knowledge Creation Unpredictable?

In a past post, I criticized the view that determinism implies predictability. Most deterministic algorithms have to be run to find out what their result will be, so they are not predictable in advance. However, this confusion seems to be quite common in the Deutschian community in particular. I’ve had this question come up from several different sources. So in this post, I’m going to explain why this point confuses so many people and how to separate the issues out in our minds.

To really understand why people get confused, you have to start with a few good thought experiments.

## Thought Experiment #1: Two AGIs Interacting with the Real World

Imagine that you have invented an AGI algorithm that is a Universal Explainer. This algorithm runs on a machine computationally equivalent to a Turing machine (i.e., any electronic computer today). You put this program into a robot that will interact with the real world. The program is 100% deterministic (as are all Turing-equivalent computers).

Now suppose you build two of these robots on essentially identical hardware running identical programs. Will they be in every way identical?

Surprisingly, no. But why not? They are, after all, identical programs running on identical hardware.

It’s because the two robots receive very different inputs to their AGI function. The very fact that they can’t occupy the same spot in the world guarantees this. They will have different experiences that cause the two robots to diverge over time, until they have entirely different personalities and are producing entirely different solutions to entirely different problems. In short, they will then be two different persons. In fact, the divergence will likely begin almost immediately.
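The point can be sketched in a few lines of code. This is a toy stand-in, not an AGI: I'm assuming only that the program is deterministic and that its state is a function of the inputs it has seen so far.

```python
import hashlib

def deterministic_agent(inputs):
    """A toy stand-in for a deterministic program: its internal
    'state' is fully determined by the sequence of inputs it sees."""
    state = "initial"
    for observation in inputs:
        # Each new observation is folded into the state deterministically.
        state = hashlib.sha256((state + observation).encode()).hexdigest()
    return state

# Identical programs -- but different positions in the world mean
# different observation streams, so the states diverge.
robot_a = deterministic_agent(["view from spot A", "bumped a wall"])
robot_b = deterministic_agent(["view from spot B", "clear path ahead"])
print(robot_a == robot_b)  # False: same program, different inputs
```

The program itself never changes; the divergence comes entirely from the inputs.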

## Thought Experiment #2: Two AGIs Interacting with Two Different Virtual Worlds

Imagine that you have invented an AGI algorithm that is a Universal Explainer. This algorithm runs on a machine computationally equivalent to a Turing machine (i.e., any electronic computer today). You put this program into a virtual agent that will interact with a sizable virtual world. The program is 100% deterministic (as are all Turing-equivalent computers).

Now suppose you build two of these AGIs plus virtual worlds on essentially identical hardware (or, if not identical, with whatever virtual environment is needed to make them effectively identical for our purposes) and ensure that they have identical inputs (e.g., the same random seed). The implication here is that we and the outside world will in no way interact with either AGI. Will they be in every way identical?

Yes. Because the total program (the AGI plus the virtual environment) is identical in every way and both are run with identical inputs, these two AGIs will end up producing identical knowledge using their creativity. This follows directly from how deterministic Turing machines work, and from what “determinism” means.

Note that if we seed each program (agent plus world) with a different random seed, this won’t be the case: the two will develop entirely different (and unpredictable) knowledge. (Here I’m assuming that the AGI or the world uses pseudo-random numbers. If it uses genuinely random numbers instead, then obviously this last statement about seeds would be incorrect. But then their inputs are different anyway, and we’re no longer doing the same thought experiment.)
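Here is a minimal sketch of both cases, using a toy “agent + world” where all randomness is pseudo-random and comes from a single seed (the function name and the simple state update are my own illustration, not anything from the thought experiment):

```python
import random

def run_virtual_world(seed, steps=8):
    """A toy deterministic 'agent + world': all randomness comes from a
    pseudo-random generator initialized with the given seed, so the
    whole run is a function of the seed alone."""
    rng = random.Random(seed)
    state = 0
    trajectory = []
    for _ in range(steps):
        state += rng.randint(-10, 10)  # the world's 'events'
        trajectory.append(state)
    return trajectory

# Identical inputs (same seed): the two runs are identical in every way.
print(run_virtual_world(42) == run_virtual_world(42))  # True

# Different seeds: the two runs diverge into different histories.
print(run_virtual_world(42) == run_virtual_world(7))
```

With the same seed the runs cannot differ; with different seeds they diverge (in practice, almost immediately).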

## Thought Experiment #3: Two AGIs in Two Virtual Worlds, Running at Different Speeds

Now take thought experiment #2 but with one difference: One agent/world is running on a computer twice as fast as the other. What will now happen according to theory?

Answer: They will still produce identical outcomes but one will do so faster than the other.

Question: Does this violate Popper’s theory that knowledge creation is unpredictable?

## In What Sense is Knowledge Creation Unpredictable?

This is the key thing that seems to cause confusion for the Deutschian community. They are tempted to answer ‘yes’ here, and then they find themselves in a contradiction: Popper’s theory says knowledge creation is unpredictable, yet this thought experiment, which follows directly from computational theory, says knowledge creation is predictable. So they aren’t sure how to resolve the contradiction.

But there is actually no contradiction here. To see why, you have to ask yourself “what do we mean by ‘knowledge creation’ in the first place?”

More specifically, let’s say that the faster of the two AGIs invents some new knowledge, say, a time machine. We are watching the AGI living in its virtual world and we realize it has just invented a time machine. Cool beans! So we start to build time machines and geek out.

Now along comes the slower of the two agents. We know that in 100 days it will also invent a time machine. We can predict that this is the case because we just saw the identical AGI + virtual world invent a time machine. So it will too. (Assuming we in no way interact with it and thereby change its inputs.)

But here is the key point: that second AGI is not creating any new knowledge from our perspective. We already know how to build a time machine, because the faster of the two AGIs taught us. So the first run of the program was entirely unpredictable; only the second run was predictable. And only the first run counts as knowledge creation!

This is the sense in which knowledge creation is unpredictable. There is no other sense required by Popper’s theory, so this is sufficient.

## A Few Interesting Notes

• Let’s say that the AGIs at some point figured out that they were in a virtual world, found a way to start communicating with us, and we chose to communicate back. At that point, they will no longer have the same inputs, and thus they will no longer create the same knowledge. But this would then be a different thought experiment.
• Subpoint: any time the AGI starts to interact with the real world in any way, the ‘total computation’ (total program) then must include the whole real world. Thus divergence will always happen due to inputs being different. But this would then be a different thought experiment.
• Let’s say that the AGIs figure out how to manipulate their virtual world by exploiting a bug in the code (e.g., a buffer overflow). If the machines are guaranteed to be identical in every way, then they will continue to have identical outcomes. But suppose there is a subtle difference in hardware or in the underlying operating system that causes the programs to start to diverge. This is the same as saying they are no longer identical programs and is thus a different thought experiment.
• Let’s say there is a defect in the hardware of one of the machines. The programs may then diverge (more likely, one will simply crash). But this is identical to saying that they are no longer identical programs and so is a different thought experiment.
• Let’s say a stray cosmic ray causes one of the two machines to malfunction and flip a bit. If this doesn’t cause a crash (most likely it will) this would cause the programs to diverge. But this is identical to saying that they are no longer identical programs and this is then a different thought experiment.
• From the point of view of the second program (that invents the time machine 100 days later) that knowledge creation was unpredictable to it. So this doesn’t violate Popper’s theory either.
• This means that a term like ‘knowledge creation’ comes with a tacit point of view. But no matter which view you take, knowledge will always be unpredictable from one’s own view.
• We keep talking about the faster and slower agent as if they are two different agents. But in fact, this is effectively a single agent across two identical worlds in a multiverse. Or in other words, from a multiverse perspective, this is one agent with one world up until something causes one of the two worlds to diverge.