Is It Possible to Escape the Deterministic Nightmare?

In her book The Science of Can and Can’t, Chiara Marletto discusses what she calls the “deterministic nightmare”: the worry that all your choices are predictable because they are determined in advance.

Our choices, and everything else depending on them, seem already to have been set in advance; they are written in the dynamical-law explanation, and fixed by the initial condition of the universe. The dynamical laws’ sequence of events fixes everything – it is given once and for all. All your ideas are laid out there: there seems to be no room for them to be unpredictable, as they should be if they were truly ‘free choices.’

We have just outlined what is called a ‘deterministic nightmare’: the fact that there does not seem to be any room for free choice if one presupposes the existence of a fixed, predetermined story for our universe, which is the picture that the traditional conception of physics, in terms of dynamical laws and initial conditions, seems to suggest.

…Unpredictability of action, of free will, is therefore another counterfactual that the dynamical-law approach does not seem to be able to accommodate.

The Science of Can and Can’t, pp. 62-83

It is not clear from this passage alone whether Chiara is concerned specifically with predictability or with the fact that choices in a deterministic world are set. These two concerns are not the same and need to be handled separately.

I recently had a chance to interview Chiara and asked her to clarify her concern. (The episode will be available later in July.) It turned out she was specifically concerned about predictability, not merely the fact that in a deterministic world choices are set. She explained that so long as it is not possible to predict a choice someone is going to make, she sees free will as preserved. This seems to make her a kind of compatibilist (contrary to what I claimed about her view in a podcast recorded before I interviewed her).

However, take a look at the thought experiment I used to clarify in what sense knowledge-creation is unpredictable. With a few simple tweaks, we can adapt that thought experiment to shed light on the question of the ‘deterministic nightmare.’

Here is the revised thought experiment:

Imagine that, using future technology, we have a means of scanning a mind into an electronic computer. Imagine that we scan someone into this computer and then put them into complete sensory deprivation (i.e., no inputs from the outside world). They can still actively think new thoughts and invent new ideas, and they can choose which problems to work on. Any random effects required for AGI are handled using pseudo-random numbers with a set seed, so as to make the mind entirely deterministic. [1]

Now we make a copy of this AGI and run one copy on a faster computer and one on a slower computer. Because the program is entirely deterministic, with no inputs from the outside world, we can predict what the slower of the two computers will think and choose in advance by watching what the faster one does.
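
To make this concrete, here’s a toy sketch in Python. (Obviously a seeded ‘topic picker’ is nothing like an AGI; the class and topic list below are just stand-ins I made up. The point is only to show why two seeded copies with no inputs must agree.)

```python
import random

class ToyAgent:
    """A stand-in for the sealed AGI: not a mind, just a seeded
    'topic picker' that makes deterministic choices with no inputs."""

    def __init__(self, seed):
        self.rng = random.Random(seed)  # pseudo-random numbers, fixed seed

    def next_choice(self):
        # Each 'choice' depends only on the seed and prior internal state.
        return self.rng.choice(["number theory", "topology", "logic", "geometry"])

# Two identical copies; picture one on fast hardware, one on slow.
fast = ToyAgent(seed=42)
slow = ToyAgent(seed=42)

# The fast copy runs ahead; its outputs predict the slow copy exactly.
predictions = [fast.next_choice() for _ in range(5)]
actual = [slow.next_choice() for _ in range(5)]
assert predictions == actual  # determinism makes the prediction infallible
```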

Is the second computer in a ‘deterministic nightmare’? If so, is there a way to escape it?

Well, we’re fallibilists, so of course this thought experiment isn’t certain. But can you make a good argument against it right now? Probably not! (If you can, please tell me, because I can’t.)

Why is it so hard to argue against this thought experiment? Because the only way around it is to argue against computational universality, or, put more simply, to argue that this thought experiment is actually physically impossible to implement. (Remember, computational universality is the claim that there are no physical processes that can’t be simulated on a computer.)

Now, of course, computational universality is just a theory, and it might be wrong. This is always possible. My friend Sadia is recording a podcast with me in which she argues that we have reason to believe we’ll discover new laws of physics (to fix certain known problems with time) that may invalidate computational universality and thus invalidate this thought experiment. Perhaps this is true.

However, consider the implications of making that claim: it means it will ultimately prove impossible to implement AGIs on current computers! [2] [3]

An Alternative:

I want to offer my own alternative to denying computational universality.

Perhaps there is nothing nightmarish about the idea that choices are determined. Why, exactly, is it a nightmare for the slower AGI that we can ‘predict’ what it will think in advance (by watching the faster AGI)? I think what really bothers people is the idea that someone can predict what you are going to do and that you’ll have no choice but to do it.

But consider this thought experiment:

Suppose we discover that the faster AGI, while in a state of sensory deprivation, makes a decision to work on a certain math theory. So we go to the slower AGI and tell it that it’s about to think about that math theory. Will the slower AGI still predictably choose to think about that particular math theory?

In this scenario, we do not know if it will or won’t make the same choice. In other words, its choice is now unpredictable.

But why does simply telling the AGI what it was going to do change what was previously a predictable outcome into an unpredictable outcome? Because by talking to it (to tell it what choice it will make!) we just gave it a new input! The two programs have now diverged! It is now free to make an unpredictable choice.
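
In the same toy form (with the caveat that how a real AGI would absorb the message is anyone’s guess; here I just model the message as an input that gets folded into the generator’s state), delivering the ‘prediction’ lets the two runs diverge:

```python
import random

def run(seed, steps, told_at=None, message=None):
    """The same kind of toy agent, except that telling it something
    counts as a new input folded into its internal state."""
    rng = random.Random(seed)
    choices = []
    for step in range(steps):
        if step == told_at:
            # The 'prediction' arrives as an input and perturbs the state.
            rng.seed(f"{rng.random()}|{message}")
        choices.append(rng.choice(["number theory", "topology", "logic"]))
    return choices

sealed = run(seed=42, steps=6)
told = run(seed=42, steps=6, told_at=3, message="you will choose logic")

print(sealed[:3] == told[:3])  # True: identical up to the interference
print(sealed[3:], told[3:])    # after it, the trajectories can diverge
```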

The only scenario in which we could predict its choices was the one in which we in no way interfered. This is why I believe having determined choices does not equate to a deterministic nightmare. I think the only way it becomes a nightmare is if we can’t change the choice even when told what choice we will make. [4] But as this thought experiment makes clear, that’s literally an impossibility. In my view, there is no deterministic nightmare in this scenario.

Notes

[1] The current theory is that true randomness does not add to the set of algorithms a Turing machine can run. If it did, then a Turing machine would not be universal, and we’d have to add a randomizer to a Turing machine to make it universal. But according to current theory, this is unnecessary, because pseudo-random numbers (which are entirely deterministic) can always be used in place of true random numbers in any algorithm. It is known that true random numbers sometimes perform slightly better in cases like machine learning, where removing ‘bias’ matters, but this doesn’t mean there are any machine learning algorithms we can’t run using pseudo-random numbers.
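
Here’s a small illustration of this point. (Monte Carlo estimation of pi is just my own example of a randomized algorithm; nothing about it is specific to AGI.) The algorithm runs unchanged whether it is fed OS entropy or a seeded, fully deterministic pseudo-random generator:

```python
import random

def estimate_pi(rng, n=100_000):
    """Monte Carlo estimate of pi; the algorithm neither knows nor
    cares whether its random numbers are 'true' or pseudo-random."""
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4 * inside / n

print(estimate_pi(random.SystemRandom()))  # OS entropy: not reproducible
print(estimate_pi(random.Random(42)))      # seeded PRNG: fully deterministic
print(estimate_pi(random.Random(42)))      # and so the same result every run
```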

[2] I want to be clear about criticisms I’ve previously received on this point. Someone argued that a stray cosmic ray might cause these computers to diverge. But that criticism doesn’t work, because I’ll just adjust the thought experiment to exclude the very rare cosmic ray malfunction.

But perhaps you’ll instead argue that, for some unspecified reason, it will prove literally impossible to implement AGIs without interference from cosmic rays. That claim is identical to claiming AGIs will require random cosmic rays to cause malfunctions. My next question would be: if this is required for AGIs to work, why can’t I just simulate cosmic ray malfunctions on my Turing machine and be done? (Which is identical to saying I don’t need cosmic ray malfunctions to build an AGI!)
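
To spell out what I mean by simulating the malfunctions, here’s a toy sketch. (The state update is arbitrary; it’s just something for the pretend bit flips to corrupt. The point is that seeded pseudo-randomness makes even the ‘malfunctions’ deterministic and copyable.)

```python
import random

def run_with_simulated_bit_flips(seed, steps):
    """If rare 'malfunctions' were somehow essential, they too can be
    simulated: a seeded PRNG decides when a pretend bit flip happens,
    so the whole run stays deterministic and copyable."""
    rng = random.Random(seed)
    state, trace = 1, []
    for _ in range(steps):
        state = (state * 31 + rng.randrange(100)) % 1_000_003
        if rng.random() < 0.01:  # a simulated cosmic-ray bit flip
            state ^= 1 << rng.randrange(20)
        trace.append(state)
    return trace

# Two runs with the same seed agree exactly, simulated malfunctions and all.
assert run_with_simulated_bit_flips(7, 1_000) == run_with_simulated_bit_flips(7, 1_000)
```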

You see where I’m going here. You either have to take the stance that AGIs require something that can’t be simulated on a Turing machine or you have to take the stance that everything can be simulated on a Turing machine, including AGIs. If you take the first stance, you just claimed computational universality doesn’t hold. If you claim the latter, then the thought experiment was valid to begin with. You are stuck between these two answers.

[3] For those who think this thought experiment could be invalidated without invalidating the possibility of AGI on a classical computer, I offer the following tweak: skip the part about scanning minds and instead build an AGI and let it grow for a while, then disconnect it from the world so that it has no inputs. Now make a copy of it and run the copy on a slower computer. The result is the same: you can now predict what the slower AGI will choose to think about.
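
In the same toy form as before (with the caveat that ‘growing on inputs’ is modeled crudely as re-seeding on percept strings I made up), the disconnect-then-copy version looks like this:

```python
import copy
import random

class ToyAgent:
    """The same toy stand-in, but with an input channel we can use
    for a while and then disconnect."""

    def __init__(self, seed):
        self.rng = random.Random(seed)

    def observe(self, percept):
        # Inputs from the world perturb the internal state.
        self.rng.seed(f"{self.rng.random()}|{percept}")

    def think(self):
        return self.rng.choice(["algebra", "analysis", "logic"])

agent = ToyAgent(seed=7)
for percept in ["sunrise", "coffee", "email"]:  # let it 'grow' on inputs
    agent.observe(percept)

# Disconnect, then copy. From here on both run deterministically, so the
# copy's future choices can be read off the original (or vice versa).
clone = copy.deepcopy(agent)
assert [agent.think() for _ in range(5)] == [clone.think() for _ in range(5)]
```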

[4] Actually, some nuance is needed here. There are many scenarios in which we want our choices to be determined in advance; that is what morality and character are. If I tell you in advance that you will choose not to kill your mother for fun, you will probably agree with me and be glad that I can predict your choice so well. This illustrates that we don’t want our choices to be totally unpredictable, and that certain kinds of predictability of choice are desirable.
