Campbell vs Deutsch: Ubiquitous Knowledge-Creation

In the past I wrote about how one of my most important disagreements with David Deutsch is on the subject of what counts as knowledge creation. I also noted in passing, in my post summarizing Donald Campbell’s theories, that Campbell’s views are at odds with Deutsch’s.

The key disagreement comes from The Beginning of Infinity where Deutsch argues that artificial evolution algorithms do not create knowledge.

Deutsch claims:

Recall Edison’s idea that progress requires alternating ‘inspiration’ and ‘perspiration’ phases, and that, because of computers and other technology, it is increasingly becoming possible to automate the perspiration phase. This welcome development has misled those who are overconfident about achieving artificial evolution (and AI). For example, suppose that you are a graduate student in robotics, hoping to build a robot that walks on legs better than previous robots do. The first phase of the solution must involve inspiration — that is to say, creative thought, attempting to improve upon previous researchers’ attempts to solve the same problem.

BoI, p. 158.

Deutsch goes on to describe how this researcher will use existing knowledge to come up with ways to solve this problem. Ultimately, the researcher creates a language of sorts that solves most of the problems involved with the robot walking:

When you have identified and solved as many of these sub-problems as you can, you will have created a code, or language, that is highly adapted to making statements about how your robot should walk. Each call of one of its subroutines is a statement of command in that language.

BoI, p. 159.

So far, most of what you have done comes under the heading of ‘inspiration’: it required creative thought. But now perspiration looms. Once you have automated everything that you know how to automate, you have no choice but to resort to some sort of trial and error to achieve any additional functionality. However, you do now have the advantage of a language that you have adapted for the purpose of instructing the robot in how to walk. So you can start with a program that is simple in that language, despite being very complex in terms of elementary instructions of the computer, and which means, for instance…

BoI, pp. 159–160.

Then you can run the robot with that program and see what happens. … When it falls over or anything else undesirable happens, you can modify your program – still using the high-level language you have created – to eliminate the deficiencies as they arise. This method will require less inspiration and ever more perspiration.

BoI, p. 160.

But an alternative approach is open to you: you can delegate the perspiration to a computer, but using a so-called evolutionary algorithm.

BoI, p. 160.
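For concreteness, the kind of "evolutionary algorithm" under discussion can be sketched minimally as an alternation of variation and selection. The sketch below is my own toy illustration, not Deutsch's example or any real robotics system; the "walk quality" function and all parameter names are invented stand-ins.

```python
import random

def evolve(fitness, n_params=4, generations=200, seed=0):
    """Minimal (1+1) evolutionary algorithm: mutate, keep the better variant.

    `fitness` scores a parameter vector; higher is better. Illustrative
    sketch only -- not any specific published algorithm.
    """
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(n_params)]
    best = fitness(parent)
    for _ in range(generations):
        # Variation: a random ("blind") perturbation of the current best.
        child = [p + rng.gauss(0, 0.1) for p in parent]
        score = fitness(child)
        # Selection: retain the child only if it scores at least as well.
        if score >= best:
            parent, best = child, score
    return parent, best

# Toy stand-in for "how well the robot walks": closeness to a target gait.
target = [0.5, -0.2, 0.8, 0.1]
walk_quality = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))

params, score = evolve(walk_quality)
```

Nothing in the loop "knows" anything about walking; all domain knowledge lives in the fitness function, which is exactly where the dispute over who created the resulting knowledge begins.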

Deutsch admits that this is a successful technique and that:

It certainly constitutes ‘evolution’ in the sense of alternating variation and selection. But is it evolution in the more important sense of creation of knowledge by variation and selection? This will be achieved one day, but I doubt that it has been yet, for the same reason that I doubt chatbots are intelligent, even slightly. The reason is that there is a much more obvious explanation of their abilities, namely the creativity of the programmer.

BoI, p. 160.

That is why I doubt that any ‘artificial evolution’ has ever created knowledge.

BoI, p. 161.

This idea that existing computer algorithms can’t create knowledge has gone on to become a key part of arguments made by fans of David Deutsch. Though Deutsch never explicitly stated this, it has become commonly accepted in the community that “perspiration” can mimic the appearance of knowledge creation while, in reality, the human programmer created the knowledge. Citing “perspiration” has thus become a way to explain away apparent knowledge creation by algorithms.

Campbell (and Popper) vs Deutsch

There is some difference of opinion in the community as to what Deutsch intended in the quotes above. I will cover other possible interpretations in future posts. But the most straightforward reading is that Deutsch is claiming knowledge creation has never been accomplished by any existing computer algorithm, even one that utilizes Popper’s epistemology of alternating variation and selection to improve its variants over time.

The Pseudo-Deutsch Theory of Knowledge

As previously mentioned, many people in the community read Deutsch this way. They make bold claims, based on their reading, that no existing computer algorithm creates knowledge. So assessing this version, even if it isn’t what Deutsch intended, seems necessary at this point. I will refer to this view as the Pseudo-Deutsch Theory of Knowledge. [1]

For the moment, though, let’s assume this theory is correct and follow it through to its logical conclusions. If it is correct, then one thing we can conclude is that Donald Campbell’s theory (which is really just Popper’s epistemology applied more broadly than Popper initially applied it) must be incorrect.

Given Popper’s enthusiastic agreement with Campbell, this must also mean that Popper was incorrect about the implications and applications of his own epistemology.

I believe this represents a potentially massive disagreement among these three great thinkers. The disagreement could be stated as:

The Pseudo-Deutsch View: Knowledge creation is rare. Only biological evolution and human minds have currently achieved it. We don’t currently know how to create a knowledge-creating algorithm.

Campbell’s and Popper’s View: Knowledge creation is common and ubiquitous. Any process that generates variations, differentiates between better and worse ones over time, and selectively retains the better ones counts as knowledge creation. Nature consists of a vast, overlapping hierarchy of evolutionary processes that are constantly creating knowledge. Computer algorithms that create knowledge are common and well known; in fact, knowledge-creating algorithms existed before the invention of the digital computer.

Can Campbell’s Theory Be Reconciled with Deutsch’s?

Campbell explicitly argued that Herbert Simon’s “Logic Theorist” program did create knowledge using his process of Blind Variation and Selective Retention. Campbell made this argument despite the fact that Simon himself didn’t believe his program utilized “blind variation.” This became one of Campbell’s key arguments for how common “blind variation” (and therefore knowledge creation) actually is, even when we don’t recognize it. So right off the bat, Campbell and Deutsch can’t be reconciled on this point: their views are mutually exclusive.

Simon’s Logic Theorist Program and the Invention of “Artificial Intelligence”

Simon’s Logic Theorist is a program that existed prior to the coining of the term “Artificial Intelligence.” As Wikipedia notes:

In 1955, when Newell and Simon began to work on the Logic Theorist, the field of artificial intelligence did not yet exist. Even the term itself (“artificial intelligence”) would not be coined until the following summer. [2]

Wikipedia “Logic Theorist”

In fact, this program was one of the main programs presented at the conference where the term “Artificial Intelligence” was coined, leading many to call it the first artificial intelligence program. [2]

The program famously uses a search tree to try to find useful logical theorems. Reviewing the code, the program appears to be entirely deterministic and non-random (though I’m somewhat uncertain, as the code is hard to read), which is not uncommon for artificial intelligence algorithms that, like Logic Theorist, rely on heuristics.

Certainly “Logic Theorist” does (as Deutsch allows) follow the logic of variation (the nodes of the search tree) and selection and thus does follow Popper’s epistemology. This is why Campbell saw it as a clear case of knowledge creation. So Campbell is arguing that narrow AI is often knowledge-creating.

It was less clear whether “Logic Theorist” did “blind variation” (as per Campbell’s version of Popper’s theory), but Campbell argued that it did, because it was based on heuristics that may fail. (See my post on Campbell for discussion.) It was therefore ‘blind’ in a legitimate sense and was discovering new knowledge using its search algorithm.
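The structure Campbell is pointing at can be shown in miniature: a deterministic best-first search that generates candidate derivations (variation) and orders them with a fallible heuristic (selection). To be clear, this is not Logic Theorist’s actual code; the rewrite rules, goal, and heuristic below are invented toy examples of the general pattern.

```python
from heapq import heappush, heappop

# Hypothetical rewrite rules standing in for logical inference steps.
RULES = [("A", "AB"), ("B", "BA"), ("AB", "C")]

def derive(start, goal, limit=1000):
    """Best-first search for a chain of rewrites from `start` to `goal`."""
    def h(s):
        # Heuristic: a deterministic but fallible guess at distance to goal.
        return abs(len(s) - len(goal))
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier and limit:
        limit -= 1
        _, s, path = heappop(frontier)
        if s == goal:
            return path  # the "proof": the chain of rewrites found
        for lhs, rhs in RULES:          # variation: try each rule...
            for i in range(len(s)):     # ...at each position in the string
                if s.startswith(lhs, i):
                    t = s[:i] + rhs + s[i + len(lhs):]
                    if t not in seen and len(t) <= 2 * len(goal):
                        seen.add(t)
                        # Selection: the heap orders candidates by heuristic.
                        heappush(frontier, (h(t), t, path + [t]))
    return None

path = derive("A", "CC")
```

The heuristic is deterministic, yet it can mislead the search down dead ends, which is exactly the sense in which Campbell argued heuristic search is still “blind”: the variations are generated without foreknowledge of which will succeed.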

This program did find interesting logical theorems, and the humans involved certainly perceived it as creating knowledge, in part because it produced a brand-new, elegant proof that no human had ever seen before. As Wikipedia explains:

Logic Theorist soon proved 38 of the first 52 theorems in chapter 2 of the Principia Mathematica. The proof of theorem 2.85 was actually more elegant than the proof produced laboriously by hand by Russell and Whitehead. Simon was able to show the new proof to Russell himself who “responded with delight”.

Wikipedia “Logic Theorist”

Pseudo-Deutsch View Applied to Logic Theorist

According to the Pseudo-Deutsch view, these proofs were ‘existing knowledge’ (including the previously unknown proof of theorem 2.85) that had already been injected by the programmer. Yet the program itself contains none of these theorems in any direct sense. Presumably, a proponent of the Pseudo-Deutsch view would argue that the knowledge embodied in the program already contained, in some sense, these theorems, and that the program was simply using “perspiration” to uncover this already existing knowledge.

Now I don’t personally find that a compelling argument, but since we’re starting from the assumption that the Pseudo-Deutsch view is correct, we’ll accept it for now. So “Logic Theorist” created no knowledge, not even the elegant new proof it discovered that no human previously knew. What are the implications of this view, and are we prepared to accept them?

Pseudo-Deutsch and Campbell’s Theories are Mutually Exclusive

One obvious outcome is that Campbell’s hierarchy of knowledge creation must be entirely wrong except in the case of biological evolution and human memes.

Keep in mind that Campbell gives many examples of knowledge creation at various levels of the hierarchy. To Campbell, a paramecium randomly trying to move in various directions and then ‘retaining’ the direction that allows it to move forward is an example of knowledge creation using Popper’s epistemology. But if “Logic Theorist” doesn’t count as knowledge creation, then this simple paramecium algorithm must not create any knowledge either. Why? Because if it did, it would be trivial to program, and we could falsify the Pseudo-Deutsch theory by easily demonstrating that we do have knowledge-creating algorithms that are artificial evolution.
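And the paramecium’s strategy really is trivial to program. The toy model below is my own illustration of Campbell’s blind-variation-and-selective-retention schema; the environment and all names are invented.

```python
import random

def paramecium_step(blocked, heading, rng):
    """Campbell-style blind variation and selective retention, in miniature.

    If the current heading is blocked, try random new headings ("blind
    variation") until one works; otherwise retain the current heading
    ("selective retention"). `blocked` reports whether a heading (in
    degrees) hits an obstacle. Toy model, not Campbell's own code.
    """
    while blocked(heading):
        heading = rng.uniform(0, 360)  # blind variation: random new direction
    return heading  # selective retention: keep the first workable direction

# Toy environment: a wall blocks all headings between 80 and 280 degrees.
rng = random.Random(1)
blocked = lambda h: 80 <= h <= 280
heading = paramecium_step(blocked, 270.0, rng)
```

On the Campbell/Popper view, the retained heading is a small piece of knowledge about the environment; on the Pseudo-Deutsch view, it presumably is not, since a programmer wrote the loop.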

In fact, this is true for every example Campbell uses in his essays. All of them (except, of course, biological evolution and human culture and ideas, the ones the Pseudo-Deutsch view considers to be true knowledge creation) are not hard to program. We already use computer vision as a vicarious substitute for trial-and-error locomotion, for example. Campbell included a computer playing chess as an example of artificial evolution. Even seemingly benign ideas, like the idea that bee language is a “vicarious selector” for motion, can’t be considered knowledge creation, or we could easily program something equivalent to it and thus falsify the Pseudo-Deutsch view.

Therefore, for the Pseudo-Deutsch view to be correct, essentially all of Campbell’s theory must be false. It must be the case that knowledge creation through Popper’s epistemology is not ubiquitous, as Campbell and Popper thought it was.

So it would seem we are forced to choose between the Pseudo-Deutsch view and Popper’s and Campbell’s view of knowledge creation, because they can’t both be correct. Is knowledge creation common, as Popper and Campbell believed? Or is it rare and not yet well understood, as the Pseudo-Deutsch view holds?


[1] As a nod to those who believe Deutsch meant something different from the straightforward reading, I’m going to call this the “Pseudo-Deutsch Theory of Knowledge.” In the original version of this post, I instead called it “Deutsch’s View” in scare quotes, to emphasize that we’re assuming, for the sake of this post, that the straightforward reading of Deutsch is the correct one. But what I found was that people were not paying attention to my actual argument and were instead trying to defend David Deutsch’s honor, usually by arguing that this was not his actual view and that he meant something else. I felt I had been quite clear that the scare quotes conceded exactly that point, but most people missed it. So by calling it the “Pseudo-Deutsch” view, I’m hoping to avoid that problem.

The name is borrowed from Biblical scholarship. Bible scholars attribute some letters to Paul with confidence and are unsure about others; the disputed ones are sometimes called the Pseudo-Pauline letters. Likewise, I’m leaving open the possibility that this is, or is not, Deutsch’s actual view. But this naming should make that point more obvious than mere scare quotes did.

[2] The following footnote from Wikipedia goes on to say:

The term “artificial intelligence” was coined by John McCarthy in the proposal for the 1956 Dartmouth Conference. The conference is “generally recognized as the official birthdate of the new science”, according to Daniel Crevier.

Wikipedia, “Logic Theorist.”

Simon presented Logic Theorist at that very conference, thus leading to Logic Theorist being considered “the first artificial intelligence program.”

4 Replies to “Campbell vs Deutsch: Ubiquitous Knowledge-Creation”

  1. I agree that there is a contradiction between Campbell’s and Deutsch’s views of knowledge, and I come down on Campbell’s side in this disagreement. I’m convinced by Campbell’s identification of knowledge-creation with Blind Variation and Selective Retention, and this means that knowledge-creating processes are ubiquitous.

    What aren’t ubiquitous, however, are open-ended knowledge-creating processes. Most knowledge-creating processes (e.g. vision, Simon’s logic theorist, and many modern machine learning algorithms) are severely limited in the kind or amount of knowledge they can create. Open-ended knowledge-creating processes, by which I mean processes which can go on creating new, novel knowledge in a wide domain endlessly (or, at least, for an arbitrarily long period of time), are quite rare. The only two examples of open-ended knowledge creation that we know of are biological evolution and the human mind. No artificial process has yet managed open-endedness, though that is the goal of the fields of Artificial Life and Artificial (General) Intelligence.
