The Current State of CTP Theory

This post describes what I call “CTP Theory”, which is my attempt at a theory of AGI, or Artificial General Intelligence. I have written before about CTP Theory, but there have been some significant changes in the theory, and in the way that I think about it, so much of my old writing on the subject is outdated. The purpose of this post is to provide an updated overview of the theory, and the problems with it that I’m currently thinking about.

No knowledge of my prior writings about CTP Theory is necessary to understand this post, though I do assume a general familiarity with Critical Rationalism, the school of epistemology developed by Karl Popper. I’ve avoided using technical language as much as possible, though some familiarity with programming or mathematics may be helpful. I will provide technical details in the endnotes where I feel it is necessary.

An Epistemological approach to AGI

An AGI is a computer program capable of performing any mental task that a human being can. In other words, an AGI needs to be just as good as a human at gaining knowledge about the world. There has been a long history of discussion in the field of epistemology on the topic of how exactly humans manage to be so good at gaining knowledge. Having a good understanding of how humans gain knowledge, which is to say a good theory of epistemology, is essential for solving the problem of creating an AGI.

The best theory of epistemology that anyone has managed to come up with yet is, in my opinion, Critical Rationalism (CR), which was originally laid out by Karl Popper. The details of CR are not the focus of this article, but it is important to note that many proposed approaches to AGI are incompatible with CR, which means that, if CR is correct, then those approaches are dead ends.

Any theory of how to create an AGI is, at least implicitly, a theory of epistemology: a theory about what kinds of systems can create knowledge, and how. Similarly, many theories of epistemology can be thought of as partial theories of AGI, in the sense that they describe how an AGI must work in order to be able to create knowledge. Developing a theory of AGI is thus, in a sense, equivalent to developing a theory of epistemology that is sufficiently detailed to be translated into computer code. CTP Theory is my attempt to come up with such a theory by elaborating on Critical Rationalism.

Contradictions

In the essay “What is Dialectic?” (which appears in his book Conjectures and Refutations), Popper says that “criticism invariably consists in pointing out some contradiction”. I think that this is one of the most important ideas in Critical Rationalism, and it provides a valuable guideline for theories of AGI: Any Popperian theory of AGI must recognize contradiction as the basis for criticism, and must include a detailed account of how the mind manages to find, recognize, and deal with contradictions. This has been one of my primary goals in trying to develop a Critical Rationalist theory of AGI, and I think that CTP Theory provides a satisfying account of how the mind can manage to do these things.

Let us start with the question of how a mind can find a contradiction between two ideas. To illustrate this problem I’ll return to an example I used in my first post on CTP Theory: consider the statements “My cat is calico”, and “My cat is male”. We can see (assuming the reader is familiar with the fact that only female cats can be calico) that these statements are contradictory, but how is it that we can see that? There are some pairs of statements, such as “my cat is female” and “my cat is not female”, which are quite obviously contradictory, and we can call these kinds of pairs of statements “directly contradictory”. The statements “My cat is calico” and “My cat is male” aren’t directly contradictory, but they still seem to be contradictory in some way.

CTP Theory’s explanation of how the mind detects such indirect contradictions begins by assuming that the mind has a built-in ability to detect direct contradictions. In other words, we start with the assumption that the mind can perform the trivial task of identifying that two statements like “My cat is female” and “My cat is not female”, or more generally “X is true” and “X is false”, are contradictory [1]. This assumption turns out to allow us a way to detect indirect contradictions, as well: Two (or more) ideas are indirectly contradictory if they imply a direct contradiction.

The statements “My cat is calico” and “My cat is male” are indirectly contradictory, because though they are not themselves directly contradictory, they imply a direct contradiction. “My cat is calico” implies “My cat is female”, and “My cat is male” implies “My cat is not female”. “My cat is female” and “My cat is not female” are directly contradictory, so the mind knows that the statements “My cat is calico” and “My cat is male” imply a direct contradiction despite not being directly contradictory themselves.

Implications

We’ve shown that if the mind has a built-in way to detect direct contradictions, it can also detect indirect contradictions by searching through the relevant ideas’ implications and finding a direct contradiction. But how does the mind know what the implications of an idea are?

In the example I gave, I said that the statement “My cat is calico” implies the statement “My cat is female”, but that is somewhat of a simplification. To be more precise, “My cat is female” is an implication not only of the idea “My cat is calico”, but also of the idea “All calico cats are female”. Both of those ideas are necessary to derive the idea “My cat is female”; neither alone is enough. Similarly, I said “My cat is not female” is an implication of “My cat is male”, but to be more precise it is an implication of “My cat is male” along with “A cat that is male is not female”.

How can the mind know exactly what the implications of two (or more) given ideas are, if there are any? For instance, how would the mind derive “My cat is female” from “My cat is calico” and “All calico cats are female”? CTP Theory answers that question by saying that some ideas are functions, in the mathematical sense of the term. Specifically, ideas are functions that take some number of other ideas as inputs, and return an idea as an output [2]. In other words, some ideas can produce new ideas based on the content of other ideas. By thinking of ideas as functions, we have a rigorous way of defining what the implications, or logical consequences, of an idea are: whenever an idea is invoked as a function with some set of other ideas as inputs, the resulting idea is a consequence of the idea being invoked and the ideas used as inputs [3].

As I said before, the statements “My cat is calico” and “All calico cats are female” taken together imply the statement “My cat is female”. In this case, we can think of the statement “All calico cats are female” as a function, and “My cat is calico” as an input being supplied to that function. Specifically, the statement “All calico cats are female” can be thought of as a function that takes in a statement that some cat is calico, and returns a statement that the same cat is female. Just as it takes in the statement “My cat is calico” and returns “My cat is female”, it could also take in the statement “Jake’s cat is calico” and return the statement “Jake’s cat is female”. For statements that don’t state that some cat is calico, the function would return no output, or in other words it would be undefined for that input. It would not return any new ideas if a statement like “My cat is sleeping” or “My cat is brown” or “The sky is blue” were provided to it, as the function is not defined for those inputs.
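
To make the function view concrete, here is a minimal sketch in Python. It is only an illustration: the tuple representation of statements and the function below are assumptions made for this example, not part of CTP Theory’s actual data structures.

```python
# A hypothetical representation: a statement is a tuple like
# ("calico", "my cat") or ("female", "Jake's cat"). Purely illustrative.

def all_calico_cats_are_female(statement):
    """The idea "All calico cats are female", viewed as a function.

    It takes a statement asserting that some cat is calico and returns a
    statement asserting that the same cat is female. For any other input
    it is undefined, which this sketch models by returning None."""
    if isinstance(statement, tuple) and len(statement) == 2 and statement[0] == "calico":
        cat = statement[1]
        return ("female", cat)
    return None  # undefined: no new idea is produced for this input

print(all_calico_cats_are_female(("calico", "my cat")))      # ('female', 'my cat')
print(all_calico_cats_are_female(("calico", "Jake's cat")))  # ('female', "Jake's cat")
print(all_calico_cats_are_female(("sleeping", "my cat")))    # None
```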

Tracing a Problem’s Lineage

So we have an explanation for how the mind finds the implications of its ideas and finds contradictions between them, but what should the mind do when it finds a contradiction? When two or more ideas contradict, at least one of them must be wrong, so the mind can use a contradiction as an opportunity to potentially correct an error. The first step in doing that is identifying the set of ideas that are involved in the contradiction.

The two ideas that directly contradict in our example are “My cat is female” and “My cat is not female”, but the contradiction cannot be fully understood in terms of those two ideas alone. These two statements are the logical consequences of other ideas, and so we need to work backwards from the two directly contradictory ideas to the set of all ideas which are logically involved in the contradiction. In other words, we need to be able to identify the set of indirectly contradictory ideas that resulted in a particular direct contradiction.

Let’s look at a diagram that lays out all of the ideas I’ve used so far, as well as a few others:

This diagram lays out the relationships between all of the ideas we’ve used so far, along with a few new ones: “My cat is male” was derived as the consequence of two other ideas, “My vet said my cat is male” and “Whatever my vet says about animals is true”, and “My cat is calico” was derived as the consequence of the ideas “A cat is calico if its coat consists of large patches of white, black and orange” and “My cat’s coat consists of large patches of white, black, and orange”.

In logic, if some idea P implies an idea Q, then P is referred to as the “antecedent” of Q. In this diagram, ideas are displayed below their antecedents, with lines showing exactly which ideas are implications of other ideas. Some of the ideas in the diagram, such as “My cat’s coat consists of large patches of white, black, and orange” and “A cat that is male is not female”, have no antecedents on the diagram. In CTP Theory, ideas which have no antecedents are called “primary ideas”, and ideas which do have antecedents, i.e. ideas which are the consequences of other ideas, are called “consequent ideas” or “consequences”.
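
As a minimal sketch of what such antecedent records might look like, assuming (purely for illustration) that an idea can be modeled as a small record holding its content and its antecedents:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Idea:
    """A hypothetical record for one idea in the idea-set."""
    content: tuple                        # the idea's data (its actual form is an open question)
    antecedents: Tuple["Idea", ...] = ()  # the ideas it was derived from; empty for primary ideas

    @property
    def is_primary(self) -> bool:
        return len(self.antecedents) == 0
```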

Primary ideas, rather than being derived from any other ideas, are created through conjecture, a process in which the mind blindly varies existing ideas to produce new ones. All conjectures, according to CTP Theory, take the form of primary ideas.

While consequent ideas are not directly created by conjecture, it is important to understand that they still have the same epistemic status as primary ideas. CR says that no idea is ever justified, that no idea ever moves beyond the stage of being conjectural, and that any idea can be overturned by some unexpected new idea that effectively critiques and/or replaces it. All of that is true for consequent ideas just as much as primary ideas. Though consequent ideas are not created through conjecture in the same sense that primary ideas are, they are still the consequences of primary ideas, which is to say that they are the consequences of blind conjectures, and as such they should not be thought of as any less conjectural or uncertain than primary ideas.

The ideas in the diagram above that don’t have any lines pointing to them, and thus appear to be primary ideas, are not necessarily representative of what primary ideas in a real mind would look like. For instance, the idea “My cat’s coat consists of large patches of white, black, and orange” would almost certainly not be a primary idea, but instead would be a consequence of ideas that are about how to interpret sensory data. I am merely declaring that they are primary ideas for the sake of this demonstration, and I do not intend to imply that they are realistic depictions of what primary ideas would look like in a real mind.

In a situation where we have two directly contradictory consequent ideas, we want to know the set of all ideas involved in the contradiction, which means that we want to know the set of all ancestors of either of the two contradictory ideas (by “ancestor” I mean either an antecedent, or an antecedent’s antecedent, etc.). We can find this if we imagine that each idea in the mind keeps track of what its own antecedents are. In other words, if each consequent idea keeps a record of what ideas created it, then the mind can find any idea’s set of ancestors. Any consequent idea will have a record of its antecedents, and if any of those antecedents is consequent then it will also have a record of its antecedents, so the mind can continually trace backwards to find all of an idea’s ancestors.
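
Here is a minimal sketch of that backward trace, assuming (purely for illustration) that the antecedent records are kept in a dictionary keyed by short English labels; a real mind would of course not represent ideas as English strings:

```python
# Hypothetical antecedent records for the example in the diagram.
antecedent_record = {
    "My cat is female": ["My cat is calico", "All calico cats are female"],
    "My cat is not female": ["My cat is male", "A cat that is male is not female"],
    "My cat is calico": ["My cat's coat consists of large patches of white, black, and orange",
                         "A cat is calico if its coat consists of large patches of white, black and orange"],
    "My cat is male": ["My vet said my cat is male", "Whatever my vet says about animals is true"],
    # primary ideas simply have no entry
}

def ancestors(idea, record=antecedent_record):
    """Return the set of all ancestors of `idea`: its antecedents,
    their antecedents, and so on."""
    found = set()
    stack = list(record.get(idea, []))
    while stack:
        current = stack.pop()
        if current not in found:
            found.add(current)
            stack.extend(record.get(current, []))
    return found

print(ancestors("My cat is female"))
# -> the two antecedents plus the two ideas that "My cat is calico" was derived from
```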

A partial picture of the mind

I imagine that the mind of an AGI working according to CTP Theory, as I’ve described it so far, would basically work like so:

The mind contains a set of ideas, which we can call the mind’s idea-set. When the AGI is started, its idea-set will be initialized with some starting set of ideas (perhaps these ideas would need to be decided by the programmer, or perhaps they could simply be created by some algorithm that blindly generates starting ideas). These starting ideas are all primary ideas, since they have no antecedents. After initialization, the mind enters an iterative process of computing the implications of the ideas in the idea-set, and adding those implications, in the form of consequent ideas, to the idea-set. Through this iterative process the mind explores the implications of its ideas. The mind checks each new consequent idea it computes against all of the existing ideas in the idea-set to see if the new idea directly contradicts any of them. In other words, the mind attempts to criticize its ideas by exposing them to one another and seeing where they contradict. Eventually, it might notice that two of its ideas are directly contradictory, which means that it has a problem. The mind then attempts to solve the problem by modifying the idea-set in a way that resolves the contradiction, and then continues on exploring the implications of its ideas (including the implications of any new ideas created as part of the process of solving the problem, if there are any).
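
The sketch below illustrates this loop under deliberately crude assumptions: ideas are strings, a direct contradiction is any pair of the form “X” / “not X”, and each function-idea is modeled as a Python function of a single idea. None of these choices come from CTP Theory itself; they only make the control flow concrete.

```python
def directly_contradicts(a, b):
    # Crude stand-in for the built-in direct-contradiction check.
    return a == "not " + b or b == "not " + a

def explore(idea_set, function_ideas, max_steps=100):
    """Repeatedly derive consequences and check them against existing ideas.
    Returns (idea_set, problem), where problem is a directly contradictory
    pair of ideas, or None if no contradiction was found."""
    for _ in range(max_steps):
        new_ideas = []
        for f in function_ideas:
            for idea in list(idea_set):
                consequence = f(idea)
                if consequence is not None and consequence not in idea_set:
                    new_ideas.append(consequence)
        if not new_ideas:
            return idea_set, None          # nothing left to derive
        for consequence in new_ideas:
            for existing in idea_set:
                if directly_contradicts(consequence, existing):
                    return idea_set, (consequence, existing)  # a problem!
            idea_set.add(consequence)
    return idea_set, None

# Function-ideas standing in for "All calico cats are female" and
# "A cat that is male is not female".
rules = [
    lambda s: "My cat is female" if s == "My cat is calico" else None,
    lambda s: "not My cat is female" if s == "My cat is male" else None,
]
ideas, problem = explore({"My cat is calico", "My cat is male"}, rules)
print(problem)  # ('not My cat is female', 'My cat is female')
```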

I’ve explained so far how the mind can deduce the implications of its ideas, how the mind can notice problems (i.e. contradictions), and how it can find the set of ideas responsible for a problem. But how, given that information, does the mind actually solve a problem?

Solving A Problem

In CTP Theory, a “problem” is seen as a direct contradiction that the mind finds between the ideas within it. A problem is essentially a pair of directly contradictory ideas, so “solving” a problem could be thought to mean somehow changing the mind such that there is no longer a contradiction.

For the mind to no longer contain a contradiction, one (or both) of the two directly contradictory ideas involved in the problem must be removed from the idea-set. Consider what would happen if the mind simply picked one of the two directly contradictory ideas, and discarded it from the idea-set. The mind would no longer contain a direct contradiction, so in a technical sense the problem could be said to be “solved”. However, if the idea removed is a consequent idea, then simply removing that idea alone wouldn’t make sense. For instance, if the mind wanted to remove the idea “My cat is female”, it would not make sense to try to discard that idea while hanging onto the ideas “My cat is calico” and “All calico cats are female”, because those ideas imply “My cat is female”. A consequent idea is a logical consequence of its antecedents, so you can’t really get rid of the consequent idea without removing its antecedents, or at least some of them. Even if the mind did simply discard a consequent idea from its idea-set, that same idea could just be re-derived later from its antecedents, at which point the same problem would appear again. So simply removing one of the directly contradictory ideas is a fruitless way of attempting to solve a problem.

Consider again the diagram from earlier, with a small modification:

This time, I’ve highlighted the primary ancestors of the directly contradictory ideas “my cat is female” and “my cat is not female” in blue.

Imagine that the mind has decided to solve the contradiction by getting rid of the idea “My cat is female”. As I’ve argued, it can’t simply remove that claim alone, because it is a logical consequence of its antecedents: “My cat is calico” and “All calico cats are female”. Could the mind solve the problem by removing one of these antecedents, so that “My cat is female” is no longer an implication of any ideas in the mind? It could choose either of the antecedents to remove, but let’s imagine that it chooses the antecedent idea “My cat is calico”. Unfortunately, if the mind tries to solve the problem by removing “My cat is female” and “My cat is calico”, it runs into the same issue I already described, because “My cat is calico” is also a consequent idea. Removing “My cat is calico” without removing its antecedents doesn’t make sense, because it is a logical consequence of its antecedents, and it could be re-derived from them again later.

So removing any consequent idea from the mind without removing one or more of its antecedents never makes sense, because the consequent idea could just be re-derived from the antecedents. Primary ideas, however, are not derived from anything, they are the direct results of blind conjecture. A primary idea removed from the idea-set will stay removed, unless the mind happens to recreate the same idea at some point in the future.

The fact that primary ideas can be freely removed from the idea-set provides an effective way for the mind to resolve contradictions: Find the set of ideas that are ancestors to either of the two contradictory ideas, and then pick out the subset of those ancestors which are primary, not consequent. If the mind removes at least one of those primary ancestors, along with all of its consequences (including its “nth-order” consequences, i.e. the consequences of its consequences, and the consequences of consequences of its consequences, etc.), then one of the two contradictory ideas will be removed, and the contradiction will not re-emerge [4]. Thus, the ideas highlighted in the above diagram, the set of primary ancestors of the two directly contradictory ideas, are the set of ideas that can be removed as a way to solve our example problem.
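
Here is a minimal sketch of that removal step, again assuming a hypothetical record of which consequent ideas were derived from which, and ignoring the complication discussed in endnote [4] of ideas that have more than one derivation:

```python
# Hypothetical record of which consequent ideas were derived from each idea.
consequence_record = {
    "My cat is calico": ["My cat is female"],
    "All calico cats are female": ["My cat is female"],
    "My vet said my cat is male": ["My cat is male"],
    "Whatever my vet says about animals is true": ["My cat is male"],
    "My cat is male": ["My cat is not female"],
    "A cat that is male is not female": ["My cat is not female"],
}

def remove_with_consequences(idea, idea_set, record=consequence_record):
    """Remove `idea` and all of its nth-order consequences from `idea_set`."""
    to_remove = [idea]
    while to_remove:
        current = to_remove.pop()
        if current in idea_set:
            idea_set.discard(current)
            to_remove.extend(record.get(current, []))
    return idea_set

idea_set = {"My cat is calico", "All calico cats are female", "My cat is female",
            "My vet said my cat is male", "Whatever my vet says about animals is true",
            "My cat is male", "A cat that is male is not female", "My cat is not female"}

# Removing the primary ancestor "My cat is calico" also removes its consequence
# "My cat is female", so the direct contradiction cannot simply be re-derived.
print(remove_with_consequences("My cat is calico", set(idea_set)))
```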

Constraints on problem-solving

I’ve shown that a mind working according to CTP Theory could solve a problem, i.e. resolve a direct contradiction between two ideas in its idea-set, by removing one or more of one of the directly contradictory ideas’ primary ancestors from the idea-set. However, that can’t be all there is to the mind, because if it were, the behavior of the mind would be trivial. Whenever the mind found a problem, any direct contradiction between two ideas, it could solve the problem by arbitrarily picking a primary ancestor of one of the two ideas, and removing it from the idea-set [5]. If that were all there was to the mind, then there would be no need to conjecture, since any problem in the mind could be solved by a simple, mechanical process. Problem-solving obviously isn’t that simple, as it involves creativity and many attempts at conjecture, so clearly CTP Theory, as I’ve described it so far, is missing something important.

In my post “Does CTP Theory Need a new Type of Problem?” I discussed this issue and proposed a solution for it. The solution I laid out in that article is still the best one that I’m aware of, though the way that I think about the solution has changed somewhat, so I’ll explain the idea again in a way that is more in line with the way that I currently think about it.

One way to think about the problem with CTP Theory is that the mind never has any reason not to discard an idea. A problem is a contradiction between two ideas, and removing an idea from the idea-set will never lead to a new contradiction, so removing an idea never causes any new problem. Since there’s an easy way to solve any given problem by removing some primary idea, and the mind never has any reason not to discard an idea, problem-solving is trivialized. So, if the mind did have some system that gave it a reason not to discard certain ideas, more sophisticated types of problem-solving might be necessary to solve problems. What might a system that gives the mind a reason not to discard an idea look like?

The best answer that I’ve been able to come up with is that the mind contains “requirements” (they could just as easily be called “criteria” or “desires”, as there isn’t any term that perfectly captures the concept I’m trying to express, so I’ll just refer to them as “requirements”). The essential characteristic of a requirement is that it can be fulfilled by a certain kind of idea, and the mind wants all of its requirements to be fulfilled. A requirement is essentially something that specifies that the mind wants [6] to have an idea that has certain features. If the mind’s idea-set contains any idea that has the features that a requirement specifies, then that requirement is said to be “fulfilled”, otherwise it is “unfulfilled”.

If there is a requirement that is fulfilled by only a single idea in the mind’s idea-set, removing that idea from the idea-set would lead to the requirement being unfulfilled. Since the mind wants each of its requirements to be fulfilled, the mind would have a reason not to remove an idea if doing so would lead to a requirement being unfulfilled. Thus, this system of requirements can provide a reason for the mind not to discard certain ideas when trying to solve a problem.
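
A minimal sketch of this check, assuming (purely for illustration) that a requirement can be modeled as a predicate over ideas, fulfilled when at least one idea in the idea-set satisfies it:

```python
def fulfilled(requirement, idea_set):
    """A requirement is fulfilled if some idea in the idea-set satisfies it."""
    return any(requirement(idea) for idea in idea_set)

def removal_unfulfills_something(idea, idea_set, requirements):
    """Would removing `idea` leave some currently fulfilled requirement
    unfulfilled? (For simplicity this ignores the fact that the idea's
    consequences would be removed along with it.)"""
    remaining = idea_set - {idea}
    return any(
        fulfilled(req, idea_set) and not fulfilled(req, remaining)
        for req in requirements
    )

# Example: a requirement fulfilled by any idea stating the current position of Mars.
wants_mars_position = lambda idea: idea.startswith("The current position of Mars is")
ideas = {"The current position of Mars is X", "My cat is calico"}
print(removal_unfulfills_something("The current position of Mars is X",
                                   ideas, [wants_mars_position]))  # True
```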

An example of the requirement system

In addition to providing a low-level reason not to simply discard ideas when trying to resolve a contradiction, I believe this system of requirements helps explain some higher-level properties of how the mind works in terms of CTP Theory, in a way that I think aligns nicely with ideas in CR. Let us consider an example, to see how this system would work in a real-life situation:

For a long time, many people thought that Newton’s theory of gravity was the correct explanation for why the planets in our solar system behaved as they did. However, in the 19th century, people began to notice that Mercury moved differently than was predicted by Newton’s theory. Now, there was a problem, a contradiction between the predictions of a theory and the results of an experiment. Scientists spent decades trying to solve this problem, and it was finally solved when Albert Einstein introduced his theory of General Relativity, which produced the proper predictions for Mercury’s behavior.

Now let’s consider this scenario in terms of CTP Theory. Consider the scientist who first performed the measurements that contradicted the predictions of Newton’s theory of gravity. That scientist would have had an understanding of Newton’s theory, which means that his mind contained some set of ideas that allowed him to compute the predictions of the theory given some set of initial conditions. With that set of ideas, the scientist might compute an idea like “Right now, Mercury is at position X”, where X is some particular position in space. The scientist would also presumably have an understanding of how to use a telescope and whatever other experimental apparatus is involved in measuring the current position of a planet like Mercury. In CTP-Theoretic terms, that means that the scientist’s mind contains a set of ideas that allows him to interpret his sense-data [7] to compute an idea like “Right now, Mercury is at position Y”. When the scientist performed the measurements and calculations necessary to find the values of X and Y, i.e. the predictions of Newton’s theory and the position of Mercury according to his experiment, he would find that they were different, which would mean he had found a contradiction between the ideas in his mind [8].

What would this scientist do after finding this contradiction, according to CTP Theory? If we consider the version of CTP Theory that doesn’t include the requirement system, then the scientist’s behavior would be quite simple: he might merely discard Newton’s theory from his mind (or one of the ideas that compose it, if the theory is embodied in multiple ideas in the idea-set rather than just one), and proceed on with his day unbothered. We can imagine that, if he went on to tell other scientists about his measurements, they would also simply discard Newton’s theory, and none of them would find it problematic to do so, assuming they are all also acting according to the version of CTP Theory which doesn’t include the requirement system.

Clearly, this wouldn’t be the reasonable way for the scientists to respond. Newton’s theory of gravity is a very powerful theory for making predictions about the universe, and if we were to discard it, there would be many things about the world which we could no longer explain (at least, if we put ourselves in the shoes of someone who lived before Einstein proposed General Relativity). The goal of science, and, in some sense, reason more generally, is to explain things, so throwing out a theory as explanatorily powerful as Newton’s theory without a good replacement is not a reasonable thing to do. However, according to the version of CTP Theory that does not include the requirement system, the scientists’ minds would never have any reason not to discard a theory. The mind’s only motivation would be to avoid contradictions, so if throwing out Newton’s theory would help resolve a contradiction, then the mind would have no issue with doing that.

Adding the requirement system to CTP Theory resolves this issue, and explains why the scientists acted the way they did in reality. When we put ourselves in the shoes of the scientist noticing the discrepancy, we see that there is something wrong with discarding Newton’s theory without any replacement, because we want to be able to explain some of the things that Newton’s theory explains. We want to be able to understand the motions of the planets, and understand how things move under the influence of gravity on Earth. In CTP Theory, each of those things that we want would be a requirement, which Newton’s theory fulfills. It could also be the case that there are smaller, more specific requirements, which are indirectly fulfilled by Newton’s theory, in the sense that they are fulfilled by some consequence of Newton’s theory. For instance, a particular scientist might want to know the position of Mars at some moment in time, and she may use Newton’s theory to calculate it. In that case, the scientist’s mind would contain a requirement that can be fulfilled only by a statement like “The current position of Mars is X”. While Newton’s theory wouldn’t directly fulfill that requirement, it can be thought of as indirectly fulfilling it, because one of its consequences directly fulfills it. And, since the mind must discard an idea’s consequences whenever it discards an idea, discarding Newton’s theory would lead to that requirement, and any other requirements that Newton’s theory directly or indirectly fulfills, becoming unfulfilled.

So if the mind has any requirements that Newton’s theory directly or indirectly fulfills, then the mind would have a reason not to discard Newton’s theory. However, that is only true for as long as Newton’s theory is the only theory in the mind that fulfills those requirements. If the mind has a requirement that is fulfilled by two (or more) ideas in its idea-set, then either of those ideas can be discarded without issue, as discarding one of them will not lead to the requirement becoming unfulfilled, as long as the other is still around to fulfill it. So, if all of the requirements that were fulfilled by Newton’s theory were also fulfilled by some other theory, then Newton’s theory could be freely discarded. And that is in fact exactly what happened when Einstein’s General Relativity came along. General Relativity could explain everything that Newton’s theory had previously been used to explain, but it didn’t make the erroneous prediction about Mercury’s orbit that Newton’s theory did. A mind that was aware of both Newton’s Theory and General Relativity would be in quite a different situation than one which only contained Newton’s Theory, because General Relativity would fulfill all of the requirements that Newton’s Theory fulfilled (assuming there had been sufficient time for the mind to sufficiently explore GR’s implications). So now the mind could discard Newton’s Theory without leading to any requirements being unfulfilled, and in doing so it would be able to get rid of the contradiction between Newton’s Theory and the experimental data (or, more precisely, its interpretation of the experimental data).

Thus, including the requirement system in CTP Theory allows it to explain why scientists don’t simply solve problems by arbitrarily discarding one of the relevant ideas: An idea can only be discarded once the mind has a good replacement for the idea, which is to say a new idea (or set of new ideas) which satisfies all the requirements that were satisfied by the old idea.

Conjectures

With the addition of the requirement system, CTP Theory can explain not only how the mind finds problems, but also how it decides whether a modification intended to resolve a problem is acceptable: if a modification to the idea-set removes a contradiction without leading to any requirements becoming unfulfilled, then that modification is acceptable. But how can the mind come up with potential changes to the idea-set in the first place?

A core idea in CR is that all new ideas are conjectures, which is to say that they are created by blindly varying and combining the parts of ideas already within the mind. So, CTP Theory must incorporate some method that allows it to create new ideas based on the old ideas in the mind, in whatever form they are represented. Thankfully, there are methods for performing this kind of blind variation that are already well-known within computer science [9]. 

The problem-solving process

When I explained earlier how CTP Theory viewed the mind, I did not lay out exactly how it would go about trying to solve a problem, because I hadn’t described how the requirement system worked yet. Now that I’ve explained the requirement system, and how new ideas can be conjectured, I can explain in more detail how a mind would go about trying to solve a problem, according to CTP Theory:

When the mind encountered a problem, a pair of directly contradictory ideas, it would first find the set of all ancestors of the two ideas, as well as the set of their primary ancestors. Then the mind would begin to make tentative, conjectural modifications to the idea-set. Each modification could remove any number of ideas from, and add any number of newly conjectured ideas to, the idea-set. The mind would then explore the consequences of this modified version of the idea-set, trying to answer two questions:

  1. Does this modification resolve the direct contradiction, i.e. does it prevent at least one of the two directly contradictory ideas from being derived?
  2. Does this modification lead to any requirements becoming unfulfilled?

If the answer to the first question turns out to be “Yes”, and the second “No”, then the modification can be accepted. It resolves the problem, while not leaving any requirements unfulfilled. If the answers are anything else, the modification is rejected [10], and the mind restarts this process by generating a new tentative modification.
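
As a rough sketch, the accept/reject test might look something like the following, reusing the illustrative assumptions from the earlier sketches (requirements as predicates, ideas as simple values) together with a hypothetical derive_all function that closes an idea-set under the implications of its ideas; none of these names come from CTP Theory itself:

```python
def acceptable(original_ideas, modified_ideas, problem, requirements, derive_all):
    """Question 1: is at least one of the directly contradictory ideas no
    longer derivable? Question 2: does any requirement that was fulfilled
    before become unfulfilled?"""
    before = derive_all(original_ideas)
    after = derive_all(modified_ideas)
    contradiction_resolved = not all(idea in after for idea in problem)
    nothing_unfulfilled = all(
        any(req(idea) for idea in after)
        for req in requirements
        if any(req(idea) for idea in before)
    )
    return contradiction_resolved and nothing_unfulfilled

def solve(problem, idea_set, requirements, derive_all, propose_modification):
    """Keep conjecturing tentative modifications until one is acceptable."""
    while True:
        candidate = propose_modification(idea_set, problem)  # blind conjecture
        if acceptable(idea_set, candidate, problem, requirements, derive_all):
            return candidate  # adopt the modified idea-set
```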

By iteratively applying this process, the mind can attempt to solve its problems through creative conjecture, just as CR describes [11].

Problems with the requirement system

CTP Theory is still a work in progress (otherwise I’d be creating an AGI rather than writing this article), and the area that needs the most work is, I think, the requirement system. The overarching problem with the requirement system is that it hasn’t been made specific enough. I think that the description I gave of the requirement system, and how it could help explain the mind’s behavior, is a good start, but it isn’t detailed enough to be translated into computer code. More work is necessary to figure out the details.

Trying to figure out a way to fill in the details of the requirement system is, at the moment, my main priority in developing CTP Theory. So, to conclude this post, I’ll list some of the questions that I’m thinking about relating to the requirement system:

  • What form do requirements take in the mind?
  • How does the mind determine whether an idea fulfills a requirement?
  • How are requirements created in the mind? Are they somehow computed as the consequences of ideas, or perhaps as consequences of some special kind of idea, or are they created by another system entirely?
  • How can the mind criticize its requirements? Can two requirements contradict one another, like normal ideas can, or is there some other kind of system for this purpose?

Endnotes

[1] One apparent problem with this assumption is that it isn’t easy to specify a rule for deciding whether two ideas are directly contradictory when they are expressed in a natural language like English. I used English to express the examples I gave in this post for the sake of simplicity, but of course ideas in the mind would not generally take the form of English sentences. Instead, according to CTP Theory, ideas would take the form of instances of data structures that contain arbitrary data. The mind would also contain a rule for deciding whether two of these data structures are directly contradictory. For instance, the data structure representing ideas could be a bit string of arbitrary length, and two such ideas would be contradictory if and only if they had the same length and shared the same value for all bits but the first. When viewed in this way it is clear that any idea, in isolation, has little, if any, inherent meaning. Instead, ideas take on meaning through the way that they interact with the other ideas in the mind, i.e. what their implications are and what other ideas they contradict.
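
For concreteness, that example rule could be written as follows (reading “shared the same value for all bits but the first” as: the two strings differ in the first bit and agree on the rest):

```python
def directly_contradictory(a: str, b: str) -> bool:
    """Example rule from this endnote: ideas are bit strings, and two ideas
    are directly contradictory iff they have the same length, differ in the
    first bit, and agree on every remaining bit."""
    return len(a) == len(b) and len(a) > 0 and a[0] != b[0] and a[1:] == b[1:]

print(directly_contradictory("0110", "1110"))  # True: same tail, opposite first bit
print(directly_contradictory("0110", "0111"))  # False
```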

[2] Alternatively, functions could be allowed to return any number of ideas as outputs, rather than only a single idea. All of the returned ideas could be considered implications. I don’t yet know whether this change would lead to any significant differences in the algorithm’s capabilities.

[3] Since ideas would be represented as instances of data structures, the mind needs to have some way to interpret these data structures as functions in order for ideas to have implications. To accomplish this, a mind would contain a programming language, which I call the mind’s “theory language”. The theory language and the type of data structure used to represent ideas need to be compatible, in the sense that the language needs to be able to interpret instances of the data structure as code. Depending on the characteristics of the theory language, there may be some possible ideas, or in other words some instances of the right kind of data structure, which don’t form a valid function in the theory language. Ideas of this kind may be used as inputs to other ideas when they are interpreted as functions, but can not be invoked as functions themselves.
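
As a purely illustrative sketch of what interpreting an idea’s data as code might look like, with Python itself standing in for the hypothetical theory language (an assumption made only for this example):

```python
def as_function(idea_data: str):
    """Try to interpret an idea's data as a function in the 'theory language'
    (Python here, purely for illustration). If the data doesn't form a valid
    function, return None: the idea can still be used as an input to other
    ideas, but cannot itself be invoked."""
    try:
        candidate = eval(idea_data, {"__builtins__": {}})
    except Exception:
        return None
    return candidate if callable(candidate) else None

rule = as_function("lambda s: s.replace('calico', 'female') if 'calico' in s else None")
data = as_function("My cat is calico")  # not valid code in the theory language
print(rule("My cat is calico"))         # My cat is female
print(data)                             # None
```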

[4] It’s possible that a mind could find more than one way to derive an idea, and in this case the process of removing that idea becomes slightly more complex. Imagine that the mind had two different ways of deriving the idea “My cat is female”: from “My cat is calico” and “All calico cats are female”, or from “My cat gave birth to kittens” and “Only female cats can give birth to kittens”. In that case, the idea “My cat is female” could be said to have two different ancestries, and removing the idea “My cat is female” would require removing one or more primary ideas from each set of ancestors.

[5] This method doesn’t necessarily work if one of the ideas involved in the direct contradiction has been derived by the mind in more than one way. To remove such an idea, you would need to discard at least one primary idea from each of the idea’s lineages.

[6] When I use the term “want” as I do in this sentence, and in the following paragraph, I do not mean that the mind would have a conscious, explicit desire for each one of its requirements. Instead I’m using the term in a somewhat metaphorical way, similar to how one might speak of what a gene “wants” when discussing the theory of evolution: the system behaves as one might expect if you imagine that it “wanted” certain things. All that I mean to say is that the mind acts in a way such that it tends to avoid having its requirements unfulfilled, and while some requirements might be related to explicit desires, not all will.

[7] In CTP Theory, sense-data is thought to enter the mind in the same format as an idea (described in the first endnote). While mere sense data should not really be thought of as an “idea” or “theory” on an epistemological level, it is convenient for technical reasons to allow sense-data to take the same form as an idea, because it allows sense-data to be used as inputs to ideas being interpreted as functions, in the same way that other ideas can be used. This allows for theory-laden interpretation of sense data, just as CR describes.

[8] While “Right now, Mercury is at position X” and “Right now, Mercury is at position Y”, where X is not equal to Y, are not directly contradictory, the scientist’s mind would presumably have some idea that would allow it to derive “Right now, Mercury is not anywhere other than position X” from “Right now, Mercury is at position X”, and some other idea that would then allow it to derive “Right now, Mercury is not at position Y” from “Right now, Mercury is not anywhere other than position X”. So while they are not directly contradictory, it is quite easy to see how a direct contradiction could be derived from them, meaning they are indirectly contradictory.

[9] In particular, I suspect that the crossover operation used in the field of genetic programming would be an appropriate method for blindly generating new ideas from old ones, or at least a good starting point for an appropriate method. See John Koza’s book Genetic Programming for details on this operation.
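
For readers unfamiliar with the operation, here is a rough sketch of subtree crossover on small expression trees; the nested-list representation is only an illustration, not what ideas in CTP Theory would actually look like:

```python
import copy
import random

def subtrees(tree, path=()):
    """Yield (path, subtree) pairs for every node in the tree."""
    yield path, tree
    if isinstance(tree, list):
        for i, child in enumerate(tree):
            yield from subtrees(child, path + (i,))

def replace_at(tree, path, new_subtree):
    """Return a copy of `tree` with the node at `path` replaced."""
    if not path:
        return copy.deepcopy(new_subtree)
    tree = copy.deepcopy(tree)
    node = tree
    for i in path[:-1]:
        node = node[i]
    node[path[-1]] = copy.deepcopy(new_subtree)
    return tree

def crossover(parent_a, parent_b):
    """Swap a randomly chosen subtree of each parent with the other's."""
    path_a, sub_a = random.choice(list(subtrees(parent_a)))
    path_b, sub_b = random.choice(list(subtrees(parent_b)))
    return replace_at(parent_a, path_a, sub_b), replace_at(parent_b, path_b, sub_a)

a = ["if", ["calico", "x"], ["female", "x"]]
b = ["if", ["male", "x"], ["not", ["female", "x"]]]
print(crossover(a, b))
```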

[10] At least, in the most basic version of the theory. It may turn out to be necessary to have some sort of policy that decides what happens when, say, a certain modification does resolve the contradiction, but also leads to some requirement being unmet. Or an even more complex situation, like some modification that resolves a contradiction, but leaves three requirements unfulfilled, while also fulfilling four other requirements that were previously unfulfilled. Does there need to be some way of defining which problems and/or requirements are more important than others? This question needs to be explored more, but for now I tentatively guess that the best thing to do is never accept a modification which leaves any requirements unfulfilled.

[11] Since this algorithm takes an unpredictable amount of time to successfully find an acceptable modification, I suspect that a mind would generally work by having several copies of this algorithm running at once in parallel, each possibly focused on solving different problems, along with another parallel process which simply explores the implications of the existing ideas in the idea-set, as is necessary to find new contradictions.

50 Replies to “The Current State of CTP Theory”

      1. Partway through. Really enjoying it. I mean I remember a lot of this from before, but it’s been a while. So I sort of forgot about it all.

      2. “Primary ideas, rather than being derived from any other ideas, are created through conjecture, a process in which the mind blindly varies existing ideas to produce new ones. All conjectures, according to CTP Theory, take the form of primary ideas.”

        I’ve been wondering about this. Why does the variation have to be “blind”? Might there not be forms of variation that are not blind that also produce good results? Hebbian learning (where neurons wire together if they fire together) might lead to non-blind (by which I mean not entirely random) variation of ideas. I guess they might still be ‘blind’ in the sense that they are somewhat random.

        1. When I say “blind” I don’t mean “entirely random” in the sense of assigning an equal probability to all possible variations. What I mean by “blind” is basically that the variation process doesn’t follow any particular method that is intended to consistently lead to a correct result (like Hebbian learning, or gradient descent, etc.), and instead uses a method that merely samples from a very large space of possible variations and evaluates them one-by-one, in acknowledgement of the fact that there is no general method for finding the truth and trial-and-error is necessary. The problem with using something like Hebbian learning is that there are many possible variations that it will simply never produce, and so it might exclude the variations that are needed to make progress.

          Using randomness to decide which parts of an idea are varied, and how they are varied, is an easy way to make sure you’re sampling from a very large space of possible variations, and so it’s a very straightforward way of doing blind variation. However, it isn’t the only way, and in fact randomness isn’t even necessary for blindness. If you had some way of enumerating a very large (hopefully infinite) space of possible variations for an idea, then you could simply evaluate them one-by-one in order of that enumeration. I would say that that method still counts as “blind”, even though it doesn’t involve any randomness.

          1. Okay, that makes sense to me if ‘blind’ doesn’t equate to random.

            I need to think about your argument that Hebbian learning and SGD are not ‘blind variation’ (now that we’re not equating it to ‘random’). They do have a restricted hypothesis space (to borrow language from ML) but they seem no less blind to me than searching every option. SGD in a sense searches every option at a given location in the feature space as to where ‘down’ is from there.

  1. “perhaps these ideas would need to be decided by the programmer, or perhaps they could simply be created by some algorithm that blindly generates starting ideas”

    Given what little we know, this is probably not a bad first attempt. But I’ve wondered about the fact that real brains actually do start with pre-set ideas from genes. (Presumably, animals have these.) Could it be that some of these ideas — or desires — are necessary to get the mind going? (And thus starting with random ideas is not a good idea?)

    1. Yeah, it may be that filling a mind with randomly generated ideas isn’t enough to get it going towards open-ended creativity. I guess we won’t know whether or not some specific starting ideas are necessary, and if so what those ideas will be like, until we have a more advanced theory of AGI.

  2. Ella, the post is interesting. I have two comments:

    1. I think the non-primary ideas are conjectured first (just like primary ideas). Then, we just use a “logic algorithm/function” to check if the consequence is valid w.r.t. our local logic algorithm. The logic algorithm is conjectured as well, but it works well and solves problems and so it gets stronger in our mind. This algorithm may or may not be built-in from the beginning, but it must be an idea that can be changed. It is just one of the ideas that are useful as criteria to reject other ideas.

    2. I think an idea should in general be an algorithm/function. This should be a central building block. In your examples, an idea is usually a statement (e.g. my cat is male) or a first-order logic statement (for all x said by a doctor, x is true).
    A statement and first-order logic are special cases of algorithms. A statement is an algorithm with no input and with fixed output. They are in the idea pools as well, but they just form a subclass of ideas.
    Similar to your theory, each algorithm has connections with records to others (e.g. how it is derived, what it derives, what problems it solves, and so on). Even if algorithm A is usually called after algorithm B, there should be a link as well.
    The connections should be quite rich so that it helps to solve problems.

    I think your requirement system can be viewed more uniformly as a collection of ideas/algorithms as well. They are meta-algorithms that take other algorithms/ideas and do something. They evolve and can be conjectured (i.e. new goals/desires/philosophy) and refuted.

    Examples of meta-algorithms I can think of are as follows:
    Is-the-new-idea-good algorithm.
    Inputs: (1) ideas P_1 and P_2 that can derive a direct contradiction (2) new idea P_3. Output: check if P_3 resolves the contradiction but still derives/solves old problems solved by P_1 and P_2

    Logic algorithm
    Input: cause and consequence,
    Output: check if it is valid w.r.t. the logical rules (according to this algorithm)

    Desire about x (there can be many algorithms of this form):
    Input: some situation about x and some solution about x
    Output: check if we want this solution in this situation

    But I think there can be many of them.

    1. Thanks for reading and sharing your ideas, Thatchaphol!

      To be clear, I completely agree that ideas should be capable of being algorithms broadly, not just simple stuff like first-order logic. I intended to imply as much in the article:

      “CTP Theory …. [says] that some ideas are functions, in the mathematical sense of the term. Specifically, ideas are functions that take some number of other ideas as inputs, and return an idea as an output”

      What I mean there is that an idea can represent *any computable function*, which is to say any algorithm (at least, any algorithm where the inputs and the output are all the right kind of data structure). I just used first-order-logic-type stuff in examples in the article for the sake of simplicity, but I want to be clear that I absolutely think that ideas need to be able to represent any kind of function. And, as I say in endnote 3, this would be accomplished by having the data structures which ideas consist of be treated as code in some (presumably Turing-complete) programming language.

      I also agree that ideas need to be able to represent “meta-algorithms” (aka “higher-order functions”), and this can be accomplished at any level of abstraction within the framework I’ve described. Because each idea is an instance of a data structure which can potentially be interpreted as a function, and those functions produce new ideas as outputs, which can themselves potentially be interpreted as functions, it is straightforward to have a function which effectively returns another function within CTP Theory.

      I think that the idea of a “logic algorithm” (or multiple logic algorithms that each apply in different contexts) as you describe it is unnecessary, given that ideas can already represent arbitrary computational functions. What I mean is, I don’t think it makes sense to divide between statements and logical algorithms (which you could imagine as essentially rules of inference), because ideas can contain their own computational logic for mapping inputs to outputs. I can see why a “logical algorithm” might be necessary if non-primary ideas were only capable of representing statements or simple things like that, but they’re more powerful than that, because they can represent any kind of computable function. In a sense, they embody their own rules of inference, so no separate “logic algorithm” is necessary.

  3. “A requirement is essentially something that specifies that the mind wants [6] to have an idea that has certain features. ”

    I think you should look at redefining a ‘problem’ in CTP theory from being a contradiction between two theories and instead make it anything the mind wants to accomplish. This should broaden the idea of ‘requirements’ and ‘problems’ to be a single all-inclusive thing. Plus, this more naturally matches my intuitions about the purpose of minds in real-life. (i.e. that they solve basic problems like ‘how to get rid of my hunger’ etc.) In fact, this could then merge ‘desires’ into the same umbrella.

    Then see if you can make this new kind of ‘problem’ have a special case where it wants to resolve two theories. If you can, then, Bingo, your problems encompass both requirements and problems in current CTP theory.

    This might also address the fact that ‘requirements’ can be wrong. For example, the reason inductivism sticks around so strongly is because most inductivists have a ‘requirement’ that the ‘problem of induction’ should be solved by coming up with a good justification for knowledge. Popper was forced to throw that out entirely to solve the problem. Thus a problem is, in fact, really a theory… you see where I’m going here…

    You can probably come up with a way to combine requirements, desires, problems, and theories all together into a single entity.

    1. To be clear, “desires” and “requirements” are two different names for the same thing in CTP Theory. In the article where I initially introduced this concept (https://www.ellahoeppner.com/blog/post/does_ctp_theory_need_a_new_type_of_problem) I used the term “desires”, but in discussing that article with people I found that that term led to some confusion and had some implications that I hadn’t intended, so I’ve now switched to primarily using the term “requirement” instead of “desire”. The underlying system I’m describing, however, is still the same (though it’s a bit more fleshed out now than it was when I wrote that old article, of course). So “desires” and “requirements” are already unified into a single type of entity; they’re just two names for the same thing.

      I agree that it’s important that requirements be treated as being conjectural, and to acknowledge the fact that they can be wrong, and sometimes making progress will depend on throwing out a requirement. The question is how exactly the requirement system can be implemented to properly accommodate those things.

      I definitely see where you’re coming from in suggesting a broader definition for “problem” within CTP Theory, and I’ve had similar thoughts. I use the term “problem” to refer to a contradiction, but I’ve considered that it might be better to use it to refer to either a contradiction *or* an unfulfilled requirement. I think that might be more in line with the way the term is generally used in Critical Rationalist discourse because, as you say, something like “I want to get rid of my hunger” is a problem. In CTP Theory that kind of thing would be represented as a requirement, so including unfulfilled requirements under the umbrella of “problem” seems like a good idea to me.

      On the idea of combining requirements and theories into the same type of entity, I see where you’re coming from. As you know, CTP Theory used to have two different types of entity called “Claims” and “Theories”, but I’ve since figured out a way to unify them into a single type of entity which I now just call an “Idea” or “Theory”, so I certainly see the value of unifying things in this way. The issue is that I haven’t nailed down the details of the requirement system yet, so I don’t really know exactly how requirements will need to be represented in the mind. I hope that it turns out that they can be implemented as the same type of entity as other ideas, so that requirements can just be a sub-type of the broader “idea” entity, but since I don’t know yet how exactly requirements can be represented, I don’t yet know if that type of thing will be possible. So for now I think it makes sense to just treat requirements as a separate type of entity, and I’ll deal with the prospect of unifying them with ideas once I’ve figured out more of the details of the system.

      On the topic of having contradictions be handled as a special case of requirements/desires, I’ve thought along similar lines before, but I don’t think that will work, at least not with the current way I’m thinking about the requirement system. The problem is basically that a requirement specifies that the mind wants to have an idea of some type, and you can tell whether or not the requirement is satisfied by checking if at least one idea within the mind meets the criteria specified by the requirement. The system can’t, however, represent a desire *not* to have a certain kind of idea (or at least, I don’t see how it could), which is what would be necessary to treat contradictions as special cases of requirements/desires. For instance, given an idea “X is true”, if the mind could use a requirement/desire to represent “I *don’t* want to have any idea of the form ‘X is false'”, then that requirement would be unfulfilled if the mind had an idea like “X is false”, so in that case you could prevent contradictions as part of the requirement system. But requirements/desires, in the way that I’m currently thinking about them, can only represent things like “I want to have at least one idea of the form …”, not “I *don’t* want to have any idea of the form …”, so that type of thing isn’t possible.

      Perhaps the requirement system could be broadened so that it could include those types of things as well, but I’m not sure that it would really serve any purpose other than allowing contradictions to be represented as special cases of requirements. While that might make the theory somewhat more elegant, I’m not sure that that would provide any practical benefit. Having to account for an even broader type of requirement might make the system more difficult to figure out, so for now I think I’ll continue to treat contradictions as their own, independent type of problem.

      1. I see your point on the problem of “not having an idea.”

        I guess that’s the problem with me, at a high level, criticizing but with no idea how to actually implement it. 🙂

        So I really like the idea of a single unified “idea”. I’m trying to work on this myself right now. And I can’t make it work either yet.

  4. This is interesting. I have a few things to criticise. I will split it up into a few smaller replies.

    It is good that you found out that removing ideas to solve problems doesn’t work. You then added requirements as a solution. But requirements should also be open for criticism and we are back at square one. I think the underlying problem is that your understanding of what an idea, or rather an explanation, can be is limited. Take Newton’s theory. It will not be forgotten as long as people remember the theory of Critical Rationalism, despite being falsified. In fact, it is the very fact that it is falsified that makes it strongly tied to CR. It is now a part of the explanation of CR. CR’s explanation needs a falsified theory. It would be more difficult to explain CR without it.
    Therefore, I conjecture that a problem is solved not by removing an explanation (idea), but by creatively adding more explanations. The problem needs to be explained away. “I talked to my vet and she said that the gender of my cat was mixed up with another cat. My cat is female after all. What made me suspect that something was wrong in the first place was the fact that my cat is calico.” After the problem is solved you still remember what it was like to doubt the gender of your cat and how you built on that to find a solution.

    1. Hi Marcus, thanks for reading and responding!

      I agree that requirements need to be open for criticism, but I don’t see how that puts us back at square one. I’m not yet sure of the details of the system, but I expect that it will be possible to allow requirements to contradict one another, as normal ideas do. And I suspect that requirements themselves will somehow be derived from existing ideas, or in other words, I imagine they’ll be the outputs of some other ideas when executed as functions.

      I agree that it’s important to still remember theories after they’re falsified. I can see how it might seem like CTP Theory doesn’t account for this, but I think that it actually does. Dennis Hackethal brought up the same point when we talked on his podcast (here’s a link: https://soundcloud.com/dchacke/15-theories-of-agi), so I’ll give essentially the same answer I gave on that podcast: In a real mind running according to CTP Theory, “Newton’s Theory”, for instance, wouldn’t be represented as a single idea. Instead, the set of ideas we call “Newton’s Theory” would be distributed out over many different “ideas” (in the CTP-Theoretic sense of the term, i.e. an instance of the right kind of data structure, which may or may not be interpretable as a function) within a mind. A mind that realized Newton’s Theory was false and decided to instead adopt General Relativity would only have to discard *some* of its ideas related to Newton’s Theory, not all of them. It could still retain almost all of its ideas about what Newton’s Theory is, it would just have to let go of the idea that Newton’s Theory is *true*.

      Consider a mind which contains some set of ideas which say things like "Newton's Theory says X" or "According to Newton's Theory, Y". These ideas don't actually say that X or Y are *true*, they just say that they are the predictions/implications of Newton's Theory. These are ideas about the contents of the theory which is commonly referred to as "Newton's Theory"; they aren't about what the true laws of physics are. If the mind also contained an idea which said "Newton's Theory is true", then it could use that idea in conjunction with an idea like "Newton's Theory says X" to derive the idea "X is true". And then, later, if the mind realizes that Newton's Theory is wrong, the only idea it has to discard is "Newton's Theory is true" (and all of the consequences derived from it, as is always the case when discarding an idea). The idea "Newton's Theory says X" and other ideas like it don't have to change at all, because the mind has removed the idea that allowed it to go from a statement like that to "X is true". A statement like "Newton's Theory says X" isn't false just because Newton's Theory is false, so those kinds of ideas don't need to be changed. The mind can retain all of its knowledge about what Newton's Theory is and what it implies while letting go of the idea that Newton's Theory makes the correct predictions, or correctly describes the nature of reality.

      The same could be true for the example about the cat. Your ideas about the sex of your cat are likely spread across many different “ideas” in your mind, so resolving the contradiction by discarding one or a few of those ideas wouldn’t necessarily have any impact on your memory of those ideas.

      1. Ella, it’s been a while since I read your original CTP post that explained the implementation details, but I had thought that you did erase theories that were false. That isn’t the case? Do you do any clean up at all?

        1. The mind does erase/discard ideas in order to resolve contradictions. Perhaps my previous example wasn’t clear enough, so I’ll rephrase it:

          Imagine that the mind contains two ideas: "Newton's Theory is true" and "Newton's Theory says X". X could be any implication of Newton's theory, like "An object in motion will stay in motion unless acted upon by an outside force" or "Mercury's perihelion should precess at a rate about 43 arcseconds per century short of what is actually observed". From "Newton's Theory says X" and "Newton's Theory is true", the mind can derive "X is true". If some other part of the mind happens to create the idea "X is false", then there will be a direct contradiction between that idea and "X is true". One way that the mind could resolve that contradiction is by discarding the idea "Newton's Theory is true". Since "X is true" was derived from "Newton's Theory is true" and "Newton's Theory says X", it's a logical consequence of "Newton's Theory is true", so it would also be discarded, and thus the contradiction would be resolved. However, "Newton's Theory says X" would *not* be discarded.
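
          To make the bookkeeping concrete, here is a minimal sketch in Python (with purely illustrative names, not CTP Theory's actual data structures) of how a derived idea could carry a record of its premises, so that discarding "Newton's Theory is true" automatically takes "X is true" with it while leaving "Newton's Theory says X" untouched:

          ```python
          # Minimal illustrative sketch, not CTP Theory's real machinery:
          # each derived idea remembers which premises it was derived from.

          class Idea:
              def __init__(self, content, premises=()):
                  self.content = content
                  self.premises = tuple(premises)  # ideas this one was derived from

          def discard(mind, target):
              """Remove `target` plus everything derived, directly or indirectly, from it."""
              removed = {target}
              changed = True
              while changed:
                  changed = False
                  for idea in mind:
                      if idea not in removed and any(p in removed for p in idea.premises):
                          removed.add(idea)
                          changed = True
              return [idea for idea in mind if idea not in removed]

          newton_true = Idea("Newton's Theory is true")
          newton_says_x = Idea("Newton's Theory says X")
          x_true = Idea("X is true", premises=(newton_true, newton_says_x))

          mind = [newton_true, newton_says_x, x_true]
          mind = discard(mind, newton_true)
          print([i.content for i in mind])  # ["Newton's Theory says X"] -- "X is true" is gone along with its premise
          ```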

          My point here is that the mind can realize that Newton’s Theory is false while still retaining ideas about what Newton’s Theory is, such as “Newton’s Theory says X”. And of course this example would still work if there were any number of ideas like “Newton’s Theory says X”, rather than just one.

          It might also be necessary for the mind to discard ideas to keep the program from using too much memory. It’s hard to say much about the details of that at the moment, but I think those details can be left until the theory is much more developed.

  5. I find the model you suggest for handling input and output plausible. Senses create original thoughts in the mind that can be explained further by ideas. Hackethal says that senses never produce ideas directly, which may be a reason to see thoughts and ideas (which process thoughts) as different things. Other thoughts, often generated by ideas, directly cause motor responses. Senses that have evolved to produce thoughts that match output thoughts directly could be like reflexes. Sense thoughts that only need a little bit of explanation to match output could be like muscle memory.
    I believe biological evolution has evolved our senses to produce highly refined thoughts that are heavily processed before the universal explainer gets hold of them. The reason would be that such an architecture is computationally more efficient. It is too inefficient to feed the pixels from the ccd directly to the explainer. There would be too much to explain.
    I believe a correct theory of input and output will help us figure out how good ideas outcompete bad ideas in an evolutionary theory of the mind as suggested by Hackethal. Early ideas must necessarily be tightly coupled with sense input for the explainer to have a chance to understand reality. I don't think it could be any other way, because ideas arising from creativity alone would just be gibberish.
    If we explore the theory that explanation is prediction then we could find some solid ground to stand on in the early stages of idea evolution in the mind. A thought to move the hand could be explained by an idea that concludes that the eyes will see the hand move. If the correct sense input then later appears in the mind, this would strengthen the idea that made the correct prediction ahead of time, making it more likely for this idea to be invoked in the future and even replicate. Ideas that produce the wrong prediction, e.g. that the foot moves, would soon be outcompeted. A fast idea would win evolutionarily over a slow idea by being strengthened both by the slow idea arriving at the same conclusion and by the sense input; the slow idea will only be strengthened by the sense input. When thoughts produced by ideas and the senses never match, we have a problem. It is solved by imperfect replication of ideas that hopefully mutate into stronger explanations that slowly align with the senses and other ideas.
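
    A very rough sketch of the selection rule I have in mind, with everything named purely for illustration: an idea is strengthened whenever a thought it predicted ahead of time actually arrives, so wrong (or slow) predictors fall behind:

    ```python
    # Rough illustrative sketch only: an idea is strengthened whenever a
    # thought it predicted ahead of time later arrives from the senses.

    class PredictiveIdea:
        def __init__(self, name, predict):
            self.name = name
            self.predict = predict      # function: thought -> predicted thought (or None)
            self.strength = 1.0
            self.pending = set()        # predictions waiting to be confirmed

        def observe(self, thought):
            prediction = self.predict(thought)
            if prediction is not None:
                self.pending.add(prediction)

        def confirm(self, arriving_thought):
            if arriving_thought in self.pending:
                self.pending.discard(arriving_thought)
                self.strength += 1.0    # reinforced: more likely to be invoked later

    hand_idea = PredictiveIdea("move-hand", lambda t: "see hand move" if t == "move hand" else None)
    foot_idea = PredictiveIdea("move-foot", lambda t: "see foot move" if t == "move hand" else None)

    for idea in (hand_idea, foot_idea):
        idea.observe("move hand")        # output thought appears in the mind
    for idea in (hand_idea, foot_idea):
        idea.confirm("see hand move")    # matching sense input arrives later

    print(hand_idea.strength, foot_idea.strength)  # 2.0 1.0 -> the wrong predictor loses out
    ```
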
    This evolutionary process could very well be running at some level in the mind, but I doubt that is what happens when I read and understand the content of your post above. The evolutionary ideas I would use for that are already present in the mind and do not change much while I read. I suspect direct associations, analogies, and reuse of existing structures in the mind play a bigger role in short-term thinking. Perhaps an alternate theory of creativity in CTP will be able to capture this.

    1. It’s interesting that you bring up the notion of separating things into “thoughts” and “ideas”, where ideas are things which process thoughts, because CTP Theory used to contain a similar distinction. I used the terms “claim” and “theory” in place of “thoughts” and “ideas”, respectively, but the underlying idea was similar. “Claims” were basically just data, and “Theories” were functions which take in some number of claims and return a new claim as an output. So, as you say, theories would process (or “interpret”) claims.

      So CTP Theory used to incorporate the kind of distinction you're suggesting, but I eventually decided against it after I realized that you could have a system that was capable of all the same things while only having a single kind of idea in the mind, rather than having two separate kinds. One issue with the Claim vs Theory distinction (or, in your terminology, the thought vs idea distinction) is that it seems to me like the mind should allow Theories to interpret/process other Theories. If Theories are defined such that they can only interpret Claims, then the mind lacks a way for Theories to interpret other Theories. I think the mind should be able to interpret any of its ideas, not just those of a certain kind.

      So now, CTP Theory has only a single kind of idea, which I just call an “idea”. While a Claim was an instance of a data structure, and a Theory was a function that could take in Claims and produce new Claims, an “idea” in the current version of CTP Theory is a combination of the two: it’s an instance of a data structure which can be treated as a function. Or at least, some ideas can be treated as functions. Some of them might not represent any valid function. The point is merely that ideas are the *type* of thing which can represent a function. This way, an idea can potentially interpret/process any other idea, including other ideas which might themselves be capable of interpreting/processing other ideas.
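
      As a loose illustration (the class name, payload representation, and helper here are placeholders I've invented, not what CTP Theory actually specifies), you can picture an idea as a piece of data that may or may not decode into a runnable function, so that any idea can in principle interpret any other:

      ```python
      # Loose illustration only: an "idea" is just data, but some ideas can be
      # decoded into functions that take other ideas as input and return new ideas.

      class Idea:
          def __init__(self, payload):
              self.payload = payload  # in a real system this might be a bitstring

          def as_function(self):
              """Return a callable if this idea encodes a valid function, else None."""
              if callable(self.payload):
                  return self.payload
              return None

      data_idea = Idea("my cat is calico")              # not interpretable as a function
      rule_idea = Idea(lambda idea: Idea(f"claim noted: {idea.payload}"))

      fn = rule_idea.as_function()
      if fn is not None:
          result = fn(data_idea)        # one idea interpreting another idea
          print(result.payload)         # "claim noted: my cat is calico"
      ```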

      I don’t agree that ideas arising from creativity alone, unrelated to any input or output, would be gibberish. Ideas about philosophy, for instance, don’t necessarily need to relate to any sensory data. So I think that an AGI with no inputs or outputs would still be a real intelligence and could make intellectual progress on some philosophical issues. It wouldn’t be able to figure out anything about the laws of physics, as it wouldn’t have access to any empirical evidence, but there are domains of knowledge other than physics. It’s important for practical purposes, of course, for the mind to have a way to take input and produce output, but I don’t think that it’s essential to the way the mind works as you’re suggesting.

      As I’ve talked about before, I don’t think that Dennis Hackethal’s neodarwinian theory of the mind can be true. I think that the mind, at it’s most basic level, works by finding and attempting to resolve contradictions between ideas, not by a neodarwinian process in which ideas attempt to make as many copies of themselves and compete for dominance.

      1. I know I’ve mentioned this before. But CTP theory uses GP, which does have a neo-Darwinian side that involves functions competing within a pool for dominance.

        1. CTP Theory doesn’t use GP exactly. It blindly varies existing ideas to produce new ones, and the way it does this might be similar to the way GP blindly varies functions, but that’s about where the similarities end.

  6. I see the advantages the unification of claims and ideas gives you. I think the reason why Hackethal says that senses don’t create ideas is to avoid building a mind that relies on induction, which has been shown to not be the principle behind creativity. I have not yet decided if this is a valid reason to separate ideas and claims/thoughts.
    A more serious objection to unifying ideas and claims is that runnable code could be injected through the senses to hack the mind. This is the plot in Snow Crash by Neal Stephenson as far as I can tell.

    1. I agree that we should avoid induction, and I see why the notion of having sense-data represented as ideas might seem inductivist. However, in this case, I don’t think it really is. Sensory data simply takes the same basic form as conjectured ideas because it allows the algorithm to be more elegant.

      If it helps as an intuition pump, you could phrase CTP Theory in a way that didn't involve calling the sense-data objects "ideas". Rather than calling all of the objects in the mind "ideas", you could call them "mind-entities" or something. Sense data would be a type of mind-entity, and you could reserve the term "idea" to refer only to mind-entities which don't represent sense data. To be clear, this doesn't involve any change to the underlying logic of the theory, it's just a different way of speaking about it.

      The point you make about sense-data taking the form of runnable code is a good one, but thankfully that problem is easily solved. You can define the language which the mind uses to interpret ideas as code, and the system that injects sensory data, in such a way that the injected sensory data will never form a valid function. For instance, imagine that ideas in the mind take the form of a bitstring. You could define the sensory data system such that all sensory input ideas start with 5 “1”s, and then you could define the programming language such that a bitstring doesn’t count as a valid function unless it starts with 5 “0”s. That way, you could guarantee that no idea that comes from the sensory input system will be runnable as code.
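
      Here's a toy version of that guarantee, using the arbitrary five-bit convention from the example above (none of this is meant as the real encoding, just an illustration of the idea):

      ```python
      # Toy version of the guarantee described above: sensory ideas are prefixed
      # with "11111", and only bitstrings starting with "00000" count as code,
      # so no sensory idea can ever be interpreted as a runnable function.

      SENSORY_PREFIX = "11111"
      CODE_PREFIX = "00000"

      def inject_sensory_idea(raw_bits: str) -> str:
          return SENSORY_PREFIX + raw_bits

      def is_valid_function(bitstring: str) -> bool:
          return bitstring.startswith(CODE_PREFIX)

      sensory = inject_sensory_idea("010011")
      assert not is_valid_function(sensory)   # holds for every possible sensory input
      ```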

  7. The next problem I see with CTP is the one about idea representation. As Hackethal describes in ch. 3 of A Window on Intelligence, the same function can be written in many different ways. It is not possible to compare the bitstrings to find out if they compute the same function. Commuting logical operators pose the same problem: "A or B" is the same as "B or A", and also as "not not A or B". In these cases it is possible to rewrite the expression to a canonical form, but that doesn't work for programs in general. CTP will only be able to detect problems in the simplest ideas, but that is perhaps a limitation of our mind as well.
    Even with this limitation I don’t see how the ideas in CTP will converge to use the same canonical representation. When the creative process fails to align the representation between ideas, CTP can’t do anything about it because it is not detected as a problem.

    1. I agree with this criticism. I feel like something is missing here.

      The thing I want to see an AGI theory address is why the mind abstracts things into categories so much. This might be a downstream effect rather than the basis for AGI (as Dennis argues; I'm uncertain what Ella thinks), but I'd expect a theory of AGI to explain why it has such prominence in human thought.

      Abstraction also has connections to CR that people miss. Like the fact that it’s a form of hard-to-varyness and probably necessary for compression in real life.

      Ella, how does CTP theory handle two theories/ideas that happen to be identical? (i.e. the same input always gives the same output.)

      1. On the topic of why the mind tends to abstract things into categories so much, I don't think that the explanation for this is directly related to AGI. I think that categorizing things is a simple, efficient, relatively effective way of understanding things, so it isn't very surprising that the mind often settles on it. Eliezer Yudkowsky has some nice writing on why categorizing things is an efficient way of thinking, such as this article: https://www.lesswrong.com/posts/yFDKvfN6D87Tf5J9f/neural-categories Yudkowsky is, of course, an advocate for Bayesian epistemology, so take the article with a grain of salt. Thankfully, I think the core point he makes in this article doesn't rely much on the specifics of his epistemology.

        Perhaps I’m missing something, but I don’t really see a reason why the mind would need to do anything to “handle” a case where two theories have the same outputs for all inputs. If the mind happens to have two ideas that are functionally identical, that’s ok. Other than it being slightly inefficient to have two ideas which are identical, I see any problem.

        1. Yes, it's a possibility that it's just a downstream effect. Which is why it might have little to do with AGI.

          But I think it’s far more common than we realize, even unconscious. That’s why I’ve wondered what the explanation is. If it were used only consciously, then I think the explanation that ‘it’s efficient’ would make more sense. But if it’s happening unconsciously too, it’s less obvious that it’s just being consciously done to be efficient.

        2. Interesting article, btw. I just skimmed quickly. (Still working.) I need to look at it harder. I didn’t immediately see where it talks about categories.

    2. I agree that there’s no computable algorithm for checking whether or not two programs define the same function, but I don’t see how this is a problem for CTP Theory. CTP Theory doesn’t involve checking whether two functions are identical.

      It’s true that different parts of the mind might represent things in different ways, even if they’re related to the same subject, and the mind might end up missing some problems because of that. But I think that must be true anyways: creativity is an inherently unpredictable process, and you can’t guarantee ahead of time that any particular problem will be solved, or even noticed, in any finite amount of time. And whenever the mind does have this kind of deficiency, it can eventually solve it through creative thought, by discovering some way to translate between the two different ways of representing things.

      1. “And whenever the mind does have this kind of deficiency, it can eventually solve it through creative thought, by discovering some way to translate between the two different ways of representing things.”

        I guess this is what I’m asking. How would this take place in CTP theory?

        I agree that we do sometimes hold ideas that are identical and don't realize it. So you are right that this isn't a fundamental problem.

        1. First I’ll give a real-world example, and then I’ll describe how it can be understood within CTP Theory.

          Say you’re a mathematician doing work into formal languages, and for some reason you decide not to read any of the literature and just figure out everything yourself. At some point you invent the idea of a Deterministic Finite Automata as a way of recognizing certain classes of strings. You go on to explore DFAs and prove some theorems about them. Later, after you’ve gotten bored of DFAs and want to try something else, you invent the idea of a Regular Expression, and you prove some theorems about those as well. DFAs and Regular Expressions are, of course, equivalent models, in the sense that any language that can be recognized by a DFA can be recognized by an RE and vice versa. At first you don’t know that DFAs and REs are equivalent at first, and you only realize that after you’ve spent some time exploring and proving some theorems about each of them. Once you realize they’re the same, you’ll realize that any theorem you’ve proved about DFAs can be translated into a true theorem about REs, and vice versa. So a theorem like “There is a DFA which can recognize a language L, where L is …” can be translated, thanks to your new understanding, into a theorem like “There is an RE which can recognize a language L, where L is…”. The same would work for a statement like “There is *no* DFA which can recognize a language L …”

          In CTP Theory that scenario would look like this: The mind contains two systems, A and B. A represents the mind’s understanding of DFAs, and B represents the mind’s understanding of REs. Each system is a set of ideas within the mind that share a common “language”, i.e. they represent information in such a way that other ideas within the same system can make use of those ideas.

          Imagine that, in one way or another, the idea "d is a DFA with 4 possible states, and its transition table is …" ends up in the mind. I'll call that idea X. When X enters the mind, the mind will try to use it in conjunction with various other ideas (i.e. use X as an input to functions) in order to explore its implications. When X is fed to ideas in A (the subsystem that has to do with the mind's understanding of DFAs), the ideas within A will allow it to deduce some new statements as consequences of X, perhaps something like "Once d leaves its starting state, it can never return" or "the language that d recognizes has infinite cardinality". Any new idea which is created as a consequence of X can once again be fed into A to potentially deduce further consequences. The various ideas in A will each only be useful on some subset of the possible statements about DFAs, but each new consequence produced from an idea in A can potentially be used by some other ideas in A, so the ideas in A all work together to allow the mind to explore the implications of X (or any other statement about DFAs). B would work similarly, except that it wouldn't work for statements about DFAs, and would instead only work for statements about REs.

          So that’s basically how the mind would work *before* it realized that DFAs and REs are equivalent. It would have two systems of ideas, A and B, each of which can be used to explore the implications of a certain kind of ideas. Ideas in the language of each system can be fed to the other ideas in the system as inputs, and if any of the ideas happen to provide useful outputs, those outputs can once again be fed to the ideas in the system to see if it can produce any more new outputs. But A can only take in statements and return statements that have to do with DFAs, and B only can only take in or return ideas that have to do with about REs, so the statements in A don’t have anything to do with the statements in B. No consequence deduced from A will ever be a useful input to a function in B (and vice versa), because the ideas in B would only produce outputs for inputs that have to do with REs, and A will never produce an idea like that.

          Imagine then that the mind comes up with a new system of ideas, which we’ll call C. C represents the understanding that DFAs and REs are equivalent, and as such it contains ideas that can take in statements about DFAs and return a statement about REs, or vice versa. For instance, C might contain an idea that says something like “A DFA with transition table t recognizes the same language as an RE with the form …”. That idea, if you think of it as a function, would allow the mind to take in an idea that only references DFAs, like “The DFA Y has transition table …” and return an idea that references both DFAs and REs, like “The DFA Y with transition table … is equivalent to the RE with form …”. Since that statement is about REs, it could then be used in conjunction with the ideas in B to further explore the consequences.

          So, in short, you have two separate subsystems, A and B, each of which talks about its own kinds of concepts, and neither system ever produces an output that can be used as a worthwhile input in the other system. The system C, however, can take an idea output by system A and use it to produce a new idea which *can* be used as a worthwhile input by B (and if C represents a good understanding of the equivalence of DFAs and REs then the reverse would also be possible, of course). So without C, A and B will seem to the mind like entirely separate things which don't have much to do with each other. However, once C is involved, the mind can see that A and B are related, and are in fact just two ways of talking about the same thing, because C effectively provides a way to translate from A-statements to B-statements.
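
          If it helps, here is the same structure boiled down into a toy program (the statement formats and function names are placeholders I've invented for the illustration): A and B each ignore the other's statements, and C is the translator that lets B's ideas take over where A's outputs left off:

          ```python
          # Toy version of the A/B/C structure above. The statement formats are
          # placeholders; the point is that C turns A-statements into B-statements.

          def system_a(statement):
              """Ideas about DFAs: only respond to DFA-statements."""
              if statement.startswith("DFA"):
                  return statement + " -> some consequence about DFAs"
              return None

          def system_b(statement):
              """Ideas about REs: only respond to RE-statements."""
              if statement.startswith("RE"):
                  return statement + " -> some consequence about REs"
              return None

          def system_c(statement):
              """Translator: rewrite a DFA-statement as an equivalent RE-statement."""
              if statement.startswith("DFA"):
                  return "RE equivalent to (" + statement + ")"
              return None

          x = "DFA d with transition table ..."
          print(system_b(x))                    # None: B can't use A-language directly
          translated = system_c(x)
          print(system_b(translated))           # now B's ideas can take over
          ```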

          That’s a somewhat complicated example, but I hope it will get the point across. If any part of it is unclear I’d be happy to elaborate.

          1. Good example, though. That makes sense to me. And you're right that this is a good example of how I hold two identical ideas in my head and only later realize they are the same, but continue to think of them as separate ideas too.

  8. Marcus says:

    “I believe biological evolution has evolved our senses to produce highly refined thoughts that are heavily processed before the universal explainer gets hold of them. The reason would be that such an architecture is computationally more efficient. It is too inefficient to feed the pixels from the ccd directly to the explainer. There would be too much to explain.”

    I agree with this. But if you accept that is true, then something like this might also be true:

    http://fourstrands.org/2020/06/19/motion-blindness/

    (i.e. my wild conjecture — that I’m not sure I even believe — that face recognition isn’t normally handled by the UE module.)

  9. I’m now ready to criticise the definition of Problem in CTP. When I see a thing for the first time, a new idea is produced by the eyes. “I saw a thing”. It could be something very fundamental, like “I saw yellow”. I would like the mind to get creative and try to explain this new thing I saw. CTP does nothing unless it already has the idea “not I saw yellow”. That seems unlikely because there are an infinite number of things I haven’t seen. Could there be an inborn idea “Negate everything I see”? This would create more and more false problems as I get better at explaining things. That doesn’t seem right either.
    To get out of this dilemma, I conjecture that a Problem is a thought without explanation. The new thought just sits there in the mind and no existing idea is able to come up with a matching thought that could explain it. Creativity starts to generate ideas and hopefully one of them is able to explain the new thought. The idea can use other thoughts that are currently present in the mind as input, which would be recent sensory input and the results of invoking existing ideas. The fact that related ideas and thoughts are present in the mind at the same time makes it possible to remember that they are associated. This association is later used to select ideas to invoke given a set of thoughts. This solves the combinatory disaster that would arise if every idea had to be applied to every thought. The association could also play the role of a primitive idea. "The last time I saw Anna and a car I also saw yellow. Now I see blue. Blue and yellow are both unexplained, i.e. problems. Let's see if she has bought a new car. That would solve both of my problems." This touches on what Yudkowsky writes about in your link above.
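
    Roughly what I mean by using co-occurrence to select ideas, with all names invented for the illustration (ideas are just labelled by strings here): only ideas previously associated with the current thoughts get tried, instead of every idea against every thought:

    ```python
    # Rough illustration of the association mechanism I have in mind: ideas and
    # thoughts present in the mind at the same time get associated, and those
    # associations later pick which ideas to try first. Ideas are just labels here.

    from collections import defaultdict

    associations = defaultdict(set)   # thought -> ideas seen together with it

    def remember_cooccurrence(thoughts, ideas):
        for t in thoughts:
            associations[t].update(ideas)

    def candidate_ideas(current_thoughts):
        """Only ideas previously associated with the current thoughts are tried."""
        candidates = set()
        for t in current_thoughts:
            candidates |= associations[t]
        return candidates

    remember_cooccurrence({"saw Anna", "saw a car", "saw yellow"}, {"Anna's car is yellow"})
    print(candidate_ideas({"saw Anna", "saw blue"}))   # {"Anna's car is yellow"}
    ```
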
    This conjecture is not without problems of its own. For example, some mechanism must be responsible for replacing bad explanations with good, and I am not yet sure how that would work.

    1. I like the idea that not having an explanation is a problem.

      How is an ‘explanation’ defined though? In CTP theory or otherwise…

      1. My best proposal is that an explanation could be implemented as a function that, when using existing thoughts as input, produces a replica of the thought that is to be explained. It could work as a selection criterion in an evolutionary process to find explanations. Outstanding questions: How is a partial explanation identified? How are bad explanations that explain too much outcompeted by better explanations? I think these questions could be resolved by a theory of how the mind creates tests for its explanations to see if they can withstand criticism.
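
        A sketch of the selection criterion I'm proposing, with all names invented for the example: a candidate explanation is kept only if, when run on the other thoughts currently in the mind, it reproduces the thought that needs explaining:

        ```python
        # Sketch of the proposed selection criterion: a candidate explanation is a
        # function that, given the other thoughts in the mind, should reproduce the
        # thought to be explained. Candidates that fail are discarded.

        def explains(candidate, other_thoughts, target_thought):
            try:
                return candidate(other_thoughts) == target_thought
            except Exception:
                return False   # a candidate that can't even run explains nothing

        other = {"Anna is here", "Anna owns a yellow car"}
        target = "I saw yellow"

        good = lambda thoughts: "I saw yellow" if "Anna owns a yellow car" in thoughts else None
        bad = lambda thoughts: "I saw blue"

        candidates = [good, bad]
        survivors = [c for c in candidates if explains(c, other, target)]
        print(len(survivors))   # 1 -> only the candidate that reproduces the thought survives
        ```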

        1. I like where you are going with this.

          So let’s start with a problem like ” I saw yellow”. What would an explanation for it look like? (Obviously specific to the circumstance.) Then let’s come up with ways that might generate that explanation/function.

          Also, an explanation clearly is a function — but so is everything. I’m just not sure all functions are explanations.

          Or are they? Thoughts?

      2. I tried to use my definition of explanation but failed to get where I wanted. I now suspect that an explanation can't be any function in general, but is restricted to a small subset of functions. Explanations have to be criticised with tests to get rid of all the wrong ones. They are just guesses after all. To construct a test we need to find out what input will cause the explanation to give the opposite output. This is easy to do if we can invert the explanation. Functions in general are opaque and not easy to invert. It is certainly possible to find the input we are looking for by guessing, but then we have an infinite number of conjectured false explanations, each criticised by an infinite number of conjectured non-working tests. That is not a computation that will end soon.
        I will tackle the problem “I saw yellow” in another reply, if I may.

        1. "Functions in general are opaque and not easy to invert. It is certainly possible to find the input we are looking for by guessing, but then we have an infinite number of conjectured false explanations, each criticised by an infinite number of conjectured non-working tests. That is not a computation that will end soon."

          Good point.

      3. I changed my mind. I agree with CTP that a problem is a contradiction between ideas. We don't need a problem to conjecture ideas. Ideas are simple logical functions that are easy to negate to create contradicting counterfactual test statements. Test statements make the search for contradictions faster. Ideas are never removed, only ordered as more or less preferred. Ideas originate both from biological evolution and from critical rationalism. Categories are an important type of idea. Let me explain how I reason.

        “I saw yellow”. In the mind, where is the line drawn between biological evolution and the universal explainer? Yellow is a category of sensory experiences that are conjectured to belong together. Verbs are categories. Some of them are used to relate cause with effect, two categories abstract enough to have their members jump back and forth between them depending on context. Hofstadter and Sander have written a book about categories I should finish reading, Surfaces and Essences.
        How is the category yellow formed? You are probably aware of those optical illusions where the same paint is interpreted as having different colors depending on how shadows are cast in a three-dimensional world. It seems basic categories are given to us by the senses without discussion, and apparently even some complex things like faces. It is not obvious to us whether a category is created by evolution or learned. Has evolution given us basic verbs as well, like walk, run, fly, crouch, sneak and hide?
        I do believe our vision has evolved the ability to identify the boundaries of objects, their parts and other properties like color. I don't believe it can identify recent categories like cars. That must be a category we create and learn to recognise after we are born, using critical rationalism. When we build an AGI we can give it any senses and categories we like, but it also has to be able to create new categories by itself.
        "I see a yellow car" for the first time. The yellow object is identified by the eyes. I have learned to identify the category of cars from black and white photos, but I know nothing about their color. I can now conjecture some explanations, for example "All cars are yellow". I don't need a problem to guess explanations. To refute it I need to find a contradiction by combining it with other explanations. The search is helped by test statements, counterfactual contradictions that, if found to be true, would create a problem for my conjecture. We can search from both directions and meet in the middle. One such test statement could be "one car is blue", which would contradict "all cars are yellow". This test statement can be constructed by following mechanical logic transformations. Blue is picked from a category where yellow is a member. If I can combine this test statement with other explanations I know of and transform it into a factual statement, then the conjecture is falsified without the need to see a blue car. Another strategy could be to plan some actions that would lead me to see a blue car. I use the first strategy and recall that "paint can be of any color" and "cars are painted". I use these to transform "one car is blue" into "one car is yellow", something I believe to be true. I now have a contradiction between two statements, i.e. a problem. "All cars are yellow" is falsified.
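
        A crude sketch of just the mechanical step, with the categories and statements hard-coded for illustration: counterfactual test statements are built by swapping in other members of the color category, and the conjecture has a problem if any of them matches something already believed:

        ```python
        # Crude sketch of the mechanical step only: build counterfactual test
        # statements for a universal claim by swapping in other members of the
        # relevant category, then check them against statements already believed.

        COLORS = ["yellow", "blue", "red", "green"]

        def test_statements(universal_claim):
            # universal_claim is a pair like ("car", "yellow"), meaning "all cars are yellow"
            kind, color = universal_claim
            return [f"one {kind} is {other}" for other in COLORS if other != color]

        def falsified(universal_claim, believed_statements):
            return any(t in believed_statements for t in test_statements(universal_claim))

        believed = {"one car is blue"}          # e.g. reached via "paint can be of any color"
        print(falsified(("car", "yellow"), believed))   # True -> "all cars are yellow" has a problem
        ```
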
        Falsification is tentative. It generates a partial order of partially falsified explanations which I can use to select the explanations I prefer, i.e. believe in. We also need a partial order of how hard they are to vary. Could it be that an explanation that is harder to vary has additional test statements compared to one that is easier to vary?
        "No car is yellow" is a test statement that criticises what I saw in the first place. I use the explanation "color can change with light" to plan and perform an action that will change the light on the car. If it is still yellow, the observation has withstood the criticism.

        1. I am constantly confused, it seems. Above I sometimes use 'explanation' to mean the same as idea, theory or statement. Now I want 'explanation' to be a composition of ideas, test statements, other explanations and empirical results that falsifies an idea or theory to support another.

    2. It sounds like in your model, if I'm understanding it correctly, the mind has no way to *decide* whether or not it cares about explaining some particular thing. There are some things that you might see or think but not care about, and you don't waste your time explaining them. For instance, you might see that Anna's car is blue now even though it was yellow last time you saw it, but you might just move on and not really put any thought into the color of Anna's car because you simply don't care. The mind needs to be able to decide for itself when it should or shouldn't seek to explain things; it can't simply do it for everything.

      In CTP Theory, the desire to have an explanation for something would be represented as a requirement. The mind seeks to fulfill each of its requirements, so if the mind sees something yellow, and it wants to explain why it saw something yellow, it can create a requirement to represent that desire. But, crucially, the mind can also *not* create such a requirement. Whether or not the mind creates a requirement to explain something depends entirely on what kinds of ideas are in the mind, which means that the mind can decide for itself what kinds of things it wants to explain.
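
      As a loose sketch of what I mean (the function names and string formats are placeholders, not CTP Theory's actual machinery): whether a requirement to explain something gets created is itself decided by ideas in the mind, so the mind chooses what it cares to explain:

      ```python
      # Loose sketch with placeholder names: whether the mind creates a
      # requirement to explain an observation is itself the output of ideas
      # in the mind, not a fixed rule applied to everything it sees.

      def cares_about_cats(observation):
          return "cat" in observation

      def cares_about_weather(observation):
          return "rain" in observation or "snow" in observation

      interest_ideas = [cares_about_cats, cares_about_weather]

      def maybe_create_requirement(observation, requirements):
          if any(idea(observation) for idea in interest_ideas):
              requirements.append(f"explain: {observation}")

      requirements = []
      maybe_create_requirement("I saw a calico cat", requirements)
      maybe_create_requirement("Anna's car is blue now", requirements)   # no idea cares
      print(requirements)  # ['explain: I saw a calico cat']
      ```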

      I agree that "the combinatory disaster that would arise if every idea had to be applied to every thought" is a problem that needs to be solved, but I don't think that simply associating ideas that are in the mind at the same time is a good solution. Perhaps *sometimes* it might be a good idea to associate two ideas which appear at the same time, but it won't always be. And sometimes you'll need to associate ideas which have never been in the mind at the same time. The criterion used for deciding which ideas to apply to other ideas (or "thoughts", in your terminology) has to be guided by creativity; it can't be something mechanical like what you're suggesting.

      1. “but I don’t think that simply associating ideas that are in the mind at the same time is a good solution.”

        Without really disagreeing with you, let me play devil's advocate for a moment. We know that Hebbian learning is at least physically how the brain works. Therefore it at least seems like the brain must utilize Hebbian learning in some way in its algorithm. (This may not actually be true… thus my use of the word 'seems'. However, it's not a terrible initial conjecture either.)

        So what if we explored this idea further and found out that it actually solves a lot of problems (including the combinatorial explosion), but not every problem? That would still make it an interesting path forward even if it were only a partial solution.

        Thoughts?

        1. Yes, it might be a path worth exploring, especially if it turned out to help solve some other problems, but I'm not sure if it even solves this problem of combinatorial explosion particularly well. (It certainly reduces the number of possibilities that you need to check, but does it narrow them down in a useful way?)

      2. Like other things the brain does, explaining things is probably automatic and something we can't easily stop doing. Those who meditate say they can. When you know how to read, it is impossible to see a word and not read it. The decision to explain something in particular could be what the senses decide are more urgent. It has been shown that we sometimes rationalize why we did something after the fact to maintain a coherent view of ourselves, even when the universal explainer had nothing to do with the decision. Whether or not we control our desires, I don't know, because I don't know how to criticise it.

        1. I’m not sure I understand what you mean “The decision to explain something in particular could be what the senses decide are more urgent”, surely the decision to explain something decided by the mind, not by the senses, right? Even with my eyes closed, in a quite room, etc., I can make decisions about what kinds of things to try to think about and try to explain.

          I definitely think we control our desires. Often that happens subconsciously, but the subconscious is just as important as the conscious, if not more important. A theory of AGI needs to describe both the conscious and unconscious parts of the mind, and so I think being able to explain how people come up with their desires is crucial. And that includes desires about explaining certain things.

  10. I really appreciate what you are doing with CTP theory. I think putting something out there to try to see what problems develop and to encourage feedback/criticism is awesome.

    I really need to spend some time with your implementation of CTP theory and try to understand it better.

    Would you consider publishing more CTP articles here?

    I seem to recall the original one getting into more detail on how it’s implemented. Plus, you know, links to github and all that.

    1. I’ll definitely be making more posts here about CTP Theory. One thing I’m working on now is a mathematical formulation of the theory that describes exactly how all the different parts work in a formal way.

      I wouldn’t really recommend looking at the implementation that’s currently on my github, it’s outdated and might end up being more confusing than helpful. It still contains the distinction between Theories and Claims, for instance. Given how much the theory has changed, I don’t think I’ll ever update that repository, as it would probably just be easier to start from the scratch than update that code. Implementing the theory isn’t really a priority right now because I worry that any code I write now will become obsolete again as I develop the theory more, so for now I’m just trying to think through the theory and solve some theoretical problems before worrying about an implementation. I hope the mathematical formulation that I plan to publish will serve a similar purpose for whoever is interested in the technical details of the theory, though.

      1. A mathematical formulation would be perfect…

        IF you promise to explain your notation. Gosh dang do I have a hard time understanding mathematical notation. It’s been SOOO many years since I took a serious math class.

        Maybe consider giving pseudo-code (or even Python, since it's almost pseudo-code) to back it up. I find pseudo-code easier to follow and it helps me make sense of the math.
