The 4 epistemological advances any team CAN make

Karl Popper coined the phrase “All life is problem solving”. Problem solving (i.e. creating knowledge to solve problems) is the defining essence of what we do and even of who we are. We solve problems in many different domains (scientific, moral, economic, societal, relational, individual, …). We mostly don’t solve problems alone: we are “condemned” to work with others to find solutions together. An interesting question is how we do this together, and more specifically from an epistemological point of view: how do we think the way we think, given that we are in teams/ groups/ entities that solve problems? How do the others affect our epistemology, and vice versa? What kind of “good” and “bad” evolutions are possible in teams?

To address these questions, I came up with a framework that distinguishes 5 epistemological “stages” in teams. Each stage describes the (dominant) epistemological “approach” and the effect it has on both knowledge creation and behaviour in those teams. I have ranked them from “primary/ basic” to “ideal”. I think any team (at whatever stage of cooperation/ experience with each other it is) can be mapped onto one of these 5 stages, and, more interestingly, be helped to move on to a more advanced stage.

So here is a short description of the 5 stages:

A. “Ignorism” (sorry, couldn’t find a better word, open to improvements!)

Teams that are “forced” together, with an obligation to solve a problem (e.g. a project team with people from different parts of the organisation, who don’t know each other, who have never worked together before, and who are asked to manage an uninteresting project on top of all their other work). You could call this team a “troupe”: no level of connection yet (apart maybe from the fact that they work for the same company, but nothing more) and very low appetite to kick off the problem solving. Team members will not be inclined to share information/ knowledge at all at the start, at least not vital information (information that could make the project start more of a success). They will deliberately limit themselves to sharing those pieces of information that they deem minimally courteous and minimally appropriate to share. They feel no urge, and no bond at all, to really solve the problem at hand, or at least to start that process optimally. In this stage, people are basically ignoring each other. They ignore each other not because they have nothing to share (they may hold very explicit ideas on how to solve the project problem), but because not sharing is tactically more useful (for example because sharing could imply more, unwanted, work for no real reward/ usefulness in return at this stage).

B. “Justificationism”

In this stage, the team has already found some early momentum in exchanging information: people start talking real content to each other, content that starts to have real relevance to the problem at hand (as opposed to the minor/ formal-only exchanges of stage A). But in this stage, everyone pays a lot of attention to making sure they take their own points home, regardless of what others say or object. We carefully build arguments/ evidence for our claims and theories about the problem at hand. We have not yet built trust in the group, so why would we already open up? This is the “justificationist” epistemology at work. One advantage may be that debates get deeper and deeper: everyone brings more (and more relevant) ideas into the debate, with more and more arguments (both rational ones, including false ones such as arguments “from authority”, as well as emotional ones). But considering the option that you may be (even partially) wrong, and the other (even partially) right, is avoided here at all cost! This attitude has an important impact on knowledge creation: no new knowledge is being created, as everyone holds on to their own “piece” of knowledge, without considering or agreeing to refine/ improve it. This already jeopardises the eventual quality of the solution: you are “gambling” that your current piece of knowledge is already the one best suited to solve the problem optimally. Optimum knowledge (knowledge that will solve the problem optimally) always needs to be well suited to the problem, and problems are mostly new, so they require new knowledge, which cannot be predicted in advance. And yet that is what you do: you predict that your piece of knowledge (at the start) is already the one that will optimally solve the problem (in the future). The longer this stage lasts, the more negative its impact on behaviour: it creates polarisation (as everyone justifies a different piece of knowledge aimed at solving the same problem) and, from some point onwards, people tend to start to “Work Apart Together”, delaying all conflicts of alignment as much as possible.

C. “Falsificationism”

This is the stage where some people stop talking about what they themselves claim and start addressing what others are claiming, with the specific intention of proving them wrong. They try to detect errors in what others say, in order to refute/ remove their claims/ ideas. This is what we call the “sceptic’s” attitude. Often they are right, and mostly this is done using good rational thinking. It therefore has the potential to remove “bad” ideas based on sound arguments. So this benefits the team, in theory. More often than not, however, it also leads to polarisation, as the people with the “bad” ideas feel attacked and take this personally. Additionally, there is still no new knowledge created (which is needed, as we know that most problems are new/ unique and therefore NEED new knowledge if we ever want to get closer to optimum solutions). These are the reasons why I think that “scepticism” should always be an “interim” stage in which we should not get stuck. Whereas scepticism has done a lot of good in removing bad ideas, it is not sustainable over long periods, neither in terms of creating optimum new knowledge nor in terms of the much-needed cooperation. This is definitely not the stage Karl Popper wanted us to get stuck in, given the importance he gave to the “cure” for this stage: error correction.

D. “Error correction” ad hoc

At some point along the way, maybe in a moment when some team members are temporarily fed up with the energy-consuming justificationism and falsificationism battles, a completely new idea may emerge in the team. A specific kind of new idea: an idea that not only has the capacity to confirm that some idea (of someone else) was wrong, but that also has the potential to correct the error in that idea, making the combination “old idea – the mistake in it + the error corrected through the new idea” a very suitable and valuable new theory (piece of knowledge) for making progress in the problem solving. This is possible (don’t despair 😉 ) and when it happens it connects the team together strongly, possibly even for the first time since the start of their mission. The condition for this to work is that everyone realises why and how that new idea led to progress (i.e. correcting the previous error in a way that improves things). There may always be objections to that realisation (people may fall back into stage A from time to time), but people do have the (infinite, if you ask David Deutsch) capacity to explain progress, and when this capacity is used, they *will* very often be able to understand how progress is made through that error-correcting idea. The impact of this “ad hoc” error-correcting idea can be big: new knowledge is created (i.e. the old piece is improved/ corrected) and the connection between team members grows, and therefore the degree of cooperation does as well. Error correction in this stage, however, is still ad hoc: it occurs from time to time, not systematically. The reason it occurs only occasionally is that we do not really focus on it; we may still be too busy with our justification/ falsification endeavours. We also do not systematically “learn this by doing”: it still has to find its way into the dynamic a bit too serendipitously. Most of today’s problems, however, need more frequent error correction, as a continuous attitude/ culture, and that is exactly what stage E is about.

E. “Error correction as a process/ culture”

In this stage, cooperative error correction takes over seamlessly as the dominant epistemological behaviour. We all consider our individual theories/ claims to be fallible, and we are eager to collectively seek out and correct their errors. Important here: we still have theories (!) and we can take ownership of them to evolve them; it’s not that we are silent and waiting for any good error-correcting idea to come along, on the contrary! Both idea generation and error correction run at full force. People use the autonomy they are given to come up with their own ideas, transparently, and then take active ownership in seeking error correction on their ideas within the group. Debates get lively, new knowledge creation kicks off at a high pace, and people grow their engagement through the collective improvement (correction) of the ideas with which they are solving the problem. Understanding of how the problem is/ will be solved not only grows, it grows in a more aligned way. The contributions of everyone (with both new ideas and error-correcting ideas) to that shared explanation/ theory grow as well. This is the state of optimum cooperation in a team: more aligned explanations, consisting of more diverse contributions from everyone individually, and more collectively error-corrected.

This is a stage that teams can only arrive at “in the action”: by doing this, often enough, and based on some simple principles … not by being given the theory about how to do this! This is because both the knowledge creation and the behaviour (real cooperation) in this stage are emergent phenomena: unpredictable and not very controllable (e.g. by fixing all kinds of things from the outside along the path of the team, such as externally controlled milestones, prescriptions, regulations, reporting requirements, …). The dynamic can, however, be guided, through the good use of some drivers of these emergent phenomena … but always “in the action” (not by giving the theory or commands around it). “Good” drivers for guiding the emergent dynamic of knowledge creation and behaviour are: transparency from within (on the evolving solution and who contributed how to it), autonomy, “flow” and “group flow”. When organisations find the right system/ process for organising problem solving like this, they effectively ignite an open-ended stream of knowledge creation. Organising this as a specific process (of “learning in the action”), together with the fact that “problems are inevitable”, gives this the potential to become open-ended knowledge creation together with continuous improvement in culture/ behaviours. There is then no need anymore for the (artificial anyway) fixing of wanted/ desirable “end points” in advance.

The figure below summarises the 5 stages, and the 4 potential moves:

It’s important to understand that any team has the potential to make these improvement moves between stages; teams don’t need to stay stuck in the early stages. Even if reality shows that many teams are stuck for a long time, there are ways (this knowledge *can* be discovered “in the action”) to unlock more advanced stages for those teams, even without having to replace people on those teams or send them to external training or coaching (any training or coaching is by definition “out of the action” of the problem they were solving together, and therefore not the place where the knowledge can be created that will allow them to advance between those epistemological stages). People are universal explainers (David Deutsch), so they can distinguish a better explanation from a less good one, and also understand why the one is better than the other. People can also improve their own explanations, and understand why they are improving. Additionally, we are all equal in our potential to do so. I am writing that they “can”, but they do not always necessarily “do” that. But in this matter, the “can” statement is of far more potential value than the “do” statement. It’s up to us to start exploring (and practising) HOW we can.

7 Replies to “The 4 epistemological advances any team CAN make”

  1. Interesting article! Nice to see someone really try to apply Popper’s theory of knowledge to a business environment.

    I thought it interesting that you see “Falsification” as not the end goal. This is the thing Popper is most famous for, but it’s not the most important part of his theories. And I think it turned out to be not that important a concept.

    Interestingly, you aren’t the only one starting to realize that ‘falsification’ is not the end goal. Here are two clips of Deutsch explaining that we don’t so much refute from theory to theory as move from worse problems to better problems:

    https://youtu.be/01C3a4fL1m0?t=2054

    https://youtu.be/01C3a4fL1m0?t=3609

    1. Thank you for the references!

      Makes me think of the dreadful yet so popular slogan in organisations: “Let’s keep it (problems, knowledge, …) simple!”

      That slogan alone is one of the biggest enemies of real error correction. I continuously seek ways to eradicate it, but it’s one of those persistent anti-rational memes (persistent because it serves control-based management too well).

    2. There are many arguments for why falsification is not the end goal. To me personally, the most convincing one is: falsification does not add to the growth of knowledge (no new knowledge creation, merely removal of bad knowledge).

  2. “The dynamic can, however, be guided, through the good use of some drivers of these emergent phenomena … but always “in the action” (not by giving the theory or commands around it).”

    I didn’t understand this. Can you explain it further?

    1. Hi Bruce, I meant to say 2 specific things in this statement:

      (i) the emergent phenomenon (of continuous error correction through real cooperation in a team) CAN be “influenced”: you can do things that make you get to *that* phenomenon (and not, e.g., only to (any) random phenomena (outcomes), or only to the negative/ opposite/ unwanted ones). This implies that there are management approaches/ systems conceivable that do exactly that job (of which Pactify is one ;-)), and that is already important to state.
      (ii) what things you can do (i.e. what management approaches work to get that job done) relates to how they help people to optimally learn specific new things (new ways of creating knowledge & executing knowledge (i.e. behaviours)). And people learn optimally “in the action” (by doing, testing, capturing knowledge and using it to create new and better knowledge, …) … not by “being told/ lectured/ coerced to follow something”. This “in the action” implies the presence of some “good” drivers of learning/ changing behaviours: real communication, transparency from within (not externally forced upon the team), autonomy (given AND taken), flow, group flow.

      Does this make sense to you?

      Cheers
      Bart

  3. Great article, Bart!
    My favorite philosopher Karl Popper, who coined “falsification”, is always a good start 😉 I had ignored his statement that “all life is problem solving”, although he was not an engineer 😉
    I love “ignorism” since it reflects on the cognitive and behavioral processes of people. As Al Gore said: “People prefer a reassuring lie to an inconvenient truth”. Thus people can get stuck in the epistemological trap known as the “Gettier problem”: justified true belief. So, am I correct that error correction “ad hoc” implies an open mindset: open to “absorb” ideas (I wouldn’t say “criticism”) from others?
    According to Hans Rosling (Factfulness), it is not “ignorance” (= a lack of information, not stupidity!) but our own prejudices that keep us from absorbing views that differ from ours.
    It takes phases, according to Søren Kierkegaard, to admit them:
    1. “that’s ridiculous” = conflict
    2. “I need to think about that” = reflection, avoiding losing face
    3. “if only I had known before”, often accompanied by looking for a scapegoat who fooled us intentionally (“it wasn’t my fault I believed this: he had bad intentions and misled me”).
    Your thoughts?

    1. Thank you Luc! Next time we see each other, we know what to talk about! 😉
      Error correction can be done on your own (by reading/ studying/ testing/ criticising … your own opinions), which is already good, but the optimum form of error correction is the one you describe: changing your opinion (for the better) BECAUSE of someone else providing you the (improved) knowledge. That’s not only better knowledge (error-corrected) but also cooperation: it took two to get it done.

      In organisations we should count the number of times people DO change their opinions because of input from others 😉 My theory: organisations where people don’t change their opinions (because of input from others) MUST perform worse in the long run.

      What do you think about that theory?

      Cheers
      Bart
