Georg Ivanovas From Autism to Humanism - systems theory in medicine

3. Epistemology


3.3 Gödel, the blind spot and the illusion of completeness

A well-known experiment in perception research is the detection of the blind spot. In the classic demonstration, a cross and a small figure (here, a mouse sitting on a pattern of lines) are printed side by side. If you close your left eye and look at the cross from a distance of about 20-30 cm, the mouse vanishes, leaving only a perception of the lines.

In this experiment the mouse is located in the blind spot and vanishes from our perception. However, there is no hole in our perceptive field. Nothing is missing. Everything seems to be self-contained, complete and true. Heinz von Foerster summed up this phenomenon as: “we don’t see that we don’t see” (Foerster/ Bröcker: 34). It takes a different point of perception, a different angle to look from, to reveal the mouse.

This tendency to create an integral picture has been demonstrated experimentally by gestalt psychology and, in the social context, by gestalt psychotherapy. That means we always have a complete-seeming sensory perception even though parts are lacking. The phenomenon is particularly prominent in persons whose blind spot has been enlarged by a stroke or tumour. One such patient saw only half of the food on her plate and could, consequently, eat only that half, complaining that she was served too little. When the plate was turned she saw the remaining food, and again she could eat half of it (Sacks 1998: 77-79).

The same is true for every scientific approach. This blind spot has been formally demonstrated by Gödel in his incompleteness theorem. When he published it in 1931, it “sent shock waves through the mathematical community” (Devlin 2002; Guerrerio: 51-52). The theorem says that within a sufficiently powerful logical or mathematical system, the truth of an argument cannot always be proved: such a system is either free of contradictions or complete, but never both. The reason is that as soon as recursive or second-order strategies are involved (chap. 4.3), a gap of uncertainty opens. It also implies that a system which could prove its own consistency out of itself would be contradictory (1).

Although Gödel’s theorem was initially about the system of arithmetic, it was soon understood that it applies to all formal systems of sufficient power (Krippendorf 1986), showing that there is a structural uncertainty in all reasoning.
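Schematically, and in standard textbook notation rather than in the wording of the source, the two incompleteness theorems can be written as follows, where T stands for any consistent, effectively axiomatized theory containing basic arithmetic:

```latex
% First incompleteness theorem: T can neither prove nor refute its
% Goedel sentence G_T ("this sentence is not provable in T").
T \nvdash G_T
\qquad \text{and} \qquad
T \nvdash \neg G_T

% Second incompleteness theorem: T cannot prove its own consistency,
% where Con(T) formalizes "no contradiction is derivable in T".
T \nvdash \mathrm{Con}(T)
```

The second theorem is exactly the point restated in footnote (1): if T did prove Con(T), T would in fact be inconsistent.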

Von Foerster, one of the main proponents of systemic thinking in the last century, summarized this situation as follows: “These limits of decidability, the limits of knowing, these connections between Wittgenstein, Russell and systems theory, or the theory of finite-state machines, were very gripping for me; the possibility to see a finiteness, a fundamental unanalyzability, an unknowability in these many things, of which one formerly believed that if only one were patient one would be able to solve them; if only one invented a few more tricks, one would be able to solve them. What is fascinating is the unreachability of an answer to a large class of problems” (von Foerster/Bröcker: 178, transl: Anger-Diaz for the unpublished English version).

Of course, this structural uncertainty also holds for medicine. That is, within a given approach the truth of a certain argument cannot be proved beyond doubt, and there are always blind spots. Very often, however, there is an illusion of completeness. Some factors which contribute to this illusion are:

  1. mistakes in logical types,
  2. metaphysical shifts,
  3. ignoring certain areas,
  4. the use of explanatory principles.

Mistakes in logical types arise when signs, symptoms, diagnosis, and therapy are mixed up, as in osteoporosis (chap. 4.6.c). Much is provable that is not necessarily true, and many things are true but not necessarily provable.

A metaphysical shift occurs when things are first described as they are perceived, and it is then concluded that they have to be that way. It is a shift from description to prescription, already seen with reference values (chap. 2.6) and further described for neuroscience (chap. 4.6.b).

The third strategy has also been called ‘ignoring the incomprehensible’ (Simon, 1995: 32, my translation). It is often used in complex situations, and the cases described by Oliver Sacks (chap. 2.2) illustrate how remarkable observations that challenge normal medical logic are simply ignored. It is also a normal phenomenon of reductionist research. The concentration on a few parameters yields stable results by excluding unpredictable behaviour. In such a setting nothing seems to be missing and the resulting proofs are true. They are, however, incomplete.

In cases where inexplicable phenomena can neither be overlooked nor eliminated through the experimental setting, the use of explanatory principles is quite common. The expression goes back to Bateson, who wrote so-called metalogues: dialogues that express their meaning through both form and content. The most famous is his metalogue What is an instinct? (Bateson, 1972: 38-58), a discussion between father and daughter about some central issues of scientific epistemology.

It starts:

Daughter: Daddy, what is an instinct?
Father: An instinct, my dear, is an explanatory principle.
D: But what does it explain?
F: Anything – almost anything at all. Anything you want to explain.
D: Don’t be silly. It doesn’t explain gravity.
F: No. But this is because nobody wants ‘instinct’ to explain gravity. If they did, it would explain it. We could simply say that the moon has an instinct whose strength varies inversely as the square of the distance…
D: But that’s nonsense, Daddy.
F: Yes surely. But it was you who mentioned instinct not I.
[…]
F: Well, you know what ‘hypotheses’ are. Any statement linking together two descriptive statements is a hypothesis.
D: Daddy, is an explanatory principle the same thing as a hypothesis?
F: Nearly, but not quite. You see, an hypothesis tries to explain some particular something but an explanatory principle – like ‘gravity’ or ‘instinct’ – really explains nothing. It’s a sort of conventional agreement between scientists to stop trying to explain things at a certain point.

The use of explanatory principles is very common. It consists in the use of words which seem to explain a certain fact, but closer analysis always reveals that such words actually obscure a lack of understanding. What Bateson showed in investigating the term instinct is that when one tries to hold on to a fixed meaning of such an explanatory principle, things become more and more confusing, as already shown for ‘mind’ (chap. 2.3) and ‘placebo’ (chap. 2.4).

It is not new that the use of such words is a major source of unscientific thinking. Bernard, who can hardly be suspected of muddled thinking, stated 150 years ago “that we must always cling to phenomena and see in words only expressions empty of meaning, if the phenomena they should represent are not definite, or if they are absent” (Bernard: 188), and “we must learn that the words we use express phenomena whose cause we do not know are nothing in themselves; and that the moment we grant them any value in criticism or discussion, we abandon experience and fall into scholasticism. In discussing or explaining phenomena, we must be very careful never to abandon observation or put a word in place of a fact” (Bernard: 187).

Applying this strict thinking to medicine, he concludes that “we should see that the words, fever, inflammation, and the names of diseases in general have no meaning at all in themselves” (Bernard: 188).


(1) “the absence of contradictions in P is not provable in P, on the condition that P really is without contradiction (if this is not the case, every statement is provable, of course)” (cited in Guerrerio: 52, my translation).
