Georg Ivanovas From Autism to Humanism - systems theory in medicine
2.1 The medical paradigm
Reasoning based on unrelated findings makes it possible to apply a simple concept of cause and effect. However, as soon as findings are embedded in a complex pattern, the causal approach has disadvantages.
The following diagram shows some causal interactions as they are silently taken for granted in medical research and practice (after Rosslenbroich 2001):
The diagrams a-c are normal causal chains. The prototype for diagram a is: ‘For want of a nail a shoe was lost; for want of a shoe a horse was lost; for want of a horse a rider was lost; for want of a rider a battalion was lost; for want of a battalion a battle was lost; for want of a victory a kingdom was lost, all for want of a nail’ (Hanson 1972: 50). Such chains are well known in medicine. They are the rule in the description of diseases and consist of discrete physiological and pathological links that can be measured, defined and proved. But it must not be forgotten that “experiments are designed to be as chain-like as possible” (Hanson 1972: 67). They are true only in the experimental setting or in retrospect. Rarely is a prognosis for the kingdom (health) possible from a nail (a bacterium or an altered normal value). Therefore, mostly diagrams b and c are applied:
In the winter of 2003/2004 about 50,000 people in the UK were expected to die of the consequences of cold weather, over 2,500 in one week, more than in Russia. The deaths were mostly due to heart or breathing problems (Griffith 2003; Wilkinson et al 2004).
We have here a combination of diagram c:
and diagram b:
Protagonists of plain linear thinking (diagram a) might argue that colds and pneumonias are caused by viruses or bacteria and that the deaths are unrelated to the weather. Only a few cases, like a pensioner couple that froze to death because they had run out of gas, could be seen as caused by cold. But this would be an extreme autistic point of view.
To avoid such consequences of cold weather different prophylactic and therapeutic measures could be proposed:
However, arguing that way, one is again caught in undisciplined thinking. Furthermore, all this does not explain why mortality is higher in the UK, with its mild winters, than in all other European countries, Scandinavia included (Wilkinson et al 2004).
As a first hypothesis, the higher mortality in the UK might be attributed to a lack of adaptation to cold weather. This involves recursive processes (diagram d), which are normally not used in medical modelling. Recursive processes will be demonstrated formally later (chap. 4.2), as will their implications for health in general and for deaths due to cold in particular (chap. 6.6). For the investigation of causal relations it is sufficient to state that this type poses certain difficulties.
A simple example of a recursive process: taking a cold shower in the morning does not aim to lower a body temperature that was elevated while lying in the warm bed (it would not achieve this anyway). It is a stimulus that, through recursive effects, raises the temperature of the body and/or the skin (Schnitzer et al: 63-66). The problem with recursive processes is that they are nonlinear (doubling the time of the shower does not double the effect) and non-trivial (chap. 4.5). What is good for one person might be harmful for someone else, and even the same person will react differently at different times.
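The nonlinearity of such a feedback process can be sketched in a few lines. The model below is purely hypothetical, invented only for illustration: a cold stimulus lowers skin temperature while the organism counter-regulates in proportion to the deviation from a set point. None of the parameters are physiological data.

```python
# A minimal sketch of a recursive (feedback) process. All names and
# parameters are illustrative assumptions, not physiological values.

def skin_temp_after_shower(duration_min, temp0=33.0, set_point=33.0,
                           cold_drive=1.0, feedback_gain=0.5,
                           steps_per_min=10):
    """Skin temperature after a cold shower of the given duration (minutes)."""
    temp = temp0
    dt = 1.0 / steps_per_min
    for _ in range(int(duration_min * steps_per_min)):
        cooling = -cold_drive                             # external cold stimulus
        regulation = feedback_gain * (set_point - temp)   # recursive counter-reaction
        temp += (cooling + regulation) * dt
    return temp

effect_1 = 33.0 - skin_temp_after_shower(1)   # temperature drop after 1 minute
effect_2 = 33.0 - skin_temp_after_shower(2)   # temperature drop after 2 minutes
# Because of the feedback, doubling the duration does not double the effect:
print(effect_1, effect_2, effect_2 / effect_1)
```

The counter-regulation makes the response approach a plateau, so the second minute of showering adds less effect than the first; removing the feedback term (`feedback_gain=0`) would restore the strictly linear, chain-like behaviour of diagram a.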
150 years ago Bernard solved the problem by defining a milieu intérieur, which is today also called an autopoietic organisation (chap. 4.8). However, experimental settings mostly exclude recursive processes from the frame of observation. Actually, the absence of such feedback mechanisms is essential for analytical studies (Ashby, 1960: 38). That is, circular processes are made linear and a causal chain is created, such that trivial concepts of cause and effect become applicable.
Only rarely is this procedure absolutely impossible. One example is the administration of oxygen to asthmatics. Giving oxygen in an asthmatic state might be beneficial or harmful: beneficial, because oxygen is needed; harmful, because the breathing centre has adapted to low oxygen levels, so that the administration of oxygen leads to reduced breathing. The results of such a therapy are confusing and, therefore, controversially discussed (Bateman/Leach 1998, Chien et al 2000, Jain/Corbridge 2001, Fujimoto et al 2002).
A further difficulty of causal concepts is the mostly overlooked fact that a cause is not a measurable item. It is a hypothesis, a ‘statement linking together two descriptive statements’ (Bateson, 1981: 39). Causal relations are always a concept of an observer, according to his interests and theories. “Causation is of the observer, not of the observed” (von Foerster, 1995: xv). Therefore, “causes are certainly connected with effects; but this is because our theories connect them, not because the world is held together by cosmic glue” (Hanson 1972: 64). Or in other words: “There are as many causes of x as there are explanations of x” (Hanson 1972: 54). That is, causal attributions are “theory-loaded from beginning to end” (Hanson 1972: 54). Bernard was already very critical of this procedure: “…the question ‘why’…is really absurd, because it necessarily involves a naïve or ridiculous answer. So we had better recognize that we do not know; and that the limits of our knowledge are precisely here” (Bernard: 80).
Reductionist science ‘solves’ the problem by creating a stable frame under which an observation becomes reliable for a certain observer. As all other relations are excluded, the effect of one intervention into a system can be observed and described in a measurable (digital) way. But what is the cause in such a setting? Is it the specific intervention, or is it the setting that holds the other factors stable? The behaviour of the scientist could be seen as the cause as well. If someone who can swim is bound and thrown into the water, what is the cause of his drowning?
“In nature, unlike the laboratory, physical conditions are rarely held constant whilst certain factors are allowed to vary for the benefit of a well-placed observer…..Suppose that conditions in nature were held constant. The chain analogy would still be artificial, since it would not indicate how the explanation of events came about nor in what this explanation consisted” (Hanson 1972: 68).
Causal attributions are highly artificial. Moreover, they are too often based on obviousness (1) or other silent assumptions. One is that the organism or human will react similarly under changing conditions, or that things will continue to happen as observed, for example that a beta blocker will induce the same changes every day. This theory is called inductionism and represents one of the foundations of current medical science. The problem of inductionism has been discussed extensively over the last decades (Vickers 2006); Popper (1972) has been its main critic. It shall be demonstrated by an example of Russell’s: a chicken observes that every morning the farmer brings food. It learns to rely on this process. There are even days when the farmer brings more food (it must be shortly before a holiday). But one fine day the farmer comes, brings no food, and cuts off the chicken’s head (Deutsch 2000: 64). That is, not understanding the (semantic) context of an observation gives rise to numerous surprises, which in medicine are sometimes called adverse reactions type B (chap. 4.1).
Due to its limitations, the concept of cause underwent many changes over the centuries, and modern epidemiologists have become quite cautious about it (Lawlor et al 2004).
A short overview reads like this:
“David Hume refers to causation as --ie, the notion that it is the causal relation which connects entities of this world to events they produce. A cause may be called necessary when it must always precede an effect, and without it, there would be no effect; a cause is deemed sufficient if it inevitably produces disease. An individual cause can be necessary, sufficient, neither, or both. Furthermore, causality may be deterministic, an exceptionless connection between events, or probabilistic, providing only a probability value for the occurrence of an event. In medicine, specific criteria have been suggested and reviewed for their adequacy as a guide to when a causal relation has been demonstrated. These include: a dose-response effect, typically considered a major criterion of causality, which may not be met or may be due to confounding factors; consistency in replication in alternative settings or methods; strength of association; temporal relation, the cause should precede the effect; biological plausibility, or mechanism; and compatibility with existing knowledge” (Holberg/Halonen 2003).
In short: causality can be defined in two main ways. One possibility is to say ‘if A had not occurred, then B would not have occurred’, or that there is a chain of causally dependent events linking A with B. This is true for laboratory research, where factors can be held stable, but it is not the reality of general medicine. Therefore a probabilistic causality became necessary. “The basic idea of the probabilistic approaches to causation is that a cause is an event A, the occurrence of which makes the occurrence of another event, B, more likely than if A had not occurred” (Hulswit 2004).
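The probabilistic definition quoted from Hulswit can be made concrete with a small calculation. The counts below are invented solely for illustration (A standing for a cold spell, B for a respiratory death); they are not data from the studies cited above.

```python
# A minimal sketch of probabilistic causation: A 'causes' B when
# P(B | A) > P(B | not A). All counts are invented for illustration.
records = (
    [("A", "B")] * 30 + [("A", "no B")] * 70 +       # 100 observations with A
    [("no A", "B")] * 10 + [("no A", "no B")] * 90   # 100 observations without A
)

def p_b_given(a_value):
    """Relative frequency of B among records with the given A-value."""
    subset = [r for r in records if r[0] == a_value]
    return sum(1 for r in subset if r[1] == "B") / len(subset)

p_given_a = p_b_given("A")         # 0.3
p_given_not_a = p_b_given("no A")  # 0.1
# A raises the probability of B, so A counts as a probabilistic cause of B:
print(p_given_a > p_given_not_a)
```

Note that this criterion never says which individual case of B was produced by A; it only compares frequencies across the two groups, which is exactly why it fits epidemiology rather than the single patient.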
Today causality is seen as given by
- experimental circumstances (Francis Bacon, 17th century),
- often repeated observation (David Hume, 18th century),
- comparison (John Stuart Mill, 19th century),
- randomising (Ronald Fisher, 20th century).
Translated into medical practice, this means that the efficacy of a therapy can only be proved under the following conditions:
- a study (experimental circumstances),
- a cohort (repeated observation of many patients),
- a control group (comparison),
- an accidental assignment to a class (randomisation)
(Kienle et al 2003).
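The last of these conditions, accidental assignment, can be sketched in a few lines: patients are allocated to treatment or control by chance alone, so that on average the two groups differ only in the intervention. The patient identifiers and the fixed seed are assumptions made only to keep the sketch reproducible.

```python
# A minimal sketch of randomisation (accidental assignment to a class).
# Patient IDs are invented; the seed is fixed only for reproducibility.
import random

patients = [f"patient_{i:02d}" for i in range(1, 21)]

rng = random.Random(42)
shuffled = patients[:]        # copy, so the original list stays intact
rng.shuffle(shuffled)

treatment = shuffled[:10]     # experimental group (receives the therapy)
control = shuffled[10:]       # comparison group

print(len(treatment), len(control))
```

The point of the chance mechanism is precisely the exclusion of the observer's theories from the allocation: no property of the patient, known or unknown, decides the group, which is what allows the comparison to be read causally.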
All this is well known and the very basis of modern medicine. The epistemological difficulty is the following:
Probabilities are probable values that can be located on a straight line of probabilities (Günther, 1976: 264-266).
Instead of a radical opposition between positive and negative there is a gradual transition from one to the other. The line describes a span of: ‘this is false – this is more false than true – this is more true than false – this is true’; or better: ‘this is known as false – this is known as more false than true – this is known as more true than false – this is known to be true’ (Günther, 1979: 36).
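Günther's verbal scale can be rendered as a mapping from a value between 0 and 1 to one of the four degrees. The cut-off points used below are an assumption made purely for illustration; Günther himself gives no numerical boundaries.

```python
# A minimal sketch of the gradual scale of probability values.
# The cut-off points are illustrative assumptions, not Günther's.
def degree(p):
    """Map a probability value onto one of the four verbal degrees."""
    if not 0 <= p <= 1:
        raise ValueError("a probability lies between 0 and 1")
    if p == 0:
        return "known as false"
    if p < 0.5:
        return "known as more false than true"
    if p < 1:
        return "known as more true than false"
    return "known to be true"

print(degree(0.0), degree(0.2), degree(0.8), degree(1.0))
```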
However, probabilistic logic contradicts the absolute fundament of Aristotelian logic, the Trinitarian axiom. This axiom consists of the law of identity, the law of forbidden contradiction and the law of the excluded third, the tertium non datur (Günther, 1976: 34). It says that something is or is not, and a third is not allowed. Some maintain that the laws of probability are only an extension of Aristotelian logic to conditions of uncertainty (Jaynes/Bretthorst). But this is not the case: this point of view mixes up two issues, the condition of the observed and the condition of the observer.
The additional values between 0 and 1 have no real relation to objects (Günther, 1979: 184-185). The probabilities reflect the epistemological condition of the observer, not any reality. For example, “the census bureau will tell you that the typical American family has 2.1 children, but there are no families (we hope) that precisely match this mean” (Newman/Weissman 2006). Likewise, a patient tested HIV positive has a certain probability of being infected (between 50% and 99%, according to risk patterns; chap. 2.7). But the patient is either infected with the HIV virus or not. It is the observer who assigns a probabilistic risk, as he has no other option.
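How the observer's probability can range from roughly 50% to 99% for the very same positive test result can be sketched with Bayes' theorem. The sensitivity, specificity and prevalence figures below are assumptions chosen only to reproduce the span mentioned above; they are not real test characteristics.

```python
# A minimal sketch of why a positive test yields only a probability of
# infection, depending on the risk pattern (prevalence). All numbers are
# illustrative assumptions, not real HIV test data.
def post_test_probability(prevalence, sensitivity=0.999, specificity=0.999):
    """Bayes' theorem: P(infected | positive test)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

low_risk = post_test_probability(0.001)   # low-prevalence group: about 0.50
high_risk = post_test_probability(0.1)    # high-prevalence group: about 0.99
print(round(low_risk, 2), round(high_risk, 2))
```

The test result is identical in both cases; only the observer's prior knowledge of the risk pattern differs. The patient himself, meanwhile, is simply infected or not, which is exactly the gap between the condition of the observer and the condition of the observed.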
(1) Vollmer (1994: 107) described the process of how obviousness emerges as follows:
- Something is called self-evident, obvious or clear (intuition).
- Someone is cited saying the same (authority).
- It is referred to as consensus in this question (majority).
- It is repeated until it is believed (adaptation).