Evidence-Based Medicine:
Neither Good Evidence nor Good Medicine

by Steve Hickey, PhD and Hilary Roberts, PhD

Evidence-based medicine (EBM) is the practice of treating individual patients based on the outcomes of huge medical trials. It is, currently, the self-proclaimed gold standard for medical decision-making, and yet it is increasingly unpopular with clinicians. Their reservations reflect an intuitive understanding that something is wrong with its methodology. They are right to think this, for EBM breaks the laws of so many disciplines that it should not even be considered scientific. Indeed, from the viewpoint of a rational patient, the whole edifice is crumbling.

The assumption that EBM is good science is unsound from the start. Decision science and cybernetics (the science of communication and control) highlight the disturbing consequences. EBM fosters marginally effective treatments, based on population averages rather than individual need. Its mega-trials are incapable of finding the causes of disease, even for the most diligent medical researchers, yet they swallow up research funds. Worse, EBM cannot avoid exposing patients to health risks. It is time for medical practitioners to discard EBM’s tarnished gold standard, reclaim their clinical autonomy, and provide individualized treatments to patients.

The key element in a truly scientific medicine would be a rational patient. This means that those who set a course of treatment would base their decision-making on the expected risks and benefits of treatment to the individual concerned. If you are sick, you want a treatment that will work for you, personally. Given the relevant information, a rational patient will choose the treatment that will be most beneficial. Of course, the patient does not act in isolation but works with a competent physician, who is there to help. The rational decision-making unit then becomes the doctor-patient collaboration.

The idea of a rational doctor-patient collaboration is powerful. Its main consideration is the benefit of the individual patient. However, EBM statistics are not good at helping individual patients – rather, they relate to groups and populations.

The Practice of Medicine

Nobody likes statistics. OK, that might be putting it a bit strongly but, with obvious exceptions (statisticians and mathematical types), many people do not feel comfortable with statistical data. So, if you feel inclined to skip this article in favor of something more agreeable, please wait a minute. For although we are going to talk about statistics, our ultimate aim is to make medicine simpler to understand and more helpful to each individual patient.

The current approach to medicine is “evidence-based.” This sounds obvious but, in practice, it means relying on a few large-scale studies and statistical techniques to choose the treatment for each patient. Practitioners of EBM incorrectly call this process using the “best evidence.” In order to restore the authority for decision-making to individual doctors and patients, we need to challenge this orthodoxy, which is no easy task. Remember Linus Pauling: despite being a scientific genius, he was condemned just for suggesting that vitamin C could be a valuable therapeutic agent.

Historically, physicians, surgeons and scientists with the courage to go against prevailing ideas have produced medical breakthroughs. Examples include William Harvey’s theory of blood circulation (1628), which paved the way for modern techniques such as cardiopulmonary bypass machines; James Lind’s demonstration that citrus fruits prevent scurvy (1747); John Snow’s work on the transmission of cholera (1849); and Alexander Fleming’s discovery of penicillin (1928). Not one of these innovators used EBM. Rather, they followed the scientific method, using small, repeatable experiments to test their ideas. Sadly, practitioners of modern EBM have abandoned the traditional experimental method in favor of large group statistics.

What Use are Population Statistics?

Over the last twenty years, medical researchers have conducted ever larger trials. It is common to find experiments with thousands of subjects, spread over multiple research centers. The investigators presumably believe their trials are effective in furthering medical research. Unfortunately, despite the cost and effort that go into them, they do not help patients. According to fundamental principles from decision science and cybernetics, large-scale clinical trials can hardly fail to be wasteful, to delay medical progress, and to be inapplicable to individual patients.

Much medical research relies on early twentieth-century statistical methods, developed before the advent of computers. In such studies, statistics are used to assess whether an observed difference between two groups of patients is likely to have arisen by chance. If a treatment group has taken a drug and a control group has not, researchers typically ask whether any benefit was caused by the drug or occurred by chance. The way they answer this question is to calculate the “statistical significance.” This process results in a p-value: the lower the p-value, the less likely it is that a difference this large would arise by chance alone. Thus, a p-value of 0.05 means a chance result might occur about one time in 20. Sometimes a value of less than one in a hundred (p < 0.01), or even less than one in a thousand (p < 0.001), is reported. These p-values are referred to as “highly significant” and “very highly significant” respectively.

Significant Does Not Mean Important

We need to make something clear: in the context of statistics, the term significant does not mean the same as in everyday language. Some people assume that “significant” results must be “important” or “relevant.” This is wrong: the level of significance reflects only the degree to which the groups are considered to be separate. Crucially, the significance level depends not only on the difference between the studied groups, but also on their size. So, as we increase the size of the groups, the results become more significant, even though the effect may be tiny and unimportant.

Consider two populations of people with very slightly different average blood pressures. If we take 10 people from each, we will find no significant difference between the two groups, because small groups vary considerably by chance. If we take a hundred people from each population, we get a low level of significance (p < 0.05); if we take a thousand, we find a very highly significant result. Crucially, the magnitude of the small difference in blood pressure remains the same in each case: a difference can be highly significant statistically, yet so small in practical terms as to be effectively insignificant. In a large trial, highly significant effects are often clinically irrelevant. More importantly, and contrary to popular belief, the results from large studies are less important for a rational patient than those from smaller ones.
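
To see this concretely, here is a minimal simulation in Python (with invented numbers: a true difference of just 1 mmHg between populations whose standard deviation is 15 mmHg). The difference never changes, yet the p-value collapses as the groups grow:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    for n in (10, 100, 1000, 10000):
        a = rng.normal(120.0, 15.0, n)  # population A: mean 120 mmHg
        b = rng.normal(121.0, 15.0, n)  # population B: mean 121 mmHg, a tiny true difference
        t, p = stats.ttest_ind(a, b)
        print(f"n = {n:>5} per group   observed difference = {b.mean() - a.mean():5.2f} mmHg   p = {p:.4f}")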

Large trials are powerful methods for detecting small differences. Furthermore, once researchers have conducted a pilot study, they can perform a power calculation to make sure they include enough subjects to achieve a high level of significance. Thus, researchers have come to study ever bigger groups, resulting in studies a hundred times larger than those of only a few decades ago. This implies that the effects they are seeking are minute, as larger effects (capable of offering real benefits to actual patients) could easily have been found with the smaller, old-style studies.
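
As a sketch of such a power calculation (using the standard two-sample t-test power routine from the statsmodels library; the effect sizes are purely illustrative), note how chasing a smaller effect demands a vastly larger trial:

    from statsmodels.stats.power import TTestIndPower

    solver = TTestIndPower()
    for effect_size in (0.8, 0.5, 0.2, 0.05):  # Cohen's d, from large to minute
        n = solver.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
        print(f"effect size d = {effect_size:<4}  ->  about {n:,.0f} subjects per group")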

Now, tiny differences, even if they are “very highly significant,” are nothing to boast about, so EBM researchers need to make their findings sound more impressive. They do this by using relative rather than absolute values. Suppose a drug halves your risk of developing cancer (a relative value). Although this sounds great, the reported 50% reduction may lessen your risk by just one in ten thousand: from two in ten thousand (2/10,000) to one in ten thousand (1/10,000) (absolute values). Such a small benefit is typically irrelevant, but when expressed as a relative value, it sounds important. (By analogy, buying two lottery tickets doubles your chance of winning compared to buying one; but either way, your chances are minuscule.)
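
The arithmetic, using the hypothetical figures above, makes the trick plain:

    risk_control = 2 / 10_000  # 2 cases per 10,000 without the drug
    risk_treated = 1 / 10_000  # 1 case per 10,000 with the drug

    relative_reduction = (risk_control - risk_treated) / risk_control
    absolute_reduction = risk_control - risk_treated

    print(f"Relative risk reduction: {relative_reduction:.0%}")   # 50% -- sounds impressive
    print(f"Absolute risk reduction: {absolute_reduction:.2%}")   # 0.01% -- one person in 10,000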

The Ecological Fallacy

There is a further problem with the dangerous assertion implicit in EBM that large-scale studies are the best evidence for decisions concerning individual patients. This claim is an example of the ecological fallacy, which wrongly uses group statistics to make predictions about individuals. There is no way round this; even in the ideal practice of medicine, EBM should not be applied to individual patients. In other words, EBM is of little direct clinical use. Moreover, as a rule, the larger the group studied, the less useful will be the results. A rational patient would ignore the results of most EBM trials because they aren’t applicable.

To explain this, suppose we measured the foot size of every person in New York and calculated the mean value (total foot size/number of people). Using this information, the government proposes to give everyone a pair of average-sized shoes. Clearly, this would be unwise: the shoes would be either too big or too small for most people. Individual responses to medical treatments vary at least as much as people’s shoe sizes, yet despite this, EBM relies upon aggregated data. This is technically wrong; group statistics cannot predict an individual’s response to treatment.
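
A toy simulation (with an invented distribution of shoe sizes) shows how badly the average serves individuals:

    import numpy as np

    rng = np.random.default_rng(0)
    foot_sizes = rng.normal(9.0, 1.5, 1_000_000)       # a made-up city: mean size 9, SD 1.5
    average_shoe = foot_sizes.mean()

    fits = np.abs(foot_sizes - average_shoe) < 0.25    # within a quarter-size of the issued shoe
    print(f"Average shoe size: {average_shoe:.2f}")
    print(f"People the average shoe actually fits: {fits.mean():.0%}")  # roughly one in eight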

EBM Selects Evidence

Another problem with EBM’s approach of trying to use only the “best evidence” is that it cuts down the amount of information available to doctors and patients making important treatment decisions. The evidence allowed in EBM consists of selected large-scale trials and meta-analyses, which attempt to make a conclusion more significant by aggregating results from widely differing groups. This constitutes a tiny percentage of the total evidence; meta-analysis rejects the vast majority of available data because it does not meet EBM’s strict criteria. This conflicts with yet another scientific principle: do not select your data. Rather humorously in this context, a science student who selected only the “best” data points when graphing experimental results would be penalized and told not to do it again.

[Figure: data-selection example. One of the first lessons for science students is not to select the best evidence; all data must be considered. The fitted lines show how using just the “best” data gives a better, though misleading, fit.]
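
The lesson can be reproduced in a few lines of Python (synthetic data): fit a line to all the points, then “improve” the fit by keeping only the points that already lie near the line.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 30)
    y = 2.0 * x + rng.normal(0, 4.0, x.size)          # genuinely linear data with honest noise

    def r_squared(xx, yy):
        m, c = np.polyfit(xx, yy, 1)
        ss_res = ((yy - (m * xx + c)) ** 2).sum()
        ss_tot = ((yy - yy.mean()) ** 2).sum()
        return 1 - ss_res / ss_tot

    # Cherry-pick the 30% of points closest to the full-data fit, then refit.
    m, c = np.polyfit(x, y, 1)
    residuals = np.abs(y - (m * x + c))
    keep = residuals < np.percentile(residuals, 30)

    print(f"R^2, all data:         {r_squared(x, y):.2f}")
    print(f"R^2, 'best' data only: {r_squared(x[keep], y[keep]):.2f}")   # misleadingly better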

More EBM Problems

The problems with EBM continue. It breaks other fundamental laws, this time from the field of cybernetics, which is the study of systems control and communication. The human body is a biological system and, when something goes wrong, a medical practitioner attempts to control it. To take an example, if a person has a high temperature, the doctor could suggest a cold compress; this might work if the person was hot through over-exertion or too many clothes. Alternatively, the doctor may recommend an antipyretic, such as aspirin. However, if the patient has an infection and a raging fever, physical cooling or symptomatic treatment might not work, as it would not quell the infection.

In the above case, a doctor who overlooked the possibility of infection has not applied the appropriate information to treat the condition. This illustrates a cybernetic concept known as requisite variety, first proposed by the English psychiatrist Dr. W. Ross Ashby. In modern language, Ashby’s law of requisite variety means that the solution to a problem (such as a medical diagnosis) must contain at least as much relevant information (variety) as the problem itself. Thus, the solution to a complex problem will require more information than the solution to a straightforward one. Ashby’s idea was so powerful that it became known as the first law of cybernetics. Ashby used the word variety to refer to information or, as an EBM practitioner might say, evidence.
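
Here is a toy illustration of the law (the fever example above, in schematic form; the mapping is invented): if the doctor’s repertoire contains less variety than the set of possible causes, some cause must go unregulated.

    # Each distinct cause of fever needs its own correct response.
    causes = {"over-exertion": "rest", "excess clothing": "cooling", "infection": "antibiotic"}
    repertoire = {"rest", "cooling"}   # a doctor who never considers infection

    treatable   = {cause for cause, response in causes.items() if response in repertoire}
    unregulated = set(causes) - treatable

    print(f"Handled correctly: {sorted(treatable)}")
    print(f"Beyond the doctor's variety: {sorted(unregulated)}")   # infection slips through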

As we have mentioned, EBM restricts variety to what it considers the “best evidence.” However, if doctors were to apply the same statistically-based treatment to all patients with a particular condition, they would break the laws of both cybernetics and statistics. Consequently, in many cases, the treatment would be expected to fail, as the doctors would not have enough information to make an accurate prediction. Population statistics do not capture the information needed to provide a well-fitting pair of shoes, let alone to treat a complex and particular patient. As the ancient philosopher Epicurus explained, you need to consider all the data.

Restricting our information to the “best evidence” would be a mistake, but it is equally wrong to go to the other extreme and throw all the information we have at a problem. Just as Goldilocks in the fairy tale wanted her porridge “neither too hot, nor too cold, but just right,” doctors must select just the right information to diagnose and treat an illness. The problem of too much information is described by the quaintly named curse of dimensionality, discussed further below.

A doctor who arrives at a correct diagnosis and treatment in an efficient manner is called, in cybernetic terms, a good regulator. According to Roger Conant and Ross Ashby, every good regulator of a system must be a model of that system. Good regulators achieve their goal in the simplest way possible. To do so, the diagnostic process must model the systems of the body, which is why doctors undergo years of training in all aspects of medical science. In addition, each patient must be treated as an individual. EBM’s group statistics are irrelevant here: large-scale clinical trials do not model an individual patient and his or her condition; they model a population, albeit somewhat crudely. They are thus not good regulators. Once again, a rational patient would reject EBM as a poor method for finding an effective treatment for an illness.

Real Science Means Verification

As we have implied, science is a process of induction and uses experiments to test ideas. From a scientific perspective, therefore, we trust but verify the findings of other researchers. The gold standard in science is called Solomonoff induction, named after Ray Solomonoff, a cybernetics researcher. The power of a scientific result is that you can easily repeat the experiment and check it. If it cannot be repeated, for whatever reason (because it is untestable, too difficult, or wrong), a scientific result is weak and unreliable. Unfortunately, EBM’s emphasis on large studies makes replication difficult, expensive, and time-consuming. We should be suspicious of large studies because they are all but impossible to repeat and are therefore unreliable. EBM asks us to trust its results but, to all intents and purposes, it precludes replication. After all, how many doctors have $40 million and five years available to repeat a large clinical trial? Thus, EBM avoids refutation, which is a critical part of the scientific method.

In their models and explanations, scientists aim for simplicity. By contrast, EBM generates large numbers of risk factors and multivariate explanations, which makes choosing treatments difficult. For example, if doctors believe a disease is caused by salt, cholesterol, junk food, lack of exercise, genetic factors, and so on, the treatment plan will be complex. This multifactorial approach is also invalid, as it leads to the curse of dimensionality. Surprisingly, the more risk factors you use, the less chance you have of getting a solution. This finding comes directly from the field of pattern recognition, where overly complex solutions are consistently found to fail. Too many risk factors mean that noise and error in the model will overwhelm the genuine information, leading to false predictions or diagnoses. Once again, a rational patient would reject EBM, because it is inherently unscientific and impractical.
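
A small synthetic experiment (ordinary least squares on made-up data) shows the effect: one genuine “risk factor” plus an increasing number of irrelevant ones, fitted on 50 “patients” and tested on fresh cases. More factors, worse predictions:

    import numpy as np

    rng = np.random.default_rng(7)
    n_train, n_test = 50, 1000

    def test_error(n_noise):
        s_tr = rng.normal(size=(n_train, 1))           # the one genuine risk factor
        s_te = rng.normal(size=(n_test, 1))
        X_tr = np.hstack([s_tr, rng.normal(size=(n_train, n_noise))])  # plus noise factors
        X_te = np.hstack([s_te, rng.normal(size=(n_test, n_noise))])
        y_tr = s_tr[:, 0] + rng.normal(0, 0.5, n_train)
        y_te = s_te[:, 0] + rng.normal(0, 0.5, n_test)
        coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
        return np.mean((X_te @ coef - y_te) ** 2)      # error on unseen cases

    for n_noise in (0, 10, 30, 45):
        print(f"irrelevant factors: {n_noise:>2}   test error: {test_error(n_noise):.2f}")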

Medicine for People, Not Statisticians

Diagnosing medical conditions is challenging, because we are each biochemically individual. As explained by an originator of this concept, nutritional pioneer Dr. Roger Williams, “Nutrition is for real people. Statistical humans are of little interest.” Doctors must encompass enough knowledge and therapeutic variety to match the biological diversity within their population of patients. The process of classifying a particular person’s symptoms requires a different kind of statistics (Bayesian), as well as pattern recognition. These have the ability to deal with individual uniqueness.
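
As a minimal sketch of the Bayesian approach (with purely illustrative numbers for a hypothetical diagnostic test): the same positive result means something very different for patients with different individual risk profiles.

    def posterior(prior, sensitivity, specificity):
        """P(disease | positive test), by Bayes' theorem."""
        p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
        return sensitivity * prior / p_positive

    sensitivity, specificity = 0.90, 0.95
    for prior in (0.001, 0.05, 0.30):   # three individuals with different prior risk
        print(f"prior risk {prior:.1%}  ->  P(disease | positive) = {posterior(prior, sensitivity, specificity):.1%}")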

The basic approach of medicine must be to treat patients as unique individuals, with distinct problems. This extends to biochemistry and genetics. An effective and scientific form of medicine would apply pattern recognition, rather than regular statistics. It would thus meet the requirements of being a good regulator; in other words, it would be an effective approach to the prevention and treatment of disease. It would also avoid traps, such as the ecological fallacy.

Personalized, ecological, and nutritional (orthomolecular) medicines are converging on a truly scientific approach. We are entering a new understanding of medical science, according to which the holistic approach is directly supported by systems science. Orthomolecular medicine, far from being marginalized as “alternative,” may soon become recognized as the ultimate rational medical methodology. That is more than can be said for EBM.

Every Good Doctor Must Represent the Patient:
The Malfunction of Evidence-Based Medicine

by Daniel L. Scholten

The Orthomolecular Medicine News Service (OMNS) released this paper on cybernetics in medicine on January 3.

As part of their recent OMNS critique of the practice of “evidence-based” medicine (EBM), http://orthomolecular.org/resources/omns/v07n15.shtml (1), researchers Steve Hickey and Hilary Roberts argue that the legalistic requirements of EBM, such as its insistence on treatments that have met the “gold standard” of “well-designed, large-scale, double-blind, randomized, placebo-controlled, clinical trials”, actually prevent doctors from effectively diagnosing and treating their patients. In this article, I would like to elaborate on this part of their argument, which they warrant with a piece of cybernetic common sense (2) known variously as the “Good-Regulator” theorem (GRT), or “Conant and Ashby” theorem, after the researchers who published its original proof. (3)

No need to worry about the technical jargon. If you can read these words then you have already understood something important about this result from the system sciences, even if you don’t call it that. (4) Likewise, if you have ever used a street map to navigate a new city, a book index to browse the contents of a book, or perhaps an x-ray image or lab report to diagnose a patient’s ailment, then you are already quite comfortable handling at least the gist of this conceptual power-tool, which can be paraphrased as follows: every good solution to a problem must be a representation of that problem. (5)

What’s It All about?

Here are several other ways to paraphrase the theorem:

  • Every good regulator of a system must be a model of that system.
  • Every good key must be a model of the lock it opens. (6)
  • Control implies resemblance.
  • Identical situations imply identical responses.

The basic idea of the theorem can be illustrated with simple thought experiments. (7) Just imagine trying to order a meal in a new restaurant without using a menu, or assemble a piece of furniture without an instruction pamphlet, or diagnose diabetes without a blood-sugar lab report. Of course, you could probably muddle your way through any number of situations with roughly the same basic set of skills that was available to our preliterate ancestors, but the unassailable fact of the matter is that maps, menus, x-ray images, and medical lab reports are potent performance enhancers and without them we risk getting lost, going hungry, or medically misdiagnosing. (8,9)

Why is There a Problem?

The truth of this can be easily obscured. One problem is that some representations are clearly better than others. At the extreme we have outdated maps, poorly written instruction pamphlets and menus with mouthwatering images that turn out to represent bland, salty, or greasy food. Another problem is that representations – from street-maps to MRI scans – can be costly to prepare. Furthermore, the expertise required to prepare or use them is costly to acquire, as measured by the years, dollars, and brain-sweat it takes to complete one’s formal education. The upshot here is that those paying the costs of such representations might reasonably wonder whether those costs outweigh the benefits. Perhaps there is a cheaper way to enhance the performance of our system regulators, to find “good solutions” to our problems, and “good keys” to fit the locks we wish to open.

One common work-around is to rely on a memorized “mental model.” Although this approach works fine for simple tasks, such as a quick stop at the grocery store to pick up extra milk, as soon as a task becomes even moderately complex, the limitations of working-memory (10) quickly render this approach useless, little better than using no representation at all. Another approach is to simply avoid the sorts of complex behavior that require us to use external representations. In the end, we must all rely heavily on this approach, if for no other reason than because the cost, time and effort required to learn how to use, say, ultrasound imaging equipment, necessarily blocks one from simultaneously learning to use, say, actuarial modeling techniques, or perhaps the Hubble Space telescope. To choose is to renounce. But this approach also has its limits and the total avoidance of such complex behaviors – perhaps due to illiteracy, innumeracy or maybe a deliberate decision to return to a preliterate hunter-gatherer way of life – is just a different sort of burden.

Yet a third way to dodge expensive models or modeling expertise is to look for “multipurpose” representations; for instance, generalized maps, menus, and user-guides, that can be reused for many different cities, restaurants, and types of equipment. (11) According to Hickey and Roberts, this third approach is actually the one that EBM advocates.

One Key Cannot Fit All Locks

They illustrate their argument with the above-mentioned lock-and-key paraphrase of the Good-Regulator theorem. To follow it, we start by making the analogy that a given patient’s symptoms are a “lock” the doctor hopes to “open.” It follows then, by the Good-Regulator theorem, that the doctor’s diagnostic and therapeutic behaviors must “model” (represent) these symptoms. A critical qualification, however, is that the doctor must model these symptoms as they occur within the specific context of the patient’s genotypically and phenotypically “characteristic anatomy, physiology, and biochemistry.” (12)

Of course, this does not mean that the doctor must perform some outlandish Jim Carrey-esque caricature of the patient, perhaps donning the patient’s same clothing, hairstyle, speech patterns, behavioral mannerisms, etc. Rather, it means that the associations that arise between the doctor’s diagnostic and therapeutic responses and the patient’s symptoms must be characterized by the same sort of conventional reliability that holds between the splashes of color on, for example, a map of Manhattan and the real streets, parks, and buildings in the actual city of Manhattan.

If a given splash of color only occasionally represented Lincoln Center, or if it sometimes represented Central Park and sometimes, say, the South Street Seaport, you would surely be confused. Even though one could use the same splash on a map to represent two or more real-world landmarks, common sense and strong cultural conventions require each color splash to reliably represent just one particular real-world location. As established by Conant and Ashby’s Good-Regulator Theorem, a doctor’s responses must have the same sort of reliable association to a given patient’s symptoms. This reliability allows us to construe the doctor’s responses as a representation or model of the patient’s symptoms. (13)

“Evidence-based” medicine (EBM), with its insistence on treatments that have been confirmed by “well-designed, large-scale, double-blind, randomized, placebo-controlled, clinical trials” (14) will almost always cripple a doctor’s ability to model symptoms as they actually occur within the anatomically, physiologically, and biochemically specific context of a given patient. By way of analogy, we might consider a whimsically allegorical “evidence-based locksmith” (EBL) attempting to open a particular lock with the latest and greatest “Whiz-Bang EBL Master Key,” recently developed in accord with the results of a meta-analysis of hundreds of “well-designed, large-scale, double-blind, randomized, placebo-controlled clinical trials.” Those trials have determined the critical attributes of the perfectly average key, and the patently absurd claim is that the Whiz-Bang Master Key, by virtue of its perfectly average attributes, can now be used to open any particular lock.

Pretty silly, isn’t it?

Clearly such a perfectly average key would open very few locks, if any. To reason otherwise is to commit the “ecological fallacy,” which Hickey and Roberts summarize as “the assumption that a population value…can be applied to a specific individual.” (15) If one tries to shove such a key into some particular lock, twisting and pulling in an effort to force it, then that violates the Good-Regulator Theorem, which reminds us that a good key must actually fit the lock it’s supposed to open, not some other lock, and especially not some hypothetical perfectly average lock. The same goes for actual medical practice.
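
The allegory is easy to check with a toy model (invented five-pin locks): average the pin depths of many locks and count how many the “perfectly average” key opens.

    import numpy as np

    rng = np.random.default_rng(3)
    locks = rng.integers(0, 10, size=(10_000, 5))            # 10,000 locks, pin depths 0-9
    master_key = np.round(locks.mean(axis=0)).astype(int)    # the "Whiz-Bang" average key

    opened = np.all(locks == master_key, axis=1)
    print(f"Average key bitting: {master_key}")
    print(f"Locks opened: {opened.sum()} of {len(locks)}")   # almost certainly none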

EBM Stops Doctors from Effective Practice

We still need scientific research and the data it provides. Representations are potent performance enhancers: just imagine what our lives would be like without grocery lists, the periodic table of the elements, or ultrasound imaging. But however obvious and abundant the evidence for this might be, medical judgment is currently impaired by an apparent lapse of common sense. The practice of EBM may well be a consequence of the legal system and the pharmaceutical corporate bottom line. In other words, money.

But whatever the cause of such impairment, the limitations of real people, real illnesses and real doctors point to the reality that EBM is DOA. The patient is not a statistic. The treatment should not be a statistic. Every good doctor must represent the patient. Personally.

Daniel L. Scholten has a degree in mathematical sciences and over 12 years of information technology experience as a programmer, analyst, and consultant. He founded The Good-Regulator Project [http://www.goodregulatorproject.org], an independent, volunteer research effort dedicated to increasing public awareness and understanding of the crucial role played by models and representations in the regulation of complex systems.

Notes & References:

1. Hickey, Steve and Roberts, Hilary, Tarnished Gold: The Sickness of Evidence-Based Medicine, 2011, CreateSpace.

2. A more complete list of “mostly self-evident” cybernetic principles, including the Good-Regulator theorem, has been compiled by Francis Heylighen. See “Principles of Systems and Cybernetics: An Evolutionary Perspective”, available on-line at http://pespmc1.vub.ac.be/Papers/PrinciplesCybSys.pdf.

In his paper, Heylighen distinguishes between Conant and Ashby’s “Good-Regulator Theorem” and a “Law of Requisite Knowledge”, which states that “In order to adequately compensate perturbations, a control system must ‘know’ which action to select from the variety of available actions.” Note that although Heylighen distinguishes between them, he also states that these are equivalent principles.

3. Conant, Roger C. and Ashby, W. Ross, 1970, “Every Good Regulator Of A System Must Be A Model Of That System”, International Journal of Systems Science, vol. 1, No. 2, 89-97.

4. Those of us who can read sometimes take it for granted. Many don’t have this luxury. According to a recent UNESCO fact sheet, in 2009 more than 16% of the world’s adults (793 million people) were illiterate, with more than 64% of these being women. “Adult and Youth Literacy”, UIS Fact Sheet, September 2011, no. 16, The Unesco Institute for Statistics. Available online at http://www.uis.unesco.org/FactSheets/Documents/FS16-2011-Literacy-EN.pdf

5. I have argued for the plausibility of this paraphrase in Scholten, Daniel L., 2010, “Every Good Key Must Be A Model Of The Lock It Opens: The Conant And Ashby Theorem Revisited”, available on-line at http://www.goodregulatorproject.org. It is also congruent with an observation made by Herbert A. Simon: “Solving a problem means representing it so as to make the solution transparent”; Simon, Herbert A., 1981, The Sciences of the Artificial, 2nd edition, MIT Press, Cambridge, MA; as cited in Norman, Donald A., Things That Make Us Smart: Defending Human Attributes in the Age of the Machine, pg. 53, 1993, Basic Books, New York, NY.

6. Scholten, ibid.

7. Although I believe that such thought experiments are justified in the context of the present argument, their use in general should not be taken lightly. After all, as James Robert Brown notes, they have been used to refute the Copernican world view. See Brown, James Robert, 1991, The Laboratory of the Mind: Thought Experiments in the Natural Sciences, Routledge, New York, NY; page 35. See also Brown, James Robert and Fehige, Yiftach, “Thought Experiments”, The Stanford Encyclopedia of Philosophy (Fall 2011 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/fall2011/entries/thought-experiment/.

8. A critical distinction can be made between an idealized good-regulator model, which is really a dynamic entity, and its “technical specification”, or what we might call its control-model. (Scholten, Daniel L., “A Primer For The Conant And Ashby Theorem”, http://www.goodregulatorproject.org).

Another distinction to be recognized is that whereas the good-regulator model is dynamic, the control-model may be either static or dynamic.

As an example of a static control-model, consider a written recipe for roast duck, being used by an inexperienced cook to prepare an evening meal for guests. In this case, the system to be regulated consists of the various ingredients and kitchen tools to be used to create the meal, the dynamic good-regulator model is the human being doing the cooking, and the recipe is what we are calling the static control-model. The recipe is a control-model because the human being uses it, like a technical specification, to guide (control) his behavior and thus to “turn himself into” a good-regulator model.

As an example of a dynamic control-model, consider the case in which a child learns to use an idiomatic expression such as “two wrongs don’t make a right” by overhearing an adult use that expression in a conversation. In this case the system to be regulated is a particular portion of some conversation in which the child is participating, the dynamic good-regulator model is the child, and the dynamic control-model is the adult role-model. The idea here is that the adult’s behavior serves as a type of dynamic technical specification that the child then uses to control his or her own behavior in the context of the given conversation.

It is important to make these distinctions between a dynamic good-regulator model and its static or dynamic technical specification because otherwise the GRT appears to prove that the technical specification (control-model) is necessary, which is, I believe, a misreading of the theorem. The GRT only proves that the good-regulator model is necessary. On the other hand, it does appear to be an empirical fact that such technical specifications are also necessary. The thought-experiments illustrate this explicitly, although they also help us to see what our behavior looks like when we aren’t acting as good-regulator models.

(For an in-depth, authoritative analysis of behavioral modeling, see Bandura, A., Social Foundations Of Thought & Action: A Social-Cognitive Theory, Prentice-Hall, Inc., Englewood Cliffs, New Jersey)

9. Let’s recognize that one uniquely human characteristic is our astonishing capacity to simulate (in the manner of a Turing machine) the behavior of an enormous variety of much simpler and more specific machines. I have written more extensively about this in the “Three-Amibos Good-Regulator Tutorial,” available on-line at http://www.goodregulatorproject.org .

10. For a recent, accessible discussion, see Klingberg, Torkel, 2009, The Overflowing Brain: Information Overload And The Limits Of Working Memory, Oxford University Press, New York, NY.

11. I am making the assumption here that the multipurpose model is meant to apply to cities, restaurants, equipment, etc. that are not replicas of each other. Clearly there is no problem if all owners of the same brand of laptop computer use the same user-guide.

12. Hickey and Roberts, Tarnished Gold, page 43. Hickey and Roberts emphasize that it is not simply the symptoms that matter. Also important is the particular person in which those symptoms occur, where the particularities of that person have been determined by the complex interactions between that person’s genes and the environments in which those genes have been expressed over the person’s lifetime. In their discussion of this notion of “biochemical individuality”, Hickey and Roberts cite Williams, R., 1998 (1956), Biochemical Individuality: Basis for the Genetotrophic Concept, McGraw-Hill, New York.

13. In the words of Conant and Ashby “…the theorem says that the best regulator of a system is one which is a model of that system in the sense that the regulator’s actions are merely the system’s actions as seen through a mapping….” Conant and Ashby, 1970, pg. 96.

14. Hickey and Roberts refer to this ponderous, adjectival freight-train as the “EBM-mantra”; ibid, page 164.

15. Ibid, page 24. Hickey and Roberts attribute the term to Robinson, W.S., 1950, “Ecological correlations and the behavior of individuals,” American Sociological Review, 15(3), 351-357.

Rainbow Bear Returns

Recently, the mainstream media described the results of a request to put a health claim on bottled water. The manufacturers wished to claim that drinking water prevents dehydration.

Specifically, they wanted to include the statement “regular consumption of significant amounts of water can reduce the risk of development of dehydration”.

Since the word dehydration means lack of water, Rainbow Bear expected there would be no problem with this assertion. But she was wrong. The media reported:

“A European Food Safety Authority report stated that low levels of water in body tissue was a symptom of dehydration rather than a risk factor that could be controlled by drinking water.”

In other words, the European Food Safety Authority think that lack of water is not caused by a deficiency of water but is a symptom of it. Rainbow Bear thought the “scientists” involved should try the experiment themselves; if they were drinking whiskey to prevent their dehydration, it might explain their response.

Rainbow Bear could not resist telling Dr Grouch of this bizarre situation.

Rational Patients

What the medical system needs is rational patients.

Rational patients ask for the relevant information and make their own judgements. For some strange reason, some medical professionals believe that they alone can make decisions. When challenged, they appeal to patients not having the necessary medical training or education. They often claim that their decisions are uniquely important as they involve life and death. By contrast, their patients made several life-or-death decisions just driving or walking to the surgery.

When patients start demanding rational treatment that works for them, medicine will have started down the long road toward supporting good health…

Rational Medicine?

Evidence-based medicine, or EBM, is the current gold standard for clinical regulations. To its practitioners, it seems obvious that medicine should be based on the “best” evidence. Furthermore, they claim that Evidence-based Medicine is scientific and brings an old discipline up to date.

We have looked closely at EBM. Our conclusions are astounding:

  • EBM is NOT scientific: it is concerned with legal evidence, rather than scientific data
  • EBM does not conform to the scientific method
  • EBM selects its evidence and is disturbingly biased
  • EBM breaks the rules of numerous disciplines, including information theory and cybernetics
  • EBM is inadequate for selecting treatments for individual patients
  • Thus, a rational patient (or doctor) should reject EBM as useless!

We could go on. However, our new book Tarnished Gold: The Sickness of Evidence-based Medicine explains the issues fully and suggests an alternative approach.

Comments from early reviewers suggest that some people may find the book’s content disconcerting. It is disorienting to discover that “evidence-based” medicine does not make sense. However, evidence is not the same as science, data, or information. Science-based medicine would have the acronym SBM, and most people would probably assume that medicine being science-based goes without saying! Unfortunately, when it comes to EBM, the stark fact is that the emperor has no clothes.

Tarnished Gold examines the weaknesses of EBM from a variety of intellectually challenging perspectives. We suspect that readers who take the trouble to follow the arguments will conclude, as we did, that EBM is dangerously irrational.

[Image: Tarnished Gold book cover]

The book is available on Amazon and will shortly be available on Kindle.

People are not Populations

Ecological Fallacy

Evidence-based medicine uses the statistics of groups and populations as a guide to treating patients. Thus, for example, a supposedly authoritative clinical trial might claim that aspirin reduces the risk of cancer by 25%, although in reality the reduction was only 1 in 1000 (e.g. 4 people in 1000 control subjects got cancer, compared to 3 in 1000 treated subjects). If other clinical trials and maybe a meta-analysis confirm this finding, EBM practitioners consider it scientifically “proven”. People might then be recommended to take aspirin to prevent cancer.
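
From the aspirin figures above, a rational patient can compute the number needed to treat (NNT): the number of people who must take the drug for one person to benefit.

    risk_control, risk_treated = 4 / 1000, 3 / 1000

    absolute_risk_reduction = risk_control - risk_treated     # 0.001
    relative_risk_reduction = absolute_risk_reduction / risk_control
    nnt = 1 / absolute_risk_reduction

    print(f"Relative reduction: {relative_risk_reduction:.0%}")   # the headline 25%
    print(f"Absolute reduction: {absolute_risk_reduction:.1%}")   # 0.1%
    print(f"NNT: {nnt:.0f} people take aspirin for one to benefit")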

One problem with this idea is the ecological fallacy, which happens when people try to apply group statistics to individuals. To take an example, the average dress size for women in the United Kingdom is 16. But husbands and boyfriends should beware of buying this size clothing as a birthday present for their partner. They might be lucky. But, more likely, the dress will be either too small, “So you think I should be thinner!” or too large, “So you think I’m fat!” Either way, the result will not be helpful.

EBM applies this logical fallacy to patients when it recommends treatments based on large-scale studies. For this reason, you should never assume the results of a clinical trial or media report apply directly to you. Eating cholesterol-laden eggs may, on average, increase the incidence of heart disease slightly in a large population, but that is irrelevant to any particular person. You are an individual and can disregard aggregate statistics, in the same way that you would be unlikely to buy average-sized clothes or shoes.

Robinson W.S. (1950) Ecological Correlations and the Behavior of Individuals,  American Sociological Review, 15(3), 351–357.

The Goldilocks Principle

Every good solution to a problem must model the problem.

Ashby’s law means that we need enough information to solve a problem or control a system. If we have too little, our solution will not work.

Strangely, too much information can also prevent a solution: we get bogged down in the detail – information overload. Like Goldilocks and her porridge, the solution needs just the right amount of information, detail, or data – and no more!

Ross Ashby and Roger Conant explain that a good solution depends on more than just having the right amount of data: the solution needs to model the problem.

So a clinical trial of a new drug should NOT compare two groups of patients using statistical tests. When medicine employs clinical trials it is making a big mistake. The trial needs to model the doctor-patient situation. We need to model a single doctor treating an individual patient who has a unique physiology. Compare groups of patients and the result will apply to populations – on average.

If you are a supporter of evidence-based medicine please feel free to comment and explain how the much hyped clinical trials, meta-analyses and the like overcome the Goldilocks Principle. That is, how do the aggregate statistics of clinical trials model the specific interaction between a doctor and the patient?

The Goldilocks Principle is usually described as Conant and Ashby’s good regulator theorem. Links are given below to the original paper and other accounts.

Check out the Good Regulator Project. Link

Daniel L. Scholten (2010) A Primer For Conant & Ashby’s “Good-Regulator Theorem”. Link

Daniel L. Scholten (2009-2010) Every Good Key Must Be A Model Of The Lock It Opens (The Conant & Ashby Theorem Revisited) Link

Here is the original paper for download: GoodRegulator