Really Bad Science

Writing 1+1=3 on a blackboard.

The media often feeds people misinformation about medical science. Leading the deception are the so-called skeptics who claim to be hard-headed scientists.

One leading promoter of skeptic “science” is the physician Ben Goldacre. I took note of Ben Goldacre when he nominated meta-analysis as his Moment of Genius for the BBC. The BBC were asking people to describe their favorite turning point in the history of science. His choice might be described as Goldacre’s Error. One of the most egregious recent mistakes in medical science, meta-analysis is a sham way of presenting subjective information. Meta-analysis makes it possible to select from the available data and get the answer you want, and selecting data is one of the biggest errors a scientist can make.

Let’s use Ben Goldacre’s book Bad Science to show how this works. I consider this book well named – it is very bad science indeed.

Here is an example of how Goldacre misleads by selecting his data. He begins by giving a graph of increased life expectancy in the UK covering the previous century.


He explains that “we are living longer, and vaccines are clearly not the only reason why.” Goldacre suggests “measles incidence dropped hugely over the preceding century, but you would have to work fairly hard to persuade yourself that vaccines had no impact on that.” Read on and I think you will find it easy, rather than hard, to decide that his claims are bunkum and his argument deceptive.

In the next chart, Goldacre switches countries, from the UK to the USA. Note that the new graph has also switched to cases, not mortality. More importantly, he has shortened the period from 120 years to only 50. In other words, he has now hidden much of the period covered by his earlier chart: the half century from 1901 to 1950 is no longer displayed. Goldacre has selected the period that makes any vaccination effect appear larger! (See How To Lie with Statistics.)
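The truncation effect described above is easy to demonstrate with made-up numbers. The sketch below uses a purely synthetic exponential decline (the rates, dates, and starting level are illustrative assumptions, not the actual UK figures) to show how much of a long fall in mortality disappears when a chart starts at mid-century:

```python
# Illustrative only: a synthetic exponential decline stands in for the
# long fall in measles deaths. Rates, dates, and the starting level are
# invented assumptions, not the actual UK statistics.
import math

def deaths(year, start=1901, initial=1000.0, rate=0.07):
    """Synthetic deaths per million, declining exponentially from `start`."""
    return initial * math.exp(-rate * (year - start))

total_decline = deaths(1901) - deaths(2000)
decline_before_vaccine = deaths(1901) - deaths(1968)  # assumed vaccine year

fraction_before = decline_before_vaccine / total_decline
print(f"{fraction_before:.1%} of the decline happens before the vaccine")
# A chart that only starts in 1950 hides almost all of this earlier fall.
```

With these assumed numbers, over 99% of the total fall precedes the assumed vaccine year, yet a chart windowed to 1950-2000 would show only the tail end.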


We are supposed to believe that this chart suggests that the number of cases fell dramatically as a result of medicine introducing the measles vaccine. Then Goldacre extends his argument to include the MMR vaccine.


It now looks as if the MMR vaccine polished off what little remained of this nasty disease. But Goldacre has again selected his data, switching from mortality to cases and on to notifications.

Where are the serious cases and deaths that might make vaccination worthwhile?

The data starts in the 1850s

The counter-argument is simple: just look at this full chart of measles mortality from the UK and make up your own mind. First, notice that the chart extends across the whole range of available data, from the 1850s. Can you see the relative effect of vaccination?


Basically, deaths from measles had plummeted to near zero long before vaccines were introduced. Improvements in nutrition and living conditions had almost completely prevented mortality from measles.

Goldacre disregarded the massive reduction in deaths by selecting his data, but he could not avoid acknowledging his sleight of hand. He stated that “there is absolutely no doubt that deaths from measles began to fall over the whole of the last century for all kinds of reasons”, such as nutrition. (They had not just begun to fall – they had fallen to near zero before medicine introduced vaccination.) Is this a get-out clause to avoid Goldacre being accused of selecting his data?

Goldacre also suggests that “the other thing you will hear a lot is that vaccines don’t make much difference anyway.” I suggest you look once again at the 1840-1980 mortality chart with the red arrow showing the start of vaccination and make your own mind up.

My interpretation of the data is that vaccination and MMR made no appreciable difference to measles mortality. Nutrition and lifestyle had already beaten the disease, and it is a stretch to suggest that MMR/vaccination had even a minor effect. The mopping up attributed to MMR/vaccination is more parsimoniously explained by nutrition and lifestyle – just a minor continuation of the earlier improvement (Occam’s Razor).

Pasteur had some impressive results with vaccination of sheep against anthrax, but this was back in the 19th century. Unfortunately, medicine applied vaccination indiscriminately and abused the technology. Yes, vaccination is a technology, and it does not make sense for someone to be philosophically “for” or “against” vaccination. It depends on the application and implementation. If bitten by a rabid dog I would probably choose to be vaccinated. However, I find no benefit in the idea of being vaccinated against the risk of influenza, measles, mumps, and so on. With the widespread abuse of the technology and the unremitting propaganda, rational people may decide to avoid most and possibly all current offers of vaccination.

The decline of measles independent of both vaccination and mainstream medicine is not an isolated event. Deaths from other infectious diseases also dropped rapidly over the previous 150 years as a result of nutrition and lifestyle. Unfortunately, this improvement has not yet reached countries that still lack pure water, good nutrition, and sanitation. (When these basics arrive, the resulting benefits will surely be wrongly attributed to vaccination.) Professor Thomas McKeown gave an account of the decline of infections in his book The Role of Medicine, published back in 1979. The real information is widely available.

It is wise to check Ben Goldacre’s claims because they can be misleading and Really Bad Science!

Steve Hickey PhD

A little background for TB:

Progress with TB or a Return to the Dark Ages?

by Steve Hickey, PhD and William B. Grant, PhD

(OMNS June 17, 2013) Tuberculosis (TB) was formerly one of the most devastating scourges of mankind and remains a leading cause of death. The disease has been with humans over recorded history, and likely throughout the evolution of our species. Through the industrial revolution and into the 20th century, TB became a long-term medical emergency, particularly among the poor. Roughly one person in four was dying of the disease in England, and similar death rates were observed in other modernising countries. One solution was to isolate the afflicted in sanatoria. The fresh-air-and-sunlight solution practiced in those times may have been at least partly effective.

Sunlight and vitamin D played an early role in preventing and treating TB. In the early 20th century, TB patients were often sent to sanatoria in the mountains where they were exposed to solar radiation. Dr. Auguste Rollier set up such facilities in the Swiss Alps. [1] Sun exposure is associated with a lower incidence of TB six months later. [2] It was not until 2006-7 that researchers at UCLA determined how sunlight increases vitamin D levels and helps the body’s immune system prevent bacterial infections. [3] Higher blood levels of 25-hydroxyvitamin D can reduce the time required to control TB during treatment. [4,5] Recent research suggests the sanatoria approach to treatment could have been at least partly effective.

The modern myth about conquering infectious diseases such as TB is that vaccination and antibiotics came to the rescue, saving humanity from the earlier suffering. However, TB, like the other major life-threatening infections, had already declined to a low level before these interventions were introduced. The tubercle bacillus was identified by Robert Koch in 1882, [6] by which time death rates in England and Wales had already fallen to about half their earlier levels. The introduction of the drug isoniazid in the early 1950s was a breakthrough in antibiotic treatment but had little effect on overall mortality. Similarly, BCG vaccination was first tried in people in the early 1920s, but its widespread introduction was delayed until well after World War 2. A chart of mortality from TB shows its historical decline in England and Wales, for which the most extensive historical statistics are available. [7] The decline of TB was similar to the reduction in mortality for the other major infectious diseases. This graph illustrates the relative contribution of vaccination and antibiotic chemotherapy: by the time these interventions had been introduced, the major infections had already been largely defeated.

The question raised by this graph is what really caused the decline in death rates from TB and other infections. We can answer this easily and directly. Firstly, TB did not go away. There is a reasonable chance that a reader is harbouring the disease. Roughly one person in every three in the world (2-3 billion) has the infection. However, only 10-20 million have the active disease. So fewer than one person in every hundred infected will ever have any symptoms. The rest will happily coexist with their “infection” without concern.
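As a back-of-envelope check, the ratio implied by the figures quoted above can be computed directly. The midpoint values below are assumptions taken from the ranges in the text; the exact result depends on which ends of the ranges you use.

```python
# Rough check of the latent-vs-active TB ratio, using the midpoints of
# the ranges quoted in the text (2-3 billion infected, 10-20 million active).
infected = 2.5e9   # assumed midpoint of latent infections
active = 15e6      # assumed midpoint of active cases

ratio = infected / active
print(f"Roughly 1 in {ratio:.0f} infected people has active disease")
```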

TB Mortality Per Million

People who come down with TB have poor or compromised immune systems. The disadvantaged were living in crowded and damp slum conditions, and although such conditions facilitate the spread of the infection, this explanation is insufficient. Poor nutrition provides a more direct explanation of why only some of the infected go on to succumb to the illness.

TB and Vitamin C

Despite the data strongly suggesting the impact of nutrition, corporate medicine has consistently decried the use of supplements. Recently, however, there has been a long overdue development. Catherine Vilchèze and colleagues have returned to testing the extraordinary antibiotic properties of vitamin C for TB. [8] They found that “M. tuberculosis is highly susceptible to killing by vitamin C”, [8] which is consistent with previous data. [9] Notably, the mechanism of action is similar to vitamin C’s anticancer role: generating hydrogen peroxide locally, which kills the unwanted cells. [10] Indeed, we have been using antibiotic treatment of TB as a model for the role of vitamin C-based redox therapy for cancer. The same mechanism is used to protect the body against both microorganisms and abnormal cancer cells.

Supplementation with vitamin C may prevent TB infection from becoming overt. Furthermore, vitamin C could provide an effective biological treatment for TB, with the advantage of a mechanism refined by millions of years of evolution. As scientific history demonstrates, good nutrition, particularly vitamins C and D, is likely to be far more effective than antibiotics and vaccination in preventing this and other dangerous infective diseases.

Vilchèze suggests that drugs with a similar mechanism of action to vitamin C might be developed (presumably with great commercial advantage). However, such drugs are an unnatural intervention and are likely to have unnecessary side-effects, while vitamin C is safe. The rather obvious alternative, providing high-dose nutritional supplements, is once again ignored. If supplementation were widely applied, our society might find controlling TB unexpectedly easy.

The recent history of antibiotics is one of misuse leading to microbial resistance. Following Multiple Drug Resistant TB (MDRTB) and eXtensively Drug Resistant forms (XDRTB), we are now faced with Totally Drug Resistant forms (TDRTB). The increasingly ineffective antibiotics have helped promote a return to studying vitamin C as a potential treatment. However, we may be faced with something far more threatening. The history of antibiotic abuse is not reassuring. It may be possible to generate more virulent forms, despite Vilchèze’s confirmation that resistance to vitamin C is exceptionally difficult to induce. The use of drugs with a similar mechanism to vitamin C may lead to resistance to our basic biological defence mechanisms. In other words, corporate misuse of this latest development could return us to the dark days of uncontrolled infections, when TB was killing 1 in 4 people in the developed nations.


Much of the recent freedom from deadly infectious disease reflects historical improvements in nutrition. Over time, the mechanisms by which nutrients make people more resistant to infections are being elucidated. Increased levels of vitamin D may have lowered the risk of TB and other infections, as well as of the deficiency disease rickets. It now appears that vitamin C is “extraordinarily” effective in killing the TB microorganism. Importantly, vitamin C kills TB in essentially the same way as it destroys cancer cells. Linus Pauling, Robert Cathcart and others may have been prescient in suggesting that vitamin C provides a unique way of maintaining good health.


1. Hobday R.A. (1997) Sunlight therapy and solar architecture, Med Hist, 41(4), 455-472.

2. Koh G.C. Hawthorne G. Turner A.M. Kunst H. Dedicoat M. (2012) Tuberculosis incidence correlates with sunshine: an ecological 28-year time series study, PLoS One, 8(3), e57752.

3. Liu P.T. Stenger S. Tang D.H. Modlin R.L. (2007) Cutting edge: vitamin D-mediated human antimicrobial activity against Mycobacterium tuberculosis is dependent on the induction of cathelicidin, J Immunol, 179(4), 2060-2063.

4. Sato S. Tanino Y. Saito J. Nikaido T. Inokoshi Y. Fukuhara A. Fukuhara N. Wang X. Ishida T. Munakata M. (2012) The relationship between 25-hydroxyvitamin D levels and treatment course of pulmonary tuberculosis, Respir Investig, 50(2), 40-45.

5. Coussens A.K. Wilkinson R.J. Hanifa Y. Nikolayevskyy V. Elkington P.T. Islam K. Timms P.M. Venton T.R. Bothamley G.H. Packe G.E. Darmalingam M. Davidson R.N. Milburn H.J. Baker L.V. Barker R.D. Mein C.A. Bhaw-Rosun L. Nuamah R. Young D.B. Drobniewski F.A. Griffiths C.J. Martineau A.R. (2012) Vitamin D accelerates resolution of inflammatory responses during tuberculosis treatment, Proc Natl Acad Sci U S A, 109(38),15449-15454.

6. Mörner K.A.H. (2005) Nobel Prize in Physiology or Medicine 1905, Presentation Speech.

7. McKeown T. (1979) The Role Of Medicine, Blackwell.

8. Vilchèze C. Hartman T. Weinrick B. Jacobs W.R. (2013) Mycobacterium tuberculosis is extraordinarily sensitive to killing by a vitamin C-induced Fenton reaction, Nature Communications, doi:10.1038/ncomms2898.

9. Hickey S. Saul A.W. (2008) Vitamin C: The Real Story, the Remarkable and Controversial Healing Factor, Basic Health.

10. Hickey S. Roberts H. (2013) Vitamin C and cancer: is there a role for oral vitamin C? JOM, 28(1), 33-46.


Oops – They Got It Wrong (and will not admit it)

Mainstream medicine branded Linus Pauling a pseudo-scientific quack for suggesting that people require mega-doses of vitamin C for good health. This attack came despite Pauling being a leading scientist and perhaps the greatest chemist of the 20th century. Corporate medicine conveniently forgot that medical science is built upon Pauling’s massive contribution to molecular biology. Now the Linus Pauling Institute (LPI) at Oregon State University continues his approach to nutrition and medicine. But Dr Balz Frei, the head of the LPI, has other ideas.

Following Linus Pauling’s death in 1994, the NIH (US National Institutes of Health) claimed that people were wasting their money on high-dose vitamin C supplements. The NIH claimed to have blown apart the supposedly unscientific idea that gram-level or higher doses of vitamin C could work, asserting that the human adult was “saturated” at a dose of only 200 mg.[link][link] Oddly, the head of the Linus Pauling Institute supports this suggestion, and claims people need only low levels of vitamin C.[link] The National Academies of Science put it this way: “The rigorous criteria for achieving steady-state plasma concentrations (five daily samples that varied less than or equal to 10 percent) make the … data unique among depletion-repletion studies.”[link] Here “steady-state” means the NIH saturation level.

Silly errors

Corporate medicine grabbed this idea with a vengeance. They thought the NIH had shown that Pauling and his upstart followers were quacks and obviously wrong. They based RDA (Recommended Dietary Allowance) values for the population on this new research. Corporate medicine had won and classified the vitamin C “fanatics” as pseudo-scientists.

Then in 2004, Steve Hickey and Hilary Roberts showed that high-dose supplements of vitamin C had a short half-life in blood plasma. Indeed, the half-life of large doses was only 30 minutes, not the several weeks assumed earlier. The NIH claims for plasma saturation were just silly. The NIH gave a dose of vitamin C, waited until it was excreted, and then measured the blood levels. When the level did not rise, they described the blood as saturated. This was a blatant and obvious error that any scientist could have discovered. But the reviewers of the NIH papers, the journal editors, and the numerous authorities were taken in.

The supposedly uniquely rigorous NIH research was wrong, and Pauling and the flaky health freaks may have been onto something. Amusingly, Hickey and Roberts demonstrated the silliness using the NIH’s own data.[link] Here is their graph of plasma levels of vitamin C. The added red arrows show that the injected vitamin’s concentration initially falls by half in a few minutes. This is a standard method of measuring the half-life. With oral intakes, the body is both absorbing and excreting the vitamin over a period of some hours, which extends and flattens the curves somewhat. Note, however, that oral doses return to the baseline of about 70 microM/L.

Half Life from the NIH data


According to the NIH, the plasma saturates at 70 microM/L, and doses higher than 200 mg would thus not increase the levels. This is bizarre, as higher levels are clearly shown in their own graphs! Don’t believe Hickey and Roberts – check the data in the chart yourself. Seventy is the minimum, not the maximum. What could have possessed the NIH to make such a mistake? Moreover, what could have caused the authorities, such as the Institute of Medicine, to avoid seeing the blunder?
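The half-life reading described above (take two concentrations from the decay curve and assume first-order elimination) can be sketched as follows. The sample times and concentrations below are invented for illustration, not values read from the NIH charts.

```python
# Standard two-point half-life estimate, assuming first-order
# (exponential) elimination: C(t) = C0 * exp(-k * t).
# The sample times and concentrations are invented for illustration;
# they are not values read from the NIH charts.
import math

def half_life(t1, c1, t2, c2):
    """Half-life from two (time, concentration) samples on a decay curve."""
    k = math.log(c1 / c2) / (t2 - t1)  # elimination rate constant
    return math.log(2) / k

# Concentration halving between t = 0 and t = 30 minutes gives a
# 30-minute half-life, whatever the starting level.
print(half_life(0, 100.0, 30, 50.0))
```

Because the method uses only the ratio of the two concentrations, any pair of points on a clean exponential segment of the curve gives the same answer.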

The Real Steady State

To add to the nonsense, the NIH published graphs for high doses showing how they expected multiple doses of 3 grams every 4 hours to raise the plasma levels consistently to 220 microM/L. This concentration is more than 3 times the level they claimed was a maximum (saturation at 70 microM/L).[link] Their graph is shown below. The red arrow shows the approximate steady state for 18 grams a day, which we consider an underestimate. Note that the highest levels in this graph occurred with a dose of 18,000 mg a day, or 90 times the dose they claimed gave maximum levels. So once again the NIH debunked their own “saturation” claims.

Steady State


The NIH data showed that blood plasma was not “saturated” at an intake of only 200 mg a day; a dose of at least about 20 grams (20,000 mg) a day would be needed for anything that could reasonably be described as saturation. Once again, check the NIH data in the graph yourself. Is there really any room to doubt that their data show the highest levels coming from the largest intakes?
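For readers who want to see why repeated doses settle at a level above a single-dose peak, here is a minimal one-compartment sketch with first-order elimination. Every parameter value (peak per dose, half-life, dosing interval) is an illustrative assumption, not a value fitted to the NIH data.

```python
# Minimal one-compartment sketch of repeated bolus dosing with
# first-order elimination. Every value here (peak per dose, half-life,
# dosing interval) is an illustrative assumption, not NIH data.
import math

def level_after_doses(n_doses, peak_per_dose, half_life_h, interval_h):
    """Plasma level just after the n-th dose (superposed decaying peaks)."""
    k = math.log(2) / half_life_h
    return sum(peak_per_dose * math.exp(-k * interval_h * i)
               for i in range(n_doses))

single = level_after_doses(1, peak_per_dose=70.0, half_life_h=2.0, interval_h=4.0)
steady = level_after_doses(50, peak_per_dose=70.0, half_life_h=2.0, interval_h=4.0)
print(single, round(steady, 1))  # repeated dosing settles above the single peak
```

With a half-life as short as 30 minutes and a 4-hour interval, the same formula gives almost no accumulation, which is consistent with oral doses falling back towards baseline between intakes.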

Even now the NIH tries to claim that plasma levels after supplements are always below 250 microM/L.[link] This is despite levels of about 400 microM/L being described for liposomal vitamin C supplements [link] and up to about 800 microM/L observed for a joint supplement of ascorbic acid and liposomal vitamin C.[link] Chemist Irwin Stone reported even higher values (~2000 microM/L, or 35 mg%) in a cancer patient taking 130-150 grams of oral vitamin C a day.[link] The reader may like to ask what is so important to institutional medicine that it must misrepresent data about high doses of vitamin C.
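Stone's figure can be cross-checked with a simple unit conversion: ascorbic acid has a molar mass of about 176 g/mol, and 1 mg% is 1 mg per 100 mL.

```python
# Cross-checking the "35 mg% ~ 2000 microM/L" figure quoted above.
# Ascorbic acid has a molar mass of about 176.12 g/mol, and
# 1 mg% = 1 mg per 100 mL = 10 mg/L.
MOLAR_MASS = 176.12  # g/mol, ascorbic acid

def mg_percent_to_micromolar(mg_pct):
    mg_per_litre = mg_pct * 10.0
    return mg_per_litre / MOLAR_MASS * 1000.0  # micromol/L

print(round(mg_percent_to_micromolar(35)))  # close to 2000 microM/L
```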

Since Hickey and Roberts pointed out the blunder several years ago, you might think that institutional medicine would have made the necessary corrections. However, the only change appears to have been replacing the word “saturation” with the phrase “tightly controlled”; the recommended intake stays at 200 mg, giving rise to a plasma level of about 70 microM/L.

Incredibly, a decade after the explanation, the “experts” continue to publish papers on vitamin C claiming that the body saturates at an intake of only 200 mg. Moreover, claims for high doses are still considered inappropriate and are not even studied. So, for example, the highly acclaimed Cochrane review on vitamin C and the common cold does not include the doses claimed to prevent or treat the illness.[check the full document here]

Can we really trust the judgment of scientists who cannot even read a chart? Recent reviews continue this silliness. One review asked whether large vitamin C supplements can be beneficial, given the “saturation” – sorry, “tightly controlled” – levels.[link] No, we are not kidding. The final irony is the assertion by the head of the Linus Pauling Institute that an RDA of 200 mg per day is the optimum amount of vitamin C.

Scientists at the Linus Pauling Institute support higher intakes and suggest a minimum of 400 mg daily.[link] So why is Dr Frei, their director of research, supporting the NIH claims? We note that the director’s LPI “laboratory is supported primarily by grant P01 AT002034 from the National Center for Complementary and Alternative Medicine (NCCAM)” – in other words, the NIH.[link]


The problem is that this error potentially has implications for the health of humanity. Consider that the claims by Pauling and others for high-dose vitamin C are that it will prevent heart disease, stroke, and cancer. If these claims are correct, the main ills facing advanced nations could be largely the result of a shortage of this simple vitamin. In their book Medical Blunders, Robert Youngson and Ian Schott listed medicine’s denial of Pauling’s claims for vitamin C in their history of amazing true stories of mad, bad, and dangerous doctors.[link] People suggesting low intakes need to be sure that they are on solid ground.

Why are these people so unwilling to admit a simple error?

Should You Be Allowed To Make A Rational Decision?

A law might condemn intelligent and educated patients to remain ill-informed and unable to make rational decisions about their own cancer. For this law to be valid, however, we would need to assume that all cancer patients are stupid and unable to decide for themselves.

In order for an intelligent patient to make a rational decision, he or she needs all the relevant information. People are normally allowed access to information that concerns their personal decisions. Imagine going to buy a house and being told that it will cost $1,160,543, but that you cannot choose the house or even see it; rather, the estate agent, who is an expert in houses, will make the decision for you. Most people would consider such a demand unreasonable and would decline the purchase.

Some people prefer to get expert advice for complex decisions, although most would surely consider it important that such advice was independent and unbiased. Similarly, most people would expect to solicit expert opinion for life-and-death decisions, such as those involving cancer. Approved advice concerning cancer is widely available from physicians and hospitals. In the UK, such advice and treatment can be provided at no immediate charge to the patient. However, note that the government must surely be held responsible for delivering ALL the relevant data, not a partial or biased viewpoint. Anything else would be biased and unscientific. It is not clear how any organisation could achieve such scientific omniscience, and certainly not by restricting access to the requisite variety of information.

More intelligent and informed patients may wish to be involved in decisions about their cancer treatment; after all, their lives may be at stake. It turns out that, in the UK, the patient is not allowed to make a rational decision or even have access to the available information. Suppose the patient is a PhD biologist who goes to her friend, a retired Nobel Prize winner who was given his award for research into cancer. Even the Nobel Laureate would not be able to independently advise his friend on the range of treatment options open to her without technically breaking UK law. He would be limited to providing scientific information and educational data, whereas what the patient really wants to know from the expert is the optimal treatment choice, and why.

From the doctor’s point of view, we note Item 35 of the World Medical Association’s Declaration of Helsinki, the primary international document on medical research ethics. This section is taken from the “additional principles for medical research combined with medical care”.

“In the treatment of a patient, where proven interventions do not exist or have been ineffective, the physician, after seeking expert advice, with informed consent from the patient or a legally authorized representative, may use an unproven intervention if in the physician’s judgement it offers hope of saving life, re-establishing health or alleviating suffering. Where possible, this intervention should be made the object of research, designed to evaluate its safety and efficacy. In all cases, new information should be recorded and, where appropriate, made publicly available.”

The reason cancer has its own restrictive UK law is that organisations can make massive profits out of this illness. This law helps defend a monopoly position for corporate medicine.

Below is a recent example of this censorship. Section 4(1)(b) of the ill-advised 1939 law was repealed by the Medicines Act 1968. The 1939 Act presents itself as a means of preventing advertising. However, the definition of “advertisement” is expanded to cover just about any independent communication of information. Note that we are not supporting or criticising Dr Burzynski’s ideas, or those of others. We are merely making the case that open debate at a conference is an accepted and essential part of the scientific process.


OMNS August 3, 2012

The Stranglehold that the UK 1939 Cancer Act Exerts in Great Britain

by Madeline C. Hickey-Smith

(OMNS Aug 3, 2012) Most citizens of Great Britain are totally unaware of the 1939 Cancer Act which effectively prevents them from finding out about different treatments for cancer.

Excerpts from the UK 1939 Cancer Act:

“4 – (1) No person shall take any part in the publication of any advertisement –

(a) containing an offer to treat any person for cancer, or to prescribe any remedy therefor, or to give any advice in connection with the treatment thereof; or

(b) referring to any article, or articles of any description, in terms which are calculated to lead to the use of that article, or articles of that description, in the treatment of cancer.

In this section the expression “advertisement” includes any “notice, circular, label, wrapper or other document, and any announcement made orally or by any means of producing or transmitting sounds”. [1]

Publication of such advertisements is permitted only to a very restricted group comprising members of either House of Parliament, local authorities, governing bodies of voluntary hospitals, registered medical practitioners (or those training to become registered), nurses, pharmacists, and persons involved in the sale or supply of surgical appliances. A very tight grip, therefore, is exercised on the information fed to citizens of Great Britain; interestingly, the Act does not apply to Northern Ireland.

That pretty much wraps it up, and wraps us (in Britain) up in the legal stranglehold that this outdated Act still exerts. Was it enacted to protect citizens from charlatans and “quacks”, or to safeguard the interests of the National Radium Trust, to which the British Government lent money? If no one is allowed to tell us, how can we, the general public, ever find out what alternatives there are to those offered by mainstream medicine, mainly surgery, chemotherapy and radiotherapy?

No Freedom of Therapy, Information, or Assembly

My colleague, Sarah Ling, and I unwittingly found ourselves in a maelstrom when we decided to hold a convention in Birmingham later this year to do just that – inform the general public about some of the other ways to tackle this hideous disease besides those generally doled out to mostly trusting, but fear-filled, patients. A well-justified fear of the actual treatments, as well as of the disease, prevails.

Last year, Sarah’s sister was diagnosed with an aggressive form of cancer. Chemotherapy was the only treatment offered, which she accepted out of fear. She nearly died within hours of having it, and very sadly died days afterwards. Sarah was determined to help prevent others from enduring such trauma and so, under the umbrella of our Institute (The Cambridge Institute of Complementary Health), we organised a convention to educate people – conventional/complementary health professionals and the general public – about different ways to treat people who have cancer.

We quickly drew up a short list of speakers that we felt would have much to contribute, including Dr Stanislaw Burzynski, who agreed to come and talk about his pioneering work on antineoplastons.

After we posted our speakers on our website, one of them, an oncologist, pulled out because of a malevolent e-mail she had received questioning her wisdom in sharing a platform with Dr Burzynski. She didn’t want to cause her team any controversy. We then discovered that we had attracted a lot of adverse attention: derogatory, critical of our speakers, casting aspersions on them and on us as an organisation. Unfortunately, Dr Burzynski then decided not to come, so as not to expose us to the sort of attacks that he has suffered. Regrettably, the public lost an opportunity to hear first-hand of his pioneering treatments in tackling cancers, including inoperable brain tumours.

Two speakers down, we then found ourselves possibly contravening the archaic Cancer Act. We have had to be extremely careful in how we word any publications relating to the convention so that the Advertising Standards Authority doesn’t come down on us like a ton of bricks and prevent us from holding it at all. Britain cherishes its long-held tradition of freedom of speech, but in recent years that seems questionable. However, we can still hold debates, and that is what we are doing.

We are aware that efforts will be made to stop us by those who are not seekers of truth. If they were truly interested in the welfare of people, they would be advocating most of the alternative/complementary approaches instead of deriding them and trying to close down, via the Advertising Standards Authority, the clinics and individuals who practise them. This ridiculous Act affords them the guise of protecting the public and gives them ammunition to use against persons advocating alternatives.

We can’t hold an open day of education on treating cancer in this country: how bizarre is that? How much longer can this information be contained?

The Cost of Ignorance

The UK National Health Service is overstretched and, as more and more people contract cancer (presently one in three), the rising costs of expensive and often ineffective treatments will surely force it to look at alternatives.

Conventional healthcare professionals are too often ignorant of the enormous value of unconventional treatments. How could they be otherwise, when those outside their profession are prohibited from alluding to the fact that these treatments can help treat cancer? Shockingly, even nutrition is most often totally overlooked during orthodox cancer treatment, and the very foods that promote cancers are sometimes given to patients in our hospitals in order to maintain calorie intake. There is frequently no advice on diet, that most crucial aspect of our health. [2]

Thankfully, some oncologists do recognise the benefits that alternative/complementary treatments offer. [3] Hopefully more and more will come to accept that integrating the best of conventional and complementary/alternative methods is the way forward.

It is our opinion that a reform of the 1939 Cancer Act is long overdue. The tenacious grip that it holds on treating cancer must be relinquished, so that patients and their healthcare providers can make an informed choice as to what approach may be best for their individual needs.

(Madeline C. Hickey-Smith has an honours degree in biology and is cofounder of the Cambridge Institute of Complementary Health. The direct link to the convention page is .)


1. The 1939 UK Cancer Act:

2. What UK cancer patients are officially told:

3. Intravenous Vitamin C as cancer therapy: Free access to twenty-one expert video lectures online. Orthomolecular Medicine News Service, April 14, 2011.

Those who have had quite enough of government censorship of alternative cancer treatments may also wish to look at the following:

Straus H. Censorship, sports and the power of one word. Orthomolecular Medicine News Service, May 21, 2012.

Saul AW. Half-truth is no truth at all: Overcoming bias against nutritional medicine. Orthomolecular Medicine News Service, Oct 7, 2011.

Smith RG. Vitamins decrease lung cancer risk by 50%. Orthomolecular Medicine News Service, Nov 18, 2011.


The 100 Trillion Dollar Prize for Medicine

(OMNS April 19, 2012) Here is your chance to impress your colleagues holding mere Nobel Prizes: the Newlyn Research Group announces the Cargo-Cult Prize for Medical Informatics. At 100 Trillion Zimbabwe dollars (Z$100,000,000,000,000), the prize value is nominally greater than that of all Nobel Prizes ever awarded. The value of the award was chosen to reflect the inflated worth of large-scale clinical trials.

Pic: One Hundred Trillion Dollars

1. To receive this award, all one has to do is explain the validity of the selection of the “best” evidence in the “systematic” reviews of the Cochrane Library [1]. That is, you will “prove” how non-random selection of the “best” data can be achieved without censorship, without bias, and without having determined the answer in advance.

2. In addition, the competition asks you to provide a mechanism whereby the results of Cochrane reviews can determine the “best” treatment for an individual patient who wants to make a rational decision. It is important to “prove” how a Cochrane review provides unbiased and realistic data for a doctor treating an individual patient, or for individuals rationally considering decisions about their own health.

3. We also ask you to demonstrate how you would apply your suggestions to an individual. We will look for a demonstration of how the results of significant large-scale clinical studies can be applied validly to specify treatment for an individual. In so doing, you will answer the following question: how do you remove the gray area from the graph below, which shows the overlap between two highly significantly different large groups (p < 0.01, n = 1000)? (Using an eraser, or burying one’s head, is not considered a solution.)

Graph: Normal Distribution

The gray area, as the term suggests, illustrates the uncertainty in the decision. The larger the study, the greater the area of uncertainty.
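The situation in the graph can be reproduced numerically. The figures below are our own illustrative choices (blood-pressure-like values, not data from any actual trial): with 1,000 subjects per group, a small difference in means is “highly significant” (p < 0.01) under a standard two-sample z-test, yet the two population distributions overlap almost completely.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative numbers: two populations with a small mean difference.
mean_a, mean_b, sd, n = 120.0, 121.2, 10.0, 1000  # e.g. blood pressure, mmHg

# Two-sample z-test for the difference in means (equal SDs, n per group).
z = (mean_b - mean_a) / (sd * math.sqrt(2.0 / n))
p = 2.0 * (1.0 - phi(z))          # two-sided p-value

# Overlap coefficient of the two population curves:
# for equal-SD normals this is 2 * Phi(-|delta| / (2 * sd)).
overlap = 2.0 * phi(-abs(mean_b - mean_a) / (2.0 * sd))

print(f"z = {z:.2f}, p = {p:.4f}")   # p ≈ 0.007, i.e. "highly significant"
print(f"overlap = {overlap:.0%}")    # yet the curves overlap by about 95%
```

The gray area is not an artifact of drawing: roughly 95% of the two distributions coincide even though the group difference passes the p < 0.01 threshold.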

The winning submission will explain how a Cochrane review provides anything other than background data for rational patients and their doctors.

Winning the prize should be easy if, as claimed, Cochrane reviews and large-scale randomised placebo controlled double-blind clinical trials are a “gold standard” for medical decision making.

The inspiration for this award came from a pair of Cochrane reviews, namely, that of Bjelakovic et al. “Antioxidant supplements for prevention of mortality in healthy participants and patients with various diseases” [2] and that of Hemilä et al. “Vitamin C for preventing and treating the common cold.” [3] Both reviews disparage nutritional supplements and each appears to represent the biased viewpoint of its authors, presented as impartial analysis. We have objected to these reviews on the grounds that the scientific method precludes such data selection. So far, the review authors, the Journal of the American Medical Association (JAMA), and the Cochrane Library have failed to answer these questions adequately.

Applications for the Cargo-Cult Prize for Medical Informatics are invited in the format for papers for the Journal of Orthomolecular Medicine [4], where they may be submitted for editorial and peer review. A short summary of the paper should be provided, suitable for peer review and publication in the Orthomolecular Medicine News Service. Prior to peer review, the paper will be subject to an initial screening, to remove papers that lack basic rationality. In pre-screening, we follow the (irrational) example of the Cochrane reviewers, who screen papers and remove those that 1) use high nutrient doses and 2) demonstrate that supplements are safe and effective.

We suggest that, until we make the award, all Cochrane, JAMA, and similar meta-analysis reviews are issued with the following warning:

“CAUTION: This review is a selective interpretation of the available data and incorporates the bias and prejudice of the reviewers. The effects it describes are only valid as aggregate statistics of large groups and should not be used by doctors or members of the public for making decisions about individuals.”



2. Bjelakovic G, Nikolova D, Gluud LL, Simonetti RG, Gluud C. (2012) Antioxidant supplements for prevention of mortality in healthy participants and patients with various diseases. Cochrane Database Syst Rev, Mar 14;3:CD007176.

3. Douglas RM, Hemilä H, Chalker E, Treacy B. (2007) Vitamin C for preventing and treating the common cold. Cochrane Database Syst Rev, Jul 18;(3):CD000980.


Neither Good Evidence nor Good Medicine

Evidence-Based Medicine:
Neither Good Evidence nor Good Medicine

by Steve Hickey, PhD and Hilary Roberts, PhD

Evidence-based medicine (EBM) is the practice of treating individual patients based on the outcomes of huge medical trials. It is, currently, the self-proclaimed gold standard for medical decision-making, and yet it is increasingly unpopular with clinicians. Their reservations reflect an intuitive understanding that something is wrong with its methodology. They are right to think this, for EBM breaks the laws of so many disciplines that it should not even be considered scientific. Indeed, from the viewpoint of a rational patient, the whole edifice is crumbling.

The assumption that EBM is good science is unsound from the start. Decision science and cybernetics (the science of communication and control) highlight the disturbing consequences. EBM fosters marginally effective treatments, based on population averages rather than individual need. Its mega-trials are incapable of finding the causes of disease, even for the most diligent medical researchers, yet they swallow up research funds. Worse, EBM cannot avoid exposing patients to health risks. It is time for medical practitioners to discard EBM’s tarnished gold standard, reclaim their clinical autonomy, and provide individualized treatments to patients.

The key element in a truly scientific medicine would be a rational patient. This means that those who set a course of treatment would base their decision-making on the expected risks and benefits of treatment to the individual concerned. If you are sick, you want a treatment that will work for you, personally. Given the relevant information, a rational patient will choose the treatment that will be most beneficial. Of course, the patient does not act in isolation but works with a competent physician, who is there to help. The rational decision-making unit then becomes the doctor-patient collaboration.

The idea of a rational doctor-patient collaboration is powerful. Its main consideration is the benefit of the individual patient. However, EBM statistics are not good at helping individual patients – rather, they relate to groups and populations.

The Practice of Medicine

Nobody likes statistics. OK, that might be putting it a bit strongly but, with obvious exceptions (statisticians and mathematical types), many people do not feel comfortable with statistical data. So, if you feel inclined to skip this article in favor of something more agreeable, please wait a minute. For although we are going to talk about statistics, our ultimate aim is to make medicine simpler to understand and more helpful to each individual patient.

The current approach to medicine is “evidence-based.” This sounds obvious but, in practice, it means relying on a few large-scale studies and statistical techniques to choose the treatment for each patient. Practitioners of EBM incorrectly call this process using the “best evidence.” In order to restore the authority for decision-making to individual doctors and patients, we need to challenge this orthodoxy, which is no easy task. Remember Linus Pauling: despite being a scientific genius, he was condemned just for suggesting that vitamin C could be a valuable therapeutic agent.

Historically, physicians, surgeons and scientists with the courage to go against prevailing ideas have produced medical breakthroughs. Examples include William Harvey’s theory of blood circulation (1628), which paved the way for modern techniques such as cardiopulmonary bypass machines; James Lind’s discovery that citrus fruit prevents scurvy (1747); John Snow’s work on transmission of cholera (1849); and Alexander Fleming’s discovery of penicillin (1928). Not one of these innovators used EBM. Rather, they followed the scientific method, using small, repeatable experiments to test their ideas. Sadly, practitioners of modern EBM have abandoned the traditional experimental method, in favor of large group statistics.

What Use are Population Statistics?

Over the last twenty years, medical researchers have conducted ever larger trials. It is common to find experiments with thousands of subjects, spread over multiple research centers. The investigators presumably believe their trials are effective in furthering medical research. Unfortunately, despite the cost and effort that go into them, they do not help patients. According to fundamental principles from decision science and cybernetics, large-scale clinical trials can hardly fail to be wasteful, to delay medical progress, and to be inapplicable to individual patients.

Much medical research relies on early twentieth century statistical methods, developed before the advent of computers. In such studies, statistics are used to determine the probability that two groups of patients differ from each other. If a treatment group has taken a drug and a control group has not, researchers typically ask whether any benefit was caused by the drug or occurred by chance. The way they answer this question is to calculate the “statistical significance.” This process results in a p-value: the lower the p-value, the less likely the result was due to chance. Thus, a p-value of 0.05 means a chance result might occur about one time in 20. Sometimes a value of less than one-in-one-hundred (p < 0.01), or even less than one-in-a-thousand (p < 0.001) is reported. These two p-values are referred to as “highly significant” or “very highly significant” respectively.

Significant Does Not Mean Important

We need to make something clear: in the context of statistics, the term significant does not mean the same as in everyday language. Some people assume that “significant” results must be “important” or “relevant.” This is wrong: the level of significance reflects only the degree to which the groups are considered to be separate. Crucially, the significance level depends not only on the difference between the studied groups, but also on their size. So, as we increase the size of the groups, the results become more significant, even though the effect may be tiny and unimportant.

Consider two populations with very slightly different average blood pressures. If we take 10 people from each, we will find no significant difference between the two groups, because small groups vary by chance. If we take a hundred people from each population, we get a low level of significance (p < 0.05); if we take a thousand, we find a very highly significant result. Crucially, the magnitude of the small difference in blood pressure remains the same in each case. A difference can thus be highly significant statistically, yet so small as to be practically insignificant. In a large trial, highly significant effects are often clinically irrelevant. More importantly, and contrary to popular belief, the results from large studies are less important for a rational patient than those from smaller ones.
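The dependence of significance on group size is easy to check numerically. The sketch below uses hypothetical figures of our own choosing (a fixed 2.8 mmHg difference between populations with a standard deviation of 10): the difference itself never changes, yet the p-value moves from non-significant through p < 0.05 to very highly significant purely because n grows.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sided_p(diff, sd, n):
    """Two-sample z-test p-value for a mean difference; equal SDs, n per group."""
    z = diff / (sd * math.sqrt(2.0 / n))
    return 2.0 * (1.0 - phi(z))

diff, sd = 2.8, 10.0   # hypothetical blood-pressure difference (mmHg) and SD

for n in (10, 100, 1000):
    # Same difference every time; only the group size changes.
    print(f"n = {n:4d} per group: p = {two_sided_p(diff, sd, n):.2e}")
```

With these numbers, n = 10 gives p ≈ 0.53 (no significant difference), n = 100 gives p ≈ 0.048 (just under 0.05), and n = 1000 gives p on the order of 10⁻¹⁰, while the 2.8 mmHg difference remains exactly as small as before.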

Large trials are powerful methods for detecting small differences. Furthermore, once researchers have conducted a pilot study, they can perform a power calculation, to make sure they include enough subjects to get a high level of significance. Thus, over the last few decades, researchers have studied ever bigger groups, resulting in studies a hundred times larger than those of only a few decades ago. This implies that the effects they are seeking are minute, as larger effects (capable of offering real benefits to actual patients) could more easily be found with the smaller, old-style studies.

Now, tiny differences – even if they are “very highly significant” – are nothing to boast about, so EBM researchers need to make their findings sound more impressive. They do this by using relative rather than absolute values. Suppose a drug halves your risk of developing cancer (a relative value). Although this sounds great, the reported 50% reduction may lessen your risk by just one in ten thousand: from two in ten thousand (2/10,000) to one in ten thousand (1/10,000) (absolute values). Such a small benefit is typically irrelevant, but when expressed as a relative value, it sounds important. (By analogy, buying two lottery tickets doubles your chance of winning compared to buying one; but either way, your chances are minuscule.)
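The relative-versus-absolute arithmetic from the example above can be verified directly (the risks are the same hypothetical figures used in the text):

```python
# Hypothetical risks from the example above: 2 in 10,000 without the drug,
# 1 in 10,000 with it.
risk_control = 2 / 10_000
risk_treated = 1 / 10_000

relative_reduction = (risk_control - risk_treated) / risk_control
absolute_reduction = risk_control - risk_treated
nnt = 1 / absolute_reduction   # number needed to treat to prevent one case

print(f"relative risk reduction: {relative_reduction:.0%}")   # 50%
print(f"absolute risk reduction: {absolute_reduction:.4%}")   # 0.0100%
print(f"number needed to treat:  {nnt:.0f}")                  # 10000
```

The same data yield a headline-friendly “50% reduction” and, at the same time, an absolute benefit of one hundredth of one percent: ten thousand people would have to take the drug to prevent a single case.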

The Ecological Fallacy

There is a further problem with the dangerous assertion implicit in EBM that large-scale studies are the best evidence for decisions concerning individual patients. This claim is an example of the ecological fallacy, which wrongly uses group statistics to make predictions about individuals. There is no way round this; even in the ideal practice of medicine, EBM should not be applied to individual patients. In other words, EBM is of little direct clinical use. Moreover, as a rule, the larger the group studied, the less useful will be the results. A rational patient would ignore the results of most EBM trials because they aren’t applicable.

To explain this, suppose we measured the foot size of every person in New York and calculated the mean value (total foot size/number of people). Using this information, the government proposes to give everyone a pair of average-sized shoes. Clearly, this would be unwise: the shoes would be either too big or too small for most people. People’s responses to medical treatments vary at least as much as their shoe sizes, yet despite this, EBM relies upon aggregated data. This is technically wrong; group statistics cannot predict an individual’s response to treatment.
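The shoe-size argument can be made concrete. Assuming, purely for illustration, that foot sizes are roughly normal with mean 9 and standard deviation 1.5, and that a shoe “fits” only if it is within half a size of the wearer’s foot, the fraction of people actually fitted by the average shoe is small:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical population: shoe sizes ~ Normal(mean=9, sd=1.5).
mean_size, sd = 9.0, 1.5
tolerance = 0.5   # a shoe "fits" if within half a size of the foot

# Fraction of the population within +/- tolerance of the mean size.
fitted = phi(tolerance / sd) - phi(-tolerance / sd)
print(f"fraction fitted by the average shoe: {fitted:.0%}")   # about 26%
```

Under these assumptions, the government’s average-sized shoe would fit only about one person in four; the other three would be issued shoes that are too big or too small. The group mean is accurate as a statistic and useless as a prescription.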

EBM Selects Evidence

Another problem with EBM’s approach of using only the “best evidence” is that it cuts down the amount of information available to doctors and patients making important treatment decisions. The evidence allowed in EBM consists of selected large-scale trials and meta-analyses, which attempt to make a conclusion more significant by aggregating results from wildly different groups. This constitutes a tiny percentage of the total evidence. Meta-analysis rejects the vast majority of the available data, because it does not meet the strict criteria for EBM. This conflicts with yet another scientific principle: do not select your data. Rather humorously in this context, science students who select the best data (to draw a graph of their results, for example) will be penalized and told not to do it again.

EBM Selects Evidence - Graph Example

One of the first lessons for science students is to not select the best evidence; all data must be considered. The lines indicate how using just the “best” data gives a better, though misleading, fit.

More EBM Problems

The problems with EBM continue. It breaks other fundamental laws, this time from the field of cybernetics, which is the study of systems control and communication. The human body is a biological system and, when something goes wrong, a medical practitioner attempts to control it. To take an example, if a person has a high temperature, the doctor could suggest a cold compress; this might work if the person was hot through over-exertion or too many clothes. Alternatively, the doctor may recommend an antipyretic, such as aspirin. However, if the patient has an infection and a raging fever, physical cooling or symptomatic treatment might not work, as it would not quell the infection.

In the above case, a doctor who overlooked the possibility of infection has not applied the appropriate information to treat the condition. This illustrates a cybernetic concept known as requisite variety, first proposed by an English psychiatrist, Dr. W. Ross Ashby. In modern language, Ashby’s law of requisite variety means that the solution to a problem (such as a medical diagnosis) has to contain the same amount of relevant information (variety) as the problem itself. Thus, the solution to a complex problem will require more information than the solution to a straightforward problem. Ashby’s idea was so powerful that it became known as the first law of cybernetics. Ashby used the word variety to refer to information or, as an EBM practitioner might say, evidence.

As we have mentioned, EBM restricts variety to what it considers the “best evidence.” However, if doctors were to apply the same statistically-based treatment to all patients with a particular condition, they would break the laws of both cybernetics and statistics. Consequently, in many cases, the treatment would be expected to fail, as the doctors would not have enough information to make an accurate prediction. Population statistics do not capture the information needed to provide a well-fitting pair of shoes, let alone to treat a complex and particular patient. As the ancient philosopher Epicurus explained, you need to consider all the data.

Restricting our information to the “best evidence” would be a mistake, but it is equally wrong to go to the other extreme and throw all the information we have at a problem. Just as Goldilocks in the fairy-tale wanted her porridge “neither too hot, nor too cold, but just right” doctors must select just the right information to diagnose and treat an illness. The problem of too much information is described by the quaintly-named curse of dimensionality, discussed further below.

A doctor who arrives at a correct diagnosis and treatment in an efficient manner is called, in cybernetic terms, a good regulator. According to Roger Conant and Ross Ashby, every good regulator of a system must be a model of that system. Good regulators achieve their goal in the simplest way possible. To achieve this, the diagnostic process must model the systems of the body, which is why doctors undergo years of training in all aspects of medical science. In addition, each patient must be treated as an individual. EBM’s group statistics are irrelevant, because large-scale clinical trials do not model an individual patient and his or her condition; they model a population, albeit somewhat crudely. They are thus not good regulators. Once again, a rational patient would reject EBM as a poor method for finding an effective treatment for an illness.

Real Science Means Verification

As we have implied, science is a process of induction and uses experiments to test ideas. From a scientific perspective, therefore, we trust but verify the findings of other researchers. The gold standard in science is called Solomonoff induction, named after the cybernetics researcher Ray Solomonoff. The power of a scientific result is that you can easily repeat the experiment and check it. If it cannot be repeated, for whatever reason (because it is untestable, too difficult, or wrong), a scientific result is weak and unreliable. Unfortunately, EBM’s emphasis on large studies makes replication difficult, expensive, and time-consuming. We should be suspicious of large studies, because they are all but impossible to repeat and are therefore unreliable. EBM asks us to trust its results but, to all intents and purposes, it precludes replication. After all, how many doctors have $40 million and five years available to repeat a large clinical trial? Thus, EBM avoids refutation, which is a critical part of the scientific method.

In their models and explanations, scientists aim for simplicity. By contrast, EBM generates large numbers of risk factors and multivariate explanations, which makes choosing treatments difficult. For example, if doctors believe a disease is caused by salt, cholesterol, junk food, lack of exercise, genetic factors, and so on, the treatment plan will be complex. This multifactorial approach is also invalid, as it leads to the curse of dimensionality. Surprisingly, the more risk factors you use, the less chance you have of getting a solution. This finding comes directly from the field of pattern recognition, where overly complex solutions are consistently found to fail. Too many risk factors mean that noise and error in the model will overwhelm the genuine information, leading to false predictions or diagnoses. Once again, a rational patient would reject EBM, because it is inherently unscientific and impractical.
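The curse of dimensionality can be demonstrated with a toy simulation. The data here are entirely synthetic, and the nearest-centroid classifier is a stand-in of our own choosing (not a method discussed in the text): two groups differ in a single informative measurement, and every additional “risk factor” is pure noise. Trained on a small sample, the classifier loses accuracy as noise factors are piled on.

```python
import random

random.seed(42)

def make_point(label, n_noise):
    """One subject: an informative value (group means 0 vs 1) plus noise factors."""
    signal = random.gauss(1.0 if label else 0.0, 1.0)
    return [signal] + [random.gauss(0.0, 1.0) for _ in range(n_noise)]

def accuracy(n_noise, n_train=20, n_test=200):
    """Train a nearest-centroid classifier and measure its test accuracy."""
    train = [(make_point(lbl, n_noise), lbl)
             for lbl in (0, 1) for _ in range(n_train)]
    dims = n_noise + 1
    centroids = []
    for lbl in (0, 1):
        pts = [p for p, l in train if l == lbl]
        centroids.append([sum(p[d] for p in pts) / len(pts) for d in range(dims)])
    correct = 0
    for lbl in (0, 1):
        for _ in range(n_test):
            p = make_point(lbl, n_noise)
            dists = [sum((p[d] - c[d]) ** 2 for d in range(dims))
                     for c in centroids]
            correct += int(dists[lbl] < dists[1 - lbl])
    return correct / (2 * n_test)

for n_noise in (0, 10, 100, 400):
    print(f"{n_noise:3d} noise factors: accuracy = {accuracy(n_noise):.0%}")
```

With no noise factors the classifier performs close to the best possible for this problem; as irrelevant factors accumulate, the random variation they add to the estimated centroids swamps the one genuine signal and accuracy drifts toward a coin flip, which is exactly the failure mode described above.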

Medicine for People, Not Statisticians

Diagnosing medical conditions is challenging, because we are each biochemically individual. As explained by an originator of this concept, nutritional pioneer Dr. Roger Williams, “Nutrition is for real people. Statistical humans are of little interest.” Doctors must encompass enough knowledge and therapeutic variety to match the biological diversity within their population of patients. The process of classifying a particular person’s symptoms requires a different kind of statistics (Bayesian), as well as pattern recognition. These have the ability to deal with individual uniqueness.

The basic approach of medicine must be to treat patients as unique individuals, with distinct problems. This extends to biochemistry and genetics. An effective and scientific form of medicine would apply pattern recognition, rather than regular statistics. It would thus meet the requirements of being a good regulator; in other words, it would be an effective approach to the prevention and treatment of disease. It would also avoid traps, such as the ecological fallacy.

Personalized, ecological, and nutritional (orthomolecular) medicines are converging on a truly scientific approach. We are entering a new understanding of medical science, according to which the holistic approach is directly supported by systems science. Orthomolecular medicine, far from being marginalized as “alternative,” may soon become recognized as the ultimate rational medical methodology. That is more than can be said for EBM.

Every Good Doctor

Every Good Doctor Must Represent the Patient:
The Malfunction of Evidence-Based Medicine

by Daniel L. Scholten

The Orthomolecular Medicine News Service (OMNS) released this paper on cybernetics in medicine on January 3.

As part of their recent OMNS critique of the practice of “evidence-based” medicine (EBM) (1), researchers Steve Hickey and Hilary Roberts argue that the legalistic requirements of EBM, such as its insistence on treatments that have met the “gold standard” of “well-designed, large-scale, double-blind, randomized, placebo-controlled, clinical trials”, actually prevent doctors from effectively diagnosing and treating their patients. In this article, I would like to elaborate on this part of their argument, which they warrant with a piece of cybernetic common-sense (2) known variously as the “Good-Regulator” theorem (GRT), or “Conant and Ashby” theorem, after the researchers who published its original proof. (3)

No need to worry about the technical jargon. If you can read these words then you have already understood something important about this result from the system sciences, even if you don’t call it that. (4) Likewise, if you have ever used a street map to navigate a new city, a book index to browse the contents of a book, or perhaps an x-ray image or lab report to diagnose a patient’s ailment, then you are already quite comfortable handling at least the gist of this conceptual power-tool, which can be paraphrased as follows: every good solution to a problem must be a representation of that problem. (5)

What’s It All about?

Here are several other ways to paraphrase the theorem:

  • Every good regulator of a system must be a model of that system.
  • Every good key must be a model of the lock it opens. (6)
  • Control implies resemblance.
  • Identical situations imply identical responses.

The basic idea of the theorem can be illustrated with simple thought experiments. (7) Just imagine trying to order a meal in a new restaurant without using a menu, or assemble a piece of furniture without an instruction pamphlet, or diagnose diabetes without a blood-sugar lab report. Of course, you could probably muddle your way through any number of situations with roughly the same basic set of skills that was available to our preliterate ancestors, but the unassailable fact of the matter is that maps, menus, x-ray images, and medical lab reports are potent performance enhancers and without them we risk getting lost, going hungry, or medically misdiagnosing. (8,9)

Why is There a Problem?

The truth of this can be easily obscured. One problem is that some representations are clearly better than others. At the extreme we have outdated maps, poorly written instruction pamphlets and menus with mouthwatering images that turn out to represent bland, salty, or greasy food. Another problem is that representations – from street-maps to MRI scans – can be costly to prepare. Furthermore, the expertise required to prepare or use them is costly to acquire, as measured by the years, dollars, and brain-sweat it takes to complete one’s formal education. The upshot here is that those paying the costs of such representations might reasonably wonder whether those costs outweigh the benefits. Perhaps there is a cheaper way to enhance the performance of our system regulators, to find “good solutions” to our problems, and “good keys” to fit the locks we wish to open.

One common work-around is to rely on a memorized “mental model.” Although this approach works fine for simple tasks, such as a quick stop at the grocery store to pick up extra milk, as soon as a task becomes even moderately complex, the limitations of working-memory (10) quickly render this approach useless, little better than using no representation at all. Another approach is to simply avoid the sorts of complex behavior that require us to use external representations. In the end, we must all rely heavily on this approach, if for no other reason than because the cost, time and effort required to learn how to use, say, ultrasound imaging equipment, necessarily blocks one from simultaneously learning to use, say, actuarial modeling techniques, or perhaps the Hubble Space telescope. To choose is to renounce. But this approach also has its limits and the total avoidance of such complex behaviors – perhaps due to illiteracy, innumeracy or maybe a deliberate decision to return to a preliterate hunter-gatherer way of life – is just a different sort of burden.

Yet a third way to dodge expensive models or modeling expertise is to look for “multipurpose” representations; for instance, generalized maps, menus, and user-guides, that can be reused for many different cities, restaurants, and types of equipment. (11) According to Hickey and Roberts, this third approach is actually the one that EBM advocates.

One Key Cannot Fit All Locks

They illustrate their argument with the above-mentioned lock-and-key paraphrase of the Good-Regulator theorem. To follow it, we start by making the analogy that a given patient’s symptoms are a “lock” the doctor hopes to “open.” It follows then, by the Good-Regulator theorem, that the doctor’s diagnostic and therapeutic behaviors must “model” (represent) these symptoms. A critical qualification, however, is that the doctor must model these symptoms as they occur within the specific context of the patient’s genotypically and phenotypically “characteristic anatomy, physiology, and biochemistry.” (12)

Of course, this does not mean that the doctor must perform some outlandish Jim Carrey-esque caricature of the patient, perhaps donning the patient’s same clothing, hairstyle, speech patterns, behavioral mannerisms, etc. Rather, it means that the associations that arise between the doctor’s diagnostic and therapeutic responses and the patient’s symptoms must be characterized by the same sort of conventional reliability that holds between the splashes of color on, for example, a map of Manhattan and the real streets, parks, and buildings in the actual city of Manhattan.

Suppose a given splash of color represents Lincoln Center. If that splash only occasionally represented Lincoln Center – or if it sometimes represented Central Park, and sometimes, say, the South Street Seaport – you would surely be confused. Even though one could use the same splash on a map to represent two or more real-world landmarks, common sense and strong cultural conventions require each color splash to reliably represent just one particular real-world location. As established by Conant and Ashby’s Good-Regulator theorem, a doctor’s responses must have the same sort of reliable association to a given patient’s symptoms. This reliability allows us to construe the doctor’s responses as a representation or model of the patient’s symptoms. (13)

“Evidence-based” medicine (EBM), with its insistence on treatments that have been confirmed by “well-designed, large-scale, double-blind, randomized, placebo-controlled, clinical trials” (14) will almost always cripple a doctor’s ability to model symptoms as they actually occur within the anatomically, physiologically, and biochemically specific context of a given patient. By way of analogy, consider a whimsically allegorical “evidence-based locksmith” (EBL) attempting to open a particular lock with the latest “Whiz-Bang EBL Master Key,” recently developed in accord with a meta-analysis of hundreds of “well-designed, large-scale, double-blind, randomized, placebo-controlled clinical trials.” Those trials have determined the critical attributes of the perfectly average key, and the patently absurd claim is that the Whiz-Bang Master Key, by virtue of its perfectly average attributes, can now be used to open any particular lock.

Pretty silly, isn’t it?

Clearly such a perfectly average key would open very few locks, if any. To reason otherwise is to commit the “ecological fallacy,” which Hickey and Roberts summarize as “the assumption that a population value…can be applied to a specific individual.” (15) If one tries to shove such a key into some particular lock, twisting and pulling in an effort to force it, then that violates the Good-Regulator Theorem, which reminds us that a good key must actually fit the lock it’s supposed to open, not some other lock, and especially not some hypothetical perfectly average lock. The same goes for actual medical practice.

EBM Stops Doctors from Effective Practice

We still need scientific research and the data it presents. Representations are potent performance enhancers. Just imagine what our lives would be like without grocery lists, the periodic table of the elements, and ultrasound imaging techniques. But however obvious and abundant the evidence might be, medical judgment is impaired by an apparent lapse of common sense. The practice of EBM may well be a consequence of the legal system and pharmaceutical corporate bottom line. In other words, money.

But whatever the cause of such impairment, the limitations of real people, real illnesses and real doctors point to the reality that EBM is DOA. The patient is not a statistic. The treatment should not be a statistic. Every good doctor must represent the patient. Personally.

Daniel L. Scholten has a degree in mathematical sciences and over 12 years of information technology experience as a programmer, analyst and consultant. He founded The Good-Regulator Project, an independent, volunteer research effort dedicated to increasing public awareness and understanding of the crucial role played by models and representations in the regulation of complex systems.

Notes & References:

1. Hickey, Steve and Roberts, Hilary, Tarnished Gold: The Sickness of Evidence-Based Medicine, 2011, CreateSpace.

2. A more complete list of “mostly self-evident” cybernetic principles, including the Good-Regulator Theorem, has been compiled by Francis Heylighen. See “Principles of Systems and Cybernetics: An Evolutionary Perspective”, available on-line.

In his paper, Heylighen distinguishes between Conant and Ashby’s “Good-Regulator Theorem” and a “Law of Requisite Knowledge”, which states that “In order to adequately compensate perturbations, a control system must ‘know’ which action to select from the variety of available actions.” Note that although Heylighen distinguishes between them, he also states that these are equivalent principles.

3. Conant, Roger C. and Ashby, W. Ross, 1970, “Every Good Regulator Of A System Must Be A Model Of That System”, International Journal of Systems Science, vol. 1, No. 2, 89-97.

4. Those of us who can read sometimes take it for granted. Many don’t have this luxury. According to a recent UNESCO fact sheet, in 2009 more than 16% of the world’s adults (793 million people) were illiterate, with more than 64% of these being women. “Adult and Youth Literacy”, UIS Fact Sheet, September 2011, no. 16, The Unesco Institute for Statistics. Available online at

5. I have argued for the plausibility of this paraphrase in Scholten, Daniel L., 2010, “Every Good Key Must Be A Model Of The Lock It Opens: The Conant And Ashby Theorem Revisited”, available on-line. It is also congruent with an observation made by Herbert A. Simon: “Solving a problem means representing it so as to make the solution transparent”; Simon, Herbert A., 1981, The Sciences of the Artificial, 2nd edition, MIT Press, Cambridge, MA; as cited in Norman, Donald A., Things That Make Us Smart: Defending Human Attributes in the Age of the Machine, pg. 53, 1993, Basic Books, New York, NY.

6. Scholten, ibid.

7. Although I believe that such thought experiments are justified in the context of the present argument, their use in general should not be taken lightly. After all, as James Robert Brown notes, they have been used to refute the Copernican world view. See, Brown, James Robert, 1991, The Laboratory of the Mind: Thought Experiments in the Natural Sciences, Routledge, New York, NY; page 35. See also, Brown, James Robert and Fehige, Yiftach, “Thought Experiments”, The Stanford Encyclopedia of Philosophy (Fall 2011 Edition), Edward N. Zalta (ed.), URL =

8. A critical distinction can be made between an idealized good-regulator model, which is really a dynamic entity, and its “technical specification”, or what we might call its control-model. (Scholten, Daniel L., “A Primer For The Conant And Ashby Theorem”.)

Another distinction to be recognized is that whereas the good-regulator model is dynamic, the control-model may be either static or dynamic.

As an example of a static control-model, consider a written recipe for roast duck, being used by an inexperienced cook to prepare an evening meal for guests. In this case, the system to be regulated consists of the various ingredients and kitchen tools to be used to create the meal, the dynamic good-regulator model is the human being doing the cooking, and the recipe is what we are calling the static control-model. The recipe is a control-model because the human being uses it, like a technical specification, to guide (control) his behavior and thus to “turn himself into” a good-regulator model.

As an example of a dynamic control-model, consider the case in which a child learns to use an idiomatic expression such as “two wrongs don’t make a right” by overhearing an adult use that expression in a conversation. In this case the system to be regulated is a particular portion of some conversation in which the child is participating, the dynamic good-regulator model is the child, and the dynamic control-model is the adult role-model. The idea here is that the adult’s behavior serves as a type of dynamic technical specification that the child then uses to control his or her own behavior in the context of the given conversation.

It is important to make these distinctions between a dynamic good-regulator model and its static or dynamic technical specification because otherwise the GRT appears to prove that the technical specification (control-model) is necessary, which is, I believe, a misreading of the theorem. The GRT only proves that the good-regulator model is necessary. On the other hand, it does appear to be an empirical fact that such technical specifications are also necessary. The thought-experiments illustrate this explicitly, although they also help us to see what our behavior looks like when we aren’t acting as good-regulator models.

(For an in-depth, authoritative analysis of behavioral modeling, see Bandura, A., 1986, Social Foundations Of Thought & Action: A Social-Cognitive Theory, Prentice-Hall, Inc., Englewood Cliffs, New Jersey.)

9. Let’s recognize that one uniquely human characteristic is our astonishing capacity to simulate (in the manner of a Turing machine) the behavior of an enormous variety of much simpler and more specific machines. I have written more extensively about this in the “Three-Amibos Good-Regulator Tutorial,” available on-line.

10. For a recent accessible discussion, see Klingberg, Torkel, 2009, The Overflowing Brain: Information Overload And The Limits Of Working Memory, Oxford University Press, New York, NY.

11. I am making the assumption here that the multipurpose model is meant to apply to cities, restaurants, equipment, etc. that are not replicas of each other. Clearly there is no problem if all owners of the same brand of laptop computer use the same user-guide.

12. Hickey and Roberts, Tarnished Gold, page 43. Hickey and Roberts emphasize that it is not simply the symptoms that matter. Also important is the particular person in which those symptoms occur, where the particularities of that person have been determined by the complex interactions between that person’s genes and the environments in which those genes have been expressed over the person’s lifetime. In their discussion of this notion of “biochemical individuality”, Hickey and Roberts cite Williams, R., 1998 (1956), Biochemical Individuality: Basis for the Genetotrophic Concept, McGraw-Hill, New York.

13. In the words of Conant and Ashby “…the theorem says that the best regulator of a system is one which is a model of that system in the sense that the regulator’s actions are merely the system’s actions as seen through a mapping….” Conant and Ashby, 1970, pg. 96.

14. Hickey and Roberts refer to this ponderous, adjectival freight-train as the “EBM-mantra”; ibid, page 164.

15. Ibid, page 24. Hickey and Roberts attribute the term to Robinson, W.S., 1935, “Ecological correlations and the behavior of individuals,” Journal of the American Statistical Association, 30, 517-536.

Rainbow Bear Returns

Recently, the mainstream media described the results of a request to put a health claim on bottled water. The manufacturers wished to claim that drinking water prevents dehydration.

Specifically, they wanted to include the statement “regular consumption of significant amounts of water can reduce the risk of development of dehydration”.

Since the word dehydration means lack of water, Rainbow Bear expected there would be no problem with this assertion. But she was wrong. The media reported:

“A European Food Safety Authority report stated that low levels of water in body tissue was a symptom of dehydration rather than a risk factor that could be controlled by drinking water.”

In other words, the European Food Safety Authority think that lack of water is not caused by a deficiency of water but is a symptom of it. Rainbow Bear thought the “scientists” involved should try the experiment themselves; if they were drinking whiskey to prevent their dehydration, it might explain their response.

Rainbow Bear could not resist telling Dr Grouch of this bizarre situation.