The practice of evidence-based medicine (EBM) is held out as the gold standard for evaluating how best to treat a given condition. As this article will outline, nowhere is that standard in greater disrepute than in the "evidence-based" guidelines currently applied to the treatment of mold-related illness.
What is Evidence-Based Medicine?
The term "evidence-based medicine" (EBM) was first used in 1990 by G.H. Guyatt, a professor at McMaster University in Canada, but a broader description of EBM appeared in 1992, when the Evidence-Based Medicine Working Group published a new approach to teaching the practice of medicine in JAMA. (1) The article stressed that “evidence-based medicine de-emphasizes intuition, unsystematic clinical experience, and pathophysiological rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research.(2)” The article emphasized that this would require “new skills of the physician, including efficient literature searching and application of formal rules of evidence evaluating the clinical literature.(3)” Tradition, anecdote and theoretical reasoning based on the basic sciences would be replaced by evidence from high-quality randomised controlled trials and observational studies, in combination with clinical expertise and the needs and wishes of patients.(4)
On the Internet, numerous articles discuss other potential definitions of the term "evidence-based medicine"(5). Sackett et al. define EBM as “the integration of best research evidence with clinical expertise and patient values”(6). Another definition states that “EBM is nothing more than a process of life-long, self-directed learning in which caring for patients creates the need for clinically important information about diagnosis, prognosis, therapy, and other clinical and health care issues.” A further definition suggests that EBM is “an evolutionary progression of knowledge based on the basic and clinical sciences and facilitated by the age of information technology.(7)”
Many of the above definitions arose from a BMJ article published in 1996, which stated that EBM is the conscientious, explicit and judicious use of the best current evidence in making decisions about the care of individual patients. The practice of evidence-based medicine involves integrating individual clinical expertise with the best available external clinical evidence from systematic research.(8)
Evidence-based medicine requires asking relevant clinical questions concerning the patient’s issues, performing a literature search for relevant research data to support or refute diagnostic and/or treatment approaches, critically appraising the literature regarding its validity and applications, and then implementing one’s findings and insights in a clinical setting.
Twenty-five years ago, evidence-based medicine, which uses the medical literature to guide clinical practice, was considered profound enough to be described by the original authors as a paradigm shift in the way medicine was to be practiced. The authors quoted Thomas Kuhn’s description of a scientific paradigm as “[a way] of looking at the world that defines both the problems that can legitimately be addressed and the range of admissible evidence that may bear on the solution”(9). When defects in an existing paradigm accumulate to the point that it is no longer tenable, the paradigm is challenged and replaced by a new way of looking at the world.
Some of the shift toward evidence-based medicine was driven by a loss of confidence in the traditional medical model and in the studies that had shaped those practices. Larry Dossey, M.D., commented on many of the scandals that rocked the confidence of healthcare consumers at the end of the last century(10). “The uncertainties of medicine are cause for celebration,” Dossey wrote. “Modern medicine is losing some of its invincibility. Many of the rules of good health that have guided patients and physicians for decades have taken a beating from which they may not recover. The almost blind allegiance we once had to the treatments offered has been severely undermined by these studies — some of the absolute certainties are no longer as absolutely certain.”
First, there was the Vioxx scandal, in which many people died of heart disease after taking what was thought to be a relatively innocuous anti-inflammatory drug. Compounding the problem, this particular drug had been marketed as relatively safe. Furthermore, evidence emerged that the manufacturer had known for some time that the drug carried an increased incidence of cardiac side effects but had chosen to conceal these negative findings to protect its profits.
In the Women’s Health Initiative study (11), hormone replacement therapy (HRT), specifically Premarin and Provera, once a mainstay of post-menopausal symptom management and long considered safe, was shown to actually increase women’s risk of heart disease, stroke, thrombosis and breast cancer. The risks of increased cardiovascular disease (CVD) and breast cancer were judged to far outweigh the benefits of osteoporosis protection and colon cancer reduction. Millions of women, to the fanfare of massive nationwide news coverage, were immediately withdrawn from hormone replacement therapy as a result of these findings, and sales of these two drugs dropped 50% in one month. The American Association of Clinical Endocrinologists (AACE), the American Congress of Obstetricians and Gynecologists (ACOG) and the North American Menopause Society (NAMS) recommended HRT use only for short-term symptom control.
Much criticism was levelled against the WHI study when the data were placed in clinical perspective and further studies reached different conclusions. When the results of the WHI and the Heart and Estrogen/Progestin Replacement Study (HERS) trial were reassessed, they were shown not to apply to younger women, specifically those aged 50-60. In most of the subsequent studies, there were no cardiovascular deaths among 6,000 women on HRT, compared with several deaths in the placebo group (12). There was overwhelming evidence that the anti-atherosclerotic effect of HRT depended on the time of initiation and that early initiation was protective.
With regard to knee surgery, researchers showed that arthroscopic surgery on an arthritic knee, once a mainstay of surgical intervention for this condition, was no more effective than administering an anesthetic, making a skin incision and performing a sham operation. The outcomes in terms of pain and symptoms after the two procedures were virtually the same. The value of mammograms has also been seriously questioned, and it is unclear whether mammography has any influence on the number of women dying from breast cancer each year.
These observations are supported in the literature, which shows that many medical findings and treatment recommendations once taken as the gold standard do not stand the test of time. John Ioannidis, a meta-researcher who has built his career on examining the validity of medical research findings, has shown repeatedly in published studies that as much as 90 percent of the published medical information that doctors rely on is flawed (13). Eighty percent of non-randomized studies (the most common type of study) turn out to be wrong, as do 25 percent of gold-standard randomized studies and 10 percent of platinum-standard large randomized trials. One of his papers (14) discussed his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science and using the peer-review process to suppress opposing views (15). In perhaps one of the most ignominious examples of medical science undergoing a dramatic reversal in treatment approach, Dr. Egas Moniz received a Nobel prize in 1949 for pioneering the frontal lobotomy in 1936 as a treatment for incurable mental illness (16). Times do change, and sometimes they change radically.
A Wall Street Journal article by Ron Winslow, entitled “Study Questions Evidence Behind Heart Therapies” (17), discussed a study revealing that fewer than 11 percent of 2,700 recommendations commonly made by cardiologists were supported by scientific evidence. Furthermore, many of the dogmatic recommendations and guidelines issued by cardiologists are written by individuals with financial ties to pharmaceutical companies (18). Another study showed that 85 percent of individuals who had stents or angioplasties to treat blocked coronary arteries did not need them, and the group that underwent these procedures ended up much sicker than the individuals who were treated with drugs alone (19). Clearly, more critical evaluation of standards of practice was needed.
In its original 1992 paper, the Evidence-Based Medicine Working Group set out specific criteria for assessing the strength of evidence that supports clinical decisions (20). Has the diagnostic test been evaluated in a patient sample that included an appropriate spectrum of mild and severe disease, treated and untreated disease, and individuals with different but commonly confused disorders (21)? Was there an independent, blind comparison with a gold standard of diagnosis (22)? Was the assignment of patients to treatments randomized (23)? Were all patients who entered the study accounted for at its conclusion (24)? Lastly, were explicit methods used to determine which articles to include in the review (25)?
According to the original JAMA article, the residents “learn to present the methods and results in a succinct fashion, emphasizing only the key points. A wide-ranging discussion, including issues of underlying pathophysiology and related questions of diagnosis and management, follow the presentation of articles. They always substantiate decisions or acknowledge the limitations of the evidence and discuss the literature retrieval, the methodology of papers and the application to the individual patient. (27)” The article emphasised that this “new paradigm will remain an academic mirage with little relation to the world of day-to-day clinical practice unless physicians-in-training are exposed to role models who practice evidence-based medicine”. McMaster University recruited internists with training in clinical epidemiology and the “skills and commitment [needed] to practice evidence-based medicine. (28)” This is a tall order for a busy, clinically orientated profession, and even this article concedes that practicing in this way is fraught with complexity and difficulty. Furthermore, in the original paper the authors asked whether advocating evidence-based medicine in the absence of definitive evidence of its superiority in improving patient outcomes is an internal contradiction (29).
One of the challenges facing a clinically trained, clinically based practitioner who does no in-house research and whose practice is full of competing demands is how best to evaluate the available evidence and make the best treatment decisions for patients who present every day with complex problems. The average physician spends well over 40 hours per week in the office seeing patients, managing staffing issues and dealing with paperwork.
From these beginnings, evidence-based medicine has had some major achievements. The Cochrane Collaboration was established to collate and summarise evidence from clinical trials; methodological and publication standards were established for primary and secondary research; national and international infrastructures were built to develop and update clinical practice guidelines; resources and courses were developed to teach critical appraisal; and new knowledge bases were built for implementation and knowledge translation (30).
The authors of a critical 2014 BMJ paper entitled “Evidence-based medicine: A movement in crisis?” suggested launching a new campaign for what they termed “real evidence-based medicine” and set out revised criteria describing what such medicine would look like and how the current crisis might be remedied.
I believe some of these revised criteria have been met by Dr. Shoemaker and his co-authors. Dr. Shoemaker has published critiques of the guidelines that passed for evidence-based medicine in the management of mold illness before his ground-breaking work. The American College of Occupational and Environmental Medicine (ACOEM) and the American Academy of Allergy, Asthma and Immunology (AAAAI) published guidelines, in 2002 and 2006 respectively, asserting that mold exposure was not capable of producing human illness. Much of the ACOEM “evidence” was based on opinion papers by defense consultants in litigation over water-damaged buildings (Bruce Kelman and Ronald Gots) and cited no human studies as reference material (41). Dr. Shoemaker cited a Wall Street Journal article and an article by Craner that exposed the bias and concealed conflicts of interest of the ACOEM authors: “there is nothing evidence-based in either the ACOEM or AAAAI, as that process begins with the observation of affected patients.” In criticizing these opinion papers, Dr. Shoemaker is clearly applying the very criteria that define how evidence-based medicine should be practiced and demonstrating that the guideline authors failed to meet them.
I have relied almost exclusively on Dr. Shoemaker and the co-authors of his papers to understand the complexity of this multilayered condition. Dr. Shoemaker insists that the diagnosis and treatment of this condition must follow the guidelines set out by his own research and his published clinical practice and treatment guidelines. It is obvious that he has followed an evidence-based approach in this undertaking. Dr. Shoemaker began his original work on CIRS when he observed that patients with a mysterious disease seemed to improve when prescribed cholestyramine, a lipid-lowering drug. Building on that original observation, he explored the biology and pathophysiology of the disease process in affected patients, using the best evidence available at the time and without the influence of financial interests. As he learned, he explored further hypotheses, published numerous studies, wrote books, collaborated with other researchers and lectured on the subject. He continues to utilise the best evidence-based practices in an attempt to understand the genomics that underlie CIRS and how the use of VIP (and the rest of the CIRS protocol) influences the proteomic and NeuroQuant findings of affected individuals.
The proof of whether an evidence-based approach is effective in managing CIRS patients is whether the patients studied enjoy improved health compared with controls. At present, there are no long-term randomized trials of the Shoemaker approach to treating CIRS; in other words, his research may not have fulfilled the Level I criteria for the type of research that best characterises evidence-based medicine. His work has nonetheless systematically fulfilled most of the other criteria, in that it is patient-centered and documents responses to care that are quantifiable and reproducible.
Resources
(*1) Evidence-Based Medicine Working Group. (1992). “Evidence-based medicine: A new approach to teaching the practice of medicine.” JAMA, 268, 2420-5.
(*2) Ibid
(*3) Ibid
(*4) Greenhalgh, T., Howick, J., Maskrey, N. (2014). “Evidence-based medicine: A movement in crisis?” BMJ, 348.
(*5) http://researchguides.uic.edu/ebm
(*6) Sackett D.L., et al., “Evidence-Based Medicine: How to Practice and Teach EBM.” Edinburgh: Churchill Livingstone.
(*7) Doherty, Steve. “Evidence-based medicine: Arguments for and Against.” Emergency Medicine Australasia 2005; 17: 307-13.
(*8) Sackett, D.L., Rosenberg, W.M.C., Gray, J., Haynes, R.B., Richardson, W.S. (1996). “Evidence-based medicine: what it is and what it isn’t.” BMJ, 312, 71-72.
(*9) Kuhn, T.S., The Structure of Scientific Revolutions. Chicago, Ill: University of Chicago Press; 1970
(*10) Dossey, L., Alternative Therapies Sept/Oct 2002, Vol. 8, No.5 32
(*11) Rossouw, J.E., Anderson, G.L., Prentice, R.L., LaCroix, A.Z., Kooperberg, C., Stefanick, M.L., Jackson, R.D., Beresford, S.A., Howard, B.V., Johnson, K.C., Kotchen, J.M., Ockene, J., Writing Group for the Women's Health Initiative. (2002). “Risks and benefits of estrogen plus progestin in healthy postmenopausal women: principal results from the Women's Health Initiative randomized controlled trial.” JAMA, 288(3), 321-33.
(*12) Family Practice News. (2003). 33(11), 1-2
(*13) Freedman D., (2010). Lies, Damned Lies, and Medical Science. The Atlantic. Nov 2010 Issue.
(*14) Ioannidis, J.P.A., (2005). Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124
(*15) Freedman D., (2010). Lies, Damned Lies, and Medical Science. The Atlantic. Nov 2010 Issue.
(*16) Csoka, A. (2015). “Innovation in medicine: Ignaz the reviled and Egas the regaled.” Med Health Care Philos, Dec 4.
(*17) Winslow, R. (2009). “Study Questions Evidence Behind Heart Therapies.” Wall Street Journal, Feb 25, 2009.
(*18) Rogers, S., (2009). Total Wellness. Aug, p. 1.
(*19) Boden, W.E., et al. (2007). “Optimal medical therapy with or without PCI for stable coronary artery disease.” N Engl J Med, 356(15), 1503-16.
(*20) Evidence-Based Medicine Working Group. (1992). “Evidence-based medicine: A new approach to teaching the practice of medicine.” JAMA, 268, p. 2422.
(*21) Department of Clinical Epidemiology and Biostatistics, McMaster University. (1981). How to read clinical journals, II: to learn about a diagnostic test. Can Med Assoc J. 124:703-710.
(*22) Godfrey, K., (1985). Simple linear regression in medical research. N Engl J Med, 313, p. 1629-1636
(*23) Department of Clinical Epidemiology and Biostatistics, McMaster University. (1981). How to read clinical journals, V: to distinguish useful from useless or even harmful therapy. Can Med Assoc J, 124, 1156-1162.
(*24) Ibid
(*25) Ibid
(*26) University of North Carolina, Health Sciences Library. (2016). “Forming focused questions with PICO.”
(*27) Evidence-Based Medicine Working Group. (1992). Evidence-based medicine: A new approach to teaching the practice of medicine. JAMA, 268, p. 2420.
(*28) Ibid
(*29) Ibid
(*30) Ibid
(*31) Cohen, D. (2013). “FDA official: Clinical trial system is broken.” BMJ, 347.
(*32) Turner, E., Matthews, A.M., Linardatos, E., Tell, R., Rosenthal, R. (2008). “Selective publication of antidepressant trials and its influence on apparent efficacy.” N Eng J Med, 358, p. 252-60.
(*33) Perlis, R.H. et al., (2005). “Industry sponsorship and financial conflict of interest in the reporting of clinical trials in psychiatry.” Am J Psychiatry, 162(10), p.1957-60.
(*34) Vigen, R., et al. (2013). “Association of Testosterone Therapy with Mortality, Myocardial Infarction, and Stroke in Men with Low Testosterone Levels.” JAMA, 310(17), 1829-1836.
(*35) Le Couteur, D.G., Doust, J., Creasey, H., Brayne, C. (2013). “Political drive to screen for pre-dementia: Not evidence based and ignores the harm of diagnosis.” BMJ, 347.
(*36) Allen, D., Harkins, K. (2005). “Too much guidance?” The Lancet, 365, p. 1768.
(*37) Greenhalgh, T., Howick, J., Maskrey, N. (2014). “Evidence-based medicine: A movement in crisis?” BMJ, 348.
(*38) Roehr, B. (2012). “GlaxoSmithKline is fined record $3 billion in US.” BMJ, 345:e4568.
(*39) Greenhalgh, T., Howick, J., Maskrey, N. (2014). “Evidence-based medicine: A movement in crisis?” BMJ, 348.
(*40) Ibid
(*41) Shoemaker, R. (2010). Surviving Mold: Life in the Era of Dangerous Buildings. Otter Bay Books: Baltimore, p. 310-311.