The Ethics of Knowing the Mechanism of Action

One of the most common accusations made against homeopathic remedies is that no one can explain how they work. Samuel Hahnemann explained their mechanism of action as predicated on what we would today call a form of electro-chemical energy. The current scientific zeal to slice and dice everything in the known world into ever-decreasing particles has resulted in high-dilution research (discussed in other relevant articles).


On a more fundamental level, a great deal is known about the sphere of action of material doses of simple herbal tinctures and the lower-potency remedies frequently referred to in homeopathic drug compendiums (called Materia Medicas), so it is absolutely false to claim that “all” homeopathic remedies contain no molecules of the original starting substance, or are “just water”.


Homeopaths find it paradoxical that their form of medicine seems to be unfairly held to a higher burden of proof than the mainstream model, largely because their accusers seem unaware that a majority of conventional drugs are prescribed despite having an unknown mechanism of action.


Mechanism of action is defined as “the mechanism by which a pharmacologically-active substance produces an effect on a living organism or in a biochemical system…”1


Some interesting examples from conventional medicine include the 1950s use of tetracycline (an antibiotic) to treat rheumatoid arthritis, on the theory that the disease was caused by infectious agents. This was discontinued when rheumatoid arthritis came to be regarded as an auto-immune disease, and the standard treatment changed to gold compounds despite their mechanism of action being largely unknown.2


The mechanism of action of acetylsalicylic acid, a compound found naturally in white willow bark and better known as Aspirin, was not discovered until 1971, although it had been commercially available and prescribed since about 1899.3


The mechanism of action is in fact unknown for a large number of commonly prescribed drugs, including statins, most psychotropic/psychiatric drugs such as lithium, acetaminophen, Lysodren (a common chemotherapy drug) and… general anaesthetics. Would it be ethical to stop using those on surgical patients?

And this is by no means a comprehensive list.


It is very common in the pharmaceutical industry for drugs to be in vogue for a particular condition for a certain period of time, then be found to be useless, ineffective, dangerous, or more useful for some other condition than the one for which they were created.


We don’t have that problem with homeopathic remedies. The same ones that worked 200 years ago still work today, for the same conditions. Progress in homeopathy is about adding more remedies to our armamentarium.


The point is, health care professionals don’t have to know how a treatment works to use it. They just have to know that it does work.


Well… maybe. Or maybe not, if it’s conventional medicine. A study in the British Medical Journal detailing the breakdown of clinical evidence for 2,500 common medical treatments found:4



That’s a big grey area on the left, isn’t it? Add “unlikely,” “likely to be ineffective or harmful,” and “trade-off,” and that’s two-thirds of conventional medical treatments that are dubious.

Homeopaths have been mocked in some circles for referring to their vast 200-year history of clinical evidence, estimated at around 25,000 volumes.

The notion that randomized controlled trials are the exclusive benchmark for evidence of efficacy immediately excludes the clinical evidence of homeopathy and conventional/allopathic medicine alike.


In the words of the immortal Bard, methinks the critics doth protest too much.



The Current Status of Evidence-Based Medicine (EBM)

By Laurie J. Willberg


A recent study undertaken at McMaster University expresses concern about the ability of mainstream medical practitioners to keep up with current research and best-practice recommendations, especially those that illustrate the harms of current medical practices.


"Studies have shown that patients often do not receive the best care, or may even receive harmful or unnecessary care, due to difficulties in updating information for practice [10]. Recently published articles about ineffective or potentially harmful treatments should also be included in recommendations, as physicians may not realize there are recent studies that contradict previous evidence. For example, in our study, percutaneous angioplasty for renal artery stenosis was found to be harmful in the ASTRAL trial [14], though the evidence was previously unclear. (ed. But they were doing it anyway?!) This trial, published in November 2009, was not cited in PIER, which had been updated for this topic in December 2009 at the time of our study; it has since been incorporated into PIER. At the time of our study, DynaMed, Best Practice, and UpToDate had updated renal artery stenosis to include the ASTRAL trial [14] and recommended against this procedure. Another example, recombinant activated factor VII was found to be harmful in spontaneous intracerebral hemorrhage, but had not yet been included in Best Practice, which was last updated on January 11, 2009, at the time of our study [15]"

"Future research should investigate best methods of facilitating efficient updates of medical textbooks and uptake of these practice changes by health care professionals. Our study documents that these textbooks have some ways to go in keeping pace with high quality, clinically relevant new evidence. This new evidence has the capacity to impact their clinical recommendations, and potentially the quality of patient care."


So evidence-based medicine relies on two things:


(1) the delivery of the updated information, and


(2) access and use by practitioners.


One has to question the assumption in some circles that evidence-based medicine is a fait accompli, especially when leading medical journals indicate that it certainly is not. Moreover, patients have no way of knowing whether, or to what degree, their physicians spend time updating their knowledge base.


The term EBM was originally coined by epidemiologists at McMaster: "The original model of evidence-based medicine presented in 1992 in the Journal of the American Medical Association went something like this: A clinical question would arise at the point of care, and the physician would conduct a literature search yielding multiple (sometimes hundreds of) articles. The physician would then select the best articles from the results, evaluate the research, determine its validity and decide what to do – all while the patient waited in the exam room. In reality, "that almost never happens," says John Ely, MD, associate professor in the Department of Family Medicine at the University of Iowa College of Medicine. "It's just not practical. Even the original authors of EBM are now saying it isn't practical.""


The same source reveals the conclusions of a prior study that "On average, physicians spent less than two minutes seeking an answer to a question. The two most common sources physicians turned to were 1) fellow physicians, pharmacists and other individuals and 2) drug references and textbooks."


Despite the current trend of clinicians being prodded to seek quickie answers from meta-analyses, is there any evidence that patients are actually further ahead, or are we just experiencing Cookbook Medicine? "The number needed to treat is the most brilliant statistic that we have devised in the last hundred years," says Robert Flaherty, MD, a family physician at the Montana State University Student Health Service, who teaches courses in evaluating the medical literature. "It tells you how many patients you need to treat for one patient to benefit. For many of our popular treatments, the NNT is 100, which means our patients have a 1 percent chance of benefiting from it."

Moreover, "Almost all reports in the popular media, and many in the medical literature, present risk results as relative risk reductions rather than absolute risk reductions or number needed to treat. Why? Most likely, it has to do with the perceived impact on readers; that is, relative risk reductions often make data seem more impressive than they actually are. Lest one think that only the public can be misled in this way, a study done by Naylor et al showed a similar effect on primary care physicians' interpretation of risk data."


The same source illustrates a case in point: a study of patients with heart disease treated with a statin drug for five years, reported as reducing non-fatal heart attacks or coronary deaths by 30 percent. Translated into absolute risk, this means that if you treat 100 people for five years, you will prevent only about 2.4 of them from having an MI or coronary death.
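The gap between relative and absolute risk reduction is simple arithmetic. A minimal sketch follows; the 8 percent baseline event rate is an assumption, chosen only so that a 30 percent relative reduction yields the 2.4-per-100 absolute figure quoted above:

```python
# Relative vs. absolute risk reduction, and the number needed to treat (NNT).
# Assumed baseline: 8% of untreated patients have an MI or coronary death
# over five years (hypothetical, inferred to match the example's figures).
baseline_risk = 0.08
relative_risk_reduction = 0.30  # the "30 percent" as reported

# Absolute risk reduction = baseline risk x relative risk reduction.
absolute_risk_reduction = baseline_risk * relative_risk_reduction  # ~0.024

# NNT = 1 / absolute risk reduction: how many patients must be treated
# for one to benefit.
nnt = 1 / absolute_risk_reduction  # ~41.7 patients

events_prevented_per_100 = absolute_risk_reduction * 100  # ~2.4 per 100

print(f"ARR: {absolute_risk_reduction:.1%}")
print(f"NNT: {nnt:.0f}")
print(f"Events prevented per 100 treated: {events_prevented_per_100:.1f}")
```

The same arithmetic explains the Flaherty quote above: an NNT of 100 corresponds to an absolute risk reduction of just 1 percent, however impressive the relative figure sounds.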