I feel sorry for the big pharmaceutical companies. They are picked on by pretty much everyone who is not on their payroll, and even by some who are. Take Pfizer, GSK and other drug companies, for example, accused of deliberately hiding results from clinical trials showing that their antidepressants aren’t what they’re cracked up to be. Most of us, if we had to show the world the results of our personal clinical and social trials, would do the same.
But, yes, they should be held to a higher standard than I am: in part because they claim the high ethical ground and spend millions advertising their integrity; in part because they are developing products for people who may suffer or die if those products are not as advertised; and in part because they are abusing the trust of the patients who volunteer for the studies on the promise that the results will be useful to science and to other patients. Pretty good arguments.
So what is happening? Why are they hiding results?
The companies will claim that it is hard to publish studies with mixed or unclear results. They are right. Journals, particularly the higher profile ones, are less interested in publishing a study that does not show a positive result. The reason is that most of these studies have relatively small sample sizes (as was the case in the recent publication - about 153 patients per trial). Failure to show a statistically significant benefit does not mean that no benefit exists; a larger trial might well detect one, and the failure certainly does not indicate that the drug being tested has no effect. Thus the value of the information obtained from any single small trial showing no statistically significant effect is minimal. (It is only later, when small studies accumulate, that pooling the results of these trials reveals the true efficacy or inefficacy of a drug.)
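To make that last point concrete, here is a minimal sketch in Python of the standard fixed-effect, inverse-variance pooling behind such meta-analyses. The effect sizes and standard errors are invented for illustration; the point is that several small trials, none significant on its own, can combine into a decisive estimate.

    # Fixed-effect inverse-variance pooling of several small trials.
    # All numbers below are invented for illustration.
    import math

    # (effect estimate, standard error); each trial's 95% CI crosses zero,
    # so no single trial is statistically significant on its own
    trials = [(0.20, 0.15), (0.25, 0.18), (0.15, 0.14), (0.22, 0.16), (0.18, 0.17)]

    weights = [1 / se ** 2 for _, se in trials]        # inverse-variance weights
    pooled = sum(w * eff for (eff, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled effect {pooled:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
    # -> pooled effect 0.20, 95% CI (0.06, 0.33): the combined CI excludes zero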
This, however, is a weak argument. Nothing stops a company from publishing the results on its own web site. There are also many journals with the space to publish the results of smaller trials. Indeed, PLoS ONE is an excellent journal that aims to publish all research, no matter the result or its statistical significance.
It is also true that the FDA has copies of the complete results of these unpublished trials but is prohibited, by covenants set by government, from publishing trials submitted for a drug that was subsequently not approved: the information remains proprietary. The FDA ought to seek approval to make these trials publicly available on its web pages. And the pharmaceutical industry’s lobby group, the Pharmaceutical Research and Manufacturers of America, ought to lobby government to make this regulatory change instead of lobbying to keep the restriction in place.
But the companies’ problems go deeper than non-publication of results that might hinder the marketing and sales of products. As this recent study showed, many companies alter the results when it comes to publication, mainly by selecting study end-points that put the products in a more favourable light. Of the 51 studies whose results were published, 11 (22%) had their results altered, so that the published results differed from the results shown to the FDA. (The FDA’s records showed the drug as having no statistically significant effect; the published study showed a positive effect.)
How can this happen? Short of falsification of the data, the main reason is that every study has multiple possible outcomes of interest. In studies of antidepressants, we may be interested in changes in the mood of study patients, or in suicidal ideation, or in attempts at suicide, and so on. Yet each randomized clinical trial must declare, before the start of the study and at the time the protocol is written, a primary outcome of interest; the trial is designed around this primary outcome. In any trial there may also be interest in other outcomes. For example, in a study designed with suicidal ideation as the primary outcome, the result may show no statistically significant difference; yet on another outcome, say a particular scale of patient mood, the patients randomly assigned to the active compound may have higher (better) scores, and the difference may be statistically significant. Nonetheless, this is a post-hoc analysis and could easily have occurred by chance. By the rules of science it could have been a chance finding, and the result cannot be claimed as proof that the drug improves mood.
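The arithmetic behind that rule is worth seeing. As a hypothetical sketch: if a drug has no true effect and a trial examines several independent outcomes, each tested at the conventional 0.05 level, the chance that at least one comes out ‘significant’ by luck alone grows quickly.

    # Probability of at least one false-positive "significant" outcome when
    # a drug with no true effect is tested on k independent outcomes,
    # each at the conventional alpha = 0.05 level.
    alpha = 0.05
    for k in (1, 3, 5, 10):
        p_any = 1 - (1 - alpha) ** k
        print(f"{k:2d} outcomes -> P(at least one chance finding) = {p_any:.2f}")
    # 1 -> 0.05, 3 -> 0.14, 5 -> 0.23, 10 -> 0.40

This is exactly why a finding on an outcome not declared in the protocol cannot stand on its own.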
The fact is that major pharmaceutical companies are cheating. They are unethical. They are violating the promise made to study subjects that their participation in trials will benefit other patients and the enterprise of science. They need to be condemned, without reserve. Journals, too, need to be chastised for lax editorial oversight in publishing clinical trials that report non-primary outcomes as primary outcomes, thus misleading their readers and patients. Journals and reviewers have only to request the study protocols to verify that the submitted results precisely match the prespecified study hypotheses.
Will these companies ever get it right? Surely they will, I hope.
In a future blog I’ll get into what needs to happen within companies if they are to get it right.
Reference:
Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252-60.
Friday, January 18, 2008
Thursday, January 10, 2008
EQUATOR - Enhancing the Quality and Transparency of Health Research

Even General Motors’ design engineers know that to design a new car they must take into account every part that is essential to the vehicle. Further, they must be intimately aware of, and take into account, all national and state regulations that govern automobile design, construction and assembly. They have a check list.
This is important because of the human and economic costs that result if mistakes are made: a part omitted, or one used that does not meet standards generally agreed upon or required by law. At a minimum, having a check list ensures that the design engineers do not arrive at a final design having forgotten to include, say, a brake line, or spark plugs, or a motor to drive the windshield wipers. Even a defect or missing part discovered on the assembly line, or by the first customer leaving the showroom in the rain, would result in millions of design and production dollars lost. Engineering a new vehicle is expensive, often a three or four year project costing millions of dollars, not to be wasted by sloppy execution.
When it comes to health research, the design engineers are called ‘researchers’ and the product being designed is a research protocol that will lead to an answer to an important question about human health. The stakes are as high for the research community (or pharmaceutical company sponsor) as they are for General Motors in the previous example. In addition, a flawed design in prospective research involving human beings means that the participants are taking risks (they may be in the control group and go untreated, or in the experimental group and be exposed to an untested new product) that are futile and unnecessary from the beginning of the trial, because the results such a trial yields will be unreliable.
For example, many randomized clinical trials are so poorly designed that the results cannot be interpreted. Most clinical trials are too small to detect any but very large differences between study subjects receiving a new treatment and those receiving a placebo or conventional treatment. There are very few trials, even for common diseases such as heart disease and cancer, that contain enough study subjects to be able to say that a treatment is better than placebo in prolonging life. <1> And if this were not worrying enough, a single flaw in a study protocol often results in substantially biased estimates of treatment effects. <2>
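To give a feel for the numbers, here is a hypothetical back-of-the-envelope calculation in Python (the event rates are invented, not taken from any particular trial) of how many patients each arm of a trial needs in order to detect a modest but worthwhile difference in survival.

    # Per-arm sample size to detect a difference between two event rates,
    # two-sided alpha = 0.05, 80% power. The rates below are hypothetical.
    import math
    from scipy.stats import norm

    def n_per_arm(p1, p2, alpha=0.05, power=0.80):
        z_a = norm.ppf(1 - alpha / 2)              # ~1.96
        z_b = norm.ppf(power)                      # ~0.84
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return math.ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

    # Cutting five-year mortality from 10% to 8% would be well worth having,
    # yet detecting it reliably takes on the order of 3,200 patients per arm.
    print(n_per_arm(0.10, 0.08))

A trial of 150-odd patients, like those in the antidepressant studies above, has essentially no chance of detecting an effect of that size.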
Not only is this an abuse of study subjects’ time, sometimes of their health, and of taxpayer or investor money, but the final product, released onto the health care market with messages of breakthrough treatments, or claims of health benefits or dangers of certain foods, or of sunlight, or of cell phones, or of whatever, leads to flawed diagnoses and to flawed or dangerous treatments. The widespread use of such products wastes public and private money on ineffectual or dangerous remedies. Vioxx, a drug that was no better than standard pain relievers and was accompanied by a non-negligible increase in heart attacks, produced $2.5 billion in revenue for Merck in 2004. <3>
Research design, like vehicle design, is relatively uncomplicated. We know how to do it. There are generally accepted standards in science as there are in engineering. In 1996, a group of health care researchers designed a check list for the design and reporting of randomized clinical trials called CONSORT. <4> This simple design standard has been adopted as a requirement for publication of trials by the International Committee of Medical Journal Editors (ICMJE) and subsequently by the major medical and health science journals of the world.
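In software terms, such a standard is nothing more exotic than a required-items check. A minimal sketch in Python (the items are a paraphrased subset of CONSORT-style requirements, and the manuscript record is invented):

    # A reporting checklist reduced to code: every required item must be
    # present before a trial report is accepted. Items paraphrase a small
    # subset of CONSORT; the manuscript record is invented.
    REQUIRED_ITEMS = [
        "primary outcome prespecified",
        "sample size calculation reported",
        "randomization method described",
        "participant flow accounted for",
    ]

    manuscript = {
        "primary outcome prespecified": True,
        "sample size calculation reported": True,
        "randomization method described": False,   # missing -> revise
        "participant flow accounted for": True,
    }

    missing = [item for item in REQUIRED_ITEMS if not manuscript.get(item)]
    if missing:
        print("Revise before publication; missing:", missing)
    else:
        print("Checklist complete.")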
In fact there are now generally accepted and proposed standards for 82 different types of research designs, from randomized clinical trials of new drugs, to case-control studies to determine causes of cancer, to molecular genomic analyses to investigate new links between disease and genes, and so on. And the list is growing. See www.equator-network.org
Yet many of these design standards are not used: not by researchers; not by the ethics review panels that are supposed to ensure that the research is ethical (how can it be ethical if its design is flawed?); not by the sponsors who fund the research; and, at the end of it all, not by the health and medical journals that publish the results.
We can do better. EQUATOR, under the leadership of Doug Altman at the University of Oxford, has set about creating a global resource for health researchers. It identifies guidelines for research design that have already been developed (more than 80 such guidelines exist; see www.equator-network.org). It provides resources to scientists attempting to establish design standards for their particular types of research, and it supports ongoing evaluation of those guidelines, because design standards in research, as in industry, change as technology changes. It encourages journal editors to implement publishing standards for research design, to use them in peer and editorial review, and to publish checklists so that readers can be sure that standards were followed and the research is valid. And it provides a central resource of guidelines for use by researchers, funders, ethics committees, peer reviewers, editors, physicians and health care personnel, and ultimately by patients and their families.
We must do better. Fatally flawed designs in health care research are much more damaging to the public than fatally flawed designs of automobiles, for they are exceedingly more difficult to detect. Flawed results are thus likely to be used by patients, inappropriately and often dangerously, for far longer than it would take to detect an automobile designed without brakes.
1. Yusuf S, Collins R, Peto R. Why do we need some large, simple randomized trials? Stat Med 1984;3(4):409-20.
2. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408-12.
3. Barry Meier, New York Times, December 19, 2004.
4. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 2001;357:1191-4.