Monday, January 28, 2008

Ovarian Cancer

What your doctor is reading - or should be

Cancer of the ovaries occurs in about 1 in 50 women over the age of 50 or so. While not that common, the disease is troublesome because it is difficult to diagnose early, and most forms of ovarian cancer are highly malignant, spreading quickly. Thus there has been an interest in identifying the causes of ovarian cancer and perhaps preventing it.

Most of these studies are case-control designs, in which women with diagnosed ovarian cancer are interviewed about possible risk factors and their answers are compared with answers to the same questions from women without ovarian cancer. These studies provide most of the evidence we have, but are fraught with difficulties. For example, people with a particular condition may remember past events they think might have caused their illness more often than people without it. The resulting recall bias distorts the true picture, creating a false association and a misleading risk factor.

In this week’s issue of The Lancet, evidence is summarized from 48 different case-control and prospective studies that included questions about past or present use of oral contraceptives. The evidence is coherent and consistent across all studies and in the two research designs (case-control and prospective): women who reported using oral contraceptives had a lower risk of developing ovarian cancer. The risk ratio was 0.42 (about a 60% reduction in risk) for women who reported using oral contraceptives for 15 years or more. Of great interest, the protective effect of taking oral contraceptives early in life persists for years after a woman has stopped taking them. A podcast by one of the authors and a Lancet editor is available free.

Implications

For an individual woman:

As ovarian cancer is relatively rare, the chances of benefit for any one woman are small. For example, if 250 women took oral contraceptives for 10 years, 1 case of ovarian cancer would be prevented.
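The arithmetic behind a "1 in 250" figure like this can be sketched as a number-needed-to-treat calculation. The baseline 10-year risk used below is an illustrative assumption, not a figure from the Lancet study:

```python
# Sketch of the number-needed-to-treat (NNT) arithmetic. The baseline risk
# here is assumed for illustration, not taken from the study.

def number_needed_to_treat(baseline_risk, risk_ratio):
    """NNT = 1 / absolute risk reduction."""
    absolute_risk_reduction = baseline_risk * (1 - risk_ratio)
    return 1 / absolute_risk_reduction

# Assume a 10-year baseline risk of roughly 0.7% and the reported
# risk ratio of 0.42 (about a 60% relative reduction):
nnt = number_needed_to_treat(baseline_risk=0.007, risk_ratio=0.42)
print(round(nnt))  # on the order of 250 women treated for 10 years per case prevented
```

The point of the sketch is that when the baseline risk is small, even a large relative risk reduction translates into a large NNT.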

There are hazards of oral contraceptives - mainly deep vein thrombosis (clots in the veins in the legs and abdomen) which may lead to pulmonary emboli and death, but these events are extremely rare especially with the newer low estrogen dose contraceptive pills. Oral contraceptive use has also been linked, but not strongly, with other cancers, particularly breast and cervical cancer - although the latter is most likely confounded (misleading) because we now know that it is caused by several papilloma viruses, themselves associated with more frequent sex (and now potentially preventable with the new vaccines).

And there are benefits. Oral contraceptives are the most effective form of birth control, preventing unwanted pregnancies and even more unwanted abortions (both of which carry substantially higher health risks than oral contraceptives).


For society:

The widespread use of oral contraceptives in high- and middle-income countries means that millions of women are taking them. Thus even if the risk reduction for a single woman is small, across all women as many as 300,000 cases of ovarian cancer will be prevented by use of oral contraceptives. This is an argument for making oral contraceptives more easily available throughout the world. Indeed, The Lancet editors have called for making the pill available over the counter, without a physician's prescription.

Cautions

This research was carefully done by experienced epidemiologists. Nonetheless, the fundamental problem with cancer prevention research on humans is that we have to rely on non-experimental information. Thus it remains possible that some other factor, unknown to the women or to the researchers, made women who were unlikely ever to get ovarian cancer more likely to use oral contraceptives, and women more likely to get ovarian cancer less likely to use them. We have no way of knowing if this bias exists.

Warnings

Women with previous deep vein thrombi, migraines, heart disease or liver disease should probably not take oral contraceptives, or at least should discuss this with their physicians.

http://en.wikipedia.org/wiki/Ovarian_cancer
http://www.nlm.nih.gov/medlineplus/ovariancancer.html
http://podcast.thelancet.com/audio/lancet/2008/9609_26january.mp3

Friday, January 25, 2008

So many small clinical trials, so little information

Editors get a lot of manuscripts reporting relatively small clinical trials, in the range of a few hundred study subjects. Such studies can never evaluate clinically meaningful outcomes like death or hospitalization, because these events are rare and require large trials in order to collect enough events for statistical testing. The smallish trials usually measure indirect outcomes, such as serum cholesterol or a change on some indicator of well-being such as mood or depression. While helpful, these kinds of measures are not particularly useful: patients and physicians have to take it on faith, or on biochemistry, that there is an underlying chain of causality that will in the end yield some tangible benefit the patient can experience.

Many of these small trials are never published. Individually they convey virtually no information that can be translated into patient care. At best, if enough of them accumulate they can be combined in meta-analyses and then perhaps some estimate of treatment efficacy can be determined. But this is a long term hope and the result always remains uncertain.

It is perhaps ironic that Orwell’s 1984 was the year that Yusuf and others published their cogent arguments for doing large clinical trials of common serious illnesses. They correctly argued that discovery of a small benefit or risk (say a 20% improvement) would carry enormous advantage for a large number of patients and for society. Small trials could never detect this level of benefit. Further, they pointed out that small trials of common illnesses and conditions would be very unlikely to yield much information about clinically important events unless the effect was very large; and even then, if the treatment were good enough to produce a big benefit, the benefit would be obvious to the average clinician without a clinical trial. So why do them?
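The scale of trial Yusuf and colleagues had in mind can be made concrete with a standard two-proportion sample-size calculation. The event rates below are assumptions chosen for illustration (a rare outcome like death, with a 20% relative reduction); the formula is the usual normal approximation with a two-sided alpha of 0.05 and 80% power:

```python
# Illustrative sample-size calculation: why a trial of a few hundred subjects
# cannot detect a modest (20%) relative reduction in a rare event.
import math

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Subjects needed per arm to detect p1 vs p2 (two-proportion test,
    two-sided alpha = 0.05, power = 80%, normal approximation)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Assumed death rate of 2% on placebo, 1.6% on treatment (20% relative reduction):
n = n_per_arm(0.02, 0.016)
print(n)  # roughly 17,000 per arm - tens of thousands of subjects in total
```

A trial of a few hundred subjects is two orders of magnitude too small for this question, which is precisely the Yusuf argument.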

Yet small trials continue to be published. Yesterday, while trawling the clinical trial registry www.clinicaltrials.gov, I found 1,788 studies related to the keyword ‘depression’. Limiting this search to ongoing trials (those still recruiting study subjects), to those classified by investigators as Phase III or Phase IV studies (not the smaller, very early Phase I and II assessments of safety and efficacy that are sometimes used to decide whether a ‘larger’ trial should go ahead), and to those registered after January 1, 2005, left 331 trials.
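The filtering exercise just described can be sketched in code. The record fields and values below are invented for illustration; the actual clinicaltrials.gov export format differs:

```python
# Hypothetical sketch of filtering registry records by status, phase and
# registration date, as described above. Fields and values are invented.
from datetime import date

trials = [
    {"phase": "Phase III", "status": "Recruiting", "registered": date(2006, 3, 1),  "enrollment": 80},
    {"phase": "Phase IV",  "status": "Recruiting", "registered": date(2007, 7, 15), "enrollment": 1200},
    {"phase": "Phase II",  "status": "Recruiting", "registered": date(2006, 1, 10), "enrollment": 60},
    {"phase": "Phase III", "status": "Completed",  "registered": date(2005, 5, 5),  "enrollment": 300},
]

selected = [
    t for t in trials
    if t["status"] == "Recruiting"
    and t["phase"] in ("Phase III", "Phase IV")
    and t["registered"] >= date(2005, 1, 1)
]
print(len(selected))  # 2 of the 4 invented records pass all three filters
```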



There were only 10 trials with more than 1,000 subjects (and many of these were only secondarily looking at my condition of interest, depression, being studies of patients with cancer on different types of chemotherapy, for example). Almost half of the trials for depression had fewer than 100 subjects. Over 9,000 patients were involved in these very small trials, which are unlikely to provide any meaningful information and may in fact prove harmful by generating false leads.

Now readers may argue that depression and other psychiatric diagnoses are difficult to study, and that would be true. But this ought to support the Yusuf position: the study of interventions for depression, a common chronic disease, ought to be carried out on very large patient groups, looking for interventions that have some modest effect, say a 20-25% reduction in suicide or suicide attempts. There is also growing concern that the active compounds being evaluated may not only be unhelpful, but may in some way be a cause of suicides. At present there are 160 ongoing trials involving almost 9,000 patients, each with fewer than 100 study subjects. When one adds into the equation study-subject drop-out, non-compliance, observer and measurement error and so on, there is virtually no probability that these studies will yield useful results.

I did not examine or try to judge the adequacy of trial designs, objectives and premises, although this might be interesting given that a large proportion (73%) of these designated Phase III and IV trials are sponsored by industry.

My list of trials is obviously a potpourri. Had I limited the search to Major Depression as a diagnostic category, I would have found fewer trials and a different pattern. Also, mine was a desktop exercise and would need to be more rigorously double-checked for errors and validated. Nonetheless, the graph is a reasonable summary of ongoing clinical trials for an important clinical condition and global public health problem.

I suspect that choosing any other condition in the trial registry would yield similar distributions of study size. It would be interesting to do some of this work more formally. I wonder also whether the registry could incorporate some measure of the robustness of the trials that are registered. For example, including the elements of trial design recommended in the Consort Statement www.consort-statement.org would be relatively easy for registrants to complete (they ought to have it done before taking their studies for ethical and institutional review in any case) and would allow users of the registry (patients, physicians, researchers) to judge at a glance the quality of the trial.

Yusuf S, Collins R, Peto R. Why do we need some large, simple randomized trials? Statistics in Medicine 1984;3:409-20

Monday, January 21, 2008

Why do pharmaceutical companies keep screwing up?

Perhaps it’s because they are trying too hard. Or lying too hard. But probably it’s because it is really difficult to bring new compounds to market: it costs a lot of shareholder money and carries a very high risk of failure. It must be similar to designing a new commercial aircraft, or even bringing out a new car model that might or might not succeed in the market. The difference between a pharmaceutical and a vehicle, however, is not only one of size; it is one of efficacy and safety. The auto industry knows beforehand that it can design a safe product that actually works. Its only concern is whether there are enough buyers. For the pharmaceutical industry, there are no guarantees at the design stage that the product will work, nor that it will be safe.

The statins are a wonderful example of this difference and of the financial pressures on big pharmaceutical companies. Merck embarked on a joint venture with Schering-Plough to run a clinical trial demonstrating that their two products, combined into a single tablet called Vytorin, would be significantly superior to simvastatin alone. Simvastatin, Merck’s main cholesterol-lowering agent, was about to come off patent and thus be reproduced by generic drug manufacturers at a tiny fraction of the price Merck could charge while it held the patent monopoly on the product.

The idea to combine the two compounds made biological sense in that they acted on different sites in the cholesterol metabolic pathways. The Vytorin Enhance trial began in June 2002, aimed to recruit 720 subjects with primary hypercholesterolemia, and after 24 months of treatment to show that the cholesterol plaques in their carotids had statistically significantly shrunk. The results were eagerly anticipated by physicians, patients and shareholders. Although the study end-date was April 2006, by November 2007, the results had not been made public. Merck/Schering-Plough issued a statement to explain the delay:

“The independent panel recommended focusing the primary endpoint to the common carotid artery to expedite the reporting of the study findings.  Merck/Schering-Plough now anticipates that these results of the ENHANCE study will be presented at the American College of Cardiology meeting in March 2008.”

This is a curious statement because the study had only one primary endpoint: carotid artery intima-media thickness per subject over 24 months, comparing the baseline reading with the endpoint reading (trial registration, www.clinicaltrials.gov).
Why did the head investigator, Dr. Enrico P. Vetri, a Schering-Plough employee, need to “re-focus”? There was only one focal point. It also does not escape notice that as long as the study went unpublished, revenue from sales of Vytorin would remain unaffected.

The answer, as it appears in a statement by the two companies on January 14, 2008, (now coming on 2 years after the projected end date of the trial) is that Vytorin doesn’t work. It won’t fly. Not only that, but the data strongly suggests that the combination product is harmful. Subjects in the trial who took simvastatin alone had cleaner carotid arteries than those who took Vytorin.

Now, in fairness to the companies, the result must have been a surprise, and they must have wondered whether somewhere in the analysis of the carotid artery results a data-entry problem had occurred - perhaps that the patient data files got mixed up. Nonetheless, in their November statement they should have been clear that this was not a question of re-focusing on endpoints but something far more serious. Patients taking both drugs did worse. And this is not just that they felt worse: their carotid arteries, carrying blood to their brains, got more clogged up!

Pharmaceutical company corporate behaviour, we are assured, is changing. Merck claims to "put patient safety first". Yet, this kind of press statement along with the fact that it appeared 16 months after the trial end date belies that claim. The Orwellian language of the press releases must have been a deliberate choice of words by the marketing department, pressured as it always is by Finance. Obfuscation, truthiness - if not lies.

The companies have taken pains to point out that the results show only a tendency towards clogging of the carotid arteries by their product; there is no proof - i.e., no statistically significant proof. Sure, the result might have arisen by chance. But the mean increase in carotid intima-media thickness was 0.0111 mm in the Vytorin patients and only 0.0058 mm in the simvastatin group. Thus plaques grew almost twice as fast in the Vytorin group (91% more).
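The relative-growth claim is a one-line check, using the published ENHANCE figures (a mean intima-media thickness change of 0.0111 mm on Vytorin versus 0.0058 mm on simvastatin alone):

```python
# Quick check of the relative plaque growth from the published ENHANCE figures.
vytorin_change = 0.0111      # mean intima-media thickness change, mm
simvastatin_change = 0.0058  # mean intima-media thickness change, mm

relative_increase = vytorin_change / simvastatin_change - 1
print(f"{relative_increase:.0%}")  # 91% - plaques grew almost twice as fast
```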

The Vytorin trial also demonstrates that although huge amounts of money are spent annually by patients and governments on statins, there is precious little evidence that they do anything more than lower blood cholesterol. Most folks would not give a damn about some molecule or other in their blood were it not for the fact that they believe the blood level of that molecule is highly correlated with their atherosclerosis and with their chances of having a heart attack or a stroke. What’s measured in trials, however, is two steps away from what is of benefit to patients: cholesterol, then plaques, then strokes. In fact there isn’t much evidence that lowering cholesterol reduces rates of heart attacks and strokes, and this study suggests that the combination product Vytorin may be harmful.

The same is true for most pharmaceuticals. We evaluate whether they work or not, not on the basis of a test drive or flight - will this thing actually fly - but on a set of theoretical arguments that it ought to fly. This is equivalent to Boeing saying “Hey, the engine started it ought to go up!”

In a way it is surprising that Merck and Schering-Plough agreed to go ahead with this trial. They could have proposed a trial with soft intermediate endpoints like levels of cholesterol. Indeed, of the 18 ongoing trials of Vytorin, all but one have intermediate metabolic endpoints, usually levels of LDL-C. Only one large trial, to be completed in 2011, will look at what really matters to patients and doctors: rates of heart attacks and strokes. I wonder now whether it is ethical to continue further studies of Vytorin. At a minimum one would have to warn study subjects that it looks like Vytorin doesn’t prevent the clogging of their carotid (and presumably coronary) arteries, and there’s a pretty good chance that it may make them worse.

Vytorin is not the only example of a drug trial going wrong. I believe that Merck got into trouble with Vioxx because it made a judgement error in terminating the collection of data on drug adverse events before the end of the collection period for data on good outcomes. They looked for good outcomes for a longer time than they looked for bad outcomes. The extra deaths that occurred in the termination intervals did not materially affect the estimates of risk for Vioxx, but it sure looked bad for the company - that they had deliberately made this decision once they had looked at the data, or at least had seen it coming. It turns out, really, that Vioxx is no worse than the other drugs in its class - they all carry a risk of cardiovascular disease (and their benefit is no greater than that of less toxic drugs).

Why not just come clean? Pharmaceutical companies know that not all the drugs they design will fly. Hanging on to losers and trying to bend clinical trial results to hoodwink the FDA and physicians is not good enough. They need to set their own internal standards higher than is required and much higher than they are now. This will undoubtedly increase the cost of bringing new compounds to market, but it will preserve their reputations and long run viability.

To come clean will mean that the research divisions must be separated from the finance and marketing divisions not by a Chinese wall, but by a cement one. For Merck and Schering-Plough to choose an employee as the Principal Investigator is a mistake of Titanic proportions. Not that Dr. Vetri is not capable; it is that he could never be seen as independent. But picking a principal investigator is almost beside the point. The entire research enterprise must be in an intellectual vault. Researchers should be rewarded with attractive positions, but those positions and salaries must carry no monetary incentives tied to the launch of a successful product. And they must have complete independence in designing the research protocol.

The Finance departments must revise upward their risk assessments for new compounds and new studies that are proposed. With a truly independent research division, it is likely that more studies will fail to show enough efficacy and safety to permit a test flight of the product on the open market. This is a cost of doing business in this high-risk environment, but it is a risk that must be factored into company bottom lines. Such a policy ought to reassure shareholders and decrease the downside risk of endless collective-action lawsuits.

Friday, January 18, 2008

What’s a drug company gonna do?

I feel sorry for the big pharmaceutical companies. They are picked on by pretty much everyone who is not on their payroll, and even some who are. Take Pfizer and GSK and other drug companies, accused of deliberately hiding results from clinical trials showing that their antidepressants weren’t what they’re cracked up to be. Most of us, if we had to show the world the results of our personal clinical and social trials, would do the same.

But, yes, they should be held to a higher standard than I am. In part because they claim a high ethical ground and spend millions advertising their integrity, and in part because they are developing products for use by people who may suffer or die as a result of using products that are not as advertised. And in part because they are abusing the trust of the patients who volunteer for the studies on the promise that the results obtained will be useful to science and to other patients. Pretty good arguments.

So what is happening? Why are they hiding results?

The companies will claim that it is hard to publish studies with mixed or unclear results. They are right. Journals, particularly the higher-profile journals, are less interested in publishing a study if it does not show a positive result. The reason is that most of these studies have relatively small sample sizes (as was the case in the recent publication - about 153 patients per trial). Failure to show a statistically significant benefit does not mean that none would be found in a larger trial, and it certainly doesn’t show that the drug being tested has no effect. Thus the value of the information from any single small trial showing no statistically significant effect is minimal. (It is only later, when small studies accumulate, that pooling their results reveals the true efficacy or inefficacy of a drug.)
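The pooling idea in that parenthesis can be sketched with a fixed-effect (inverse-variance) meta-analysis. The effect estimates and standard errors below are invented for illustration; the point is that three trials, none individually significant, can yield a significant pooled estimate:

```python
# Minimal sketch of fixed-effect (inverse-variance) pooling, the idea behind
# combining small trials in a meta-analysis. All numbers below are invented.
import math

def pooled_effect(estimates, std_errors):
    """Inverse-variance weighted mean and its standard error."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three small trials, none individually significant (|effect| < 1.96 * SE):
effects = [0.30, 0.25, 0.35]   # e.g. log odds ratios
ses     = [0.20, 0.18, 0.22]

est, se = pooled_effect(effects, ses)
print(f"pooled = {est:.2f}, SE = {se:.2f}, z = {est / se:.1f}")
# The pooled z exceeds 1.96, so the combined evidence is significant even
# though no single trial was.
```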

This, however, is a weak company argument. There is nothing that stops a company from publishing the results on its own web site. There are also many journals with the space to publish the results of smaller trials. Indeed, PLoS ONE is an excellent journal that aims to publish all research, no matter what the result or statistical significance.

It is also true that the FDA has copies of the complete results of these unpublished trials but is prohibited by covenants set by government from publishing trials submitted for a drug that subsequently was not approved: The information remains proprietary. The FDA ought to seek approval to make these trials publicly available on its web pages. And the pharmaceutical company lobby group - the Pharmaceutical Research and Manufacturers of America - ought to lobby government to make this regulatory change instead of lobbying to keep it in place.

But the company problems are deeper than non-publication of results that might hinder the marketing and sales of products. As this recent study showed, many companies alter the results when it comes to publication, mainly by selecting study endpoints that put the products in a more favourable light. Of the 51 studies whose results were published, 11 (22%) had the results altered: the published results differed from the results shown to the FDA. (The FDA record showed the drug as having no statistically significant effect; the published study showed a positive effect.)

How can this happen? Other than falsification of the data, the main reason is that all studies have multiple possible outcomes of interest. In studies of antidepressants, we may be interested in changes in the mood of study patients, or suicide ideation, or attempts at suicide, and so on. Yet each randomized clinical trial must declare, before the start of the study, at the time the protocol is constructed, a primary outcome of interest. The trial is designed around this primary outcome. Now in any trial there may be interest in other outcomes. For example, if a study is designed to look at suicide ideation as its primary outcome, the result may show no statistically significant difference. Yet on another outcome - say, a particular scale of patient mood - patients randomly assigned to the active compound may have higher (better) scores, and these may be statistically significant. Nonetheless, this is a post-hoc analysis and could easily have occurred by chance. By the rules of science it cannot be claimed as proof that the drug improves mood.
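The reason post-hoc outcome switching is forbidden can be shown with a small simulation. Assume a drug with no true effect and ten independent outcomes, each with the usual 5% chance of a spuriously "significant" result; the chance that at least one outcome looks significant is then about 40%, not 5%:

```python
# Simulation: with 10 independent outcomes and NO true drug effect, how often
# does at least one outcome come up "significant" at p < 0.05 by chance alone?
import random

random.seed(0)
n_outcomes = 10
n_simulated_trials = 10_000
false_positive_trials = 0

for _ in range(n_simulated_trials):
    # Under the null, each outcome independently has a 5% chance of p < 0.05.
    if any(random.random() < 0.05 for _ in range(n_outcomes)):
        false_positive_trials += 1

rate = false_positive_trials / n_simulated_trials
print(rate)  # close to 1 - 0.95**10, i.e. about 0.40
```

This is why a secondary outcome "discovered" after the data are in cannot be reported as if it were the primary one.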

Major pharmaceutical companies are cheating. They are unethical. They are violating the promise made to study subjects that their participation in trials will benefit other patients and the enterprise of science. They need to be condemned, without reserve. Journals, too, need to be chastised for lax editorial oversight in publishing clinical trials that report non-primary outcomes as primary outcomes, thus misleading their readers and patients. Journals and reviewers have only to request study protocols to verify that the research submitted precisely matches the stated study hypotheses.

Will these companies ever get it right? Surely they will, I hope.

In a future blog I’ll get into what needs to happen within companies if they are to get it right.

Reference:
Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and influence on apparent efficacy. N Engl J Med 2008;358:252-60

Thursday, January 10, 2008

Equator - Enhancing the Quality and Transparency of Health Research


Even General Motors’ design engineers know that to design a new car they must account for every part essential to the vehicle. Further, they must be intimately aware of, and take into account, all national and state regulations that govern automobile design, construction and assembly. They have a check list.

This is important because of the human and economic costs that result if mistakes are made: a part omitted, or one used that does not meet standards generally agreed upon or required by law. At a minimum, having a check list ensures that the design engineers do not arrive at a final design having forgotten to include, say, a brake line or spark plugs or a motor to drive the windshield wipers. Even a defect or missing part discovered on the assembly line, or by the first customer leaving the showroom in the rain, would cost millions of design and production dollars. Engineering a new vehicle is expensive, often a three- or four-year project costing millions of dollars; not to be wasted by sloppy execution.

When it comes to health research, the design engineers are called ‘researchers’ and the product being designed is a research protocol that will lead to an answer to an important question relating to human health. The stakes are as high for the research community (or pharmaceutical company sponsor) as they are for General Motors in the previous example. In addition, a flawed design of a prospective trial involving human beings means that the participants are taking risks (they may be in the control group and go untreated, or in the experimental group and be exposed to an untested new product) that are futile from the beginning: such trials are unnecessary and always counterproductive, because they yield unreliable results.

For example, many randomized clinical trials are so poorly designed that the results cannot be interpreted. Most clinical trials are too small to detect any but very large differences between study subjects receiving a new treatment and those receiving a placebo or conventional treatment. There are very few trials, even for common diseases such as heart disease and cancer, that contain enough study subjects to be able to say whether treatment is better than placebo in prolonging life. [1] And if this were not worrying enough, a single flaw in a study protocol often resulted in substantially biased estimates of treatment effects. [2]

Not only is this an abuse of study subjects’ time, and sometimes of their health and of taxpayer or investor money, but the final product - released onto the health care market with messages of breakthrough treatments, or claims of health benefits or dangers of certain foods, or of sunlight, or of cell phones, or of whatever - leads to flawed diagnoses and flawed or dangerous treatments. Their widespread use wastes public and private money on ineffectual or dangerous products. Vioxx, a drug that was no better than standard pain relievers and was accompanied by a non-negligible increase in heart attacks, produced $2.5 billion in revenue for Merck in 2004. [3]

Research design, like vehicle design, is relatively uncomplicated. We know how to do it. There are generally accepted standards in science as there are in engineering. In 1996, a group of health care researchers designed a check list for the design and reporting of randomized clinical trials called CONSORT. [4] This simple design standard has been adopted as a requirement for publication of trials by the International Committee of Medical Journal Editors (ICMJE) and subsequently by the major medical and health science journals of the world.

In fact there are now generally accepted or proposed standards for 82 different types of research design, from randomized clinical trials of new drugs, to case-control studies to determine causes of cancer, to molecular genomic analyses to investigate new links between disease and genes, and so on. And the list is growing. See www.equator-network.org
Yet many of these design standards are not used: not by researchers; not by the ethics review panels that are supposed to ensure the research is ethical (how can it be ethical if its design is flawed?); not by the sponsors funding the research; and, at the end of it all, not by the health and medical journals publishing the results.

We can do better. Equator, under the leadership of Doug Altman at the University of Oxford, has set about creating a global resource for health researchers: identifying guidelines for research design that have already been developed; providing resources to scientists attempting to establish design standards for their particular types of research (already more than 80 such guidelines exist, www.equator-network.org); developing support for ongoing evaluation of research design guidelines (because design standards in research, as in industry, change as technology changes); encouraging journal editors to implement publishing standards for research design and to use them in peer and editorial review; publishing checklists so that readers can be sure standards were followed and the research is valid; and providing a central resource of guidelines for use by researchers, funders, ethics committees, peer reviewers, editors, physicians and health care personnel, and ultimately by patients and their families.

We must do better. Fatally flawed designs in health care research are much more damaging to the public than fatally flawed designs of automobiles, for they are exceedingly more difficult to detect. Flawed results are likely to be used by patients, inappropriately and often dangerously, for far longer than it would take to detect an automobile designed without brakes.

1. Yusuf S, Collins R, Peto R. Why do we need some large, simple randomized trials? Statistics in Medicine 1984;3:409-20

2. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408-12

3. Barry Meier, New York Times, December 19, 2004.

4. Moher D, Schulz KF, Altman DG. The CONSORT statement: Revised recommendations for improving the quality of parallel-group randomised trials. Lancet 2001;357:1191-4