Thursday, September 18, 2008

Censoring Science and Scientists - The Insite Example

(Insite, Canada's only facility for supervised injections of illicit drugs)

Censored is a powerful and friendless word with few public advocates. When Galileo, perhaps the most famous censored scientist, published his proof that the universe is heliocentric, not geocentric (that the earth is not the centre of the universe but rather revolves around the sun), the idea was unacceptable to the religious beliefs of the Catholic Church of late Renaissance Italy. The proof was banned, books were burned and Galileo himself was sentenced to house arrest.

Today, there are few similar examples. Yet in the privacy of our lives, offices, businesses and, yes, even governments, ideas and evidence are suppressed, often to the point of unspeakability. Just try to get scientists working in industry or government to comment on their work: one is quickly referred to the communications department. This censorship, which is ongoing and comprehensive, has given us ‘whistle-blowers’ and freedom of information legislation.

Recent examples of similar censorship by governments, albeit without the arrests, are easily found. Here is the text of a presentation on the health effects of global warming by Dr. Julie Gerberding, head of the prestigious Centers for Disease Control and Prevention (CDC) in the US. The cuts to her text (over half of it was censored) were made by President Bush’s office. The censorship was not made known to Gerberding’s audience and only later came to light when revealed by an investigative committee of the US Senate.

Dr. Julie Gerberding - October 23, 2007 - Testimony before the Senate Committee on Environment and Public Works (censored version)

“The health of all individuals is influenced by the health of people, animals, and the environment around us. Many trends within this larger, interdependent ecologic system influence public health on a global scale, including climate change. The public health response to such trends requires a holistic understanding of disease and the various external factors influencing public health. It is within this larger context where the greatest challenges and opportunities for protecting and promoting public health occur.

[Censored]

Scientific evidence supports the view that the earth’s climate is changing. A broad array of organizations (federal, state, local, multilateral, faith-based, private and nongovernmental) is working to address climate change. Despite this extensive activity, the public health effects of climate change remain largely unaddressed. CDC considers climate change a serious public health concern. ...”

Health and health care are quintessentially political. Nowhere is this more clear than when dealing with populations already marginalized by poverty, skin pigment, body weight, gender ... addiction. Scientific findings and proposals confront popular ideologies embodied in our elected governments. We have the government we deserve.

Yet governments today can’t ignore science any more than they can ignore economics, accounting or finance. Modern governments are expected to act wisely to improve the lives of the electorate, to invest (our) money in public projects that work and to evaluate their effectiveness. We expect governments to make policy decisions not on the basis of ideology, but by using the tools of science, particularly those drawn from the evaluative sciences: epidemiology, economics, finance and others.

Faced with science that it can’t censor publicly, governments turn to other techniques - distortion, suppression, delay and denigration come to mind. These are less visible than the red pen, but all are attempts to create a culture that dismisses science as elitist, impractical and amoral (the last of which is, of course, exactly what science is supposed to be).

How is Science marginalized?

I have considerable sympathy for politicians: they often find themselves near the centre of ideologic discussions which, by definition, are controversial and, by the nature of politics, are public. Although expected to act rationally, politicians are in politics because of their ideologies and are highly motivated to champion causes favoured by at least some of the public who voted for them or are likely to do so in the (near) future. They are also decent human beings trying to do the right thing in the face of considerable uncertainty - as are the government employees who do the background work needed to develop policy and recommend changes to it.

So, in the face of uncertainty, called upon to make wise decisions using the best scientific evidence, yet finding that evidence contrary to current ideology, governments try to marginalize the science - both the science that is out there, published, and the science that the government itself commissions or otherwise controls. Harassing scientists by denying, or threatening to deny, funding for research, or by denying access to the data needed for their research, is a common recourse of governments. In this case the Federal government cut all funding for research on Insite, precluding scientific evaluation of the facility and making it virtually certain that there will be no further comprehensive evaluations of Insite.

Disparagement of science and scientists is another tactic used by governments (and others) faced with uncomfortable scientific evidence. In a classic instance of shooting the messenger, Health Minister Clement, in testimony before a recent session of the Standing Committee on Health (May 29, 2008), had this take:

“On the question of science, let me assure you I've read many of the studies that have been published on Insite. These studies have the weight of publication as well as some articulate proponents who insist that their positions are the correct ones. Many of the studies are by the same authors who, quite frankly, plough their ground with regularity and righteousness. Indeed, while in our free society scientists are at liberty to become advocates for their position, I've noticed that the line between scientific views and advocacy is sometimes hard to find as the issue on Insite is developed.”

These comments deprecate all of science (among other aspersions: that published studies somehow have an unfair advantage over unpublished ones!) and are on the edge of libel in their characterization of the scientists evaluating Insite and, more broadly, of Science itself.

Another tactic is to sidestep the evidence entirely by creating pseudo-scientific alternatives, usually by forming an Expert Advisory Committee chosen by the government. In the case of Insite, Minister Clement created an Expert Advisory Committee made up of individuals with some expertise in the general area of crime and drugs, but little in evaluative epidemiology, public health research or the management of patients with severe drug addictions. The rationale for the choices is not given. The resulting report (available here) is a summary of commissioned work and the expert committee’s interpretations. The individuals selected to serve on the committee have worked in areas related to the problems of law enforcement and drug addiction, and I’ve no doubt of their good intentions, but the whole exercise avoids the critical scrutiny that publication offers and occasionally demands as a condition of publication, including peer review.

When governments want expert scientific opinion it is always unclear to me why they don’t use existing agencies with real expertise in research - in this case the Canadian Institutes of Health Research. Such an agency, at a clear and visible arm’s length from government, could easily set up an advisory committee, commission further studies and ensure that the commissioned research meets acceptable national and international standards - and that it is peer reviewed and published.

Governments may form advisory committees and then censor their reports. In the case of Insite, Minister Clement relied at least in part on the Ministerial Advisory Council on the Federal Initiative to Address HIV/AIDS in Canada. Formed in 1998, the Council issues reports (and, I am informed, has visited Insite), but these reports, and the work of the committee, are published at the discretion of the Minister of Health. The mandate (dated November 2007) includes this stunning paragraph:

“These [reports, policy papers, meeting minutes] are considered as confidential advice to the Minister and their release and dissemination are therefore subject to the Minister’s review and approval.”

In fact the last published annual report is for the year 2004-5. (I am told that the missing reports are currently being ‘translated’ and will be published shortly.) Will they be censored? Will they be complete? Will the Minister alter the record? Will we know? I do not doubt the good will, sincerity and expertise of the members of the Advisory Council, but I do question their independence and wonder at their willingness to accept such a constraint on their work - one that would likely limit, and should certainly be perceived as limiting, their areas of inquiry to those acceptable to the Minister. It also raises serious questions about the political naivete of scientists who sit on such committees.

Making science not policy the target

Is it really possible to scientifically evaluate a program like Insite? Science never reveals a truth, but rather works to remove uncertainty; politics claims to reveal truths and definitely creates uncertainty. Science rarely provides a yes/no answer: Is needle exchange effective? Do injection centres improve addicts’ health? The health of the communities in which they live? These public health problems are too complex, the time frame for most evaluations too short and scientific strategies too limited (we can’t do randomized clinical trials, for example). While it is certainly worthwhile to criticize the research (and the criticisms so far are school-book elementary), the criticism has not been accompanied by realistic proposals for alternative research designs.

One needn’t be an epidemiologist or an economist to realize that trying to keep track of the disparate and authority-suspicious population of patients using Insite is going to be a lot harder than, say, keeping track of a cohort of university professors or politicians. Governments and other ideologic critics often expect too much from science and then are critical and disparaging of the reports they receive. Insufficient scientific evidence is equated with evidence of no program effect, a profoundly illogical conclusion.

Gathering evidence and then setting up special committees to examine and summarize it is not wrong - if done at arm’s length from government. The problem is misusing science: censoring it, underfunding it, disparaging the (limited) evidence science produces, claiming, falsely, that no evidence equals no effect, and camouflaging ideological policy as rational, science-based public management.

Cutting funding, suppressing the reports and deliberations of expert panels, denigrating science and scientists, bypassing the rigours of publication and independent peer review with confidential and in-house documents, and placing the blame for policy inadequacy on the scientists are all forms of censorship. Politics is hard, the decisions are tough, the ideologies vocal and voting, but still, the right thing to do is to share and make public the policy dilemma and all of the extant evidence, however fragmentary and fragile.

Thanks to Udo Schuklenk and Rosemary Jolly for helpful comments on this piece.

Wednesday, July 30, 2008

Screening for hypercholesterolemia - new guideline recommends screening of all children and drug therapy for those older than seven. Wise?

Pediatricians and family physicians, alarmed by the growing prevalence of obesity among the children in their waiting rooms and increasingly frustrated by their lack of success in encouraging parents to get their child(ren) to lose weight and exercise more, may welcome the recommendations of the Committee on Nutrition of the American Academy of Pediatrics.<1> The recommendations reverse previous guidance for obese children and now recommend cholesterol screening and treatment. Now they can do something. But should they follow the guidelines?

The new guidelines recommend screening for hypercholesterolemia in all children beginning at 2 years of age and (presumably) repeated annually. The screening is two-pronged, beginning with an individual assessment of risk based on family history and the child’s current obesity and other risk factors for cardiovascular disease (CVD). Then, those 8 years of age or over are treated with diet plus pharmacotherapy, depending on the level of hyperlipidemia.
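To make the two-pronged logic concrete, here is a minimal sketch in Python. The structure follows the description above; the specific LDL cut-offs and the return strings are hypothetical placeholders, not the Academy’s actual thresholds (those are in the guideline itself, reference 1).

```python
# Sketch of the two-pronged screening logic described above.
# The numeric LDL cut-offs are hypothetical placeholders, not the
# AAP's actual thresholds (see reference 1 for those).

def should_screen(age_years, family_history_cvd, obese, other_risk_factors):
    """First prong: individual risk assessment, from age 2 onward."""
    if age_years < 2:
        return False
    return family_history_cvd or obese or other_risk_factors > 0

def management(age_years, ldl_mg_dl):
    """Second prong: diet for high LDL; drugs considered only at age 8+."""
    if ldl_mg_dl < 130:                          # hypothetical 'acceptable' cut-off
        return "reassure, routine follow-up"
    if age_years >= 8 and ldl_mg_dl >= 190:      # hypothetical drug threshold
        return "diet plus consider pharmacotherapy"
    return "diet and lifestyle counselling"

print(management(9, 200))  # -> "diet plus consider pharmacotherapy"
```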

Alas, there are no clinical trials of drugs for hyperlipidemia in children, and hence no evidence that drug therapy will reduce the child’s risk of cardiovascular disease later in life. Nor are there even medium-term studies of the risks of these drugs in children. There is some evidence that children with a homozygous defect in their metabolism of cholesterol (familial hypercholesterolemia) do respond to drugs commonly used to treat hypercholesterolemia in adults, and that this may result in some improvement in the health of their arteries by slowing the growth of the lipid and plaque changes that occur in these high-risk children. Although over the short course of these studies the usual side-effects seen with these drugs in adults were also seen in children, it will take much longer-term studies to assess risk in growing children.

What’s wrong with the recommendation that children be screened for hyperlipidemia?

Screening is the deliberate selection of “healthy individuals for the purpose of separating them into groups with high and low probabilities of a given disorder”<2>, in this case all children over the age of two. The purpose is to identify those at risk and to treat them at an earlier point in time to prevent later disease. In screening there is “the implicit promise that those who volunteer to be screened will benefit.” <2>

This is a bold promise in any screening program, but it is an audacious one to make to children with hypercholesterolemia. There is but fragmentary and tangential evidence of benefit. Granted, the evidence needed to make the promise will be difficult to attain, requiring large clinical trials lasting for decades and demanding that children and their parents administer daily doses of cholesterol-lowering drugs (or placebos).

There is no evidence (and probably no expectation) that more than a fraction of children and their parents will take cholesterol-lowering drugs for even a few months, let alone for years. We can expect considerable non-compliance, especially as children leave the parental grasp.

The Academy guidelines do not consider the costs of their recommendations - costs to individuals and to society. There are approximately 72 million children (under the age of 18) in the U.S., of whom 10.6 million have no health insurance and would likely have no way to pay for screening and decades of cholesterol-lowering drugs.<3> Health insurance companies in the US and technology assessment agencies in Canada and other countries should proceed cautiously when assessing this new offering by the American Academy of Pediatrics. The money might be more wisely spent elsewhere.

The hypercholesterolemia screening recommendations of the American Academy of Pediatrics meet few of the criteria for an acceptable screening intervention.<2,4> Failure to meet the promise of an efficacious treatment, failure to assess harm, and failure to assess and evaluate cost and opportunity cost ought to render this set of recommendations inoperable.

The algorithm, however, might be useful to pediatricians and family physicians faced with individual obese children who have multiple risk factors for cardiovascular disease. They may opt to evaluate the lipid status of these patients and to consider lipid-lowering agents after assessing the receptivity of parents and children to life-long therapy and the probability of reasonable compliance. But this is not screening; it is clinical practice, or as Sackett and Holland have it, “case finding”.<2>

References:
1. Daniels SR, Greer FR, and the Committee on Nutrition. Lipid screening and cardiovascular health in childhood. Pediatrics 2008;122:198-208
2. Sackett DL, Holland WW. Controversy in the detection of disease. Lancet 1975
3. Census Brief. Children without health insurance. CENBR/98-1 March 1998
4. UK National Screening Committee. Criteria for appraising the viability, effectiveness and appropriateness of a screening program.

Monday, June 16, 2008

Type 2 Diabetes - Time to relax: Two clinical trials show no benefit and some risk of tight control of blood sugar

What your doctor is reading or should be

Two clinical trials published earlier this month show that aggressive control of blood sugar not only failed to yield benefits, it made things worse: higher death rates and more myocardial infarctions and strokes in the group with tight control of their blood sugars, compared with the control group managed conventionally.

The ‘ADVANCE’ trial, sponsored by a pharmaceutical company and using its drug gliclazide (a sulfonylurea) as the main agent to reduce blood sugar, and the ‘ACCORD’ trial, funded almost entirely by the US NIH (National Institutes of Health) and using several drugs, evaluated the effects of tight control of blood sugar on the risks of developing complications of diabetes. The ACCORD study was terminated before the planned end date because a scheduled interim analysis showed a statistically significantly higher death rate among patients in the tight control group. Tight control meant maintaining glycated hemoglobin (HbA1c) - a measure of control of blood sugar levels now used by most physicians and patients to evaluate diabetes management - below 6%.

The objective of both studies was to see whether tight control - lowering the patient’s glycated hemoglobin from an average of about 8% before the study to below 6% in patients randomized to tight control, versus between 7 and 8% in the usual care group - would reduce complications. These targets were fairly well achieved in both studies. Patients in the tight control groups were more likely to be prescribed oral agents and more likely to be taking insulin (77% in the tight control group vs. 55% in the usual care group - ACCORD study).

The results were spectacular and unexpected - in both studies. The ACCORD study was stopped prematurely because of an excess of deaths in the tight control group (5%) compared to the standard therapy group (4%). This 22% increase in the death rate (1.41% vs. 1.14% per year; hazard ratio 1.22, 95% CI 1.01 to 1.46) was statistically significant and forced the premature stop.
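Restated as back-of-envelope arithmetic - a sketch using only the figures quoted above (ACCORD’s mean follow-up was about 3.5 years when it was stopped):

```python
# Back-of-envelope arithmetic from the ACCORD figures quoted above
# (5% vs. 4% cumulative mortality over roughly 3.5 years of follow-up).
tight, usual = 0.05, 0.04

absolute_risk_increase = tight - usual               # 0.01: 1 extra death per 100 patients
number_needed_to_harm = 1 / absolute_risk_increase   # treat 100 patients -> 1 extra death

relative_increase = tight / usual - 1                # 0.25 on the crude cumulative rates;
# the 22% in the text is the hazard ratio (1.22) from the time-to-event
# analysis, which need not match this crude ratio exactly.

print(absolute_risk_increase, number_needed_to_harm, round(relative_increase, 2))
```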

The ADVANCE study chose to define its primary outcome as a combination of microvascular and macrovascular complications. The rationale for choosing this composite outcome is not explained. Using the combined outcome, the ADVANCE study showed that patients on tight control were more likely to avoid developing macroalbuminuria (an indicator of microvascular disease of the kidneys): 2.9% in the tight control group vs. 4.1% in the usual care group (hazard ratio 0.70, 95% CI 0.57 to 0.85). There was no effect on the development of clinically important microvascular renal disease, such as the need for dialysis or transplantation, or death from renal causes. Serum creatinine - a measure of renal deterioration indicating that patients might one day need dialysis or a transplant - doubled in 1.2% of tightly controlled patients vs. 1.1% of usual care patients, a statistically insignificant difference.

In short, after an average of 5 years of tight control in the ADVANCE study, the only difference between the two study groups was in the proportion of patients who developed very early indications (macroalbuminuria) of microvascular disease. The study did not show an excess of deaths from macrovascular causes (stroke and heart attacks).

While the authors of the industry-funded ADVANCE study tout the result as showing “a one-fifth reduction in microvascular complications”, the authors of the ACCORD study concluded that there is a previously “unrecognized harm of intensive glucose lowering in high-risk patients with type 2 diabetes mellitus.”

[Table: summary of the key outcomes in both trials]
Both studies, as expected, showed that patients on tight control had more insulin reactions, although the difference in the frequency of severe events (in ACCORD defined as those that required medical attention) is remarkable. In the ADVANCE study severe hypoglycemia was defined as “transient dysfunction of the central nervous system” to the extent that the patient “required help from another person”.

Differences in softer (and perhaps harder) clinical outcomes may be due in part to the geographic locations of study participants and to differences in the usual patterns of clinical practice. In the ADVANCE study it appears that patients were drawn from Australia, New Zealand, the UK, several European countries and China (perhaps including members eligible for care in a military hospital), as well as from Montreal and New York. The ACCORD study drew patients from across the US and Canada.

Interpretation

The results of both studies give pause, and the results from the ACCORD study demand action on the part of clinicians treating patients with type 2 diabetes mellitus. In the face of a substantial increase in the risk of death from all causes, clinicians must be much less aggressive in pushing patients toward tighter control of their blood sugars; this appears to be dangerous in patients with type 2 diabetes mellitus. In the ACCORD study only 50% of patients achieved glycated hemoglobin (HbA1c) levels of 6.5% or less. Even this modest achievement in blood sugar control resulted in harm.

There is no good explanation for the results (why would more aggressive control of blood sugar in patients with type 2 diabetes lead to heart attacks, stroke and death?). But there need not be an explanation in order for clinicians and their patients to change direction on the management of type 2 diabetes mellitus. More is not only not better, it is worse - in terms of life expectancy and the frequency of severe hypoglycemic episodes.

The negligible benefit on microvascular disease noted in the ADVANCE study is perhaps not a surprise, as there is some evidence that tight control does slow the progression of damage to very small blood vessels. Nonetheless it is worth noting that this did not manifest itself in a lower frequency of retinopathy, nor of severe renal failure requiring treatment, nor in the worsening of renal function as measured by serum creatinine. These benefits can be considered minimal, and they come at the cost of higher death rates overall.

Implications

For patients and families

Patients with type 2 diabetes can’t escape being told daily on television and in magazine and Web advertisements that controlling blood sugar is essential to their health and longevity. Devices to measure blood glucose (after every meal and before and after every physical activity) abound, along with advertisements from multiple drug companies anxious about their market and share values.

Blood sugar, however, does not appear to be a central or perhaps even a peripheral cause of the complications of diabetes. Blood sugar more and more looks like an innocent bystander or itself the product (not the cause) of some other disorder of blood vessels. This is a bit like the old saw that a carpenter with only one tool - a hammer - sees the solution to every building problem as requiring a nail. We can measure blood sugar. So we develop ways to smash it down to the ‘normal’ range, with glucose meters, oral hypoglycemic agents and insulin.

Certainly blood sugar can be problematic, especially when it is very high and other metabolic changes occur - but these are rare in type 2 diabetes. When first diagnosed with diabetes, most patients have no symptoms. The diagnosis is an unwelcome surprise. Patients then learn they are at risk of developing ‘complications’ of diabetes such as renal failure, leg ulcers, blindness, heart attacks, congestive heart failure and stroke.

Commentators on these 2 studies agree that patients (and their physicians) should back away from the target of achieving ‘normal’ levels of glycated hemoglobin (6%). They agree that it is more important to emphasize efforts to achieve normal body weight, to follow a Mediterranean diet, to take medication for hypercholesterolemia, and to treat hypertension if present. All of these interventions reduce the chances of heart attack, stroke and death.

But these 2 studies force a larger consideration - the wisdom of screening asymptomatic individuals for abnormal blood sugars. By doing so we label individuals as being at risk of later serious and life-threatening complications for which there is no effective therapy. The ‘do no harm’ principle of medicine is violated. Sure, we can make weak arguments that their long-term risk of microvascular complications might be reduced by close attention to blood sugar, but even this benefit is modest at best and comes with additional risks that are much more serious.

A diagnosis of type 2 diabetes mellitus is not a gift; it is a burden that patients will have to carry for the rest of their lives. A burden of more frequent visits to physicians, of glucose-measuring devices, of pharmaceuticals with side effects of hypoglycemia and perhaps other risks, and an instantaneous trip from the land of the healthy to the land of the sick, with no return ticket.

For society

The FDA and similar agencies in other countries must revise their approvals for the advertisement, to physicians and directly to patients, of information about diabetes and its treatment. In future, advertisements need to include a warning that tight control of blood sugar increases the likelihood of death.

Current guidelines (US American Diabetes Association) suggest “the A1C goal for the individual patient is an A1C as close to normal (<6%) as possible without significant hypoglycemia.” This recommendation requires revision, as do the screening and case-finding recommendations for the detection of asymptomatic type 2 diabetes mellitus.


References
Advance Collaborative Group. Intensive blood glucose control and vascular outcomes in patients with type 2 diabetes. N Engl J Med 2008;358:2560-72

Action to Control Cardiovascular Risk in Diabetes Study Group. Effects of intensive glucose lowering in type 2 diabetes. N Engl J Med 2008;358:2545-59

Tuesday, June 10, 2008

The irritating imprecision of medicine - will network analysis help?

A friend of mine thought about buying a car the other day and went to the dealer to inspect new models. He’s interested in the reliability of the car, as he travels long distances, usually on back roads, visiting customers and trying to make sales.

“The best model,” the dealer said, “is this one. It rarely breaks down; in fact over 10 years only 16% have to be replaced or have major repairs.”

“That’s one in six,” my friend said. The dealer replied that he could buy (expensive) insurance to cover the period not under warranty. “Best we can do,” he said.

Each time I use the Framingham calculator I’m reminded of the similar imprecision of modern medicine.

As I fiddle with it today, entering a total cholesterol near the top of the range, a ‘good’ cholesterol in the middle of the range and a normal blood pressure yields a hypothetical me with a 16% chance of a heart attack over the next 10 years. While this is prognostically 4 times higher than if my total cholesterol were the lowest on the scale and my good cholesterol the highest, it still means that even with these excessive cholesterol levels my chances of having a heart attack are still far below even 50/50. Most men my age with bad cholesterol values don’t have a heart attack or die. Indeed, if there were 6 men similar to me, only one of us would have a heart attack in the following 10 years.
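The same numbers, restated in a few lines of Python to make the absolute-versus-relative distinction explicit (the 16% and the 4-fold ratio come from the calculator exercise above):

```python
# Restating the Framingham output above: the relative risk looks alarming,
# the absolute risk much less so.
risk_high_chol = 0.16   # 10-year risk with the worst cholesterol profile
risk_low_chol = 0.04    # the same man with the best profile (4x lower)

print(f"relative risk: {risk_high_chol / risk_low_chol:.0f}x higher")
print(f"about 1 in {round(1 / risk_high_chol)} will have a heart attack")
print(f"{1 - risk_high_chol:.0%} of similar men will not")
```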

Why do some get heart attacks while others don’t? This is the irritating imprecision of medicine. While all the cholesterol values (good and bad) are the same, each of the 6 men has different ‘other factors’ that play into a complex yet unknown equation that somehow leads one of us to suffer the bad outcome.

When I see a patient with a bad condition I always remember my first patient with cancer of the lung. Mr. Walker was 59, and other than pain in his shins (an infrequent but known secondary effect in a small number of patients with lung cancer) he was well. Yet there was no hope. Removal of the lung would not save Mr. Walker - we had good epidemiological studies showing a dismal prognosis and, since most older patients could not survive with a single lung, high operative mortality rates. I presented his case to our professor, a man older than the patient. To my recommendation that Mr. Walker be sent home without having his lung removed, my professor replied that a few of his lung cancer patients who had surgery survived for very long periods, so why deny this one the chance? The professor recommended that I see if my patient could walk up a flight of stairs without stopping. If he passed this ‘stress test’ he could survive with one lung, said the professor. He was old enough to have seen enough lung cancer to know that not all cases were alike.

Wouldn’t it be fine if our physician knew which of the 6 of us was going to suffer the bad outcome? Not everyone who smokes a pack a day for 40 years will get lung cancer. In fact, most won’t. We all know committed smokers who believe that inhaled smoke gives the lung tissues a protective coating. Perhaps they are right.

Of 100 men with prostate cancer limited to the prostate, 75 will not have any evidence of metastases 10 years later. (For those having a prostatectomy, only slightly more - 85 - do not develop metastases over 10 years.) Although the disease phenotypes are the same - cell type adenocarcinoma, anatomic location within the prostate at the time of diagnosis - only about 25% of men go on to develop metastases. While this might look like the play of chance, science insists there must be a reason. Greater diagnostic precision will reduce prognostic imprecision and lead some men to avoid debilitating and unnecessary prostatectomy.

Physicians make diagnoses on the basis of anatomical location of symptoms, physical signs (anatomical and physiological - e.g. blood pressure) and laboratory tests that measure specific body systems for homeostasis such as oxygen transport, blood sugars, blood, immune response, the presence of pathogenic organisms and toxins, and so on.

A series of papers over the past decade has demonstrated that we need to rethink our reductionist conception of diseases as a set of distinct entities, like congestive heart failure, sickle cell disease or AIDS, and begin to understand that these illnesses are each complex, involving multiple genetic, environmental and social factors. As we begin to understand these factors and their relationships, our conceptions of disease will change, diagnostic and prognostic labels and estimates will be altered, and new systems of understanding human physiology and pathobiology will enter the language of diagnosis and the practice of medicine. These discoveries have arisen in the emerging field of network analysis.

The genetic component can be thought of as a limited but as yet incompletely identified set of genes that control human development and cellular function throughout life. Some genes are known to be involved in specific areas of human development and pathophysiological response because mutations in these genes result in specific diseases, such as sickle cell anemia, familial cardiomyopathy, pulmonary arterial hypertension, diabetes, various cancers and so on. Some of these are monogenic, in that a single mutation is common to all who have the defect. In others, seemingly identical diseases can result from mutations in more than one gene, or can arise from different mutations in the same gene - for example, the clinical disease familial pulmonary arterial hypertension can arise from over 50 different mutations.

It seems helpful to conceptualize two categories of genes, those that have a specific role with a particular type of cell and those that are more generic. The specific-role or primary genes become apparent when mutations arise. Sickle cell anemia derives from a mutation in a single gene that results in the substitution of valine for glutamic acid at a specific position in the molecule that makes up the beta-chain of hemoglobin. Under hypoxic (or other) conditions this results in the formation of hemoglobin polymers which cause the erythrocyte to assume a sickle shape. Genes involved in various malignancies probably function in a similar but more complex way.

The other category of disease-modifying genes has broader effects that serve to modify threats to cells - threats from the specific disease-related genes and from environmental factors such as temperature, radiation, hydration and tonicity, oxygen, micro- and macronutrients, infective agents and toxins. The ability of an individual to accommodate these genetic and environmental threats is also part of that individual’s genomic makeup. The resultant pathology depends on the interaction of the genes with the environment.

Conceptually, it might look like the schematic in the article by Loscalzo, Kohane and Barabasi.<1>


From the same paper, here is the disease network for sickle cell anemia, a disease that can present with many pathophenotypes:

The figure is copied from the article by Loscalzo, Kohane and Barabasi.<1>
The primary genetic abnormality, hemoglobin S (red), can be affected by other genetic abnormalities, if present (grey). The various clinical presentations of sickle cell anemia (blue) are thus the result of a network of cellular and sub-cellular events, aided and modified by environmental agents (green) and by the genomic elements that control the body’s generic reactions (yellow).

On a general level none of this is new or surprising. We know that our genetic complement and our lifetime interaction with environmental factors are somehow responsible for our particular pathophenotype - the disease we have. What is perhaps new is the notion that although the phenotypes may look identical, they are not: what differs is how they are produced - their underlying networks of causation and damage control. Understanding these networks will add precision to diagnosis and will help in developing pharmacologic interventions that disrupt or disconnect the disease-causing elements of the network.

While the primary nodes of causality - primary and secondary genomes and environmental factors (physical and social) - are limited in number, the secondary networks of each primary node are more complex, and the number of possible interactions between them is large. Thus a single anatomic or cellular or molecular pathophenotype might be the product of a very large number of possible interactions involving the primary nodes of causation.

Even in this simple schematic model there is an exponential set of possible interactions, each of which might produce a different presentation of the ‘same’ anatomic or functional disease or pathophenotype. Thus some adenocarcinomas of the breast are affected by primary genomes such as the BRCA genes and secondary genomes such as estrogen receptors, and are likely further modulated by environmental causes - including factors that play a role in prognosis through the secondary genome’s control of proliferation, immune response, apoptosis/necrosis and so on.

Although logically unassailable, the new model of disease would be mathematically unusable due to the exponential number of possible combinations of nodes and sub-nodes (if all possible connections are considered to be random events).
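A two-line calculation shows the scale of the problem under the random-connections assumption (the node count here is an arbitrary illustration):

```python
# If every pairwise link among n nodes could independently be present or
# absent, the number of possible networks is 2 raised to the number of pairs.
n_nodes = 40                                   # arbitrary illustrative count
possible_pairs = n_nodes * (n_nodes - 1) // 2  # 780 candidate links
possible_networks = 2 ** possible_pairs        # each link on or off

print(f"{possible_pairs} candidate links -> {possible_networks:.2e} possible networks")
# roughly 6e234 - far beyond enumeration, which is why the non-randomness
# of real biological networks (next paragraph) matters so much.
```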

Fortunately they are not. The pathways are part of non-random networks between primary and secondary nodes of influence, themselves interconnected. Network analysis is changing the way we look at biological, social, economic, electronic and other networks.

Practical applications are in the future, perhaps decades from now. But we should pause when considering a diagnosis to hand on to a patient, for each diagnosis comes with a prognosis and a set of care burdens and, unfortunately, with considerable imprecision about likely outcomes. Recent publications on aggressive control of blood sugars are only the latest to confirm how little we know about the natural history of the diseases we diagnose (see next post).

References

1. Loscalzo J, Kohane I, Barabasi A-L. Human disease classification in the post-genomic era: A complex systems approach to human pathobiology. Mol Syst Biol 2007;3:124

Wednesday, April 16, 2008

Type 1 Diabetes - Ongoing clinical trials of immune suppressant therapy

What your doctor is reading or should be.
April 16, 2008


The recent onset of diabetes in the child of friends of ours led me to the registry of clinical trials. There are 68 studies in the registry recruiting children for studies of type 1 diabetes. Our friends, whose son is 8, had already contacted an investigator for one of the trials, who encouraged their participation.

This trial and several others take advantage of new understandings of the proximate causes of Type 1 diabetes, now believed to be related to autoimmunity - said another way, to the body somehow reacting against itself, or parts of itself, in this case the beta cells in the pancreas that make insulin. The result is a gradual decrease in beta cells and usually the gradual onset of diabetes, high blood sugars and, later, the vascular and other complications that may arise.

The idea behind many of the trials of aggressive intervention in new-onset Type 1 diabetes is to try to slow the gradual destruction of beta cells by interfering with the autoimmune response - the antibodies and the T-lymphocytes they stimulate - that is causing the damage. One way to do this is to block the antibody receptor site on the T-lymphocyte.

One of the trials (registry number NCT00129259) involves the molecule hOKT3gamma1 (Ala-Ala), which is administered intravenously over a 14-day period. This regimen is repeated a year later. The aim is to determine whether patients receiving the monoclonal antibody (drug) will retain more of their ability to produce insulin. In a previous study of a very small number of patients this seemed to be the case, with about 70% of patients with new-onset Type 1 diabetes receiving the drug maintaining or increasing their insulin production, compared to only about 20% of control patients, who received usual care for their diabetes.<1> For information on this trial see www.clinicaltrials.gov


Interpretation

While this is an exciting area of new research, the results so far are preliminary. None of the trials rules out the possibility that the results are due to chance (the number of patients in the trials is small) or bias (most studies are open label, meaning that the investigators and the patients’ physicians know whether a patient is in the treatment or control group).

Also, the immune system is complex, important for a very large number of body functions and protections and incompletely understood. Alterations in T-cells, a key component of our immune systems, may generate adverse events during the trials (so far no serious events have been detected) or later. Early trials involving small numbers of patients are unlikely to detect infrequent yet serious adverse events.

Implications

For the patient and family

Type 1 diabetes always comes as an unexpected and unpleasant surprise. The possibility of a treatment that could reverse or slow the onset of insulin-dependent diabetes, or reduce the need for insulin and perhaps even delay or deny the onset of the known adult complications of diabetes, generates hope. It is unlikely, however, that the drugs being studied will be released for clinical use for at least several years.

Participation in a trial may give parents a way to express their non-competing feelings of hope and anxiety by ‘doing something’. Unfortunately, outside the US there are few trials available.

To find a trial in your area, use the search function at the registry (www.clinicaltrials.gov) and enter the search terms: type 1 diabetes, open studies, interventional studies, child and country. The trials are described and contact information is provided (usually a telephone number and address).
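For the technically inclined, the same search can now be done programmatically. Here is a sketch using Python and the ClinicalTrials.gov v2 REST API; note that this API postdates this post, and the endpoint and parameter names below come from that later API, so treat them as the main assumptions here.

```python
# Sketch: query the ClinicalTrials.gov v2 API for recruiting trials.
# The endpoint and parameter names are from the current (post-2008) API.
import requests

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={
        "query.cond": "type 1 diabetes",       # condition search term
        "filter.overallStatus": "RECRUITING",  # open, recruiting studies
        "pageSize": 20,
    },
    timeout=30,
)
resp.raise_for_status()

for study in resp.json().get("studies", []):
    ident = study["protocolSection"]["identificationModule"]
    print(ident["nctId"], ident["briefTitle"])
```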

For Society

While type 1 diabetes is not common, it is a serious illness and deserves our attention, research and treatment resources. Finding a way to prevent the illness or severely limit its effects would provide relief to many families, especially those who harbour the genetic signatures that in some way enable the disease to take hold.

Reference

Herold KC, Gitelman SE, Masharani U, et al. A single course of anti-CD3 monoclonal antibody hOKT3gamma1 (Ala-Ala) results in improvement in C-peptide responses and clinical parameters for at least 2 years after onset of type 1 diabetes. Diabetes 2005;54:1763-9

Tuesday, April 8, 2008

Wyeth v. Levine: Should the FDA be held responsible for adverse events?

Diana Levine, a musician, went to the hospital with severe nausea associated with a migraine headache. She was given Phenergan, a Wyeth drug, by intramuscular injection. When the nausea persisted, the drug was administered by IV push into a vein. Unfortunately some of the drug made contact with arteries in the arm, resulting in arterial damage (a known risk), gangrene and eventual amputation.

Levine’s case against Wyeth is based on the fact that the product label did not mention the dangers associated with the IV push method of administration, and that Wyeth knew about these risks. A Vermont jury awarded Ms. Levine $6.7 million.

Wyeth did know about the risks associated with IV push administration of Phenergan but did not include a statement of these risks on the product label, nor did the FDA require Wyeth to include this warning. (It is unclear whether the FDA was aware of the risks.) Wyeth appealed to the Vermont Supreme Court and lost. The company then appealed to the US Supreme Court, which will hear the case next week.

The case is of considerable interest to pharmaceutical companies in the US. If the Supreme Court rules that FDA approval of a drug, and of the content of its labeling and warnings, means that pharmaceutical companies cannot be held liable for adverse events, the companies will be permanently off the hook for any damages that arise from use of their products.

There is precedent for the Supreme Court to decide in favour of Wyeth. Only recently the Court ruled that companies making medical devices that received FDA device and labelling approvals could not be held liable for injuries associated with use of these products. (The case involved an angioplasty balloon catheter that burst during a procedure.) Although the Court based its decision on a narrow and rather literal interpretation of US federal law governing medical devices, the decision was 8 to 1, with Justice Ruth Bader Ginsburg dissenting. Some Democrats in Congress will try to amend the law so that device manufacturing companies would once again be held liable.

There are several problems with assigning complete responsibility for product safety and labelling to the FDA. While recognizing that the Supreme Court narrowly interpreted this responsibility and may render a different opinion in the case of drugs (which are governed by a different federal law), it is useful to consider some of these difficulties.

First are the countervailing pressures on the FDA: a) to ensure that the products it approves are safe, and b) to do so quickly so that patients can benefit from new drugs and devices. The first requires patience, abundant data and analysis, and prudence; the second, speed, limited data and analysis, and imprudence.

Companies only grudgingly provide data to the FDA. More data means more studies, longer delays in approval and marketing, and greater chances that additional evidence will negate approval or reduce subsequent sales. The company goal is to provide the minimum amount of information that will yield approval for the widest possible uses and users. If legal responsibility for approval and use can be passed to the FDA, then companies will further pressure the Agency to proceed with less, not more, information.

Developing new drugs, or getting approval for new uses of drugs already approved, is costly to companies, although the costs are likely exaggerated and blended indecipherably with marketing costs. Still, many candidate chemicals (drugs) fail and have to be abandoned when they are subjected to clinical trials. Drug development costs, rightly, are passed along to patients and drug plans in the form of higher prices. If the Supreme Court rules in favour of Wyeth, rejecting Diana Levine’s claim, then it is very likely that the FDA will raise the approval bar. Drug development and approval costs - funded in large part directly by pharmaceutical companies seeking FDA approval - will increase, but the increase will likely be smaller than the substantial savings that will accrue to companies from reduced legal fees, court costs and liability settlements, often in the billions of dollars. (I presume that patients can’t sue the FDA and that the government will not compensate them for adverse events they suffer.)

It should also be remembered that many and indeed most pharmaceutical companies have been considerably less than forthright in sharing information with the FDA. More information means more vigilance and a greater probability of discovering something wrong or dangerous about the product. It is often only during the legal discovery process in liability cases that new information comes to light, including evidence of deliberate cover-up and of hiding information from the FDA. A case involving alleged fraudulent data submitted (or not submitted) to the FDA is also before the Supreme Court. This involves Warner-Lambert Company and Pfizer Inc. v. Kimberly Kent et al., a class action law suit in Michigan claiming that the company submitted fraudulent data to the FDA about the diabetes drug Rezulin (troglitazone). Michigan law grants immunity from product liability to drug manufacturers unless FDA approval was obtained by fraudulently withholding or misrepresenting information. This is obviously a catch-22, for to prove that Warner-Lambert withheld or misrepresented data requires the law suit to go forward and the company to be forced to release all relevant documents. This can’t happen without the case being heard.

Thus the Supreme Court decision on Wyeth v. Levine, and Warner-Lambert/Pfizer v Kent will alter not only the drug approval process and drug safety and availability, but pharmaceutical company balance sheets as well.

The precedent established in the case involving the angioplasty device hinged on federal law that forbids states from establishing any requirement different from, or in addition to, the FDA requirements. This makes sense: otherwise, companies would need to seek approval in each state, creating a redundant, costly and slow approval process for new products. But did the law mean that the states could not consider other factors in liability cases? Only a narrow reading of the law would yield this meaning. Supreme Court Justice Antonin Scalia, writing for the majority, took this view.

There is a wider view to support the Supreme Court if it decides on the narrow interpretation and maintains that product liability for FDA-approved drugs can’t be pursued. The current situation permitting these liability law suits in most states can be seen as second-guessing the FDA - all the cases involve drugs approved by the Agency - and thus perhaps weakening its credibility. Having the courts decide whether products are safe (or were safe) is not a robust way to evaluate scientific evidence. Dramatic adverse events affecting a few people may lead to the withdrawal from the market of drugs genuinely useful to the majority.

The Court, however, will also take cognizance of the burden that will be placed on the FDA if it grants pharmaceutical companies immunity from product liability law suits in all states. And it will have to grapple with the resulting consequence: that there will be no recourse or compensation for individuals harmed by drugs - individuals like Ms. Levine.

Sources

Cornell Law School http://www.law.cornell.edu/supct/cert/06-1498.html

Cristina Carmody Tilley. Wyeth v. Levine. Northwestern University Medill Journalism, Supreme Court Cases by Year. http://docket.medill.northwestern.edu/archives/004674.php

Linda Greenhouse, Barnaby Feder: Justices Shield Medical Devices from Lawsuits New York Times, February 21, 2008


Thursday, February 28, 2008

US FDA sets standards for medical journals and peer review

In seeking to define standards for ‘good reprint practices’ for pharmaceutical companies that want to provide copies of journal articles to doctors, the FDA has had to grapple with the quality of articles and of journals. That it has failed is hardly a surprise, but that it even tried is quite astonishing given its intimate knowledge of the hundreds (if not thousands) of flimsy and misleading research reports submitted to the agency by these same companies seeking drug approvals, most of which are published in these same journals.

The FDA has a legislated role not only to approve new pharmaceuticals, but to do so quickly. The agency is under pressure from all sides - except the ranks of the cautious and skeptical - to approve drugs and new usages with minimal delay and a minimum of assurance that they are effective and safe. Once approved, pharmaceutical companies market these drugs to physicians and patients.

Pharmaceutical companies (and some physicians and sick patients) are anxious to see if drugs approved for one indication - say, epilepsy in adults - might just work for other health problems - say, Parkinson’s disease - or in groups of individuals (such as children and pregnant women) for whom there are often few clinical trials.

These ‘off-label’ (non-FDA-approved) uses are the target of this FDA Guidance for Industry on Good Reprint Practices for the Distribution of Journal Articles.... The guidance is a set of suggestions that pharmaceutical companies should follow when promoting drugs for unapproved uses. The FDA claims that:

“public health may be advanced by healthcare professionals’ receipt of medical journal articles ... on unapproved or new uses of approved or cleared medical products that are truthful and not misleading.”

This is ludicrous. That pharmaceutical companies would select journal articles that are “truthful and not misleading” is irrational from the companies’ point of view - they will select articles that support use of the product, that emphasize benefit and minimize harm - and naive or blind from the FDA’s point of view.

There is a wealth of evidence that the published literature looks like a shipwreck of clinical trials that have been shown to be wrong. First, it is clear, even from the perspective of the FDA, that published studies sponsored by pharmaceutical companies show a terrific bias towards benefit and against harm. Studies showing benefit of a product are published; those showing no benefit, or harm, are not. The astonishing thing is that the FDA is intimately aware of this sham evidence base, for it has the only complete record of all clinical trials done to explore new uses of existing products.

For example, of 74 clinical trials of antidepressants (involving 12 drugs for which companies sought approval), 31% were never published. The published trials made the drugs look, on average, about a third more effective than the complete evidence - published and unpublished - supports.<1> Ludicrous is not too extreme an adjective to describe a belief that pharmaceutical companies will show doctors the complete evidence, on anything.
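The mechanism is easy to demonstrate. Here is a toy simulation (illustrative only - the numbers below are invented, not Turner et al.’s data): run many small trials of a drug with a modest true effect, ‘publish’ only the statistically significant ones, and the published average overstates the truth.

```python
# Toy simulation of selective publication: only 'significant' trials
# are published, so the published mean exceeds the true effect.
import random, statistics
random.seed(1)

true_effect, n_trials, n_per_arm, sd = 0.2, 74, 30, 1.0
published, all_results = [], []
for _ in range(n_trials):
    drug = [random.gauss(true_effect, sd) for _ in range(n_per_arm)]
    placebo = [random.gauss(0.0, sd) for _ in range(n_per_arm)]
    diff = statistics.mean(drug) - statistics.mean(placebo)
    all_results.append(diff)
    se = (sd**2 / n_per_arm + sd**2 / n_per_arm) ** 0.5
    if diff / se > 1.96:            # crude significance filter for 'publication'
        published.append(diff)

print(f"true effect: {true_effect}")
print(f"mean of all {n_trials} trials: {statistics.mean(all_results):.2f}")
print(f"mean of {len(published)} 'published' trials: {statistics.mean(published):.2f}")
```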

In its disconnect from reality the FDA goes further by listing a set of criteria that pharmaceutical companies can use (but are not required to use) when choosing articles to flog to doctors and their patients. Under the rubric “Good Reprint Practices” we find that:

The Journal should:
1. Have an editorial board that uses experts.
2. Have an editor and board independent of the journal owners.
3. Have a policy of full disclosure of conflicts of interest or biases.
4. Be peer reviewed.

The Journal article should not:
1. Be false or misleading.
2. Be a drug-company-funded special supplement.
3. Have been withdrawn by the journal.
4. (Promote a product) that poses a significant risk.

I know of few journals that would a) not have these policies in place, or b) knowingly publish false or misleading articles - and few that do not know and understand that, despite their best intentions, they regularly violate them all.

Even well funded journals like the New England Journal of Medicine and JAMA get duped and make mistakes. And they know that they are publishing but part of the evidence, mostly the part showing benefit.

The vast, vast majority of journals, however, are small, employ a part-time editor and have limited time and resources to provide effective oversight of the material they publish. They can’t reliably sort excellent science from sloppy science, much less detect purposeful deceit by funding sponsors with embedded conflicts of interest. Yet the FDA criteria render virtually all medical journals eligible for cherry-picking of articles by company marketing departments and the peddling of them to practicing physicians and their patients.

This is not an effective or safe way to improve the health of the public. It is an effective and now legally protected way (“the FDA said we should”) for pharmaceutical and device manufacturing companies to expand markets for unapproved uses.

Further, there is no need to promote selected research publications on specific drugs to physicians. Although individual practitioners ought to be able to recognize the conflict of interest of the pharmaceutical company salesperson, they don’t. The majority continue to accept visits and advice from various peddlers. The average practitioner (and patient) has little training, experience or, frankly, interest in reading and understanding research articles.

In fact, individual practitioners should be encouraged not to read research articles involving randomized trials of a single drug, for there is a reasonable probability that any one study will be flawed or subsequently disproved by further accumulating evidence. Average practitioners are encouraged to stick to guidelines which, while still susceptible to bias and influence by pharmaceutical companies, are at least one-half degree of freedom removed from Wall Street.

The FDA criteria for choosing a journal and article reflect neither the very flawed reality of current medical publishing, nor the sad state of market-driven medical research, nor the abilities, availability and interest of practitioners to read and understand. The FDA does not have to cave to pressure from Wall Street to let pharmaceutical companies market directly to individual practitioners and their patients by flogging journal articles. Ethical pharmaceutical companies should act responsibly and urge the FDA to drop this guidance.

Reference

Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252-60.

Thursday, February 21, 2008

Diabetes cured? - Rosiglitazone trials raise new questions.

What your doctor is reading, or should be.

Today, diabetes in adults is common. Yet when Banting, Best and others in 1922 showed the effects of a crude extract of pancreatic tissue on the blood sugar of a 14-year-old boy, the disease was uncommon.<1> Only those who had extremely high blood sugars, usually discovered in childhood with what we would now call Type I diabetes, were diagnosed with diabetes. They had ketosis (acid in their blood), were extremely ill and usually did not survive long beyond adolescence. Insulin changed that dramatically.

Type II diabetes was obviously around, but it went undiagnosed because it was asymptomatic unless blood sugar levels were exceedingly high, in which case the patient lost sugar in the urine, accompanied by water, producing thirst and frequent urination, day and night. Thus, in the 1960s and 70s it was uncommon for physicians to diagnose diabetes unless the patient had symptoms, in which case either insulin or one of the oral agents could be prescribed - in addition to diet and weight loss, neither of which was effective (then or now).

Type II diabetes became a recognized disease when it was observed that individuals with high blood sugars had a higher risk of micro-vascular complications (disease of the small peripheral arteries in the legs, leading eventually to ulcers and amputations, and microvascular damage to the kidney and retina, with renal failure and loss of vision) as well as macrovascular complications (atherosclerosis of the larger arteries, with heart attacks and strokes).

Epidemiologic studies revealed a clear association between high blood sugars and these bad outcomes. But it is a jump in logic to assume that the culprit is the blood sugar. Perhaps the blood sugar increases are due to some other cause, which is also causing disease in small and large arteries? In this counter explanation, the blood sugar is but a sign of the disease, not the cause. Reducing blood sugar may not prevent the bad outcomes.
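For readers who like to see this counter-explanation in action, here is a toy simulation - a sketch in Python, with made-up probabilities chosen purely for illustration - in which a hidden underlying disease raises both the blood sugar and the risk of vascular events, while the blood sugar itself causes nothing:

import random
random.seed(1)

n = 100_000
tallies = {True: [0, 0], False: [0, 0]}  # high_sugar -> [events, people]
for _ in range(n):
    hidden = random.random() < 0.2                       # unmeasured underlying disease
    high_sugar = random.random() < (0.6 if hidden else 0.1)
    event = random.random() < (0.3 if hidden else 0.05)  # depends only on 'hidden'
    tallies[high_sugar][0] += event
    tallies[high_sugar][1] += 1

for sugar, (events, people) in tallies.items():
    print(f"high blood sugar = {sugar}: event rate {events / people:.3f}")

The simulated event rate comes out nearly three times higher in the high-sugar group even though, by construction, lowering the blood sugar would prevent nothing.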

The interest in the new oral agents for treating asymptomatic increases in blood sugar, aggressive marketing by the manufacturers of these agents, and endorsements by then-nascent patient advocacy groups such as national diabetes associations led to an epidemic of Type II diabetes, later fueled by increasingly sedentary lifestyles and a glut of obesity.

What was needed to unravel this possible paradox - that we were diagnosing and treating a disease that did not exist (treating a blood test, not a disease) - was a randomized clinical trial tabulating outcomes that really matter to people: fewer amputations, less renal and visual failure, fewer strokes and heart attacks and perhaps longer life.

Readers will understand the importance of RCTs for this type of question. Such RCTs are difficult to complete because the outcomes are relatively rare and occur only years later; hence the need for very large numbers of study subjects with Type II diabetes followed for long periods of time. Expensive trials, long durations, missing information and loss to follow-up of many trial subjects make this type of research unwieldy, unfriendly and open to variable interpretations - all of which has happened.

There have been but a few such trials. The earliest was the University Group Diabetes Study, about 30 years ago. The results were unclear and are still being debated; nonetheless they form the basis of current therapeutic recommendations.

The other important outcome trial in Type II diabetes is the United Kingdom Prospective Diabetes Study (UKPDS), in which over 4,000 patients were randomized to receive an oral anti-diabetic agent and followed for about 10 years on average. <2> That study showed improved outcomes, but the trial was messy (patients often took multiple agents over this time frame, including insulin; many were lost to follow-up) and the results are still debated.

Since then, as newer oral pharmaceuticals were developed to lower blood sugar or increase insulin sensitivity, regulatory agencies have required RCT evidence only that they 'work': that they do lower blood sugar. Because outcomes meaningful to patients are ignored, only small RCTs of short duration are needed.

Recently, however, pharmaceutical companies have conducted larger RCTs to determine whether their products result in fewer serious complications among patients with Type II diabetes. Undoubtedly, the motivation for these trials is to develop data that would give their products a competitive edge in an increasingly crowded market of new compounds.

The ADOPT trial (A Diabetes Outcome Progression Trial), <3> the DREAM trial (Diabetes Reduction Assessment with Ramipril and Rosiglitazone Medication) <4> and the RECORD trial (Rosiglitazone Evaluated for Cardiac Outcomes and Regulation of Glycaemia in Diabetes) <5> were all sponsored by pharmaceutical companies anxious to show that their particular drugs were better than cheap generic versions of older drugs (sulphonylureas and metformin).

All 3 trials showed no benefit. And all 3 trials strongly suggest that the new medications are not only of no benefit, but are harmful to patients with Type II diabetes.

Suffice it to look at rosiglitazone.

In an extraordinary piece of research, Nissen and Wolski <6> found 26 reports of small clinical trials of rosiglitazone. Summarizing these in a meta-analysis, they showed that patients randomized to receive this drug, often along with metformin or a sulfonylurea, were at a statistically significantly higher risk (43%) of heart attacks and/or death than the control groups receiving just the sulfonylurea or metformin [relative risk 1.43, 95% CI 1.03 to 1.98].
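For readers curious about the arithmetic behind such a summary estimate, here is a minimal sketch of how a relative risk and its 95% confidence interval are computed from trial counts. The counts below are hypothetical, chosen only so the answer lands near the published 1.43; they are not the Nissen-Wolski data, and a real meta-analysis pools many such tables with appropriate weights:

import math

def relative_risk(events_tx, n_tx, events_ctl, n_ctl):
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se_log_rr = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctl - 1/n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# hypothetical pooled counts: 86/10,000 events on drug vs 60/10,000 on control
rr, lo, hi = relative_risk(86, 10_000, 60, 10_000)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # RR = 1.43, CI 1.03 to 1.99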

In an effort to refute the Nissen-Wolski paper, GlaxoSmithKline hurriedly released an unplanned interim analysis of its ongoing trial (RECORD) that, unfortunately for the company, not only failed to refute the previous results but rather confirmed them, although the company's interpretation played down the seriousness of the harms (heart attacks, strokes and death). <5>

Pharmaceutical company sponsors generally attribute unexpected bad-for-sales results to unforeseen difficulties in carrying out the trials (for example, lower than expected frequencies of outcome events) and, curiously, tend to attribute the unfavourable results - even statistically significant ones - to chance.

But even a casual observer would interpret these studies as showing no benefit from these newer compounds. And given the potential for harm, cautious physicians and patients would conclude that it is best to avoid prescribing and taking them.

Interpretation

These results are striking and deserve our attention. The evidence is clear that, even within the company-sponsored RCT, patients taking rosiglitazone along with another agent suffered more heart attacks, strokes and/or deaths than those taking a single older agent such as metformin or a sulphonylurea.

These results are applicable to all drugs of this class - the thiazolidinediones.

Implications

For patients

Patients with Type II diabetes should not be taking thiazolidinediones. Those receiving these compounds should be switched to sulphonylureas and/or metformin, or insulin.

Patients with Type II diabetes who are asymptomatic and have blood sugars lower than their renal threshold (no sugar in the urine) should be encouraged to lose weight and exercise. Often this will bring their blood sugars into more normal ranges.

Current guidelines <7> are less cautious than I am and continue to recommend aggressive lowering of blood sugar even in asymptomatic patients. There is some evidence, from randomized trials of insulin and from animal experiments, that maintaining blood sugars in near-normal ranges is associated with reductions in micro-vascular disease and thus ought to reduce the risk of peripheral vascular disease, renal failure and loss of vision. Theoretically, then, a lower blood sugar is better. But there is little clinical evidence to support the guideline recommendations, which are all financed by pharmaceutical companies with deeply vested interests.

For society

The epidemic of Type II diabetes is the result of the hard work and clever execution of projects hatched by the marketing divisions of pharmaceutical companies eager to expand markets for their products. The fact that all trials of treatments for Type II diabetes have shown that treatment can cause harm (or might, in the UKPDS trial) makes it imperative that we once again consider a large-scale simple trial of these oral agents, measuring the simple but devastating outcomes that count for patients: amputations, renal failure, blindness, myocardial infarctions, strokes and deaths.

In the interim, patients whose only 'abnormality' is an abnormal blood sugar might do best to avoid drugs.

References used - links to those that are free

1. Banting FG, Best CH, Collip JB, Campbell WR, Fletcher AA. Pancreatic extracts in the treatment of diabetes mellitus. Preliminary report. CMAJ 1922;12:141-6.

2. UK Prospective Diabetes Study (UKPDS) Group. Effect of intensive blood-glucose control with metformin on complications in overweight patients with type 2 diabetes (UKPDS 34). Lancet 1998;352:854-65.

3. Kahn SE, Haffner SM, Heise MA, et al. Glycemic durability of rosiglitazone, metformin, or glyburide monotherapy. N Engl J Med 2006;355:2427-43. [Erratum, N Engl J Med 2007;356:1387-8.]

4. Gerstein HC, Yusuf S, Bosch J, et al. Effect of rosiglitazone on the frequency of diabetes in patients with impaired glucose tolerance or impaired fasting glucose: a randomised controlled trial. Lancet 2006;368:1096-105. [Erratum, Lancet 2006;368:1770.]

5. Home PD, Pocock SJ, Beck-Nielsen H, et al. Rosiglitazone Evaluated for Cardiac Outcomes and Regulation of Glycaemia in Diabetes (RECORD) Study: interim findings on cardiovascular hospitalizations and deaths. N Engl J Med 2007;357. DOI: 10.1056/NEJMoa073394.

6. Nissen SE, Wolski K. Effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular disease. N Engl J Med 2007;356:2457-2471. [Erratum, N Engl J Med 2007;357:100.]
online free

7. Nathan DM, Buse JB, Davidson MB, Ferrannini E, Holman RR, Sherwin R, Zinman B. Management of hyperglycemia in type 2 diabetes: a consensus algorithm for the initiation and adjustment of therapy. Update regarding thiazolidinediones: a consensus statement from the American Diabetes Association and the European Association for the Study of Diabetes. Diabetes Care 2008;31:173-5.
online free

Monday, February 18, 2008

Chronic knee pain from osteoarthritis and what to do about it - topical ibuprofen.

About 1 in 3 adults over the age of 50 have chronic pain in the knees. Most cases are caused by osteoarthritis, essentially a wearing out of the joint, due mostly to age, prior injuries, obesity or other overuse (work or weight bearing), and in part to one's genes. If you want to know whether you have osteoarthritis, a good source is the US National Institutes of Health, which publishes a free, patient-friendly guide (Handout on Health: Osteoarthritis); a figure in the handout shows the main joints affected by osteoarthritis.

Most people with more severe pain, especially pain that limits walking, will try anti-inflammatory drugs like aspirin or ibuprofen. These drugs, collectively called NSAIDs (non-steroidal anti-inflammatory drugs), can cause side effects, the most serious of which is bleeding in the stomach. Minor side effects are also reported, such as indigestion (gastritis or abdominal pains), along with increases in blood pressure and decreases in kidney function. They should be used with care in patients with pre-existing hypertension, heart, kidney or liver disease, and in patients with known gastric bleeding and/or taking drugs that promote bleeding (anti-coagulants, steroids).

Recently topical ibuprofen has become available.

The study referenced below is an RCT of 282 patients selected from general practices in the UK; they represent about 1% of the patients in these practices identified as possibly having osteoarthritis. Of these, 85% completed the study.

Primary Outcome Measure: Knee pain at 12 months in the topical and oral groups.

Result: There was no difference between the two groups in pain severity, joint stiffness or mobility.


Interpretation of study:

Topical and oral ibuprofen provided equivalent relief for osteoarthritis of the knees. The study was not designed to show whether topical ibuprofen was less likely than oral ibuprofen to cause side effects, especially gastrointestinal bleeding, nor whether other side effects were more or less frequent.

The study was not blinded at the patient level, so subjectivity may have played a part in this result. And given the low statistical power obtained even in this carefully planned study, it is difficult to imagine that any other study could do much better. Thus we may never know the answer to the study question: are topical NSAIDs as effective as, and safer and better tolerated than, oral NSAIDs? It is likely, however, that oral ibuprofen delivers a larger effective dose, resulting in better pain relief and a higher incidence of side effects.

Implications:

For the patient

The rational patient could try topical ibuprofen, expect symptom relief equivalent to the oral pill, and judge for themselves whether this occurred and was worth the additional cost of the topical drug.

The authors note that symptom relief generally did not change over the study, suggesting perhaps that neither route of administration works - that a placebo, simple acetaminophen (Tylenol, Paracetamol) or no drug treatment might do just as well, with fewer side effects.

For Society

Given the (currently) higher cost of topical ibuprofen, the equivalence of oral and topical ibuprofen for symptom relief and the paucity of data on serious adverse effects, health insurance plans should consider topical ibuprofen cost-ineffective compared with oral generic products and therefore not cover it. Database studies of large numbers of users might detect differences in rates of serious adverse events; if such differences emerged, the topical agents might then prove cost-effective.

References:

Underwood M, Ashby D, Cross P, et al. Advice to use topical or oral ibuprofen for chronic knee pain in older people: randomised controlled trial and patient preference study. BMJ 2008;336:138-42. doi:10.1136/bmj.39399.656331.25

Monday, January 28, 2008

Ovarian Cancer

What your doctor is reading - or should be

Cancer of the ovaries occurs in about 1 in 50 women over the age of 50 or so. While not that common, the disease is troublesome because it is difficult to diagnose early, and most forms of ovarian cancer are highly malignant and spread quickly. Thus there has been interest in identifying the causes of ovarian cancer and perhaps preventing it.


Most of these studies are case-control designs, in which women with diagnosed ovarian cancer are interviewed about possible risk factors and their answers compared with answers to the same questions from women without ovarian cancer. These studies provide most of the evidence we have, but they are fraught with difficulties. For example, people with a particular condition may remember past events they think might have caused their illness more readily than people without the condition. The result is a bias (recall bias) that distorts the true picture, creating a false association and a misleading risk factor.

In this week's issue of The Lancet, evidence is summarized from 48 case-control and prospective studies that included questions about past or present use of oral contraceptives. The evidence is coherent and consistent across all studies and across the two research designs (case-control and prospective): women who reported using oral contraceptives had a lower risk of developing ovarian cancer. The risk ratio was 0.42 (about a 60% reduction in risk) for women who reported using oral contraceptives for 15 years or more. Of great interest, the protective effect of taking oral contraceptives early in life persists for years after a woman has stopped taking them. A podcast with one of the authors and a Lancet editor is available free.

Implications

For an individual woman:

As ovarian cancer is relatively rare, the chances of benefit for any one woman are small. For example, if 250 women took oral contraceptives for 10 years, one case of ovarian cancer would be prevented.
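The arithmetic behind such a figure is simple and worth seeing once. A sketch, assuming an illustrative 10-year baseline risk (the true figure varies by age and population):

baseline_risk = 0.0069  # assumed 10-year risk of ovarian cancer, for illustration only
risk_ratio = 0.42       # reported for long-term users of oral contraceptives

absolute_risk_reduction = baseline_risk * (1 - risk_ratio)
number_needed_to_treat = 1 / absolute_risk_reduction
print(f"about 1 case prevented per {number_needed_to_treat:.0f} women")  # ~250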

There are hazards of oral contraceptives - mainly deep vein thrombosis (clots in the veins of the legs and abdomen), which may lead to pulmonary emboli and death - but these events are extremely rare, especially with the newer low-estrogen-dose contraceptive pills. Oral contraceptive use has also been linked, though not strongly, with other cancers, particularly breast and cervical cancer - although the latter association is most likely confounded (misleading), because we now know cervical cancer is caused by several papilloma viruses, themselves associated with more frequent sex (and now potentially preventable with the new vaccines).

And there are benefits. Oral contraceptives are the most effective form of birth control, preventing unwanted pregnancies and, even more unwanted, abortions (both of which carry substantially higher health risks than oral contraceptives).


For society:

The widespread use of oral contraceptives in high- and middle-income countries means that millions of women are taking them. Thus, even if the risk reduction for a single woman is small, across all women as many as 300,000 cases of ovarian cancer will be prevented by oral contraceptives. This is an argument for making oral contraceptives more easily available throughout the world. Indeed, The Lancet editors have called for making the pill available over the counter, without a physician's prescription.

Cautions

This research was carefully done by experienced epidemiologists. Nonetheless, the fundamental problem with cancer-prevention research in humans is that we must rely on non-experimental information. Thus it remains possible that some other factor, unknown to the women or to the researchers, made women who were unlikely ever to get ovarian cancer more likely to use oral contraceptives, and women more likely to get ovarian cancer less likely to use them. We have no way of knowing whether this bias exists.

Warnings

Women with previous deep vein thrombosis, migraines, heart disease or liver disease should probably not take oral contraceptives, or should at least discuss this with their physicians.

http://en.wikipedia.org/wiki/Ovarian_cancer
http://www.nlm.nih.gov/medlineplus/ovariancancer.html
http://podcast.thelancet.com/audio/lancet/2008/9609_26january.mp3

Friday, January 25, 2008

So many small clinical trials, so little information

Editors get a lot of manuscripts reporting relatively small clinical trials, in the range of a few hundred study subjects. Such studies can never evaluate clinically meaningful outcomes like death or hospitalization, because these events are rare and large trials are needed to collect enough of them for statistical testing. The smallish trials usually measure indirect outcomes, such as serum cholesterol or a change on some indicator of well-being such as mood or depression. While helpful, these kinds of measures are not particularly useful: patients and physicians have to take it on faith, or on biochemistry, that there is an underlying chain of causality that will in the end yield some tangible benefit the patient can experience.

Many of these small trials are never published. Individually they convey virtually no information that can be translated into patient care. At best, if enough of them accumulate, they can be combined in meta-analyses and perhaps some estimate of treatment efficacy determined. But this is a long-term hope, and the result always remains uncertain.

It is perhaps ironic that Orwell's 1984 was the year Yusuf and others published their cogent arguments for doing large clinical trials in common serious illnesses. They correctly argued that discovery of a small benefit or risk (say a 20% improvement) would carry enormous advantage for a large number of patients and for society, and that small trials could never detect this level of benefit. Further, they pointed out that small trials of common illnesses and conditions are very unlikely to yield much information about clinically important events unless the effect is very large - and even then, if the treatment were good enough to produce a big benefit, the benefit would be obvious to the average clinician without a clinical trial. So why do them?
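To see why a trial of a few hundred subjects cannot detect a 20% improvement, consider the standard normal-approximation sample-size formula for comparing two proportions. The control event rate below is an assumption for illustration:

import math

p_control = 0.05              # assumed control-group event rate
p_treated = p_control * 0.80  # a 20% relative reduction
z_alpha, z_beta = 1.96, 0.84  # two-sided alpha = 0.05, power = 80%

variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
n_per_arm = (z_alpha + z_beta) ** 2 * variance / (p_control - p_treated) ** 2
print(f"~{math.ceil(n_per_arm)} subjects per arm")  # ~6,735 per arm

Roughly 13,500 subjects in all - two orders of magnitude beyond the typical small trial.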

Yet small trials continue to be published. Yesterday, while trawling the clinical trial registry www.clinicaltrials.gov, I found 1,788 studies related to the keyword 'depression'. Limiting the search to ongoing trials (those still recruiting study subjects), to those the investigators classified as phase III or phase IV (not the smaller, very early phase I and II assessments of safety and efficacy sometimes used to decide whether a 'larger' trial should go ahead) and to those registered after January 1, 2005, left 331 trials.

[Graph: the 331 registered depression trials, tabulated by number of study subjects]
Only 10 of these trials had more than 1,000 subjects (and many of those were only secondarily about my condition of interest, depression - being, for example, studies of patients with cancer on different types of chemotherapy). Most (almost half) of the trials for depression had fewer than 100 subjects. Over 9,000 patients were involved in these very small trials, which are unlikely to provide any meaningful information and may in fact prove harmful by generating false leads.
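For those who want to repeat the exercise, the tabulation itself is trivial once the registry search results are exported. A sketch, assuming a CSV export with an 'Enrollment' column ('depression_trials.csv' is a hypothetical file name, and the column label may differ with the export format):

import pandas as pd

trials = pd.read_csv("depression_trials.csv")
bins = [0, 100, 500, 1000, float("inf")]
labels = ["<100", "100-499", "500-999", "1000+"]
trials["size_band"] = pd.cut(trials["Enrollment"], bins=bins, labels=labels)
print(trials["size_band"].value_counts().sort_index())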

Now readers may argue that depression and other psychiatric diagnoses are difficult to study, and that is true. But this only supports the Yusuf position: the study of interventions for depression, a common chronic disease, ought to be carried out in very large patient groups, looking for interventions with some modest effect - say a 20-25% reduction in suicide or suicide attempts. There is also growing concern that the active compounds being evaluated may not only be unhelpful but may in some way be a cause of suicides. At present there are 160 ongoing trials, involving almost 9,000 patients, each with fewer than 100 study subjects. When one adds in study-subject drop-out, non-compliance, observer and measurement error and so on, there is virtually no probability that these studies will yield useful results.

I did not examine or try to judge the adequacy of the trial designs, objectives and premises, although this might be interesting given that a large proportion (73%) of these designated phase III and IV trials are sponsored by industry.

My list of trials is obviously a potpourri. Had I limited the search to major depression as a diagnostic category, I would have found fewer trials and a different pattern. Also, mine was a desktop exercise that would need to be double-checked more rigorously for errors and validated. Nonetheless, the graph is a reasonable summary of ongoing clinical trials for an important clinical condition and global public-health problem.

I suspect that choosing any other condition in the trial registry would yield a similar distribution of study sizes. It would be interesting to do some of this work more formally. I wonder, also, whether the registry could incorporate some measure of the robustness of the trials that are registered. For example, including the elements of trial design recommended in the CONSORT Statement (www.consort-statement.org) would be relatively easy for registrants to complete (they ought to have them in hand before taking their studies for ethical and institutional review in any case) and would allow users of the registry (patients, physicians, researchers) to judge the quality of a trial at a glance; a sketch of the idea follows.
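Such a robustness measure could be as simple as a completeness score over a handful of CONSORT-style design elements. A hypothetical sketch - the field names below are mine, not an actual registry schema:

CONSORT_ITEMS = ["randomization_method", "allocation_concealment",
                 "blinding", "primary_outcome_prespecified",
                 "sample_size_justification"]

def robustness_score(trial_record: dict) -> float:
    """Fraction of checklist items the registrant has completed."""
    reported = sum(1 for item in CONSORT_ITEMS if trial_record.get(item))
    return reported / len(CONSORT_ITEMS)

print(robustness_score({"blinding": "double", "primary_outcome_prespecified": True}))  # 0.4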

Yusuf S, Collins R, Peto R. Why do we need some large, simple randomized trials? Statistics in Medicine 1984;3:409-20

Monday, January 21, 2008

Why do pharmaceutical companies keep screwing up?

Perhaps it's because they are trying too hard. Or lying too hard. But probably it's because bringing new compounds to market is genuinely difficult, costs a lot of shareholder money and carries a high risk of failure. It must be similar to designing a new commercial aircraft, or even bringing out a new car model that may or may not succeed in the market. The difference between a pharmaceutical and a vehicle, however, is not only one of size; it is one of efficacy and safety. The auto industry knows beforehand that it can design a safe product that actually works; its only concern is whether there are enough buyers. For the pharmaceutical industry, there are no guarantees at the design stage that the product will work, nor that it will be safe.

The statins are a wonderful example of this difference and of the financial pressures on big pharmaceutical companies. Thus Merck embarked on a joint venture with Schering-Plough: a clinical trial to demonstrate that their two products, combined into a single tablet called Vytorin, would be significantly superior to simvastatin alone - Merck's main cholesterol-lowering agent, which was about to come off patent and be reproduced by generic manufacturers at a tiny fraction of the price Merck could charge under its patent monopoly.

The idea of combining the two compounds made biological sense, in that they act on different sites in the cholesterol metabolic pathways. The Vytorin ENHANCE trial began in June 2002, aiming to recruit 720 subjects with primary hypercholesterolemia and to show that after 24 months of treatment the cholesterol plaques in their carotid arteries had shrunk, statistically significantly. The results were eagerly anticipated by physicians, patients and shareholders. Although the study end-date was April 2006, by November 2007 the results had not been made public. Merck/Schering-Plough issued a statement to explain the delay:

“The independent panel recommended focusing the primary endpoint to the common carotid artery to expedite the reporting of the study findings.  Merck/Schering-Plough now anticipates that these results of the ENHANCE study will be presented at the American College of Cardiology meeting in March 2008.”

This is a curious statement, because the study had only one primary endpoint: carotid artery intima-media thickness per subject over 24 months, comparing the baseline reading with the endpoint reading (trial registration, www.clinicaltrials.gov). Why did the head investigator, Dr. Enrico P. Veltri, a Schering-Plough employee, need to "re-focus"? There was only one focal point. It also does not escape notice that as long as the study went unpublished, revenue from sales of Vytorin would remain unaffected.

The answer, as it appears in a statement by the two companies on January 14, 2008 (now coming up on two years after the projected end date of the trial), is that Vytorin doesn't work. It won't fly. Not only that, but the data strongly suggest that the combination product is harmful: subjects in the trial who took simvastatin alone had cleaner carotid arteries than those who took Vytorin.

Now, in fairness to the companies, the result must have been a surprise, and they must have wondered whether somewhere in the analysis of the carotid artery results a data-entry problem had occurred - perhaps the patient data files got mixed up. Nonetheless, in their November 17 statement they should have been clear that this was not a question of re-focusing on endpoints but something far more serious: patients taking both drugs did worse. And not just that they felt worse - the carotid arteries carrying blood to their brains got more clogged up!

Pharmaceutical company corporate behaviour, we are assured, is changing. Merck claims to "put patient safety first". Yet this kind of press statement, along with the fact that it appeared 16 months after the trial end date, belies that claim. The Orwellian language of the press releases must have been a deliberate choice of words by the marketing department, pressured as it always is by Finance. Obfuscation, truthiness - if not lies.

The companies have taken pains to point out that the results show only a tendency towards clogging of the carotid arteries by their product; there is no proof - i.e. no statistically significant proof. Sure, the result might have arisen by chance. But the increase in carotid intimal plaque thickness was 0.0111 mm in the Vytorin patients and only 0.0058 mm in the simvastatin group. Thus plaques grew almost twice as much in the Vytorin group (91% more).
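The ratio is easy to check (the two changes in thickness are taken from the trial figures as quoted above):

vytorin_change = 0.0111      # mm, mean growth in intima-media thickness
simvastatin_change = 0.0058  # mm
print(f"ratio = {vytorin_change / simvastatin_change:.2f}")              # ~1.91
print(f"excess growth = {vytorin_change / simvastatin_change - 1:.0%}")  # ~91%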

The Vytorin trial also demonstrates that although huge amounts of money are spent annually by patients and governments on statins, there is precious little evidence that they do anything more than lower blood cholesterol. Most folks would not give a damn about some molecule or other in their blood were it not for their belief that its level is highly correlated with their atherosclerosis and with their chances of having a heart attack or a stroke. What is measured in trials, however, is two steps removed from what benefits patients: cholesterol, then plaques, then strokes. In fact there isn't much evidence that lowering cholesterol reduces rates of heart attacks and strokes, and this study suggests that the combination product Vytorin may be harmful.

The same is true for most pharmaceuticals. We evaluate whether they work not on the basis of a test drive or a test flight - will this thing actually fly? - but on a set of theoretical arguments that it ought to fly. This is equivalent to Boeing saying, "Hey, the engine started; it ought to go up!"

In a way it is surprising that Merck and Schering-Plough agreed to go ahead with this trial at all. They could have proposed a trial with soft intermediate endpoints like cholesterol levels. Indeed, of the 18 ongoing trials of Vytorin, all but one have intermediate metabolic endpoints - usually LDL-c levels. Only one large trial, to be completed in 2011, will look at what really matters to patients and doctors: rates of heart attacks and strokes. I wonder now whether it is ethical to continue further studies of Vytorin. At a minimum, one would have to warn study subjects that Vytorin appears not to prevent the clogging of their carotid (and presumably coronary) arteries, and that there is a pretty good chance it may make them worse.

Vytorin is not the only example of a drug trial going wrong. I believe Merck got into trouble with Vioxx because of a judgement error: they stopped collecting data on adverse drug events before they stopped collecting data on good outcomes - that is, they looked for good outcomes longer than they looked for bad ones. The extra deaths that occurred in that interval did not materially affect the estimates of risk for Vioxx, but it certainly looked bad for the company - as though they had deliberately made this decision once they had looked at the data, or at least had seen it coming. It turns out, really, that Vioxx is no worse than the other drugs in its class: they all carry a risk of cardiovascular disease (and their benefit is no greater than that of less toxic drugs).

Why not just come clean? Pharmaceutical companies know that not all the drugs they design will fly. Hanging on to losers and trying to bend clinical trial results to hoodwink the FDA and physicians is not good enough. They need to set their own internal standards higher than is required, and much higher than they are now. This will undoubtedly increase the cost of bringing new compounds to market, but it will preserve their reputations and their long-run viability.

To come clean will mean that the research divisions must be separated from the finance and marketing divisions, not by a Chinese wall but by a cement one. For Merck and Schering-Plough to choose an employee as the principal investigator was a mistake of Titanic proportions. Not that Dr. Veltri is not capable; it is that he could never be seen as independent. But picking a principal investigator is almost beside the point. The entire research enterprise must be kept in an intellectual vault. Researchers should be rewarded with attractive positions, but those positions and salaries must carry no monetary incentives tied to the launch of a successful product. And researchers must have complete independence in choosing and designing the research protocol.

The finance departments must revise upward their risk assessments for proposed new compounds and studies. With a truly independent research division, it is likely that more studies will fail to show enough efficacy and safety to permit a test flight of the product on the open market. That is a cost of doing business in this high-risk environment, but it is a risk that must be factored into company bottom lines. Such a policy ought to reassure shareholders and decrease the downside risk of endless collective-action lawsuits.