
Commentary: Not Enough Patients? Don't Do the Study


By Paul Raeburn

Nearly every week, newspapers carry reports of medical research that can make biotechnology and pharmaceutical stocks rise and fall like leaves in an autumn wind. These reports can also have a profound impact on the hopes of ailing patients and their families.

It comes as a surprise, then, to learn that many medical studies include too few patients to reach any definitive conclusion. The problem arises when trials come up with negative results--that is, the conclusion that a new drug or procedure has no effect. In many cases, critics say, negative results don't necessarily mean the treatment didn't work--only that the study involved too few patients to find any benefit. Yet such trials can discourage further research on potentially promising treatments and expose volunteer subjects to needless risks.

In a recent paper in the Journal of the American Medical Assn., Dr. Scott Halpern of the University of Pennsylvania School of Medicine and his colleagues concluded that studies with too few patients can be unethical. Patients and healthy volunteers take on risk in experimental studies, they argued, even when it should be clear from the outset that the study is too small for those risks to produce useful findings.

Sometimes, the problem occurs unexpectedly. Several years ago, Dr. Joseph D. Redle, then at the William Beaumont Hospital in Royal Oak, Mich., undertook a study to determine whether a drug called amiodarone would reduce the likelihood of a serious heart-rhythm abnormality after a bypass. He had no funding and was able to recruit only 143 patients before moving to a private cardiology practice in Akron. The study, published in 1999 in the American Heart Journal, found no benefit from the drug. That's no surprise, because the trial didn't have enough patients to show whether the drug worked or not. "There was a lack of resources and time to put into getting enough patients," Redle says. The value of amiodarone remained in question.

In another case, researchers at the University of Western Ontario set out to see if the cancer drug methotrexate would help maintain remissions of Crohn's disease, a disorder of the digestive tract. They recruited only 76 of the 110 patients needed, and they had problems obtaining the drug and a placebo that resembled it. "Here you have a drug that is very useful to replace steroids, and the research that needs to be done isn't done" because of funding problems, says Dr. Brian G. Feagan, an author of the trial's report, which was published in the New England Journal of Medicine in 2000. This time, the researchers got lucky: The drug was so effective that it showed a positive effect even though researchers did not recruit all the patients they thought they needed.

The problem exists in all fields of medicine. In January, Kevin C. Chung, a hand surgeon, and his colleagues at the University of Michigan Medical Center reviewed 20 years of human and animal research in leading plastic-surgery journals. They looked only at negative studies--those that failed to find a difference between two treatments. They found that more than 90% of the studies were "underpowered"--conducted with too few patients for any differences to emerge. In other words, the research was worthless. Such clinical trials "can waste resources, deter further research, and impede advances in clinical treatment," Chung and his colleagues warned.

Why does the problem continue? One reason is money. Larger studies cost more. Another problem can be finding enough patients to volunteer when a rare disease is involved. The solution is for researchers to pool financial resources and, in the case of rare diseases, experimental subjects, Halpern says. But scientists with big titles and even bigger egos are often unwilling to relinquish control of their research to a group--so the underpowered studies are done instead.

Some of the blame falls on researchers who do not properly understand how to set up studies, says Andrew J. Vickers, a biostatistician at Memorial Sloan-Kettering Cancer Center in New York. When studies are launched, researchers make educated guesses about what they expect to find. That guesswork enters into calculating the number of subjects needed. The calculation is usually done by plugging numbers into a standard formula--and many biologists do not understand the subtleties that underlie the calculations, Vickers says. "Many researchers are inadequately aware that the calculations involve assumptions and guesses--and those may well be inaccurate," he says. In other words, there's still plenty of art in science.
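To see how much those assumptions matter, consider a rough sketch of the standard calculation for comparing two success rates. The formula is the usual normal-approximation sample-size calculation; the numbers plugged in--the assumed complication rates, significance level, and power--are hypothetical illustrations, not figures from any trial described here. Shrink the guessed-at effect even modestly, and the required enrollment balloons.

# Illustrative sketch: patients needed per group to compare two proportions,
# using the standard normal-approximation sample-size formula. All inputs
# below are assumptions for illustration, not data from the trials above.
from statistics import NormalDist

def sample_size_per_group(p_control, p_treatment, alpha=0.05, power=0.80):
    """Patients needed in each arm to detect the assumed difference."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired power
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_control * (1 - p_control)
                             + p_treatment * (1 - p_treatment)) ** 0.5) ** 2
    return numerator / (p_control - p_treatment) ** 2

# If researchers guess a complication rate will drop from 30% to 20%:
print(round(sample_size_per_group(0.30, 0.20)))   # roughly 290 per group
# But if the true drop is only 30% to 25%, far more patients are needed:
print(round(sample_size_per_group(0.30, 0.25)))   # roughly 1,250 per group

In other words, a study sized for an optimistic guess can be hopelessly underpowered for the effect that actually exists.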

There may be more questionable motives for an underpowered study: Drug-company researchers might not want a clear result. Suppose a company wants to show its drug has no more side effects than a competitor's product, a question often not resolved by Food & Drug Administration approval. In that case, says Halpern, "it is to your advantage to do an underpowered study, which would fail to show an increased risk of side effects."

The same might be true when a pharmaceutical company wants to show that there are no differences in the effectiveness of its new entry and that of an existing drug. "I'm fully convinced that industry-sponsored trials are often intentionally underpowered," says Halpern.

At a time when medical costs are soaring and research funding is handed out sparingly, it is troubling to discover that these studies--which waste time and money and put patients at risk unnecessarily--are still being conducted. Some researchers argue that underpowered studies are in fact worth something--that a little knowledge is better than none. But the stakes are too high to tolerate wasteful uncertainty, or worse, intentional confounding of research results. This isn't just a question of money or researchers' careers. The welfare of patients is at stake.

Raeburn covers science and medicine from New York.

