TUESDAY, Oct. 23 (HealthDay News) — When initial findings about an experimental drug or treatment sound too good to be true, they probably are, according to a new study.
Stanford University researchers found that after a single study reports large benefits for a new medical intervention, additional studies almost always find a smaller treatment effect.
The study authors suspect that small sample sizes contribute to the initially inflated benefits.
“Beware of small studies claiming extraordinary benefits or extraordinary harms of medical interventions; the truth about these may be more modest,” said Dr. John Ioannidis, a professor of medicine, health research and policy, and statistics at Stanford’s Prevention Research Center in California.
Ioannidis is senior author of the study, published in the Oct. 24/31 issue of the Journal of the American Medical Association.
Health experts know that most medical interventions introduced today have modest effects. Still, clinical trials occasionally report large effects.
Dr. Andrew Oxman, author of an accompanying journal editorial, added that “few clinical interventions have been found to have big effects on outcomes that are important to patients; for example, to cut the risk of a heart attack, a stroke or some other bad outcome in half.”
Typically, when big effects are reported, “it has been in small trials that do not provide reliable evidence and it has been on laboratory outcomes [such as cholesterol levels], which may or may not translate into big effects on outcomes that are important to patients,” noted Oxman, a senior researcher at the Norwegian Knowledge Center for the Health Services in Oslo, Norway.
Ioannidis and his colleagues wanted to determine how often clinical trials reported such large benefits, and whether those benefits persisted when additional research was done.
To do this, they went through 3,545 systematic reviews. A systematic review collects and critically evaluates all of the currently available studies on a given topic. Of these reviews, 3,082 contained data that met the researchers' criteria.
From those reviews, they examined more than 85,000 “forest plots”: graphs that display the main result of each study and make it relatively easy to compare the strength of the evidence for a specific treatment or intervention.
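To make the idea concrete, here is a minimal sketch in Python (using matplotlib) of how a forest plot is typically drawn. The study names, effect sizes and confidence intervals below are hypothetical illustrations, not data from the JAMA study.

```python
# A minimal forest-plot sketch. All numbers are made up for illustration;
# they are NOT results from the study discussed in this article.
import matplotlib.pyplot as plt

# Hypothetical studies: (name, odds ratio, lower CI bound, upper CI bound).
# An odds ratio below 1.0 means the treatment looks beneficial.
studies = [
    ("First small trial", 0.35, 0.15, 0.80),  # large apparent benefit
    ("Follow-up trial A", 0.80, 0.60, 1.05),  # more modest effect
    ("Follow-up trial B", 0.90, 0.75, 1.10),
    ("Pooled estimate",   0.85, 0.72, 1.00),
]

fig, ax = plt.subplots(figsize=(6, 3))
for i, (name, ratio, low, high) in enumerate(studies):
    y = len(studies) - i                       # plot first study at the top
    ax.plot([low, high], [y, y], color="black")  # confidence interval
    ax.plot(ratio, y, "s", color="black")        # point estimate (square)
ax.axvline(1.0, linestyle="--", color="gray")    # line of no effect
ax.set_yticks(range(1, len(studies) + 1))
ax.set_yticklabels([s[0] for s in reversed(studies)])
ax.set_xscale("log")
ax.set_xlabel("Odds ratio (log scale)")
ax.set_title("Forest plot (illustrative data)")
plt.tight_layout()
plt.show()
```

In a plot like this, a reader can see at a glance how a dramatic early result compares with later, larger trials clustered near the line of no effect.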
Just under 10 percent of the reviews showed a large benefit in the first published trial, while another 6 percent had a study that showed a large benefit after the first trial was published. The majority of reviews (84 percent), however, had no studies that showed a large benefit for any treatment or intervention, the investigators found.
When follow-up studies were done, 90 percent of the large effects seen in first trials shrank to more modest ones. Large benefits that first appeared in later trials held up even less often: 98 percent were not maintained in subsequent research.
Out of all the reviews, only one intervention, extracorporeal membrane oxygenation (ECMO) for severe respiratory failure in newborns, showed a large beneficial effect on mortality without any concerns about the quality of the evidence in the studies, the researchers said.
“I think some healthy skepticism and a conservative approach may be warranted if only a single study is available — even more so if that study is small and/or had obvious problems and biases,” said Ioannidis. “Most of the time, waiting for some better, larger, more definitive evidence is a good idea. No need to rush.”
Oxman added, “I suspect that many patients tend to think that an intervention either works, or does not work, without fully considering the size of the effect and potential adverse effects.”
Ioannidis and Oxman suggested that increased health and statistical literacy would help consumers make more informed choices. Oxman added that a simple drug “fact box” explaining benefits and risks could also help.
More information
Learn more about systematic reviews from The Cochrane Collaboration.