By GREGORY TWACHTMAN
With members of the Patient-Centered Outcomes Research Institute’s Methodology Committee expected to be named by early 2011,
one area where they could begin improving comparative effectiveness research is providing solid guidance for conducting observational studies.
“We need more guidance about when observational research is adequate for the task,” said Frank Harrell, chair of biostatistics at Vanderbilt University School of Medicine.
Speaking Dec. 2 at the “Methodological Challenges in Comparative Effectiveness Research” conference hosted by the National Institutes of Health and the Agency for Healthcare Research and Quality, Harrell said changes need to be made for observational research to gain the respect that experimental research receives.
Harrell’s call for guidance is not the first. The Institute of Medicine, in making its recommendations for how to allocate the $1.1 billion for comparative effectiveness research under the American Recovery and Reinvestment Act, said that funds should go toward developing methodological guidance for CER study design, such as the appropriate use of observational data (“Broad Recommendations For Comparative Effectiveness Research,” “The Pink Sheet,” July 6, 2009).
Such guidance would be timely, as CER, at least at the federal level, is predicted to rely more on observational studies, systematic reviews, database studies and other broad types of analysis than on head-to-head trials (“CER Policy Does Not Equate To Head-To-Head Trials, UBC’s Luce Says,” “The Pink Sheet,” July 5, 2010).
Current State Of Observational Research “Not Very Good”
The bad news is “the current state of observational research is not very good,” Harrell stated. He pointed to biomarker research and nutritional epidemiology as the areas where observational research is weakest, but in general, “the majority of it is just wrong or not reproducible or very commonly comes up with estimates … of risk factors or estimates of treatment effects that are overstated. So we need to work against that.”
Additionally, Harrell said there is a high cost that comes with observational studies, particularly on the intellectual side.
“There’s often endless debate after an observational treatment comparison is reported,” Harrell said. “There’s a cost to that endless debate. There are opportunity costs and there are real costs of time spent.”
Cambridge, Mass.-based Outcome Sciences – a provider of patient registries, technologies and studies to evaluate real-world outcomes, with seed funding from the National Pharmaceutical Council – released in April a framework for evaluating observational CER studies known as GRACE (Good Research for Comparative Effectiveness) (“PCORI Should Take Lead On Public CER Inventory, Pharma Groups Tell HHS,” “The Pink Sheet,” Aug. 23, 2010). Health insurer WellPoint also released guidelines on how it will evaluate CER, including observational studies (“WellPoint’s CER Guide Describes How It Will Determine Usefulness Of Studies,” “The Pink Sheet,” May 24, 2010).
Getting Respect For Observational Studies
One way for observational studies to gain the same kind of clout as randomized controlled trials is to conduct observational studies more like RCTs, Harrell stated.
“If observational researchers want to get respected the same way experimental researchers are respected, they need to act as experimentalists act,” Harrell said. “They need to make … every aspect of their study to be conducted rigorously so the only difference is you don’t have randomization.”
In conducting observational research, bias must be actively addressed so that it does not distort the findings. “Researchers need to be objective and avoid confirmation bias,” Harrell said. “Confirmation bias is one of our biggest enemies, or analyzing to a foregone conclusion.”
He suggested masking outcomes data while analysis is ongoing, to approximate the blinding used in RCTs. Harrell also stressed the importance of pre-filed statistical analysis plans.
“It’s very, very uncommon in observational research to actually have detailed analyses that are actually signed and dated,” he said. “This has to change and could change immediately. There is no reason not to put this into effect today. And exceptions to the analytical plan need to be documented and justified and dated so that we can have a time-tracing of exactly what happened when.”