Commissioner champions randomized clinical practice studies as the best source of real world evidence to support benefit/risk decisions, while also seeing value in other types of data generated outside the traditional clinical trial setting.
Randomized clinical trials have long been extolled as the gold standard for establishing drug efficacy and safety. However, FDA Commissioner Robert Califf has his sights set on increased use of another type of research for informing regulatory decision-making: randomized clinical practice studies.
Studies in which patients are randomized within a clinical practice setting provide the best opportunity for using “real world evidence” to assess a drug’s benefit/risk profile, the commissioner suggested June 16.
“Many of us within the FDA believe the most useful source of knowledge will come from randomization in the context of clinical practice,” Califf said at a conference on real world evidence sponsored by Friends of Cancer Research and Alexandria Summit.
“For causal inferences about the risks and benefit of medical products, most of us believe that completely unplanned use of clinical data … collected with no plan for measuring critical elements or defining an inception time for the observation, is the least useful approach for drawing an inference about the risk and benefit of a medical product,” Califf said, describing the prevailing view in the agency.
However, “these types of analyses have many other useful purposes, including the detection of adverse event clusters and developing hypotheses, and in addition developing a much deeper understanding of outcomes related to the delivery of health care.”
“I’m in no way saying this sort of analytics on extant data is a bad thing,” Califf continued. “I’m just saying let’s use it for the right purposes.”
Califf made clear that capitalizing on evidence generated outside of the traditional clinical trial setting is one of his key priorities as commissioner.
“In general, it’s time to take advantage of the change in our clinical data environment to create efficiencies by streamlining and reducing the costs of medical product development without sacrificing established standards of evidence,” he said.
Califf’s embrace of real world data to inform and support regulatory decision-making should be viewed as a positive by a biopharmaceutical industry looking for alternatives to traditional randomized clinical trials that are costly, lengthy and sometimes infeasible.
Nevertheless, the commissioner’s emphasis on randomization as an integral means of guiding benefit/risk assessments may suggest a higher standard for real world data than some in industry would prefer.
Unsustainable, Undesirable Clinical Trial System
Real world evidence is a term increasingly used to describe evidence generated from data collected outside the traditional clinical trial setting. This includes so-called pragmatic clinical trials, which incorporate randomization in routine clinical care settings and rely on data collected during patient care. Other types of real world data include electronic health records for patients treated on- or off-label, observational studies, patient registries, administrative claims data, patient surveys, and mobile health-generated data, such as through smartphones and wearable devices.
Real world evidence “is thought to better reflect the general population and the care they receive, given that enrollment in clinical trials is often limited to patients with specific baseline characteristics, with often restricted eligibility,” an issue brief for the June 16 meeting states.
Industry and patient advocates see utility for such evidence to speed medical product development, particularly for new indications of already approved products. However, there are a number of hurdles to greater development and use of real world evidence, including what some stakeholders see as a lack of FDA clarity around its expectations for such data (“Industry Need FDA ‘Engaged’ Before Investing In Observational Studies” — “The Pink Sheet,” Mar. 14, 2016). There also are significant technical challenges related to data quality and infrastructure capacity that must be addressed (“Real-World Evidence: Efficacy Assessments Await FDA Clarity, Pilot Projects” — “The Pink Sheet,” Mar. 14, 2016).
Califf, a cardiologist and former clinical trialist at Duke University, has long advocated for changes in how clinical trials are conducted and evidence is generated to guide the use of medical products. In a 2010 speech in which he cited the high cost, long duration and enrollment constraints associated with randomized controlled trials, Califf went so far as to say “randomized trials are dead in the US, or at least they are dinosaurs” (“New Trial Designs Needed To Get Most Out Of CER, Duke’s Califf Says” — “The Pink Sheet,” Apr. 19, 2010).
He reiterated some of these concerns at the real world evidence conference. Citing an economic analysis on the growing costs of traditional clinical trials, Califf said, “This is not a sustainable system, and I think given the changes we have in our clinical data environment it’s not even a desirable system.”
Traditional trials are “very useful for Phase I and Phase II and a lot of Phase III, but there is a lot of Phase III and beyond that really should be done in an entirely different way,” he said. “We can dramatically increase the amount of useful evidence to guide practice and inform the optimal use of medical products by providing evidence on product benefit and risk in a real world setting with a wider range of patients than is typical for current clinical trials.”
Complementary, Not Polar Opposites
Califf made clear he disagreed with the perception that randomized controlled trials and real world evidence are polar opposites. Rather, they should be viewed as complementary, he said.
“In my view, it’s a major error to pose real world evidence and randomized controlled trials as opposing concepts,” Califf said. “Instead they should be viewed as part of two very different dimensions, both of which have to be considered.”
The first dimension is the source and quality of the data. “This could range from information collected in the context of clinical practice or from a personal device in a person’s home, to rarified data from whole genome sequencing or a specialized research-only clinic,” Califf said, noting “there’s a broad spectrum there.”
The second dimension is the method of learning, or research. “The method could range from individual patient randomization to cluster randomization and other interesting approaches like basket trials … being championed in oncology, to prospective registries and finally unplanned analyses of data collected for other purposes,” Califf said. “If you’re positive you call these analytics. If you’re negative you call this data dredging.”
By drawing a line between randomized controlled trials and real world data, “I think we’re missing a point,” Califf said. “All of it has multiple uses. The key is to match the data type and the method to the purpose of the learning activity.”
He cautioned about the need to ensure that real world data are being relied upon in the appropriate circumstances. “The mistake we want to avoid is to say that we can infer causality when the methods really don’t support it, and where we’re really using it to say we don’t want to have the discipline to do the right study to answer the critical question.”
Califf expanded on his thoughts while speaking to reporters after his speech:
“I just want to reiterate again – we think that the place where people need to go is randomization within real world practice,” Califf said. “That’s … where the world needs to head to sort out when you want to make a causal inference.”
“Unplanned analytics of a bunch of data not collected for any particular reason will have amazing value for a lot of things, but not causal inference about whether a drug or device works,” he said. “That, for the most part, will require either randomization or a carefully planned prospective study. But the source of the data can still be real world data. It’s two different things.”