The supercar stays in the garage: Why advanced methods for comparing new treatments are used infrequently, and what regulators should do about it

By Dan Ollendorf, PhD, Director, Value Measurement & Global Health Initiatives

As a clinical epidemiologist and systematic reviewer, I get excited when new methods emerge or evolve that allow comparisons of the benefits and risks of competing technologies (OK, I’m a major nerd).  It is also exciting to see innovative new medicines with breakthrough potential introduced not individually, but in twos and threes.  Recent examples include drugs for hereditary amyloidosis, migraine prevention, and even sickle cell disease.  Advanced techniques for conducting indirect comparisons, applied to a landscape of multiple new and competing medicines, should be a match made in heaven. 

Not so fast!  Network meta-analysis and other forms of indirect comparison are only as good as the data that feed them.  Populations, outcome measures, and other aspects of the relevant clinical trials have to be comparable.  My Tufts CEVR colleagues and I, along with Huseyin Naci at the London School of Economics and Political Science, were interested in understanding how often indirect comparisons of these new medicines were feasible, what prevented their conduct, and whether any avenue existed to improve analyses.  We focused on assessments produced by the Institute for Clinical and Economic Review (ICER) because their methods for indirect comparison (or the rationale for omitting such an analysis) are publicly reported. 

The results, described in our recent research letter, were surprising: of the 80 medicines we identified as candidates for indirect comparison, these techniques could not be used for more than half.  The most common reasons were small but important differences in enrolled populations, as well as differences in how and when key outcomes were measured. 

We also sought to understand, for the medicines that could not be indirectly compared, how frequently manufacturers had sought early scientific advice from the Food & Drug Administration and/or the European Medicines Agency.  The answer was equally surprising—two-thirds of the time!  Regulators, therefore, seem to have missed a golden opportunity to set trial design and measurement standards for drugs early in clinical development.  Researchers will continue to be frustrated by an inability to conduct true comparative effectiveness research.  And patients, who deserve to know with as much precision as possible how the benefits and risks of new alternatives compare, suffer the consequences.  It’s time for regulators to step up and provide clear guidance for manufacturers—or, if they have already done so, to insist that manufacturers abide by it.
