NPSF News & Press: Industry News

Looking for Safety in Numbers

Thursday, April 10, 2014
Posted by: Mark Alpert
As part of a five-year project funded by a grant from the Agency for Healthcare Research and Quality, researchers at the Center for Education and Research on Therapeutics at Northwestern University are advancing statistical methods to better analyze observational data on drug safety.

Randomized controlled trials have long been regarded as the ultimate test of the safety and efficacy of a new medication. With groups of patients carefully selected to be free of confounding conditions, a robust study design, and sound randomization, the results of such trials usually admit a clear interpretation: the medication is effective or it is not; it causes serious side effects, or it does not.

Once a new drug makes it to the market, however, the situation is much less controlled. In reality, the patients most likely to receive a prescription for the drug may be those who are most likely to have confounding conditions. Or a side effect that was rare in a controlled trial of 1,000 patients becomes significant when 200,000 people start taking the medication. Only then can a complete picture of the overall risks and benefits of the drug come into focus.

Today, in the era of electronic medical records and the ability to create vast databases of information, researchers are advancing the statistical methods needed to make inferences and connections from observed events in large populations.

“The need for large-scale, observational data is really important, both for the purpose of generalizing for the ultimate people who might take a particular drug in routine practice, and also for having samples that are large enough to convey the presence or absence of rare adverse events,” says Robert Gibbons, PhD, a professor of biostatistics at the University of Chicago who has long worked on investigations of drug safety.

Dr. Gibbons is part of the team of researchers at the Center for Education and Research on Therapeutics (CERT) at Northwestern University that is designing statistical tools for analyzing observational data. Led by principal investigator Bruce Lambert, PhD, the Northwestern CERT—called TOP-MEDS, or Tools for Optimizing Medication Safety—is one of six such centers currently funded by the Agency for Healthcare Research and Quality (AHRQ) and the only one focused primarily on medication safety.

The Evolution of Observational Data

Twenty years ago, the Food and Drug Administration created the MedWatch program, a system by which health professionals and patients can report adverse effects of drugs. While it serves an important function in sharing alerts about medications on the open market, FDA MedWatch relies on voluntary, spontaneous reports, so drawing unbiased conclusions from such information can be a challenge.

In 2006, the Institute of Medicine issued a report on drug safety that, in part, recommended the FDA find ways to utilize large health databases in support of public and private research about the safety of medical products, including medications and devices. One result has been the Sentinel Initiative, a long-term, multistage project with the goal of creating a large, national system for monitoring medical product safety.

“Much of the approach taken in the Sentinel Initiative has been to try to carve out something that looks like a randomized controlled trial from observational data, by matching on a large number of observable kinds of potential confounders—those people who took a particular drug versus those who took another drug. The approach is certainly a valid one, and far better than what has been done historically,” Dr. Gibbons says.
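
The matching Dr. Gibbons describes can be illustrated with a minimal sketch. The records and covariates below (age band, sex, diabetes status) are hypothetical, and real systems such as Sentinel's typically match on propensity scores built from hundreds of covariates rather than exact matching on a few:

```python
from collections import defaultdict

# Hypothetical records: (patient_id, drug, age_band, sex, has_diabetes)
patients = [
    (1, "A", "60-69", "F", True),
    (2, "A", "40-49", "M", False),
    (3, "B", "60-69", "F", True),
    (4, "B", "40-49", "M", False),
    (5, "B", "70-79", "F", True),
]

# Index comparator (Drug B) patients by their confounder profile...
pool = defaultdict(list)
for pid, drug, age, sex, dm in patients:
    if drug == "B":
        pool[(age, sex, dm)].append(pid)

# ...then pair each Drug A patient with an unused comparator
# who shares the exact same profile.
matches = []
for pid, drug, age, sex, dm in patients:
    if drug == "A" and pool[(age, sex, dm)]:
        matches.append((pid, pool[(age, sex, dm)].pop(0)))

print(matches)  # each pair shares age band, sex, and diabetes status
```

Each matched pair then resembles one "randomized" comparison on the measured confounders; unmeasured confounders, of course, remain unbalanced.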

These kinds of projects are possible because of the explosion of so-called big data—reported observations on hundreds of millions of patients via insurance claims, electronic medical records, the veterans’ health system, and other databases that de-identify the patient’s personal information but retain the clinical parameters.

Among the challenges of using such vast amounts of data is the danger that a mistake anywhere along the chain of analysis could yield a completely wrong conclusion. “One of the cautions is that as these larger databases become available, it is very, very easy to do the wrong thing,” says Dr. Gibbons. “Certainly researchers at the Sentinel Initiative and our group and others are trying to come up with methodology—statistical and experimental design methodology—that helps us do the right thing when we analyze observational data.”

The Lambert-Gibbons collaboration has already produced a software platform called DRUGStat, which can examine a very large number of drugs and a very large number of adverse effects at the same time. A large institution doing surveillance for drug safety, such as a health system or insurer, will be able to use the software to screen for unexpected events. For example, DRUGStat allows comparison of the rate of liver failure for Drug A against the usual rate for the aggregate of other drugs in its class in a given population of patients. The software can determine, in statistically valid terms, whether Drug A's rate of liver failure is higher than would be expected if it were the same as that of the other drugs.
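
A screen of this kind can be sketched as a one-sided Poisson comparison: given the aggregate event rate for the rest of the class, how surprising is Drug A's observed count? The counts below are made up, and this is only an illustration of the general idea, not DRUGStat's actual algorithm; a real screen across many drug-event pairs must also adjust for multiple comparisons:

```python
import math

def poisson_upper_tail(k, mu):
    """P(X >= k) for X ~ Poisson(mu), via the complement of the pmf sum over 0..k-1."""
    cdf = sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def screen_drug(events_a, person_years_a, events_ref, person_years_ref):
    """Flag Drug A if its event count is improbably high under the
    reference (class-aggregate) rate."""
    ref_rate = events_ref / person_years_ref      # events per person-year
    expected = ref_rate * person_years_a          # expected count for Drug A
    p = poisson_upper_tail(events_a, expected)    # one-sided p-value
    return expected, p

# Hypothetical counts: 18 liver-failure events in 40,000 person-years on
# Drug A, versus 120 events in 600,000 person-years on the rest of the class.
expected, p = screen_drug(18, 40_000, 120, 600_000)
print(f"expected {expected:.1f} events, p = {p:.4f}")
```

A small p-value here is a signal for further review, not proof of causation, which is exactly the caution Dr. Lambert raises below.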

“Because the software uses observational data, we wouldn’t immediately conclude that this drug is causing liver failure,” Dr. Lambert explains, “but we would be suspicious that this is something that needed to be looked at.

“You can’t reverse-engineer and create randomization where there wasn’t any,” he adds. “But you can use a variety of statistical techniques to try to simulate randomization in some way, or at least control for all of the ways in which these patients are different from one another and try to equalize those differences.”

The Dilemma with Data

The most valid data typically come from randomized clinical trials, because every patient has an equal probability of receiving the active treatment or being in the control group. As a result, the treatment and control groups are generally balanced on potential confounders, whether those confounders are measured or not. In other words, randomization minimizes the likelihood that something other than the drug being studied is the cause of the result; when conducted properly, it equalizes all other factors that could contribute to risk or benefit.

This is not true of observational studies, where personal characteristics, such as severity of illness, may influence both the choice of therapy and the outcome. A drug reserved for the most severely ill patients may therefore appear to have poorer outcomes or a greater safety risk. This is called "confounding by indication," and it limits the inferences that can be drawn from observational data. Randomization eliminates the possibility of this confounding.

Yet the problem with randomized trials is that their external validity is limited by the fact that such studies are typically conducted on highly selected samples, which may have little resemblance to the patients who will ultimately receive the treatment in the real world. In the real world, patients tend to be older, sicker, take more medicines, and have more complicated health status than do those enrolled in clinical trials. They may be receiving more fragmented care. “People who get the medicine are more likely to suffer all sorts of bad outcomes, because they are sicker and more vulnerable to begin with, and that contaminates all observational data,” Dr. Lambert says.

The TOP-MEDS CERT project is developing statistical methods that can tease out the dynamic selection process that leads a particular individual to be prescribed a medication, fill the prescription, and take the medication at one point in time versus another, and that can assess the differential risks to that patient. "In essence, our project is manipulating the data to use each individual as his or her own 'control,'" explains Dr. Gibbons.
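
The idea of using each person as his or her own control can be sketched with a toy within-person comparison in the spirit of self-controlled designs. The follow-up data and the simple pooled rate ratio below are illustrative assumptions, not the project's actual methodology:

```python
# Hypothetical per-patient follow-up, split into periods on the drug
# ("exposed") and off the drug ("unexposed"):
# (exposed_days, exposed_events, unexposed_days, unexposed_events)
follow_up = [
    (120, 1, 240, 0),
    (90,  0, 300, 1),
    (200, 2, 100, 0),
]

exp_days = sum(r[0] for r in follow_up)
exp_events = sum(r[1] for r in follow_up)
unexp_days = sum(r[2] for r in follow_up)
unexp_events = sum(r[3] for r in follow_up)

# Within-person rate ratio: every patient contributes both exposed and
# unexposed time, so stable traits (age, baseline severity) cancel out.
rate_ratio = (exp_events / exp_days) / (unexp_events / unexp_days)
print(f"rate ratio = {rate_ratio:.2f}")
```

Because the comparison is made within the same individuals, confounding by stable, between-patient differences is removed by design; time-varying confounders still require the more careful modeling the project is developing.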

Using large databases from the Veterans Health Administration, the TOP-MEDS project seeks to glean the most valid conclusions from analysis of observational data. “The net result is that we will be able to better understand the extent to which the results from pre-approval randomized clinical trials generalize to the ultimate population of users of the medication or device,” says Dr. Gibbons. “In those cases in which the adverse event of concern is too rare to be studied in a randomized clinical trial, this work will help to instill confidence in the conclusions that are drawn from large-scale observational studies, in and of themselves. Raising the bar on what can be learned from analysis of the experiences of millions of individual patients can lead to important advances in the protection of our nation’s public health and the better understanding of the safety and efficacy of therapeutics.”

Because the work is funded by AHRQ, the resulting software will be available free of charge to health systems, researchers, and others seeking to monitor drug safety through the use of observational data.

Visit the AHRQ website to learn more about the CERT program.
