Thursday, September 18, 2014

From randomized trials to the real world


By Urmimala Sarkar, MD, MPH

Often, at scientific conferences, the most important learning happens in the question-and-answer period. I spoke at the American Diabetes Association conference earlier this year, presenting results of an observational study we did on medication adherence in diabetes. We found that when people started using the online patient portal (sometimes called a personal health record) to order their medication refills, they were more likely to take their medications regularly. Dr. Katherine Newton of Group Health Research Institute spoke before me, describing a randomized study showing that a clinical pharmacist-led blood pressure management program did not lower blood pressure any more than usual care from an outpatient provider.

The first audience comment came from a program officer at the National Heart, Lung, and Blood Institute, part of the National Institutes of Health. Program officers are incredibly important because they help set the research priorities for the major funding source for medical research. I will never forget her comment because it was so strongly worded. She said (close to, but not exactly, a direct quote): “Having listened to your talk and the one before it, I am more convinced than ever that we should focus on randomized trials,” implying that the negative results of the randomized trial were more believable than the positive results of our observational study.
This worship of randomized trials at the expense of other forms of study is understandable. If we randomly assign people to one treatment or another, we can ensure that the differences we see between the two groups are really because of the treatment and not some other factor. After all, researchers, myself included, spend a tremendous amount of time obsessing over our methods. We go to extreme lengths to make sure that we correctly interpret the data before us. Our holy grail is “causal inference,” in which we can be sure that whatever risk factor or treatment we study truly causes the health outcome we are interested in. Randomization is the best way to ensure that you’re not unwittingly attributing your effect to the wrong cause.

So why did I think this comment was off-base? First, you cannot always randomize people to one treatment or another. In the case of my study, the online patient portal was available to everyone as part of the health system. When healthcare systems change or offer a new service, it is important to quantify the benefit, and government and accreditation mandates often make randomization impossible. So we use our methodological skills to approximate cause and effect, choosing the population under study carefully and adjusting for all the factors we can think of and measure that might affect the outcome. Is it perfect? Nope. But it’s better than not understanding the health effects of changes in care delivery.
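To make the adjustment idea concrete, here is a minimal sketch in Python with simulated data; the variable names (engagement, portal_use, adherence) are invented for illustration, and this is not our study’s actual analysis. A hidden trait drives both portal use and adherence, so a naive comparison of users and non-users overstates the effect, while adjusting for the measured confounder recovers an estimate close to the truth:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

# A hidden trait ("engagement") makes patients both more likely to use the
# portal and more likely to take their medications.
engagement = rng.normal(size=n)
portal_use = (engagement + rng.normal(size=n) > 0).astype(float)
adherence = 0.2 * portal_use + 0.5 * engagement + rng.normal(size=n)  # true effect = 0.2

# Naive comparison of users vs. non-users is inflated by confounding.
naive = adherence[portal_use == 1].mean() - adherence[portal_use == 0].mean()

# Adjusting for the measured confounder recovers an estimate near 0.2.
X = sm.add_constant(np.column_stack([portal_use, engagement]))
adjusted = sm.OLS(adherence, X).fit().params[1]

print(f"naive difference: {naive:.2f}, adjusted estimate: {adjusted:.2f}")
```

Of course, this only works for confounders we can think of and measure, which is exactly the limitation randomization avoids.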

A second reason to think beyond trials is that they are designed to answer a single question in a narrow group of people. Enrolling in a trial means meeting strict criteria about what other medical conditions, medications, treatments, and history you have. In the real world, patients are people, not single-disease entities. As a primary care physician caring for medically complex, low-income, ethnically diverse patients, I often struggle with how to apply results from trials to my own practice. It bears repeating: studying real-world populations is critical to improving health in real-world populations.

If that weren’t enough, trials are expensive and time-consuming. Trial researchers need to enroll a lot of people to detect significant differences in outcomes, and the smaller the difference, the bigger the trial must be. Yet when policymakers and public health leaders are making decisions for an entire population, small differences matter. Our study showed a 6% difference in the proportion of patients who took their cholesterol medication regularly. That’s not a huge number, but across an entire population of patients with diabetes, modestly better cholesterol control is important. It would be a tall order to fund a trial large enough to detect such a difference.
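For a back-of-the-envelope sense of scale, here is a standard two-proportion power calculation; the 50% baseline adherence rate is an assumption I am making for the example, not a number from our study:

```python
# Rough sample-size sketch for detecting a 6-point difference in adherence
# (assumed 50% vs 56%; the baseline rate is illustrative, not from the study).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.56, 0.50)  # Cohen's h for 56% vs 50%
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"about {n_per_arm:.0f} patients per arm")  # roughly 540 per arm
```

That is over a thousand patients in total, before any allowance for dropout, just to detect a 6-point difference in a single process measure.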

Finally, many approaches that work in randomized trials don’t end up helping in real life. A study earlier this year found no benefit from implementing surgical checklists in Canada, even though the same checklist had produced powerful results in other settings. No randomized trial is going to explain that contradiction! We need additional methods, collectively known as implementation science, to understand not only what works but how it can be applied, implemented, and spread, so that new treatments and approaches translate into health benefits for all. In our study, perhaps there was something unique about the portal users who used the online refill function for their medications; understanding that, rather than designing new randomized trials of new interventions, may be well worth our time.

Let’s end the tyranny of the randomized trial and advocate for good data and rigorous methods in every aspect of health care delivery. My patients, and all patients, deserve better.


2 comments:

  1. I would like to second your thoughtful and important statement! When it comes to evidence-based clinical decisions, observational evidence has historically been treated as a second-class citizen (at best) and often ignored. We need to acknowledge, however, that much of the published observational research is poorly designed to support causal inferences (e.g., cross-sectional studies), and its inclusion may have given observational research a lousy reputation among policy and guideline makers. However, the ability to make valid causal inferences in observational research has advanced substantially. There are now several rigorous causal methods, such as difference-in-differences models (e.g., the one used in your recent paper: Use of the Refill Function Through an Online Patient Portal Is Associated With Improved Adherence to Statins in an Integrated Health System. Medical Care 2014 Mar;52(3):194-201), marginal structural models (MSM), instrumental variables, and directed acyclic graph-guided model specification. Systematic reviews should consider evidence based on these more rigorous approaches as a separate, special class, rather than pooling their evidence with causally inferior observational methods. Most guidelines committees continue to base decisions primarily on RCTs (considered strong evidence), despite RCTs often not being predictive of real-world effectiveness. Instead, I believe that (when possible) we should base policy decisions on evidence from both experimental and observational studies; they have complementary strengths and weaknesses (e.g., internal vs. external validity). In many cases, RCTs will never be performed (e.g., for questions of addressing health disparities), and in such instances we will need to inform policy and guideline decisions with rigorous observational research alone. (Andrew Karter, PhD, Division of Research, Kaiser Permanente)

    1. Urmimala: Thank you for this insightful comment, Andy. I absolutely agree that we have newer, better observational data and methods, and that we should treat them separately from their less rigorous predecessors.
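
       For readers curious what a difference-in-differences analysis looks like in practice, here is a minimal sketch with simulated data; the column names (treated, post, adherent_share) are hypothetical, and this is not the paper’s actual analysis. The idea is to compare the change over time in the treated group against the change in a control group, so a fixed baseline gap and a shared time trend both cancel out, and the interaction term is the effect estimate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000

df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # 1 = eventually used the refill function
    "post":    rng.integers(0, 2, n),  # 1 = period after portal adoption
})
# Outcome: baseline gap + shared time trend + a true 0.06 treatment effect.
df["adherent_share"] = (
    0.50 + 0.05 * df["treated"] + 0.03 * df["post"]
    + 0.06 * df["treated"] * df["post"] + rng.normal(0, 0.05, n)
)

# The coefficient on treated:post is the difference-in-differences estimate;
# it nets out both the baseline gap and the shared trend.
fit = smf.ols("adherent_share ~ treated * post", data=df).fit()
print(fit.params["treated:post"])  # close to the true 0.06
```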
