The lifecycle of patient care innovations often begins with clinicians making practical observations at the bedside, reflecting on physiologic principles, and asking, “How could we do this better?” The most promising ideas emerge from this real-world environment and travel through a process intended to refine the method and confirm its utility: retrospective analyses to look for a signal of effect, pilot studies to standardize the intervention and evaluate its feasibility, and then larger investigations structured to maximize statistical power and internal validity through techniques such as blinding and randomization. In recent years, considerable attention has focused on the vexing problem of irreproducibility in scientific research across the spectrum of life sciences, from basic cancer research to clinical trials.1,2 Consequently, definitive studies typically enroll a highly selective study population and use rigid study protocols. Each step along the path to the accepted standard, the double-blind, placebo-controlled, multi-center trial, adjusts the patient population and intervention such that positive trial results are often based on data derived from a relatively artificial context.
What is seldom appreciated is that a completed multi-center trial is not the end point of the journey that translates ideas into practice, but a midpoint. Crucial questions remain. First, could this intervention be applied to other defined, related conditions or populations, a question known as the external validity of the study results?3 Research in the critical care environment is particularly challenging because of the great heterogeneity of index diseases and comorbidities among subjects, so a related question of even greater uncertainty arises: Do the results of this study generalize to the patients I care for in the medical system in which I work? In our field, it would be helpful to borrow a concept from the psychology and sociology literature, ecological validity, which refers to the generalizability of an intervention or effect observed in a study to the real world, devoid of the distortions and biases of clinical trials: usual practice environments, with practitioners of typical skills and training, hospitals with typical resources and staffing, and without confounders such as selection bias and the Hawthorne effect.
Although the term ecological validity may be unfamiliar to some respiratory care providers, relevant examples are not. One well-known illustration is the research on resuscitation in septic shock, in which the findings of an initial landmark trial failed replication in multiple subsequent confirmatory trials.4–7 Evidence-based interventions of great interest to readers of this journal are the practices of minimizing sedative exposure and performing daily spontaneous breathing trials to optimize liberation from mechanical ventilation.8–11 In this issue of Respiratory Care, Kallet and colleagues12 provide crucial data to confirm the ecological validity of these related practices. Their institution responded to the emerging data on sedation use and spontaneous breathing trials by implementing protocols to target light sedation and enact daily spontaneous breathing trials across multiple patient populations in 2 different ICUs. Using a registry of subjects treated for ARDS, a particularly morbid and challenging subpopulation of ventilated patients, they compared outcomes between the pre-implementation and post-implementation periods. They observed impressive improvements: reductions in the median duration of mechanical ventilation from 14 d to 9 d and in ICU length of stay from 18 d to 13 d (both P < .001), differences that persisted even after adjustment for potential confounders.
Greater attention and resources should be devoted to studies, such as this one, that confirm or refute the applicability of clinical trial findings in commonplace practice. Implementation of basic guideline recommendations for ARDS care remains poor despite positive physician attitudes toward their utility.13,14 Implementing new protocols is a cumbersome and costly process, requiring educational initiatives, shifts in ingrained cultures of practice, staffing demands, capital investments, and quality-assurance feedback. Positive results of major clinical trials offer a value proposition; ecological validity studies close the loop on the evidence-based medicine pathway and bring innovative ideas back to the bedside where they began.
Footnotes
- Correspondence: Dr. Matthew Maas, Department of Neurology, 710 N Lake Shore Drive, 11th Floor, Chicago, IL 60611. E-mail: mbmaas@northwestern.edu.
The author has disclosed a relationship with the National Institutes of Health (grant K23NS092975).
The content of this work is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health.
See the Original Study on Page 1
- Copyright © 2018 by Daedalus Enterprises