Abstract
This paper will present a focused and personal history of physiologic monitoring, beginning with the discovery of modern anesthesia and its development from a technical practice to a scientific discipline. Emphasis will be on the essential role of monitoring in the evolution of anesthesia, and this work will attempt to answer the question of how to evaluate the impact of monitoring on patient outcome. Understanding that monitors are passive and that only caregivers using monitors can impact outcome is at the crux of this approach to analysis. The limited quality data involving monitoring analysis, including that from pulse oximetry, will be discussed and critiqued. The invention and rapid spread of pulse oximetry will be highlighted and used as an example throughout, but the principles developed will apply to other monitors and patient monitoring in general. The problems created by monitoring alarms will also be discussed.
- physiologic monitoring
- pulse oximetry
- intensive care unit
- alarms
- false alarms
- alarm fatigue
- patient outcome
- anesthesia
Introduction
The continued evolution of physiologic monitoring devices in recent years has raised the promise of better patient outcomes, especially in the care of the critically ill. Although the technical aspects of monitoring devices are improving and changing rapidly, very little information suggesting better patient outcomes from their use has been forthcoming.1 This is especially true of invasive monitors, such as the pulmonary artery catheter, the use of which is associated with no improvement and, in some studies, worse patient outcomes.2 Despite this lack of positive data, several organizations and institutions are pushing to deploy electronic physiologic monitoring to all hospital wards as part of a misplaced belief that this is a sure route to safety and better patient care.
In fact, even in ICUs, the frequency and variety of alarms triggered by multiple monitors cause sensory overload and desensitization of caregivers, leading to a decrease in safety and worse outcomes in vulnerable patients.3 The problem of alarm management has been widely recognized as a critical patient safety issue. Concern has led to a universal call for action to find a solution for alarm issues.4
This paper will present a focused and personal history of physiologic monitoring, beginning with the discovery of modern anesthesia and its development from a technical practice to a scientific discipline. Emphasis will be on the essence of monitoring, and this work will attempt to answer the question: Why is there so little quality evidence demonstrating a positive impact on patient outcomes from application of current monitors and monitoring standards? Development of new monitors and monitoring standards are unlikely to lead to demonstrably better patient outcomes; however, they may mitigate some of the current risks that have accumulated from dissemination of poorly integrated devices and lack of understanding of monitoring principles.
A Brief History of Anesthesia
Although sedative and analgesic biologics had been used for thousands of years, the discovery and application of the anesthetic effects of diethyl ether, chloroform, and nitrous oxide officially date from the middle of the 19th century (1842–1847). Together with increased understanding of human anatomy and physiology, these agents allowed rapid advancement in the complexity and success of surgical procedures. The use of these newly discovered anesthetics fostered the need for skilled anesthesia providers and enhanced patient monitoring.
Shortly after the first successful public demonstration of painless surgery in the United States, ether became the predominant agent used due to its simplicity of administration, preservation of airway-protective reflexes, stimulation of respiration, production of muscle relaxation, and predictable prolonged analgesia after surgery (if the patient survived!). The administration of ether was directed by the surgeon and often delivered by the “etherizer,” a nurse, technician, or medical student with little or no training or experience in delivering this potent drug. The only equipment needed was a cone or handkerchief placed over the nose and mouth of the patient and intermittently saturated with the volatile liquid ether. Monitoring of patients receiving ether for surgery consisted of the surgeon observing the depth and character of spontaneous breathing, color of the skin and blood, and, infrequently, palpation of a pulse. The surgeon was the decision maker for the choice and amount of the anesthetic administered. At the time, surgical mortality was high and included both the lethal effects of the anesthetic and the surgical procedure itself. In addition to its deleterious effects on human physiology, ether is highly flammable. The first recorded patient death from a fire during surgery with ether was reported in 1850.
The anesthetic potential of chloroform was soon identified, and it quickly became the most popular anesthetic agent in Europe. Chloroform is more potent than ether and safer in that it is not flammable. The use of and acclaim for chloroform during childbirth by Queen Victoria undoubtedly influenced its wide acceptance throughout the British Isles. The first death believed to be due to chloroform and not surgery was recorded in 1848, a year after its first demonstration in a human. Unlike ether, chloroform requires complex equipment and careful attention to dose for safe administration. In addition, chloroform is toxic to the liver and frequently causes cardiac arrhythmias.
Acceptance and evaluation of the risks of anesthesia apart from surgery came much later than discovery and use of anesthetic agents. It was not until the late 1890s that a suggestion was made that vital signs even be recorded during surgical procedures. In fact, no systematic written records were routinely used during anesthesia for nearly 100 years after its discovery. Monitoring of a patient during anesthesia consisted of watching the depth, frequency, and characteristics of breathing; assessing skeletal muscle tone; determining the frequency and characteristics of the peripheral pulse; and observing the color of the patient's skin. These observations were used to determine adequacy of the anesthetic depth for the surgery and, afterward, to determine whether the patient had survived the experience. Patient survival was assessed by the presence of spontaneous breathing and a pulse following surgery, and success of the surgery was declared based on survival at the end of the procedure. Long-term survival was not used as a marker of surgical success. At the time, there were no intravenous therapies (including fluids and blood transfusions), mechanical or manual ventilation techniques, or resuscitation standards or practices. If the heart stopped during surgery, open massage by the surgeon was attempted but usually failed to restore the dead patient. These intra-operative deaths were generally blamed on the surgery or the patient's disease; if anesthesia contributed, this was not separately noted.
Medical practice during the 1800s was one of making a diagnosis from simple clinical observations and predicting survival or death.5 Treatments were many but were not founded on scientific evidence and often not helpful. Physicians of the time, however, were careful clinical observers. By feeling the temperature and moisture of the skin, frequency and characteristics of the peripheral pulse, heart rate and characteristics of breathing, color of the skin, odor of breath and urine, and taste (yes taste!) of urine, the best physicians would make the correct diagnosis, determine a treatment plan (phlebotomy or patent medicine), and predict the outcome of seriously ill patients (eg, approximately when they would die). Patient monitoring during surgery consisted only of the eyes, ears, and hands of the person administering the anesthetic or performing the surgery. Identification of changes and trends in patient physiology noted by the senses could be used to titrate anesthetic drug administration and detect surgical complications. The only possible interventions when things were obviously deteriorating were to speed up the surgery (ie, stop the bleeding), stop the anesthetic administration, or both.
New techniques for observing and characterizing normal and abnormal physiology were developed in the years following the discovery of anesthesia. Whereas direct determination of arterial blood pressure was performed by Stephen Hales in 1733,6 noninvasive blood pressure measurement using the Riva-Rocci mercury sphygmomanometer was described in 1896.7 Blood pressure measurement during anesthesia was first advocated by Cushing in 1910 but was not routinely practiced until the mid-1900s.
Harvey Cushing, a pioneer of modern neurosurgery, was required while a medical student to administer ether for a patient of one of his surgical teachers. He had previously seen a patient die during surgery, possibly precipitated by inelegant administration of the anesthetic. As a result of the anxiety surrounding this experience, Cushing developed the anesthesia record, which initially included documenting the patient's vital signs of pulse and respiratory rate (1894). Later, he recommended adding blood pressure measurement as part of the anesthesia monitoring and recording process (1910).8,9 The rationale for creating and collecting a record of the patient's vital signs during surgery was his belief that anesthesia did contribute to mortality and that improvements in the conduct of anesthesia would only come from careful observation of vital signs and review of collective previous experiences of delivering anesthesia.
With surgical advances and an increased need for anesthetic and therapeutic skills, the emergence of dedicated individuals who delivered anesthesia on a regular basis led to the establishment of the profession of anesthesiology. Over the past 100 years, the mantra of professional anesthesia practitioners has been “Vigilance,” meaning continuous (one-on-one) attention to the patient receiving anesthesia. The first scientific anesthesia journal was established in 1922 in the United States (Anesthesia and Analgesia). Devices designed to extend the sensitivity and expand the range of human senses to detect normal and changing patient conditions were developed and applied in operating rooms and, later, in recovery areas and ICUs. The desire for improved physiologic monitoring using specially designed devices came from the collective anxiety experienced by those who administered the anesthesia for surgical care. From these beginnings emerged an understanding of the independent contribution of anesthesia to surgical patient mortality and a desire to improve the safety of anesthesia.
Advances to Enhance and Extend the Senses of Caregivers
Early patient monitors evolved in parallel with medical diagnostic equipment. These forward steps began with mechanical devices that enhanced human senses. In the early 1800s, the physical examination included percussion and “immediate auscultation” of the chest. Physicians placed their ear directly on the patient's chest to detect and characterize internal sounds to make cardiac and pulmonary diagnoses. Credit for inventing the first stethoscope is given to a French physician, René Laënnec, who first used a rolled sheet of thick paper, and later a wooden tube, to listen to the chest in 1816. Over the next several years, this simple chest auscultation device was improved to include amplification and binaural hearing, which enabled physicians to make better observations of normal and abnormal cardiac and lung function that were unapparent to the unaided ear.10 This improved the diagnostic abilities of physicians to predict the outcome of diseases and to prescribe treatments. Unfortunately, most therapies of that time were unlikely to alter disease progress, and the primary role of diagnosis was to predict when death was imminent.
Determination of the approach of death was aided by the use of these advances in diagnostic devices. Auscultation of a heartbeat could be detected for some time after palpable pulses were no longer apparent. However, cessation of breathing remained the cardinal sign of death, and apnea was the usual sign that death had occurred.11 Auscultation of the chest could detect faint breathing efforts that may not have been apparent to the unaided eye. Once the prognosis was confirmed as grim, family members stayed with the dying patient (death watch) and were the ones who identified the time of death when breathing stopped. Determining the occurrence of death was not considered the domain of physician practice at this time.
The human electrocardiogram was described by Einthoven in 1895 using a large string galvanometer. As electrical power was deployed during the 20th century, electronic devices were invented that could amplify and display the electrical activity of the heart. These were used to diagnose and treat cardiac abnormalities. By the middle of the 20th century, oscilloscopes able to display a single lead cardiac electrical complex were common in cardiac care areas and were occasionally used during cardiac operations. Concerns about the flammability of ether and the potential for explosion with cyclopropane (introduced into anesthesia clinical practice in 1934) discouraged deployment of electrical devices in operating rooms.
The first new successful, non-flammable inhalation anesthetic agent was halothane, a highly halogenated hydrocarbon, which was introduced into clinical use in 1956. At this time, patient monitoring during anesthesia was left to the person providing the anesthetic and had changed very little since the discovery of ether. Anesthesia was usually administered by a specially trained nurse and consisted of intermittently feeling the pulse; occasionally obtaining a manual blood pressure measurement; and observing muscle tone, skin color, respiratory characteristics, and breathing frequency. Endotracheal tubes were available but were infrequently used, and positive pressure manually assisted ventilation was used with or without intubation. Monitoring had changed little over the 100 years since anesthesia was discovered.
An extraordinary but simple advance in patient monitoring was the development and deployment of continuous auscultation of the chest using a precordial or esophageal stethoscope and a custom-fitted earpiece worn by the clinician providing anesthesia (Fig. 1). For the first time, this mechanical device allowed continuous monitoring of the anesthetized patient (Fig. 2). This device permitted immediate detection of cardiac rhythm changes, respiratory obstruction, or complete loss of a previously patent airway. Use of the precordial monitor allowed early detection and rapid intervention to prevent hypoxia and progression to cardiac arrest. The value of continuous precordial auditory monitoring was recognized first by anesthesia specialists caring for children, in whom airway issues are frequent, but its use spread to benefit patients of all ages.
Personal Travels in Anesthesia and Patient Monitoring
My exposure to continuous patient monitoring began in 1971 as a medical student. I chose to spend 2 weeks on an anesthesia rotation as part of the required 2-month student surgical rotation. Part of the anesthesiology teaching program included direct patient care in the operating room overseen by a staff anesthesiologist. This rotation required being fitted with a personal “hearing aid” earpiece to listen to the patient's chest during the surgery. I was immediately impressed by how this device made it possible for me to do several different things at once, being aware (almost subliminally) of cardiac activity and breathing as well as being able to adjust the delivery of anesthesia agents and observe the progress of the surgery. Later in my medical school experience, I spent an additional 2 months with the anesthesiology department, learning more subtleties of the techniques of anesthesia delivery and confirming my developing interest in the field. My earpiece and precordial stethoscope continued to be an essential part of my practice. I finished residency training in August 1978 and arrived at the University of Virginia in September to begin my professional career. I carried my earpiece with me to this first “real” job. Monitoring of patients in the operating rooms and ICU was primitive (by today's standards) and rarely included a 3-inch orange oscilloscope screen with a single cardiac complex displayed. It wasn't until 5 or so years later that multi-lead continuous electronic cardiac monitoring became standard in the operating rooms and ICUs.
During the late 1970s and early 1980s, the precordial stethoscope was the only useful continuous patient monitor. With rare exceptions, blood pressure was intermittently manually determined and recorded by hand on the anesthetic record, virtually unchanged since Harvey Cushing's anesthetic performed in 1896. Invasive monitoring was available (arterial lines, central venous catheters, and pulmonary artery catheters) but only routinely used in cardiac surgery patients and occasionally in the cardiac ICU. There are no quality outcome studies published regarding continuous precordial monitoring; however, there are anecdotal reports of patient “saves” with its use.
While I was obtaining additional training in internal medicine at a private practice teaching hospital in Memphis, Tennessee, I rotated for a month with an anesthesiology group in the operating room. No one in the group used precordial monitoring; a finger on a pulse was the standard technique for continuous patient monitoring within the group. I used my precordial stethoscope in all of the cases in which I was involved. Shortly after anesthesia induction in a middle-aged woman for a cholecystectomy, her heart sounds suddenly disappeared. I informed my supervising attending, and he felt for a pulse and called for an electrocardiogram (ECG) scope. Ventricular fibrillation was identified, cardiopulmonary resuscitation was started, and defibrillation was delivered successfully.
The start of anesthesia requires many manual tasks, and keeping a finger continuously on a patient's pulse is a technical impossibility. The precordial listening device can be used continuously from the start of anesthesia. It provides more information on patient condition than intermittently feeling a pulse. I visited Memphis after completing my anesthesiology residency to interview for a job with the same anesthesia group and noted that almost all of the group members were now using precordial stethoscopes. Despite the current commitment to evidence-based practice supported by large randomized, blinded controlled trials, anecdotes such as this can powerfully influence individual clinical practice and can facilitate diffusion of technology that results in improvements in care.
Monitoring With Pulse Oximetry
Although precordial monitoring allows the caregiver to detect airway obstruction and suspect other ventilation issues, inadequate delivery of oxygen can occur in the face of adequate ventilation due to other causes. Gas lines and anesthesia systems can fail, and oxygen tanks can become depleted during use. Failure to notice these events can result in delivery of hypoxic gas mixtures during anesthesia and mechanical ventilation. Developing hypoxemia is difficult to identify clinically from skin color and vital signs, and the effects of prolonged hypoxemia can be devastating. At the time, hypoxemia could only be suspected by noting gray or blue skin or nail bed color (cyanosis) and confirmed by arterial blood analysis (which in 1985 took at least 30 min to obtain from a distant laboratory). Although polarographic oxygen monitoring of inhaled gas concentration and anesthesia machine “fail-safe” valves (pressure-sensitive devices that stopped all gas flows if the oxygen line or tank pressure reached zero) helped to prevent inadvertent hypoxic gas administration, the detection of hypoxia in patients remained problematic.
The use of ear oximetry to alert high altitude fighter pilots of the need for an oxygen mask before losing consciousness paved the way for the development and use of pulse oximetry in the operating room. This device was attached to a person's earlobe and measured the saturation of the entire collection of blood in the pinna, most of which was venous. By adjusting the pressure on the earlobe and squeezing out most of the venous blood, a value closer to arterial saturation could be obtained. This device was never perceived to be useful as a monitor of oxygen saturation in the operating room, since it did not measure arterial blood and was finicky, responding to temperature and blood flow changes.
In the late 1970s, the first practical finger arterial oximeter was developed by the Biox company, based on estimating the arterial saturation during the period of maximum digital swelling. This was accomplished by tracking the periodic change in total light absorption produced by the pulsatile flow of arterial blood into the finger. This change in light path during cardiac activity is termed digital plethysmography, which allows separation of the arterial signal from other tissue and venous blood contributions to light absorption. Using red and infrared light-emitting diodes, this device calculated the arterial saturation at peak digital expansion by measuring the change in the ratio of absorption at these 2 wavelengths and applying Beer's Law. The operating room potential of this new device was appreciated by the owners of the Nellcor company, who then purchased the patent rights, improved the sensors and displays, and added a variable tone reflecting saturation changes. Nellcor began producing and marketing a device for monitoring during anesthesia in 1985. For the first time, the presence or development of arterial hypoxemia could be identified in real time before any change in skin color or hemodynamics was apparent. Other than the ECG, this was the first successful continuous electronic monitor for use during anesthesia. Over the next several years, use of pulse oximetry spread rapidly through the anesthesia community.
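To make the ratio-of-ratios idea concrete, the following minimal sketch (in Python) estimates saturation from the pulsatile (AC) and baseline (DC) components of light absorption at the two wavelengths. The linear calibration used here is an illustrative textbook approximation, not the algorithm of the Biox or Nellcor devices; real oximeters map the ratio to saturation through empirically derived calibration curves.

```python
# Minimal sketch of the "ratio of ratios" calculation described above.
# The pulsatile (AC) component of absorption at each wavelength is normalized
# by its baseline (DC) component, and the red/infrared ratio is mapped to a
# saturation estimate. The linear calibration below is a commonly cited
# classroom approximation; it is not any manufacturer's algorithm.

def estimate_spo2(red_ac: float, red_dc: float, ir_ac: float, ir_dc: float) -> float:
    """Approximate arterial oxygen saturation (%) from plethysmographic signals."""
    ratio = (red_ac / red_dc) / (ir_ac / ir_dc)   # "ratio of ratios"
    spo2 = 110.0 - 25.0 * ratio                   # illustrative calibration only
    return max(0.0, min(100.0, spo2))

# Example: a ratio near 0.5 corresponds to roughly 97-98% saturation,
# while a ratio near 1.0 corresponds to roughly 85%.
print(estimate_spo2(red_ac=0.010, red_dc=1.0, ir_ac=0.020, ir_dc=1.0))  # ~97.5
print(estimate_spo2(red_ac=0.020, red_dc=1.0, ir_ac=0.020, ir_dc=1.0))  # ~85.0
```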
Changing and standardizing medical practices in any environment is difficult. The first publication attempting to mandate minimal standards for monitoring during anesthesia came from Boston in 1986 in a JAMA publication: “Standards for Patient Monitoring During Anesthesia at Harvard Medical School,” authored by Eichhorn, Cooper, Cullen, and others.12 This paper was met with scorn and anger by many anesthesia providers; no medical person wants to be told that he/she must change his/her practice. A cardinal requirement of the standards was for continuous monitoring by a trained anesthesia person during every anesthetic. In addition, what was to be “continuously” monitored was for the first time specified:
During every administration of general anesthesia, the anesthetist shall employ methods of continuously monitoring the patient's ventilation and circulation. The methods shall include, for ventilation and circulation each, at least one of the following:
For Ventilation—Palpation or observation of the reservoir breathing bag, auscultation of breath sounds, monitoring of respiratory gases such as end-tidal carbon dioxide, or monitoring of expiratory gas flow. Monitoring end-tidal carbon dioxide is an emerging standard and is strongly preferred.
For Circulation—Palpation of a pulse, auscultation of heart sounds, monitoring of a tracing of intra-arterial pressure, pulse plethysmography/oximetry, or ultrasound peripheral pulse monitoring.12
Over the ensuing years, pulse oximetry became routinely available in operating rooms and later spread to the recovery rooms, ICUs, acute patient wards, and even patients' homes. To reiterate, these minimal required monitoring standards strongly asserted the need for one-on-one presence of a trained individual monitoring the patient and specified several possible ways to achieve the required continuous monitoring of circulation and ventilation. Interestingly, pulse oximetry was proposed as an option for continuous monitoring of the circulatory system, not ventilation. Pulse oximetry has the additional benefit of being able to detect clinically unrecognized evolving hypoxemia. These standards were gradually adopted and over time have been modified to mandate continuous CO2 monitoring and pulse oximetry and, recently, patient temperature monitoring during anesthesia.
Today, most anesthesia practitioners, respiratory therapists, critical care nurses, and other caregivers believe that the use of a pulse oximeter is mandatory and helpful during anesthesia, with deep sedation, in the recovery room, and in critically ill patients in the ICU. In addition, most of these clinicians believe that the use of pulse oximetry has been proven to improve patient outcome when used in these environments. Most are surprised to find that the scientific study of pulse oximetry (or any other monitoring device for that matter) has failed to demonstrate any measurable important impact on patient outcome.
The next part of this paper will review what is known about the benefits and problems of monitoring in general, concentrating on studies of pulse oximetry. I will try to answer the question of why there is so little data supporting the proposition that any monitoring device has been shown to have a beneficial impact on patient care. In this process, I will answer the question of how to assess the impact of a monitor on care by looking at the impact on caregivers' actions and how this can affect patient care.
Early Studies of Pulse Oximetry in the Operating Room
A remarkable landmark study of the effect of pulse oximetry on outcome from anesthesia and surgery by Moller et al in Denmark was reported in 1993.13 Over 20,000 subjects in 5 hospitals in Denmark were randomized to receive anesthesia in an operating room equipped with a pulse oximeter or in one without an oximeter. The study was carried out between 1989 and 1991. Randomization was done by surgical service block; an oximeter was assigned or not assigned to a particular operating room and then, if assigned, used for all cases in that room that day. If the subject was in an operating room with pulse oximetry, oximetry was continued through the recovery room stay.
Two papers as well as 2 editorials were simultaneously published relating to the outcomes of this study. Because of the study design, important patient and surgical variables or risks that could potentially affect the study outcome were not prospectively identified and randomly assigned. Therefore, the first paper reported an analysis of the distribution of these factors and concluded that there was no bias introduced from unequal distribution of patient and surgical factors.14 Also, there was no difference in these factors between the study groups and the subjects who entered but failed to complete the study. This careful analysis confirming the success of randomization was essential because the second paper revealed no significant improvement in important patient outcomes between subjects monitored with and without a pulse oximeter during anesthesia and in the recovery room. Specifically, there was no difference in mortality, length of hospital stay, cardiac arrest frequency, or time to wake up after anesthesia. The only significant and possibly important difference was a lower incidence of cardiac ischemia as noted by ST segment changes seen on ECG monitors in the operating room in the monitored group. Despite these noted changes, there was no difference in the incidence of myocardial infarction, cardiac arrest, or any cause of mortality in the operating room or during the hospital stay. There were more respiratory complications in the monitored group, but these were minor and were attributed to differences in preoperative respiratory risks. The oximetry subjects were slightly older and possibly sicker than the control subjects.
The use of the pulse oximeter resulted in a marked difference between the treatment and control groups in the identification and treatment of hypoxemia, which was 19 times more frequent in the monitored group (eg, the device was better at diagnosing hypoxemia than clinicians were at detecting early cyanosis). Subjects in the oximetry group more frequently received supplemental oxygen in the recovery room and were more likely to be discharged to the floor on oxygen. As a group, the monitored subjects stayed an average of 15 min longer in the recovery room before discharge to an acute hospital floor. The postoperative complication rate was about 10% in both groups. Changes in clinical care were more frequent in the monitored group and increased with increasing preoperative patient illness. In the sickest group of subjects with oximetry (American Society of Anesthesiologists physical status 4), over 20% had care changed in the operating room and over 25% in the post-anesthesia care unit, mostly receiving increased inspired oxygen or narcotic reversal agents in response to oxygen saturation changes.
The most interesting findings from this study came from the survey of the anesthesiologists who participated. Despite knowing the lack of impact pulse oximetry had on serious patient outcomes, 92% of the 104 anesthesiologists believed that using a pulse oximeter during anesthesia and in the recovery room would be beneficial and improve safety. In addition, 18% stated they experienced a specific event where pulse oximetry allowed avoidance of a serious event or complication. Ninety-four percent reported that they had experienced an event where pulse oximetry was very helpful at guiding clinical care. Only one person reported that the device had provided false security during an event leading to (possibly) a worse outcome. Eighty percent said that they felt more secure when using a pulse oximeter, and 54% said its use had changed their clinical practice of anesthesia.
Since the publication of this early, large study, several other trials evaluating the impact of oximetry in the operating room have been carried out. Despite these efforts, no significant improvements in important patient outcomes have been identified with pulse oximetry use in the perioperative patient population. This has led to a Cochrane Collaboration authors' conclusion in 201415 that is essentially unchanged from the initial conclusions in 1993, 2003, and 2009:
The studies confirmed that pulse oximetry can detect hypoxemia and related events. However, we have found no evidence that pulse oximetry affects the outcome of anesthesia for patients. The conflicting subjective and objective results of the studies, despite an intense methodical collection of data from a relatively large general surgery population, indicate that the value of perioperative monitoring with pulse oximetry is questionable in relation to improved reliable outcomes, effectiveness, and efficiency. Routine continuous pulse oximetry monitoring did not reduce either transfer to ICU or mortality, and it is unclear if there is any real benefit from the application of this technology for patients…15
At first blush, this evidence-based conclusion, which has stood unchallenged by additional data for over 20 years, seems unlikely because of the universal belief that early detection and prevention of hypoxemia will improve patient outcome in the operating room and post-anesthesia care unit. This belief is widely shared by patients and clinicians, and the use of pulse oximetry has become mandatory as a practice standard in anesthesia. Possibly as a by-product of mandatory monitoring with pulse oximetry during anesthesia, malpractice insurance rates in anesthesiology have fallen dramatically. This is despite the lack of scientific support demonstrating less patient risk of harm with its use.
The fundamental problem with analysis of monitors is they do not deliver care; they only deliver data that a responsible caregiver can use to deliver or change care. To associate a patient's outcome (good or bad) with use of a monitor, the clinician must take an action that requires using the data being monitored, an action that would not have been taken without the monitor being available. Early development of oxygen desaturation is indeed difficult to clinically detect and can be caused by mechanical failures, technical faults, patient physiological changes, and other things. Identifying early desaturations can alert the caregiver of the need to look more carefully for a developing problem with the oxygen source, the anesthesia machine function, the patient's breathing, the mechanical ventilator, or changed cardiac or vascular function. Severe hypoxemia is associated with detectable clinical changes (cyanosis) and may have severe patient consequences, but this is only likely if hypoxia is not quickly reversed or if cardiac activity is severely compromised (ie, cardiac arrest or profound hypotension). To summarize, pulse oximetry can deliver notice of impending severe hypoxemia, but the device itself is only helpful to the patient by alerting the caregiver of this possibility. The pulse oximeter does not identify the cause or deliver any treatment that could benefit the patient. If patient outcome is changed by using a monitor, then the actions of the caregivers must have changed due to having the monitored data, and these actions are the cause for the changed outcome, not the monitor.
To appropriately evaluate the impact of a particular monitor on patients, one must study the impact of the monitor on the caregiver behavior rather than on the patient outcome only. Few studies of pulse oximetry, or any other monitor for that matter, have embraced this approach. Care is complex, and the caregiver can be affected by a monitor in many ways. Some of these effects are listed in Table 1.
Effect of Improved Monitor Function on Caregiver Behavior
Early pulse oximeters often failed to deliver reliable or, at times, any data. Intermittent device failure can distract caregivers from attention to real issues and needed patient care if they spend time and energy adjusting the device or moving it to a different finger. Patient movement and low peripheral perfusion adversely affected these early pulse oximeters. In the original pulse oximetry studies by Moller et al13,14 discussed above, it was noted that the oximeter failed and was abandoned partially or completely in 7.2% of the very sickest (American Society of Anesthesiologists physical status 4) subjects. The failure rate increased linearly with physical status.
In 1995, the Masimo company introduced a new measurement technique that could accurately determine saturation during patient motion and low perfusion states by separating the arterial signal from the venous signals and other interferences. Taking advantage of the availability of this new technology, we attempted to compare the effect on meaningful patient outcomes of giving the caregiver conventional or improved oximetry data in a randomized trial in the ICU following cardiac surgery.16
Two covered and shielded pulse oximeter probes, one conventional and one with the new technology, were attached to adjacent fingers of 86 subjects on entry to the ICU immediately following coronary artery surgery. We informed caregivers only that we were evaluating a new oximeter; no mention was made that we were studying changes in patient care delivered at the bedside. The data from both oximeters were collected continuously and later analyzed from the computer record, but only the data from one of the devices were randomly selected to be displayed to the caregiver at the bedside. Data were collected until subject extubation or for a maximum of 24 h if the subject was not extubated within 24 h.
After all data were collected, we confirmed that the absolute amount and percentage of nonfunctional monitoring time were significantly greater for the conventional than for the new technology (8.7 ± 16.4% for the conventional vs 1.2 ± 3.3% for the new pulse oximeter, P < .001), and the time to extubation was not different between the 2 groups (634 ± 328 min for the new pulse oximeter vs 706 ± 459 min for the conventional pulse oximeter).
Clinicians managing subjects with data from the new technology oximeter weaned subjects faster to an FIO2 of 0.40 (176 ± 111 min for the new pulse oximeter vs 348 ± 425 min for the conventional pulse oximeter, P = .0125) and obtained fewer confirmatory arterial blood gas measurements (2.7 ± 1.2 for the new pulse oximeter vs 4.1 ± 1.6 for the conventional pulse oximeter, P < .001) but made the same number of ventilator changes during this weaning process (2.9 ± 1.2 for the new pulse oximeter vs 2.9 ± 1.7 for the conventional pulse oximeter). Our conclusion from this study is that ICU clinicians change and improve ventilator weaning processes (for oxygen weaning) when more reliable oximetry monitoring data are presented. Subjects benefited by requiring less blood removal, incurring lower hospital charges, and spending less time exposed to an unnecessarily high FIO2. Caregivers dramatically changed how they provided care with the improved, more reliable data.
Most importantly, this paper describes a way to study the impact of any monitoring device by identifying the effects of using the monitor on caregivers' behaviors and processes of care rather than looking only for changes in patient outcomes. It helps explain why pulse oximetry was universally embraced by clinicians and rapidly became a standard of care in anesthesia practice. During this study period, when the experimental setup was not being used, the new device was often placed on patients not enrolled, who were in extremis and in whom no signal was obtainable with the conventional device.17 Often, a believable and useful saturation could be obtained even when cardiopulmonary resuscitation was being administered. The staff had come to rely on the new device in the most extreme cases when the conventional device had failed, giving them confidence to deliver necessary care without pausing frequently to run blood gases. A monitor is passive; it can only benefit patients if the person watching the monitor needs and uses the monitoring data to change patient care in a way that will improve outcome.
The ideas discussed so far were presented in part as the “Egan Lecture” at the AARC Congress in November 2015 in Tampa, Florida. Dr Egan was a pioneer supporting the establishment of the respiratory care profession. He authored an early text, Fundamentals of Inhalation Therapy published in 1969.18 His opinions about patient care and respiratory therapists (known as inhalation therapists at that time) are encapsulated in his words:
The patient on a mechanical ventilator requires close attention and observation by both nurses and inhalation therapists who are knowledgeable in the clinical aspects of inadequate ventilation…The possibility of mechanical failure and the sudden changes that may develop in the patient's physiology make it mandatory that he [the patient on mechanical ventilation] not be left alone for an instant.18
About monitoring he said: “…mechanical or electronic monitors must not be relied upon as a substitute for the personal attention of a skilled therapist. A monitor will not correct a deficiency, and its value depends entirely upon the capability of the personnel responding to its call.”18 It seems we have forgotten this early admonition of Dr Egan. Clinicians at the bedside responding to changing patient conditions, not the monitors, make the interventions that can make a difference in patient outcome.
Confusion Arises When Monitors Are Used to Make a Diagnosis
Confusion persists between monitoring a patient and making a diagnosis in a patient when the measurement made is obtained using a monitoring device. A study illustrating this conundrum was recently published.19 The treatment of 108 infants with symptoms of mild-to-moderate bronchiolitis and a true saturation of 88% or more presenting to an emergency department in a Toronto hospital was studied. Researchers were evaluating the effect that oximetry measurements during pulse oximetry monitoring had on care decisions while patients were being evaluated for treatment and/or hospital admission in the emergency department. The end points measured were prolonged stay in the emergency department, frequency of hospital admission, or return and admission within 72 h of emergency department discharge.
After true saturation was determined to be >88%, parental consent was obtained, and subjects were monitored in the emergency department with either an accurately calibrated pulse oximeter or one set to read 3% higher than the actual value. Families were blinded to the oximeter assignment, as were the treatment clinicians and researchers analyzing the data. Physicians were informed only that participants had a 50% probability of having their displayed saturations altered by a physiologically small amount and that the true triage saturation was 88% or higher.
The findings in this study were that infants monitored with an artificially elevated pulse oximeter were less likely to be hospitalized initially and less likely to receive active hospital care for >6 h in the emergency department than those with unaltered (accurate but lower) oximetry readings. Analysis of the subjects entered into the study revealed they were appropriately distributed between the 2 study groups by age, symptoms, and initial screening saturation. There was no increase in return to the emergency department or admission within 3 d between the discharged subjects from either group.
This study suggests that the displayed oxygen saturation during monitoring was the strongest factor influencing the clinician's decision to increase treatment or admit the patient. There were no differences in patient clinical outcomes except for a much lower admission rate for those monitored with the artificially increased saturation displayed. The monitored value, even though clinicians knew it was possibly wrong, was used for diagnosis of the severity of the child's illness and strongly influenced the decision to escalate emergency department care and admit the child. Equally ill children with artificially elevated saturations were viewed as less sick and discharged home more frequently and sooner with no ill effects.
If used for monitoring, trends in oximetry during observation and treatment, not absolute values, would have directed care. The outcome of this trial is informative but not surprising; making a diagnosis (eg, hypoxemia) and treating the cause of a physiologic derangement while following the monitored trend during treatment are entirely different actions. In this study, oximetry as a diagnostic technique was used by caregivers as a strong criterion for hospital admission and treatment in infants with bronchiolitis. The ability of a monitor to present data that are as accurate as data from the standard diagnostic laboratory device confounds the intrinsic meaning of “monitoring.” Monitoring, in the simplest sense, is watching changes in a patient's physiology over time while the patient is being actively treated and using the monitor's output as a trend to decide whether to continue or to alter the treatment. While a specific diagnosis may be suspected, making a diagnosis is not the primary or even an important contribution of the monitoring device. That an absolute (but false) high saturation level predicted discharge in the described study reflects a misuse of the monitor. Saturation trend data were not reported in the study, but they could have been useful in deciding which patients were improving and could be safely discharged from the emergency department and which were failing to improve and needed admission for further treatment. Instead, those with falsely high saturations (higher only by an average of 1.5%) were twice as likely to be discharged, did well, and did not require admission at a later time, compared with those children who were admitted based on the accurate but lower oximeter reading. Using the pulse oximeter for diagnosis of severity of illness of these children did not help the clinicians or the subjects in this trial.
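A minimal sketch of what trend-based use of the monitor might look like is shown below. The window length, slope threshold, and example readings are hypothetical illustrations, not values drawn from the cited study; the point is only that a falling trend, rather than any single absolute number, is what monitoring is meant to reveal.

```python
# Illustrative sketch only: monitoring as trend-following rather than diagnosis
# from a single absolute value. Window length, slope threshold, and the example
# readings are hypothetical and not taken from the cited study.

def spo2_trend(readings, window=10):
    """Average change in saturation per reading over the most recent window."""
    recent = readings[-window:]
    if len(recent) < 2:
        return 0.0
    return (recent[-1] - recent[0]) / (len(recent) - 1)

def is_deteriorating(readings, slope_threshold=-0.3):
    """Flag a falling saturation trend, regardless of the current absolute value."""
    return spo2_trend(readings) < slope_threshold

# Two patients with the same current reading (94%) but opposite trajectories:
improving = [90, 90, 91, 91, 92, 92, 93, 93, 94, 94]
worsening = [99, 98, 98, 97, 97, 96, 96, 95, 95, 94]
print(is_deteriorating(improving))  # False: trend is rising
print(is_deteriorating(worsening))  # True: trend is falling
```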
New Monitors, New Problems: Alarms
The introduction of pulse oximetry began with the monitor delivering a fixed frequency sound with each heartbeat, thus releasing the anesthesia provider from an earpiece on the chest and/or a finger on a pulse. This allowed more freedom to pursue other important activities in the operating room while still maintaining continuous patient observation. Shortly thereafter, with the invention and addition of a varying tone that reflected the direction of a change in saturation percentage, the pulse oximeter delivered auditory trended data without requiring frequent scanning of the monitor screen to identify desaturations. This addition made the oximeter very useful and extremely popular with anesthesia providers.
With the proliferation of additional electronic monitoring devices over the last 20 years and the removal of the monitor (the person watching the patient) from the bedside as in the ICU, monitor alarms have become necessary to identify when a patient needs immediate, personal attention from their (remote) caregiver. In the operating room environment with one-on-one care, monitor alarms have allowed multiple tasks to be carried out without constantly looking away from the patient and the operative field but still being aware of the changes in the patient's vital functions. Alarms provide an added layer of observation and detection of undesired events by alerting the caregiver when thresholds of change in the patient's physiology are reached. These threshold alarm values in the operating room are set by the clinician and depend on patient and procedural factors. Alarms supplement but do not replace the continuous monitoring and patient watching taking place. Outside of this environment, when the caregiver is often far from the patient and may be responsible for many patients, monitor alarms are essential but have led to serious problems. Attaching a monitor with a critical value alarm setting to a patient receiving life support, but with no one close to the bedside to respond, gives a false sense of security while actually increasing the risk of experiencing an unfortunate event.20 A monitor alarm does not replace the need for constant attention to the patient. Monitors are passive; they do not deliver care.
False and non-critical alarms are very frequent in ICUs, and these account for the vast majority of alarms.21 These false alarms draw clinicians to the bedside of a stable patient to attend to the alarm and away from other patients actually needing attention or care.22 The practice of one person caring for multiple patients with electronic monitoring invariably results in delaying or missing important patient changes due to the large number of false alarms that require attention and distract and delay the caregivers from responding to actual issues (true alarms). In a large ICU, alarms sound constantly, with little to differentiate one alarm as more important than another. Alarm fatigue and desensitization are serious problems.4,23 Many medical regulatory agencies have identified this problem as very serious, requiring each institution to create its own unique solution.24 Central monitoring of cardiac rhythms by telemetry has been useful in cardiac care units staffed with monitor watchers who can differentiate malignant situations requiring clinician attention from benign alarms, which can be silenced centrally. Other monitors, such as the pulse oximeter, are not as easily analyzed at a central display and often trigger alarms that may or may not be real or require patient attention.
There is no simple solution to this problem of alarm management. Integrating monitors and using artificial intelligence to reduce nuisance alarms and prioritize alarms that predict serious events has been suggested as a way to mitigate the problem. However, due to the wide variety and ages of the devices in clinical use, there is no available standard technology that can improve this situation.22 Some institutions have created local solutions to reduce the number of false alarms and automatically notify nurses of the need to respond to potentially true alarms using automatic text alerts and pagers.25 Linking the patient alarm to the nurse who is assigned to that patient improves response time and accuracy of assessment of the situation when an alert is annunciated.
The desire to increase patient safety and the fear of litigation have resulted in demands for universal monitoring of all hospitalized patients, not just those in critical care areas. The problems detailed above with ICU monitoring are compounded as monitoring expands beyond the ICU walls. Cardiac telemetry units have used remote monitoring with alarms based on continuous ECG monitoring, and they have not been without problems.26 Several institutions have tested systems for deploying patient monitors and directing responders in acute patient wards with mixed results.27 These early trials were undertaken with the hope that with universal monitoring it would be possible to identify patients beginning to decline and then provide early interventions. If so, early intervention would result in fewer rapid response team calls and ICU admissions and shorter lengths of stay for those admitted to an ICU.
In one study, the researchers implemented a patient surveillance system based on pulse oximetry with nursing notification of violation of alarm limits via wireless pager.28 Data were collected for several months before and several months after implementation of the monitoring and call system. Concurrently, matching outcome data were collected on 2 other postoperative units not being monitored. Baseline threshold alarm limits were set to reach a balance between actionable and false positive alarms (SpO2 <80% and heart rate <50 or >140 beats/min). These limits could be altered by the nursing staff up to ±10% or altered further, for a specific reason, by a physician. Notification delay is an important issue in alarm frequency management.29 Appropriate delay eliminates many transient and motion artifact-generated false alarms; a 15-s audio alarm delay at the bedside and an additional 15-s delay for pager annunciation were chosen for this study (leading to a 30-s delay before a nurse would be notified by pager of violation of alarm thresholds).
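The two-stage delay can be pictured with a short sketch. The logic below is a simplified assumption of how such a notification chain might behave, using the thresholds reported in the study; the function names, polling structure, and example values are hypothetical and are not taken from the published system.

```python
# Simplified sketch of the two-stage notification delay described above:
# a bedside audio alarm only after a limit violation has persisted for 15 s,
# and a pager notification only after a further 15 s (30 s total). Thresholds
# follow the study (SpO2 < 80%, heart rate < 50 or > 140 beats/min); the
# names and polling structure are hypothetical.

from dataclasses import dataclass
from typing import Optional

BEDSIDE_DELAY_S = 15  # sustained violation before the bedside audio alarm sounds
PAGER_DELAY_S = 15    # additional sustained violation before the nurse is paged

@dataclass
class AlarmState:
    violation_start: Optional[float] = None  # time the current violation began

def limits_violated(spo2: float, heart_rate: float) -> bool:
    return spo2 < 80 or heart_rate < 50 or heart_rate > 140

def evaluate(state: AlarmState, now: float, spo2: float, heart_rate: float):
    """Return (bedside_alarm, page_nurse) for a sample taken at time `now` (seconds)."""
    if not limits_violated(spo2, heart_rate):
        state.violation_start = None          # transient artifact or recovery: reset
        return False, False
    if state.violation_start is None:
        state.violation_start = now
    elapsed = now - state.violation_start
    return elapsed >= BEDSIDE_DELAY_S, elapsed >= BEDSIDE_DELAY_S + PAGER_DELAY_S

# A 10-s motion artifact never alarms; a sustained desaturation pages the nurse at 30 s.
state = AlarmState()
for t, spo2 in [(0, 75), (10, 95), (20, 75), (35, 75), (50, 75)]:
    print(t, evaluate(state, t, spo2, heart_rate=80))
```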
There were half as many deaths observed after implementation (2 deaths) compared with the time before (4 deaths) in the studied unit. These included deaths on the ward and those occurring after transfer to an ICU. This observation did not reach statistical significance. Rescue events were dramatically less frequent in the monitored unit compared with the non-monitored units and decreased from 3.4 (1.89–4.85) to 1.2 (0.53–1.88) per 1,000 patient discharges after implementation. No significant changes were seen for rescue events in the comparison units between the 2 time periods. Transfers to the ICU also declined significantly, from 5.6 (3.7–7.4) per 1,000 patient days to 2.9 (1.4–4.3) in the monitored unit, whereas only small, nonsignificant changes were observed in the 2 comparison units between the 2 study periods. This study suggests that there may be benefits to implementing universal monitoring, but a balance between alarm threshold settings and notification delay must be carefully chosen to reduce false alarms without unduly delaying the nursing response to true events. Also, trained staff and reliable communication systems must be established and meticulously maintained to achieve success. This study only included monitoring with pulse oximetry; adding additional monitors increases manyfold the opportunities for failure. Although this study is mildly encouraging, the authors state that they experienced failure in several previous institutional attempts to deploy universal monitoring. They focus in this paper on the details of how they improved the response of those staff who must react to all alarms if the system is to be fail-safe.
In summary, detection of patients who begin to deteriorate outside of intensive monitoring areas, where responders are not close to the bedside, cannot depend on our current monitoring devices and the monitoring practices employed in ICU settings. There is a misguided belief that universal monitoring of all hospitalized patients with pulse oximetry or other monitors will prevent undesired outcomes and improve patient safety.30 As emphasized throughout this paper, monitors are passive; they do not provide patient care. Only caregivers can improve outcomes by making correct interventions when they are needed, and only by being at the bedside at the right time. Monitors and alarms can alert responsible caregivers as to when to be at the bedside, but adequate and trained personnel must be readily available to arrive swiftly and administer needed care. No current monitoring or alerting system is 100% accurate, and massive numbers of false alarms will still be issued, compounding the need for patient and monitor attention. Dealing with the distracting false calls but remaining vigilant for the true ones is the biggest problem preventing successful response to alarms. Deployment of monitors is easy; establishing prompt, reliable, and informed responses is hard. Doing the first without the second is a prescription for failure and increased patient harm. Such a failure can inflict a huge burden of guilt on the caregivers involved and increase the potential liability to the institution.
Solutions to early detection of patient deterioration need to be more robust than the simple alarms issued from the monitoring devices currently in use. Research needs to be done integrating multiple patient variables from large monitored patient populations to identify markers that reliably predict the need for care escalation. Human factors, such as fatigue, need exploration and interventions to improve caregiver vigilance during long care shifts. Adequate backup personnel and frequent breaks are helpful in preventing exhaustion and desensitization to repetitive alarms. Institutions considering providing universal monitoring would be wise to proceed deliberately and with adequate attention to the devices selected and the humans responsible for the care to be delivered.
Conclusions
Monitors are passive; they do not deliver care. By themselves they cannot achieve or affect important patient outcomes. The lack of quality outcome data is not a condemnation of monitoring; the question of how to study monitors must be restated. Monitors (ie, the caregivers who are watching patients) do affect patient outcome. When care is one-on-one, a physiological monitor can help the caregiver to give the best care by demonstrating the immediate need for and effects of the intervention. When the care is provided to several patients simultaneously, monitors can provide alarms that attract caregiver attention. If most alarms are not actionable, care will suffer. Finding safe and reliable solutions to too many false alarms in this setting is essential but has not yet been achieved. For now, patients and caregivers suffer daily from alarm overload.
Footnotes
- Correspondence: Charles G Durbin Jr MD FAARC, Box 800710, Charlottesville, VA 22908. E-mail: cgd8v@virginia.edu.
Dr Durbin has disclosed a relationship with Halyard Medical Company.
Dr Durbin presented a version of this paper as the 42nd Donald F Egan Scientific Memorial Lecture at the 61st AARC Congress, held November 7-10, 2015, in Tampa, Florida.
- Copyright © 2016 by Daedalus Enterprises