Prioritizing performance measurement for emergency department care: consensus on evidence-based quality of care indicators

Original Research

Michael J. Schull, MD, MSc*†‡; Astrid Guttmann, MDCM, MSc*‡§||; Chad A. Leaver, MSc*; Marian Vermeulen, BScN, MHSc*; Caroline M. Hatcher, RN, BScN, MHS¶; Brian H. Rowe, MD, MSc#; Merrick Zwarenstein, MBBCh, MSc, PhD***; Geoffrey M. Anderson, MD, MSc, PhD*

From the *Institute for Clinical Evaluative Sciences, Toronto, ON; †Department of Medicine, University of Toronto, Toronto, ON; ‡Department of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON; §Divisions of Pediatrics and Emergency Medicine, The Hospital for Sick Children, Toronto, ON; ||Department of Pediatrics, University of Toronto, Toronto, ON; ¶Foothills Medical Centre, Alberta Health Services, Calgary, AB; #Department of Emergency Medicine, University of Alberta, Edmonton, AB; **Centre for Health Services Sciences, Sunnybrook Research Institute, Toronto, ON.

CJEM 2011;13(5):300-309

Abstract

Background:

The evaluation of emergency department (ED) quality of care is hampered by the absence of consensus on appropriate measures. We sought to develop a consensus on a prioritized and parsimonious set of evidence-based quality of care indicators for EDs.

Methods:

The process was led by a nationally representative steering committee and expert panel (representatives from hospital administration, emergency medicine, health information, government, and provincial quality councils). A comprehensive review of the scientific literature was conducted to identify candidate indicators. The expert panel reviewed candidate indicators in a modified Delphi panel process using electronic surveys; final decisions on inclusion of indicators were made by the steering committee in a guided nominal group process with facilitated discussion. Indicators in the final set were ranked based on their priority for measurement. A gap analysis identified areas where future indicator development is needed. A feasibility study of measuring the final set of indicators using current Canadian administrative databases was conducted.

Results:

A total of 170 candidate indicators were generated from the literature; these were assessed based on scientific soundness and their relevance or importance. Using predefined scoring criteria in two rounds of surveys, indicators were coded as “retained” (53), “discarded” (78), or “borderline” (39). A final set of 48 retained indicators was selected and grouped in nine categories (patient satisfaction, ED operations, patient safety, pain management, pediatrics, cardiac conditions, respiratory conditions, stroke, and sepsis or infection). Gap analysis suggested the need for new indicators in patient satisfaction, a healthy workplace, mental health and addiction, elder care, and community-hospital integration. Feasibility analysis found that 13 of 48 indicators (27%) can be measured using existing national administrative databases.

Discussion:

A broadly representative modified Delphi panel process resulted in a consensus on a set of 48 evidence-based quality of care indicators for EDs. Future work is required to generate technical definitions to enable the uptake of these indicators to support benchmarking, quality improvement, and accountability efforts.


Over 119 million visits occur each year in emergency departments (EDs) in the United States,1 and about 12 million visits are made to Canadian EDs.2 Annual ED use in the United States increased by 32% from 1996 to 2006, whereas the number of EDs decreased,3 and worsening overcrowding led the Institute of Medicine to describe American EDs as “nearing the breaking point.”4 The situation is similar in EDs in Canada,5 Australia,6 and parts of Europe.7

Concerns about access to and the quality of ED care are widespread. In a community-based survey of 1,400 adults in each of five countries (Canada, Australia, United Kingdom, New Zealand, United States) conducted in 2001 and repeated in 2004,8,9 Americans and Canadians were the most likely to have used EDs and the most likely to say they received “fair or poor” care. They were also among the most likely to have waited more than 2 hours for ED care.

Concern over long ED waiting times and overcrowding has been articulated in the emergency medicine literature for many years; however, only in recent years have major initiatives to reduce these times been launched.10–13 These initiatives have largely focused on reducing waiting times in EDs and have, for the most part, not targeted other aspects of quality of care. A singular focus on reducing waiting times could inadvertently lead to worsening of other aspects of ED quality of care.14 Hence, a balanced measurement approach that targets ED waiting time as a high-priority indicator but includes other high-priority quality of care indicators is needed.

There have been many calls for quality of care monitoring and reporting as a means of improving accountability and quality in health care delivery15,16 for both adults17–19 and children.20 Although there has been some work on indicators of quality of care in EDs,21–23 these efforts resulted in a large number of indicators rather than a smaller, more feasible subset. Moreover, the appropriateness of some commonly used indicators has been questioned.24 The absence of agreement on appropriate high-priority measures of quality of ED care prevents cross-jurisdiction comparisons, benchmarking, and the evaluation of quality improvement interventions.

We report on the development of a national consensus on ED quality of care indicators. This process began in January 2008, when the Calgary Health Region hosted a national emergency department performance measurement summit. Clinical, research, administrative, and decision-making experts from nine Canadian provinces attended. Invitations to the summit were based on expertise (clinicians, administrators, quality improvement experts, information management experts, and decision makers); local, regional, or national profile in emergency medicine; and recommendations from other invitees. The objective was to develop and prioritize an evidence-based and parsimonious set of quality of care indicators for EDs through a nationally representative and scientifically rigorous process. Given the plethora of existing indicators, there was agreement that the indicators would be selected from among existing ones as opposed to developing new ones. Participants agreed to the use of a modified version of the Alberta Quality Matrix for Health (Table 1) to define the domains of quality of care for indicator selection and identification of gaps.25 Two summit participants, a researcher (M.J.S.) and a health system decision maker (C.M.H.), were selected to co-lead the process.

Table 1: Domains of quality of care and safety for indicator selection

Domain | Definition
Acceptability | Health services are respectful and responsive to user needs, preferences, and expectations
Accessibility | Health services are obtained in the most suitable setting in a reasonable time and distance
Appropriateness | Health services are relevant to user needs and are based on accepted or evidence-based practice
Effectiveness | Health services are provided based on scientific knowledge to achieve desired outcomes
Efficiency | Resources are optimally used in achieving desired outcomes
Safety | Mitigate risks to avoid unintended or harmful results
Healthy workplace | Provision of health services does not lead to an unhealthy work environment for health care staff

Adapted from Health Quality Council of Alberta.25

Methods

A national steering committee (N = 24) was established to develop and approve the methodology for the selection of ED indicators, determine the membership of an expert panel (N = 21) (Appendix A, available online), and advise on dissemination of results. The role of the expert panel was to review existing indicators and related evidence and rate each indicator on specific dimensions. Steering committee members and expert panelists were selected from participants at the face-to-face Calgary summit and through nominations by participants and research team members. We sought broad representation from clinicians (doctors and nurses), hospital and ED administrators, health information experts, regional health authorities, ministries of health, and provincial health quality councils.

We conducted a systematic review of the peer-reviewed and international grey literature (open-source publications of government, academia, or industry not usually found through publishers) to identify existing quality of care and patient safety indicators relevant to care in the ED. We sought indicators applicable to clinical conditions (diseases or presenting complaints) or operational processes with associated best practice evidence. The medical databases MEDLINE, CINAHL, Cochrane Library, and HealthSTAR were searched from inception to 2008 using specific terms, such as emergency department, emergency care and emergency pediatric care, emergency health services, performance indicators, quality indicators, performance measures, quality measures, report card, registry, benchmarks, and standards, as well as a variety of terms to capture clinical care quality process indicators for specific conditions (available from the authors). Additional Internet-based searches were conducted of clinical practice guidelines and consensus and best practice reports. Finally, we reviewed indicators currently recommended or monitored by health quality and accreditation organizations and/or by governing associations or societies for relevant clinical specialties in Canada, the United Kingdom, and the United States (e.g., Canadian Association of Emergency Physicians, Joint Commission on Accreditation of Healthcare Organizations, National Quality Forum, Agency for Healthcare Research and Quality, Institute for Healthcare Improvement, Hospital Quality Alliance, Centers for Medicare & Medicaid Services, Evidence-Based Medicine Resource Centre, and National Institute for Health and Clinical Excellence).

Indicators were included for consideration based on the following criteria: 1) provision of sufficient descriptive information for operationalization (i.e., could it be expressed as a numerator and a denominator?) and 2) published evidence of the relevance or importance to ED patient outcomes and/or processes of care. To cast a wide net, the quality of the evidence (e.g., study design, bias, confounding, or outcome measurement) and the psychometric validation of the indicators were not considered.

In situations where two or more indicators were worded similarly and/or measured the same outcome and/or process of care, the one judged by the research team (M.J.S. and A.G.) to be most clearly expressed was retained for further consideration. “Time-to” indicators measuring the same process or outcome, but using different time thresholds, were combined into a single indicator with all time thresholds listed (e.g., percentage of patients with an unplanned return visit to the ED resulting in admission within 48 hours [or 72 hours] of being seen and discharged from the ED, stratified by adult or pediatric patients). Candidate indicators resulting from this review were then grouped according to clinical and operational categories.
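To make the numerator/denominator requirement concrete, the sketch below shows how the combined return-visit indicator above could be computed over visit-level records. This is a minimal illustration in Python; the record fields and function name are hypothetical and are not drawn from any registry schema or from the authors' technical definitions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class EDVisit:
    # Hypothetical visit record; field names are illustrative only.
    patient_id: str
    arrival: datetime
    departure: datetime
    disposition: str  # "discharged" or "admitted"

def return_visit_rate(visits: list[EDVisit], window_hours: int = 48) -> float:
    """Percentage of ED visits ending in discharge (denominator) that were
    followed by a return visit ending in admission within `window_hours`
    of departure (numerator)."""
    by_patient: dict[str, list[EDVisit]] = {}
    for v in sorted(visits, key=lambda v: v.arrival):
        by_patient.setdefault(v.patient_id, []).append(v)

    window = timedelta(hours=window_hours)
    numerator = denominator = 0
    for patient_visits in by_patient.values():
        for i, index_visit in enumerate(patient_visits):
            if index_visit.disposition != "discharged":
                continue
            denominator += 1  # index visit: seen and discharged from the ED
            if any(later.disposition == "admitted"
                   and timedelta(0) <= later.arrival - index_visit.departure <= window
                   for later in patient_visits[i + 1:]):
                numerator += 1
    return 100.0 * numerator / denominator if denominator else 0.0
```

Running the same function at window_hours=48 and window_hours=72 reproduces the combined form of the indicator, with all time thresholds listed.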

Candidate indicators were reviewed by the expert panel in two rounds of electronic surveys. The surveys incorporated links to supporting references for each indicator. In the round 1 surveys, panelists evaluated each indicator on two 5-point Likert rating scales for 1) scientific soundness and 2) the relevance or importance to users and health care providers (Table 2). Ratings from all respondents were weighted equally and combined, and then indicators were classified as “retained”: median score ≥ 4 on soundness and at least one of the importance or relevance measures; “borderline”: median score 3.0 to 3.9 on soundness and at least one of the relevance measures; or “discarded”: median score < 3.0 on soundness. The aim of this process was to use expert panelists' assessments to group indicators as highly rated and poorly rated by most participants, with a middle group of indicators receiving repeat assessment. In the round 2 survey, panelists were provided median scores for each borderline indicator from round 1 and were asked to vote whether to keep or discard each borderline indicator. Borderline indicators that received a vote of “keep” from at least half of the panelists remained borderline; the remainder were reclassified as discarded. This step allowed panelists the opportunity to reduce the number of borderline indicators arising from the first survey.
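Expressed as code, the classification rules read as follows. This is a minimal sketch assuming equal weighting via medians, as described above; how the panel handled combinations not spelled out in the text (e.g., a sound but low-relevance indicator) is our assumption and is flagged in the comments.

```python
from statistics import median

def classify_round1(soundness: list[int],
                    relevance_user: list[int],
                    relevance_provider: list[int]) -> str:
    """Classify one indicator from round 1 ratings on 5-point Likert scales.
    All respondents' ratings are weighted equally and combined via medians."""
    sound = median(soundness)
    # "At least one of the relevance/importance measures": take the better median.
    relevant = max(median(relevance_user), median(relevance_provider))
    if sound >= 4 and relevant >= 4:
        return "retained"
    if sound < 3.0:
        return "discarded"
    if relevant >= 3.0:  # assumption: borderline also requires some relevance
        return "borderline"
    return "discarded"   # assumption: sound but low-relevance indicators are dropped

def classify_round2(keep_votes: int, n_panelists: int) -> str:
    """Round 2: a borderline indicator stays borderline only if at least half
    of the panelists vote to keep it; otherwise it is reclassified as discarded."""
    return "borderline" if keep_votes >= n_panelists / 2 else "discarded"
```

The steering committee's later vote on borderline indicators followed the same pattern: a median of at least 4 on a 5-point scale reclassified an indicator as retained.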

Table 2: Indicator assessment criteria

Criteria | Description
1. Soundness* |
For outcome measure | Sufficient scientific evidence exists to support a link between performance on this patient outcome indicator and processes of care
For process measure | Sufficient scientific evidence exists to support a link between performance on this process indicator and patient outcomes
2. Relevance/importance—user | This indicator is important because it reflects a potentially serious or common gap in the quality of care for patients
3. Relevance/importance—provider | This indicator is important because hospitals or health care providers are able to act in specific ways to respond to quality of care gaps it measures

*Only one criterion for soundness was assessed depending on whether the indicator was a process or an outcome measure.

The final phases of indicator selection occurred at a face-to-face meeting of the steering committee. A summary of the expert panel survey results was provided to participants in advance of the meeting. The steering committee anonymously voted on all borderline indicators using a 5-point Likert rating scale that ranged from 1 (must not retain) to 5 (must retain). Borderline indicators with a median score of ≥ 4 were reclassified as retained; those < 4 were reclassified as discarded. Next, in a facilitated nominal group process, all retained and discarded indicators were reviewed to either affirm or overturn the retained or discarded status of each indicator; this last step produced the final set of indicators.

The expert panel then prioritized each indicator in a head-to-head comparison with each of the other indicators within the same clinical or operational category, based on which of the two indicators being compared they considered to be of “higher priority for measuring quality of care in Canadian emergency departments.” The prioritization score was calculated as the number of times an indicator was selected as the higher priority versus every other indicator in the same category.
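The sketch below illustrates the paired-comparison scoring and shows why prioritization scores scale with category size: with n indicators and p panelists, an indicator can win at most p × (n − 1) comparisons. The callable standing in for a panelist's choice is a placeholder of our own; the survey instrument itself is not described here.

```python
from collections import Counter
from itertools import combinations
from typing import Callable, Iterable

def prioritization_scores(
    indicators: list[str],
    panel_choices: Iterable[Callable[[str, str], str]],
) -> Counter:
    """Sum, across panelists, how many head-to-head comparisons each indicator
    wins within its category. Each element of `panel_choices` represents one
    panelist and returns whichever of the pair it judges the higher priority."""
    wins: Counter = Counter({ind: 0 for ind in indicators})
    for choose_higher in panel_choices:
        # Every indicator is compared once against every other in its category.
        for a, b in combinations(indicators, 2):
            wins[choose_higher(a, b)] += 1
    return wins
```

In a category of 8 indicators, for example, each panelist contributes up to 7 wins per indicator, which helps explain why top scores in Table 3 differ in scale across categories of different sizes.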

The steering committee also carried out a gap analysis using the Alberta Quality Matrix for Health25 to determine priority areas for future indicator development. The set of selected indicators was reviewed and mapped onto the Alberta matrix, and facilitated discussion was used to identify gaps and areas for future indicator development.

We also conducted a feasibility review of the final set of indicators to determine the capacity of currently available Canadian administrative databases to capture each indicator. We looked for variables in the Canadian Institute for Health Information's (CIHI) National Ambulatory Care Reporting System (NACRS) and Discharge Abstract Database (DAD) that could be used to define each indicator and determined whether capture was feasible with existing data, feasible with better quality data (particularly where the data element was not mandatory), or would require new data elements.
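The resulting three-level feasibility rating used in Table 3 can be summarized as a simple decision rule. The sketch below is illustrative only; it assumes the assessment reduces to two questions per indicator (do the required NACRS/DAD elements exist, and are they reliably populated?), which compresses the judgment the authors describe.

```python
from enum import Enum

class Feasibility(Enum):
    # Three-level rating used in the feasibility column of Table 3.
    FEASIBLE = "+++"            # measurable with current NACRS/DAD data elements
    NEEDS_BETTER_DATA = "++"    # elements exist, but quality/completeness must improve
    NEEDS_NEW_ELEMENTS = "+"    # new data elements (or, e.g., chart review) required

def rate_feasibility(elements_exist: bool, reliably_populated: bool) -> Feasibility:
    """Illustrative decision rule for one indicator (an assumption on our part,
    not the authors' published algorithm)."""
    if elements_exist and reliably_populated:
        return Feasibility.FEASIBLE
    if elements_exist:
        return Feasibility.NEEDS_BETTER_DATA
    return Feasibility.NEEDS_NEW_ELEMENTS
```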

This study was approved by the Sunnybrook Health Sciences Centre Research Ethics Board.

Results

Indicator selection

A total of 170 candidate indicators were generated from the detailed literature review (Figure 1). In the round 1 survey, expert panelists classified 53 indicators as retained, 31 as discarded, and 86 as borderline. Response rates for the round 1 surveys ranged from 47.6% to 100%, depending on the clinical or operational category. In the round 2 survey (response rate 52.4%), 47 of the 86 borderline indicators from round 1 were reclassified as discarded and 39 remained borderline (see Appendix B, available online, for the full results from the round 1 survey).

Figure 1: Flow diagram of the indicator selection process.

At the national steering committee meeting, 15 members were in attendance. All retained, borderline, and discarded indicators were reviewed. Committee rankings and facilitated discussion resulted in 46 of the retained (n = 53) and borderline (n = 39) indicators being included in the final set. At this stage, the steering committee also made substantive terminology changes to six indicators to improve clarity and clinical relevance. In addition, 2 of the 78 discarded indicators were considered important by the steering committee and were included in the final set, for a total of 48 indicators grouped into nine clinical and operational categories. In the final survey, expert panel members prioritized the 48 retained indicators within each clinical and operational category; the response rate was 90.5%. The complete set of 48 indicators in nine categories and their priority rankings are presented in Table 3.

Table 3: Description and prioritization of the final set of 48 ED quality of care indicators selected, by clinical/operational category

ED quality of care category and indicators | Priority within category | Prioritization score* | Current feasibility†

Patient satisfaction
Overall patient assessment of how well information was communicated to them or their family during their ED stay | 1 | N/A | +

ED operations
ED LOS—time from first documented contact in the ED to the time of physical departure from the ED (overall and by CTAS) | 1 | 85 | +++
Time from arrival in the ED to first physician assessment, by CTAS | 2 | 81 | +++
Time from decision to admit to departure to floor, for admitted patients | 3 | 69 | +++
Ambulance offload time—time from patient/ambulance arrival to transfer of care to ED staff | 4 | 52 | +++
Percentage of patients who left the ED without being seen | 5 | 50 | +++
Time from ED physician consultation request to decision to admit (if admitted) or to physical departure (if discharged) | 6 | 49 | +++
Percentage of ED stretcher hours/day occupied by inpatients | 7 | 45 | +++
Time from first documented contact in the ED to consultation request or physical departure (if discharged) | 8 | 45 | +

Patient safety
Percentage of patients with an unplanned return visit to the ED resulting in admission within 48 h (or 72 h) of being seen and discharged from the ED, stratified by adult/pediatric patients | 1 | 58 | +++
Percentage of patients with headache discharged home from the ED who were admitted to hospital with a subarachnoid hemorrhage in the subsequent 14 d | 2 | 57 | +++
Percentage of ectopic pregnancy patients with a missed diagnosis | 3 | 55 | +++
Percentage of central lines inserted in the ED that developed catheter-related bloodstream infections | 4 | 37 | ++
Percentage of patients with an unplanned return visit to the ED without admission within 48 h (or 72 h) of being seen and discharged from the ED, stratified by adult/pediatric patients | 5 | 24 | +++
Percentage of intubated patients for whom end-tidal carbon dioxide was monitored | 6 | 24 | ++

Pain management
Time to first dose of analgesic in all painful conditions requiring analgesia | 1 | 12 | +
Percentage of patients with documented pain assessment | 2 | 5 | +

Pediatrics
Percentage of pediatric patients (aged 0–28 d) with fever who received a full septic workup | 1 | 69 | ++
Percentage of pediatric patients (aged 0–28 d) with fever who received broad-spectrum intravenous antibiotics | 2 | 55 | ++
Percentage of pediatric patients (aged 3 mo–3 yr) with croup who were treated with steroids | 3 | 53 | ++
Percentage of pediatric patients (aged 3 mo–3 yr) with urinary tract infection who had urine cultures obtained by catheter, suprapubic, or midstream methods | 4 | 28 | +
Percentage of pediatric patients (aged 3 mo–3 yr) with bronchiolitis who received a chest radiograph | 5 | 25 | ++
Percentage of pediatric patients (aged 3 mo–3 yr) with bronchiolitis who were treated with antibiotics | 6 | 25 | ++

Cardiac conditions
Percentage of eligible patients with AMI who received thrombolytic therapy or PCI | 1 | 80 | +
Percentage of patients with AMI who received an ECG within 10 min of hospital arrival | 2 | 73 | +
Percentage of patients with primary PCI who received their primary PCI within 90 min of arrival | 3 | 73 | +
Percentage of patients with AMI who were given ASA a) in the 24 h before hospital arrival or b) within 3 h of hospital arrival (or 24 h of hospital arrival or during their ED stay) | 4 | 61 | +
Percentage of patients with chest pain who returned to an ED within 72 h to 7 d of an index visit with a confirmed diagnosis of AMI/ACS | 5 | 61 | ++
Percentage of patients with STEMI on first ECG who received fibrinolytic therapy within 30 min of ED arrival | 6 | 57 | +
Percentage of patients with atrial fibrillation who were treated with or received anticoagulation drug therapy or an antiplatelet therapy, if indicated | 7 | 36 | +
Percentage of patients with PCI transported to hospital by ambulance who received primary PCI within 120 min after call for ambulance | 8 | 35 | +

Respiratory conditions
Percentage of patients with asthma who received corticosteroids in the ED and at discharge (if discharged), stratified by age | 1 | 81 | +
Time from arrival in the ED to first documented β-agonist-type bronchodilator therapy for an acute exacerbation of asthma | 2 | 77 | +
Percentage of patients with asthma who had an unplanned return visit to the ED for the same or a related asthma exacerbation within 24 h (or within 24–72 h, or within 72 h) of ED discharge | 3 | 68 | +++
Percentage of patients with asthma who had an objective measurement of lung function during primary ED assessment (one or more of peak flow, oxygen saturation, FEV1, spirometry) | 4 | 66 | ++
Percentage of patients with community-acquired pneumonia who received initial antibiotic therapy within 4 h (or 6, or 8, or 24 h) of arrival | 5 | 60 | +
Percentage of patients with COPD who received corticosteroid therapy in the ED and at discharge (if discharged) | 6 | 55 | +
Percentage of patients with community-acquired pneumonia who had vital signs (including O2 assessment) recorded in the ED | 7 | 53 | +
Percentage of patients with community-acquired pneumonia who had an inpatient LOS ≤ 2 d | 8 | 16 | +++

Stroke
Percentage of eligible patients with acute stroke who received tPA | 1 | 48 | +
Percentage of potentially eligible patients with acute stroke who had a CT scan of the brain within 25 min of arrival at ED | 2 | 43 | +
Percentage of patients with acute stroke given tPA for whom tPA best practice treatment protocol was followed for tPA administration | 3 | 43 | +
Percentage of patients with acute stroke who had their blood glucose level checked on arrival at ED or by EMS prior to arrival and regularly for the first 24 h | 4 | 25 | +
Percentage of patients with acute stroke who had an ECG | 5 | 11 | ++

Sepsis/infection
Time to antibiotics in patients with bacterial meningitis | 1 | 39 | +
Percentage of patients with severe sepsis or septic shock who were given broad-spectrum antibiotics within 4 h of ED arrival | 2 | 38 | +
Percentage of patients with severe sepsis or septic shock who survived to hospital discharge (or to 28 d following discharge, or 60 d) | 3 | 14 | +++
Percentage of patients with severe sepsis or septic shock who were monitored for lactate clearance | 4 | 11 | +

ACS = acute coronary syndrome; AMI = acute myocardial infarction; ASA = acetylsalicylic acid; COPD = chronic obstructive pulmonary disease; CT = computed tomographic; CTAS = Canadian Emergency Department Triage and Acuity Scale; ECG = electrocardiogram; ED = emergency department; EMS = emergency medical services; FEV1 = forced expiratory volume in 1 second; LOS = length of stay; N/A = not applicable; PCI = percutaneous coronary intervention; STEMI = ST segment elevation myocardial infarction; tPA = tissue plasminogen activator.

*Prioritization scores were calculated as the number of times each indicator was selected as the higher priority (selected = 1, not selected = 0) in the paired-comparison exercise, summed across all pairings with the other indicators in the same clinical/operational category and across panelists.

†Feasibility of measuring the indicator using current administrative data sets: +++ = feasible; ++ = feasible if the quality of data in current data fields is enhanced; + = not feasible unless new data elements are collected in administrative data sets.

Indicators listed with multiple time thresholds reflect the different thresholds found in the literature; panelists were not asked to select a preferred time threshold.

Gap analysis

The steering committee mapped the 48 indicators to the domains of the Alberta Quality Matrix for Health (see Table 1).25 Rules for assigning types of indicators to specific domains were established a priori and reviewed by the steering committee, and a given indicator could map to multiple domains. A large number of indicators mapped to safety, effectiveness, appropriateness, and efficiency, whereas relatively few mapped to acceptability, accessibility, or a healthy workplace. Specific gaps included a lack of trauma and pain management indicators. The steering committee identified the highest priority for new evidence-based indicator development in the areas of patient satisfaction, healthy workplace, mental health and addiction, elder care, and community-hospital integration.

Feasibility of data collection based on existing administrative databases

A feasibility assessment determined that 13 (27%) of 48 indicators could be measured using current data elements in the CIHI-NACRS or via NACRS plus linkage with other existing administrative databases, such as CIHI's DAD or death records. These 13 include some higher-priority indicators (i.e., those ranked as 1, 2, or 3 within a category) for ED operations, patient safety, and sepsis or infection. Nine (19%) additional indicators, including five of the six pediatric indicators, could be feasibly measured with enhanced data quality and completeness in existing NACRS data fields, for example, by improved coding and inclusion of a time stamp for key ED interventions (see Table 3). Capture of the remaining 26 indicators was not feasible using current provincial or national administrative databases; however, they could be obtained from new data elements in NACRS (occasionally in conjunction with improved data quality in existing data elements) or through other data sources (e.g., chart review).

Discussion

Using a nationally representative modified Delphi panel process, we developed a consensus on a set of 48 evidence-based indicators to measure and compare the quality of care in Canadian EDs. Indicators were prioritized within each of nine clinical or operational categories: patient satisfaction, ED operations, patient safety, pain management, pediatrics, cardiac conditions, respiratory conditions, stroke, and sepsis or infection. Although this number represents a substantial reduction from the 170 ED quality of care indicators identified from our systematic review, it is likely that even this more parsimonious list is still too large for all indicators to be routinely reported at either the health jurisdiction or hospital level. Our prioritization of indicators should provide further guidance with respect to the selection of routine quality measures by health policy and decision makers. For example, the top priority indicators by clinical and operational grouping are listed in Table 4.

Table 4: Top priority* indicators by ranking, by clinical and operational grouping

Clinical/operational category | Indicator and definition
ED operations | ED length of stay: time from first documented contact in the ED to the time of physical departure from the ED (overall and by CTAS)
Patient safety | Unplanned return visit to the ED resulting in admission within 48 h (or 72 h) of being seen and discharged from the ED, stratified by adult/pediatric
 | Patients with headache discharged home from the ED who were admitted to hospital with a subarachnoid hemorrhage in the subsequent 14 d
Pain management | Time to first dose of analgesic in all painful conditions requiring analgesia
Pediatrics | Patients (aged 0–28 d) with fever who received a full septic workup
 | Patients (aged 0–28 d) with fever who received broad-spectrum intravenous antibiotics
 | Patients (aged 3 mo–3 yr) with croup who were treated with steroids
Cardiac | Eligible patients with acute myocardial infarction who received thrombolytic therapy or percutaneous coronary intervention
Respiratory | Patients with asthma who received corticosteroids in the ED and at discharge (if discharged), stratified by age
Stroke | Eligible patients with acute stroke who received tissue plasminogen activator
Sepsis/infection | Time to antibiotics in patients with bacterial meningitis
 | Patients with severe sepsis or septic shock who were given broad-spectrum antibiotics within 4 h of ED arrival

CTAS = Canadian Emergency Department Triage and Acuity Scale; ED = emergency department.

*For pediatric indicators, the top three are listed to ensure that a priority indicator for newborns and infants is included. For patient safety and sepsis/infection indicators, the top two indicators are listed because their prioritization scores were separated by only 1 point.

These indicators have face validity in that they cover many of the most serious health care emergencies seen in EDs, such as acute myocardial infarction, stroke, sepsis, and asthma. Moreover, these are conditions for which therapies administered in the ED are known to reduce mortality and/or morbidity. Other indicators, such as those associated with appropriate pain management, represent common concerns among ED patients. Finally, ED operations indicators such as ED length of stay are important measures of ED efficiency and overcrowding, which are of particular concern to health administrators, policy makers, clinicians, and patients alike.

Our steering committee included clinicians, administrative experts, and health system decision makers, but some stakeholders may have been underrepresented. For example, small and rural EDs, members of trauma programs, and non-ED specialists were not well represented. Furthermore, other panelists and jurisdictions may have different priorities with regard to ED quality of care. It is likely that different indicators will be more or less relevant to different audiences; for example, the priority ED indicators for an ED manager may differ from those of a quality and safety officer in a health ministry. Our indicators reflect some of the most important illnesses seen in EDs that result in hospital admission (e.g., acute myocardial infarction, asthma, stroke, and infection); however, they do not cover several other important conditions, such as heart failure or major trauma. The approach taken in this exercise was to develop a set of indicators based on current evidence, selected from the wide array that have already been developed and used in hospitals and health system jurisdictions. Given the wide variability in the published evidence for indicators, we did not formally assess the quality of evidence but rather allowed experts to judge it for themselves, inviting them to bring their own informal knowledge and expertise to their assessments. This is an important step to enhance face validity among disparate audiences of clinicians, hospital administrators, and health system decision makers.

This process also identified several important gaps, including patient satisfaction, a critical ED quality indicator. Although the expert panel and steering committee reviewed many existing patient satisfaction indicators, all were discarded because they were either not deemed appropriate for ED care or were overly specific with respect to clinical care processes and not representative of the ED patient experience (see Appendix B, available online). Developing new indicators was beyond the scope of this exercise; however, this is an important area for future work. The committee made several recommendations regarding ED patient satisfaction: 1) that improved indicators be urgently developed; 2) that composite indicators (i.e., incorporating several critical elements such as communication and courtesy) were the most useful and actionable indicators; 3) that patient satisfaction indicators differentiate between the care provided by different ED health care practitioners (physicians, nurses, and other ED staff); and 4) that a common and improved methodology for collecting patient satisfaction data be developed to ensure that valid comparisons can be generated. Other gaps we noted include measures of a healthy workplace (e.g., absenteeism, sick time, occupational safety, nosocomial infections); patient mental health and addiction (given the significance of this patient issue in Canadian EDs); elder care (e.g., adverse events such as falls and development of delirium); and community-hospital integration (e.g., preventable ED visits by nursing home or long-term care home residents, linkage with community services such as home care at discharge from the ED, and avoidable ED visits generally).

Careful evaluation of ED care is becoming increasingly important as many jurisdictions are undertaking large-scale, complex, and system-level efforts to improve ED quality of care.10–13 These efforts focus largely or solely on reducing ED waiting times, which our results suggest are important quality measures in their own right. However, measurement of a broad array of priority indicators is important to determine whether improving overall timeliness has a halo effect (i.e., leads to improvement in other quality measures) or inadvertently worsens other aspects of ED care.

Our work provides guidance by selecting a subset of indicators that are easy to comprehend and widely accepted as relevant by those who will drive improvement. However, we acknowledge that these indicators relate to only one measurement domain and that administrators and decision makers, who are often tasked with measuring ED performance, require additional perspectives. For example, Ontario's balanced scorecard ED reports,26 produced annually, report on three other quadrants in addition to clinical use and outcomes: patient satisfaction, financial performance, and system integration and change. Nonetheless, we are confident that our national effort will help significantly improve existing clinical indicators while enabling better comparability between jurisdictions. Once the indicators are fully operationalized with technical definitions, we recommend implementing, at the local, regional, or national level, a set comprising one or more of the highest-priority indicators within each group, and we encourage longitudinal and cross-jurisdiction measurement where possible. Future research could further trim the number of indicators by evaluating the incremental value of adding each lower-ranked indicator to those already in each group. Indicators also require regular review and refinement to ensure that they keep pace with the evolution of evidence.

Acknowledgments

We sincerely thank the members of the National Steering Committee (in alphabetical order: Howard Abrams, Marc Afilalo, Shahin Ansari, Francois Belanger, Debra Carew, Tim Cooke, Cathy Davis, Christopher Dean, Jonathan F. Dreyer, Joseph Gebran, Michael Harvey, Brian R. Holroyd, Grant Innes, Leighanne MacKenzie, Morag Mochan, Joe Nemeth, Wesley B. Palatnick, Glen Perchie, Tom Rich, John Ross, Antonia S. Stang, James Stempien, Gary F. Teare, Patricia Walsh) and the National Expert Panel (in alphabetical order: Robert Abernethy, Francis Bowen, Candice Bryden, Michael J. Bullard, Ben Chan, Debbie Cotton, Cathy Davis, Paul Ellis, Debbie Gibson, Eric Grafstein, Jocelyn Gravel, Dante Morra, Sharon Ramagnano, Tom Rich, Kaveh G. Shojania, Patti Simonar, Douglas Sinclair, Jo-Ann Talbot, Bernard Unger, Alain Vadeboncoeur) for their contributions to this project. We also thank Sahba Eftekhary, Jenny Lam-McCulloch, and the Institute for Clinical Evaluative Sciences (ICES) Knowledge Transfer team for their contributions to this project.

References

  1. Pitts SR, Niska RW, Xu J, et al. National Hospital Ambulatory Medical Care Survey: 2006 emergency department summary, National Health Statistics Reports; no 7. Hyattsville, MD: National Center for Health Statistics; 2008.

  2. Carriere G. Use of hospital emergency room. Statistics Canada Health Reports 2004;16(1).

  3. McCaig LF, Stussman BJ. National Hospital Ambulatory Medical Care Survey: 1996 emergency department summary. Adv Data 1997;(293):1-20.

  4. Committee on the Future of Emergency Care in the United States Health System. Hospital-based emergency care: at the breaking point. Washington (DC): The National Academies Press; 2006.

  5. Bond K, Ospina MB, Blitz S, et al. Frequency, determinants and impact of overcrowding in emergency departments in Canada: a national survey. Healthc Q 2007;10:32-40.

  6. Fatovich DM, Nagree Y, Sprivulis P. Access block causes emergency department overcrowding and ambulance diversion in Perth, Western Australia. Emerg Med J 2005;22:351-4, doi:10.1136/emj.2004.018002.

  7. Locker T, Mason S, Wardrope J, et al. Targets and moving goal posts: changes in waiting times in a UK emergency department. Emerg Med J 2005;22:710-4, doi:10.1136/emj.2004.019042.

  8. Schoen C, Doty MM. Inequities in access to medical care in five countries: findings from the 2001 Commonwealth Fund International Health Policy Survey. Health Policy 2004;67:309-22, doi:10.1016/j.healthpol.2003.09.006.

  9. Schoen C, Osborn R, Huynh PT, et al. Primary care and health system performance: adults’ experiences in five countries. Health Aff (Millwood) 2004;Suppl Web Exclusive:W4-487, doi:10.1377/hlthaff.w4.487.

  10. Vancouver Coastal Health. Innovation reduces ER congestion in lower mainland (March, 2009). Available at: http://www.vch.ca/about_us/news/media_contacts/news_releases/innovation_reduces_er_congestion_in_lower_mainland (accessed January 4, 2010).

  11. Alberta Health Services. Edmonton EMS, Capital Health hopeful changes will improve ambulance access (April 12, 2007). Available at: http://www.capitalhealth.ca/NewsAndEvents/NewsReleases/2007/Ambulance_access.htm (accessed January 4, 2010).

  12. Ontario Ministry of Health and Long-term Care. Ontario wait times. Available at: http://www.health.gov.on.ca/transformation/wait_times/wait_mn.html (accessed January 4, 2010).

  13. Alberti G. Transforming emergency care in England. London: National Health Service; 2005. p. 1-44.

  14. Brimelow A, British Broadcasting Corporation News. A&E target ‘risks patient safety.’ Available at: http://news.bbc.co.uk/2/low/health/8580761.stm (accessed March 24, 2010).

  15. Institute of Medicine. Pathways to quality health care: performance measurement, accelerating improvement. Washington (DC): The National Academies Press; 2006.

  16. McGlynn EA. Introduction and overview of the conceptual framework for a national quality measurement and reporting system. Med Care 2003;41:I1-7, doi:10.1097/00005650-200301000-00001.

  17. McGlynn EA. An evidence-based national quality measurement and reporting system. Med Care 2003;41:I8-15, doi:10.1097/00005650-200301000-00003.

  18. Galvin RS, McGlynn EA. Using performance measurement to drive improvement: a road map for change. Med Care 2003;41:I48-60, doi:10.1097/00005650-200301001-00006.

  19. Marshall MN, Shekelle PG, Leatherman S, et al. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA 2000;283:1866-74, doi:10.1001/jama.283.14.1866.

  20. Cain EF. Improving emergency medical services for children through outcomes research: an interdisciplinary approach. Proceedings of a conference. Ambul Pediatr 2002;2:285-348, doi:10.1367/1539-4409(2002)002<0285:IEMSFC>2.0.CO;2.

  21. Lindsay P, Schull M, Bronskill S, et al. The development of indicators to measure the quality of clinical care in emergency departments following a modified Delphi approach. Acad Emerg Med 2002;9:1131-9, doi:10.1111/j.1553-2712.2002.tb01567.x.

  22. Guttmann A, Razzaq A, Lindsay P, et al. Development of measures of the quality of emergency department care for children using a structured panel process. Pediatrics 2006;118:114-23, doi:10.1542/peds.2005-3029.

  23. Rowe BH, Bond K, Ospina MB, et al. Data collection on patients in emergency departments in Canada. CJEM 2006;8:417-24.

  24. Welker JA, Huston M, McCue JD. Antibiotic timing and errors in diagnosing pneumonia. Arch Intern Med 2008;168:351-6, doi:10.1001/archinternmed.2007.84.

  25. Health Quality Council of Alberta. Alberta Quality Matrix for Health. Available at: http://www.hqca.ca/index.php?id=35 (accessed Feb 1, 2008).

  26. Hospital Report Research Collaborative. Hospital performance results 2008: emergency department care. Available at: http://www.hospitalreport.ca/downloads/2008/edc_2008.html (accessed Nov 20, 2009).