
A framework for classifying patient safety practices: results from an expert consensus process
  1. Sydney M Dy1,
  2. Stephanie L Taylor2,3,
  3. Lauren H Carr4,
  4. Robbie Foy5,
  5. Peter J Pronovost1,
  6. John Øvretveit6,
  7. Robert M Wachter4,
  8. Lisa V Rubenstein2,3,
  9. Susanne Hempel2,
  10. Kathryn M McDonald7,
  11. Paul G Shekelle2,3
  1. Johns Hopkins University, Baltimore, Maryland, USA
  2. RAND Corporation, Santa Monica, California, USA
  3. Veterans Administration, Greater Los Angeles, Los Angeles, California, USA
  4. University of California, San Francisco, San Francisco, California, USA
  5. University of Leeds, Leeds, UK
  6. The Karolinska Institute, Stockholm, Sweden
  7. Stanford University, Stanford, California, USA
  Correspondence to Sydney Dy, Johns Hopkins University, Rm 609, 624 N Broadway, Baltimore, MD 21090, USA; sdy@jhsph.edu

Abstract

Objective Development of a coherent literature evaluating patient safety practices has been hampered by the lack of an underlying conceptual framework. The authors describe issues and choices in describing and classifying diverse patient safety practices (PSPs).

Methods The authors developed a framework to classify PSPs by identifying and synthesising existing conceptual frameworks; evaluating the draft framework by asking a group of experts to use it to classify a diverse set of PSPs; and revising the framework through an expert-panel consensus process.

Results The 11 classification dimensions in the framework include: regulatory versus voluntary; setting; feasibility; individual activity versus organisational change; temporal (one-time vs repeated/long-term); pervasive versus targeted; common versus rare events; PSP maturity; degree of controversy/conflicting evidence; degree of behavioural change required for implementation; and sensitivity to context.

Conclusion This framework offers a way to classify and compare PSPs, and thereby to interpret the patient-safety literature. Further research is needed to develop understanding of these dimensions, how they evolve as the patient safety field matures, and their relative utilities in describing, evaluating and implementing PSPs.

  • Patient safety
  • patient safety practices
  • medical errors


Introduction

Patient safety, or avoiding or mitigating unintended injuries from the delivery of healthcare, is an important focus of quality improvement efforts.1 To Err is Human, the report from the Institute of Medicine (IOM) that brought attention to the serious harm occurring to patients as a result of patient safety hazards throughout the healthcare system, concluded that the key solution to reducing harm lay in systems-based interventions rather than in enhancing clinical interventions or technology.2 Solutions to address safety problems, or patient safety practices (PSPs), have been defined by the Agency for Healthcare Research and Quality (AHRQ) as interventions, strategies or approaches intended to prevent or mitigate unintended consequences of the delivery of healthcare and to improve the safety of healthcare for patients.3

The literature on PSPs has expanded rapidly since the IOM report, and many PSPs are widely promoted or required by national organisations such as the National Quality Forum and The Joint Commission. However, interpreting the published literature on PSPs has been challenging, in part because PSP studies are more heterogeneous than traditional clinical studies (eg, drug evaluations). PSP interventions often have multiple components and may be applied at different levels, often at the system level rather than at the level of an individual patient or provider. PSP evaluations are also frequently sensitive to the context of the intervention, and the generalisability of their results is often undermined by the lack of a commonly accepted and theoretically informed framework within which to describe salient characteristics such as settings, participants, targeted clinical behaviours and interventions.4

We therefore developed and evaluated a framework for describing and classifying PSPs. We sought to describe the characteristics on which PSPs differ that are important when interpreting the evidence about their effectiveness. This framework may also serve as a foundation for further understanding of the role of context in evaluating the effectiveness, uptake and dissemination of diverse types of PSPs.

Methods

Defining the framework for patient safety practices

The methods for developing, testing and refining the framework are shown in box 1. As a first step, we asked team members to identify existing conceptual frameworks used to classify PSPs and more general quality-improvement interventions (table 1).5–11 This provided an initial approach to developing a framework within which to place and describe the key descriptive features of PSPs. Most of these frameworks overlapped in some areas, such as the broad categories of organisational levels or of interventions potentially targeted. One key element across these frameworks was a description of the level or levels involved in implementing an intervention: larger system or environment (eg, regulatory structures); organisation (eg, hospital or hospital network); team (eg, a clinical team); and individual (eg, healthcare professionals). The frameworks also differed in some classification dimensions, such as whether the domains were organised by the target of the intervention (eg, culture or communication), the type of intervention or the level (eg, organisational or individual).

Box 1

Steps for developing, testing and refining framework

  • Step 1. Review existing patient safety and quality-improvement models to derive initial overarching framework.

  • Step 2. Draw upon existing PSP frameworks to identify key dimensions of PSPs that contribute to adequately describing their attributes.

  • Step 3. Integrate these dimensions into the overarching framework.

  • Step 4. Ground the framework with five example PSPs.

  • Step 5. Technical Expert Panel (TEP) selects PSPs within framework to gauge its utility and acceptability.

  • Step 6. Refine framework through TEP consensus meeting to discuss additional diversity dimensions.

Table 1

Review of existing conceptual frameworks

Based on a review of these frameworks, we developed an initial framework for classifying PSPs on three dimensions: the level at which the PSP is applied (individual, care team, organisational or national); the primary setting(s) (hospital, nursing home or ambulatory) in which the PSP is applied; and whether the PSP is regulatory or voluntary.
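As a purely illustrative sketch, this initial three-dimension framework can be thought of as a simple record with one field per dimension. The Python example below classifies a hypothetical hospital hand-hygiene improvement programme (an organisational-level PSP, as discussed in the Results); the field names and the regulatory/voluntary value are assumptions made for illustration, not part of the framework itself.

```python
from dataclasses import dataclass

# Illustrative sketch only: a record with one field per dimension of the
# initial draft framework (level, setting, regulatory vs voluntary).
@dataclass
class DraftPSPClassification:
    name: str
    level: str          # 'individual', 'care team', 'organisational' or 'national'
    settings: list      # eg ['hospital', 'nursing home', 'ambulatory']
    regulatory: bool    # True if required by regulation, False if voluntary

# Hypothetical example: a hospital hand-hygiene improvement programme,
# treated as an organisational-level PSP (see Results); the voluntary
# status and hospital-only setting are assumptions for illustration.
hand_hygiene = DraftPSPClassification(
    name="Hand-hygiene improvement programme",
    level="organisational",
    settings=["hospital"],
    regulatory=False,
)
print(hand_hygiene)
```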

In order to identify an initial diverse set of PSPs for development and testing of the framework, we first identified high-impact and diverse safety problems. We chose the two most policy-relevant and well-defined lists of problems: the 2006 National Quality Forum (NQF) Never Events12 and the CMS (Centers for Medicare & Medicaid Services) Never Events List of Hospital-Acquired Conditions for Fiscal Year 2009.13 Examples of the NQF Never Events include surgical events (eg, wrong procedure), product or device events (eg, use of contaminated drugs), patient protection events (eg, infant discharged to the wrong person), care management events (eg, medication errors) and criminal actions (eg, patient abduction). Additional examples in the CMS list include safety issues such as vascular catheter-associated infections and deep-vein thrombosis following certain orthopaedic procedures.

We then selected PSPs designed to address these safety problems from a variety of sources, including the AHRQ Evidence-based Practice Report, Making Health Care Safer,14 the AHRQ Patient Safety Network,8 the National Quality Forum Safe Practices 2009 update15 and the Joint Commission Patient Safety Goals.16 We also solicited input from several relevant organisations for any key PSPs not on this list, including AHRQ and the Institute for Healthcare Improvement (IHI).

Technical expert panel survey and consensus meeting

As part of the larger project, we identified 22 members for a Technical Expert Panel (TEP) with expertise in patient safety implementation and/or evaluation methodology, as well as front-line healthcare delivery experts (eg, chief medical officers, patient safety directors). We asked the TEP to complete an online survey17 using the framework to select examples of a diverse set of PSPs. Details of the survey results are not provided here, since selection of PSPs was not the focus of this paper, but are available upon request. We helped frame ‘diversity’ by organising the list of 66 PSPs to select from according to two domains: (1) the level of practice to which it is applied (such as national/regional, organisational, care team or individual) and (2) the setting to which it is most commonly applied (such as hospital, clinic or nursing home). We noted that the diversity could be within or across domains.

We asked each TEP member to select among PSPs organised using the draft framework. They could select two PSPs within each level/setting domain combination where there were more than two choices, and could also add any key PSPs that were not included. We then held an in-person TEP expert consensus process meeting which included discussion of the usefulness and applicability of the dimensions in the framework, and suggestions for additional key dimensions to consider. In other words, the TEP role in evaluating the framework was not a quantitative voting process, but an evaluation of the usability of the framework followed by a consensus process. We synthesised the results of these discussions to develop a final version of the framework.

Results

The final framework includes 11 classification dimensions, as shown in table 2. To demonstrate how these dimensions can be used to describe and classify PSPs, their application to a set of five example PSPs is shown in table 3 (since the selection of PSPs was not the focus of this paper, the full survey results are not included here but are available upon request). The expert discussion and reasoning behind the modification of the framework and selection and development of several of the included dimensions are summarised below.

Table 2

Classification dimensions for patient safety practices (PSPs)

Table 3

Classification dimensions, as applied to describe five examples

In discussion of the overall dimension of level (national/regional, organisation, care team and individual), the TEP judged that these levels were too difficult to define, that many interventions crossed multiple levels, and that a PSP defined at a higher level often addressed behaviour at a more focused level. For example, handwashing is an individual action, but PSPs for improving hand hygiene operate at an organisational level. We therefore redefined this dimension as the target of the PSP and simplified it to two levels: individual behaviour compared with organisational change.

The TEP discussed several other potential dimensions for classifying PSPs, focusing on three issues: importance, the status of current evidence and the context in which the PSP is implemented. Panel members discussed various elements of the potential dimension of importance in detail, including importance of the outcome that a PSP is designed to address; importance as a broader feature of patient safety in institutions that can affect multiple outcomes (eg, culture); and importance in terms of its impact on a particular outcome. Importance could also relate to relevance to the institution or to current external regulatory pressures. The importance of a patient safety outcome—such as wrong-site surgery—may also be correlated with its rarity in any particular setting, making it more difficult to evaluate an intervention. As a result of this discussion, the concept of importance was split into several dimensions, including regulatory versus voluntary, and whether the relevant safety issue is pervasive in a setting or targeted to specific patients.

Since the evidence base for PSPs is often controversial or conflicting, members of the panel suggested this as a possible dimension to include. Panel members who had conducted systematic reviews in this area advised that, at this point in the development of the PSP literature, controversy may be the stronger component of this dimension: there is currently very little evidence to support the use of most PSPs, so there would often be little practical variation on the conflicting-evidence component. There are frequently only a few poorly controlled or descriptive studies and/or limited meta-analyses, often with limited and low-quality data, and with little information on context, adaptation over time or underlying theory about the impact of context on effectiveness. Evaluations showing that PSPs are ineffective may be less frequently published or less widely known than those showing effectiveness. Even when a given PSP has evidence of effectiveness in one organisational context, evidence indicating a lack of effectiveness outside that specific context may be missing.

The panel also discussed the implications of context sensitivity as a classification dimension. As the above illustration suggests, certain types of PSP may be particularly context-sensitive; that is, they may be effective in one context but not in others. A particularly complex PSP, such as the Universal Protocol, may have different effects in a teaching compared with a non-teaching hospital. The panel discussed that specific aspects of the PSP implementation process, such as measurement and feedback, might be handled differently in each context, thereby modifying outcomes; other PSPs might be relatively insensitive to contextual factors such as human or organisational factors. For example, the panel suggested that the use of a technology such as antibiotic-impregnated catheters would be expected to be less sensitive to contextual factors than a complex intervention such as team-based care. Nevertheless, the panel agreed that context is a broad and poorly defined term, and the field lacks agreement on the categories of context that should be evaluated in a PSP.

Discussion

By synthesising existing conceptual frameworks and conducting an expert panel consensus process, we developed a framework for describing the dimensions of PSPs. This framework of 11 key dimensions describes many elements important for classifying PSPs and interpreting the literature, and provides a foundation for exploring the issues of context sensitivity and the diversity of implementation and evaluation approaches. For example, this framework identifies key differences between PSPs that are commonly grouped together, such as the Universal Protocol and practices to reduce catheter-related bloodstream infections (CRBI). Both target the hospital setting, are activities conducted by individual providers, are one-time events for each patient, are relatively new PSPs that require substantial amounts of behaviour change, and have checklists as a key feature of the intervention. However, the Universal Protocol is currently required by regulatory authorities, while CRBI practices are not; and the safety issue the Universal Protocol is designed to prevent is very rare (an estimated 1300–2700 wrong-side/wrong-site, wrong-procedure and wrong-patient surgeries occur annually in the USA),19 while CRBI are much more common (an estimated 41 000 episodes in US hospitals in 2009).20 These distinctions may be important in explaining differences in uptake and effectiveness between published reports about the Universal Protocol and CRBI interventions.
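To show how the full set of 11 dimensions supports this kind of side-by-side comparison, the sketch below (in Python) encodes the two example PSPs as simple records and lists the dimensions on which they differ. Only the values stated in the paragraph above are taken from the text; the field names and the values marked 'assumed' are illustrative assumptions, not entries from table 3.

```python
# Illustrative sketch: the Universal Protocol and CRBI-prevention practices,
# coded against the 11 classification dimensions. Values marked 'assumed'
# are illustrative guesses rather than entries from table 3.
DIMENSIONS = [
    "regulatory_vs_voluntary", "setting", "feasibility",
    "individual_vs_organisational", "temporal", "pervasive_vs_targeted",
    "common_vs_rare_events", "maturity", "controversy",
    "behaviour_change_required", "context_sensitivity",
]

universal_protocol = {
    "regulatory_vs_voluntary": "regulatory",
    "setting": "hospital",
    "feasibility": "high (assumed)",
    "individual_vs_organisational": "individual activity",
    "temporal": "one-time per patient",
    "pervasive_vs_targeted": "targeted (assumed)",
    "common_vs_rare_events": "rare (wrong-site surgery)",
    "maturity": "relatively new",
    "controversy": "some conflicting evidence (assumed)",
    "behaviour_change_required": "substantial",
    "context_sensitivity": "high (assumed)",
}

# CRBI prevention shares most codings in this sketch but differs on the
# regulatory requirement and the frequency of the targeted safety event.
crbi_prevention = dict(universal_protocol,
                       regulatory_vs_voluntary="voluntary",
                       common_vs_rare_events="common (CRBI)")

# Dimensions on which the two PSPs differ under this illustrative coding.
differences = [d for d in DIMENSIONS
               if universal_protocol[d] != crbi_prevention[d]]
print(differences)  # ['regulatory_vs_voluntary', 'common_vs_rare_events']
```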

The framework potentially offers a rational basis for integrating PSPs into institutional and regulatory safety programmes or combining PSPs so that they are complementary rather than duplicative and so that the evidence supporting the PSP is transparent. The framework is distinct from frameworks used for quality-improvement interventions in addressing the different types of interventions included in the PSP literature. One objection to developing classification systems in quality and safety is that they can oversimplify and stifle innovation—that these types of interventions are too complex and dynamic for classification. We believe that this framework serves as a way to appreciate that complexity by understanding key dimensions that apply to patient safety practices, and understanding where further development might be useful.

For this framework, the classification exercise stimulated the experts to develop new hypotheses and insights, and we hope that further application of the framework will help generate more ideas in the future. This framework may also help in defining key areas where further research is needed. For example, there are advantages and disadvantages of regulatory compared with voluntary approaches to patient safety. Regulatory approaches may not allow for flexibility in needs and implementation across settings, and may define a PSP too precisely too early in the process, when innovation is needed so that the PSP can evolve. However, when voluntary, a PSP may be implemented only rarely even when there is good evidence to support it, particularly as there are often disincentives and competing demands.

This work clearly represents an initial stage that needs further study and testing. The project team and technical expert panel represented a depth of understanding of PSP research, implementation, health policy issues and the PSP literature, as well as a variety of relevant perspectives. However, the framework development process had many methodological challenges, including the lack of clear definitions for many of the dimensions and potential overlap between them, which may have led members of the technical expert panel to interpret them differently in the survey and consensus process. Much of the literature on PSPs is in its early stages. For example, there are fewer rigorous evaluations of effectiveness compared with the field of quality improvement, where large systematic reviews on interventions such as case management have been able to describe the impact of variation in the intervention or context on the findings for effectiveness.21 However, both the PSP and quality-improvement fields are still at relatively early stages with regard to the development and application of theory in understanding problems, devising solutions and evaluation.22 Many dimensions of the framework remain ill-defined or underexplored in the literature, and many PSPs may not be defined or developed sufficiently, or exist in too many variations (such as the variety of approaches available for falls prevention), to be easily classified by these dimensions.

In conclusion, this PSP framework offers a way of categorising a diverse range of PSPs. This is a necessary initial step in deriving a common language within which to describe the salient features of PSPs and to develop appropriate evaluation methods to interpret their effects. It forms a basis for future research that can evaluate the relative importance of these dimensions by comparing the implementation and effectiveness of PSPs that differ on these dimensions within a setting or in similar studies. There will be scope for work to explore whether this framework can better accommodate other dimensions of PSPs, including policy issues (eg, public reporting), methodological issues (eg, ease of measurement of PSP implementation or outcomes) and PSP complexity (eg, how to define and measure its variation between settings). Moreover, given the rapidly maturing science supporting PSPs, it will be important to update this framework. Understanding differences and similarities among PSPs could improve the integration of implementation efforts requiring multiple PSPs within institutional safety and regulatory programmes, as well as the development of a more coherent and cumulative evidence base in patient safety.

Acknowledgments

The technical expert panel included AS Adams, DW Bates, L Bickman, C Brown, P Carayon, L Donaldson, N Duan, DO Farley, T Greenhalgh, J Haughom, ET Lake, R Lilford, KN Lohr, GS Meyer, M Miller, D Neuhauser, G Ryan, S Saint, K Shojania, SM Shortell, DP Stevens and K Walshe.


Footnotes

  • Linked articles 047035, 047993, 049379.

  • The authors of this paper are responsible for its content. Statements in this paper should not be construed as endorsement by the Agency for Healthcare Research and Quality or the US Department of Health and Human Services.

  • Funding The research reported here was supported under Contract No. HHSA-290-2009-10001C from the Agency for Healthcare Research and Quality, US Department of Health and Human Services.

  • Provenance and peer review Not commissioned; externally peer reviewed.
