Decision boxes for clinicians to support evidence-based practice and shared decision making: the user experience

Abstract

Background

This project engages patients and physicians in the development of Decision Boxes, short clinical topic summaries covering medical questions that have no single best answer. Decision Boxes aim to prepare the clinician to communicate the risks and benefits of the available options to the patient so they can make an informed decision together.

Methods

Seven researchers (including four practicing family physicians) selected 10 clinical topics relevant to primary care practice through a Delphi survey. We then developed two one-page prototypes on two of these topics: prostate cancer screening with the prostate-specific antigen test, and prenatal screening for trisomy 21 with the serum integrated test. We presented the prototypes to purposeful samples of family physicians distributed in two focus groups and of patients distributed in four focus groups. We used the User Experience Honeycomb to explore barriers and facilitators to the communication design used in Decision Boxes. All discussions were transcribed, and three researchers conducted a thematic content analysis of the transcripts. The coding scheme was first developed from the Honeycomb's seven facets (valuable, usable, credible, useful, desirable, accessible, and findable) and then expanded with new themes suggested by the data. The prototypes were modified in light of our findings.

Results

Three rounds were necessary for a majority of researchers to select 10 clinical topics. Fifteen physicians and 33 patients participated in the focus groups. Following the analyses, three sections were added to the Decision Boxes: an introduction, a patient counseling section, and references. The information was spread over two pages to try to make the Decision Boxes less busy and to improve users' first impression. To try to improve credibility, we gave more visibility to the research institutions involved in development. A statement of the boxes' purpose and a flow chart representing the shared decision-making process were added with the intent of clarifying the tool's purpose. Information about risks and benefits according to risk levels was added to the Decision Boxes to try to ease the adaptation of the information to individual patients.

Conclusion

Results will guide the development of the eight remaining Decision Boxes. A future study will evaluate the effect of Decision Boxes on the integration of evidence-based and shared decision making principles in clinical practice.

Introduction

Resources for finding medical evidence have evolved greatly in the past few years. Searches take less time and results are more relevant than ever before. For many clinical questions, however, even the best available evidence does not always produce a single best answer. In some cases, the scientific evidence about outcomes is insufficient; in others, proof of benefit is more or less counter-balanced by proof of harm. In 2007, Clinical Evidence classified 51% of treatments as having insufficient evidence and 7% of treatments as tradeoffs between benefits and harms[1].

Where two or more medically acceptable options exist, the choice should depend on the patient’s circumstances, values, and preferences[2]. Values and preferences refer to patients’ perspectives, beliefs, expectations, and goals for life and health, and more broadly to the processes patients use to consider the options and their relative benefits, harms, costs, and inconveniences[3]. To make an informed choice, patients thus need access to the best available information, presented in a format that makes it easy for them to make a decision consistent with their values and preferences[4].

Since 1992, the field of shared decision making has mainly focused on developing and evaluating patient decision aids: interventions designed to translate information more directly to patients and to help them better clarify their values[5]. Most patient decision aids are designed so that patients can work through them on their own. Studies show that printed, electronic, and audiovisual patient decision aids help increase patients' knowledge, feeling of being adequately informed, and participation[5], and reduce the overuse of screening or treatment options not clearly associated with health benefits for all[6]. The research also shows, however, that while patients want information about their medical condition and treatment, they do not necessarily wish to be responsible for deciding on treatment[7]. In other words, patient decision aids can only go so far: patients want their healthcare provider's input on their care. Other than training and continuing medical education programs for healthcare professionals[8], relatively little effort has focused on how to foster a culture in which clinicians embrace shared decision making as a clinical skill[9].

In this article, we present a tool designed for clinicians that aims to improve the participation of both patient and clinician in the decision-making process. This tool, called the Decision Box, is intended to help the clinician recognize that a decision needs to be shared with the patient, to prepare the clinician to communicate evidence-based information to the patient, and to assist the clinician in eliciting the patient's values and preferences regarding the decision to be made. The Decision Box is a short clinical summary[10, 11] that integrates the best available evidence from studies and syntheses to provide quantitative information on management options. It is specialized to cover medical questions that have no single best answer. More than a summary, though, it is framed to help the user weigh the risks and benefits of all options in light of the patient's individual health status. It also offers guidance on the shared decision-making process.

Our objective was to develop Decision Box prototypes, test them with patients and clinicians, and try to improve them by addressing the barriers to their communication design identified during user testing. More specifically, this paper presents the process used to select clinical topics for 10 Decision Boxes, the evaluation of users' experience of the tool, and the pre-test of a questionnaire that we will use in a future implementation study.

Methods

This project was approved by the research ethics committees of the Centre de Recherche du Centre Hospitalier Universitaire de Quebec and McGill University.

Selection of clinical topics for the decision boxes

Using a Delphi survey described elsewhere[12], a panel of seven of the researchers involved in this project (including four practicing family physicians) selected 10 clinical topics they perceived as relevant to primary care practice. Because of our interest in the translation of genetic innovations to the population, we initially instructed the panelists to select two genomic topics among the 10, but at the second round we changed our instructions and asked them to select three instead. The panel was instructed to select topics that did not have a single best choice: that is, the decision addressed should involve scientific uncertainty about the outcome or about the balance of benefits and harms[5]. Panelists were asked to propose additional topics after the first round of the survey. At each round, we retained the topics that were chosen by all panelists and removed the topics that were chosen by three panelists or fewer. The survey was stopped after three rounds.
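For readers who want the per-round filtering rule stated operationally, the sketch below expresses it in Python. The thresholds come from the text; the topic names and vote counts are hypothetical.

```python
# Minimal sketch of the per-round Delphi filtering rule described above.
# The thresholds come from the text (retain topics chosen by all seven
# panelists; remove topics chosen by three or fewer); the topic names and
# vote counts below are hypothetical.

PANEL_SIZE = 7                  # seven researchers, including four family physicians
RETAIN_THRESHOLD = PANEL_SIZE   # chosen by all panelists -> retained
REMOVE_THRESHOLD = 3            # chosen by three panelists or fewer -> removed


def filter_round(votes: dict[str, int]) -> tuple[list[str], list[str], list[str]]:
    """Split topics into retained, removed, and carried-forward lists for one round."""
    retained = [topic for topic, n in votes.items() if n >= RETAIN_THRESHOLD]
    removed = [topic for topic, n in votes.items() if n <= REMOVE_THRESHOLD]
    carried = [topic for topic in votes if topic not in retained and topic not in removed]
    return retained, removed, carried


if __name__ == "__main__":
    # Hypothetical first-round tally, for illustration only.
    round1 = {
        "Prostate cancer screening (PSA)": 7,
        "Colorectal cancer screening (FOBT)": 7,
        "Hypothetical topic A": 5,
        "Hypothetical topic B": 2,
    }
    retained, removed, carried = filter_round(round1)
    print(retained)  # ['Prostate cancer screening (PSA)', 'Colorectal cancer screening (FOBT)']
    print(removed)   # ['Hypothetical topic B']
    print(carried)   # ['Hypothetical topic A']
```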

Development of the decision box prototypes

To develop the Decision Box prototypes, we chose two topics that were of greater interest to clinicians (i.e., selected early in the Delphi), that targeted different populations to maximize the diversity of participating patients, and that would be easier to explain because patients were generally familiar with them. We thus chose ‘Prenatal screening for the detection of trisomy 21’, and then had a choice between ‘Colorectal cancer screening with fecal occult blood test’ and ‘Prostate cancer screening with the prostate-specific antigen (PSA) test’. We chose PSA because of the controversy surrounding this test.

We planned the documents so they would respond to the learning objectives of the training program (Table 1). We developed a first version of the Decision Box on PSA testing, based on the communication design of the Drug Facts boxes[13] and on research on risk communication (described in[12]). At this stage, the document consisted of a simple two-column black-and-white table with text and colour graphics. It was presented to the members of the Canada Research Chair in Implementation of Shared Decision Making, in Quebec (Canada), who gave their opinion on the elements that should be removed and those that should be kept, together with general comments on the tool. We modified this version and submitted it to a graphic designer who chose the graphical display. We then produced the second prototype, on prenatal screening, directly with the graphic designer. Presentations of the tools to experts in shared decision making (SDM), knowledge translation, and genomics at scientific meetings led to four more versions before the design proved satisfactory.

Table 1 Specifications for the decision box independent learning program

Each prototype was written in both French and English and divided into three sections (Figures 1 and 2). At the top, the ‘Presentation of the intervention to patients’ section described the intervention for which a decision was required, in straightforward lay language. This section broadly described the accuracy of the test and the population for which the intervention might be appropriate, and clarified the decision to be made. The ‘Study Findings’ section presented the results of the single study that we considered most relevant for presenting the population, or ‘average’, risks of benefits and harms of the intervention and that offered the highest strength of evidence. This section first described the study (the study population and the length of follow-up) in a single sentence and then combined narration, graphics, and numbers to outline the study's findings about the intervention's benefits and harms. We used a few additional studies to present important elements that were not covered by the main study (for example, the proportion of miscarriages following amniocentesis), but to simplify the document we neither described these studies nor referenced them. At the bottom, a ‘Confidence In The Results’ section gave publication information about the main study on which the ‘Study Findings’ section was based and made a statement about the study's quality and its consistency with other published studies on the same topic. This assessment was adapted from the methodology of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group[14].

Figure 1. The Decision Box prototype on prenatal screening for trisomy 21 (BEFORE evaluation).

Figure 2. The Decision Box prototype on prostate cancer screening (BEFORE evaluation).

A graphic designer produced the prototypes as one-page color documents. The main title of each prototype was ‘Decision Box’; the subtitle stated the intervention. The endorsement of each prototype by Laval University and the date of the last update were placed in small letters at the bottom.

Users’ experience of the decision box prototypes

Participant inclusion criteria and the recruitment process

To explore users' perceptions of the communication design used in the prototypes and to seek suggestions for improvement, we conducted two focus groups with family physicians and four focus groups with patients. Using their professional networks, two members of the research team (ML and RG) recruited practicing family physicians from the Family Medicine Units in Quebec and Montreal, Canada. Patients were recruited from these two sites and met criteria for participation if they were men between 45 and 75 years old, or women between 20 and 40 years old who wanted to have a child, were pregnant, or were already mothers. Participants received monetary compensation for their participation. The clinics' support staff distributed information sheets about the study to eligible patients. Interested patients then contacted the research team to participate.

Focus groups

We presented the two Decision Box prototypes to two groups of family physicians: first, a French-speaking group and second, an English-speaking group. We presented the same prototypes to a purposeful sample of patients who agreed to participate and who qualified for the study (in other words, we presented the prototype on prostate cancer screening to male volunteers and the prototype on prenatal screening to female volunteers). We used a maximum variation strategy to populate the samples, which we segregated by mother tongue. In this way, we constituted four focus groups of patients: French-speaking men, French-speaking women, English-speaking men, and English-speaking women.

We conducted all interviews at the clinics where participants were recruited. We used a semi-structured interview guide, based on Peter Morville's User Experience Honeycomb[15], to explore participants' experience of the communication design used in the Decision Boxes. More precisely, the physician interviews explored the tool's value in preparing them to communicate scientific information to patients and in helping patients make informed, value-based decisions. The patient interviews explored how patients felt about their physician reading the Decision Box before the clinical encounter to better prepare for their visit, and asked whether the Box contained all the information they needed to make a decision.

The focus groups were moderated by two experienced interviewers: one for the French-speaking groups and one for the English-speaking groups. One moderator held a master’s degree in anthropology and the other a master’s degree in library and information science. Both were research professionals at the time of the study. Two observers (AG and either PP, ML or RG) took notes on the process and content of the discussions. One observer (AG) was a postdoctoral fellow, one was a researcher (PP), and the others (RG and ML) were family physicians and colleagues of the participating physicians. No physician was present during interviews of one of his/her own patients. One observer (AG) was present at all focus group discussions, to ensure consistency in the approach. All discussions were audiotaped and professionally transcribed.

Questionnaires

At the beginning of each focus group, we collected demographic data from all of the participants and questioned them about their health history regarding the topic addressed in the Decision Box. After the focus group, the patients were administered the Decisional Conflict Scale[16]. Following the focus groups, family physicians pre-tested a self-administered questionnaire to be used in a larger study on the implementation of the Decision Boxes (Additional file 1). This questionnaire evaluated the respondents' perceptions of the Decision Box on PSA testing for prostate cancer screening. The questionnaire measured physicians' interest in the clinical topic using a visual analog scale that ranged from 0 (no interest) to 10 (deep interest). It also comprised the information sub-scale of the Decisional Conflict Scale[16], the Information Assessment Method[17], a scale based on the Theory of Planned Behaviour that evaluated physicians' intention to use in their practice what they had learned from the Decision Box to help their patients make an informed decision[18], and a scale based on the Technology Acceptance Model (TAM2) that evaluated physicians' perceptions of the usefulness and ease of use of the Decision Box[19].

Analysis

One researcher (AG) and two research professionals performed a thematic qualitative data analysis of the content of focus group discussions following a hybrid deductive/inductive approach[20]. This analysis identified barriers and facilitators to the participants’ experience with the prototypes. The deductive analysis searched for attributes related to the seven facets of the User Experience Honeycomb[15]: valuable, usable, credible, useful, desirable, accessible, and findable. The inductive analysis integrated new themes mentioned by participants. First, to assess whether the User Experience Honeycomb applied and to explore possible sub-themes, the researcher and the two research professionals separately went through the same portion of one of the focus group transcripts and noted any attributes related to the Honeycomb. The three coders then compared their results and came to a consensus on the themes and sub-themes mentioned in this transcript sample. Next, they noted these themes and sub-themes in a manual of codes, labelling and defining them as well. The transcripts were entered as project documents into specialized software (NVivo 9, QSR International, Cambridge, MA, USA), and the codes developed for the manual were entered as nodes. One research professional then applied these codes to the English interviews while the other applied them to the French interviews to identify meaningful units of text. The first author (AG) then read all transcripts and reviewed the codes applied by the research professionals to the six interviews to ensure completeness and appropriateness of the code manual, and consistency of approach. Again, any modifications to the predetermined code manual were discussed among the three coders until consensus was reached.

We modified the Decision Boxes to take into account users’ comments. We also performed descriptive statistical analyses of the answers to the questionnaire.

Results

Selection of clinical topics

For the genomic topics, two rounds were necessary for a majority of panelists to select the same three clinical topics (Table 2). For the other topics, the final selection was achieved after the third round. In round one, one panelist selected a single genomic topic instead of the two instructed, and nine of the other topics instead of eight. In the second round, one panelist selected six topics instead of the instructed five. Only prostate cancer screening and colorectal cancer screening with the fecal occult blood test were unanimously selected during the first round of the survey.

Table 2 Clinical topics and number of panelists selecting each topic at each survey round (*: indicates the round at which the topic was selected)

Users’ experience of the decision box prototypes

Participants’ characteristics

Eighteen of 35 (51%) eligible physicians agreed to participate, and 15 attended the interviews—all participants stayed until the end. All physicians who did not agree to participate stated they were not free on the date of the focus group. Seven physicians participated in the French group interview and eight in the English interview; 73% were women. Most were between 30 and 60 years old (median = 40 years old) and had practiced medicine for 5 to 37 years (median = 13 years). The four groups of patients totalled 33 participants. Within groups of women and groups of men, levels of education, employment status, and age were similar, regardless of language; but these characteristics differed between groups of men and women (Table 3).

Table 3 Characteristics of participating patients in each focus group

Among the 17 women participants, 10 had been pregnant at least once and none had a child with trisomy 21. Seven had undergone prenatal testing for trisomy 21, and none received a positive result. One had an amniocentesis, and her fetus was not diagnosed with trisomy 21. Of the participating women, 10 had received information on prenatal screening before the interview and mentioned several sources of information (sometimes more than one source): eight mentioned their doctor, their medical clinic, or the hospital; three mentioned the internet; and some mentioned a university course, a specialised private prenatal clinic, friends and family, and books. Of the 16 men who participated, nine had been screened for prostate cancer at least once, four had received a positive result following screening, and three had had a biopsy. In two of the three, the biopsy had revealed prostate cancer, for which they were treated. Eight of the male participants had received information on prostate cancer screening before the interview. As sources of information on the cancer, seven mentioned their doctor, their medical clinic, or the hospital; two mentioned an advertisement or television; one mentioned medical publications; and one mentioned family members.

Focus groups

The focus groups with physicians lasted about one hour and 45 minutes, and those with patients lasted about one hour. Patients' and physicians' perceptions of the Decision Boxes mostly concurred, although physicians discussed the data more, whereas patients discussed the shared decision-making process more.

The User Experience Honeycomb used to develop the interview guide describes seven facets (or qualities) of an individual's experience of a product. During coding, we felt that the user's experience of a Decision Box would be better represented as a process, because the facets at play change from the time clinicians access the Decision Box to the time they use it in their practice. Following our analysis, we propose eight successive steps in users' experience of an evidence-based shared decision-making support tool over time, and we describe which facets of the users' experience are at play at each step (Figure 3). The steps are sequential: for the document to accomplish its purpose (here, to assist the physician in sharing a decision with the patient), the user must go through each step in turn.

Figure 3. Steps of the users' experience of an evidence-based shared decision-making support tool over time. The correspondence between each step and the facets of the model used to develop the interview guide (i.e., Morville's User Experience Honeycomb) is shown in parentheses.

When analyzing the interviews, we coded barriers and facilitators to users' experience of the Decision Boxes at each step. These results are detailed in Table 4, sections a to g, and a few more general observations can be drawn from them.

Accessing the information

Participants proposed multiple communication channels to facilitate access to the Decision Boxes. Even though we specified that the Decision Boxes were developed to be used primarily before the clinical encounter, some participants still proposed using them during the encounter and suggested that a printed format or a mobile application might then be more useful than the Internet.

Integrating the information

Most of the discussion in the interviews concerned the value of the communication design for integrating the information. Understandability, the format of the graphics, and missing information were the factors most often reported as barriers to the ‘Finding and understanding the information’ step (Table 4, section c). Comments on the understandability of the information mostly referred to how the presentation allowed the user to pinpoint the risks or benefits of the presented options. Participants generally found the proposed bar charts complex, and many suggested not using any, or using alternative representations such as flow charts or icon arrays (‘little men’). Among the information participants found to be missing, they reported that alternative screening tests should have been described. Concerning the format of numbers, half of the groups mentioned that presenting percentages would be helpful. Some barriers to trusting the information were reported more often for the prostate cancer prototype, namely subjectivity and information differing from what participants already knew.

Table 4 Factors related to users’ experience of the Decision Box and frequency of interviews where they were mentioned (Men = participant from the men’s group; Wo = participant from the women’s group; MD = participant from the physicians’ group; FR = French-speaking group; En = English-speaking group)

Using the information in practice

Concerning the tool's usefulness in the clinical setting, there was a general perception that synthesized and simplified information facilitated the communication of information to patients (Table 4, section e). Yet five groups reported difficulty applying the information to individual patients as a barrier to communicating it. There was general agreement that the lack of non-scientific information was a barrier to using the Decision Box to elicit patient values (Table 4, section f) and that guidance on making a decision was lacking. Specific aspects of the clinical topics covered in the prototypes (PSA or prenatal testing) were reported to influence the sharing of decisions with patients. For the PSA test, the lack of evidence on which to base a decision and the perception that risks outweighed benefits were perceived as barriers to sharing the decision with patients (Table 4, section g). Having appreciated using the Decision Box was most often reported by physicians as a facilitator to sharing the Decision Box with peers.

Prototype modification

Following the analyses, we modified the Decision Box prototypes to try to take the participants' perceptions and suggestions into account (Figures 4 and 5). The redesign of the prototypes emerged mostly from the analyses, as many solutions were found during the analysis phase through discussion among the three coders. Potential solutions to the identified barriers were then discussed again with the professionals and researchers of the Canada Research Chair in Implementation of Shared Decision Making. Redesigning the graphic aspects was generally straightforward: the first author (AG) used the graphic design software herself and integrated modifications to the prototypes as ideas emerged. The graphic designer was involved only occasionally, to help resolve specific issues (for example, with the graphic in the Decision Box on prenatal screening).

Figure 4. Decision Box on prenatal screening for trisomy 21 modified to reflect user experience testing (AFTER evaluation).

Figure 5. Decision Box on prostate cancer screening modified to reflect user experience testing (AFTER evaluation).

We first added three sections: an introduction to the document, a patient counseling section, and a reference section. The document was spread over two pages to make it look less busy and to try to improve first impressions. To try to improve the boxes' credibility, we made the names of the research institutions involved in developing them more visible. A statement of the boxes' purpose and a flow chart representing the shared decision-making process were added to clarify the Decision Box's purpose. Where evidence was available, the presentation of risk factors and of the benefits and harms of the intervention according to these factors became essential elements of the modified documents, because we thought this would help physicians apply and communicate the information to individual patients. In the patient counseling section, we proposed three questions that physicians could ask patients to help clarify patients' values and preferences and to guide the decision-making process. Last, hoping to clarify that the Decision Boxes are based on the best available evidence, we added more references to studies and, when applicable, described the studies' design, their participants, and the length of the intervention.

Patients’ decisional conflict after the interview

After the interview, the patients’ mean Decisional Conflict Scale score was 25% (±SD 12%) and ranged from 2% to 44%, indicating low decisional conflict.
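As a point of reference for readers unfamiliar with the scale, the sketch below shows how a 0-100 Decisional Conflict Scale score of the kind reported above is conventionally computed. The scoring convention is an assumption based on the scale's standard user manual rather than a detail reported in this article, and all item responses and group scores in the example are hypothetical.

```python
# Sketch of the conventional Decisional Conflict Scale scoring. This scoring
# convention (items rated 0-4, summed, divided by the number of items, and
# multiplied by 25 to give a 0-100 score, where 0 means no conflict) is an
# assumption based on the scale's standard user manual, not a detail reported
# in this article. All responses and group scores below are hypothetical.

from statistics import mean, stdev


def dcs_score(item_responses: list[int]) -> float:
    """Return a 0-100 Decisional Conflict Scale score from 0-4 item responses."""
    return sum(item_responses) / len(item_responses) * 25


# Hypothetical responses from one participant on a 16-item version of the scale.
one_participant = [1, 1, 2, 0, 1, 1, 2, 1, 0, 1, 1, 2, 1, 1, 0, 1]
print(dcs_score(one_participant))  # 25.0 -> low decisional conflict

# Group-level summary of the kind reported above (all scores hypothetical).
group_scores = [dcs_score(one_participant), 2.0, 18.0, 31.0, 44.0]
print(f"mean {mean(group_scores):.0f}% (SD {stdev(group_scores):.0f}%)")
```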

Questionnaire for family physicians

Physicians' mean interest in PSA testing for prostate cancer screening as a clinical topic was 7.9 ± 1.2 (SD) on a visual analog scale ranging from 0 (no interest) to 10 (deep interest). On the information subscale of the Decisional Conflict Scale, which ranged from 0 (feels extremely informed) to 100 (feels extremely uninformed), physicians gave a mean score of 25.6% ± 10.7 for the Decision Box on PSA testing for prostate cancer, indicating that they felt well informed after reading it. Using the Information Assessment Method, all participants reported that the Decision Box on PSA testing had an impact on them or their practice. The most frequently reported type of cognitive impact was that it would remind them of something they already knew (93%). All physicians reported they would use this information for their patients, and the most frequently reported planned use was to resolve a doubt (60% of physicians). Eighty-seven percent of physicians expected the information to benefit their patients, with the most frequently reported expectation being that the information would make the patient more knowledgeable about health or healthcare (60%). Physicians' intention to use in their practice what they had learned from the document to help their patients make an informed decision averaged 5.4 ± 1.2 (SD) on a scale from 1 (strongly disagree) to 7 (strongly agree), indicating a positive intention. On average, physicians perceived the Decision Box prototypes as somewhat easy to use (4.2 ± 1.5) and useful (4.8 ± 1.0) on a scale ranging from 1 (strongly disagree) to 7 (strongly agree).

Discussion

In this study, we explored facilitators and barriers to the communication design of two Decision Box prototypes by engaging users in their testing, and we modified these prototypes to try to minimize the influence of the observed barriers. Our findings improve understanding of the design of evidence-based shared decision-making support tools, which in turn will improve their value for end users.

Accessing the documents

With 95% of Canadian physicians using electronic tools[21], the Internet might be the most efficient communication channel for delivering Decision Boxes to clinicians. However, participants suggested accessing Decision Boxes through multiple channels, such as printed documents, the internet, and mobile applications, especially for use during the clinical encounter. A website offering one-click access to printable Decision Boxes may thus be useful for clinical encounters.

Integrating the information

Newer information tools summarizing the current best evidence, such as Decision Boxes and synopses, respond to the widely acknowledged problem of information overload in healthcare. Brevity and lower density of information are especially critical to foster a positive first impression of such tools[22, 23], and to improve users’ comprehension of the options[24]. Other factors are also at play during the first contact between the receiver and the message, such as the credibility of the source of information (expertise, trustworthiness)[25]. By clearly identifying three universities as the sources of Decision Boxes in the modified documents, we are building on the ‘reputed credibility’ that universities generally possess and on the ‘experienced credibility’ stemming from users’ academic experiences[26]. Participants also reported that knowledge of the tool development methodology would influence their first impressions of the Decision Boxes. Consequently, the website hosting the Decision Boxes will include a methodology subsection to describe the typical indicators needed to appraise clinical summaries, namely the methods used to search and update the literature, and to critically appraise the retrieved sources[10].

Our findings support another study that also reported understandability of the information as a key factor in a positive user experience[27]. According to Rosenbaum[28], ‘understandability’ involves two separate dimensions: the users' perception of their own understanding, which we explored in this study, and an objective measure of correct understanding, which would need to be tested separately. Hoping to improve users' perceptions of understandability, we modified the Decision Boxes to use percentages to convey probabilities whenever possible. Major organizations, such as the Cochrane Collaboration[29] and the International Patient Decision Aid Standards (IPDAS) Collaboration[30], have recommended natural frequencies to present absolute risks. However, a recent randomized trial that compared adults' understanding of five different numerical formats found that the percent format yielded slightly higher comprehension overall[31]. Following participants' comments on the understandability of the graphics used in the prototypes, we either removed or simplified them, but we kept bar graphs because these have been reported to be readily understood and helpful[32]. The research literature is not clear on which graphs are most effective for communicating health risk[33, 34].
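To make the contrast between the two numerical formats concrete, the short sketch below renders the same absolute risk as a natural frequency and as a percentage. The event count is invented for illustration and is not taken from the prototypes or from any study cited here.

```python
# Hypothetical illustration of the two numerical formats discussed above: the
# same absolute risk expressed as a natural frequency ("x out of 1,000") and
# as a percentage. The event count is invented and is not taken from the
# prototypes or from any study cited in this article.

def as_natural_frequency(events: int, denominator: int = 1000) -> str:
    return f"{events} out of {denominator:,} people screened"


def as_percentage(events: int, denominator: int = 1000) -> str:
    return f"{events / denominator:.1%} of people screened"


events = 8  # made-up outcome count per 1,000 people screened
print(as_natural_frequency(events))  # "8 out of 1,000 people screened"
print(as_percentage(events))         # "0.8% of people screened"
```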

Message attributes can also influence credibility[25], but at a later step, which we named ‘Trusting the information’. Participants reported objectivity and confidence in the results as key message attributes that influenced their trust in the information. To foster perceptions of objectivity, the negative and positive features of options should be presented equally to patients[30]. Objectivity might have been questioned more for the prostate cancer prototype, first because the harms section was larger than the benefits section, but also because its content differed from what most participants already knew. The theory of cognitive consistency proposes that information compatible with existing beliefs is the most likely to be accepted, and that information emphasizing the undesirable qualities of existing beliefs may be selectively avoided or ignored[35]. Comments on the ‘Confidence in Results’ section reveal that participants' trust in the Decision Boxes was influenced by the quality of the evidence presented within them. This supports findings from a study of the Cochrane Collaboration's summary-of-findings table, in which users also indicated that the table's credibility was reduced when GRADE ratings were low[27, 28].

Using the information in practice

Following participants' comments, we added information on the benefits and harms of the intervention according to individual patient risk, when available, to try to improve the applicability of the Decision Boxes to practice. Risks can be personalized based on individual risk factors for a condition (such as age or family history), or they can be calculated using formulae derived from epidemiological data[36]. A review of the effectiveness of personalized risk communication in the context of screening showed that this strategy had little impact on promoting informed decision making[36], but another study reported that primary care practitioners preferred personalized risks[37].
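As a purely illustrative sketch of what personalizing an ‘average’ risk by individual risk factors could look like, the code below multiplies a baseline risk by relative-risk multipliers. The baseline and multipliers are invented and do not represent a validated epidemiological formula or the content of any Decision Box.

```python
# Purely illustrative sketch of risk personalization as discussed above. The
# baseline risk and relative-risk multipliers are invented for the example;
# a real Decision Box would draw such figures from published epidemiological
# data, not from this code.

HYPOTHETICAL_BASELINE_RISK = 0.03  # invented 'average' risk in the population

HYPOTHETICAL_RELATIVE_RISKS = {
    "age_over_65": 1.8,     # invented multiplier
    "family_history": 2.0,  # invented multiplier
}


def personalized_risk(risk_factors: set[str]) -> float:
    """Multiply the baseline risk by the relative risk of each factor present."""
    risk = HYPOTHETICAL_BASELINE_RISK
    for factor in risk_factors:
        risk *= HYPOTHETICAL_RELATIVE_RISKS.get(factor, 1.0)
    return min(risk, 1.0)  # a probability cannot exceed 1


print(f"{personalized_risk(set()):.1%}")                              # average risk: 3.0%
print(f"{personalized_risk({'age_over_65'}):.1%}")                    # 5.4%
print(f"{personalized_risk({'age_over_65', 'family_history'}):.1%}")  # 10.8%
```

Multiplying relative risks in this way is a simplification used only to make the idea concrete; validated risk calculators rely on properly derived models.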

Participants mentioned that the prototypes lacked non-scientific information, and this concurs with recommendations to base decisions not only on scientific evidence but also on patients' values and preferences when two or more medically acceptable options exist[2]. To try to help patients clarify and express their preferences and values, the modified Decision Boxes proposed three questions for the clinician to ask patients. Participants also requested some guidance on the process of shared decision making, and we are planning to provide such guidance in a tutorial on the website hosting the Decision Boxes. The IPDAS Collaboration suggests that patient decision aids should provide a step-by-step method for making a decision or include tools such as worksheets or lists of questions to use when discussing options with a health professional[30].

Strengths and limitations

Because we recruited both English- and French-speaking participants, and both clinicians and patients, we were able to gather a wide array of points of view reflecting some of the future users of the Decision Boxes. We recruited a diverse group of male patients, including two men who had been treated for prostate cancer. However, although we recruited one woman who had had an amniocentesis following positive screening, we did not interview any woman who had had a child with trisomy 21.

Some biases may have affected the focus group interviews. For example, the researchers' presence may have biased responses towards more positive comments. Also, participants did not receive the documents in their usual context, and because they had more time to look at them, they may have identified more problems than they would normally have noticed. We tried to minimize this bias by setting the context at the beginning of the interviews.

Testing the Decision Boxes on two different clinical topics was a strength of this study. Most of the barriers we uncovered are widely applicable to many topics because they concern communication design rather than content, and so they provide a good basis for developing future Decision Boxes. However, we did uncover some barriers and facilitators that were specific to the topics evaluated here. Developers of new Decision Boxes should therefore consider testing with target users, an important step for uncovering topic-specific barriers. For the eight remaining Decision Boxes, we are planning to further explore the influence of the topic on users' experience of the Decision Boxes.

Our interview guide and study design were useful mostly for exploring barriers to integrating the information, because participants actually experienced these steps when reading the prototypes at the beginning of the focus groups. Because participants did not meet with patients after reading the Decision Boxes, the reported barriers to using this type of information in practice need to be confirmed in a sample of clinicians who have had the opportunity to use what they learned from the Decision Box with their patients. Our interpretation of participants' comments led us to modify the prototypes to address the problems and limitations they perceived. We need to verify whether our interpretation was right, and whether the choices we made to address the limitations of the prototypes truly improved the Decision Boxes and did not generate new problems.

Conclusions

We identified factors influencing the communication design of Decision Boxes that act when users access them, when they integrate the information presented within, and when they use them in clinical practice. These factors will guide the development of the eight remaining Decision Boxes covering the other topics selected at the beginning of this project. In the next phase of this program, we will evaluate users' perceptions of the Decision Boxes developed following our interpretation, to confirm that our modifications truly addressed the identified problems and did not generate new ones. We plan to use a mixed approach to collect users' perceptions of the modified Decision Boxes, using the questionnaire tested in this study and focus groups. In the longer term, we plan to evaluate the effect of Decision Boxes on the integration of evidence-based and SDM principles in clinical practice. We hypothesize that the implementation of Decision Boxes in clinical practice will prepare physicians to better communicate the benefits and harms of the available options to their patients. Better communication will allow patients to become more involved in decisions concerning their health and, in turn, lead to a more judicious use of the current best evidence.

References

  1. BMJ Publishing Group: How much do we know? 2011. http://clinicalevidence.bmj.com/ceweb/about/knowledge.jsp

  2. Wennberg JE: Unwarranted variations in healthcare delivery: implications for academic medical centres. Brit Med J. 2002, 325: 961-964. 10.1136/bmj.325.7370.961.

  3. Montori VM, Guyatt GH: Progress in evidence-based medicine. J Amer Med Assoc. 2008, 300: 1814-1816. 10.1001/jama.300.15.1814.

  4. Edwards A, Elwyn G, Covey J, Matthews E, Pill R: Presenting risk information - A review of the effects of ‘framing’ and other manipulations on patient outcomes. J Health Commun. 2001, 6: 61-82. 10.1080/10810730150501413.

  5. Stacey D, Bennett CL, Barry M, Col NF, Eden KB, Holmes-Rovner M, Llewellyn-Thomas H, Lyddiatt A, Légaré F, Thomson R: Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2011, 5 (10): CD001431-

  6. Evans R, Edwards A, Brett J, Bradburn M, Watson E, Austoker J, Elwyn G: Reduction in uptake of PSA tests following decision aids: systematic review of current aids and their evaluations. Patient Educ Couns. 2005, 58: 13-26. 10.1016/j.pec.2004.06.009.

  7. Charles C, Gafni A, Whelan T: Shared decision-making in the medical encounter: what does it mean? (or it takes at least two to tango). Soc Sci Med. 1997, 44: 681-692. 10.1016/S0277-9536(96)00221-3.

  8. Légaré F: Inventory of Shared Decision Making Programs for Healthcare Professionals. http://decision.chaire.fmed.ulaval.ca/index.php?id=180&L=2

  9. Harter M, van der Weijden T, Elwyn G: Policy and practice developments in the implementation of shared decision making: an international perspective. Z Evid Fortbild Qual Gesundhwes. 2011, 105: 229-233. 10.1016/j.zefq.2011.04.018.

  10. Banzi R, Liberati A, Moschetti I, Tagliabue L, Moja L: A review of online evidence-based practice point-of-care information summary providers. J Med Internet Res. 2010, 12: e26-10.2196/jmir.1288.

  11. Haynes B: Of studies, syntheses, synopses, summaries, and systems: the ‘5 S’ evolution of information services for evidence-based healthcare decisions. Evid Based Nurs. 2007, 10: 6-7. 10.1136/ebn.10.1.6.

  12. Giguere A, Legare F, Grad R, Pluye P, Rousseau F, Haynes RB, Cauchon M, Labrecque M: Developing and user-testing Decision boxes to facilitate shared decision making in primary care - a study protocol. BMC Med Inform Decis Mak. 2011, 11: 17-10.1186/1472-6947-11-17.

  13. Schwartz LM, Woloshin S, Welch HG: The drug facts box: providing consumers with simple tabular data on drug benefit and harm. Med Decis Making. 2007, 27: 655-662. 10.1177/0272989X07306786.

  14. Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, DeBeer H: GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011, 64: 383-394. 10.1016/j.jclinepi.2010.04.026.

  15. Morville P: User Experience Design. http://www.semanticstudios.com/publications/semantics/000029.php

  16. O’Connor AM: Validation of a decisional conflict scale. Med Decis Making. 1995, 15: 25-30. 10.1177/0272989X9501500105.

  17. Pluye P, Grad RM, Johnson-Lafleur J, Bambrick T, Burnand B, Mercer J, Marlow B, Campbell C: Evaluation of email alerts in practice: Part 2 - validation of the information assessment method. J Eval Clin Pract. 2010, 16: 1236-1243. 10.1111/j.1365-2753.2009.01313.x.

  18. Ajzen I: The theory of planned behavior. Organ behav hum. 1991, 50: 179-211. 10.1016/0749-5978(91)90020-T.

  19. Venkatesh V, Davis FD: A theoretical extension of the technology acceptance model: Four longitudinal field studies. Manage Sci. 2000, 46: 186-204. 10.1287/mnsc.46.2.186.11926.

  20. Fereday J, Muir-Cochrane E: Demonstrating rigor using thematic analysis: A hybrid approach of inductive and deductive coding and theme development. International Journal of Qualitative Methods. 2006, 5: 80-92.

  21. The College of Family Physicians of Canada, Canadian Medical Association, The Royal College of Physicians and Surgeons of Canada: National Physician Survey. http://www.nationalphysiciansurvey.ca/nps/2010_Survey/2010nps-e.asp

  22. Grandage KK, Slawson DC, Shaughnessy AF: When less is more: a practical approach to searching for evidence-based answers. J Med Libr Assoc. 2002, 90: 298-304.

  23. Wang R, Bartlett G, Grad R, Pluye P: The cognitive impact of research synopses on physicians: a prospective observational analysis of evidence-based summaries sent by email. Inform Prim Care. 2009, 17: 79-86.

  24. Peters E, Dieckmann N, Dixon A, Hibbard JH, Mertz CK: Less is more in presenting quality information to consumers. Med Care Res Rev. 2007, 64: 169-190. 10.1177/10775587070640020301.

  25. Wathen CN, Burkell J: Believe it or not: Factors influencing credibility on the Web. J Am Soc Inf Sci Tech. 2002, 53: 134-144. 10.1002/asi.10016.

  26. Tseng S, Fogg B: Credibility and computing technology. Communications of the ACM. 1999, 42: 39-44.

  27. Rosenbaum SE, Glenton C, Nylund HK, Oxman AD: User testing and stakeholder feedback contributed to the development of understandable and useful Summary of Findings tables for Cochrane reviews. J Clin Epidemiol. 2010, 63: 607-619. 10.1016/j.jclinepi.2009.12.013.

  28. Rosenbaum SE: Improving the user experience of evidence: a design approach to evidence-informed healthcare. PhD thesis. 2010, The Oslo School of Architecture and Design, Oslo, Norway

  29. Rosenbaum SE, Glenton C, Oxman AD: Summary-of-findings tables in Cochrane reviews improved understanding and rapid retrieval of key information. J Clin Epidemiol. 2010, 63: 620-626. 10.1016/j.jclinepi.2009.12.014.

  30. Elwyn G, O’Connor AM, Bennett C, Newcombe RG, Politi M, Durand MA, Drake E, Joseph-Williams N, Khangura S, Saarimaki A: Assessing the quality of decision support technologies using the International Patient Decision Aid Standards instrument (IPDASi). PLoS One. 2009, 4: e4705-10.1371/journal.pone.0004705.

  31. Woloshin S, Schwartz LM: Communicating data about the benefits and harms of treatment: a randomized trial. Ann Intern Med. 2011, 155: 87-96.

  32. Carling CL, Kristoffersen DT, Flottorp S, Fretheim A, Oxman AD, Schunemann HJ, Akl EA, Herrin J, MacKenzie TD, Montori VM: The effect of alternative graphical displays used to present the benefits of antibiotics for sore throat on decisions about whether to seek treatment: a randomized trial. PLoS Med. 2009, 6: e1000140-10.1371/journal.pmed.1000140.

  33. Ancker JS, Senathirajah Y, Kukafka R, Starren JB: Design features of graphs in health risk communication: a systematic review. J Am Med Inform Assoc. 2006, 13: 608-618. 10.1197/jamia.M2115.

  34. Epstein RM, Alper BS, Quill TE: Communicating evidence for participatory decision making. J Amer Med Assoc. 2004, 291: 2359-2366. 10.1001/jama.291.19.2359.

  35. Marriott S, Palmer C, Lelliott P: Disseminating healthcare information: getting the message across. Qual Health Care. 2000, 9: 58-62. 10.1136/qhc.9.1.58.

  36. Edwards AGK, Evans R, Dundon J, Haigh S, Hood K, Elwyn GJ: Personalised risk communication for informed decision making about taking screening tests. Cochrane Database Syst Rev. 2006, CD001865. 10.1002/14651858.CD001865.pub.

  37. Hill S, Spink J, Cadilhac D, Edwards A, Kaufman C, Rogers S, Ryan R, Tonkin A: Absolute risk representation in cardiovascular disease prevention: comprehension and preferences of healthcare consumers and general practitioners involved in a focus group study. BMC Public Health. 2010, 10: 108-10.1186/1471-2458-10-108.

Acknowledgements

This work was supported by the APOGEE-Net/CanGèneTest Research and Knowledge Network. AG was also supported by postdoctoral training fellowships from APOGEE-Net/CanGèneTest and from the Knowledge Translation Canada research network. We would like to thank the participants whose thoughtful recommendations helped us modify the Decision Boxes. We would also like to thank Annie Frappier and Michael Schula for contributing to data collection and analysis, Josée Boulet for designing the Decision Boxes, and Jennifer Petrela for writing assistance.

Author information

Corresponding author

Correspondence to Anik Giguere.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

AG, ML, FL, RG, PP, RBH, MC and FR contributed to the study plan and to data collection via the Delphi survey. AG, ML, RG and PP contributed to data collection through the focus groups. All authors contributed to the development of the Decision Box prototypes. AG and ML contributed to data analysis. AG wrote the first draft of the manuscript. All authors reviewed the manuscript and approved its final version.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Giguere, A., Légaré, F., Grad, R. et al. Decision boxes for clinicians to support evidence-based practice and shared decision making: the user experience. Implementation Sci 7, 72 (2012). https://doi.org/10.1186/1748-5908-7-72
