Abstract
Objective To assess residents’ clinical questions, where residents get their answers, the utility of those answers, and whether an evidence-based medicine (EBM) workshop improves the use of evidence-based electronic resources.
Design Prospective observational cohort study.
Setting Urban family medicine teaching clinics in Edmonton, Alta, in 2007.
Participants First- and second-year family medicine residents training in the family medicine teaching units.
Methods An observer recorded clinical questions posed by residents in clinic, the resources used to answer these questions, and residents’ ratings of the utility of the answers. Resources were categorized broadly as colleagues, electronic, or paper. Answer utility was ranked in decreasing order as large change, small change, confirmed, expanded knowledge, or no help. Use of resources was compared before and after an EBM workshop, and between residents under normal supervision and those in semi-independent clinics.
Results Thirty-eight residents from 5 sites were observed addressing 325 questions in 114 clinical half-day sessions (420 patients). Residents had 0.8 questions per patient and answered 83.4% of questions with a single resource (range 1 to 6 resources per question). Residents made 406 attempts to answer questions, using colleagues 65.5% of the time (93.6% of whom were preceptors), electronic resources 20.7% of the time, and paper resources 13.8% of the time. Answers from colleagues were least likely to require secondary resources (Fisher-Freeman-Halton test, P < .001). The utility of answers from colleagues was superior to that of answers from electronic resources (F test, P = .002), and this difference remained significant in sensitivity analysis. The EBM workshop did not influence electronic resource use (17.8% before and 15.1% after; Fisher-Freeman-Halton test, P = .18), but semi-independence from preceptors increased the use of electronic resources from 16.5% to 51.0% (Fisher-Freeman-Halton test, P < .001).
Conclusion Residents have many questions during clinical practice. Preceptors were used more commonly than all other resources combined and were the most dependable resource for residents to obtain answers. Although an EBM workshop was not associated with increased use of electronic evidence-based resources, semi-independent work appeared to be.
Uncertainty is common in clinical practice. Generalist physicians have 0.7 to 8 questions for every 10 patients,1–4 with older and busier clinicians generally reporting fewer questions.1,2 As expected, clinical questions are more common among learners—medical students have 5 clinical questions per patient5 and residents have 0.7 to 1.6 clinical questions per patient.3,6,7 While electronic evidence-based medicine (EBM) information resources are frequently promoted for answering these questions, research indicates that practising clinicians and residents rarely use them.1–4
In many previous studies, investigators prompted physicians, often multiple times, to ask questions.2–4,7,8 This prompting might have inflated question counts with questions that were not truly important or relevant to practice. Physicians do not attempt to answer 34% to 77% of these questions,2–4,7,8 usually because the answer is thought to be unnecessary.2–4 In previous studies, the utility of questions was often not explored,6,7 or the questions were classified simply as satisfactory (or helpful) or not.2–4
We introduced an EBM curriculum to our residency program to promote evidence-based resource use and lifelong learning. The curriculum includes access to computers for efficient use of EBM resources; a 2-day workshop to teach EBM principles and introduce Internet medical information resources; and a formal assignment about answering clinical questions, entitled Brief Evidence-based Assessment of the Research. Evaluation, including observation in the clinical setting, is a core component of the curriculum. A full description of the curriculum has been published previously.9
Our primary objectives in this study were to determine the types of questions residents asked while in family medicine clinics, to identify the resources used to answer these questions, and to assess the utility of the answers obtained from those resources. Our secondary objectives were to identify factors (eg, access to preceptors, international medical training, and EBM training) that might influence resource use (Internet, paper, and colleagues or preceptors) for answering clinical questions.
METHODS
The University of Alberta (U of A) Family Medicine Residency Program in Edmonton, like other family medicine residencies in Canada, is 2 years in duration: trainees are junior residents in the first year and senior residents in the second year. The U of A program comprises a mix of Canadian and international medical graduates (IMGs). All residents complete an EBM workshop midway through the second month of their first year. First-year residents see patients independently but then review and see each patient with their faculty supervisors. Second-year residents either work in a fashion similar to first-year residents or work semi-independently at “resident clinics,” reviewing cases with supervising faculty only at the end of the day. Supervising faculty are available to these residents but work separately. Residents are assigned to their training centres at the start of their residencies, and all senior residents in the 2 training centres with resident clinics work semi-independently.
In 2007, 16 faculty members, representing the 5 primary teaching centres of the U of A Family Medicine Residency Program, agreed to participate in the project. Research assistants (V.M. and S.A.) then approached residents affiliated with the faculty members, reviewed the study with the residents, and obtained residents’ consent to participate. Information provided to solicit consent included a description of the objectives of the study and of what data would be recorded. All residents agreed to participate and were observed for at least 1 session. The in-clinic study period was from June to August 2007. As a result, residents on family medicine rotations and in clinic daily during that time were more likely to be observed than residents on other rotations and those returning for a half-day in clinic once a week.
We observed residents in all activities outside of the patient encounter, including interacting with preceptors and recording their notes. If residents asked a question (of their preceptors or colleagues, in person or over the telephone), we recorded it. If residents were observed looking in textbooks, accessing hand-held devices, using the computer (other than for recording notes), or engaging in similar activities, we asked the residents if they had a question. Questions were defined as any patient-related inquiry or uncertainty that could not be addressed through history, physical examination, or review of the chart. Therefore, a question such as “Is this patient taking blood pressure medications?” was not recorded, while “In this otherwise healthy elderly woman with hypertension, what is the best first choice of blood pressure medications?” was recorded. Residents were not encouraged to divulge questions they had not expressed or did not attempt to answer. We also did not inquire about questions addressed at home or outside of the workplace.
We recorded all methods used to answer questions, as well as successes and failures in finding answers. There is no validated tool to assess the value of answers, although in previous studies participants classified answers as helpful (or satisfactory) or not.2–4 In our study, residents were asked to categorize each answer’s utility in 1 of 5 ways (ranked in decreasing order of utility): large change (in practice), small change (in practice), confirmed (practice), expanded knowledge base, and no help. Confirmed practice was defined as something that verified the resident’s knowledge base and practice related to the question, while expanded knowledge base was defined as something that broadened the resident’s knowledge base but did not necessarily address the specific question. The number of patients seen by the residents was also recorded. A session of clinical activity was defined as a half-day unit.
Data were recorded directly into a Microsoft Excel database on a laptop computer. Questions were categorized by system or specialty (eg, cardiology, rheumatology) and assigned an evidence-based question category (eg, therapy, diagnosis). Resources were grouped within 3 broad categories: colleagues (any health care professional), electronic (computers, the Internet, or hand-held computers or personal digital assistants), and paper (texts, formularies). Resources were further subdivided within each primary category and, when possible, into individual resources (specific texts and websites). Any uncertainty about recording questions or resources was clarified with the primary researchers (G.M.A. or C.K.).
Analysis
Owing to resource limitations, we could observe residents for only 3 months (June to August), and we were limited to a convenience sample; for these reasons, a power or sample-size calculation was not performed. However, we attempted to maximize our observations of residents during that period to increase the study’s power.
We used unpaired t tests to determine if the number of questions, number of patients, or number of questions per patient differed between IMGs and Canadian graduates. As our sample size was relatively small, the Fisher-Freeman-Halton test was used to determine which resources, when used first, were least likely to require additional resources to answer the questions. This test was also used to determine if resource use varied between IMGs and Canadian graduates, between residents before and after the EBM workshop, or between directly supervised residents and those working more independently in the resident clinics. Because second-year residents who did not work in resident clinics functioned in the same way as first-year residents, we did not pool the second-year residents and compare them with the first-year residents.
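For readers unfamiliar with it, the Fisher-Freeman-Halton test extends the Fisher exact test to contingency tables larger than 2 × 2. As an illustration only (the paper does not report its statistical software, and the counts below are hypothetical, not study data), the following Python sketch approximates such a test with a Monte Carlo permutation approach that conditions on the table margins; note that the classic test orders tables by their hypergeometric probability, whereas this sketch uses the chi-square statistic as the ordering criterion.

```python
# Monte Carlo approximation of a Fisher-Freeman-Halton-style exact test
# for an r x c contingency table. Illustrative only: the counts below are
# hypothetical and are NOT data from this study.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)

# Rows: first resource used (colleague, electronic, paper).
# Columns: before vs after the EBM workshop.
table = np.array([[70, 55],
                  [18, 12],
                  [14, 10]])

def mc_fisher_freeman_halton(table, n_sim=20_000, rng=rng):
    """P value from permuting column (group) labels, which holds both
    table margins fixed, as the exact conditional test requires."""
    # Rebuild one (row, column) label pair per observation.
    rows, cols = [], []
    for i in range(table.shape[0]):
        for j in range(table.shape[1]):
            rows += [i] * int(table[i, j])
            cols += [j] * int(table[i, j])
    rows, cols = np.array(rows), np.array(cols)

    observed = chi2_contingency(table, correction=False)[0]
    hits = 0
    for _ in range(n_sim):
        sim = np.zeros_like(table)
        np.add.at(sim, (rows, rng.permutation(cols)), 1)  # shuffled table
        if chi2_contingency(sim, correction=False)[0] >= observed - 1e-9:
            hits += 1
    return hits / n_sim

print(f"Monte Carlo P value: {mc_fisher_freeman_halton(table):.3f}")
```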
Because many questions came from the same residents, the questions were not independent; to account for this correlation, we used regression analysis that took the clustering of questions within residents into account. Also, as the utility ranking between confirmed and expanded knowledge base was potentially confusing, we performed a sensitivity analysis pooling these 2 options into 1 category. Statistical significance was defined as a P value of less than .05.
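One standard way to handle such clustering (an assumption on our part, as the paper does not specify its regression model) is a generalized estimating equation with an exchangeable working correlation within resident; the sketch below illustrates this on simulated data with a simplified binary outcome (helpful vs no help).

```python
# Sketch of a regression that accounts for clustering of questions within
# residents, using generalized estimating equations (GEE). This is one
# standard approach; the study does not specify its exact model, and the
# data frame below is simulated purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 325  # same order of magnitude as the questions observed in the study

df = pd.DataFrame({
    "resident": rng.integers(0, 38, size=n),  # 38 residents, as in the study
    "resource": rng.choice(["colleague", "electronic", "paper"],
                           size=n, p=[0.65, 0.21, 0.14]),
})
# Hypothetical binary outcome: 1 if the answer was helpful, 0 if no help.
df["helpful"] = rng.binomial(
    1, np.where(df["resource"] == "colleague", 0.88, 0.72))

# An exchangeable working correlation treats all questions from the same
# resident as equally correlated with one another.
model = smf.gee("helpful ~ C(resource)", groups="resident", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```

In practice, an ordinal model would better respect the 5-level utility ranking; the binary simplification here only keeps the sketch short.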
The project received ethical approval from the Health Research Ethics Board of the U of A.
RESULTS
Thirty-eight residents from 5 sites were observed addressing 325 questions over 114 clinical half-day sessions (420 patients). Half of the residents were in first year (19 of 38) and the other half were in second year. Six of the residents in second year worked semi-independently in the resident clinics. Fifteen of the residents (39.5%) were IMGs.
Questions
Residents saw a mean of 3.7 patients per half-day (range 1 to 9) and had 0.8 questions per patient (range 0 to 4). The specialty type and classification of the questions are provided in Table 1. The most frequent questions involved obstetrics and gynecology (12.0%), cardiology (10.2%), infectious diseases (8.3%), psychiatry (8.3%), rheumatology (7.4%), and dermatology (7.1%).
Resources
There were 406 attempts to answer the 325 questions, for an average of 1.2 attempts per question (range 1 to 6 attempts). Most questions (n = 271, 83.4%) were answered with 1 resource. Answers from colleagues rarely required secondary resources (7.3%) compared with answers from paper (37.8%) and electronic (41.1%) resources, indicating that the likelihood of needing more than 1 resource to answer a question depended significantly on the first resource used (Fisher-Freeman-Halton test, P < .001).
Data on the resources used and the utility of the answers are provided in Table 2. Colleagues (93.6% of whom were preceptors) were the most common resource used (266 of 406 attempts, 65.5%). Colleagues had the lowest chance (12.4%) of providing an answer that was of no help, compared with paper (25.0%) and electronic (29.8%) resources (Fisher-Freeman-Halton test, P = .03).
When answers were ranked by utility, a significant difference was evident in the utility of answers from colleagues, electronic resources, and paper resources (F test, P = .006). Pairwise comparisons indicated that answers from colleagues were significantly more useful than those from electronic resources (F test, P = .002). The other 2 pairwise comparisons were not significantly different: paper compared with electronic (F test, P = .10) and colleagues compared with paper (F test, P = .29).
When the utility ranking options confirmed and expanded knowledge base were combined, the difference in the utility of answers from the 3 resources was no longer statistically significant (F test, P = .06). Pairwise comparisons revealed that colleagues’ answers remained statistically superior to those from electronic resources (F test, P = .02). The other 2 pairwise comparisons remained nonsignificant: paper compared with electronic (F test, P = .34) and colleagues compared with paper (F test, P = .30). No help was reported for 15.4% (50 of 325) of the first attempts to answer a question, and no further search was carried out for half of those. Four of the 54 questions in which multiple resources were used ended with no help from any resource. In total, 8.9% (29 of 325) of questions ended with no help.
Training
When comparing Canadian graduates with IMGs, there was no difference in the mean number of sessions observed (2.9 for Canadian graduates vs 3.2 for IMGs, difference −0.3 [95% CI −2.2 to 1.5], P = .72 [unpaired t test]); mean number of patients seen per session (3.8 for Canadian graduates vs 3.4 for IMGs, difference 0.4 [95% CI −0.7 to 1.5], P = .47 [unpaired t test]); or mean number of questions asked per patient (1.1 for Canadian graduates vs 0.9 for IMGs, difference 0.2 [95% CI −0.4 to 0.8], P = .51 [unpaired t test]). When attempting to find answers, Canadian graduates used colleagues, electronic resources, and paper resources 62.8%, 22.3%, and 15.0% of the time, respectively, compared with IMGs who used colleagues, electronic resources, and paper resources 71.2%, 17.4%, and 11.4% of the time, respectively. There was no significant difference between the 2 groups (Fisher-Freeman-Halton test, P = .26).
The breakdown of resources used before the EBM workshop, after the EBM workshop, and in the resident clinics is provided in Figure 1. No significant difference was evident in residents’ resource use before and after the EBM workshop (Fisher-Freeman-Halton test, P = .18). However, a significant difference was observed between residents in the resident clinics and all other residents combined (Fisher-Freeman-Halton test, P = .001). Residents in the resident clinics used electronic resources more often (51.0% vs 16.5%), consulted colleagues less often (32.7% vs 70.0%), and used paper resources with similar frequency (16.3% vs 13.4%).
Appropriateness of resources
There were 12 questions (3.7%) related to drug dosing; 4 required secondary resources, and all 3 types of resources were used with similar frequency (between 3 and 5 times each). Although it is difficult to define the types of questions that could be addressed quickly and reliably using electronic or paper resources, many questions might have been good candidates. For example, there were 24 questions (7.4% of the total) about medication adverse events and contraindications, but 62.5% of these questions were addressed with colleagues. On the other hand, questions relying on judgment and experience likely could not be addressed through paper or electronic resources. Examples include “What do we do with a patient who is convinced he has hypothyroidism despite normal laboratory results?” and “Do we need to give medications to a patient who thinks she has peripheral vascular disease but who has been assessed by a vascular surgeon with negative test results?”
DISCUSSION
In this study, family medicine residents asked questions regularly in clinical practice—0.8 questions per patient. The questions related almost entirely to therapy and diagnosis but were spread across specialties, as expected with the broad undifferentiated patient population seen in family practice.
Previous studies have prompted physicians for questions,2–4,7,8 and, as a result, many of the “questions” reported were not important enough to pursue.2–4 We believe that focusing only on questions that were pursued in active practice more accurately reflects the relevant questions arising for residents in daily practice. If we concentrate only on pursued questions in past research, we find that residents have 0.2 to 1.6 pursued questions per patient3,6,7 and that practising clinicians have 0.02 to 0.3 pursued questions per patient.1–4
Few previous studies have examined the need for multiple resources or the utility of the answers. Ely and colleagues8 found that 7% of pursued questions required multiple resources, partial answers were found for 26% of questions, and no answers were found for 28% of questions. In other studies, answers were not found for 20% of pursued questions2 and 13% of answers were unsatisfactory.3 Unfortunately, this information provides little insight into the utility of the resources used by residents.
In our study, 17.7% of the attempts to answer questions were believed to be of no help. However, in many cases further attempts resulted in helpful answers, and only 8.9% of all questions ended with no help from any resource, a rate lower than in previous studies.2,3,8 Paper and electronic resources had a 25.0% and 29.8% chance, respectively, of being of no help, compared with only 12.4% for colleagues. This is consistent with the finding that answers from colleagues were significantly less likely to require secondary resources (7.3%) than answers from paper (37.8%) and electronic (41.1%) resources.
The chance of an answer resulting in a large or small practice change was about 25% for both colleagues and paper resources, compared with only 11.9% for electronic resources, consistent with the finding that the utility of answers provided by colleagues was significantly higher than that of answers from electronic resources.
Although the EBM workshop includes specific training in electronic evidence-based information resources, residents did not increase their use of these resources after the workshop and, instead, relied heavily on their preceptors (two-thirds of the time). Past research has shown that colleagues1,2 and preceptors3,6 are the most commonly used resources for clinical questions (29% to 44%). It is likely that colleagues provide answers more quickly than other resources, although the time taken to access various resources has not been examined in this or previous studies. Residents also appear to choose preceptors because answers from other resources are more likely to require secondary resources, and because the utility of preceptor answers is at least as good as that of paper resources and superior to that of electronic resources.
It is understandable that residents would continue to address questions with preceptors after the workshop, particularly when preceptors remained readily available. Although faculty are advised to encourage residents to pursue answers to their questions using evidence-based resources, this approach is not consistently reinforced. Furthermore, some questions, particularly those relying on experience and judgment (eg, “What do we do with a patient who is convinced he has hypothyroidism despite normal laboratory results?”), are better addressed with a colleague than with any other resource.
Interestingly, the use of electronic resources increased dramatically from 17% to 51% and became the primary method for addressing clinical questions in the resident clinics. This use of electronic resources was almost twice the highest result (26%)6 reported in previous studies of residents3,6,7 or clinicians in practice.4,8 It is possible that once access to preceptors was limited, residents readily adopted electronic medical resource use because they had acquired the necessary knowledge and skills.
Past research has suggested that IMGs might find EBM resource use, particularly computer use, more challenging.10–12 Our study found no difference between Canadian graduates and IMGs, as both were relatively infrequent users of electronic resources. As well, IMGs’ self-reported concerns about computer use might be exaggerated or the result of misplaced insecurity, and such concerns might not be as important as has been previously suggested.
Limitations
Our study was conducted at only 1 institution and included a relatively small sample. Still, our findings mirror those of other studies, many of which had similar numbers of observed residents and questions,3,6,7 and are likely applicable to other generalist postgraduate trainees. The validity of the utility rating system used in our study is untested, and the ranking of the 5 scores could be debated. Combining the 2 potentially most arbitrary utilities (confirmed and expanded knowledge base) in a sensitivity analysis resulted in the difference in the utility of answers from the 3 resources losing statistical significance (F test, P = .06); however, the utility of answers from colleagues remained statistically superior to that of answers from electronic resources (F test, P = .02), indicating that this difference is robust to changes in the utility ranking. It could be argued that residents (and our research assistants) were not able to assess the value of answers. However, determining whether answers were based on the best evidence would have required a complete review of the evidence for all 325 questions. In the end, we decided that the value of answers could be reasonably assessed by the individual with the question.
Although we did not investigate questions explored outside of the clinic, other research shows that very few clinical questions are actually pursued after clinic.3,6 It is probable that our observations influenced whether residents asked questions and how the questions were addressed (ie, the Hawthorne effect). This is a challenge common to all similar research. However, as our results mirror those of previous studies, this effect does not appear to have been a significant problem in our case.
Last, the important finding from the resident clinics, that semi-independence appears to promote the use of electronic evidence-based resources, is based on a small sample within the study (6 residents with 49 questions). Although this result has face validity, further testing is needed. We intend to study physicians recently graduated from the program who have completed the EBM curriculum to determine how they answer clinical questions, and to compare them with similar physicians in practice. We also hope to examine whether answers from preceptors are based on high-level evidence or on expert opinion and experience.
Conclusion
This study demonstrates that residents frequently have questions in practice and attempt to find answers for them. They primarily use preceptors to answer their clinical questions; an EBM workshop in isolation does not modify this behaviour. Answers from preceptors are less likely to require additional resources, and the utility of these answers, as rated by residents, is at least as high as that of answers from paper resources and superior to that of answers from electronic resources. Working semi-independently was associated with an increased use of electronic evidence-based resources, but owing to the small sample size, this finding requires verification in further studies.
Notes
EDITOR’S KEY POINTS
- Family medicine residents will try to address clinical questions or uncertainty in 4 out of every 5 patients seen.
- Preceptors are used more often to answer these clinical questions than paper and electronic evidence-based resources combined.
- Answers from preceptors were the least likely to require secondary resources and were rated as more helpful than answers from electronic evidence-based resources.
- Electronic resource use did not change after an evidence-based medicine workshop but did increase when immediate access to preceptors was limited.
Footnotes
- This article has been peer reviewed.
- Contributors
Dr Allan conceived the idea, obtained ethics approval, developed the project, recruited sites, supervised the research assistants, analyzed the data, completed the first draft of the article, and is the corresponding author. Ms Ma and Ms Aaron assisted in the project development, recruited and enrolled residents, refined and completed all data collection, summarized data and assisted in analysis, and contributed to editing of the final paper. Mr Vandermeer assisted in data analysis, completed statistical analysis, and contributed to editing of the final draft. Dr Manca assisted with development of the project, analysis of data, and editing of the final draft of the manuscript. Dr Korownyk obtained the grant funding and assisted in development of the project, drafting of the ethics application, supervision of the research assistants, analysis of data, and substantive editing of the paper. All authors approved the final version submitted.
- Competing interests
None declared
- Copyright © the College of Family Physicians of Canada