Abstract
Objective To compare the ability of users of 2 medical search engines, InfoClinique and the Trip database, to provide correct answers to clinical questions and to explore the perceived effects of the tools on the clinical decision-making process.
Design Randomized trial.
Setting Three family medicine units of the family medicine program of the Faculty of Medicine at Laval University in Quebec city, Que.
Participants Fifteen second-year family medicine residents.
Intervention Residents generated 30 structured questions about therapy or preventive treatment (2 questions per resident) based on clinical encounters. Using an Internet platform designed for the trial, each resident answered 20 of these questions (their own 2, plus 18 of the questions formulated by other residents, selected randomly) before and after searching for information with 1 of the 2 search engines. For each question, 5 residents were randomly assigned to begin their search with InfoClinique and 5 with the Trip database.
Main outcome measures The ability of residents to provide correct answers to clinical questions using the search engines, as determined by third-party evaluation. After answering each question, participants completed a questionnaire to assess their perception of the engine’s effect on the decision-making process in clinical practice.
Results Of 300 possible pairs of answers (1 answer before and 1 after the initial search), 254 (85%) were produced by 14 residents. Of these, 132 (52%) and 122 (48%) pairs of answers concerned questions that had been assigned an initial search with InfoClinique and the Trip database, respectively. Both engines produced an important and similar absolute increase in the proportion of correct answers after searching (26% to 62% for InfoClinique, for an increase of 36%; 24% to 63% for the Trip database, for an increase of 39%; P = .68). For all 30 clinical questions, at least 1 resident produced the correct answer after searching with either search engine. The mean (SD) time of the initial search for each question was 23.5 (7.6) minutes with InfoClinique and 22.3 (7.8) minutes with the Trip database (P = .30). Participants’ perceptions of each engine’s effect on the decision-making process were very positive and similar for both search engines.
Conclusion Family medicine residents’ ability to provide correct answers to clinical questions increased dramatically and similarly with the use of both InfoClinique and the Trip database. These tools have strong potential to increase the quality of medical care.
Health care professionals’ capacity to find the best scientific evidence to answer a clinical question is a key aspect of evidence-based practice.1 In addition, retrieving information about the benefits and harms of interventions and sharing this information with patients is a key component of shared decision making in clinical practice.2–4 The Internet now allows clinicians to quickly access a range of pre-appraised evidence synopses, summaries of original research studies, and syntheses, facilitating the practice of evidence-based care and shared decision making. According to the 6S hierarchy of pre-appraised evidence,5 in the absence of systems providing immediate access to evidence-based clinical information linked to electronic medical records, resources classified as summaries (eg, UpToDate, DynaMed, and Clinical Evidence) should be consulted first to make the practice of evidence-based care most efficient. However, it takes months to years before these tools are updated with the most recent evidence originating from systematic reviews.6 This gap might prevent some patients from getting optimal health care. In addition, although these tools can be freely accessed in many academic centres, most individual clinicians have to pay a subscription fee to access information.
Federated medical search engines allow clinicians to search multiple original and pre-appraised sources of evidence at the same time, offering simultaneous access to 5 of the 6S sources of evidence (studies, synopses of studies, syntheses, synopses of syntheses, and summaries). The sites indexed by the search engines might or might not require a subscription fee to be accessed. Numerous federated medical search engines for health professionals are available free of charge on the Web, but very few explicitly prioritize searching sites professing to provide evidence-based clinical information. The Trip database7 is probably the most popular of the sites that do. InfoClinique, developed by Laval University in Quebec city, Que, also prioritizes searching evidence-based websites, but allows users to search both French and English websites. Other engines, such as SUMSearch 2 and, more recently, MacPLUS Federated Search, are limited in the number of sites they search, either because they only index a few relevant sites (SUMSearch 2) or because users must pay a subscription fee to access many of the sites indexed (MacPLUS).
Only a few studies have assessed the efficacy of different clinical information retrieval tools.8–11 To the best of our knowledge, no randomized trial has yet compared federated medical search engines or compared such engines to other clinical information retrieval tools. Evidence of the efficacy of such tools would help clinicians wishing to practise according to evidence-based practices and shared decision-making principles to understand the value of these tools and choose the ones that are most appropriate to their contexts.
We conducted a randomized trial to compare users’ ability to find the correct answer to clinical questions, an essential first step in evidence-based practice and shared decision making, when searching with 2 federated medical search engines, InfoClinique and the Trip database. We also aimed to determine users’ perceptions of the engines’ effects on clinical decision making.
METHODS
Participants
The trial was conducted between February and May 2007. We solicited the participation of a convenience sample of all 15 second-year family medicine residents (14 female and 1 male) working in 3 family medicine units of the family medicine program of the Faculty of Medicine of Laval University in Quebec city, Que. All residents agreed to participate and all signed informed consent forms. The trial was approved by the ethics review board of the Saint-François d’Assise Hospital at the Quebec University Hospital Centre.
Pretrial training
In January 2007, before the trial began, all residents participated in a 2-hour training session on how to structure clinical questions in the population, intervention, control, and outcome (PICO) format and how to answer questions using InfoClinique and the Trip database. At the same time they were introduced to a website developed to collect data for the project.
Development of clinical questions
Over the following 2 weeks, each of the 15 residents was required to generate 2 PICO format questions about treatments or preventive interventions (“therapy” questions) based on their routine clinical encounters.12 Questions focused on drug prescriptions—for example, questions about dosage or drug interactions—were excluded. The resident was also required to specify the clinical context in which the question was being asked and to state whether the decision had already been made or had yet to be made. Each question was validated by the member of the research team who supervised the resident who had written the question (M.C., M.L., or P.F.). As they were validated, questions were immediately posted on the study website.
Intervention
The intervention consisted of the use of 2 Internet-based medical search engines, InfoClinique and the Trip database, that specialize in helping users find clinical information. Both tools can be used free of charge. Following the taxonomy used in the 6S model of pre-appraised evidence,5 the principal resources indexed in the search engines are synopses of studies, syntheses (systematic reviews), synopses of syntheses, and summaries (evidence-based clinical practice guidelines and evidence-based textbooks). Both tools also allow easy access to patient information resources and to original studies from the MEDLINE database through a PubMed search using the “clinical queries” strategy.
InfoClinique13 was launched by the Department of Family and Emergency Medicine at Laval University in 2003 and was developed from a search engine produced by Coveo Solutions Inc. InfoClinique indexes the content of 74 medical Web resources, in all of which users can access the entire or partial content for free. InfoClinique’s search interface accommodates either English or French; as for the Web resources themselves, 23 are indexed in both French and English, 35 are only indexed in English, and 16 are only indexed in French.13 Fifteen resources are specific to health care in the province of Quebec. The websites indexed by InfoClinique are assigned to 1 of the following categories: evidence-based medicine (34 of the 74 websites fall into this category), continuing professional development, complementary and alternative medicine, patient information, professional information, public health, medical images, and electronic textbooks. The user can search all categories at once or can select 1 or more categories. When the user selects all categories, InfoClinique prioritizes results from the evidence-based medicine category when producing the search results. Category filters can also be applied following the search to view results from specific categories.
The Trip database,7 produced by Trip Database Ltd of the United Kingdom, was launched in 1997.14,15 At the time of the study, the Trip database was searching 489 English medical Web resources at the same time. Access to all pre-appraised evidence websites is free, with the exception of Clinical Evidence, full texts of articles from the Cochrane Library, and the evidence-based journals from the BMJ Group. New entries are indexed daily, weekly, or monthly, depending on the type of resource. Search results are categorized by type of resource and prioritize evidence-based synopses, systematic reviews, and guidelines. Search results can be limited to a specific type of resource.7
Outcome measures
The main outcome measure was the efficacy of each medical search engine, defined as the proportion of correct answers to clinical questions produced by the second-year family medicine residents. Correct answers were determined by consensus. First, on site at Laval University, each resident and his or her supervisor answered the resident’s 2 clinical questions using the information retrieved by all the residents who had searched for an answer to those questions. The answer produced by the resident and his or her supervisor was then reviewed by 1 of the other 3 supervisors. Second, using the protocol developed for the “just-in-time” project,16 a trained librarian from the Institute of Population Health at the University of Ottawa in Ontario answered each question. Both answers were then independently reviewed by a research team member at Laval (M.L.) and by another team member at Ottawa (W.H.). These reviewers retained all 30 answers provided by Laval as the criterion standard. The answers generated by the methodology followed at Laval were generally more comprehensive than those provided by the librarian in Ottawa (indeed, the answers produced at Laval included the answers produced in Ottawa).
Two independent assessors (S.R. and J.O.) compared residents’ answers to the criterion standard and classified them as correct or incorrect. Their interrater agreement was good, with κ coefficients of 0.66, 0.61, and 0.78 for answers provided before the search, after the initial search, and after an additional search, respectively. Any discrepancies were resolved by consensus under the supervision of the lead investigator (M.L.). The assessors were blind to the resident, to the medical search engine first used, and to any Web sources consulted after the initial search. Assessors knew, however, whether an answer had been provided before the search, after the initial search, or after the additional search.
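The interrater agreement above is reported as Cohen’s κ. As a brief illustration (the function and the toy labels below are ours, not the study’s data), κ contrasts observed agreement with the agreement expected if the 2 raters classified answers independently:

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels of the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of items both raters labelled identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginal proportions
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two assessors classify 4 answers as correct (1) or incorrect (0)
print(cohen_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # → 0.5
```

Values in the 0.61 to 0.80 range, as observed here, are conventionally read as substantial agreement.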
Four secondary outcome measures were used to assess residents’ perceptions of how the search engines affected decision making in clinical practice. First, the Comfort with Information for Shared Decision Making scale (CI-SDM) is an 11-item Likert-scale questionnaire assessing respondents’ comfort with knowledge they can use to engage patients in shared decision making with regard to specific clinical questions (5—most comfort to 1—least comfort). The CI-SDM has not been validated, but it was derived from a validated French version of the 16-item Decision Conflict Scale for physicians.17 Second, the Usefulness of Clinical Information Scale (UCIS) is a 5-item Likert-scale questionnaire based on Shaughnessy and colleagues’ equation (relevance × validity/work) of the usefulness of clinical information18 (5—most useful to 1—least useful). Third, the French version of the Impact Assessment Scale (IAS)9,19,20 is an ordinal scale of 10 statements about the effect of information retrieved on clinical practice: the respondent checks all statements that apply. Fourth, the Strength of Recommendation Taxonomy (SORT)21 measures the respondent’s assessment of the consistency and the quality of the data retrieved as if she or he were to use it to make a recommendation. Using a decisional algorithm, the respondents ranked the data as A, B, C, or not evaluable.
At the end of the trial, residents were asked to complete an online questionnaire about their intention to use InfoClinique and the Trip database in the future and their view of the usefulness and ease of use of each engine. Questions about residents’ intentions were based on the theory of planned behaviour,22 and questions about their perceptions of the engines’ usefulness and ease of use were based on the Technology Acceptance Model.23
Randomization and data collection
Figure 1 describes the trial and the data collection process using the study website. Residents could access the website with an Internet connection from anywhere and at any time. Upon entering the website with their password, residents were provided with a list of 20 of the 30 original clinical questions (their own 2 questions, plus 18 questions randomly selected by computer). This number was based on the amount of time allocated to residents’ projects at Laval University and the estimated time required to participate in the study (searching for responses and answering clinical questions, completing online questionnaires, and writing a summary of findings for their own 2 questions). The questions were displayed in PICO format and ordered according to the time they were posted on the study website. Residents were instructed to select 1 of their 20 as-yet-unanswered questions (step 1).
Trial and data collection process for each clinical question, using the study website
CI-SDM—Comfort with Information for Shared Decision Making scale, IAS—Impact Assessment Scale, SORT—Strength of Recommendation Taxonomy, UCIS—Usefulness of Clinical Information Scale.
Residents were then required to provide a structured answer to the question, based on their current knowledge (step 2). Their answer had to include their best estimates (qualitative and quantitative, if known) of the benefits and harms of each option, including the option of doing nothing. Their answer also had to state the clinical decision that they would make. They were also asked to complete the CI-SDM questionnaire.
When ready to begin searching for the correct answer, residents were randomly prompted by the computer with the home page of either InfoClinique or the Trip database (step 3). For each question, a computer program randomly instructed half of the 10 respondents—the resident who had generated the question and 9 other randomly selected residents—to first perform the search with InfoClinique; the other half were instructed to begin their search with the Trip database. We thus expected residents to perform 300 searches, of which 150 would begin with InfoClinique and 150 would begin with the Trip database. Our random allocation procedure did not seek to assign each participant an equal number of initial searches per search engine: in other words, we accepted that some participants would begin their searches more often with one search engine than with the other. The assignment of individual questions was concealed from investigators.
After searching with either InfoClinique or the Trip database, residents were instructed to select the most relevant and valid information encountered and save it in a temporary file. We recorded search time; up to 30 minutes per question was recommended but was not mandatory. Residents had to stop the timer either when satisfied with their search or when they thought that they had spent sufficient time searching.
After completing the search, residents answered the clinical question, giving their treatment decision based on the information retrieved thus far (step 4). In addition, they had to complete 4 Web-based questionnaires: CI-SDM, UCIS, IAS, and SORT. Residents who were satisfied with the information retrieved by their search could move on to a new question. Residents who were unsatisfied could perform an additional search using any resources available on the Internet, including the alternate medical search engine (step 5). In that case, after completing their subsequent search, they had to answer the clinical question and state their decision in light of the new information retrieved. They also had to complete the 4 questionnaires again (step 6). All the steps for a question were to be completed during the same Web session.
Statistical analysis
Sample size
We based sample size on the time Laval University allowed for residents’ projects: namely, 1 half-day per week over 6 months. This time frame permitted a sample of 150 searches with each search engine, enough to detect a difference of 16% (50% vs 66%) in the proportion of each study group’s correct answers with a power of 80% and an α error of .05 in a parallel-group trial design. In our study, however, this sample size yielded greater power because each of the 15 residents was randomly assigned to one or the other search engine for each of their 20 searches, acting as their own control. An exact a priori power calculation would have required an estimate of the correlation of correct answers resulting from searches with each search engine; this information was not available before the trial.
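The stated detectable difference can be reproduced with a standard sample size formula for comparing 2 independent proportions (a sketch using the normal approximation; the trial’s actual calculation may have used a different method):

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group to detect p1 vs p2 with a two-sided z test
    for two independent proportions (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p1) ** 2)

print(n_per_group(0.50, 0.66))  # ≈ 149, consistent with 150 searches per engine
```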
Analysis
Our main analyses compared the efficacy of searching with the 2 search engines and the engines’ effect on decision making after the initial search (T1). The primary outcome measure was the proportion of correct answers produced by searching with each engine (efficacy). Results for this measure were compared using the general linear mixed model adjusted for the baseline (T0) proportions. The general linear mixed model took into account the fact that all residents searched with both search engines and the units of observation were therefore not totally independent. We used the McNemar χ2 test to evaluate the difference between search engines in the proportion of the 30 questions for which a search produced at least 1 correct answer after the initial search. The difference between the engines’ CI-SDM mean scores was assessed using mixed-model analysis of covariance, adjusting for baseline mean scores, and mixed-model analysis of variance to compare UCIS mean scores and mean search times. We used the χ2 test from the general linear mixed model (polytomous logistic regression) to compare the distribution of IAS and SORT results. Observations reported after the additional search (T2) represent the cumulative responses of T1 and T2 (the last observation carried forward). We limited our analysis of T2 observations to describing the data: we performed no statistical tests on T2 data. A paired t test was used to detect differences between search engines regarding residents’ intention to use the engines, perceived usefulness, and perceived ease of use. We considered a 2-tailed P value of .05 or less to be statistically significant. All statistical analyses were performed with the SAS statistical package.
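Of the tests above, the McNemar test is the one tailored to paired dichotomous outcomes: it compares the 2 engines using only the discordant pairs (questions answered correctly with one engine but not the other). A minimal sketch, with illustrative counts rather than the trial’s data:

```python
import math

def mcnemar(b, c):
    """McNemar chi-square test (without continuity correction).
    b, c: counts of discordant pairs, eg, questions answered correctly
    with one search engine but not with the other."""
    stat = (b - c) ** 2 / (b + c)
    # Survival function of a chi-square with 1 df: erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

stat, p = mcnemar(5, 15)  # hypothetical discordant counts
```

Concordant pairs (both engines correct, or both incorrect) carry no information about a between-engine difference and so do not enter the statistic.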
RESULTS
Figure 2 presents the trial flow diagram until the completion of the additional search. Of 300 possible pairs of answers (1 answer before searching and 1 answer after the initial search), 254 pairs of answers (85%) were produced by 14 residents (1 resident withdrew before answering any questions). Of these, 132 pairs of answers (52%) concerned questions that had been assigned an initial search with InfoClinique and 122 pairs of answers (48%) concerned questions that had been assigned an initial search with the Trip database. Twenty-seven of the questions were answered by 8 to 10 residents; the remaining 3 questions were answered by 6 or 7 residents.
Flow diagram of the trial until completion of the additional search
Residents’ use of InfoClinique or the Trip database for the initial search was quite balanced, with only a few residents using one search engine first considerably more often than they used the other engine first (Table 1). The residents performed 53 additional searches: 26 after an initial search with InfoClinique (20% of all InfoClinique searches) and 27 with the Trip database (22% of all Trip searches). Table 2 presents the residents’ location, the residents’ level of knowledge, and the level of difficulty of the clinical questions, according to the search engine used to perform the initial search. The distribution of results was similar for all 3 variables.
Number of initial searches performed with InfoClinique and the Trip database by each resident
Residents’ locations, residents’ level of knowledge, and level of difficulty of questions, according to the search engine used for the initial search
Evaluated in terms of residents’ capacity to answer the clinical question correctly after performing an initial search, the efficacy of the 2 search engines was very similar (Table 3). There was an important absolute increase in the proportion of correct answers after residents searched with either InfoClinique (increase of 36%) or the Trip database (increase of 39%) (P = .68). Only 6 answers (2%) that had been correct before the initial search became incorrect after the initial search (5 questions answered with InfoClinique and 1 question answered with the Trip database). These 6 answers were given by 6 different participants for 6 different questions. The increase in the proportion of cumulative correct answers of the initial search and the additional search (T2) was small: 5% for the InfoClinique group and 4% for the Trip database group. In other words, performing an additional search did not yield an important gain when compared with the yield produced after the initial search.
Outcome measures according to the search engine assigned for the initial search: Before searching (T0), after the initial search (T1), and after any additional search (T2).
At least 1 resident produced a correct answer to 26 of the 30 questions (87%) answered after an initial search with InfoClinique and to 28 of the same 30 questions (93%) answered after an initial search with the Trip database (P = .68). The 4 questions incorrectly answered with InfoClinique were different from the 2 questions incorrectly answered with the Trip database. This signifies that it was possible to answer all the questions correctly using one or the other medical search engine. All correct answers were retrieved from a free-access website indexed in either one of the search engines.
As measured with the CI-SDM, the UCIS, and the IAS, the residents’ perception of the 2 search engines’ effect on decision making in clinical practice was similar (Table 3). For the SORT scale, the difference was not statistically significant but the data suggest the possibility of a difference between engines, with more A-level evidence retrieved with the Trip database.
Only the CI-SDM was completed before and after the search. The increase in residents’ confidence that their knowledge was adequate for engaging patients in shared decision making regarding a clinical question was identical for the 2 engines. The magnitude of the increase, calculated on the basis of the combined 254 answers, was 0.83 (95% CI 0.70 to 0.95). This corresponds to an effect size of 1.2. An effect size of this magnitude is considered large.24 The mean (SD) time of the initial search for each question was 23.5 (7.6) minutes with InfoClinique and 22.3 (7.8) minutes with the Trip database (P = .30).
The mean scores of residents’ intentions to use the engines, their perceptions of the engines’ ease of use, and their perceptions of the engines’ usefulness were high and similar for the 2 engines. The 0.2-point difference in the score of perceived usefulness in favour of InfoClinique, although statistically significant, corresponds to an effect size of 0.2 and thus is not clinically significant (Table 4).24
Participants’ intention to use the search engines and their perception of the search engines’ ease of use and usefulness: N = 14.
DISCUSSION
To the best of our knowledge, this is the first randomized trial to compare the effects of 2 federated medical search engines on clinical decision making. Our trial shows that InfoClinique and Trip database searches were similarly effective at helping family medicine residents find the correct answers to clinical questions posed by themselves or by colleagues. The answers found with the 2 engines did not differ in their effect on clinical decision making. In addition, we found that searching with either medical search engine greatly improved users’ capacity to answer clinical questions correctly and greatly increased their comfort with the knowledge they acquired to engage in shared decision making with patients.
Comparison with previous work
We located only 2 randomized trials comparing the efficacy of searching with different clinical information retrieval tools.10,11 In the first,11 searching MEDLINE before searching a selection of evidence-based health care resources was compared with searching the same evidence-based health care resources before searching MEDLINE. In this trial, the proportion of answers assessed by participants as satisfactory was similar in the 2 scenarios. However, only searching MEDLINE produced a higher proportion of satisfactory answers than only searching the evidence-based health care resources did (81% vs 65%). In the second trial,10 DynaMed, a commercial electronic textbook (classified as a summary according to the 6S hierarchy of pre-appraised evidence5), was compared with usual sources of information. The proportion of answers assessed by participants as adequate was similar in the 2 scenarios, with 73% of answers assessed as adequate overall. These trials were marred by 2 methodologic flaws: the failure to record participants’ answers before searching, and the decision to use respondents’ own determination of a correct or incorrect answer as the outcome measure. Both were avoided in our study.
Three studies have observed an improvement in the proportion of questions answered correctly before and after searching with different clinical information retrieval tools.25–27 These studies reported an improvement of 32% among senior medical and nursing students using MEDLINE,25 an improvement of 20% among senior medical students using MEDLINE,26 and an improvement of 27% among physicians and clinical nurses using various tools (MEDLINE, Merck Manual, Harrison’s Online, and others).27 Differences in the clinical question type, in the population of the study, and in the type of clinical information retrieval tools might explain the differences among studies, including ours. It has been observed that a greater global improvement can be expected when physicians pick the clinical question for which they wish to search for an answer, possibly because they believe that a definitive answer exists or for another reason.28 In our study, this possibility was highly attenuated because apart from their own 2 clinical questions, participants did not select the questions they answered.
An additional strength of our study is its design: a randomized trial with residents using both search engines in random order. In addition to minimizing risk of bias, this design increased the power with the sample size available. As the correlation of correct answers resulting from searches with each search engine was 0.45, with 132 and 122 searches with InfoClinique and the Trip database, respectively, our study had a power of 80% to find a difference of at least 14% in the proportion of correct answers between the 2 search engines.
Limitations
Our study also had limitations. First, the questions we studied were limited to therapy- and prevention-related matters of interest to second-year family medicine residents, structured in PICO format. It is likely that we can generalize our findings to primary care clinicians who use medical search engines to answer therapy- and prevention-related clinical questions, but we cannot draw conclusions as to the efficacy of searching with the engines and the effect on clinical decision making for questions addressing prognosis, pathogenesis, or diagnostic procedures.
Second, it is possible that our results would have differed had we used unstructured clinical questions. However, framing questions using the PICO format is recommended as the first step to an effective search1 and should be applied to all searches for clinical information.
Third, participants were second-year residents in family medicine, which might limit external validity. Virtually all our residents were fully skilled in using electronic communication and information tools and were at the peak of their medical knowledge, as the study was conducted during the months just preceding their final licensure examination. Other types of users might be less effective in using these electronic clinical information retrieval tools. Nevertheless, there was considerable variation in residents’ level of knowledge before initiating the search, and all residents substantially improved the proportion of their correct answers after the search (data not shown), indicating that most clinicians should be able to improve their performance at finding answers to clinical questions using either search engine. On the other hand, using a sample of family medicine residents was not a threat to internal validity, considering the experimental randomized design of our study.
The fourth limitation concerns the study’s experimental setting. Participants did not answer questions at the point of care. They were also allowed to search longer than the 2 minutes estimated to be feasible during clinical practice to answer 1 question.29 We accepted this limitation in light of our aim to test the search engines in a “reflection on action” mode (after the physician-patient encounter) rather than in a “reflection in action” mode (during the encounter). Having now demonstrated that InfoClinique and the Trip database can be used to find correct answers to a large proportion of clinical questions typical in family medicine, we believe that it would be appropriate to evaluate the efficacy of these tools during patient encounters.30–32
Fifth, some results of the initial searches were not saved owing to problems with the study website. These losses were unrelated to residents’ ability to save their searches or to whether an answer was correct, and they represented only 5% (15 of 300) of all possible results of the first searches. We do not believe they substantially biased our results.
Sixth, this study was conducted between February and May 2007. The 2 search engines have since been modified to reflect the evolution of available Web-based resources, and it is possible that the resources they currently index would produce different results than those produced in 2007. However, to our knowledge neither engine has undergone a substantial change that would have decreased users’ ability to find correct answers. If anything, their current interface and the way that they now index websites should have improved searching capacity and the effect on clinical decision making.
Our results and limitations indicate that more evaluations and comparisons of clinical information retrieval tools are needed. Clinical questions should cover a broader range of clinical themes, addressing diagnosis, pathogenesis, and prognosis in addition to therapy. Studies should be conducted both during and after patient encounters. They should rely on objective outcome measures and include baseline data. Validation studies of instruments such as the CI-SDM, the UCIS, and SORT should also be undertaken.
We also acknowledge that finding the best available medical evidence with a medical search engine does not automatically improve practice and patient care. Based on a Cochrane systematic review, there is currently insufficient evidence to support or refute the use of any electronic clinical information retrieval tools by health care providers to this end.33 However, finding such information is essential if evidence-based practice and shared decision making are to occur in clinical practice. Physicians have difficulty estimating the benefits and harms of interventions they commonly prescribe.34 Using effective medical search engines to access pre-appraised evidence should help clinicians apply this evidence in practice. Ideally, the engines would also index patient decision aids aimed at fostering shared decision making.34,35
Conclusion
This trial showed that both InfoClinique and the Trip database provided access to evidence-based clinical information on the benefits and harms of treatment and preventive interventions used in family medicine, information which is a prerequisite to evidence-based practice and shared decision making. The choice of one search engine over the other thus becomes a matter of preference.
Acknowledgments
This study was funded by the Fonds Gilles-Cormier from Laval University and the College of Family Physicians of Canada (Janus grant). Dr Légaré holds a Tier 2 Canada Research Chair in Implementation of Shared Decision Making in Primary Care. We thank Mehdi Atmani for developing the Web-based platform, Doug Salwedel for providing answers following the just-in-time protocol, and Jennifer Petrela for editing the paper.
Notes
EDITOR’S KEY POINTS
- Finding the correct answer to clinical questions is an essential first step to evidence-based practice and shared decision making. This randomized trial compared the ability of second-year family medicine residents to find the correct answer to clinical questions using 2 federated medical search engines, InfoClinique and the Trip database. It also aimed to determine users’ perceptions of the engines’ effects on clinical decision making.
- This trial showed that both InfoClinique and the Trip database provided access to evidence-based clinical information on the benefits and harms of treatment and preventive interventions used in family medicine. Residents also found the 2 search engines similarly useful and easy to use. The choice of one search engine over the other is thus a matter of preference.
Footnotes
- This article has been peer reviewed.
- Contributors
Drs Labrecque, Légaré, Cauchon, Frémont, Hogg, McGowan, and Gagnon wrote the research protocol. All authors approved the final protocol. Drs Labrecque, Frémont, and Cauchon and Mr Ratté were responsible for the overall conduct of the study. Mr Ratté was the project’s research coordinator. Drs Labrecque, Cauchon, Frémont, Ouellet, and Hogg and Mr Ratté participated in data collection. Drs Labrecque and Légaré and Mr Ratté participated in data analyses and interpretation. Mr Ratté and Dr Labrecque drafted the first version of the manuscript. All authors approved the final manuscript.
- Competing interests
Drs Labrecque, Cauchon, and Frémont are members of the development team of InfoClinique. Participants were not offered any financial incentives. Use of InfoClinique and the Trip database is free.
- Copyright © the College of Family Physicians of Canada