A discipline is defined by an active area of research and a codified body of knowledge1,2 that provides legitimacy for the work it does and leads to innovation. Academic family medicine has existed for almost 50 years. Family medicine research has made great strides,3 with a steady increase in research articles published since the 1950s. What has been the impact of all this activity?
Meaning of impact
Research impact has been defined as “the value and benefit associated with using the knowledge produced through research, and being involved in conducting research.”4 Its meaning and measurement differ among groups and individuals. The meaning of impact for funders and policy makers might be conceived of as influence on policy, health service delivery, and population health outcomes. Academic institutions are more interested in how a discipline is represented in the academic milieu. For researchers, impact might mean the influence of their work on other researchers and practitioners in the field. For clinicians, the focus will be on the effect on daily practice.
Measuring impact
Funders and policy makers use methods such as benchmarking, case studies, peer review, economic rate of return, logic modeling, and bibliometrics.5 The recent allocation of $3 billion in annual research funding in the United Kingdom, using graded case study “stories,” illustrates a trend toward using more qualitative measures of impact as well as hard data.6 Academic faculties traditionally rely heavily on bibliometrics to allocate funds, make promotion and tenure decisions, and benchmark their research. Full-time and clinician researchers in family medicine are largely held to the measures used by their institutions. They want to demonstrate the worth of their research to funders and policy makers, their employers, and their colleagues. Full-time practising family physicians will perceive research to have impact to the extent of its practical application. They might be sceptical about the value of research received from experts7 and might not be very interested in academic measures of impact.
The Canadian Academy of Health Sciences has proposed an impact framework and a preferred menu of nearly 70 indicators and metrics that can be used to evaluate the return on investment in health research.8 Its reports, and others, emphasize that no single set of metrics suits everyone.5,8 This article focuses on a subset of measures that deal with the published body of knowledge, commonly called bibliometrics, and on a set of newer “alternative” publication metrics.
Bibliometrics
Simple measures include the number of articles published in peer-reviewed journals and the number of times an article has been cited. More sophisticated measures include the h index,9 defined as the largest number h such that a scholar has published h papers that have each been cited at least h times. Measures of journal influence, such as the journal impact factor (JIF)10 and the SCImago Journal Rank indicator,11 are calculated from data in bibliographic databases. The JIF is calculated by dividing the number of citations recorded in a given year in Thomson Reuters’ Journal Citation Reports to articles a journal published in the 2 previous years by the total number of articles the journal published in those 2 years. A JIF of 1.0 means that, on average, the articles published 1 or 2 years ago have each been cited 1 time. The JIF is widely recognized if not well understood, and the inappropriate use of this journal-level metric to measure the impact of an individual researcher or article has been discussed.10,12
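To make the arithmetic behind these two metrics concrete, the short sketch below computes an h index and a JIF-style ratio from raw counts. It is a minimal illustration only: the citation counts, journal figures, and function names are hypothetical assumptions, and the official Journal Citation Reports calculation applies additional rules about which items count as citable.

# A minimal, hypothetical sketch of the arithmetic behind the h index and the JIF.
# The numbers and function names are illustrative assumptions, not real data,
# and this is not the official Journal Citation Reports procedure.

def h_index(citation_counts):
    # Largest h such that the author has h papers each cited at least h times.
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def jif(citations_to_prior_2_years, articles_in_prior_2_years):
    # Citations received this year to articles the journal published in the
    # 2 previous years, divided by the number of articles published in those years.
    return citations_to_prior_2_years / articles_in_prior_2_years

print(h_index([25, 8, 5, 4, 1, 0]))  # 4: four papers each have at least 4 citations
print(jif(240, 200))                 # 1.2: on average, each recent article was cited 1.2 times

With these hypothetical figures, the scholar's fourth most cited paper has 4 citations but the fifth has only 1, so the h index is 4; the journal received 240 citations to the 200 articles it published in the previous 2 years, giving a JIF-style ratio of 1.2.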
Citations to 2 articles from The Seven Wonders of Family Medicine Research,13 a list of 7 influential family medicine research articles, demonstrate that bibliometrics are not absolutes: they depend on the source from which they are derived. Searches conducted on the same day found that articles from the list had quite different numbers of citations in Web of Science, Scopus, and Google Scholar (Table 1).14,15 Note also that peer judgment and citation counts are not well correlated: these 2 articles, both judged by peers to have high impact, received widely different numbers of citations.
Alternative metrics
Recently a wave of alternative metrics, or altmetrics, has appeared.16 These include statistics such as the number of times an article has been downloaded, viewed, or shared on social media platforms such as Twitter. Altmetrics give a more immediate view of impact than, for example, citation counts, which can take several years to accumulate. Their use reflects the reality that researchers increasingly work in an online environment and publish their research in nontraditional venues and formats, such as YouTube, SlideShare, blogs, and institutional repositories. Currently, online users can see altmetrics displayed beside articles in databases such as Scopus and in online journals such as BMJ and JAMA. Alternative metrics offer new ways to measure the social impact of research, the importance of which is increasingly acknowledged.12,17 However, they are not without problems, such as the ease with which they can be manipulated or gamed.16
Challenges
Why is it difficult to determine the impact of family medicine research with publication metrics? One reason is that family medicine research articles are scattered among family medicine journals—and even more widely scattered among non–family medicine journals. For example, a Scopus search conducted in 2012 found that the 250 most highly cited family medicine research articles were published in 71 different journals, with 47 journals publishing 1 article each (Figure 1). Only 5 of the 71 journals could be described as “family medicine” journals.18 Only 3 articles in the list of 7 influential family medicine papers previously mentioned were published in family medicine journals.13 This scattering of research articles makes finding relevant articles more problematic for researchers and authors, and impact measures might be affected. The scatter of articles is a problem for most family physicians, who do not have time to scan multiple journals for items of interest.
Establishing the influence of publications on the behaviour of individual clinicians is a complex and challenging task.17 The time lag between the publication of research findings and the appearance of their benefits adds to the difficulty: the longer the lag, the harder it is to track impact and attribute it to a particular project. One reason is that researchers do not necessarily record all their dissemination activities or maintain contact with users of their research. Another is that impact can occur at any stage of the research cycle. One research team that undertook this tracking challenge reported extensive outputs beyond the expected peer-reviewed publications, including effects on processes and policies, the production of new knowledge, and capacity building. In their study, the number of peer-reviewed publications was not always a good indicator of impact; one of the projects with the highest impact had no peer-reviewed publications.19
How does one measure the impact of seminal thinkers such as Dr Ian McWhinney? Certainly his articulation of the principles of family medicine contributed to the conceptual basis of the discipline, and when those principles were put into practice in a new clinical method, they had a direct impact on practice. The inadequacies of simple bibliometrics and altmetrics illustrate the adage that not everything that counts can be counted.
Notes
Hypothesis is a quarterly series in Canadian Family Physician, coordinated by the Section of Researchers of the College of Family Physicians of Canada. The goal is to explore clinically relevant research concepts for all CFP readers. Submissions are invited from researchers and nonresearchers. Ideas and submissions can be sent online at http://mc.manuscriptcentral.com/cfp or through the CFP website www.cfp.ca under “Authors and Reviewers.”
Footnotes
Competing interests: None declared
Copyright © the College of Family Physicians of Canada