Abstract
Objective To examine the consistency of the ranking of Canadian and US medical graduates who applied to Canadian family medicine (FM) residency programs between 2007 and 2013.
Design Descriptive cross-sectional study.
Setting Family medicine residency programs in Canada.
Participants All 17 Canadian medical schools allowed access to their anonymized program rank-order lists of students applying to FM residency programs submitted to the first iteration of the Canadian Resident Matching Service match from 2007 to 2013.
Main outcome measures The rank position of medical students who applied to more than 1 FM residency program on the rank-order lists submitted by the programs. Anonymized ranking data submitted to the Canadian Resident Matching Service from 2007 to 2013 by all 17 FM residency programs were used. Ranking data of eligible Canadian and US medical graduates were analyzed to assess the within-student and between-student variability in rank score. These covariance parameters were then used to calculate the intraclass correlation coefficient (ICC) for all programs. Program descriptions and selection criteria were also reviewed to identify sites with similar profiles for subset ICC analysis.
Results Between 2007 and 2013, the consistency of ranking by all programs was fair at best (ICC = 0.34 to 0.39). The consistency of ranking by larger urban-based sites was weak to fair (ICC = 0.23 to 0.36), and the consistency of ranking by sites focusing on training for rural practice was weak to moderate (ICC = 0.16 to 0.55).
Conclusion In most cases, there is a low level of consistency of ranking of students applying for FM training in Canada. This raises concerns regarding fairness, particularly in relation to expectations around equity and distributive justice in selection processes.
Selection into residency training positions in Canada is a competitive, high-stakes process. Annually, eligible medical students from Canada and the United States compete for their preferred discipline and location for training, and residency programs compete for their preferred applicants.
Canadian residency programs generate rank-order lists (ROLs) of candidates from scores derived from a review of standardized files and interviews. Students separately rank the programs they apply to based on their desire to enter a specific program or training site. The resulting Canadian Residency Matching Service (CaRMS) “match” maximizes, using a computerized algorithm, the ROLs of programs’ candidate preferences and the ROLs of students’ program preferences to allocate where and in which discipline individuals will commence residency training.1
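The CaRMS algorithm itself is proprietary and not described in this article, but the matching step it performs belongs to the family of applicant-proposing deferred-acceptance (stable-matching) algorithms. The sketch below is a minimal illustration of that idea only; the student names, program names, quotas, and ROLs are invented, and the real match additionally handles quotas per site, couples, and second-iteration vacancies.

```python
def deferred_acceptance(student_rols, program_rols, quotas):
    """Toy applicant-proposing deferred acceptance over two-sided rank-order lists.

    student_rols: {student: [programs, most preferred first]}
    program_rols: {program: [students, most preferred first]}
    quotas: {program: number of positions}
    """
    # Precompute each program's rank of each student (lower = more preferred).
    rank = {p: {s: i for i, s in enumerate(rol)} for p, rol in program_rols.items()}
    next_choice = {s: 0 for s in student_rols}  # pointer into each student's ROL
    tentative = {p: [] for p in program_rols}   # provisional acceptances per program
    free = list(student_rols)                   # students still seeking a match

    while free:
        s = free.pop()
        if next_choice[s] >= len(student_rols[s]):
            continue  # student has exhausted their ROL: goes unmatched
        p = student_rols[s][next_choice[s]]
        next_choice[s] += 1
        if s not in rank[p]:
            free.append(s)  # program did not rank this student; try next choice
            continue
        # Tentatively accept, then displace the least-preferred if over quota.
        tentative[p].append(s)
        tentative[p].sort(key=lambda x: rank[p][x])
        if len(tentative[p]) > quotas[p]:
            free.append(tentative[p].pop())
    return {p: sorted(ss) for p, ss in tentative.items()}


# Hypothetical example: two one-position programs, three applicants.
students = {"A": ["fm1", "fm2"], "B": ["fm1"], "C": ["fm2", "fm1"]}
programs = {"fm1": ["B", "A", "C"], "fm2": ["A", "C"]}
quotas = {"fm1": 1, "fm2": 1}
result = deferred_acceptance(students, programs, quotas)  # {"fm1": ["B"], "fm2": ["A"]}
```

Note that the final placement depends jointly on both sides' ROLs: student C is ranked by both programs yet matches nowhere, because each program fills its single position with a candidate it prefers.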
This process is resource intensive for programs and involves considerable expense and stress for students, who might travel across Canada for selection interviews. In 2013, Canadian medical graduates applied to an average of 14 programs.2 That year, the first iteration match rate for this group was 95% (2590 of 2735), and 86% (2355 of 2735) matched to their first-choice discipline. Of note, although this represents a high match rate for discipline of choice, only 63% (1710 of 2735) of graduates matched to their first-choice program location.3
Unfortunately, despite recommendations regarding quality-assurance mechanisms in selection4 and the publication of selection criteria by each Canadian residency program, concerns persist about the lack of transparency in how programs select residents.5 Students then often appear to rely on anecdote and myth6,7 when making important decisions about elective choices and about the timing and location of specific rotations relative to the CaRMS application process. Despite concerns about a perceived lack of transparency, few studies have explored students’ perceptions of fairness in selection processes for residency training in Canada, and no studies have examined the consistency of the selection processes employed by programs, either by discipline or as a whole.
In the United Kingdom, Patterson et al previously used organizational justice theory8–15 as a framework to study perceptions of fairness regarding the selection for general practice training.16 This framework is built upon the concepts of procedural justice (the fairness of the process) and distributive justice (the fairness of the outcome). As described by Patterson et al, the fairness of a selection process can be considered in relation to how much the selection process meets a set of rules around both procedural and distributive justice concepts.13–15
The outcome of the selection process for FM residency training in Canada (the location to which a student matches) depends on how each FM program ranks a student and how the student ranks each FM program. The position on a program’s ROL can therefore be considered an outcome of this part of the selection process. Under the distributive justice rule of equity (ie, that the outcome, here the ranking of a student, is based on the input or attributes that the student brings to the process), the consistency of ranking of the same student by different programs can be used, for the first time, to examine the fairness of this part of the selection process for FM residency training. The purpose of this study was to examine the consistency of ranking of eligible Canadian and US students applying to FM residency training sites in Canada in the first iteration of the CaRMS match. The hypothesis was that the degree of consistency would be high and that the outcome of this part of the selection process (the ranking of students) should therefore be perceived as fair, based on evidence of compliance with the distributive justice rule of equity.
METHODS
Data collection
Consent was obtained from the responsible senior faculty member (ie, the Associate or Assistant Dean of Postgraduate Medical Education) at all 17 Canadian medical schools to access the anonymized rank-order data of students applying to all FM residency programs in the first iteration of CaRMS from 2007 to 2013. As this study focused only on the consistency of ranking of the same students by different programs as an outcome measure to which distributive justice rules apply, no data were collected on how students ranked programs or on the actual selection processes employed by programs to make these ranking decisions. (Those data would apply more to a study focused on the student experience of a selection process and the degree of application of rules around procedural justice.)
All 17 medical schools have an FM residency program, and each program has a variable number of “sites” to which students can apply. In 2013, the 17 residency programs had 109 sites available among them. Information made available on the individual CaRMS program web-pages for each of the 109 sites was examined for location, population, and description of specific selection criteria or focus of training (eg, training for a career in a rural environment). Where this was unclear, the current or most recent FM Residency Program Director was contacted by K.W.J. for clarification. Five subsets of sites were created based on this analysis (Table 1).
Data from international medical graduate applicants, other than US graduates, were removed from the ranking data so that subsequent analysis focused only on eligible Canadian and US medical graduates who were ranked each year in the match process outlined above.3
Data analysis
The nature of the selection process imposed limitations on data analysis. Students apply to their program of interest; however, programs do not rank all the applicants, nor do they rank the same applicant pool. Furthermore, there is a “nesting” issue, where school ranks are nested within students. Typical methods of assessing reliability (κ, Cronbach α, or Krippendorff α) require more complete data (meaning a large number of schools ranking the same candidates) to calculate a reliability coefficient and do not take into consideration the nesting issue with the data.
To assess the consistency of the ranking of a student by different programs and sites, hierarchical linear modeling, specifically a random effects model, was used to calculate the variability in rank between and within students. Hierarchical linear modeling accounts for the issue of nesting (nonindependence of ranks within students), addresses the concern of missing data, and accommodates for the different number of rankings within students.17,18 The results of this analysis—the within-student and between-student covariance parameter estimates—were then used to calculate a consistency coefficient via an intraclass correlation coefficient (ICC). The ICC was calculated as follows:
ICC = σ²B / (σ²B + σ²W)

In this formula, σ²B is the between-student variability, σ²W is the within-student variability, and σ²B + σ²W is the total variance. If the within-student variability is larger than the between-student variability, the ICC will be low. Intraclass correlation coefficient estimates can vary between 0 and 1. Typically, an ICC of 0.29 or less indicates weak consistency; between 0.30 and 0.49, fair consistency; between 0.50 and 0.69, moderate consistency; and 0.70 or greater, strong consistency.19
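As a rough illustration of the coefficient defined above, a one-way random-effects ICC can be estimated from (student, rank score) pairs by a method-of-moments calculation that, like the hierarchical model, tolerates a different number of rankings per student. This is a simplified stand-in for the study's hierarchical linear modeling, not the actual analysis, and the toy rankings in the example are invented.

```python
from collections import defaultdict

def icc_one_way(ranks):
    """One-way random-effects ICC from (student, rank_score) pairs.

    Estimates sigma2_between and sigma2_within by method of moments and
    returns sigma2_between / (sigma2_between + sigma2_within).
    """
    groups = defaultdict(list)
    for student, score in ranks:
        groups[student].append(score)
    k = len(groups)                                   # number of students
    n_total = sum(len(v) for v in groups.values())    # total rankings
    grand_mean = sum(s for v in groups.values() for s in v) / n_total

    ss_between = sum(len(v) * (sum(v) / len(v) - grand_mean) ** 2
                     for v in groups.values())
    ss_within = sum((s - sum(v) / len(v)) ** 2
                    for v in groups.values() for s in v)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    # Effective group size, adjusted for unbalanced numbers of rankings.
    n0 = (n_total - sum(len(v) ** 2 for v in groups.values()) / n_total) / (k - 1)

    sigma2_w = ms_within
    sigma2_b = max(0.0, (ms_between - ms_within) / n0)
    return sigma2_b / (sigma2_b + sigma2_w)


# Hypothetical rankings of 3 students by 2 sites each.
consistent = [("A", 1), ("A", 2), ("B", 9), ("B", 10), ("C", 5), ("C", 5)]
inconsistent = [("A", 1), ("A", 10), ("B", 9), ("B", 2), ("C", 5), ("C", 6)]
```

When the two sites agree on each student's relative position, between-student variance dominates and the ICC approaches 1; when they disagree, within-student variance dominates and the ICC falls toward 0, the pattern the thresholds above are describing.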
Initial analysis reviewed how students who applied to more than 1 of any of the 109 FM residency sites across Canada in 2013 were ranked by each site or program. This analysis was repeated for all sites in Canada between 2007 and 2013. Consistency of ranking of students for each of the 5 subsets between 2007 and 2013 was examined, searching for consistency of student ranking by programs or sites that appeared to be seeking similar, preferred selection attributes (eg, rural training focus), or where the type or location of training site appeared similar (eg, large urban-based site or same province).
The study received ethics approval from the University of Calgary Conjoint Health Research Ethics Board.
RESULTS
Analysis of the 2013 CaRMS match data revealed fair consistency in how students were ranked across all sites to which they applied (ICC = 0.37). The same analysis for 2007 to 2012 showed a similar fair consistency in ranking (ICC = 0.34 to 0.39) (Table 2). With the exception of the western province rural FM group, and one 2007 ICC, the ICC values in each of the subsets over the years demonstrate weak to fair consistency in ranking by schools of the same students applying to more than 1 FM residency site (ICC = 0.16 to 0.44). The western province rural FM group demonstrated improved consistency in ranking over recent years to a moderate consistency in 2013 (ICC = 0.55). For the 3-schools group, consistency decreased from fair in 2007 (ICC = 0.44) to very weak in 2013 (ICC = 0.17) (Table 3).
DISCUSSION
The purpose of this study was to examine the consistency of ranking of eligible Canadian and US students applying to FM residency training sites in Canada in the first iteration of the CaRMS match. The distributive justice rule of equity was used to frame the project; compliance with this rule would provide evidence of a fair process in how students are ranked across programs. The focus of the study was therefore on the outcome (consistency of student rank position) and not the process of selection itself.
The results show that for students who applied to more than 1 FM residency program or site across Canada, the consistency of student ranking by these sites or programs was low—ie, the same student had a low likelihood of being ranked in the same relative position on the ROLs of separate FM programs or sites (ICC = 0.34 to 0.39).
Subset analyses reveal similar, mostly low, levels of consistency even when examining how students who apply to sites with similar profiles or stated training focus (eg, rural FM) are ranked. This again indicates a low likelihood of students applying to programs or sites in each of these groups being ranked similarly by more than 1 site.
Although these results confirm a low level of consistency of ranking of the same students, one could argue that, as more students choose FM as their discipline of choice,3,20 many FM programs in Canada have moved from recruitment mode to selection mode. In this more competitive environment, programs might consider that they should be permitted to rank their applicants in any way they believe is appropriate and that it does not matter how different programs, using their own selection criteria and processes, rank the same students.
In reviewing these results, given the low levels of consistency, the distributive justice rule of equity in the ranking of students is not being met. The question then is whether the ranking process for selection for FM residency training in Canada is fair. Although programs might point to local selection factors that they view as unique to them, what level of inconsistency in the ranking of the same students should be tolerated before the processes employed by programs, and the resulting outcomes, are viewed as unfair? Should students applying to FM residency training in Canada expect to be ranked similarly by different sites, particularly when the sites might be similar; when the sites publish similar or exactly the same selection criteria; when standards for training are the same for all FM training sites21; when the expectation and intent is to train family physicians who are prepared to enter early professional practice anywhere in Canada; and when training is funded from the public purse? Organizational justice theory and the concept of distributive justice suggest that where the outcome of a selection process is perceived as not being aligned with the input that applicants bring to the process (ie, personal attributes), the process will be viewed as unfair. Where the stakes are high and students do not always match to their preferred site or program, there is considerable potential for a challenge to the processes employed by programs in building their ROLs, a potential that perhaps should not be ignored.
Strengths and limitations
This is the first study of the consistency of ranking of applicants to FM residency training in Canada. It is strengthened by the provision of ranking data over a 6-year period by all 17 Canadian medical schools and by the clarifying information on specific sites provided by current and recent FM residency program directors. However, the study was not designed to gather data on the actual selection processes employed by programs and sites. Only higher-level criteria (rural site, larger urban-based site, location) were used to build subsets for analysis. It is possible that a more detailed analysis focused on lower-level criteria (eg, specific selection attributes) would identify a higher level of consistency among sites expressing the same individual attributes. We did not review CaRMS webpages for stated selection criteria for sites open to applicants between 2007 and 2012; it is possible that the selection criteria in some of the sites included in the subsets changed over this time, although this was considered unlikely.
Conclusion
This study found a low level of consistency of ranking of the same students applying for FM residency training by different programs or sites. This indicates a failure to meet the distributive justice rule of equity and brings into question the fairness of the overall selection process for FM residency training in Canada.
There is a need to examine why this is the case and then to modify approaches to selection so that processes are more consistent, as well as more valid, and ultimately fairer to every medical student applying for FM residency training. Similar work regarding resident selection has already been completed in the United Kingdom, and it is perhaps now time for this work to be carried out in Canada.
Notes
Editor’s key points
▸ This study examined the consistency of ranking of eligible Canadian and US students applying to family medicine (FM) residency training sites in Canada in the first iteration of the Canadian Resident Matching Service match. The distributive justice rule of equity was used to frame the project; compliance with this rule would provide evidence of a fair process for how students are ranked across programs. The focus of the study was therefore on the outcome (consistency of student rank position) and not the process of selection itself.
▸ This study found that the likelihood of the same medical student applying to more than 1 FM residency program being ranked similarly by each program was low.
▸ As the intent of all Canadian FM residency programs is to train residents to practise anywhere in Canada, and therefore desired selection attributes should be similar, such a low level of consistency in ranking strongly suggests a failure to meet the distributive justice rule of equity, where the outcome of a selection process is dependent upon the attributes that each applicant brings to the process.
Footnotes
Contributors
Dr Wycliffe-Jones, Dr Schipper, and Ms Robinson contributed to the concept of the study. Drs Wycliffe-Jones, Hecker, Schipper, and Topps and Ms Robinson contributed to the study design. Drs Wycliffe-Jones and Topps contributed to data acquisition. Dr Hecker and Ms Abedin contributed to data analysis and interpretation. Dr Wycliffe-Jones prepared the first draft of the article and all authors critically revised the subsequent drafts. All authors approved revisions and the final version of the manuscript.
Competing interests
None declared
This article has been peer reviewed.
Copyright © the College of Family Physicians of Canada