Abstract
Objective To compare the characteristics and responses of patients completing a patient experience survey accessed online after e-mail notification or delivered in the waiting room using tablet computers.
Design Cross-sectional comparison of 2 methods of delivering a patient experience survey.
Setting A large family health team in Toronto, Ont.
Participants Family practice patients aged 18 or older who completed an e-mail survey between January and June 2014 (N = 587) or who completed the survey in the waiting room in July and August 2014 (N = 592).
Main outcome measures Comparison of respondent demographic characteristics and responses to questions related to access and patient-centredness.
Results Patients responding to the e-mail survey were more likely to live in higher-income neighbourhoods (P = .0002), be between the ages of 35 and 64 (P = .0147), and be female (P = .0434) compared with those responding to the waiting room survey; there were no significant differences related to self-rated health. The differences in neighbourhood income were noted despite minimal differences between patients with and without e-mail addresses included in their medical records. There were few differences in responses to the survey questions between the 2 survey methods and any differences were explained by the underlying differences in patient demographic characteristics.
Conclusion Our findings suggest that respondent demographic characteristics might differ depending on the method of survey delivery, and these differences might affect survey responses. Methods of delivering patient experience surveys that require electronic literacy might underrepresent patients living in low-income neighbourhoods. Practices should consider evaluating for nonresponse bias and adjusting for patient demographic characteristics when interpreting survey results. Further research is needed to understand how primary care practices can optimize electronic survey delivery methods to survey a representative sample of patients.
Measuring the patient experience is an integral step toward understanding and improving the quality of primary care.1–5 As part of quality improvement efforts, family practices are increasingly being asked to survey patients about their experiences with care. In Ontario, patient experience surveys are now mandatory for all family health teams (FHTs) and community health centres.6 However, finding the capacity to survey patients regularly can be challenging in a busy family practice. Traditional methods of collecting patient experience data, including surveying patients in clinic waiting rooms, via telephone, or by mail, can be costly and burdensome to primary care practices.
Electronic surveys sent via e-mail offer a low-cost alternative for frequent patient surveys. However, access to digital resources, including the Internet, is unequal across demographic and socioeconomic groups.7–9 Additionally, how data are collected can influence the content of survey responses. These mode effects, which occur when data obtained via one mode of collection differ from those obtained by another, have been described in the literature.2,10–18 The method of data collection can also influence data quality and response rate19 owing to several issues, including differences in sampling frames and nonresponse.19,20 There is little literature to guide family practices on how patient characteristics and responses might differ when patient experience surveys are delivered by different methods.
Our objective was to assess whether patients responding to an online survey delivered via e-mail were representative of a practice population and whether their characteristics and responses differed from those of patients surveyed in clinic waiting rooms.
METHODS
Setting and context
The St Michael’s Hospital Academic Family Health Team (SMHAFHT) is a large, interprofessional primary care organization with 5 clinics serving 35 000 enrolled patients in the inner city of Toronto, Ont. The SMHAFHT serves a diverse population including urban professionals and families who work or live downtown, as well as patients who are homeless, live in poverty, or are new to Canada. The SMHAFHT developed a patient experience survey as part of their quality improvement work, based on questions in the Commonwealth Fund International Health Policy Survey.21 The survey captured patient perspectives about access and patient-centredness through questions with Likert-scale response options. The survey also asked for sociodemographic information, including age, gender, and postal code, as well as self-rated health (the complete survey is available on request from the corresponding author).
In January 2013, the SMHAFHT began to collect patient e-mail addresses as part of routine demographic data. Since January 2014, all patients with e-mail addresses on file have been e-mailed a link to the survey during their birth month. The survey is hosted online by FluidSurveys, and patients complete it on their home devices. To determine if patients responding to the online survey were representative of our patient population, we distributed the same survey in clinic waiting rooms during July and August 2014. Patients registering at the clinic were systematically approached by summer students, who provided interested patients with an electronic tablet for accessing the online survey. Patients who could not communicate in English, had advanced dementia, or self-reported already completing the survey were not eligible to participate in the waiting room survey.
Design and participants
We conducted a cross-sectional analysis comparing respondent demographic characteristics and survey responses from e-mail surveys completed between January 1, 2014, and June 30, 2014 (N = 594), and surveys completed in the waiting room between July 1, 2014, and August 31, 2014 (N = 606). For both surveys, we excluded patients younger than 18 years of age owing to potential inconsistencies in data quality for this age group (e-mail survey n = 7; waiting room survey n = 14). We also obtained demographic data for all patients enrolled in the SMHAFHT (as of August 2014) from the electronic medical record, including whether patients had e-mail addresses on file.
Analysis
We considered any survey that contained only blank responses to be incomplete and excluded these from the analysis. We used the Statistics Canada Postal Code Conversion File to convert patient postal codes to the corresponding neighbourhood income quintile based on census data.22 We compared e-mail and waiting room respondents’ demographic characteristics (age, gender, income quintile, and self-rated health) using χ² tests and compared these characteristics with those of all patients enrolled in the SMHAFHT. We used χ² tests to compare the demographic profile of enrolled patients with and without e-mail addresses on file to determine if the sampling frame for the e-mail survey was representative of the entire patient population.
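To illustrate the type of comparison described above, the following is a minimal sketch in Python (the authors’ analyses were conducted in SAS, version 9.4); the data and column names are hypothetical, and the linkage of postal codes to income quintiles through the Postal Code Conversion File is not shown.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical respondent-level data: one row per completed survey.
# In the actual analysis, income_quintile would come from linking each
# patient's postal code to the Statistics Canada Postal Code Conversion
# File (lookup not shown).
df = pd.DataFrame({
    "mode": ["email"] * 6 + ["waiting_room"] * 6,
    "income_quintile": [1, 2, 3, 4, 5, 5, 1, 1, 2, 3, 4, 5],
})

# Cross-tabulate delivery mode against income quintile and test for a
# difference in distributions with a chi-square test.
table = pd.crosstab(df["mode"], df["income_quintile"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, df = {dof}, P = {p:.4f}")
```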
We analyzed responses to 3 questions related to access to care and 3 questions related to patient-centredness. We dichotomized survey responses (eg, always or often versus sometimes, rarely, or never) and compared responses received from e-mail and waiting room delivery using χ² tests. Responses from patients who declined or refused to answer were removed from the denominators. We used multivariate logistic regression models to assess differences in responses from the 2 delivery methods before and after adjustment for patient demographic characteristics. We performed a sensitivity analysis to assess whether differences in e-mail and waiting room responses varied by clinic, stratifying the analysis to compare differences in responses for all questions for 2 clinics: the largest clinic (clinic A), with a more affluent patient population, and a smaller clinic (clinic D), with a less affluent patient population. All analyses were conducted using SAS, version 9.4.
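As a hedged illustration of the modelling step, the sketch below dichotomizes a response and fits unadjusted and adjusted logistic regression models in Python; all variable names and data are hypothetical, and the authors’ actual models were fit in SAS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Hypothetical survey data: responses already dichotomized as in the paper
# (always or often = 1 vs sometimes, rarely, or never = 0); declined or
# refused answers would be dropped before modelling.
df = pd.DataFrame({
    "same_day_access": rng.integers(0, 2, n),
    "mode": rng.choice(["email", "waiting_room"], n),
    "age_group": rng.choice(["18-34", "35-49", "50-64", "65+"], n),
    "gender": rng.choice(["female", "male"], n),
    "income_quintile": rng.choice([1, 2, 3, 4, 5], n),
})

# Unadjusted model: response as a function of delivery mode only.
unadjusted = smf.logit("same_day_access ~ C(mode)", data=df).fit(disp=False)

# Adjusted model: add respondent demographic characteristics.
adjusted = smf.logit(
    "same_day_access ~ C(mode) + C(age_group) + C(gender) + C(income_quintile)",
    data=df,
).fit(disp=False)

print(unadjusted.summary())
print(adjusted.summary())
```

Comparing the mode coefficient before and after adjustment indicates whether an apparent mode effect is explained by differences in who responded.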
This initiative was reviewed by institutional authorities at St Michael’s Hospital and deemed to require neither research ethics board approval nor written informed consent from participants.
RESULTS
Response rates for the surveys delivered via e-mail and in the waiting room are summarized in Figure 1. Patients who completed either the e-mail or the waiting room survey were similar with respect to self-rated health (Table 1). However, patients responding to the e-mail survey had a different age (P = .0147) and gender distribution (P = .0434) and were more likely to live in higher-income neighbourhoods (P = .0002) than those who completed the waiting room survey. When compared with the demographic profile of all patients enrolled in the SMHAFHT, the waiting room survey overrepresented those aged 18 to 34, while the e-mail survey overrepresented respondents aged 50 to 64 (Figure 2). Female patients were overrepresented by the e-mail survey (Figure 2). Patients living in low-income neighbourhoods were underrepresented in the e-mail survey but overrepresented in the waiting room survey (Figure 2), whereas those living in high-income neighbourhoods were overrepresented among e-mail survey respondents.
As of June 30, 2014, 17.0% of enrolled patients aged 18 or older had e-mail addresses on file. Patients between the ages of 25 and 64 were more likely to have an e-mail address on file (P < .001), as were female patients (P < .001), but no differences were seen by income quintile (P = .0971; Figure 3).
Responses were similar between the e-mail and waiting room survey except for 2 questions (Table 2): patients who responded to the e-mail survey were less likely to report being able to see a provider on the same or next day (53.3% vs 60.2%, P = .0265) and more likely to report that their health care providers always or often spent enough time with them (89.2% vs 85.1%, P = .0457). After adjustment for patient demographic characteristics, there were no significant differences between e-mail and waiting room survey responses.
Our stratified analysis found that unadjusted differences in e-mail and waiting room responses were consistent between clinic A and clinic D except for the question relating to same- or next-day access. We re-ran the multivariate logistic regression model for same- or next-day access to include an interaction term between clinic and type of survey. This interaction term was not significant (P = .0638), suggesting that the relationship between survey method and responses did not vary by clinic.
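For readers interested in how such an interaction test can be specified, a minimal sketch in Python follows (hypothetical data and variable names; the authors used SAS): the mode-by-clinic term captures whether the effect of survey method differs between clinics.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Hypothetical data with a clinic indicator (the paper compares clinics A and D).
df = pd.DataFrame({
    "same_day_access": rng.integers(0, 2, n),
    "mode": rng.choice(["email", "waiting_room"], n),
    "clinic": rng.choice(["A", "D"], n),
})

# The C(mode):C(clinic) interaction term tests whether the association
# between survey mode and response varies by clinic; a nonsignificant
# term is consistent with no clinic-dependent mode effect.
model = smf.logit("same_day_access ~ C(mode) * C(clinic)", data=df).fit(disp=False)
print(model.summary())
```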
DISCUSSION
We found that patients responding to a primary care patient experience survey delivered via e-mail were more likely to be between the ages of 35 and 64, be female, and live in high-income neighbourhoods compared with patients responding to the same survey conducted in the waiting room. However, we found minimal differences in neighbourhood income quintiles between patients with and without e-mail addresses in their medical records. There were few differences in responses to the survey questions between the 2 survey methods, and any differences were explained by the underlying differences in patient demographic characteristics.
Electronic surveys delivered to patients by e-mail offer practices a convenient, low-cost method of regularly surveying patients to improve quality of care; however, our findings support concerns that they might underrepresent patients living in low-income neighbourhoods. In our setting, underrepresentation of these patients did not appear to stem from differences in whether patients provided the practice with an e-mail address (sampling bias) but more likely from a lower probability of responding to the e-mail survey invitation among patients living in low-income neighbourhoods (nonresponse bias). Differences in computer literacy and access to Internet resources might be contributing factors7–9,23,24; however, others have also reported low response rates across numerous survey methods among populations with low income.25–27
We also found differences in the age and gender distribution of respondents to the e-mail survey compared with respondents to the waiting room survey. These differences are likely due to a combination of sampling and nonresponse bias. For example, female patients were more likely than expected to have e-mail addresses on file and made up a greater proportion of e-mail survey respondents than of waiting room respondents. In contrast, patients aged 50 to 64 were less likely than expected to have e-mail addresses on file, yet made up a greater proportion of e-mail survey respondents than of waiting room respondents. The response rate to the e-mail survey was much lower than that of the survey conducted in the waiting room, which is a known limitation of surveys delivered by e-mail.19 Low response rates increase the likelihood of nonresponse bias and can limit the generalizability of survey results.19
While respondents to the waiting room survey were more likely to reside in low-income neighbourhoods compared with e-mail survey respondents, they were also more likely to report being able to see a provider the same or next day when they needed care. This difference in experience of access to care might reflect differences inherent to the 2 survey delivery methods. Patients in the waiting room have successfully navigated the health care system to obtain an appointment with a provider, whereas e-mail survey respondents might not have visited the clinic for several months and might be less familiar with clinic procedures. In addition, the confluence of respondents living in low-income neighbourhoods and better perceived access might reflect our clinic setting, where we have designed services to be accessible to marginalized populations (eg, by accommodating walk-in patients with urgent concerns). We also found that respondents to the waiting room survey were less likely to report that their care provider spent enough time with them compared with e-mail survey respondents. Patients who perceive their appointments as too short believe this is detrimental to their treatment and to their relationships with their providers.28 Satisfaction with the length of primary care visits has been reported to be associated with age, race, health status, and visit type, although education level was not found to be significant.29
Differences in patient experience of same- or next-day access and time spent during the appointment disappeared after adjustment for respondent demographic characteristics. Patient experience surveys in both acute30,31 and primary care32,33 settings have found differences in responses across survey mode and patient mix, and suggest that valid comparisons across institutions require adjustment. However, most primary care practices likely do not have the resources to statistically adjust survey responses.
Limitations
This study has some limitations. First, the timing of the surveys, while chosen for practicality, differed between modes and might affect the comparability of the 2 delivery methods. Patients in the waiting room were surveyed directly before an appointment, whereas patients receiving the e-mail survey had not necessarily had recent appointments with their providers. Survey timing has been shown to be an important factor in assessing patient experience; patient evaluations are poorer when measured longer after the encounter.34–42 Second, the survey was conducted anonymously, so we were not able to analyze the demographic characteristics of nonrespondents to understand how nonresponse bias might have affected the generalizability of the survey findings. The demographic distribution of e-mail respondents appeared to differ substantially from that of patients with e-mail addresses on file, so nonresponse bias is likely relevant here. Both survey modes were relatively technologically intensive, so they might have excluded elderly patients or those with less education or comfort with technology.43–45 Anecdotally, there were some technical issues with the tablets used in the waiting room survey, with some patients needing assistance to complete the surveys; most incomplete surveys came from the waiting room. Third, we used the patient’s neighbourhood income quintile22 as a marker of socioeconomic status, which does not necessarily reflect a patient’s actual income, especially in urban neighbourhoods undergoing gentrification.46 In addition, neighbourhood income quintiles are not assigned to people with no fixed address, so this measure cannot be used to assess homeless or underhoused patients. Finally, while we had a reasonable sample size for the univariate analyses, we might not have had sufficient power to detect differences in the multivariate logistic regression.
Conclusion
Patient experience surveys distributed via e-mail might underrepresent patients from low-income neighbourhoods. The method of survey delivery can influence who responds, which might in turn affect survey responses. Practices should consider evaluating for nonresponse bias and adjusting for patient demographic characteristics when interpreting survey results. Further research is needed to understand how primary care practices can optimize electronic survey delivery methods to generate responses from a representative sample of patients.
Acknowledgments
We thank Madison Giles, Joshua Feldman, Lisa Miller, Sharon Wiltshire, and the St Michael’s Hospital Academic Family Health Team Quality Steering Committee for their contributions.
Notes
EDITOR’S KEY POINTS
Measuring the patient experience is an integral step toward understanding and improving the quality of primary care. As part of quality improvement efforts, family practices are increasingly being asked to survey patients about experiences with care.
It is important for primary care practices to recognize that the method of survey delivery can influence which patients respond, which might in turn affect responses. Low-cost alternatives to collecting patient experience data, such as electronic surveys sent through e-mail, might underrepresent patients who live in low-income neighbourhoods. This study compared results and patient characteristics for respondents to surveys delivered by e-mail and in the waiting room.
Responses were similar between the surveys except that patients who responded to the e-mail survey were less likely to report being able to see a provider on the same or next day (53.3% vs 60.2%, P = .0265) and more likely to report that their health care providers always or often spent enough time with them (89.2% vs 85.1%, P = .0457). These differences were explained by the underlying differences in patient demographic characteristics. Practices should consider evaluating their patient experience survey results for nonresponse bias and adjusting for patient characteristics when interpreting survey results.
Footnotes
This article has been peer reviewed.
Contributors
Both authors contributed to the concept and design of the study, data gathering, and interpretation. Dr Slater conducted the data analyses and drafted the manuscript. Both authors critically reviewed the manuscript and approved it for publication.
Competing interests
None declared
Copyright © the College of Family Physicians of Canada