Research

Measuring the patient experience in primary care

Comparing e-mail and waiting room survey delivery in a family health team

Morgan Slater and Tara Kiran

Canadian Family Physician December 2016; 62 (12): e740-e748.

Morgan Slater, MSc PhD, is Senior Research Associate in the Department of Family and Community Medicine at St Michael’s Hospital in Toronto, Ont. Tara Kiran, MD CCFP MSc, is a family physician at St Michael’s Hospital; Assistant Professor and Clinician Investigator in the Department of Family and Community Medicine at the University of Toronto; Associate Scientist with the Centre for Research on Inner City Health at the Li Ka Shing Knowledge Institute of St Michael’s Hospital; and Adjunct Scientist at the Institute for Clinical Evaluative Sciences.

Correspondence: tara.kiran@utoronto.ca

Abstract

Objective To compare the characteristics and responses of patients completing a patient experience survey accessed online after e-mail notification or delivered in the waiting room using tablet computers.

Design Cross-sectional comparison of 2 methods of delivering a patient experience survey.

Setting A large family health team in Toronto, Ont.

Participants Family practice patients aged 18 or older who completed an e-mail survey between January and June 2014 (N = 587) or who completed the survey in the waiting room in July and August 2014 (N = 592).

Main outcome measures Comparison of respondent demographic characteristics and responses to questions related to access and patient-centredness.

Results Patients responding to the e-mail survey were more likely to live in higher-income neighbourhoods (P = .0002), be between the ages of 35 and 64 (P = .0147), and be female (P = .0434) compared with those responding to the waiting room survey; there were no significant differences related to self-rated health. The differences in neighbourhood income were noted despite minimal differences between patients with and without e-mail addresses included in their medical records. There were few differences in responses to the survey questions between the 2 survey methods and any differences were explained by the underlying differences in patient demographic characteristics.

Conclusion Our findings suggest that respondent demographic characteristics might differ depending on the method of survey delivery, and these differences might affect survey responses. Methods of delivering patient experience surveys that require electronic literacy might underrepresent patients living in low-income neighbourhoods. Practices should consider evaluating for nonresponse bias and adjusting for patient demographic characteristics when interpreting survey results. Further research is needed to understand how primary care practices can optimize electronic survey delivery methods to survey a representative sample of patients.

Measuring the patient experience is an integral step toward understanding and improving the quality of primary care.1–5 As part of quality improvement efforts, family practices are increasingly being asked to survey patients about their experiences with care. In Ontario, patient experience surveys are now mandatory for all family health teams (FHTs) and community health centres.6 However, finding the capacity to survey patients regularly can be challenging in a busy family practice. Traditional methods of collecting patient experience data, including surveying patients in clinic waiting rooms, via telephone, or by mail, can be costly and burdensome to primary care practices.

Electronic surveys sent via e-mail offer a low-cost alternative for frequent patient surveys. However, access to digital resources, including the Internet, is unequal across demographic and socioeconomic groups.7–9 Additionally, how data are collected can influence the content of survey responses. These mode effects, which occur when data obtained via one mode of collection differ from data obtained via another, have been described in the literature.2,10–18 The method of data collection can also influence data quality and response rate19 owing to several issues, including differences in sampling frames and nonresponse.19,20 There is little literature to guide family practices on how patient characteristics and responses might differ when patient experience surveys are delivered by different methods.

Our objective was to assess whether patients responding to an online survey delivered via e-mail were representative of a practice population and whether their characteristics and responses differed from those of patients surveyed in clinic waiting rooms.

METHODS

Setting and context

The St Michael’s Hospital Academic Family Health Team (SMHAFHT) is a large, interprofessional primary care organization with 5 clinics serving 35 000 enrolled patients in the inner city of Toronto, Ont. The SMHAFHT serves a diverse population including urban professionals and families who work or live downtown, as well as patients who are homeless, live in poverty, or are new to Canada. The SMHAFHT developed a patient experience survey as part of their quality improvement work, based on questions in the Commonwealth Fund International Health Policy Survey.21 The survey captured patient perspectives about access and patient-centredness through questions with Likert-scale response options. The survey also asked for sociodemographic information, including age, gender, and postal code, as well as self-rated health (the complete survey is available on request from the corresponding author).

In January 2013, the SMHAFHT began to collect patient e-mail addresses as part of routine demographic data. Since January 2014, all patients with e-mail addresses on file have been e-mailed a link to the survey during their birth month. The survey is hosted online by FluidSurveys, and patients complete the survey on their home devices. To determine if patients responding to the online survey were representative of our patient population, we distributed the same survey in clinic waiting rooms during July and August 2014. Patients registering at the clinic were systematically approached by summer students, who provided interested patients with an electronic tablet for completing the online survey. Patients who could not communicate in English, had advanced dementia, or reported having already completed the survey were not eligible to participate in the waiting room survey.

Design and participants

We conducted a cross-sectional analysis comparing respondent demographic characteristics and survey responses from e-mail surveys completed between January 1, 2014, and June 30, 2014, (N = 594) and surveys completed in the waiting room between July 1, 2014, and August 31, 2014 (N = 606). For both surveys, we excluded patients younger than 18 years of age owing to potential inconsistencies in data quality for this age group (e-mail survey n = 7; waiting room survey n = 14). We also obtained demographic data for all patients enrolled in the SMHAFHT (as of August 2014) from the electronic medical record, including whether patients had e-mail addresses on file.

Analysis

We considered any survey that contained only blank responses to be incomplete and excluded these from the analysis. We used the Statistics Canada Postal Code Conversion File to convert patient postal codes to the corresponding neighbourhood income quintile based on census data.22 We compared e-mail and waiting room respondents’ demographic characteristics (age, gender, income quintile, and self-rated health) using χ² tests and compared these characteristics to those of all patients enrolled in the SMHAFHT. We used χ² tests to compare the demographic profile of enrolled patients with and without e-mail addresses on file to determine if the sampling frame for the e-mail survey was representative of the entire patient population.
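
As a rough illustration of this step (the study itself used SAS, version 9.4), the following Python sketch runs a χ² test on a survey mode-by-age contingency table; the counts are hypothetical placeholders, not study data.

```python
# Hypothetical sketch of the chi-square comparison of respondent
# demographic characteristics; counts are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: survey mode (e-mail, waiting room).
# Columns: age bands 18-34, 35-49, 50-64, 65 and older.
counts = [
    [112, 201, 198, 76],   # e-mail respondents (hypothetical)
    [187, 170, 141, 94],   # waiting room respondents (hypothetical)
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p:.4f}")
```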

We analyzed responses to 3 questions related to access to care and 3 questions related to patient-centredness. We dichotomized survey responses (eg, always or often versus sometimes, rarely, or never) and compared responses received from e-mail and waiting room delivery using χ² tests. Respondents who declined or refused to answer a question were removed from that question’s denominator. We used multivariate logistic regression models to assess differences in responses from the 2 delivery methods before and after adjustment for patient demographic characteristics. We performed a sensitivity analysis to assess whether differences in e-mail and waiting room responses varied by clinic, stratifying the analysis to compare differences in responses for all questions for 2 clinics: the largest clinic (clinic A) with a more affluent patient population, and a smaller clinic (clinic D) with a less affluent patient population. All analyses were conducted using SAS, version 9.4.
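
A minimal sketch of the dichotomization and adjustment steps, written here in Python with statsmodels rather than the SAS used in the study, might look like the following; the column names and synthetic data are hypothetical.

```python
# Hypothetical sketch: dichotomize a Likert item, then compare survey
# modes before and after adjusting for respondent demographics.
# Synthetic data stand in for the real survey responses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "mode": rng.choice(["email", "waiting_room"], n),
    "age_group": rng.choice(["18-34", "35-49", "50-64", "65+"], n),
    "gender": rng.choice(["female", "male"], n),
    "income_quintile": rng.integers(1, 6, n),
    "response": rng.choice(
        ["always", "often", "sometimes", "rarely", "never"], n),
})

# Dichotomize: "always"/"often" -> 1, otherwise 0
# (declined or refused responses would be dropped before this step).
df["positive"] = df["response"].isin(["always", "often"]).astype(int)

# Unadjusted model: survey mode as the only predictor.
unadjusted = smf.logit("positive ~ C(mode)", data=df).fit(disp=False)

# Adjusted model: add age group, gender, and income quintile.
adjusted = smf.logit(
    "positive ~ C(mode) + C(age_group) + C(gender) + C(income_quintile)",
    data=df,
).fit(disp=False)

# Compare the mode effect (log odds) before and after adjustment.
print(unadjusted.params["C(mode)[T.waiting_room]"])
print(adjusted.params["C(mode)[T.waiting_room]"])
```

If adjustment attenuates the mode coefficient toward zero, the apparent mode difference is being explained by demographic mix, which is the pattern this study reports.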

This initiative was reviewed by institutional authorities at St Michael’s Hospital and deemed to require neither research ethics board approval nor written informed consent from participants.

RESULTS

Response rates for the surveys delivered via e-mail and in the waiting room are summarized in Figure 1. Patients who completed either the e-mail or the waiting room survey were similar with respect to self-rated health (Table 1). However, patients responding to the e-mail survey had a different age (P = .0147) and gender distribution (P = .0434) and were more likely to live in higher-income neighbourhoods (P = .0002) than those who completed the waiting room survey. When compared with the demographic profile of all patients enrolled in the SMHAFHT, the waiting room survey overrepresented those aged 18 to 34, while the e-mail survey overrepresented respondents aged 50 to 64 (Figure 2). Female patients were overrepresented by the e-mail survey (Figure 2). Patients living in low-income neighbourhoods were underrepresented in the e-mail survey but overrepresented in the waiting room survey (Figure 2), whereas those living in high-income neighbourhoods were overrepresented among e-mail survey respondents.

Figure 1. Participation rates for e-mail and waiting room surveys
*Includes surveys containing only blank responses.
†Surveys completed by patients younger than 18 years were excluded.

Figure 2. Demographic distribution of e-mail survey respondents, waiting room survey respondents, and all patients enrolled in the SMHAFHT: A) age distribution, B) sex distribution, and C) income quintile
FHT—family health team, SMHAFHT—St Michael’s Hospital Academic FHT.

Table 1. Comparison of respondent demographic characteristics for e-mail and waiting room surveys

As of June 30, 2014, 17.0% of enrolled patients aged 18 or older had e-mail addresses on file. Patients between the ages of 25 and 64 were more likely to have an e-mail address on file (P < .001), as were female patients (P < .001), but no differences were seen by income quintile profile (P = .0971; Figure 3).

Figure 3. Age, gender, and income quintile distribution for all patients enrolled in the SMHAFHT with e-mail addresses available in the medical record compared with those without available e-mail addresses
SMHAFHT—St Michael’s Hospital Academic Family Health Team.

Responses were similar between the e-mail and waiting room survey except for 2 questions (Table 2): patients who responded to the e-mail survey were less likely to report being able to see a provider on the same or next day (53.3% vs 60.2%, P = .0265) and more likely to report that their health care providers always or often spent enough time with them (89.2% vs 85.1%, P = .0457). After adjustment for patient demographic characteristics, there were no significant differences between e-mail and waiting room survey responses.

Table 2. Comparison of responses between e-mail and waiting room surveys

Our stratified analysis found that unadjusted differences in e-mail and waiting room responses were consistent between clinic A and clinic D except for the question relating to same- or next-day access. We re-ran the multivariate logistic regression model for same- or next-day access to include an interaction term between clinic and type of survey. This interaction term was non-significant (P = .0638), indicating that the relationship between survey method and responses did not vary by clinic.
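
A minimal sketch of such an interaction check, again in Python with hypothetical synthetic data (the study used SAS, and its model also adjusted for demographic covariates, omitted here for brevity), could look like this:

```python
# Hypothetical sketch: test whether the survey mode effect on same- or
# next-day access differs between clinics via a mode-by-clinic
# interaction term. All data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "mode": rng.choice(["email", "waiting_room"], n),
    "clinic": rng.choice(["A", "D"], n),
    "same_day_access": rng.integers(0, 2, n),  # already dichotomized
})

# The mode-by-clinic interaction term carries the test of interest.
model = smf.logit(
    "same_day_access ~ C(mode) * C(clinic)", data=df
).fit(disp=False)

# A nonsignificant interaction P value is consistent with the mode
# effect not varying by clinic.
print(model.pvalues["C(mode)[T.waiting_room]:C(clinic)[T.D]"])
```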

DISCUSSION

We found that patients responding to a primary care patient experience survey delivered via e-mail were more likely to be between the ages of 35 and 64, be female, and live in high-income neighbourhoods compared with patients responding to the same survey conducted in the waiting room. However, we found minimal differences in neighbourhood income quintiles between patients with and without e-mail addresses in their medical records. There were few differences in responses to the survey questions between the 2 survey methods, and any differences were explained by the underlying differences in patient demographic characteristics.

Electronic surveys delivered to patients by e-mail offer practices a convenient, low-cost method of regularly surveying patients to improve quality of care; however, our findings support concerns that they might underrepresent patients living in low-income neighbourhoods. In our setting, underrepresentation of these patients did not relate to differences in providing the practice with an e-mail address (sampling bias) but likely related to the probability of patients living in low-income neighbourhoods responding to the e-mail survey invitation (nonresponse bias). Differences in computer literacy and access to Internet resources might be contributing factors7–9,23,24; however, others have also reported low response rates across numerous survey methods among populations with low income.25–27

We also found differences in the age and gender distribution of respondents to the e-mail survey compared with respondents to the waiting room survey. These differences are likely due to a combination of sampling and nonresponse bias. For example, female patients were more likely than expected to have e-mail addresses on file and a greater proportion responded to the e-mail survey compared with the waiting room survey. In contrast, patients aged 50 to 64 were less likely than expected to have e-mail addresses on record but a greater proportion responded to the e-mail survey compared with the waiting room survey. The response rate to the e-mail survey was much lower than that of the survey conducted in the waiting room, which is a known limitation of surveys delivered by e-mail.19 Low response rates increase the likelihood of nonresponse bias and can limit the generalizability of survey results.19

While respondents to the waiting room survey were more likely to reside in low-income neighbourhoods compared with e-mail survey respondents, they were also more likely to report being able to see a provider the same or next day when they needed care. This difference in experience of access to care might be a reflection of differences inherent to the 2 survey delivery methods. Patients in the waiting room have successfully navigated the health care system to obtain an appointment with a provider, whereas e-mail survey respondents might not have visited the clinic for several months and might be less familiar with clinic procedures. In addition, the confluence of respondents living in low-income neighbourhoods and better perceived access might reflect our clinic setting, where we have designed services to be accessible to marginalized populations (eg, by accommodating walk-in patients with urgent concerns). We also found that respondents to the waiting room survey were less likely to report that their care provider spent enough time with them compared with e-mail survey respondents. Patients who perceive their appointments as too short believe that this is detrimental to their treatment as well as to their relationships with their providers.28 Satisfaction with the length of primary care visits has been reported to be associated with age, race, health status, and visit type, although education level was not found to be significant.29

Differences in patient experience of same- or next-day access and time spent during the appointment disappeared after adjustment for respondent demographic characteristics. Patient experience surveys in both acute30,31 and primary care32,33 settings have found differences in responses across survey mode and patient mix, and suggest that valid comparisons across institutions require adjustment. However, most primary care practices likely do not have the resources to statistically adjust survey responses.

Limitations

This study has some limitations. First, the timing of the 2 surveys, while chosen for practicality, differed, which might affect the comparability of the 2 delivery modes. Patients in the waiting room were surveyed directly before an appointment, whereas patients receiving the e-mail survey had not necessarily had recent appointments with their providers. Survey timing has been shown to be an important factor in assessing patient experience; patient evaluations are poorer when measured at longer times after the encounter.34–42 Second, the survey was conducted anonymously, so we were not able to analyze the demographic characteristics of nonrespondents to understand how nonresponse bias might have affected the generalizability of the survey findings. The demographic distribution of e-mail respondents appeared to be significantly different from that of patients with e-mail addresses on file, so it is likely that nonresponse bias is relevant here. Both survey modes were relatively technologically intensive, so they might have excluded elderly patients or those with less education or less comfort with technology.43–45 Anecdotally, there were some technical issues with the tablet technology used in the waiting room survey, with some patients needing assistance to complete the surveys; most incomplete surveys were conducted in the waiting room. Third, we used the patient’s neighbourhood income quintile22 as a marker of socioeconomic status, which is not necessarily representative of a patient’s actual income, especially in urban neighbourhoods that are being gentrified.46 In addition, neighbourhood income quintiles are not assigned to people with no fixed address, so this measure cannot be used to assess homeless or underhoused patients. Finally, while we had a reasonable sample size for univariate analyses, we might not have had sufficient power to detect differences in the multivariate logistic regression.

Conclusion

Patient experience surveys distributed via e-mail might underrepresent patients from low-income neighbourhoods. The method of survey delivery can influence who responds, which might in turn affect survey responses. Practices should consider evaluating for nonresponse bias and adjusting for patient demographic characteristics when interpreting survey results. Further research is needed to understand how primary care practices can optimize electronic survey delivery methods to generate responses from a representative sample of patients.

Acknowledgments

We thank Madison Giles, Joshua Feldman, Lisa Miller, Sharon Wiltshire, and the St Michael’s Hospital Academic Family Health Team Quality Steering Committee for their contributions.

Notes

EDITOR’S KEY POINTS

  • Measuring the patient experience is an integral step toward understanding and improving the quality of primary care. As part of quality improvement efforts, family practices are increasingly being asked to survey patients about experiences with care.

  • It is important for primary care practices to recognize that the method of survey delivery can influence which patients respond, which might in turn affect responses. Low-cost alternatives to collecting patient experience data, such as electronic surveys sent through e-mail, might underrepresent patients who live in low-income neighbourhoods. This study compared results and patient characteristics for respondents to surveys delivered by e-mail and in the waiting room.

  • Responses were similar between the surveys except that patients who responded to the e-mail survey were less likely to report being able to see a provider on the same or next day (53.3% vs 60.2%, P = .0265) and more likely to report that their health care providers always or often spent enough time with them (89.2% vs 85.1%, P = .0457). These differences were explained by the underlying differences in patient demographic characteristics. Practices should consider evaluating their patient experience survey results for nonresponse bias and adjusting for patient characteristics when interpreting survey results.


Footnotes

  • This article has been peer reviewed.


  • Contributors

    Both authors contributed to the concept and design of the study, data gathering, and interpretation. Dr Slater conducted the data analyses and drafted the manuscript. Both authors critically reviewed the manuscript and approved it for publication.

  • Competing interests

    None declared

  • Copyright © the College of Family Physicians of Canada

References

1. Browne K, Roseman D, Shaller D, Edgman-Levitan S. Analysis & commentary. Measuring patient experience as a strategy for improving primary care. Health Aff (Millwood) 2010;29(5):921-5.
2. Gribble RK, Haupt C. Quantitative and qualitative differences between handout and mailed patient satisfaction surveys. Med Care 2005;43(3):276-81.
3. Urden LD. Patient satisfaction measurement: current issues and implications. Lippincott’s Case Management 2002;7(5):194-200.
4. Evans RG, Edwards A, Evans S, Elwyn B, Elwyn G. Assessing the practising physician using patient surveys: a systematic review of instruments and feedback methods. Fam Pract 2007;24(2):117-27. Epub 2007 Jan 29.
5. Cleary PD. The increasing importance of patient surveys. Quality in health care. Qual Health Care 1999;8(4):212.
6. Ontario Ministry of Health and Long-Term Care. About excellent care for all. Toronto, ON: Ontario Ministry of Health and Long-Term Care; 2014. Available from: www.health.gov.on.ca/en/pro/programs/ecfa/legislation/act_regs.aspx. Accessed 2015 Sep 18.
7. Martin SP, Robinson JP. The income digital divide: trends and predictions for levels of internet use. Soc Probl 2007;54(1):1-22.
8. Robinson JP, DiMaggio P, Hargittai E. New social survey perspectives on the digital divide. IT Soc 2003;1:1-22.
9. Gilleard C, Higgs P. Internet use and the digital divide in the English Longitudinal Study of Ageing. Eur J Ageing 2008;5:233-9.
10. De Leeuw ED. Mixed-mode surveys and the Internet. Surv Pract 2010;3(6):1-5.
11. Denscombe M. Web-based questionnaires and the mode effect: an evaluation based on completion rates and data contents of near-identical questionnaires delivered in different modes. Soc Sci Comput Rev 2006;24(2):246-54.
12. Shim JM, Shin E, Johnson TP. Self-rated health assessed by web versus mail modes in a mixed mode survey: the digital divide effect and the genuine survey mode effect. Med Care 2013;51(9):774-81.
13. McCabe SE, Couper MP, Cranford JA, Boyd CJ. Comparison of Web and mail surveys for studying secondary consequences associated with substance use: evidence for minimal mode effects. Addict Behav 2006;31(1):162-8.
14. Beebe TJ, Locke GR 3rd, Barnes SA, Davern ME, Anderson KJ. Mixing web and mail methods in a survey of physicians. Health Serv Res 2007;42(3 Pt 1):1219-34.
15. Miller ET, Neal DJ, Roberts LJ, Baer JS, Cressler SO, Metrik J, et al. Test-retest reliability of alcohol measures: is there a difference between internet-based assessment and traditional methods? Psychol Addict Behav 2002;16(1):56-63.
16. Buskirk TD, Stein KD. Telephone vs. mail survey gives different SF-36 quality-of-life scores among cancer survivors. J Clin Epidemiol 2008;61(10):1049-55. Epub 2008 Jun 6.
17. Cheung YB, Goh C, Thumboo J, Khoo KS, Wee J. Quality of life scores differed according to mode of administration in a review of three major oncology questionnaires. J Clin Epidemiol 2006;59(2):185-91.
18. De Vries H, Elliott MN, Hepner KA, Keller SD, Hays RD. Equivalence of mail and telephone responses to the CAHPS Hospital Survey. Health Serv Res 2005;40(6 Pt 2):2120-39.
19. Bowling A. Mode of questionnaire administration can have serious effects on data quality. J Public Health (Oxf) 2005;27(3):281-91. Epub 2005 May 3.
20. Groves RM, Fowler FJ, Couper MP, Lepkowski M, Singer E, Tourangeau R. Survey methodology. Hoboken, NJ: John Wiley & Sons; 2004.
21. Health Council of Canada. How do Canadians rate the health care system? Results from the 2010 Commonwealth Fund International Health Policy Survey. Toronto, ON: Health Council of Canada; 2010.
22. Wilkins R, Khan S. PCCF+ version 5H user’s guide. Automated geographic coding based on the Statistics Canada Postal Code Conversion Files. Ottawa, ON: Health Analysis Division, Statistics Canada; 2011.
23. Diment K, Garrett-Jones S. How demographic characteristics affect mode preference in a postal/web mixed-mode survey of Australian researchers. Soc Sci Comput Rev 2007;25:510-7.
24. Miller TI, Miller-Kobayashi M, Caldwell E, Thurston S, Collett B. Citizen surveys on the web: general population surveys of community opinion. Soc Sci Comput Rev 2002;20:124-36.
25. Chang L, Krosnick JA. The representativeness of national samples: comparisons of an RDD telephone survey with matched Internet surveys by Harris Interactive and Knowledge Networks. Paper presented at: Conference of the American Association for Public Opinion Research; 2001 May 17–20; Montreal, QC.
26. Schejbal JA, Lavrakas PJ. Panel attrition in a dual-frame local area telephone survey. In: Proceedings of the American Statistical Association. Alexandria, VA: American Statistical Association Section on Survey Research Methods; 1995. p. 1035-9.
27. O’Neil MJ. Estimating the nonresponse bias due to refusals in telephone surveys. Public Opin Q 1979;43:218-32.
28. Tabler J, Scammon DL, Kim J, Farrell T, Tomoaia-Cotisel A, Magill MK. Patient care experiences and perceptions of the patient-provider relationship: a mixed method study. Patient Exp J 2014;1(1):75-87.
29. Gross DA, Zyzanski SJ, Borawski EA, Cebul RD, Stange KC. Patient satisfaction with time spent with their physician. J Fam Pract 1998;47(2):133-7.
30. Elliott MN, Zaslavsky AM, Goldstein E, Lehrman W, Hambarsoomians K, Beckett MK, et al. Effects of survey mode, patient mix, and nonresponse on CAHPS Hospital Survey scores. Health Serv Res 2009;44(2 Pt 1):501-18.
31. Greaves F, Pape UJ, King D, Darzi A, Majeed A, Wachter RM, et al. Associations between Internet-based patient ratings and conventional surveys of patient experience in the English NHS: an observational study. BMJ Qual Saf 2012;21(7):600-5. Epub 2012 Apr 20.
32. Lyratzopoulos G, Elliott M, Barbiere JM, Henderson A, Staetsky L, Paddison C, et al. Understanding ethnic and other socio-demographic differences in patient experience of primary care: evidence from the English General Practice Patient Survey. BMJ Qual Saf 2012;21(1):21-9. Epub 2011 Sep 7.
33. Paddison C, Elliott M, Parker R, Staetsky L, Lyratzopoulos G, Campbell JL, et al. Should measures of patient experience in primary care be adjusted for case mix? Evidence from the English General Practice Patient Survey. BMJ Qual Saf 2012;21(8):634-40. Epub 2012 May 23.
34. Bjertnaes OA. The association between survey timing and patient-reported experiences with hospitals: results of a national postal survey. BMC Med Res Methodol 2012;12:13.
35. Bowman MA, Herndon A, Sharp PC, Dignan MB. Assessment of the patient-doctor interaction scale for measuring patient satisfaction. Patient Educ Couns 1992;19(1):75-80.
36. Savage R, Armstrong D. Effect of a general practitioner’s consulting style on patients’ satisfaction: a controlled study. BMJ 1990;301(6758):968-70.
37. Stevens M, Reininga IH, Boss NA, van Horn JR. Patient satisfaction at and after discharge. Effect of a time lag. Patient Educ Couns 2006;60(2):241-5. Epub 2005 Oct 25.
38. Lemos P, Pinto A, Morais G, Pereira J, Loureiro R, Teixeira S, et al. Patient satisfaction following day surgery. J Clin Anesth 2009;21(3):200-5.
39. Bendall-Lyon D, Powers TL, Swan JE. Time does not heal all wounds. Patients report lower satisfaction levels as time goes by. Mark Health Serv 2001;21(3):10-4.
40. Jensen HI, Ammentorp J, Kofoed PE. User satisfaction is influenced by the interval between a health care service and the assessment of the service. Soc Sci Med 2010;70(12):1882-7. Epub 2010 Mar 18.
41. Jensen HI, Ammentorp J, Kofoed PE. Assessment of health care by children and adolescents depends on when they respond to the questionnaire. Int J Qual Health Care 2010;22(4):259-65. Epub 2010 Apr 28.
42. Kinnersley P, Stott N, Peters T, Harvey I, Hackett P. A comparison of methods for measuring patient satisfaction with consultations in primary care. Fam Pract 1996;13(1):41-51.
43. Rice RE, Katz JE. Comparing internet and mobile phone usage: digital divides of usage, adoption, and dropouts. Telecomm Policy 2003;27(8–9):597-623.
44. Miller JD. Who is using the web for science and health information? Sci Commun 2001;22(3):256-73.
45. Brodie M, Flournoy RE, Altman DE, Blendon RJ, Benson JM, Rosenbaum MD. Health information, the Internet, and the digital divide. Health Aff (Millwood) 2000;19(6):255-65.
46. Ontario Agency for Health Protection and Promotion (Public Health Ontario). Summary measures of socioeconomic inequalities in health. Toronto, ON: Queen’s Printer for Ontario; 2013.