As many of us do, I recently turned to Google while quickly searching for a copy of the article on the safety profile of the human papillomavirus vaccine by Slade and colleagues published in JAMA.1 Typing into the search field with a purpose, I largely ignored Google’s autocomplete drop-down box that proposed search strings with each keystroke. However, on my path to typing out safety Gardasil JAMA, one dramatic autocomplete prediction popped up, caught my eye, and led me to pause.
I stared carefully at what I had typed: safety Gardasil.
There, staring right back at me, was the top autocomplete suggestion from Google, in bold font: dangers of Gardasil vaccine.
Granted, the other suggestions on the list were not all that bad: safety Gardasil 2014, safety Gardasil FDA, and safety Gardasil 2011. That took nothing away, though, from the unsettling fact that safety had somehow been autocompleted to dangers, a stark turn of phrase that essentially changed the intent of my original search. I began to wonder what role autocomplete might play in shaping patients’ ideas by redirecting their health-related searches toward results unrelated to their initial questions.
The Internet: shaping patient experiences
Patients are using the Internet to search for health information more than ever. In one survey, 72% of US Internet users had searched for health information online in the past year, and of these 77% had started with a search engine like Google.2 Many a practitioner is familiar with patients walking in with printouts of Google search results or the linked webpages, be they WebMD or other less reputable sites. Google is the world’s most popular search engine, and as a result many patients use it to search for answers to their health questions. Autocomplete consequently plays a role in their online health experience.
For the most part, autocomplete seems benign, if not humorous. We have all laughed at websites like Autocomplete Me, which showcase the limitations of the system and the very strange search strings Google can predict.3 Google explains that autocomplete generates search-term predictions from the aggregate search activity of previous users and the content of webpages. Using undisclosed criteria (such as how often past users have searched for a term, along with a small set of exclusions) in a mathematical formula, autocomplete proposes similar or “related” search strings to the user.4
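To make that mechanism concrete, consider the minimal sketch below, written in Python. It is purely illustrative and is not Google’s actual algorithm: the tiny query log, the keyword-overlap matching, and the ranking by raw frequency are all assumptions standing in for the undisclosed criteria described above.

```python
from collections import Counter

# Tiny, invented query log standing in for the aggregate searches of past users.
QUERY_LOG = [
    "safety gardasil 2014",
    "safety gardasil fda",
    "safety gardasil 2011",
    "dangers of gardasil vaccine",
    "dangers of gardasil vaccine",
    "dangers of gardasil vaccine",
]

def autocomplete(prefix, log=QUERY_LOG, max_suggestions=4):
    """Suggest past queries that share a keyword with the prefix, most frequent first.

    A crude stand-in for "related" matching: real systems also weigh webpage
    content, exclusions, and other undisclosed signals.
    """
    keywords = set(prefix.lower().split())
    related = Counter(q for q in log if keywords & set(q.split()))
    return [query for query, _ in related.most_common(max_suggestions)]

print(autocomplete("safety gardasil"))
# ['dangers of gardasil vaccine', 'safety gardasil 2014',
#  'safety gardasil fda', 'safety gardasil 2011']
```

Because suggestions in this toy model are ranked by how often other people searched for them rather than by what the current user typed, a heavily repeated phrase such as dangers of Gardasil vaccine can outrank the user’s own wording, which is precisely the turn of phrase described above.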
Within the medical community, the inner workings of “Dr Google” are often seen as passive—a black-box search engine that patients use to locate health information of variable quality. However, the mere existence of autocomplete throws passivity into question. After all, if a search for vaccine safety suggests dangers of vaccination instead, how else is autocomplete prompting patients?
Figure 1 illustrates the conundrum. At the time of writing, a search for cure cancer returned the suggestions cure cancer naturally and cure cancer with diet, and even the paid advertisements in the sidebar promoted “natural” cancer remedies, complete with a toll-free number ostensibly for learning about safe, nontoxic treatment options. Tacking the word with onto that string yielded suggested searches about curing cancer with cannabis, and even with HIV.
Searches for depression, anxiety, and antidepressants yield autocomplete suggestions that perpetuate mental health stereotypes and stigma. Beyond simple shifts in meaning, misinformation is also an issue, although one that is somewhat mitigated: searches for acetaminophen and ibuprofen, for instance, return top-ranked links to reliable sources that dispel the misinformation initially proposed by autocomplete.
Because the algorithm behind autocomplete is unpublished, it is unclear how many searches or views are needed before autocomplete will predict a given search string. What is clear, however, is that autocomplete could be a double-edged sword. At best, it presents an opportunity to guide patients to helpful and trusted resources, even if their initial search is misleading. At worst, autocomplete might be suggesting incorrect information or completely spurious associations curated by mass “groupthink” or even by special-interest groups attempting to fool the algorithm. Either way, the lack of active double-checking by an actual human, let alone robust review by a medical professional, might be cause for concern.
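To see how such an opaque popularity cutoff could, in principle, be gamed, the toy sketch below adds an invented minimum-count threshold to the earlier example. The threshold value and the query strings are assumptions for illustration only; they do not describe Google’s real behaviour.

```python
from collections import Counter

def suggest(log, prefix, min_count=3):
    """Surface only completions searched at least min_count times (invented cutoff)."""
    counts = Counter(q for q in log if q.startswith(prefix.lower()))
    return [q for q, n in counts.most_common() if n >= min_count]

# Organic searches alone fall below the cutoff, so nothing is suggested.
organic = ["cure cancer research", "cure cancer research", "cure cancer trials"]
print(suggest(organic, "cure cancer"))   # []

# A small coordinated campaign repeating one string can push it over the line.
campaign = organic + ["cure cancer with cannabis"] * 5
print(suggest(campaign, "cure cancer"))  # ['cure cancer with cannabis']
```

Whatever the real threshold is, the underlying point stands: a suggestion reflects the behaviour of other searchers, not a vetted claim.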
Returning to my original search, similar vaccine search strings (eg, safety measles vaccine and safety meningitis vaccines) showed the same pattern of autocomplete proposing dangers in place of safety. A quick screenshot and tweet to Google with the simple question, “Are you fueling the antivaccine movement via autocomplete?” got results: when I repeated the searches a few hours later, safety in vaccine-related strings was no longer being autocompleted to dangers.
Did Google take note of the concern expressed in a tweet and alter its algorithm? Google does ask users to point out “offensive” autocomplete results, allowing a reactive response to issues as they arise.4 However, the remaining questions are endless. How is Google autocomplete prompting patients in their health queries? How do the suggested search terms and subsequent searches influence patient knowledge, beliefs, or health-seeking behaviour? Might autocomplete terms reshape or reinforce misperceptions about interventions like vaccines or antibiotics? Might it redirect patients with misperceptions toward evidence correcting those views?
And what are the implications for physician practice? Can physicians perhaps influence autocomplete? Could the medical community work closely with Google and other search engines to thoughtfully ensure that the algorithm steers people toward more robust information on health behaviour, diseases, and interventions?4 A single quick fix unfortunately does not address the multitude of misleading or incorrect associations that might exist, some of which are not even on our radar. Nor does it address what recourse physicians have to monitor and address problematic autocomplete predictions; it is worth noting, however, that in recent years certain high-profile individuals have sued Google for defamation via autocomplete.5
Power of suggestion
While the literature and evidence to date have not examined priming by autocomplete specifically, the power of suggestion is well documented in psychology publications.6 In a world where three-quarters of patients are searching for health information on the Internet, ongoing patient education around reliable Internet resources and appropriate search strategies continues to be critically important. Priming by autocomplete is thus worthy of exploration: Were antivaccination patients redirected repeatedly to dangers of measles vaccines when they were simply looking up side effects? Is a noncompliant patient not taking his or her medication because autocomplete suggested that the prescribed drug is not safe? Did a patient formerly “allergic to gluten” change his or her mind after being sent to reputable sites debunking his or her beliefs?
The Internet and Google have the potential to continue helping patients connect the dots. Harnessing this potential for good requires innovative patient education programs that allow users to critically assess the reliability of Internet resources. It also requires the health care community to work with search-engine giants to ensure their products support the knowledge and health of users. Finally, recognizing that Internet searching is not a passive experience supports the need for additional research to characterize how autocomplete and other search features might be subtly shaping patient experiences online.
Footnotes
This article has been peer reviewed.
The French translation of this article can be found at www.cfp.ca in the table of contents of the August 2016 issue on page e424.
Competing interests
None declared
The opinions expressed in commentaries are those of the authors. Publication does not imply endorsement by the College of Family Physicians of Canada.
Copyright © the College of Family Physicians of Canada