As social media drives information dissemination based on popularity rather than accuracy, “fake news” is seemingly everywhere. Political fake stories get more press, but science fake stories are also proliferating. Not all scientific misinformation is fake, strictly defined (Oremus, 2016). Much of it is simply misleading, sometimes even unintentionally. But regardless of the label, all variants of inaccurate information can be damaging to scientific literacy; it is incumbent on us to teach students to cull through scientific information in popular sources.
Fake science news is far from new. In the U.S., it has existed at least since an 1835 series in the New York Sun documenting life on the moon, which included unicorns and bat-like men who talked and flew (Goodman, 2010). The paper’s editors thought the hoax would be obvious, but the tales were widely believed. Such mass gullibility continues across a range of contexts, from scientific nonsense regarding causes of sexual orientation spread by sketchy academics (Shermer, 2016) to supposed medical miracles — think, goat’s milk that cleanses your body of parasites — spouted by actor Gwyneth Paltrow on her lifestyle website (Brown, 2017). The Stanford History Education Group (2016) studied middle school, high school and university students’ susceptibility to such misinformation and found shocking lapses of critical thinking at all levels. For example, undergraduate students were shown a tweet with a link to a statistic about background checks for gun purchases. Only about a third noted the potential biasing influence of the political organizations that posted the tweet.
Fortunately, the psychology classroom provides ready opportunities to teach information literacy and scientific thinking, and ready-made teaching tools are increasingly available. The Stanford study provides sufficient detail that its stimuli can be used as teaching tools. Even better, KQED’s The Lowdown provides a fake news lesson plan (PDF, 421KB) tied to the Stanford study, with links to additional helpful sources. The Center for News Literacy at the Stony Brook University School of Journalism also provides a wealth of resources, including a curriculum toolbox.
For its simplicity, however, the tool I use most often is the CRAAP test (Blakeslee, 2004). Many university libraries provide versions of the test that help students ask good questions about a source’s:
- Currency (When was it published? Has it been updated?)
- Relevance (Does it relate to your needs? Who is the audience?)
- Authority (Who are the author and publisher? What are their credentials?)
- Accuracy (Is it reliable and truthful? Is it supported by evidence?)
- Purpose (Why does this information exist? Is there a bias?)
Of the available versions, I’m partial to the one on Juniata College’s library website (PDF, 41KB), which lists several questions for each CRAAP test criterion.
CRAAP test detractors argue that it is overly simplistic (Burkholder, 2010). But others observe that we already naturally use heuristics in our evaluation of sources (Metzger, Flanagin & Medders, 2010). The CRAAP test might help us fine-tune these natural tendencies. Moreover, the CRAAP test is memorable and easy to use — elementary school students have even used it to evaluate sources about Bigfoot (Knott & Szabo, 2013). And proponents of the tool stress the importance of teaching students to think about the spirit rather than the letter of the criteria (Wichowski & Kohl, 2012).
In many of my courses, I end each chapter or section of the course with a 20-minute CRAAP-test activity. I share a source related to psychological science: a blog post, an app, a news article, a company, an information website. Students discuss in pairs, digging beyond the source to check out any links or citations. Then, we have a group discussion for students to share their evaluations. I sometimes choose an evidence-based article from a lesser-known publication or organization, so students are not immediately tipped off by the source — say, an article on the potential psychological damage from ballet training. More often, I choose a problematic one. For example, on the website for a company aimed at treating mental illness, my students found claims that were exaggerated, loosely connected to the cited data, or unrelated to it entirely, and discovered that a founder’s previous company had been cited for unscientific interventions. Toward the end of the semester, each student chooses her or his own source, which I approve, and succinctly critiques it using the CRAAP test in a two-page assignment. (See Nolan & Hockenbury, 2015, for additional critical thinking activities to promote scientific literacy.)
More research is needed to determine the outcomes of using tools like the CRAAP test, but psychology classrooms seem ideal for teaching students to critique scientific sources and studying how well various interventions work. Findings from scholars of teaching and learning may one day support the view that the CRAAP test is “the most concise, flexible and memorable evaluation tool of the series of checklist tests that have been proposed since the late 1990s” (Wichowski & Kohl, 2012).
Reposted with permission from APA’s Psychology Teacher Network
Blakeslee, S. (2004). The CRAAP test. LOEX Quarterly, 31(3), 6-7.
Brown, K. (2017, March 9). A women’s health magazine just printed Gwyneth Paltrow’s terrible health advice. Gizmodo. Retrieved from http://gizmodo.com/a-womens-health-magazine-just-printed-gwyneth-paltrow-s-1793122236.
Burkholder, J. M. (2010). Redefining sources as social acts: Genre theory in information literacy instruction. Library Philosophy and Practice (e-journal). Paper 413. http://digitalcommons.unl.edu/libphilprac/413.
Goodman, M. (2010). The sun and the moon: The remarkable true account of hoaxers, showmen, dueling journalists, and lunar man-bats in nineteenth-century New York. New York: Basic Books.
Knott, D., & Szabo, K. (2013). Bigfoot hunting: Academic library outreach to elementary school students. College & Research Libraries News, 74, 346-348.
Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413-439.
Nolan, S. A., & Hockenbury, S. E. (2015). Think like a scientist: Harnessing current events to teach psychological science. Psychology Teacher Network, 25(4). Retrieved from http://www.apa.org/ed/precollege/ptn/2015/12/think-like-scientist.aspx.
Oremus, W. (2016, December 6). Stop calling everything “fake news.” Slate. Retrieved from http://www.slate.com/articles/technology/technology/2016/12/stop_calling_everything_fake_news.html.
Shermer, M. (2016, December 1). Beware bogus theories of sexual orientation. Scientific American. Retrieved from https://www.scientificamerican.com/article/beware-bogus-theories-of-sexual-orientation/.
Stanford History Education Group. (2016). Evaluating information: The cornerstone of civic online reasoning. Retrieved from https://sheg.stanford.edu/upload/V3LessonPlans/Executive%20Summary%2011.21.16.pdf (PDF, 3.36MB).
Wichowski, D. E., & Kohl, L. E. (2012). Establishing credibility in the information jungle: Blogs, microblogs, and the CRAAP test. In M. Folk & S. Apostel (Eds.), Online credibility and digital ethos: Evaluating computer-mediated communication (pp. 229-251). Hershey, PA: Information Science Reference.