Shifting people’s judgments toward the scientific involves teaching them to purposefully evaluate connections between evidence and alternative explanations.
Contemporary challenges, such as climate change and food, energy, and water security, demand that people think scientifically. These challenges are often complex and interrelated; for example, society’s increasing demand for energy contributes to human-induced climate change, which, in turn, limits freshwater and food supplies (Wheeler & Von Braun, 2013). While many socio-scientific challenges are seriously impacting local, regional, and global communities, the increasing availability of information has contributed to what many call a “Post-Truth Era,” in which emotions and personal beliefs override scientifically valid evidence and explanations. Combating mythological and unproductive thinking in the face of current change requires increased scientific literacy, which involves knowing both what scientists know and how scientists come to know what they know.
These two components (i.e., the what and the how of scientific literacy) depend on the evaluative enterprise of science. Evaluation lies at the heart of scientific activity and includes both critique and analysis (National Research Council, 2012). For example, some climatologists construct explanatory and predictive models representing Earth’s atmosphere, and then collect empirical data to calibrate these models. Evaluation of connections between lines of evidence (e.g., sea surface temperatures) and scientific explanations (e.g., the interdependence of oceans and atmosphere) could lead to subsequent model refinements and validation with additional empirical data.
Despite this rigorous process, people often find scientific explanations to be implausible. For example, many consider a nonscientific explanation (i.e., that current climate change is caused by increasing amounts of energy received from the sun) to be more plausible than the scientific explanation (i.e., that current climate change is caused by human activities) (Lombardi, Bailey, Bickel, & Burrell, 2018a; Lombardi, Bickel, Bailey, & Burrell, 2018b; Lombardi, Danielson, & Young, 2016a; Lombardi, Sinatra, & Nussbaum, 2013). In other words, there is often a gap between what scientists and laypeople find plausible: a plausibility gap.
Plausibility judgments about explanations
Individuals naturally make judgments about the nature of knowledge and knowing, or epistemic judgments; such judgments may include the plausibility of an explanation, the validity of a data set, and the credibility of a source. Plausibility is specifically an epistemic judgment about the potential truthfulness of an explanation (Lombardi, Nussbaum, & Sinatra, 2016b). Unlike other epistemic judgments, plausibility does not generally apply to lines of evidence or sources of information. Epistemic judgments about sources (e.g., credibility, trustworthiness) are related to plausibility, but cognitively distinct (Lombardi, Seyranian, & Sinatra, 2014).
Individuals often make plausibility judgments implicitly and automatically, with little purposeful thought. My colleagues and I have conducted three studies examining people’s plausibility perceptions of eight scientific statements found in a United Nations climate change report (Intergovernmental Panel on Climate Change, 2008). These statements reflect fundamental scientific explanations about:
- How current climate is changing.
- Why current climate is changing.
- When catastrophic impacts will occur.
Study participants rated each statement on a 1–10 plausibility scale (1 being highly implausible or even impossible, 5 being somewhat plausible, and 10 being highly plausible), and generally considered the statements, particularly those about predictions of future impacts, to be only somewhat plausible (Figure 1).
Figure 1. Adolescent and adult (N = 432) perceptions of the plausibility of eight explanatory scientific statements from the United Nations’ Intergovernmental Panel on Climate Change. Plausibility judgments range from 1 (highly implausible or impossible) to 10 (highly plausible). Error bars indicate ±1 standard error. On the horizontal axis, item 1 = “The Earth is warming…,” 2 = “Evidence…shows climate is changing…,” 3 = “…human industry has caused [atmospheric greenhouse] gases to increase,” 4 = “Human activities that release greenhouse gases are causing global warming,” 5 = “Human influences on climate include rising sea levels and melting of snow and ice,” 6 = “Human releases of greenhouse gases will increase…,” 7 = “climate change will still occur for centuries because of these releases…,” and 8 = “Human caused global warming will lead to some impacts that are sudden…” Source: Lombardi and Sinatra (2012); Lombardi and Sinatra (2013); Lombardi, Seyranian, and Sinatra (2014); Lombardi, Danielson, and Young (2016a).
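To make the summary in Figure 1 concrete, here is a minimal sketch (in Python, using invented ratings rather than the study’s actual data) of how each item’s mean rating and ±1 standard error can be computed from participants’ 1–10 responses:

```python
import statistics

# Hypothetical 1-10 plausibility ratings for two of the eight items;
# each list holds one rating per participant. Values are illustrative
# only, not the study's actual data.
ratings = {
    "Item 1: The Earth is warming...": [8, 7, 9, 6, 8, 7],
    "Item 8: Some impacts will be sudden...": [5, 4, 6, 5, 3, 5],
}

for item, scores in ratings.items():
    mean = statistics.mean(scores)
    # Standard error of the mean: sample standard deviation / sqrt(n).
    se = statistics.stdev(scores) / len(scores) ** 0.5
    print(f"{item}: M = {mean:.2f}, SE = {se:.2f}")
```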
My more recent research suggests that when people are free to simultaneously consider both a non-scientific explanation (i.e., sun-induced climate change) and the scientific explanation (i.e., human-induced climate change), they rate the scientific alternative as even less plausible. Furthermore, we have also found an appreciable plausibility gap with other controversial socio-scientific issues, including the role of hydraulic fracturing (a.k.a. fracking) in inducing moderate-size earthquakes and the importance of wetlands to ecosystem services (Lombardi et al., 2018a, 2018b).
The good news is that plausibility judgments are characteristically tentative and changeable (Lombardi et al., 2016b). With explicit and purposeful thinking, learners shifted their plausibility judgments toward the scientific and gained deeper knowledge about the fundamental concepts underlying these phenomena (Lombardi et al., 2013). Although being critically evaluative is challenging for many people, my colleagues and I have developed instructional scaffolds to facilitate such scientific thinking in classroom settings.
Instruction facilitating shifts toward scientific thinking
My research team’s current projects have focused on developing and testing supports that promote both shifts in plausibility toward the scientific and deeper understanding of science content. Embedded within these scaffolds is the notion that cognitive engagement increases through purposeful evaluation of the validity of alternative explanations (Lombardi et al., 2016b).
In our projects, we have adapted and used a scaffold called the Model-Evidence Link (MEL) diagram, originally formulated by a team of educational psychologists and science education researchers at Rutgers University (see Chinn & Buckland, 2012, for an overview). We designed our adapted MELs to assist students in making scientific evaluations of the connections between multiple lines of scientific evidence and alternative explanatory models of an observed phenomenon (Figure 2; Lombardi, 2016).
Within the context of a topic (e.g., causes of current climate change), the MEL diagram and associated support materials present students with two conceptual and explanatory models. For example, in the Climate Change MEL (Figure 2), two models provide competing explanations about the cause of current climate change (i.e., the cause of global increases in temperatures and decreases in surface ice): Model A, where current climate change is caused by increasing amounts of gases released by human activities; and Model B, where current climate change is caused by an increasing amount of energy received from the sun.
When using the MEL diagram as a scaffold, students draw arrows in one of four different shapes to indicate their evaluation of how well each line of evidence supports each model. Straight arrows indicate that the evidence supports the model; squiggly arrows indicate that the evidence strongly supports the model; straight arrows with an “X” through the middle indicate that the evidence contradicts the model; and dashed arrows indicate that the evidence has nothing to do with the model.
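The arrow coding lends itself to a compact representation. The sketch below is a hypothetical Python rendering, not the research team’s actual coding scheme: it captures the four arrow types and tallies how strongly one student’s completed diagram favors each model, with numeric weights that are purely our own assumption for illustration.

```python
from enum import Enum

class Link(Enum):
    """The four arrow types a student can draw on a MEL diagram."""
    STRONGLY_SUPPORTS = "squiggly arrow"
    SUPPORTS = "straight arrow"
    CONTRADICTS = "straight arrow with an X"
    IRRELEVANT = "dashed arrow"

# Hypothetical weights for summarizing a completed diagram; the MEL
# materials themselves do not prescribe these numbers.
WEIGHTS = {
    Link.STRONGLY_SUPPORTS: 2,
    Link.SUPPORTS: 1,
    Link.IRRELEVANT: 0,
    Link.CONTRADICTS: -1,
}

# One student's (invented) links from two lines of evidence to the two
# competing explanations of current climate change.
student_links = {
    ("Evidence 1", "Model A: human-induced"): Link.STRONGLY_SUPPORTS,
    ("Evidence 1", "Model B: sun-induced"): Link.CONTRADICTS,
    ("Evidence 2", "Model A: human-induced"): Link.SUPPORTS,
    ("Evidence 2", "Model B: sun-induced"): Link.IRRELEVANT,
}

def model_support(links, model):
    """Sum the weighted arrows pointing at a given model."""
    return sum(WEIGHTS[arrow] for (_, m), arrow in links.items() if m == model)

print(model_support(student_links, "Model A: human-induced"))  # 3
print(model_support(student_links, "Model B: sun-induced"))    # -1
```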
Figure 2. A student example of the climate change model-evidence link (MEL) diagram (top), with explanatory tasks (bottom).
We have extensively tested the effectiveness of the MEL activities in urban and suburban science classrooms throughout the U.S., involving hundreds of middle and high school students. Quasi-experimental results show that the MEL facilitated shifts in plausibility toward the scientific and deepened student knowledge about various earth science phenomena (Figure 3; Lombardi et al., 2018a).
We have consistently found that the MEL’s key feature, fostering simultaneous evaluation of lines of evidence against two alternative explanations, was generally more effective at promoting plausibility reappraisal and knowledge construction than comparison activities in which students evaluated only one alternative (i.e., the scientific alternative only). Specifically, we compared the MEL, where students evaluate connections between lines of evidence and two alternative explanations (i.e., the scientific alternative vs. another alternative), to the MONO, where students evaluate connections between lines of evidence and only one explanation (i.e., the scientific alternative).
We also compared the MEL, where students evaluate connections diagrammatically, to the MET, where students evaluate connections using tables and letter codes. Effect sizes were medium to large, suggesting meaningful increases, which is notable given the complexities of classroom-based research. In plainer language, the MEL consistently resulted in knowledge gains representing about half a letter grade of increase. Given that the dosage of MEL instruction was only eight class days (i.e., about 5 percent of the total instructional dosage in a typical 180-day school year), a half letter grade increase suggests a potent classroom learning activity (Lombardi et al., 2018a, 2018b).
Figure 3. Average pre- and post-instructional plausibility and knowledge scores by treatment activity, where MEL is the Model-Evidence Link Diagram, MET is the Model-Evidence Link Table, and MONO is the Model-Evidence Link Diagram with only one alternative (the scientific alternative). Plausibility scores are the scientific rating minus the alternative rating, with possible scores ranging from -9 to +9. Possible knowledge scores range from 20 to 100. Bars on each column indicate ±1 standard error. Source: Lombardi, Bailey, Bickel, and Burrell (2018a).
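As a worked illustration of the scoring described in Figure 3’s caption, the sketch below (with invented ratings, not the study’s data) computes the plausibility score (scientific minus alternative rating, so -9 to +9 on the 1–10 scale) and a simple standardized pre/post effect size:

```python
import statistics

def plausibility_score(scientific, alternative):
    """Scientific minus alternative rating on the 1-10 scale (range -9 to +9).
    Positive scores mean the scientific explanation is judged more plausible."""
    return scientific - alternative

# Invented (scientific, alternative) rating pairs for a few students,
# before and after instruction.
pre = [plausibility_score(s, a) for s, a in [(6, 7), (5, 6), (7, 7), (6, 8)]]
post = [plausibility_score(s, a) for s, a in [(8, 5), (7, 5), (8, 6), (7, 6)]]

shift = statistics.mean(post) - statistics.mean(pre)

# A simple Cohen's d using the pooled standard deviation of the two
# score distributions (one common convention among several).
pooled_sd = ((statistics.stdev(pre) ** 2 + statistics.stdev(post) ** 2) / 2) ** 0.5
print(f"mean shift = {shift:+.2f}, Cohen's d = {shift / pooled_sd:.2f}")
```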
Not a silver bullet, but a promising start
The community of scientists evaluates explanations about phenomena and judges them to be plausible or implausible in relation to other alternatives. Although this is an important scientific practice, we should not assume that learners fully understand how to activate their scientific thinking when considering explanations of observed phenomena. Therefore, instructional scaffolds, such as the MEL, that help learners evaluate more deeply should be employed to facilitate scientific thinking (National Research Council, 2012). Such evaluations are in addition to those that judge the quality of evidence, the reliability of measurement, and the credibility of information sources.
However, more work needs to be done. My research team and I have found that students only partially transferred the critical evaluation skills they had learned to other contexts (i.e., learners did not fully use scientific thinking outside of the MEL scaffold). Therefore, we are currently developing additional tools with which students construct their own MEL diagrams.
We call these scaffolds build-a-MELs; their purpose is to facilitate internalization of the scaffold as a scientific habit of mind. The notion of the build-a-MEL emerged from the idea of conceptual agency, where learners who exercise such agency are authors of their own contributions, are accountable to the learning community, and have the authority to think about and solve problems (Nussbaum & Asterhan, 2016). Results from preliminary testing of the build-a-MELs showed learners to be even more critical in their evaluations than with the preconstructed MEL (Lombardi, Klavon, Holzer, & Kendall, 2019).
Our future research will also delve into the factors underlying engagement during the MEL activities. We wonder whether students express increased cognitive, affective, and agentic engagement because they are free to evaluate alternative explanations and make epistemic judgments without a priori knowledge of the scientific explanation. We acknowledge that such engagement may seem counterintuitive to many researchers and instructional practitioners because we want people to deeply understand valid scientific explanations.
However, developing a citizenry that is scientifically literate involves, in part, increasing students’ skills in critically evaluating alternative explanations in a manner similar to what scientists actually do (National Research Council, 2012). That said, my research team and I recommend that teachers make scientific explanations clear to all students after completing the MEL activities. Doing so prevents teaching non-scientific information (e.g., teaching that current climate change is caused naturally, rather than teaching the overwhelming scientific consensus that current climate change is caused by human activities).
My research team and I operationalize scientific thinking within the context of purposeful and explicit evaluation of the connections between evidence and explanations. Such thinking involves reappraisal of epistemic judgments (i.e., plausibility) toward scientifically accurate conceptions. Evaluations that are more critical may help individuals render plausibility judgments that are more scientific (Lombardi et al., 2016b). Our research suggests that instructional scaffolds can help students think more critically, facilitate scientific judgments, and deepen learners’ science knowledge in classroom settings. Our hope in conducting such research is ultimately to prepare a more scientifically literate society that can equitably solve and adapt to local, regional, and global challenges in a rapidly changing world.
Acknowledgments
The National Science Foundation, under Grant No. DRL-1316057, supported some of the research mentioned in this article. Any opinions, findings, conclusions, or recommendations expressed are those of the authors and do not necessarily reflect the NSF’s views.
References
Chinn, C., & Buckland, L. (2012). Model-based instruction: Fostering change in evolutionary conceptions and epistemic practices. In K.S. Rosengren, E.M. Evans, S. Brem, & G.M. Sinatra (Eds.), Evolution challenges: Integrating research and practice in teaching and learning about evolution (pp. 211-232). New York: Oxford University Press.
Intergovernmental Panel on Climate Change. (2008). Climate change 2007: Synthesis report. Geneva, Switzerland: World Meteorological Organization.
Lombardi, D. (2016). Beyond the controversy: Instructional scaffolds to promote critical evaluation and understanding of Earth science. The Earth Scientist, 32(2), 5-10.
Lombardi, D., Bailey, J.M., Bickel, E.S., & Burrell, S. (2018a). Scaffolding scientific thinking: Students’ evaluations and judgments during Earth science knowledge construction. Contemporary Educational Psychology, 54, 184-198. doi: 10.1016/j.cedpsych.2018.06.008.
Lombardi, D., Bickel, E.S., Bailey, J.M., & Burrell, S. (2018b). High school students’ evaluations, plausibility (re)appraisals, and knowledge about topics in Earth science. Science Education, 102(1), 153-177. doi: 10.1002/sce.21315.
Lombardi, D., Danielson, R.W., & Young, N. (2016a). A plausible connection: Models examining the relations between evaluation, plausibility, and the refutation text effect. Learning and Instruction, 44, 74-86. doi: 10.1016/j.learninstruc.2016.03.003
Lombardi, D., Klavon, T.G., Holzer, M.A., & Kendall, R. (2019, April). Evaluating explanations about water resources: Scaffolds to shift students’ epistemic judgments and agency toward the scientific. Paper accepted for presentation as part of the symposium, “Investigating epistemic cognition in relation to food, water, and energy (FEW) issues,” Annual Meeting of the American Educational Research Association, Toronto.
Lombardi, D., Nussbaum, E.M., & Sinatra, G.M. (2016b). Plausibility judgments in conceptual change and epistemic cognition. Educational Psychologist, 51(1), 35-56. doi: 10.1080/00461520.2015.1113134.
Lombardi, D., Seyranian, V., & Sinatra, G.M. (2014). Source effects and plausibility judgments when reading about climate change. Discourse Processes, 51(1/2), 75-92. doi: 10.1080/0163853X.2013.855049.
Lombardi, D., & Sinatra, G.M. (2012). College students’ perceptions about the plausibility of human-induced climate change. Research in Science Education, 42, 201-217. doi: 10.1007/s11165-010-9196-z.
Lombardi, D., & Sinatra, G.M. (2013). Emotions about teaching about human-induced climate change. International Journal of Science Education, 35, 167-191. doi: 10.1080/09500693.2012.738372.
Lombardi, D., Sinatra, G. M., & Nussbaum, E. M. (2013). Plausibility reappraisals and shifts in middle school students’ climate change conceptions. Learning and Instruction, 27, 50-62. doi: 10.1016/j.learninstruc.2013.03.001.
National Research Council. (2012). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. Washington, D.C.: National Academies Press.
Nussbaum, E.M., & Asterhan, C.S. (2016). The psychology of far transfer from classroom argumentation. In F. Paglieri (Ed.), The psychology of argument: Cognitive approaches to argumentation and persuasion. London, U.K.: College Publications, Studies in Logic and Argumentation Series.
Wheeler, T., & Von Braun, J. (2013). Climate change impacts on global food security. Science, 341(6145), 508-513. doi: 10.1126/science.1239402.