3 New Reasons Why U.S. Educational Tests Should Be Dynamic

In our Psych Learning Curve blog post one year ago, we described an approach to educational testing—called dynamic measurement—that has the potential to improve educational testing practice in the U.S.

In dynamic measurement, students are assessed at multiple time-points, with targeted instruction in between, and the growth across that time-span is then incorporated into students’ scores. We’ve shown in past work that scores from dynamic measures are less affected by student characteristics such as race, gender, or poverty level than are scores from traditional tests. Now, we want to share three more insights from our research that we think will continue to support the potential of dynamic measurement in U.S. schools.
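For readers who like to see the idea in miniature, here is a toy sketch in Python. It is our illustration only, not the published Dynamic Measurement Modeling framework (which relies on latent-variable estimation): it fits a simple negative-exponential growth curve to one student's repeated scores and reports the estimated asymptote, the level the trajectory appears to be approaching, as a "dynamic" score. The function names, the grid-search fit, and all numbers are assumptions made for this example.

```python
import math

def predicted(a, b, r, t):
    """Negative-exponential growth: starts near b, approaches asymptote a
    at rate r as time t increases."""
    return a - (a - b) * math.exp(-r * t)

def dynamic_score(times, scores):
    """Toy 'dynamic' score for one student: grid-search fit of the growth
    curve above to repeated measurements, returning the estimated
    asymptote a (the level the trajectory points toward).
    A real dynamic measurement model would estimate this with
    latent-variable methods, not a grid search; this is a sketch only."""
    best_sse, best_a = float("inf"), None
    for a in range(0, 151, 5):        # candidate asymptotes
        for b in range(0, 101, 5):    # candidate starting levels
            for k in range(1, 16):    # candidate learning rates 0.1..1.5
                r = k / 10
                sse = sum((s - predicted(a, b, r, t)) ** 2
                          for t, s in zip(times, scores))
                if sse < best_sse:
                    best_sse, best_a = sse, a
    return best_a

# Two hypothetical students measured at five time-points:
times = [0, 1, 2, 3, 4]
flattening = [predicted(80, 60, 1.0, t) for t in times]   # plateauing near 80
climbing   = [predicted(110, 30, 0.3, t) for t in times]  # still rising
print(dynamic_score(times, flattening))  # estimated asymptote: 80
print(dynamic_score(times, climbing))    # estimated asymptote: 110
```

Note that the two students end the observation window with broadly similar current scores, yet receive very different dynamic scores, because one trajectory has flattened while the other is still climbing. That is the sense in which growth across time-points, not a single time-point, drives the score.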

Dynamic Measurement Better Predicts the Future

Some strong proponents of traditional static tests (tests in which a single time-point of measurement carries high stakes) argue from what we think of as a “tough love” perspective: even though tests can be painful for students, those tests are very important because they can be used to predict future student abilities and outcomes.

The problem with this argument is that static tests don’t predict students’ futures all that well. In a recent analysis, we showed that dynamic measurement can increase the predictive power of tests.

Specifically, we used a dataset that followed individuals from age 3 to age 72, measuring them periodically on several cognitive skills. We showed that, if the tests from those individuals’ K-12 years were scored using a dynamic measurement model, those scores predicted age-72 scores three times better than did scores from static tests taken during high school.

Dynamic Measurement can Uncover Hidden Instructional Effects

In the world of educational policy, there is currently debate about whether research-based, highly effective curricula are worth using when students are very young (as in preschool or kindergarten). Even young students who do not receive the best instruction in the early years tend to catch up to those who do, so why go through the effort (and cost) of giving these curricula to preschoolers?

Using dynamic measurement, we were able to find an important reason. In a recent paper, we showed that students who received a high-quality mathematics curriculum in preschool learned math more rapidly for the rest of elementary school. Traditional static testing practice was not sensitive enough to detect this difference, but with dynamic measurement, the differences in students’ learning rates were straightforward to establish.

Dynamic Measurement Works across Levels of Schooling

Dynamic measurement is not just applicable to young children or the K-12 setting. In the past year, we also showed that the U.S. medical licensing exams—three tests commonly referred to as ‘Boards’ that must be passed if physicians are to practice in the U.S.—can also be reconceptualized as dynamic.

Using a dynamic measurement model, we re-scored the Board exams for a sample of recent U.S. medical school graduates. We showed that the new dynamic scores generated from the Boards could identify which physicians would be effective at caring for patients once they entered their clinical internships.

Always More to Learn

In sum, based on our recent work, dynamic measurement is a method that predicts students’ future learning outcomes, detects otherwise hidden effects of curricula, and can be applied throughout schooling, from preschool through medical school. Our research group will continue to refine and apply dynamic measurement methods with educational data in order to fully uncover the potential of this innovation. There’s always more to learn, and as we discover more, the relevance of dynamic measurement to educational practice in the U.S. keeps increasing.

About the Author

Denis Dumas is Assistant Professor of Research Methods and Statistics in the Morgridge College of Education at the University of Denver. Before coming to DU, Dr. Dumas received his PhD in Educational Psychology and MA in Measurement, Statistics, and Evaluation from the University of Maryland at College Park, and was Assistant Professor of Educational Psychology at Howard University. In general, his research focuses on understanding student learning, cognition, and creativity through the application of latent variable methods, especially multidimensional item-response theory and non-linear growth models. He believes deeply in the power of quantitative research such as this for improving the field’s current understanding of learning, and supporting the academic development of all students. This work has led him to co-develop (with Dr. Daniel McNeish) the Dynamic Measurement Modeling framework as a way to improve the validity of psycho-educational assessment.
Daniel McNeish is an Assistant Professor of Quantitative Psychology at Arizona State University. He received a PhD in Measurement & Statistics from the University of Maryland and previously held academic positions at Utrecht University (Department of Methodology & Statistics) and UNC-Chapel Hill (Center for Developmental Science). His research focuses on statistical problems in the behavioral sciences, particularly those related to small sample sizes, best practice, and challenging data structures. His research contributions have been recognized with a dissertation award from the American Psychological Association, designation as a Rising Star by the Association for Psychological Science, and elected membership in the exclusive Society for Multivariate Experimental Psychology.