Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them

This article argues that many problems in psychological and behavioral research stem not only from statistical practices but also from how researchers define and measure constructs. The authors introduce the concept of questionable measurement practices (QMPs): research decisions about measurement that raise doubts about the validity of a study's conclusions. When such decisions are hidden or poorly documented, readers and other researchers cannot evaluate threats to construct, internal, statistical-conclusion, and external validity, which ultimately undermines the credibility and replicability of research findings.

A key argument of the paper is that research culture often treats measurement as secondary to statistical analysis, creating what the authors call a “measurement schmeasurement” attitude. This mindset allows substantial researcher flexibility in selecting or modifying measures without transparent reporting, producing results that appear rigorous but may rest on unstable foundations. The authors emphasize that even well-powered studies with sophisticated analyses cannot compensate for poor measurement. To address this issue, they advocate greater transparency about measurement decisions, such as clearly defining constructs, reporting how items were chosen or modified, documenting reliability and validity evidence, and making measurement materials openly available. Such practices would allow others to evaluate, replicate, and build upon research more effectively.
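As one concrete illustration of what "documenting reliability evidence" can look like in practice, the internal consistency of a multi-item scale is often summarized with Cronbach's alpha and reported alongside the measure. The sketch below (not taken from the article; the data are hypothetical Likert-scale responses) computes alpha from per-item scores:

```python
# Minimal sketch of one piece of reliability evidence: Cronbach's alpha.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
from statistics import variance

def cronbach_alpha(items):
    """items: one inner list of scores per item, aligned across the
    same respondents (respondent i is position i in every inner list)."""
    k = len(items)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(variance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical responses from 5 people to a 3-item scale (1-5 Likert).
items = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 5, 2, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # -> 0.81
```

Reporting the coefficient together with the item-level data (or making the data openly available) is exactly the kind of transparency the authors advocate, since it lets readers recompute and scrutinize the reliability claim.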

The article’s insights translate directly to the assessment of student learning, where educators routinely rely on tests, rubrics, and surveys to infer what students know or can do. Just as in research, questionable measurement practices can arise when instructors use poorly aligned assessments or instruments that do not validly capture the intended learning outcomes. The authors’ emphasis on construct clarity suggests that educators should first define precisely what a learning outcome represents and then ensure that assessments genuinely measure that construct rather than convenient proxies such as recall or participation. Increased transparency, such as sharing rubric designs, validation processes, and examples of student work, could strengthen the credibility of learning assessments, improve comparability across courses or programs, and support more meaningful interpretations of evidence about student learning.

Read the full article here:

Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. https://doi.org/10.1177/2515245920952393