Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them

This article argues that many problems in psychological and behavioral research stem not only from statistical practices but also from how researchers define and measure constructs. The authors introduce the concept of questionable measurement practices (QMPs)—research decisions about measurement that raise doubts about the validity of a study’s conclusions. When such decisions are hidden or poorly documented, it becomes difficult for readers or other researchers to evaluate threats to construct validity, internal validity, statistical validity, and external validity, which ultimately undermines the credibility and replicability of research findings.

Exploring the Impact of Required Justifications in Multiple-Choice Elaboration Questions on Student Experiences and Performance

This study investigated a hybrid assessment format called Multiple-Choice with Elaboration Questions (MCEQs). In these questions, students must not only select a multiple-choice answer but also justify their choice in writing. The research was conducted across four sections of an upper-division psychology research methods course at a large public university.

Feedback in your voice

Rubrics are handy tools for providing clear expectations and consistent feedback to learners, but students also welcome authentic feedback that sounds like it came from you. You can add your own “voice” through the commenting tool on the rubric in Brightspace or by adding multimedia feedback.

Choose Your Assessment

The University of Kansas has a fantastic team supporting its CBE program, including a psychometrician (an expert in the measurement of mental capacities and processes) who developed a taxonomy of assessment types. While the taxonomy is still in development, you can look up the verb from your learning outcome (such as Apply) in this database and see helpful related information. This includes: