This week, I want to highlight a Substack post critiquing a recent research article. The author, Nebu Pookins, critically examines a widely referenced paper claiming that increased AI use degrades critical thinking skills. He argues that the study’s design and methodology are fundamentally flawed: the sample isn’t representative, the survey measures self-reported beliefs rather than actual critical thinking performance, and many items intended to measure different constructs are essentially redundant. Because of these flaws, he concludes that the paper does not provide reliable evidence that AI use causes a decline in critical thinking, and that its frequent citation in media and academic discussions may therefore be misleading or premature. He also points to evidence that the paper itself may have been AI-generated.
Why is this post so important to read? The key issue isn’t whether AI might affect cognition; that is a broader, ongoing research question informed by diverse studies on cognitive offloading and educational impacts. What matters here is how we interpret and communicate evidence. The post underscores the importance of scrutinizing research methodology before adopting headlines about AI’s harms or benefits. In teaching and policy conversations, this means encouraging nuanced engagement with research on AI and critical thinking, distinguishing correlation from causation, and integrating AI in ways that support, rather than inadvertently replace, deep learning and reasoning.
If you have found a quality research article on the impact of AI on learning, please share it with us by emailing it to umpi-ctl@maine.edu.
Read the full post here:
Pookins, N. (2026, February 15). Highly-cited “AI erodes critical thinking” study appears to be AI-generated slop. Nebu’s Newsletter. https://nebu.substack.com/p/highly-cited-ai-erodes-critical-thinking