
There is no evidence that AI causes brain damage. (Not yet anyway.)

Unpacking the viral, un-peer-reviewed study on LLM use and brain function.

16 min read · Jun 24, 2025


[Image: screenshots of numerous headlines misrepresenting the study discussed in this article.]
Reading bad science journalism is bad for your brain, too.

There’s a new, not-yet-peer-reviewed scientific article out examining the relationship between Large Language Model (LLM) use and brain function, which many reporters are incorrectly claiming proves that ChatGPT is damaging people’s brains.

As an educator and writer, I am concerned by the growing popularity of so-called AI writing programs like ChatGPT, Claude, and Google Gemini, which when used injudiciously can take all of the struggle and reward out of writing, and lead to carefully written work becoming undervalued. But as a psychologist and lifelong skeptic, I am forever dismayed by sloppy, sensationalistic reporting on neuroscience, and how eager the public is to believe any claim that sounds scary or comes paired with a grainy image of a brain scan.

So I wanted to take a moment today to unpack exactly what the study authors did, what they actually found, and what the results of their work might mean for anyone concerned about the rise of AI — or the ongoing problem of irresponsible science reporting.

If you don’t have time for 4,000 lovingly crafted words, here’s the tl;dr.


Written by Devon Price

Social Psychologist & Author of LAZINESS DOES NOT EXIST and UNMASKING AUTISM. Links to buy: https://linktr.ee/drdevonprice
