- The harm is not the tool; it is passive acceptance of its output
- Users who push back, edit, and reject AI gain confidence
- Participants accepted wrong AI answers nearly 80 percent of the time
Nataliya Kosmyna keeps a list of words she would rather journalists not use about her own research. Brain rot. Stupid. Dumb. Terrifying.
Her group at the MIT Media Lab posted the list publicly last summer. Her preprint on AI and essay writing had gone viral, the one that introduced the idea of cognitive debt, and the headlines did exactly what she had asked them not to do.
The paper coined a phrase that stuck. Kosmyna's team had wired 54 students into electroencephalography headsets and asked them to write essays with ChatGPT, with Google, or unaided. The brains using ChatGPT showed the weakest neural connectivity.

Nataliya Kosmyna, a research scientist at the MIT Media Lab's Fluid Interfaces group and visiting research faculty at Google. Photo credit: braini.io
When those participants wrote without help in a fourth session, their connectivity stayed weak. The debt carried over.
The press did what the press does. ChatGPT was rotting our brains. Cognitive collapse, one viral thread declared, citing a "47 percent reduction in brain connectivity" that the authors had not actually claimed.
Definition
Cognitive Debt
The phrase came from Nataliya Kosmyna's MIT preprint in June 2025. It borrows from technical debt in software: a short-term gain that compounds into long-term cost. In cognitive terms, offloading mental effort to AI reduces the practice your brain needs. The 2026 research has sharpened it. Debt accumulates in passive use, not active engagement.
That was June 2025. Ten months later, the research has moved on without most of the coverage noticing.
The 2026 evidence is different
Three studies published since January, using three different methods, have converged on a finding that reframes the entire debate. AI use itself is not the variable. How you use it is.
The most recent landed in April 2026. Sarah Baldeo, a researcher at Middlesex University in London, published her findings in the American Psychological Association journal Technology, Mind, and Behavior.
Sarah Baldeo is a neuroscientist and AI entrepreneur. Photo credit: Benefits Canada

Her sample was large. She recruited 1,923 working adults across the United States and Canada and gave them ten simulated work tasks with whichever AI tool they preferred. She logged their behavior and asked them how they felt afterward.
The headline finding is not that AI users lost confidence. Some did and some did not, and the difference was not the tool.
Participants who accepted the AI's first answer reported lower confidence in their own reasoning. Participants who pushed back reported the opposite. The correlation between override behavior and self-reported confidence sat at r = .61, statistically robust at p < .01.
A regression model accounted for 41 percent of the variance in confidence using just two predictors. How much someone relied on AI. How often they overrode it.
What the statistics mean
r = .61 is the correlation between override behavior and confidence. On a scale where 0 means no relationship and 1 means perfect match, .61 is strong - the two reliably move together.
p < .01 means there is less than a 1 in 100 chance the result is random noise. It is the standard threshold for a real finding.
41 percent of variance means those two behaviors, relying on AI and pushing back, explain just over two-fifths of the difference in confidence across participants.
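To make those quantities concrete, here is a minimal Python sketch on synthetic data. The numbers are invented for illustration; nothing below comes from Baldeo's dataset.

```python
# Illustration only: synthetic data, not Baldeo's. Shows how a correlation
# (r), its p-value, and the variance explained by a two-predictor
# regression (R^2) are computed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200

# Invented per-participant measures.
overrides = rng.normal(size=n)   # how often each person overrode the AI
reliance = rng.normal(size=n)    # how heavily each person relied on the AI
confidence = 0.6 * overrides - 0.3 * reliance + rng.normal(scale=0.8, size=n)

# Pearson correlation between override behavior and confidence, with p-value.
r, p = stats.pearsonr(overrides, confidence)
print(f"r = {r:.2f}, p = {p:.4g}")

# Two-predictor regression: what share of the variance in confidence
# do reliance and overrides explain together? That share is R^2.
X = np.column_stack([np.ones(n), reliance, overrides])
coef, *_ = np.linalg.lstsq(X, confidence, rcond=None)
residuals = confidence - X @ coef
r_squared = 1 - residuals.var() / confidence.var()
print(f"R^2 = {r_squared:.2f}")
```

With these made-up coefficients the printed values land in the same neighborhood as Baldeo's figures, which is the point of the exercise: r captures one pairwise relationship, while R² pools both predictors into a single share of explained variance.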
The way you argue is the whole story
Baldeo writes the implication carefully in her paper. "AI use itself was not associated with reduced confidence. Rather, confidence varied as a function of how participants engaged with AI-generated output."
That is the new question, stated quietly.
She then does something rare in this literature. She names the problem with the genre. Her introduction cites Stanley Cohen's 1972 work on moral panics, the social pattern in which a perceived threat, here a new technology, produces alarm out of proportion to the evidence.
Calculators were going to make us innumerate. Television was going to rot our brains. Smartphones were going to destroy attention. Each panic produced studies, headlines, hand-wringing, and then evidence that the truth was more interesting and less alarming.
Baldeo seems determined this round will be different. Her own findings, she writes, "do not demonstrate harm, decline, or deficit."
AI use itself was not associated with reduced confidence. Rather, confidence varied as a function of how participants engaged with AI-generated output.
Sarah Baldeo, Technology, Mind, and Behavior, April 2026
Anthropic studies its own product, and finds something awkward
In late January, two researchers at Anthropic posted a randomized controlled trial on arXiv. Judy Hanwen Shen and Alex Tamkin had asked 52 software engineers to learn an unfamiliar Python library called Trio. Half worked with AI assistance. Half worked alone.
The AI-assisted group finished slightly faster. They also scored 17 percentage points lower on a follow-up comprehension test. They could complete the task. They could not explain how.

Judy Hanwen Shen, Stanford University. Photo credit: https://heyyjudes.github.io/
Then Shen and Tamkin did the most useful thing in the paper. They categorized how participants had actually used the AI, and six distinct interaction patterns emerged.
Three of them preserved learning.
Participants who asked the model to explain its suggestions, who alternated between writing and consulting, who treated the AI as a tutor rather than an answer machine, learned just as well as the unaided group. The other three patterns, the ones built around delegation, did not.
The paper notes that "AI assistance should be carefully adopted into workflows to preserve skill formation, particularly in safety-critical domains."
The accompanying Anthropic blog post adds a sentence that reads, in context, like an admission. The setup tested in the study, the authors write, is "different from agentic coding products like Claude Code; we expect that the impacts of such programs on skill development are likely to be more pronounced than the results here."
A company arguing against the default use of its own product category is not common.
The eighty percent
The Wharton School ran the largest study in the cluster. Steven Shaw and Gideon Nave recruited 1,372 participants for three pre-registered experiments involving 9,593 trials. They controlled, secretly, whether the AI's answers were correct.
Participants accepted the AI's answer about 93 percent of the time when it was right. They accepted it nearly 80 percent of the time when it was wrong.
Steven Shaw, postdoctoral researcher at the Wharton School. Photo credit: https://www.whyweconsume.com/

Shaw and Nave call this cognitive surrender. It is distinct from earlier ideas about cognitive offloading, the calculator habit of using a tool while still checking the result. Surrender is what happens when you stop checking.
The user is no longer in the loop.
Key figure
80%
of wrong AI answers were accepted by participants (Wharton, 2026)
What unites Baldeo, Shen and Tamkin, Shaw and Nave, and the original MIT study is the same finding from four directions. The user who treats the AI as a tutor and pushes back stays sharp. The user who treats it as an oracle does not.
Cognitive debt is the default
Baldeo's data also captured how rare the good pattern is. Across her ten tasks, participants overrode AI suggestions an average of 0.8 times per task.
Most accepted what they were given.
If the AI solves a problem for you, you don't think and you don't learn. But the reverse is also true. If you make AI act like a tutor and push people, you get improved outcomes.
Ethan Mollick, Wharton School, TIME, April 2026
Ethan Mollick, the Wharton professor who wrote Co-Intelligence, told TIME in April 2026 that the pattern is partly architectural. "Humans are naturally designed to be lazy and take as little effort as possible to do things," he said. "We're deciding what skills to cede to the AI."
His clarifying line is closer to a working principle. "If the AI solves a problem for you, you don't think and you don't learn. But the reverse is also true. If you make AI act like a tutor and push people, you get improved outcomes."

Ethan Mollick, Wharton School Associate Professor and AI researcher. Photo credit: X.
The interfaces most people use do not nudge them toward the tutor pattern. The chat box rewards quick acceptance. The default reply is satisfying enough to pass for a finished thought.
There is no friction asking whether you have considered an alternative, no prompt asking you to articulate your reasoning before the model offers its own. Baldeo's paper sketches one such prompt in a small diagram near the end: a decision tree in which the system, after detecting repeated unmodified acceptance, asks a question:
Have you tried solving this yourself?
She labels it conceptual and untested. It is also obvious.
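The mechanism is simple enough to sketch. What follows is a hypothetical Python illustration, not code from the paper; the three-acceptance threshold and every name in it are invented for the example.

```python
# Hypothetical sketch of Baldeo's conceptual nudge. She offers only a
# diagram, so the threshold, names, and trigger logic here are invented.
from dataclasses import dataclass

ACCEPT_STREAK_LIMIT = 3  # assumption: nudge after three unmodified acceptances

@dataclass
class EngagementTracker:
    unmodified_streak: int = 0

    def record(self, suggestion: str, final_text: str) -> str | None:
        """Track whether the user edited the AI's suggestion before using it,
        and return a nudge once repeated unmodified acceptance is detected."""
        if final_text.strip() == suggestion.strip():
            self.unmodified_streak += 1
        else:
            self.unmodified_streak = 0  # any edit counts as engagement
        if self.unmodified_streak >= ACCEPT_STREAK_LIMIT:
            self.unmodified_streak = 0
            return "Have you tried solving this yourself?"
        return None

# Accepting three suggestions verbatim trips the nudge.
tracker = EngagementTracker()
for answer in ["draft one", "draft two", "draft three"]:
    nudge = tracker.record(answer, answer)  # user accepts as-is
    if nudge:
        print(nudge)
```

The design detail worth noticing is that the counter resets on any edit. The friction targets only the pure-acceptance pattern the 2026 studies flag, not AI use itself.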
What the researchers want us to know
The two scientists at the center of this story share something beyond their findings. Both have publicly objected to how their work has been described.
Kosmyna's lab maintains its list of forbidden words. Baldeo writes a paragraph titled "Moral Panic and Interpretive Restraint" into her own peer-reviewed paper. They are doing what good scientists are supposed to do.
Their evidence does not support the cognitive collapse story, and they are saying so, even as that story attaches itself to their work.
The 2026 research has clarified what a healthy interaction with AI looks like. Argue back. Edit. Reject the first answer. Treat the model as a tutor that you sometimes correct, not as an oracle whose output you accept.
The harder question is whether the tools we use will ever be designed to make that the easy path. For now, it is the harder one.
And most people, the data shows, are not taking it.
Sources
- Baldeo, S. (2026). Generative Artificial Intelligence Reliance and Executive Function Attenuation. Technology, Mind, and Behavior.
- Kosmyna, N. et al. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt. arXiv preprint.
- Shen, J. H. and Tamkin, A. (2026). How AI Impacts Skill Formation. Anthropic / arXiv.
- Anthropic (2026). Research blog: AI assistance and coding skills.
- Shaw, S. D. and Nave, G. (2026). Thinking – Fast, Slow, and Artificial: The Rise of Cognitive Surrender. Wharton School Research Paper.
- Haupt, A. (2026, April 16). Letting AI Do Your Work Erodes Your Confidence. TIME.
Fact Check: Claim-by-Claim Verification (Verified)
The article's core claims about the Kosmyna MIT preprint, the Shen & Tamkin Anthropic arXiv RCT, the Shaw & Nave Wharton SSRN paper, and Ethan Mollick's TIME interview were all verified. Baldeo's published APA paper (DOI confirmed) is real; her exact internal statistics (r = .61, 41% variance, 0.8 overrides/task) could not be independently verified from press coverage but are not contradicted.
Editor note: These specific figures derive from the article author's reading of the Baldeo paper itself. Press summaries do not quote the exact statistics, so they could not be cross-verified.
Commentary
- Three of Baldeo's quantitative findings (r = .61, 41% variance, 0.8 overrides/task) could not be cross-verified via press coverage because the primary paper is paywalled. The DOI resolves and the paper is real; the exact numbers trace to the author's direct reading.
- The Cohen 1972 reference inside Baldeo's paper could not be independently confirmed without paper access but is consistent with the paper's documented framing.
- The Kosmyna paper remains a preprint, not peer-reviewed; this is correctly labeled in the article.
Sources used for verification
Academic/Peer-reviewed:
- Baldeo, S. (2026). Generative Artificial Intelligence Reliance and Executive Function Attenuation — Technology, Mind, and Behavior, APA
- Kosmyna, N. et al. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt — arXiv preprint
- Shen, J. H. & Tamkin, A. (2026). How AI Impacts Skill Formation — arXiv / Anthropic
- Shaw, S. D. & Nave, G. (2026). Thinking – Fast, Slow, and Artificial: The Rise of Cognitive Surrender — Wharton / SSRN
Other reliable sources:
- Haupt, A. (2026). Letting AI Do Your Work Erodes Your Confidence — TIME
- AI assistance and coding skills — Anthropic research blog
- Over-reliance on AI can undermine confidence — APA press
- Baldeo study summary — EurekAlert
- Your Brain on ChatGPT — MIT Media Lab study site
Fact-checked by Perplexity Sonar Pro on 2026-04-20
