I watched a colleague scroll through a problem set, paste it into a chat, and copy the answer without a second thought. Maybe you've felt it too: that small, steady relief when the machine does the heavy lifting. I want you to notice how fast that relief can become a gap.
I’m going to walk you through the evidence, practical fixes, and what to watch for when AI starts doing the thinking for you.
The test room: researchers handed people a chatbot and watched what happened
At a lab table, people who’d been solving problems suddenly stopped trying when the assistant answered for them.
Researchers from Carnegie Mellon, MIT, Oxford, and UCLA ran a blunt experiment: two groups, the same problems, one with an AI assistant (a chatbot powered by OpenAI’s GPT-5) and one without. The AI group used the assistant for a brief session—about 10 minutes—then the tool was removed without warning for the final three questions.
During the AI-assisted portion, the aided group answered math questions faster and completed more of them. But when the assistant vanished, performance cratered: the AI group’s solve rate on the final questions was roughly 20% lower than the control group’s, and they skipped nearly twice as many items.
The effect wasn’t limited to arithmetic. In SAT-style reading tests, performance held while the assistant was available, then slid when it wasn’t: more wrong answers, more skips. The takeaway from the paper (read the study here) is stark: even a short stint with an assistant can make people stop trying to reason.
One metaphor: your attention becomes a dimmer switch flipped down, not broken but unwilling to go full bright.
Can using AI make you worse at thinking?
Yes—at least in the short term. The people who used AI to get full answers showed the largest drops. Those who used it for hints performed like the control group. Let the assistant do the solving and you’ll struggle more when it’s taken away.
A student froze when the autocomplete disappeared — what cognitive offloading looks like
In classrooms and offices, people drift toward outsourcing: if the tool will give the answer, why bother wrestling with the problem?
Psychologists call this cognitive offloading: delegating mental work to external tools. The new study shows how quickly that delegation happens—ten minutes is enough for people to change strategy. Since the decline appeared across math and reading tasks, the authors argue this is a general consequence of AI-assisted problem solving, not a quirk of one subject.
Self-reported behavior mattered. Participants who asked the chatbot for hints retained their skills. Those who treated it as an answer-machine handed over thinking entirely. This matches other findings: Microsoft’s workplace research found similar declines among knowledge workers who leaned on AI, and a Polish study showed doctors perform worse than baseline once AI support is removed—even as AI improves detection while present.
A second metaphor: reasoning can harden into a rusted key that no longer fits the lock once you stop turning it.
How fast does AI affect reasoning?
Evidence says very fast—about ten minutes in these lab conditions. Repeated, habitual reliance likely compounds the effect over time.
I tell teams to use AI like a co-pilot, not a chauffeur — simple rules that preserve your thinking
On real projects I coach, people who keep their minds active outpace those who hand every task over to the assistant.
Practical habits cut the risk:
– Ask for hints, not full answers. If you prompt ChatGPT or GPT-5 for steps rather than solutions, you practice reasoning while still benefiting from guidance.
– Time-box your AI use. Use the assistant for a defined window—then switch to unaided work for the same period.
– Require explanations. Make the model show its reasoning, and then critique it. That forces retrieval and evaluation.
– Delay the reveal. Try answering first, then check with the assistant. That simple friction preserves effort.
Tools matter: I use ChatGPT (including ChatGPT Plus at $20 (€18) a month) for rapid drafts and hypotheses, but I train teams to annotate where the model helped so they can spot patterns of dependence.
Will relying on AI permanently damage my ability to think?
Not necessarily—a single session is unlikely to cause lasting damage, but habits harden. Short, frequent surrenders of mental work train you to expect the assistant to do the thinking. That’s why how you use AI matters more than whether you use it.
This is your brain on AI. Any questions?