Researchers show that using ChatGPT can compromise our brains and reasoning ability: our intelligence is threatened by how we use AI

Not because devices got louder, but because help now arrives before effort begins.

Give a machine a prompt and it gives back words that look polished. That short‑circuit feels convenient. It also shifts when and how our own thinking kicks in.

What brain scans reveal when artificial intelligence lends a hand

At MIT, researchers monitored people as they wrote with and without digital assistance. Using electroencephalography, they compared three scenarios: writing unaided, writing while searching the web, and writing with a large language model like ChatGPT. The brain looked busiest when people wrote alone. As assistance increased, overall activity and connectivity fell.

More assistance, less neural engagement: when a model drafts ideas, the brain dials back memory use and the links between regions weaken.

The drop didn’t end at the scalp. After writing with an AI, participants struggled to recall the arguments they had just produced. They felt less ownership of the text as well. That shift matters. We learn by wrestling with ideas, retrieving them, and refining them. When a system drafts the heavy parts, those cognitive loops don’t fire in the same way.

From assistance to amnesia

The team led by Nataliya Kosmyna reported that unaided writers showed stronger activation in memory‑related areas. People supported by a search engine sat in the middle. Those writing with a model showed the lowest engagement. Minutes later, they cited fewer of their own points. The tool worked. The brain worked less.

AI makes writing easier, but easy text is not the same as learned knowledge you can later summon and defend.

How unstructured AI use reshapes learning habits

Psychologists call it cognitive offloading. We outsource tasks to external systems so our minds can focus elsewhere. Shopping lists and calendar reminders fit that pattern and help. The difference now sits in the complexity of what we outsource. Not dates. Reasoning. Nuance. Style. Even the act of building a case.


In a controlled trial across several European countries, 150 people were asked to craft political arguments in one of three ways: alone, with free access to an AI assistant, or with a structured AI workflow. The free‑use group produced weaker reasoning and thinner substance. Confidence rose while reflection fell. Even participants proud of their critical thinking slid into the same trap once they started delegating without constraints.

  • Unstructured prompts led to shallow arguments and borrowed logic.
  • Users overestimated the quality of AI‑shaped drafts.
  • Pre‑thinking before prompting improved depth and originality.
  • Structure protected attention and reduced mind‑wandering.

This isn’t a moral failure. It’s friction economics. When a tool reduces effort at the start, we invest less in building the skeleton of a thought. Later, there’s less to remember, less to defend, and less to revise.


A practical way to think with, not through, machines

The fix does not require ditching AI. It asks us to change the order of operations. In the study, a simple five‑step protocol neutralized the worst effects and kept reasoning strong.

A five‑step protocol that keeps you in charge

  • Reflect first: write a quick, private outline of your stance and why.
  • Collect with intent: list specific facts or sources you need to check.
  • Build your argument: draft a skeleton with claims, evidence, and counterpoints.
  • Challenge with AI: ask the model to attack, stress‑test, or broaden, not to write the piece.
  • Revise yourself: integrate useful points in your own words and voice.

Approach             | Unstructured AI use                    | Structured AI workflow
Brain activity       | Lower engagement and connectivity      | Sustained engagement during key steps
Recall and ownership | Poor recall, weak sense of authorship  | Better memory and stronger ownership
Argument quality     | Shallow logic, borrowed phrasing       | Clearer reasoning, personal voice
Role of the tool     | Substitute writer                      | Critical sparring partner

Think first, then ask. Use the model to disagree with you, not to think for you.

Tips for students and knowledge workers

Small guardrails change outcomes fast. They don’t slow you down much, and they preserve the mental work that leads to learning.

  • Set a two‑minute think timer before any prompt. Jot five bullet ideas or questions.
  • Start with a claim, not a topic. “I think X because Y. What would prove me wrong?”
  • Ask for counterarguments and failure cases, not a finished draft.
  • Summarize out loud what you learned before pasting anything into your document.
  • Adopt a no‑copy rule: never paste AI text without rewriting it from memory.
  • Run a recall test later. If you can’t explain the argument without notes, it isn’t yours yet.
  • Rotate focus: AI for research assistance today, human‑only drafting tomorrow.
  • Teach the prompts you use to a colleague. Teaching reveals shortcuts that hurt thinking.

What this means for schools and teams

Assignments can reward process, not just output. Ask for pre‑thinking notes, evidence logs, and a reflection on how AI changed the draft. In meetings, reserve the first five minutes for silent writing before any tool runs. For policy documents, log every model suggestion accepted or rejected, with a reason. That audit grows judgment over time.

Expect bias. Anchoring bias shows up when the first AI answer frames your view. Combat it by generating diverse alternatives and scoring them against a checklist. Automation bias pushes us to trust fluent language too quickly. Use red‑team prompts to roughen the surface. Fluency should not equal truth.

Risks, gains, and the next test

There is real risk of intellectual deskilling if we let assistants own the first draft and the final verdict. Memory shrinks when it is never asked to retrieve. Reasoning atrophies if it never meets resistance. Over time, models trained on AI‑generated text may echo our own shortcuts back to us, narrowing the space of ideas.

The gains are real too: speed, breadth, and a tireless partner for critique. The path forward treats generative systems as mirrors, simulators, and opponents. Ask them to argue the other side. Ask for edge cases. Ask where your logic collapses. Keep your hands on the steering wheel, especially at the start and the end.

If you want a simple practice to try this week, run a “thought sandwich.” Think for three minutes. Prompt for two. Think again for three, without the screen. Then write. That rhythm keeps your neural circuitry in play and still takes advantage of the machine’s reach.
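For readers who like a nudge on a timer, the rhythm above is easy to automate. This is a minimal sketch, not part of the study: the phase names, durations, and the `run_sandwich` helper are all illustrative choices, and passing a no-op `tick` lets you dry-run it without waiting.

```python
import time

# The "thought sandwich" rhythm described above: think for three
# minutes, prompt for two, then think again for three, screen off.
PHASES = [
    ("Think (no screen)", 3 * 60),
    ("Prompt the model", 2 * 60),
    ("Think again (no screen)", 3 * 60),
]

def run_sandwich(phases=PHASES, tick=time.sleep):
    """Announce each phase, then wait out its duration.

    Pass a different `tick` (e.g. a no-op) to dry-run without waiting.
    """
    for name, seconds in phases:
        print(f"{name}: {seconds // 60} min")
        tick(seconds)
    print("Now write.")

if __name__ == "__main__":
    run_sandwich()
```

The whole cycle takes eight minutes before you start writing, which is the point: the machine enters only in the middle, bracketed by your own thinking.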
