This January, British journalist Alex Preston submitted to The New York Times a review of Jean-Baptiste Andrea’s novel ‘Watching Over Her’ that had been polished with the help of an artificial intelligence (AI) tool. The piece borrowed phrases from a Guardian review of the same book by Christobel Kent, published four months earlier. Preston had not noticed, but a reader did. After an investigation, The Times cut ties with him permanently.
The consequences of using AI for writing are something researchers have been trying to quantify. What happens when humans outsource their thinking to AI? Findings show that originality is compromised, and brain scans now suggest a decline in cognitive function itself.
Before 2022, writing an essay meant struggling through half-formed ideas, false starts, and rewrites. Today, it can begin and end with a few prompts typed into ChatGPT, launched in November 2022. The result comes fast and looks clear, and after a quick revision, the essay is ready for submission.
For all the efficiency gains that AI promises (the summaries, the drafted emails, the instant answers), researchers have begun documenting what happens to the brain when it is no longer asked to do the heavy lifting.
According to a 2025 study by Wuhan University in China, habitual reliance on AI tools can dull critical thinking, erode memory, and suppress the neural activity that learning depends on, raising the question of whether AI is changing how we think and to what extent.
What the brain scans show
In a 2025 study conducted at Massachusetts Institute of Technology (MIT) Media Lab, researchers divided 54 students from five Boston-area universities into three groups: one that used ChatGPT to write essays, one that used traditional search engines, and one that worked entirely without technological aid.
Over four months, all participants wore electroencephalogram (EEG) headsets, non-invasive scalp electrodes that record the brain’s electrical activity. Those who relied exclusively on AI showed weaker brain connectivity, lower memory retention, and a fading sense of ownership over their work.
The ChatGPT group showed the least amount of brainwave activity, with cognitive function decreasing over time in key areas of their brains. Strikingly, 83 percent of students in that group could not recall key points from their own essays, and none could provide accurate quotes from their papers. According to the study, the ChatGPT group’s participants performed worse than their counterparts in the brain-only group at neural, linguistic, and scoring levels.
Participants who relied on AI assistants showed significantly lower activation in the prefrontal cortex, the region heavily involved in decision-making and critical thinking, compared with those who used Google or no assistance to write at all.
The persistence of the MIT findings is sobering: even after participants stopped using ChatGPT, their brain activity remained sluggish, suggesting that once the brain starts outsourcing its thinking, it does not easily regain control.
The MIT study did not emerge in a vacuum.
Another 2025 study, by professor and corporate strategist Michael Gerlich, found that heavy reliance on AI tools may gradually erode users’ critical thinking skills, a measurable cognitive cost.
Broader digital technology research has also shown that frequent GPS use may reduce hippocampus activity, affecting navigation and spatial memory, according to a 2020 study by McGill University in Canada.
A 2021 study published in Frontiers in Psychiatry linked frequent mobile phone use to changes in brain anatomy, including the loss of grey matter (the nerve cell bodies responsible for processing information) in regions tied to memory and executive function. AI, in some ways, goes further still by doing the work itself.
A challenge Egypt cannot ignore
The stakes are particularly high in countries investing heavily in AI education.
Egypt’s Minister of Education announced in October 2024 that AI and programming would become core subjects for first-year secondary students, as part of an initiative called Gateway to Advanced Technologies and Education (GATE), aimed at equipping the country’s more than 25 million students with digital skills.
Egypt’s Supreme Council of Universities followed in September 2025 with the country’s first regulatory guide for AI use in higher education, calling for the integration of AI ethics into university curricula and training faculty and students on responsible use. That the guide was deemed necessary at all reflects the scale of the concern.
A 2024 cross-sectional study by Zagazig University of 423 medical students across 10 Egyptian universities found that most were already using generative AI for grammar checking, completing tasks and homework, and conducting research. This widespread adoption and advocacy of AI use sits uneasily alongside growing evidence of AI’s cognitive toll.
While researchers are not calling for a rejection of AI, the MIT study recommended new guardrails, such as interactive prompts that require users to engage more deeply with AI outputs, and hybrid models that alternate between AI assistance and independent problem-solving.
The distinction, ultimately, is between using AI as an assistant that supports thinking and using it as a substitute that replaces it. As the EEG data makes clear, the brain notices the difference, even when humans do not.