

How AI Changes Our Brain (and If You Need to Be Alarmed)


The topic is intriguing, so let me start with a Freudian foreword.

The universal narcissism of human intelligence has so far received three heavy blows. The first came when we discovered writing. Socrates said: “For this discovery of yours [writing] will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust the external written characters and not remember of themselves”. The second came when people were introduced to GPS and their spatial orientation abilities degraded. The final blow is probably the most wounding of all: humans have delegated thinking itself to AI.

The MIT Media Lab Study

Opinions on how AI is changing our brains are getting louder, and more intriguing. The IBM article ‘When AI thinks for us, the brain gets quieter’ captures the essence of the process in its title alone. The article discusses a study by the MIT Media Lab in which students from the Boston area took part in multiple essay-writing sessions with and without AI assistance. The research team fitted the students with EEG caps to track their neural activity while they wrote essays with ChatGPT, with a plain Google search engine, or with no tool at all.

The goal was to see what happens in the brain. The team measured neural connectivity, that is, how well different parts of the brain interact when performing a task. When students were using AI, their brains showed lower connectivity across the regions related to memory and thinking. When students worked on their own, there was more cross-regional communication.

But the twist came later. In the final stage of the experiment, students were shuffled into new groups: those who had been writing with ChatGPT were asked to proceed without it, and vice versa. This revealed an interesting pattern. The study’s lead author, Nataliya Kosmyna, explains: “If they started out using ChatGPT and then were asked to write on their own, their neural engagement was lower than if they had started without tools and only later used the AI”.

The MIT study is believed to echo the findings of an older study by Sparrow, Liu and Wegner, described in their article ‘Google effects on memory: cognitive consequences of having information at our fingertips’. One concept the study introduces is cognitive offloading: the human brain remembers less because information is easily found on the net. The overall takeaway is that the brain develops a new symbiotic relationship with technology, delegating to the internet, drives and clouds large pieces of information and data sets that simply don’t have to be remembered any more. While the research team expresses concern about this, other commenters see it as natural cognitive adaptation, inevitable in an increasingly fast-paced world.

What unites the two studies is concern about the impairment of human abilities. But while the 2011 study describes a dependency in which memory mechanisms naturally respond to a changing informational and technological landscape, the MIT Media Lab findings are immediately alarming. Kosmyna’s experiment shows that timing matters: the earlier the human brain is introduced to an omnipotent AI, the harder it becomes to develop normal interaction between brain regions. When AI does the hard work behind learning, such as building associations and actually encoding and manipulating the target information, are we still learning? Is it still effective? The students did write essays, after all, technically completing the work. But for the ChatGPT group, personal investment, and therefore intellectual effort, was not as high as in the AI-free group.

You see, cognitive offloading with AI takes a darker turn. While Google lets us rely on its capacity to remember, it doesn’t rob us of problem-solving skills. Google can remember for me, but it cannot solve problems or critically assess information. With AI, though, especially among today’s school students, cognitive offloading runs too deep to be helpful: AI can write a report, build a presentation and so much more. Collaboration, critical assessment and information-retrieval skills are barely practiced.

The Other Side of Things

The 2025 study by Gerlich, ‘AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking’, paints a familiar picture: frequent use of AI is strongly tied to lower engagement in deep thinking and reflection and to declining critical-thinking abilities.

The findings suggest that AI tools encourage greater cognitive offloading, ultimately diminishing critical-thinking skills. Participants who had relied heavily on AI showed a reduced ability to critically evaluate information and to solve problems reflectively. Promisingly, Gerlich links these results to the wrong kind of AI use rather than to AI itself. The study does not definitively point a finger at AI as the sole cause of reduced critical thinking and cognitive skills: correlation doesn’t equal causation, Gerlich warns.

As a father myself, I doubt AI will irreversibly change the very mechanics of the human brain or actually impair our abilities. However, I do believe that children need to be somewhat guarded or limited in their use of AI. While I can’t take my son’s phone away from him, I can explain why good old pen-and-paper problem solving is far more beneficial than simply asking AI to do it for you.

Conclusion

To finish on a positive note, I dug around for some practical advice from organizational psychologists and educators that can help us adults remain productive and avoid being corrupted by AI’s ease of use.

First, AI is your talented trainee or junior colleague, not your replacement.

Rob Enderle’s article in TechNewsWorld teaches us that it is crucial to stay mentally present even when using an AI tool to research, draft or handle mundane work for you. Degradation starts with passive delegation of tasks and blind reliance. Be an ‘aggressive’ editor and interact with the content AI shapes for you.

Second, follow the Vaughn Tan rule: ‘Do NOT outsource your subjective value judgements to an AI, unless you have a good reason to, in which case make sure the reason is explicitly stated’. The rule comes from the work of the scholar Vaughn Tan, who holds a PhD from Harvard and is an advisor, author and tool maker now researching how AI influences education worldwide. It means that what Tan calls meaning-making activities should remain exclusively human: we should not let AI judge the good and the bad.

Third, and my favorite: we should not let AI do our job. The article ‘How to Use AI Without Becoming Stupid’ offers an anecdotal yet very accurate recommendation: to grow with AI, to benefit from it and to advance with it, you need to stay at the center of the work, letting AI in only to gently steer you and point out weaknesses.

As I was writing this, it was at times incredibly tempting to just let AI write something for me. But I believe in a world that thrives with AI and remains human, on the outside and at its very core. Whether AI takes over depends on us, on our effort to preserve the dominance of the human mind over the machine.

Ilya Romanov is an entrepreneur and AI enthusiast with over 15 years of experience in marketing across industries such as travel, banking, e-commerce, crypto, and AI. This diverse background gives him deep insight into the nature of different businesses. In his writing, he focuses on how AI is applied in business and how it is transforming the world around us.