Is ChatGPT making us dumber? A new MIT study claims using AI tools causes cognitive issues, and it’s not the first – Microsoft has already warned about ‘diminished independent problem-solving’
While research shows AI tools have benefits, they’re having an adverse effect on our brains


While enterprises and consumers alike continue flocking to AI tools, studies into their cognitive impact are coming in thick and fast – and it’s not good news.
A recent study from MIT’s Media Lab suggests that using AI tools affects brain activity. Researchers divided 54 subjects, aged 18 to 39, into three separate groups and asked them to write essays.
One group was directed to write an essay using an AI assistant, specifically ChatGPT, another using Google Search, and the third using no tools at all - referred to as the ‘brain-only’ group.
Using electroencephalography (EEG) to record brain activity, researchers found that of the three groups, ChatGPT users recorded the lowest cognitive engagement and performance.
The paper noted that this particular group “performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring”.
Notably, researchers found that the use of AI tools “reduced the friction involved in answering participants’ questions” compared to the use of search engines, but this advantage was mirrored by a clear cognitive impact.
“This convenience came at a cognitive cost, diminishing users’ inclination to critically evaluate the LLM’s output or ‘opinions’ (probabilistic answers based on the training datasets),” the paper reads.
“This highlights a concerning evolution of the ‘echo chamber’ effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content.”
AI tools are having an impact on education
For younger users, particularly students, the study warned that the use of AI tools could have a long-term negative impact on learning and foster overreliance.
“Prior research points out that there is a strong negative correlation between AI tool usage and critical thinking skills, with younger users exhibiting higher dependence on AI tools and consequently lower cognitive performance scores,” the paper reads.
This raises serious questions, not least because AI tools are growing in popularity among young people.
A survey conducted by Pew Research Center last year showed that more than a quarter (26%) of teenage students use AI chatbots to support studies and schoolwork. This marked a significant increase compared to the 13% of students using these tools in the year prior.
MIT doubles down on Microsoft’s research
This isn’t the first study to highlight the potential cognitive impact of AI tools in the last year.
Researchers at Microsoft and Carnegie Mellon University showed that frequent use of generative AI at work can lead to the “deterioration of cognitive faculties that ought to be preserved”.
The study, which included a survey of 319 knowledge workers, revealed a marked deterioration in critical thinking skills and found that participants’ cognitive function “atrophied”.
“While AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving,” the paper noted.
According to researchers, users who were wary of the outputs produced by AI tools and double-checked the quality of their work consistently outperformed those who were more confident in the content they produced.
MIT is keen to highlight that its recent paper is yet to be peer reviewed, and Microsoft said more work was needed on the subject. However, the findings from both studies stand in stark contrast to the frequent talking points around the use of generative AI tools - mainly that they’re a productivity booster for workers.
Research from Stanford University shows that workers are indeed more productive and efficient when working with AI assistants, but it raises the same concern: the study warned that users can become over-reliant on the tools.
Worse still, workers reliant on these solutions often become complacent and overly trusting of the content they produce. As with the Microsoft study, those who are vigilant and more skeptical of outputs perform better.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.