Is ChatGPT making us dumber? A new MIT study claims using AI tools causes cognitive issues, and it’s not the first – Microsoft has already warned about ‘diminished independent problem-solving’
While research shows AI tools have benefits, they’re having an adverse effect on our brains
While enterprises and consumers alike continue flocking to AI tools, studies into their cognitive impact are coming in thick and fast – and it’s not good news.
A recent study from MIT’s Media Lab suggests that using AI tools impacts brain activity. The study divided 54 subjects - ranging in age from 18 to 39 - into three separate groups and asked them to write essays.
One group was directed to write an essay using an AI assistant, specifically ChatGPT, another using Google Search, and the third using no tools at all - referred to as the ‘brain-only’ group.
Using electroencephalography (EEG) to record brain activity, researchers found that of the three groups, ChatGPT users recorded the lowest cognitive engagement and performance.
The paper noted that this particular group “performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring”.
Notably, researchers found that the use of AI tools “reduced the friction involved in answering participants’ questions” compared to the use of search engines, but this advantage came at the expense of cognitive engagement.
“This convenience came at a cognitive cost, diminishing users’ inclination to critically evaluate the LLM’s output or ‘opinions’ (probabilistic answers based on the training datasets),” the paper reads.
“This highlights a concerning evolution of the ‘echo chamber’ effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content.”
AI tools are having an impact on education
For younger users, particularly students, the study warned that the use of AI tools could have a long-term negative impact on learning and create an overreliance.
“Prior research points out that there is a strong negative correlation between AI tool usage and critical thinking skills, with younger users exhibiting higher dependence on AI tools and consequently lower cognitive performance scores,” the paper reads.
This raises serious questions, particularly as AI tools continue to grow in popularity among young people.
A survey conducted by Pew Research Center last year showed that more than a quarter (26%) of teenage students use AI chatbots to support studies and schoolwork. This marked a significant increase compared to the 13% of students using these tools in the year prior.
MIT study echoes Microsoft’s research
This isn’t the first study to highlight the potential cognitive impact of AI tools in the last year.
Researchers at Microsoft and Carnegie Mellon University found that frequent use of generative AI at work can lead to the “deterioration of cognitive faculties that ought to be preserved”.
The study, which included a survey of 319 knowledge workers, revealed a marked deterioration in critical thinking skills and found that participants’ cognitive function “atrophied”.
“While AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving,” the paper noted.
According to the researchers, users who were wary of the outputs produced by AI tools and double-checked the quality of their work consistently outperformed those who were more confident in the content they produced.
MIT is keen to highlight that its recent paper has yet to be peer reviewed, and Microsoft said more work is needed on the subject. Nonetheless, the findings from both studies stand in stark contrast to one of the most frequent talking points around generative AI tools - that they’re a productivity booster for workers.
Research from Stanford University shows that workers are indeed more productive and efficient when working with AI assistants, but once again there are caveats: the study warned that users can become over-reliant on the tools.
Worse still, workers reliant on these solutions often become complacent and overly trusting of the content they produce. As with the Microsoft study, those who are vigilant and more skeptical of outputs perform better.
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.