Microsoft says AI tools such as Copilot or ChatGPT are affecting critical thinking at work – staff using the technology encounter 'long-term reliance and diminished independent problem-solving'
AI tools might be convenient for workers, but there's a risk they'll become too reliant in the future
Using generative AI at work may impact the critical thinking skills of employees — and that's according to Microsoft.
Researchers at Microsoft and Carnegie Mellon University surveyed 319 knowledge workers in an attempt to study the impact of generative AI at work, raising concerns about what the rise of the technology means for our brains.
Concerns about the negative impact are valid, the report noted, with researchers pointing to the “deterioration of cognitive faculties that ought to be preserved”.
That referenced research into the impact of automation on human work — which found that depriving workers of the opportunity to use their judgement left their cognitive function "atrophied and unprepared" to deal with anything beyond the routine.
Similar effects have been observed elsewhere: smartphone use has been linked to reduced memory, and social media to shortened attention spans.
"Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving," researchers said.
The study noted that users engaged in critical thinking mostly to double-check the quality of output, and that the more confidence a worker had in a generative AI tool, the less likely they were to apply their own critical thinking to the work.
"When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship," the research found.
Researchers said more work was needed on the subject, especially because generative AI tools are constantly evolving and changing how we interact with them.
They called for developers of generative AI to make use of their own data and telemetry to understand how these tools can "evolve to better support critical thinking in different tasks."
"Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows," the researchers added. "To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers."
Reliance on AI tools could become a big problem
All of this is a problem as Microsoft has pushed its AI-powered Copilot tools into its wider software package, a trend across the wider industry — though some workers are sneaking it into their companies without explicit approval, too.
Beyond cutting costs, one of the long-cited assumptions about AI is that it could remove routine tasks from day-to-day work — freeing employees from drudgery to focus on more creative work.
Achieving that requires finding the right balance between fully automated tasks, those with a human in the loop, and wholly human work.
Research from Stanford has suggested workers are more effective and productive when working alongside an AI assistant, but it also found we easily slip into overreliance on such tools, fostering complacency and excessive trust in the technology.
MORE FROM ITPRO
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.
