Fake it till you make it: 79% of tech workers pretend to know more about AI than they do – and executives are the worst offenders
Tech industry workers are exaggerating their AI knowledge and capabilities


Find AI confusing? Don't worry, so do most of your colleagues (almost eight in ten of them), though they likely won't admit it.
That's the key finding from a survey of 1,200 technology workers and executives in the US and UK by training company Pluralsight, which found that 79% of tech workers pretend to know more about AI than they actually do.
Executives may be even more dishonest about their AI skills, the survey found, with around 91% of bosses admitting to faking AI knowledge. On the upside, half of companies now offer AI training, so perhaps less dishonesty will be necessary in the future, Pluralsight noted.
The research also found that, despite admitting to faking AI knowledge, nine in ten tech workers and execs believe they have the necessary skills to use AI in their day-to-day roles, though most thought their colleagues lacked the same skills.
"One potential explanation for this gap is the Dunning-Kruger effect — a well-researched phenomenon where a person’s lack of knowledge and skill in a specific area causes them to greatly overestimate their competence," the report noted.
"If this is the case, then it’s likely that a large percentage of the workforce believe they have greater AI skills than they do because they lack enough knowledge about AI to 'know what they don’t know'."
Shadow AI at work
This phase of the AI revolution appears to be a bit confusing for everyone. Not only are those in the tech industry lying about their own capabilities, but they're also being told to use AI and then called lazy for doing so.
The survey showed 95% of tech executives believe AI skills are crucial for job security, yet 61% of employees and 73% of execs said that using generative AI to assist their own work was seen as "lazy" by their company.
That leads to the use of "shadow AI", the report noted, which can cause security and compliance issues.
"With this stigma, workers often use… AI tools for work projects without giving credit or acknowledgement for its use," the report noted.
"Two in three people have noticed coworkers use AI without admitting it, while one in three report hidden AI use being widespread in their workplace."
From a worker standpoint, the use of shadow AI matters because those not using such tools are being compared to those who are, making it difficult to keep pace without access to the same tools.
"It also gives a mistaken impression that nobody is using AI, so there is no urgency to utilize it, when in reality colleagues may be getting AI assistance regularly," the report added.
Shadow AI has been creeping into a range of tech industry professions over the last two years. Research in January this year, for example, found software developers are increasingly using non-approved tools, prompting major concerns over potential security lapses.
The security aspect of shadow AI in particular is a recurring talking point. Analysis from BCS, The Chartered Institute for IT, shows that using non-approved tools raises the risk of breaching data privacy rules or exposing organisations to potential security vulnerabilities.
Job losses and AI shortages
This fluctuating perception of AI tools in the workplace comes amid a period of heightened concern among workers.
The study from Pluralsight found 90% of respondents believe it’s somewhat likely their jobs will be replaced by the technology, despite half of employers actually adding AI-related jobs.
Skills shortages are also placing pressure on employers and employees alike. Two-thirds of respondents said they've had to abandon an AI project due to a lack of skilled staff, for example.
"In addition to the systemic misrepresentation of AI knowledge, Pluralsight also found that AI is complicating perceptions about how work is getting done," said Chris McClellen, Chief Product and Technology Officer at Pluralsight.
"Fears about AI supplanting jobs are becoming the new norm and employees are quietly worried that using AI in their daily routine looks lazy."
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.