Fake it till you make it: 79% of tech workers pretend to know more about AI than they do – and executives are the worst offenders
Tech industry workers are exaggerating their AI knowledge and skills


Find AI confusing? Don't worry, so do most of your colleagues — almost eight in ten of them, but they likely won’t admit it.
That's the key finding from a survey of 1,200 technology workers and executives in the US and UK by training company Pluralsight, which found that 79% of tech workers pretend to know more about AI than they actually do.
Executives may be even more dishonest about their AI skills, the survey found, with around 91% of bosses admitting to faking AI knowledge. On the upside, half of companies are now offering AI training, so perhaps less dishonesty will be necessary in the future, Pluralsight noted.
The research also found that, despite admitting to faking AI knowledge, nine in ten tech workers and execs believe they have the necessary skills to use AI in their day-to-day roles, though most think their colleagues lack those same skills.
"One potential explanation for this gap is the Dunning-Kruger effect — a well-researched phenomenon where a person’s lack of knowledge and skill in a specific area causes them to greatly overestimate their competence," the report noted.
"If this is the case, then it’s likely that a large percentage of the workforce believe they have greater AI skills than they do because they lack enough knowledge about AI to 'know what they don’t know'."
Shadow AI at work
This phase of the AI revolution appears to be confusing for everyone. Not only are those in the tech industry lying about their own capabilities, they're also being told to use AI and then called lazy for doing so.
The survey showed 95% of tech executives believe AI skills are crucial for job security, but 61% of their employees and 73% of execs reported a perception that using generative AI to assist their own work was seen as "lazy" by their company.
That leads to the use of "shadow AI", the report noted, which can cause security and compliance issues.
"With this stigma, workers often use… AI tools for work projects without giving credit or acknowledgement for its use," the report noted.
"Two in three people have noticed coworkers use AI without admitting it, while one in three report hidden AI use being widespread in their workplace."
From a worker standpoint, the use of shadow AI matters because those not using such tools are being compared with colleagues who are, making it difficult to keep pace without access to the same tools.
"It also gives a mistaken impression that nobody is using AI, so there is no urgency to utilize it, when in reality colleagues may be getting AI assistance regularly," the report added.
Shadow AI has been creeping into a range of tech industry professions over the last two years. Research in January this year, for example, found software developers are increasingly using non-approved tools, prompting major concerns over potential security lapses.
The security aspect of shadow AI in particular is a recurring talking point. Analysis from BCS, The Chartered Institute for IT, shows that using non-approved tools raises the risk of breaching data privacy rules or exposing organisations to potential security vulnerabilities.
Job losses and AI shortages
This fluctuating perception of AI tools in the workplace comes amid a period of heightened concern among workers.
The study from Pluralsight found 90% of respondents believe it’s somewhat likely their jobs will be replaced by the technology, despite half of employers actually adding AI-related jobs.
Skills shortages are also placing pressure on employers and employees alike. Two-thirds said they've had to abandon an AI project due to a lack of skilled staff, for example.
"In addition to the systemic misrepresentation of AI knowledge, Pluralsight also found that AI is complicating perceptions about how work is getting done," said Chris McClellen, Chief Product and Technology Officer at Pluralsight.
"Fears about AI supplanting jobs is becoming the new norm and employees are quietly worried that using AI in their daily routine looks lazy."
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.