Fake it till you make it: 79% of tech workers pretend to know more about AI than they do – and executives are the worst offenders
Tech industry workers are exaggerating their AI knowledge and skills
Find AI confusing? Don't worry, so do most of your colleagues – almost eight in ten of them – though they likely won't admit it.
That's the key finding from a survey of 1,200 technology workers and executives in the US and UK by training company Pluralsight, which found that 79% of tech workers pretend to know more about AI than they actually do.
Executives may be even more dishonest about their AI skills, the survey found, with around 91% of bosses admitting to faking AI knowledge. On the upside, half of companies are now offering AI training, so perhaps less dishonesty will be necessary in the future, Pluralsight noted.
The research also found that, despite admitting to faking AI knowledge, nine in ten tech workers and execs believe they have the necessary skills to use AI in their day-to-day roles, though most thought their colleagues lacked the same skills.
"One potential explanation for this gap is the Dunning-Kruger effect — a well-researched phenomenon where a person’s lack of knowledge and skill in a specific area causes them to greatly overestimate their competence," the report noted.
"If this is the case, then it’s likely that a large percentage of the workforce believe they have greater AI skills than they do because they lack enough knowledge about AI to 'know what they don’t know'."
Shadow AI at work
This phase of the AI revolution appears to be a bit confusing for everyone. Not only are those in the tech industry lying about their own capabilities, but they're also being told to use AI and then called lazy for doing so.
The survey showed 95% of tech executives believe AI skills are crucial for job security, but 61% of their employees and 73% of execs reported a perception that using generative AI to assist their own work was seen as "lazy" by their company.
That leads to the use of "shadow AI", the report noted, which can cause security and compliance issues.
"With this stigma, workers often use… AI tools for work projects without giving credit or acknowledgement for its use," the report noted.
"Two in three people have noticed coworkers use AI without admitting it, while one in three report hidden AI use being widespread in their workplace."
From a worker standpoint, the use of shadow AI matters because those not using such tools are being compared to those who are, making it difficult to keep pace without access to the same tools.
"It also gives a mistaken impression that nobody is using AI, so there is no urgency to utilize it, when in reality colleagues may be getting AI assistance regularly," the report added.
Shadow AI has been creeping into a range of tech industry professions over the last two years. Research in January this year, for example, found software developers are increasingly using non-approved tools, prompting major concerns over potential security lapses.
The security aspect of shadow AI in particular is a recurring talking point. Analysis from BCS, The Chartered Institute for IT, shows that using non-approved tools raises the risk of breaching data privacy rules or exposing organisations to potential security vulnerabilities.
Job losses and AI shortages
This fluctuating perception of AI tools in the workplace comes amid a period of heightened concern among workers.
The study from Pluralsight found 90% of respondents believe it’s somewhat likely their jobs will be replaced by the technology, despite half of employers actually adding AI-related jobs.
Skills shortages are also placing pressure on employers and employees alike. Two-thirds said they've had to abandon an AI project due to a lack of skilled staff, for example.
"In addition to the systemic misrepresentation of AI knowledge, Pluralsight also found that AI is complicating perceptions about how work is getting done," said Chris McClellen, Chief Product and Technology Officer at Pluralsight.
"Fears about AI supplanting jobs is becoming the new norm and employees are quietly worried that using AI in their daily routine looks lazy."
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.


