Disinformation security is a major concern for cyber teams – here's what your business can do
Attackers can impersonate employees and fake data with increasing ease


While relatively new to cybersecurity, disinformation is not new to the world. We’ve had misinformation, disinformation and propaganda for centuries, and ‘social engineering’ is an evolution of these with a cybersecurity bent.
People have been creating fake or falsified videos and images for as long as we’ve had cameras and editing software, notes Daniel Ayoub, senior director analyst at Gartner. The challenge is growing, he adds, because technology now makes these manipulations possible at high quality and with low barriers to entry.
“Being able to imitate anyone’s voice with only a few seconds of audio or create a nefarious video of someone with only a few pictures found online – that’s the scary part.”
Enterprise vulnerabilities
Businesses, and enterprises in particular, are vulnerable to disinformation due to their reliance on public trust, reputation and digital infrastructure.
As Lisa Ventura, a member of BCS, The Chartered Institute for IT, and founder of the Cyber Security Unity community, explains, an enterprise’s online presence creates multiple attack surfaces ripe for exploitation, including its social media platforms, websites, and digital marketing channels.
Additionally, organizations often operate in competitive environments where rivals or adversaries may seek to gain an edge by spreading false or misleading information.
“Their complex supply chains and partnerships also increase exposure, as disinformation campaigns may target associated entities, creating a cascading effect of reputational harm,” she says.
Insider threats
Employees can be another weak link, Ventura adds, especially when phishing attacks, fake internal communications or impersonation campaigns exploit their access to sensitive data or systems.
Social media remains a major vector, where employees may unknowingly encounter or share false information about their organization, leadership or industry developments. Messaging apps and collaboration tools, such as Slack and Microsoft Teams, have also emerged as internal disinformation conduits.
“This trend is amplified by remote and hybrid work environments, where digital interactions dominate, making it easier for disinformation to blend seamlessly into regular communications channels,” Ventura says.
Disgruntled employees can also have an impact, as it may not be possible to stop them from spreading disinformation. “Rumor mills will always exist within large organizations, exposing them to risk. These echo chambers of disinformation can amplify false narratives and erode employee confidence,” notes Amanda Finch, CEO at the Chartered Institute of Information Security (CIISec).
The impact of disinformation on businesses
Disinformation can pose a significant threat to companies of all sizes since, left unchecked, it can damage reputations and operational integrity.
One of the biggest challenges is the speed and scale at which false information can spread, as perpetrators create fake accounts and leverage bots to amplify their narrative across digital platforms.
These kinds of campaigns can undermine consumer confidence and trust, causing significant damage to a brand’s reputation. “At worst, cybercriminals can spread misleading and inaccurate information about share prices and financial results, which can have severe consequences for a company’s bottom line,” notes Finch.
“We’ve seen major consumer brands become the victims of boycotts due to mis- or disinformation campaigns online eroding their revenue and impacting share prices,” continues Ayoub. “We’ve also seen targeted disinformation attacks, like deepfakes, lead to tens of millions of dollars in fraud from just a single incident.”
One example involved the CEO of advertising group WPP, whose likeness and voice were cloned in a deepfake campaign intended to deceive customers into making payments and divulging personal details. Although that attempt was unsuccessful, other companies haven’t been so lucky. Take Arup, for example, where one employee was tricked into transferring approximately £20m of company funds to cybercriminals via an AI-generated video call.
How businesses can fight disinformation
We’re still in the early stages of disinformation campaigns, says Ayoub, with attacks originating from both in- and outside a business. But there are several ways organizations can fight back.
In terms of technology, Gartner is seeing three main tools being adopted. The first is media monitoring, or narrative intelligence, which looks at how information is being spread online.
“Monitoring internal systems and tools to increase resilience is something most organizations are aware of; less obvious is the need to monitor external sources outside the organization’s control. We’re seeing new tools emerging which aim to close these gaps, but the market is still in its early stages,” Ayoub says.
“Narrative intelligence builds on sentiment analysis techniques that would normally be used by marketing teams to gauge customer satisfaction, and combines knowledge graphs, bot management and large language models to track what’s being said, by whom, where and how it’s being spread.”
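As a rough illustration of the sentiment-analysis foundation Ayoub describes, the sketch below scores brand mentions against a small word lexicon and flags identical messages repeated by many accounts, a crude proxy for bot amplification. The lexicons, thresholds and company name are all hypothetical; commercial narrative-intelligence platforms are far more sophisticated.

```python
import re
from collections import Counter

# Illustrative lexicons -- real systems use much larger models, not word lists.
NEGATIVE = {"scam", "fraud", "boycott", "fake", "lawsuit"}
POSITIVE = {"great", "trusted", "reliable", "excellent"}

def sentiment_score(text: str) -> int:
    """Naive sentiment: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_amplified_posts(posts: list[str], threshold: int = 3) -> list[str]:
    """Flag identical messages repeated across accounts -- a common bot signature."""
    counts = Counter(posts)
    return [text for text, n in counts.items() if n >= threshold]

mentions = [
    "AcmeCorp is a scam, boycott now",   # hypothetical scraped posts
    "AcmeCorp is a scam, boycott now",
    "AcmeCorp is a scam, boycott now",
    "Great service from AcmeCorp as always",
]
scores = [sentiment_score(m) for m in mentions]
suspicious = flag_amplified_posts(mentions)
```

A sudden cluster of identical negative posts, as in `suspicious` above, is the kind of signal such tools escalate for human review.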
Then there’s trust assessment, which builds on fact checking but goes further, correlating information across multiple public and private sources. “It also works with generative AI to prevent hallucinations, or responses that are untrue,” Ayoub adds.
Lastly, there’s deepfake detection, which can be used to determine whether audio, video or images were created using generative AI to imitate authentic content.
While employees can often be the cause of disinformation, they can also be the solution to this problem. Just as companies prioritize employee training to defend against cyber attacks, says Finch, businesses should also equip their teams to recognize disinformation and respond effectively to minimize harm.
This focus on education should also extend to customers, she points out. “Banks serve as a strong example of how industries can tackle disinformation, offering clear and actionable guidance, such as instructing customers to never share personal information over the phone. If customers fall victim to disinformation, the resulting loss of trust can be as harmful to an organization as direct financial losses.
“Ultimately, governments will need to step in to regulate disinformation, particularly on platforms like social media, which often serve as testing grounds for malicious actors,” she adds.
Time to prepare
Disinformation is poised to become a more pervasive and sophisticated cybersecurity threat over the next five years, says Ventura, driven by technological advancements and the increasing integration of digital ecosystems.
The good news, however, is that by 2030, Ayoub expects much of the low-hanging fruit, the easy gaps, to be addressed, with defenses built into existing tools and platforms, making these kinds of attacks harder to pull off.
Experts predict security-oriented features like digital watermarking, authenticated and secure communications, secret safety passphrases and sentiment analysis will fully penetrate the market, while at the same time governments and regulatory bodies are likely to respond with stricter frameworks.
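Of these measures, a secret safety passphrase is the easiest to adopt today: a team agrees a phrase out of band and challenges for it before approving sensitive requests such as payments. The sketch below, a minimal illustration rather than a vetted protocol, verifies a candidate phrase against a stored salted hash using a constant-time comparison; the salt, iteration count and example phrase are all assumptions for illustration.

```python
import hashlib
import hmac

def hash_passphrase(passphrase: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; 100,000 iterations is an assumed, tunable work factor.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def verify_passphrase(candidate: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison avoids leaking the phrase via timing differences.
    return hmac.compare_digest(hash_passphrase(candidate, salt), stored)

salt = b"per-team-random-salt"  # in practice, generate with os.urandom(16)
stored = hash_passphrase("blue heron at dawn", salt)  # agreed out of band
```

Storing only the salted hash means a compromised server or help desk system does not reveal the passphrase itself.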
The threat from disinformation may be growing, but so are the tools needed to safeguard reputations and operations. As these become more readily available, now is the time for enterprises to take a proactive approach to protecting their organization.
Keri Allan is a freelancer with 20 years of experience writing about technology and has written for publications including the Guardian, the Sunday Times, CIO, E&T and Arabian Computer News. She specialises in areas including the cloud, IoT, AI, machine learning and digital transformation.