Time for vendors to get off the fence with IoT security


At the end of January, the National Cyber Security Centre (NCSC) announced it would establish a £70 million fund to 'design out' cyber threats and, in turn, 'design in' security for IT systems and hardware. It's thought the fund could subsidise research into improving security using AI or integrated security chips.

It's a bold move by the government, and provides a vital incentive for manufacturers to create new ways of ensuring their products are 'secure by design'. The truth is, however, that all over the world, vendors must step up and take more responsibility for the security of their products in the field. Crucially, they must help integrators and resellers ensure devices are properly installed, managed and regularly updated throughout their lifespans.

That said, while government interventions are welcome, the fact that they are deemed to be necessary is a sad reflection on the state of the technology industry in general. We have to get our act together - and fast.

Too many high-profile security breaches are related to zero-day flaws in Internet of Things (IoT) equipment and application software. Last year, hackers made headlines when they breached a database in a Las Vegas casino by gaining entry to the network via an internet-connected fish tank thermometer.

Botnets made up of compromised IoT devices are growing in size and becoming more dangerous. Some of this growth is down to new techniques for attacking devices, but much of it is down to known vulnerabilities that remain unpatched, despite the wealth of information out there and a multitude of well-publicised botnet incidents. Based on the evidence, things will get worse before they get better.

The next step is to redefine cybersecurity processes

The most important thing for technology vendors to do is to embrace the principles of security-by-design. It's not enough to bundle off-the-shelf components with off-the-shelf operating systems: full risk assessments for any new IoT product must be done at the very start of the design process. Developers must mitigate the threats that are identified, and a clear programme of support should be devised to ensure new firmware can be delivered to protect against emerging vulnerabilities.

Right now, there's still too much emphasis on how quickly a product can hit the market, and not enough on the long-term welfare and protection of customers and their assets. As vendors, we must also improve our communication with the rest of the channel, and the way we educate it about weaknesses created during the installation process. We can design securely, but are we doing enough to ensure that equipment is properly configured? Have we empowered the channel with the right tools to test and verify that adding IoT devices to a network hasn't created an unexpected vulnerability somewhere else?

Mitigating human nature

We also have a role to play in end-user education, and helping organisations develop a culture of cybersecurity through staff training and awareness programmes. After all, no matter how secure we make our equipment, human nature will always be a weakness.

That means equipment doesn't just need protecting at the time of installation. What happens, for instance, when the network is expanded further down the line, or when new users are onboarded? Are we providing the right materials to ensure that future expansions are properly configured too, and that the correct levels of threat monitoring are in place?

A recent report by digital security firm Gemalto suggests 58% of UK businesses would be unable to detect an IoT-related security breach. The onus is therefore on vendors to help slash that number. None of this is easy, and the government's efforts are a welcome recognition that vendors can't achieve full security-by-design by themselves. The IoT ecosystem is too big, and too important, for any of us not to rely on partners in one respect or another.

Steve Kenny is industry liaison for architecture and engineering at Axis Communications
