AI could revolutionize the IT department but companies need to consider data risks, says expert

AI implementation can lead to direct savings in the IT department right now, but CIOs need to carefully weigh the value their company can derive from it against the risks it brings.

Tools that harness the power of generative AI models have been of particular interest this year, carrying the potential for unmatched insight into vast data sets and for helping seasoned workers complete complex tasks such as code evaluation.

Even as AI deployment hits fever pitch, businesses are being asked to consider carefully whether fully relying on a public cloud AI such as ChatGPT is appropriate.

“This is a really interesting issue that we’re going to face over how much you share with a public model,” Gavin Millard, deputy CTO of vulnerability management (VM) at Tenable, told ITPro.

“It’s the risk of doing so, which is the disclosure of internal data, versus the advantage of leveraging every customer of the service also sharing that data.”

Millard drew a comparison to cloud adoption, noting that businesses will need to carefully assess what value the tech brings to their company and where it is best suited.

Many companies rely on cloud providers such as Azure, AWS, or Google Cloud to access AI models for text and image generation. But there have also been calls for firms to focus more on training smaller models on their own data for more personalized results.

For example, an IT support team could use AI to address common problems and free up human workers to help colleagues with more complex issues.

To achieve this, the team may choose to train a chatbot on the most common issues with an internal product.

“That would be a huge benefit. If an organization has a hundred or a thousand tech support employees receiving 5,000 calls per day, how much of that can be automated?”
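
As a rough sketch of how such a bot might start life - assuming Python, with every issue, answer, and threshold below invented for illustration - the routing logic can be prototyped with simple keyword matching before any model is involved:

```python
# Minimal sketch of an internal support bot: match an incoming ticket
# against known issues and return the canned fix, escalating anything
# unfamiliar to a human. All issues and answers are invented examples.

KNOWN_ISSUES = {
    "vpn client fails to connect after password reset":
        "Re-enrol the device in the VPN portal and clear cached credentials.",
    "outlook keeps prompting for credentials":
        "Remove stale entries from the credential manager and restart Outlook.",
    "printer queue stuck on pending":
        "Restart the print spooler service and resubmit the job.",
}

def score(query: str, issue: str) -> float:
    """Fraction of the issue's keywords that appear in the query."""
    query_words = set(query.lower().split())
    issue_words = set(issue.split())
    return len(query_words & issue_words) / len(issue_words)

def answer(ticket: str, threshold: float = 0.5) -> str:
    """Return the best canned answer, or route the ticket to a person."""
    best_issue = max(KNOWN_ISSUES, key=lambda issue: score(ticket, issue))
    if score(ticket, best_issue) >= threshold:
        return KNOWN_ISSUES[best_issue]
    return "No confident match - escalating to a human agent."

print(answer("My VPN client fails to connect since my password reset"))
```

A production version would swap the keyword matcher for a model trained on the real ticket history, but the shape stays the same: answer the routine cases automatically and escalate the rest.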

Businesses could derive the most value from what Millard dubbed a “dynamic data set with a local flavor”, in which sensitive information is kept localized while other data points such as indicators of compromise are shared amongst the wider developer community.

Shared AI models tasked with threat detection could become far more effective through this method than localized models run on a company-by-company basis.
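
As a hedged illustration of that split - the field names below are invented for the sketch, not Tenable's schema - the local/shared boundary can be drawn per record field:

```python
from dataclasses import dataclass

# Illustrative security event; field names are invented for this sketch.
@dataclass
class SecurityEvent:
    hostname: str    # identifies the business: stays local
    username: str    # identifies a person: stays local
    file_hash: str   # indicator of compromise: safe to share
    c2_domain: str   # indicator of compromise: safe to share

SHAREABLE_FIELDS = ("file_hash", "c2_domain")

def to_shared_record(event: SecurityEvent) -> dict:
    """Keep sensitive fields local; export only the IoCs to the community model."""
    return {field: getattr(event, field) for field in SHAREABLE_FIELDS}

event = SecurityEvent("finance-laptop-07", "j.smith",
                      "sha256:9f2b41c0d8aa", "update-check.badcdn.example")
print(to_shared_record(event))
# -> {'file_hash': 'sha256:9f2b41c0d8aa', 'c2_domain': 'update-check.badcdn.example'}
```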

One way that companies are already securely passing certain data to LLMs is via API access, such as the ChatGPT API or the Bard API.

“The data you share with those community models could be very restricted, and the import APIs being leveraged would hopefully have that embedded into them anyway - just strip out all the stuff they don’t want.”
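
That stripping step can be pictured as a small pre-processing pass. In the sketch below, the redaction patterns and the call_public_llm stub are placeholders for illustration, not any vendor's actual API:

```python
import re

# Illustrative redaction patterns; a real deployment would lean on a
# maintained PII/secret-detection library rather than three regexes.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP_ADDRESS>"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "<API_KEY>"),
]

def redact(text: str) -> str:
    """Strip obvious sensitive tokens before a prompt leaves the network."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def call_public_llm(prompt: str) -> str:
    """Stand-in for a request to a hosted model such as the ChatGPT API."""
    return f"[model response to: {prompt!r}]"

ticket = "User jane.doe@example.com on 10.0.4.17 says api_key=sk-123 leaked"
print(call_public_llm(redact(ticket)))
# The model only ever sees: "User <EMAIL> on <IP_ADDRESS> says <API_KEY> leaked"
```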

Outside of security, Millard noted that AI use cases are becoming clearer.

For example, an internal AI tool could be used to summarize insights into sales targets or customer profiles, and Millard urged businesses to draw up plans for AI adoption now to avoid having to play catch-up down the line.

“We've seen lots of organizations saying ‘you can't use AI’, that's kind of like ‘you can't bring your own laptop into an organization’,” he said. 

“This is going to happen - people are going to bring in resources, people are going to bring their own AI, so it’s important that rules are put in place, policies are defined, and that they’re verified.”

Another area of IT in which AI is being sold as integral going forward is code generation.

Developers who use GitHub Copilot, for example, now let the tool generate 61% of their Java code and 46% of their code on average across all languages.

Millard acknowledged the role that generative AI can play in easing developers’ workloads, while firmly stating that it could not let total amateurs generate code like experts.

“Put simply, a good calculator doesn’t make you a great mathematician,” Millard said.

“In order to leverage AI to create code, you’re going to have to understand what you need to accomplish and the answer that you’re getting back.”

He said that in conversations with developers using AI to generate code, they have noted that AI code hygiene is generally good and sometimes exceeds their own skill level.

However, the same developers stressed that they continue to validate all code generated by the model they are using. One developer specifically noted that they evaluate both the output and their input, to ensure that unwanted elements are not introduced into the code.
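
As one hedged sketch of what that validation might look like - the blocklist here is an assumption for the example, not an exhaustive security check - generated code can at least be parsed and scanned for calls a reviewer should inspect:

```python
import ast

# Call names a reviewer might want to eyeball in generated code -
# an illustrative shortlist, not a complete set of risky constructs.
SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__", "system"}

def flag_suspicious(source: str) -> list[str]:
    """Parse generated Python and report calls worth a human look."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = getattr(node.func, "id", getattr(node.func, "attr", None))
            if name in SUSPICIOUS_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings

generated = """
import os

def cleanup(path):
    os.system("rm -rf " + path)  # shells out with unsanitized input
"""
for finding in flag_suspicious(generated):
    print(finding)  # -> line 5: call to system()
```

Dedicated tools such as Bandit perform far deeper analysis, but even a crude pass like this catches the kind of anomalous element those developers described.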

These anomalous elements could lead to more vulnerabilities in a company’s code base, an unintended risk of AI tools that executives should also consider when drawing up AI adoption plans.

AI tools and chatbots in particular carry the potential to empower attackers. Darktrace measured a 135% increase in novel social engineering attacks driven by AI at the start of 2023, while CyberArk researchers created polymorphic malware using ChatGPT.

Millard acknowledged these potential risks and gave the example of a hacker using an AI tool to analyze and discover web app vulnerabilities. 

But he rejected the idea that this should be the main takeaway from a technology that can equally empower defensive teams.

“Let’s not forget that we’re still living in a time where behind every breach is a known flaw. It’s not as if attackers are leveraging zero days against every organization, automated up to the hilt and bringing systems down with things we don’t understand.

“The problem we have today, for many companies, is they’re not finding these critical issues before attackers take advantage of them. So the biggest advantage of AI and ML is actually helping companies defend themselves better.

“Like, ransomware should not exist today because it's just the monetization of poor cyber hygiene.”

Rory Bathgate
Features and Multimedia Editor
