Inside Bugcrowd's plans to “demystify” AI security with new vulnerability reporting scheme

Crowdsourced security platform Bugcrowd has launched an update to its Vulnerability Rating Taxonomy (VRT) to include vulnerabilities in large language models (LLMs).

The update is part of an ongoing effort to define and prioritize vulnerabilities in a standardized way to boost understanding and participation from hackers and consumers alike, chief strategy officer Casey Ellis told ITPro.

Long-term, Ellis said the aim is to “demystify” the technology and create a more transparent vulnerability reporting environment. This, he added, will help alleviate lingering security and privacy concerns associated with the use of generative AI models.

Bugcrowd’s VRT is an open-source taxonomy created to facilitate the sharing of information on known software vulnerabilities. It is continually updated to reflect the current threat landscape, and AI has been one of the most significant technologies to shake up the cyber security environment in recent years.

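To illustrate what a taxonomy of this kind looks like in practice, the sketch below models a hypothetical VRT-style category tree for LLM issues in Python, with baseline priority ratings (P1 being the most severe, P5 informational) attached to the most specific entries. The field names, category labels, and ratings shown here are assumptions for illustration only, not Bugcrowd's actual schema.

```python
# Hypothetical sketch of a VRT-style taxonomy entry for LLM vulnerabilities.
# Field names, category IDs, and priority values are illustrative assumptions,
# not Bugcrowd's actual schema.

llm_category = {
    "id": "ai_application_security",  # assumed top-level category
    "name": "AI Application Security",
    "children": [
        {
            "id": "llm_security",
            "name": "Large Language Model (LLM) Security",
            "children": [
                # Leaf entries carry an assumed baseline severity (P1 = most
                # severe, P5 = informational) that triagers adjust per report.
                {"id": "prompt_injection", "name": "Prompt Injection", "priority": 1},
                {"id": "training_data_poisoning", "name": "Training Data Poisoning", "priority": 3},
            ],
        }
    ],
}


def leaf_entries(node, path=()):
    """Walk the taxonomy and yield (path, priority) for each rated leaf entry."""
    current = path + (node["name"],)
    if "children" in node:
        for child in node["children"]:
            yield from leaf_entries(child, current)
    else:
        yield " > ".join(current), node.get("priority")


if __name__ == "__main__":
    for entry_path, priority in leaf_entries(llm_category):
        print(f"P{priority}: {entry_path}")
```

Running the sketch prints each leaf entry alongside its assumed baseline priority, the kind of shared shorthand a taxonomy gives researchers and triagers when discussing scope and severity.
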
Ellis believes the VRT will drive a better understanding of LLMs. When asked whether he thought it would also improve trust, he agreed.

“100% – Firstly, by demonstrably and transparently improving the security in those systems and having a true positive impact on risk. Secondly, by socializing and demystifying some inherently complex, difficult to understand, and in many ways ‘magical’ technology for the average Internet user.”

“Bugcrowd has seen this phenomenon in many other verticals ranging from connected cars to medical devices to voting equipment. The average internet-using layperson might never fully understand the technology that powers these systems, but they can easily grasp the concept of ‘Neighborhood Watch for the Internet’, which gives them a greater sense of confidence and trust.”

Bugcrowd wants a collaborative approach to vulnerability reporting

Ellis emphasized the benefits of a crowdsourced approach to finding and highlighting vulnerabilities, which he said plays a critical role in allowing the broader technology community to disclose flaws and create a safer operating environment for organizations globally.

“Many eyes, and the right incentives and frameworks, make all bugs shallow, and when you consider the crowd of adversaries and threat actors who are actively looking to exploit flaws and weaknesses in computer systems, engaging the help of an army of allies simply makes sense. 

“On top of this, AI itself operates in ways that could be considered autonomous (even though, strictly speaking, they aren't), so the broader the pool of defenders acting in the interest of public safety and security, the better," Ellis added.

The rapid adoption of generative AI tools in the last year has unlocked marked benefits for organizations and individual workers. However, concerns over security and data privacy have been a recurring talking point throughout this period. 

Ellis said many companies are waking up to the fact that the value they can unlock from generative AI is often matched by additional security and safety considerations. 

“We're at a point where it's broadly agreed that, alongside its incredible utility, the power of AI introduces serious considerations around security and safety. The problem is, the potential scope is so vast that it's difficult to know where to start attacking the problem.”

“The VRT is designed to simplify conversations around scope and impact, help the process of getting people on the same page, and to make security conversations easier and more accessible. This last part, accessibility, definitely benefits general awareness. AI is here to stay and I'd like to see everyone in security have at least some taxonomical understanding of AI security. This release is a step towards that.”

Solomon Klappholz
Staff Writer

Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing, which has led him to develop a particular interest in IT regulation, industrial infrastructure applications, and machine learning.