Big tech is clamping down on open source ‘AI slop’ reports

Firms including Microsoft, OpenAI, and Google have pledged funding to bolster open source security and cut down on slop reports


A host of big tech firms have handed over $12.5 million in funding to advance open source security and try to eliminate "AI slop" bug reports.

Firms including OpenAI, Anthropic, AWS, Google, Microsoft, and GitHub have pledged funding for Alpha-Omega and the Open Source Security Foundation (OpenSSF), both security initiatives within the Linux Foundation.

The aim is to develop long-term, sustainable security solutions that support open source communities worldwide.

The move comes as open source maintainers contend with an unprecedented number of security reports, many of which are generated by automated systems.

Mark Ryland, director of the Office of the CISO at AWS, said these AI-generated reports are overwhelming maintainers' ability to review them.

"Many of the reports are of very low quality — a reality given rise to the new industry term 'AI slop'," he said. "Many projects have already elected to put guidelines in place for AI submissions, while others have shut down upstream contributions entirely to prevent a flood of AI-generated pull requests."

Closer ties with open source maintainers

The new investment will allow Alpha-Omega and OpenSSF to work directly with maintainers and their communities to make emerging security capabilities accessible, practical, and aligned with existing project workflows.

“Grant funding alone is not going to help solve the problem that AI tools are causing today on open source security teams,” said Greg Kroah-Hartman of the Linux kernel project.

“OpenSSF has the active resources needed to support numerous projects that will help these overworked maintainers with the triage and processing of the increased AI-generated security reports they are currently receiving.”

The GitHub Secure Open Source Fund is adding an additional $5.5 million in Azure credits and funding to provide training and expertise.

GitHub Security Lab, meanwhile, is improving the security advisory experience on GitHub and its Private Vulnerability Reporting (PVR) features, with an eye on reducing the burden of low-quality reports and helping maintainers manage the increased volume.

Google will provide AI-powered tools like Big Sleep and CodeMender from Google DeepMind – already used to protect the company's own systems. It's also extending research initiatives like Sec-Gemini to open source projects.

“Our commitment remains focused: to sustainably secure the entire lifecycle of open source software,” said Steve Fernandez, general manager of OpenSSF.

“By directly empowering the maintainers, we have an extraordinary opportunity to ensure that those at the front lines of software security have the tools and standards to take preventative measures to stay ahead of issues and build a more resilient ecosystem for everyone.”

AI slop reports are skyrocketing

Concerns about AI slop bug reports have been voiced by a number of organizations, including the Python Software Foundation.

Developers behind cURL, an open source command line tool for transferring data, recently shut down the project's bug bounty scheme in response to a growing number of slop reports.

As ITPro reported at the time, lead maintainer Daniel Stenberg said the volume of submissions was placing a "high load" on the security team.


Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.