Big tech is clamping down on open source ‘AI slop’ reports
Firms including Microsoft, OpenAI, and Google have pledged funding to bolster open source security and cut down on slop reports
A host of big tech firms have handed over $12.5 million in funding to advance open source security and try to eliminate "AI slop" bug reports.
Firms including OpenAI, Anthropic, AWS, Google, Microsoft, and GitHub have pledged funding for Alpha-Omega and the Open Source Security Foundation (OpenSSF), both security initiatives within the Linux Foundation.
The aim is to develop long-term, sustainable security solutions that support open source communities worldwide.
The move comes as open source maintainers contend with an unprecedented number of security reports, many of which are generated by automated systems.
Mark Ryland, director of the Office of the CISO at AWS, said these AI-generated reports are overwhelming maintainers' ability to review them.
"Many of the reports are of very low quality — a reality that has given rise to the new industry term 'AI slop'," he said. "Many projects have already elected to put guidelines in place for AI submissions, while others have shut down upstream contributions entirely to prevent a flood of AI-generated pull requests."
Closer ties with open source maintainers
The new investment will allow Alpha-Omega and OpenSSF to work directly with maintainers and their communities to make emerging security capabilities accessible, practical, and aligned with existing project workflows.
“Grant funding alone is not going to help solve the problem that AI tools are causing today on open source security teams,” said Greg Kroah-Hartman of the Linux kernel project.
“OpenSSF has the active resources needed to support numerous projects that will help these overworked maintainers with the triage and processing of the increased AI-generated security reports they are currently receiving.”
The GitHub Secure Open Source Fund is adding an additional $5.5 million in Azure credits and funding to provide training and expertise.
GitHub Security Lab, meanwhile, is improving the security advisory experience on GitHub and Private Vulnerability Reporting (PVR) features, with an eye on reducing the burden of low-quality reports and helping maintainers manage increased volume.
Google will provide AI-powered tools like Big Sleep and CodeMender from Google DeepMind – already used to protect the company's own systems. It's also extending research initiatives like Sec-Gemini to open source projects.
“Our commitment remains focused: to sustainably secure the entire lifecycle of open source software,” said Steve Fernandez, general manager of OpenSSF.
“By directly empowering the maintainers, we have an extraordinary opportunity to ensure that those at the front lines of software security have the tools and standards to take preventative measures to stay ahead of issues and build a more resilient ecosystem for everyone.”
AI slop reports are skyrocketing
Concerns about AI slop bug reports have been voiced by a number of organizations, including the Python Software Foundation.
Developers behind cURL, an open source command line interface (CLI) tool that allows developers to transfer data, recently shut down the project's bug bounty scheme in response to a growing number of slop reports.
As ITPro reported at the time, lead maintainer Daniel Stenberg said the current volume of submissions is placing a “high load” on the security team.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
