A torrent of AI slop submissions forced an open source project to scrap its bug bounty program – maintainer claims they’re removing the “incentive for people to submit crap”

Curl isn’t the only open source project inundated with AI slop submissions


A bug bounty program run by a popular open source data transfer tool has been shut down amid an onslaught of AI-generated ‘slop’ contributions.

Daniel Stenberg, lead maintainer of Curl, a command line interface (CLI) tool which allows developers to transfer data, confirmed the decision to shut down the bug bounty scheme in a GitHub commit last week.

In a subsequent email outlining the move, Stenberg revealed seven bug bounty submissions were recorded within a sixteen-hour period, with 20 logged since the beginning of the year.

Although some of these turned out to be genuine bugs, not a single one detailed a concrete vulnerability.

“Some of them were true and proper bugs, and taking care of this lot took a good while,” he said. “Eventually we concluded that none of them identified a vulnerability and we now count twenty submissions done already in 2026.”

Stenberg added that the current volume of submissions is placing a “high load” on the security team, and the decision to shut down the program aims to “reduce the noise” and number of AI-generated reports.

“The main goal with shutting down the bounty is to remove the incentive for people to submit crap and non-well researched reports to us,” he wrote.

“We believe, hope really, that we still will get actual security vulnerabilities reported to us even if we do not pay for them. The future will tell.”

Curl maintainer names and shames bug hunter

Stenberg revealed he had a “lengthy discussion” with an individual who submitted one of the AI-generated vulnerability reports after publicly ridiculing them on social media site Mastodon.

“It was useful for me to make me remember that oftentimes these people are just ordinary mislead humans and they might actually learn from this and perhaps even change,” he said.

The individual in question, however, insists their professional life has been “ruined”.

According to Stenberg, naming and shaming appears to be one of the better options for cutting down on this growing issue.

“This is a balance of course, but I also continue to believe that exposing, discussing, and ridiculing the ones who waste our time is one of the better ways to get the message through: you should NEVER report a bug or vulnerability unless you understand it - and can reproduce it.”

AI slop reports are a major headache

This isn’t Stenberg’s first run-in with the issue. In January 2024, he raised concerns over the growing volume of poor-quality reports, warning that because they appeared legitimate, maintainers were wasting time dissecting slop.

"When reports are made to look better and to appear to have a point, it takes a longer time for us to research and eventually discard it. Every security report has to have a human spend time to look at it and assess what it means," he said.

Curl isn’t alone in contending with AI-generated ‘slop’ security reports, either. As ITPro reported in December 2024, another open source maintainer lamented the strain placed on open source developers and maintainers.

Seth Larson, who triages security reports for a handful of open source projects, revealed they were seeing an “uptick in extremely low-quality, spammy, and LLM-hallucinated security reports” submitted to open source projects.

This torrent of AI-generated slop was having a huge impact on open source maintainers, wasting time and effort, and leading to higher levels of burnout. Like Stenberg, Larson noted that these seemingly legitimate reports were becoming a major headache.

“The issue is in the age of LLMs, these reports appear at first glance to be potentially legitimate and thus require time to refute,” he wrote in a blog post.


Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.