OpenAI announces five-fold increase in bug bounty reward
New maximum reward reflects commitment to high-impact security, says company


OpenAI has announced a slew of new cybersecurity initiatives, including a fivefold increase in the maximum award for its bug bounty program.
In a blog post confirming the move, the organization set out plans to expand its cybersecurity grant program. So far, the tech giant has funded 28 research projects spanning both offensive and defensive security, including autonomous cybersecurity defenses, secure code generation, and prompt injection.
The program is now soliciting proposals for five new areas of research: software patching, model privacy, detection and response, security integration, and agentic AI security.
It’s also introducing what it terms microgrants in the form of API credits for “high quality proposals”.
In addition to the expanded grant program, the company announced it is also expanding its security bug bounty program, which first launched in April 2023.
The primary change is an increase in the maximum bounty award from $20,000 to $100,000, which OpenAI said “reflects our commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems”.
Additionally, it’s launching “limited-time promotions”, the first of which is live now and ends on 30 April 2025. During these periods, researchers can receive additional bounty bonuses for valid work in a specific bug category. More information can be found on OpenAI’s Bugcrowd page.
OpenAI is still all-in on AGI
OpenAI has pitched the expanded grant program and the increased maximum bug bounty payout as crucial to its “ambitious path towards AGI” (artificial general intelligence).
AGI is commonly understood to mean AI that has a level of intelligence similar to a human and isn’t constrained by one particular specialism.
It’s also a controversial topic that has divided those in the AI community and beyond into three camps: those who believe its development is inevitable and necessary, those who believe its development could mean the end of civilization, and those who believe it’s both impossible and undesirable.
OpenAI CEO Sam Altman is firmly in the first camp, stating in a January 2025 post to his personal blog: “We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history. We wanted to figure out how to build it and make it broadly beneficial.”
More recently, he said “systems that start to point to AGI are coming into view” and laid out how this may play out over the next ten years. He also caveated the statement, however, saying OpenAI “[doesn’t] intend to alter or interpret the definitions and processes that define [its] relationship with Microsoft”.
Altman said the footnote may seem “silly”, but “some journalists will try to get clicks by writing something silly”.
Nevertheless, it’s unsurprising that the specter of Microsoft was raised in the blog post; Microsoft CEO Satya Nadella has been openly critical of the AI industry’s focus on AGI.
In a recent podcast appearance, Nadella described the industry's focus on AGI as "nonsensical benchmark hacking".

Jane McCallion is Managing Editor of ITPro and ChannelPro, specializing in data centers, enterprise IT infrastructure, and cybersecurity. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers while continuing to specialize in enterprise IT infrastructure and business strategy.
Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.