OpenAI announces five-fold increase in bug bounty reward
New maximum reward reflects commitment to high-impact security, says company


OpenAI has announced a slew of new cybersecurity initiatives, including a five-fold increase in the maximum award for its bug bounty program.
In a blog post confirming the move, the organization set out plans to expand its cybersecurity grant program. So far, the tech giant has given funding to 28 research projects looking at both offensive and defensive security measures, including autonomous cybersecurity defenses, secure code generation, and prompt injection.
The program is now soliciting proposals for five new areas of research: software patching, model privacy, detection and response, security integration, and agentic AI security.
It’s also introducing what it terms microgrants in the form of API credits for “high quality proposals”.
Alongside the expanded grant program, the company announced changes to its security bug bounty program, which first launched in April 2023.
The primary change is an increase of the maximum bounty award from $20,000 to $100,000, which OpenAI said “reflects our commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems”.
Additionally, it’s launching “limited-time promotions”, the first of which is live now and ends on 30 April 2025. During these periods, researchers can receive additional bounty bonuses for valid work in a specific bug category. More information can be found on OpenAI’s Bugcrowd page.
OpenAI is still all-in on AGI
OpenAI has pitched the extended grant program and the increased maximum bug bounty payout as crucial to its “ambitious path towards AGI (artificial general intelligence)”.
AGI is commonly understood to mean AI that has a level of intelligence similar to a human and isn’t constrained by one particular specialism.
It’s also a controversial topic that has divided those in the AI community and beyond into three camps: those who believe its development is inevitable and necessary, those who believe its development could mean the end of civilization, and those who believe it’s both impossible and undesirable.
OpenAI CEO Sam Altman is firmly in the first camp, stating in a January 2025 post to his personal blog: “We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history. We wanted to figure out how to build it and make it broadly beneficial.”
More recently, he said “systems that start to point to AGI are coming into view” and laid out how this may play out over the next ten years. He also caveated the statement, however, saying OpenAI “[doesn’t] intend to alter or interpret the definitions and processes that define [its] relationship with Microsoft”.
Altman said the footnote may seem “silly”, but “some journalists will try to get clicks by writing something silly”.
Nevertheless, it’s unsurprising the specter of Microsoft was raised in the context of this blog; Microsoft CEO Satya Nadella has been openly critical of the AI industry’s focus on AGI.
In a recent podcast appearance, Nadella described the industry's focus on AGI as "nonsensical benchmark hacking".
Jane McCallion is Managing Editor of ITPro and ChannelPro, specializing in data centers, enterprise IT infrastructure, and cybersecurity. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers, while continuing to specialize in enterprise IT infrastructure, and business strategy.
Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.