OpenAI unveils its Operator agent to help users automate tasks – here's what you need to know
OpenAI has made its long-awaited foray into the AI agents space


OpenAI has unveiled its first AI agent — but "Operator" remains a research preview, rather than a final product.
AI agents are believed by many to be the "killer app" for generative AI, allowing the much-hyped technology to take on practical workloads by automating processes and taking action, rather than only providing information.
The end of last year saw a slew of AI agent announcements, with the arrival of an experimental agent in Anthropic's Claude, Google including a limited release of agents in Gemini 2.0, and a public preview for Microsoft's Copilot agents.
But industry leader OpenAI made clear its agent wouldn't arrive until this year, with CEO Sam Altman saying that 2025 was the year that "agents will work".
Only a few weeks into the year, OpenAI has unveiled Operator, an agent that uses its own web browser to perform tasks, such as typing, clicking and scrolling, the company explained in a blog post.
"The ability to use the same interfaces and tools that humans interact with on a daily basis broadens the utility of AI, helping people save time on everyday tasks while opening up new engagement opportunities for businesses," the company said.
How OpenAI’s Operator agent works
The agent is powered by its own model, which uses GPT-4o's vision and text reading skills. Operator takes a screenshot, analyzing the image to decide where action can be taken — such as a form or a button.
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
Users type in the task they'd like done, sending it off to take action, though workflows can also be personalized with custom instructions or preferences.
"If it encounters challenges or makes mistakes, Operator can leverage its reasoning capabilities to self-correct," the company said. "When it gets stuck and needs assistance, it simply hands control back to the user, ensuring a smooth and collaborative experience.
Indeed, OpenAI made clear the tool was in the early stages — that echoes warnings from rival Anthropic, which said its Claude agent was experimental, as it was "at times cumbersome and error-prone".
Google and Microsoft both have kept their agents in preview modes as well.
"Operator is currently in an early research preview, and while it’s already capable of handling a wide range of tasks, it’s still learning, evolving and may make mistakes," the OpenAI blog post noted.
"For instance, it currently encounters challenges with complex interfaces like creating slideshows or managing calendars. Early user feedback will play a vital role in enhancing its accuracy, reliability, and safety, helping us make Operator better for everyone."
OpenAI touts safety features
OpenAI detailed a series of safeguards built into the system, highlighting efforts to keep users in control and prevent abuse.
The company said Operator was trained to always ask for input at critical points, such as typing in sensitive information like passwords or payment details, as well as asking for confirmation before taking significant actions, like placing an order or hitting send on an email.
Operator is trained to decline sensitive tasks such as banking transactions or making decisions on job applications, the company says, and though it can be used with email or banking sites, it will ask for closer supervision to help avoid mistakes.
On the data privacy front, OpenAI said the agent has an easy training opt out, so user data and activity won't be used for training models, and users can easily delete browsing data, logins and conversations.
RELATED WHITEPAPER
Recognizing that hackers will start targeting AI agents, OpenAI has included defensive measures into Operator's behaviour and browser, letting it detect and ignore prompt injections and pause a task over suspicious behaviour, which will be updated via automated and human moderation.
"We know bad actors may try to misuse this technology," the company said. "That’s why we’ve designed Operator to refuse harmful requests and block disallowed content."
But, the post also warned that it wouldn't be possible to catch everything. "While Operator is designed with these safeguards, no system is flawless and this is still a research preview; we are committed to continuous improvement through real-world feedback and rigorous testing," the post said.
What's next
So far, Operator is only available for Pro level subscribers in the US. OpenAI said it planned to expand availability for the agent by offering it to Plus, Team and Enterprise subscribers and adding it directly into ChatGPT — but not until the company was confident in its "safety and usability at scale."
That said, OpenAI was already working with corporate partners to build agents using Operator, including DoorDash, Instacard, Uber and more.
"By releasing Operator to a limited audience initially, we aim to learn quickly and refine its capabilities based on real-world feedback, ensuring we balance innovation with trust and safety," the company explained.
Beyond working on extending availability and addressing user feedback, OpenAI said it was working to improve Operator's ability to handle longer and more complex workflows, and would make it available via the API so developers could build their own agents.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole the author of a book about the history of technology, The Long History of the Future.
-
Box reveals new AI capabilities at BoxWorks 2025
News Extract and Automate will help businesses make better use of their data, the cloud company claims
-
Big tech CEOs are fueling the fire of AI confusion
Opinion Mixed messaging on the effectiveness of AI only raises fears that the technology will steal human jobs
-
Businesses are being taken for fools with AI agents
Opinion AI agents are still not very good at their 'jobs', or at least pretty terrible at producing returns on investment – yet businesses are still buying into the hype.
-
Salesforce CEO Marc Benioff says the company has cut 4,000 customer support staff for AI agents so far
News The jury may still be out on whether generative AI is going to cause widespread job losses, but the impact of the technology is already being felt at Salesforce.
-
Microsoft says these 10 jobs are at highest risk of being upended by AI – but experts say there's nothing to worry about yet
News Microsoft thinks AI is going to destroy jobs across a range of industries – while experts aren't fully convinced, maybe it's time to start preparing.
-
Workers view agents as ‘important teammates’ – but the prospect of an AI 'boss' is a step too far
News Workers are comfortable working alongside AI agents, according to research from Workday, but the prospect of having an AI 'boss' is a step too far.
-
OpenAI thought it hit a home run with GPT-5 – users weren't so keen
News It’s been a tough week for OpenAI after facing criticism from users and researchers
-
Three things we expect to see at OpenAI’s GPT-5 reveal event
Analysis Improved code generation and streamlined model offerings are core concerns for OpenAI
-
Everything you need to know about OpenAI's new open weight AI models, including price, performance, and where you can access them
News The two open weight models from OpenAI, gpt-oss-120b and gpt-oss-20b, are available under the Apache 2.0 license.
-
‘LaMDA was ChatGPT before ChatGPT’: Microsoft’s AI CEO Mustafa Suleyman claims Google nearly pipped OpenAI to launch its own chatbot – and it could’ve completely changed the course of the generative AI ‘boom’
News In a recent podcast appearance, Mustafa Suleyman revealed Google was nearing the launch of its own ChatGPT equivalent in the months before OpenAI stole the show.