Cyber researchers have already identified several big security vulnerabilities on OpenAI’s Atlas browser
Security researchers have uncovered a Cross-Site Request Forgery (CSRF) attack and a prompt injection technique
With OpenAI’s Atlas browser just over a week old, cyber experts have already identified several vulnerabilities and potential security risks for users.
Researchers have discovered a vulnerability in the AI browser that allows attackers to inject malicious instructions directly into ChatGPT's memory and execute remote code.
According to researchers at LayerX, the flaw can affect ChatGPT users on any browser, but is particularly dangerous for users of OpenAI’s new agentic browser, ChatGPT Atlas.
"LayerX has found that Atlas currently does not include any meaningful anti-phishing protections, meaning that users of this browser are up to 90% more vulnerable to phishing attacks than users of traditional browsers like Chrome or Edge," researchers said.
Users are also logged in to ChatGPT by default, while LayerX also said testing indicates the Atlas browser is up to 90% more exposed than Chrome and Edge to phishing attacks.
In this exploit, attackers can use a Cross-Site Request Forgery (CSRF) request to 'piggyback' on the victim’s ChatGPT access credentials, and inject malicious instructions into ChatGPT’s memory.
When the user then attempts to use ChatGPT for legitimate purposes, the ‘tainted memories’ will be invoked. They can execute remote code that allows the attacker to gain control of the user account, their browser, code they are writing, or systems they have access to.
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
Notably, researchers warned the exploit can persist across devices and sessions, enabling remote code execution and potential takeover of a user account, browser, or connected systems without them realizing anything is wrong.
More security issues for Atlas
The findings from LayerX mark the latest in a string of warnings over the potential security risks associated with the new browser.
Researchers at NeuralTrust, for example, demonstrated a prompt injection attack that's also affecting Atlas, whereby its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.
In this instance, an attacker crafts a string that appears to be a URL but is malformed, and won't be treated as a navigable URL by the browser. The string embeds explicit natural language instructions to the agent.
When the user pastes or clicks this string so it lands in the Atlas omnibox, the input fails URL validation, and Atlas treats the entire content as a prompt. The embedded instructions are now interpreted as trusted user intent with fewer safety checks.
The attackers can then execute the injected instructions with elevated trust.
Jamie Akhtar, CEO and co-founder at CyberSmart, said the recent findings are a prime example of the “security pitfalls of LLMs and AI browsers”.
“Although these technologies have ushered in a future of possibilities for cybersecurity, they’ve also been partly responsible for the democratization of cyber crime," he said.
"Threats like prompt injections aren’t particularly difficult for any cyber criminal with rudimentary knowledge to use (once they’ve been created), despite their sophistication,” Akhtar added.
“What makes them so dangerous is the ability to manipulate the AI's underlying decision-making processes and effectively turn the agent against the user."
Make sure to follow ITPro on Google News to keep tabs on all our latest news, analysis, and reviews.
MORE FROM ITPRO
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
-
HPE plots a balancing act with Juniper partner futureNews Does the company embrace specialists or want a full portfolio push? The answer, it seems, is both
-
Google DeepMind partners with UK government to boost AI researchNews The deal includes the development of a new AI research lab, as well as access to tools to improve government efficiency
-
Trend Micro issues warning over rise of 'vibe crime' as cyber criminals turn to agentic AI to automate attacksNews Trend Micro is warning of a boom in 'vibe crime' - the use of agentic AI to support fully-automated cyber criminal operations and accelerate attacks.
-
Cyber budget cuts are slowing down, but that doesn't mean there's light on the horizon for security teamsNews A new ISC2 survey indicates that both layoffs and budget cuts are on the decline
-
NCSC issues urgent warning over growing AI prompt injection risks – here’s what you need to knowNews Many organizations see prompt injection as just another version of SQL injection - but this is a mistake
-
Chinese hackers are using ‘stealthy and resilient’ Brickstorm malware to target VMware servers and hide in networks for months at a timeNews Organizations, particularly in the critical infrastructure, government services, and facilities and IT sectors, need to be wary of Brickstorm
-
AWS CISO Amy Herzog thinks AI agents will be a ‘boon’ for cyber professionals — and teams at Amazon are already seeing huge gainsNews AWS CISO Amy Herzog thinks AI agents will be a ‘boon’ for cyber professionals, and the company has already unlocked significant benefits from the technology internally.
-
OpenAI hailed for ‘swift move’ in terminating Mixpanel ties after data breach hits developersNews The Mixpanel breach prompted OpenAI to launch a review into its broader supplier ecosystem
-
The Scattered Lapsus$ Hunters group is targeting Zendesk customers – here’s what you need to knowNews The group appears to be infecting support and help-desk personnel with remote access trojans and other forms of malware
-
Impact of Asahi cyber attack laid bare as company confirms 1.5 million customers exposedNews No ransom has been paid, said president and group CEO Atsushi Katsuki, and the company is restoring its systems
