Everything you need to know about ChatGPT’s new Advanced Account Security features
OpenAI has introduced new tools to tighten up access to ChatGPT, Codex, and its other AI tools
OpenAI has announced new security-focused tools to help users lock down accounts and prevent data leakage.
New Advanced Account Security features will add extra security to ChatGPT accounts for those at "increased risk of digital attacks" as well as anyone else in need of stronger levels of protection.
"Over time, a ChatGPT account can hold sensitive personal and professional context, and sit at the center of connected tools and workflows," the company said in a blog post.
"For some people, like journalists, elected officials, political dissidents, researchers, and those who are especially security-conscious, the stakes are even higher."
Here’s what users can expect with the new security tools.
What is Advanced Account Security?
Advanced Account Security is essentially a multi-factor authentication (MFA) system, but there are some extra features too.
To start, Advanced Account Security requires users to set up two sign-in methods, such as a passkey or a physical security key, rather than relying on a password alone. The aim here is to reduce the likelihood of phishing attacks, as hackers can't access accounts with just a stolen password.
Alongside that, users will also receive login alerts when the account is accessed on a new device, and it will be possible to view where your account is signed in to help manage access.
OpenAI will also require more frequent sign-ins to reduce exposure if an account is compromised.
Beyond multi-factor authentication, the Advanced Account Security system features stricter account recovery, disabling email or SMS recovery in favor of backup passkeys, security keys and recovery keys.
This, the company said, will make it harder for attackers to abuse recovery tools to gain access – but also more difficult for users locked out of their own accounts.
"Because account recovery is restricted to these more secure methods, OpenAI Support will not be able to assist with account recovery for users enrolled in Advanced Account Security," the company said in the blog post.
On its website, OpenAI added: "Advanced Account Security prioritizes security over convenience."
In addition, OpenAI won't use conversations with ChatGPT to train its models when Advanced Account Security is enabled.
"People working with especially sensitive information may opt not to have those conversations used for model training," the company added.
"With Advanced Account Security enabled, that preference is automatic: conversations from those accounts will not be used to train our models."
How to set up ChatGPT's Advanced Account Security
Setting up the new Advanced Account Security features is fairly straightforward for users. Here are some quick steps:
- Head to your ChatGPT account page
- Select ‘Settings’
- Select ‘Security’
- Scroll down to ‘Advanced account security’
- Choose ‘Enrol’
Alternatively, users can head directly to the sign-up page here.
Make sure you have your password, as you'll need to log in on all devices again after enrolling. You’ll then be able to choose two secure sign-in methods, including a mix of a passkey linked to a device, a password stored in a password manager, or a security key, such as a YubiKey.
These are physical security keys used for hardware-based authentication. OpenAI has teamed up with Yubico to offer a discounted YubiKey bundle for £61, which includes a USB-C Nano key that can be left in a laptop for convenience, as well as a backup key.
That said, Advanced Account Security will work with any FIDO-compliant security key.
Once enabled, the Advanced Account Security protections will apply to ChatGPT as well as Codex accounts associated with the same login.
OpenAI will require security professionals to enable Advanced Account Security by the beginning of June if they are part of the Trusted Access for Cyber programme.
Alternatively, organizations that are part of the programme can show they have phishing-resistant authentication in place.
So far, the Advanced Account Security protections are enabled on an individual account basis, rather than rolled out across an organization.
"We expect to extend this work to additional audiences, including enterprise environments, where stronger account security can matter just as much," the company said.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.
