RSAC Conference day two: A focus on what attackers are doing
From quantum to AI, experts discussed how new and experimental technologies could be used by hackers to access and decrypt sensitive data

A major focus of the second day of the RSAC Conference was sharing intelligence on what attackers are actually doing with emerging capabilities like AI and quantum computing.
In separate keynote sessions Tuesday, senior executives from Google offered different perspectives about what threat actors, including nation-states, are doing with artificial intelligence tools.
Sandra Joyce, vice president of Google Threat Intelligence, detailed how advanced persistent threat (APT) groups from more than 20 countries, especially Iran, China, and North Korea, have accessed Google’s public Gemini AI services to enhance their attacks.
She provided evidence that attackers performed reconnaissance on target organizations, researched vulnerabilities, sought assistance with malicious scripting, fine-tuned phishing messages, and looked up evasion techniques.
Fundamentally, though, the attacker activity surfaced by Google Threat Intelligence was relatively low level.
“We haven’t yet seen indications of adversaries developing any fundamentally new attack vectors with these models,” Joyce said.
AI safety controls blocked some APT actors from carrying out more sophisticated AI-powered research and attacks, Joyce explained. Meanwhile, the tools themselves are capable of discovering vulnerabilities. She pointed to Big Sleep, an LLM-powered agent Google has previously discussed, which uncovered a previously unknown flaw. “We believe that this is the first public example of an AI agent finding a previously unknown exploitable memory issue in widely used real world software,” Joyce said.
Empirical analysis of how malicious actors are trying to use Gemini, Microsoft Copilot, or ChatGPT can offer valuable clues about the attacker underworld. But another Google executive, speaking in a keynote panel discussion on Tuesday, provided important context about the limitations of relying exclusively on that type of data.
John 'Four' Flynn, vice president of security and privacy at Google DeepMind, pointed out that the operational security protocols of the most serious nation-state actors leave the industry mostly blind to their activities.
“I posit that most adversarial work will likely be on on-prem, open-weight models, or some sort of customized models that they’re building, because there is an issue of visibility,” Flynn said. “If you’re an attacker, obviously you’re going to be testing out all the things that are out there, but if you’re doing some really heavy lifting with AI, it may or may not be something that you do in the open.”
Another panelist on Flynn's session described how quickly attackers appear to be moving in step with the evolving capabilities of AI itself to create new threats.
Jade Leung, CTO of the UK AI Security Institute – a UK government team of about 200 researchers – focused on how AI might affect national security risks in areas like chemical and biological attacks and terrorism.
“Clearly [AI] capabilities are moving faster than safety and security. There is a sense in which folks who are in the field, who work on these types of issues, feel like we are barely keeping up,” Leung said.
“Capabilities are not quite there yet in terms of posing significant, severe risk. But it’s not just the snapshot that matters, it’s the trend line that matters, and so the trend lines are pretty steep,” Leung said. “It is astonishing how much more capable [these models and systems are] getting in a very tiny amount of time.”
While AI is front and center as a security issue at RSAC Conference, a main stage panel on Tuesday also addressed another emerging threat — the potential for quantum computing to undermine current encryption practices.
Participants on that cryptography panel agreed that quantum computing is likely still more than a decade away from becoming a decryption threat, but they made it clear that nation-state actors are taking offensive action now.
Raluca Ada Popa, associate professor and senior staff research scientist at UC Berkeley and Google DeepMind, called the technique “harvest now, decrypt later.” She said, “Attackers can import encrypted, confidential data now, and decrypt them later when quantum computers are ready.”
Whitfield Diffie, a pioneer of public-key cryptography, chimed in to explain why harvesting matters even if quantum computing is decades away. “There are vast tape libraries at NSA and all the rest of those organizations running back decades,” Diffie said. “I am quite confident that the oldest thing in NSA’s tape libraries probably comes from World War I, and surely is no later than World War II. So, of course, people are going to be working on our current traffic through the rest of the century.”
MIT professor Vinod Vaikuntanathan recommended that organizations protect sensitive data by employing one of the newer post-quantum encryption algorithms on top of a current algorithm like RSA or Diffie-Hellman. “The pragmatic thing to do is be conservative and employ what’s called hybrid encryption.”
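In practice, the hybrid approach Vaikuntanathan describes means deriving a session key from two independently established secrets, one classical (for example, from a Diffie-Hellman exchange) and one post-quantum (for example, from an ML-KEM/Kyber key encapsulation), so the result stays secure as long as either scheme holds up. A minimal Python sketch of the key-combining step, using random bytes as stand-ins for the real exchange outputs (a production system would use an audited cryptography library):

```python
import hashlib
import hmac
import os

def combine_shared_secrets(classical_ss: bytes, pq_ss: bytes,
                           info: bytes = b"hybrid-kem-v1") -> bytes:
    # Concatenate-and-KDF combiner (a simplified HKDF-extract step):
    # the derived key remains secret as long as EITHER input does.
    return hmac.new(info, classical_ss + pq_ss, hashlib.sha256).digest()

# Stand-ins for the outputs of a classical Diffie-Hellman exchange and
# a post-quantum KEM such as ML-KEM -- placeholders, not real crypto.
classical_ss = os.urandom(32)
pq_ss = os.urandom(32)

session_key = combine_shared_secrets(classical_ss, pq_ss)
```

The design point is that a quantum attacker who breaks the Diffie-Hellman secret still cannot recover the session key without also breaking the post-quantum component, and vice versa.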
Scott Bekker is an analyst with ActualTech Media. For 20 years, Scott edited and reported for technology magazines focused on enterprise technologies and the IT channel.