RSAC Conference day two: A focus on what attackers are doing

From quantum to AI, experts discussed how new and experimental technologies could be used by hackers to access and decrypt sensitive data


A major focus of the second day of the RSAC Conference was sharing intelligence on what attackers are actually doing with emerging capabilities like AI and quantum computing.

In separate keynote sessions Tuesday, senior executives from Google offered different perspectives about what threat actors, including nation-states, are doing with artificial intelligence tools.

Sandra Joyce, vice president of Google Threat Intelligence, detailed how advanced persistent threat (APT) groups from more than 20 countries, especially Iran, China, and North Korea, have accessed Google’s public Gemini AI services to enhance their attacks.

She provided evidence that attackers performed reconnaissance on target organizations, researched vulnerabilities, sought assistance with malicious scripting, fine-tuned phishing messages, and looked up evasion techniques.

Fundamentally, though, the attacker activity surfaced by Google Threat Intelligence was relatively low-level.

“We haven’t yet seen indications of adversaries developing any fundamentally new attack vectors with these models,” Joyce said.

AI safety controls blocked some APT actors from carrying out more sophisticated AI-powered research and attacks, Joyce explained. Meanwhile, the tools themselves are capable of discovering vulnerabilities. She gave the example of Big Sleep, an LLM-based agent Google has previously disclosed, which uncovered a previously unknown vulnerability. “We believe that this is the first public example of an AI agent finding a previously unknown exploitable memory issue in widely used real world software,” Joyce said.

While empirical analysis of how malicious actors try to use Gemini, Microsoft Copilot, or ChatGPT can offer valuable clues to the workings of the attacker underworld, another Google executive, speaking on a keynote panel Tuesday, provided important context about the limitations of relying exclusively on that type of data.

John 'Four' Flynn, vice president of security and privacy at Google DeepMind, pointed out that the operational security protocols of the most serious nation-state actors leave the industry mostly blind to their activities.

“I posit that most adversarial work will likely be on on-prem, open-weight models, or some sort of customized models that they’re building, because there is an issue of visibility,” Flynn said. “If you’re an attacker, obviously you’re going to be testing out all the things that are out there, but if you’re doing some really heavy lifting with AI, it may or may not be something that you do in the open.”

Another panelist on the session with Flynn addressed how quickly attackers appear to be moving in step with the evolving capabilities of AI itself to create new threats.

Jade Leung, CTO of the UK AI Security Institute – a UK government team of about 200 researchers – focused on how AI might affect national security risks in areas like chemical and biological attacks and terrorism.

“Clearly [AI] capabilities are moving faster than safety and security. There is a sense in which folks who are in the field, who work on these types of issues, feel like we are barely keeping up,” Leung said.

“Capabilities are not quite there yet in terms of posing significant, severe risk. But it’s not just the snapshot that matters, it’s the trend line that matters, and so the trend lines are pretty steep,” Leung said. “It is astonishing how much more capable [these models and systems are] getting in a very tiny amount of time.”

While AI is front and center as a security issue at RSAC Conference, a main stage panel on Tuesday also addressed another emerging threat — the potential for quantum computing to undermine current encryption practices.

Participants on that cryptography panel agreed that quantum computing was likely still a decade or more from becoming a decryption threat, but they made it clear that nation-state actors are taking offensive action now.

Raluca Ada Popa, associate professor and senior staff research scientist at UC Berkeley and Google DeepMind, called the technique “harvest now, decrypt later.” She said, “Attackers can import encrypted, confidential data now, and decrypt them later when quantum computers are ready.”

Whitfield Diffie, a pioneer of public-key cryptography, chimed in to explain why harvesting matters even if quantum computing is decades away. “There are vast tape libraries at NSA and all the rest of those organizations running back decades,” Diffie said. “I am quite confident that the oldest thing in NSA’s tape libraries probably comes from World War I, and surely is no later than World War II. So, of course, people are going to be working on our current traffic through the rest of the century.”

MIT professor Vinod Vaikuntanathan recommended that organizations protect sensitive data by employing one of the newer post-quantum encryption algorithms on top of a current algorithm like RSA or Diffie-Hellman. “The pragmatic thing to do is be conservative and employ what’s called hybrid encryption.”
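The hybrid approach Vaikuntanathan described can be sketched at a high level: derive one shared secret from a classical exchange (such as Diffie-Hellman) and a second from a post-quantum scheme (such as ML-KEM), then feed both into a key-derivation function, so the final key stays safe as long as either component resists attack. A minimal, illustrative Python sketch follows; the `classical_secret` and `pq_secret` values here are random placeholders standing in for the outputs of real key exchanges, not actual algorithm outputs.

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869): extract a pseudorandom key, then expand to `length` bytes."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder secrets: in a real protocol these would come from a classical
# exchange (e.g., X25519) and a post-quantum KEM (e.g., ML-KEM) respectively.
classical_secret = os.urandom(32)
pq_secret = os.urandom(32)

# Concatenating both secrets before key derivation means an attacker must
# break BOTH exchanges to recover the session key.
session_key = hkdf_sha256(
    ikm=classical_secret + pq_secret,
    salt=b"",  # HKDF permits an empty salt
    info=b"hybrid-kex-demo",
)
print(session_key.hex())
```

The design point is that the combination is "conservative" in exactly Vaikuntanathan's sense: if quantum computers eventually break the classical half, or cryptanalysis weakens the newer post-quantum half, the derived key remains protected by the surviving component.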


Scott Bekker is an analyst with ActualTech Media. For 20 years, Scott edited and reported for technology magazines focused on enterprise technologies and the IT channel.