The UK government is in the final day of preparations ahead of its AI Safety Summit, which will see world leaders and high-profile tech figures alike descend on Bletchley Park with the aim of coming to a broad agreement on the path to safe AI development.
For months, the government has talked about its goal of cementing the UK as a world leader in AI, and in recent days prime minister Rishi Sunak has set out his focus on the potential risks the technology poses.
Those expected to attend include key figures from OpenAI, Meta, and Google DeepMind, as well as US vice president Kamala Harris and Ursula von der Leyen, president of the EU Commission. The full list of 100 or so attendees has not been revealed.
Amidst brewing global legislation on AI, and following months in which the UK appeared to be lagging behind the EU on AI, the conference could prove a key moment for the government to outline its stance on AI risks and to shape the global approach to the technology.
There have been concerns that Sunak’s messaging around the risks of AI is out of step with business concerns.
The prime minister has predominantly focused on theoretical threats, such as the use of generative AI to write self-replicating malware or create chemical weapons, rather than business concerns over hallucinations or the potential for models to leak data.
In the short term, many businesses are also focused on the concrete responsibilities legislation could place on them and are seeking consistency across borders on the issue.
“Those now looking to intervene on AI safety have two tasks ahead of them,” said Greg Rivers, director of government affairs & policy at Kyndryl.
“The first, like any multilateral summit, is to agree on the intellectual principles which will guide any future decision-making. The second, learning from data privacy outcomes, is to build a pragmatic approach to applying those principles which aligns markets and enables businesses to operate safely, compliantly, and innovatively.
“Whatever concerns we might hold about AI, it is clear that there are real opportunities in terms of global productivity and efficiency when applied in the right way. The Global AI Safety Summit must not forget its responsibility to unlock that opportunity, for all of us.”
Division on the AI Safety Summit focus
Even as guests arrive for the event, the actual subject matter for the summit is still in contention.
Sunak is reportedly pushing for a focus on the potential existential threats posed by AI, and has aimed to create global consensus on the issue by inviting government, private sector, and academic representatives from a range of territories.
“There are mixed views ahead of Rishi Sunak’s AI Summit, particularly with the attendance of China,” said Alex Hazell, head of legal and privacy, EMEA at Acxiom.
“While some believe getting global and tech leaders together in the same room will help align AI regulation, others predict only loose guidelines when there is a pressing need to curb a potentially dangerous technology.
“The overarching focus of the attendees must be how they will mitigate the most serious risks posed by the technology, such as bioweapons and even human extinction.”
Others have argued that the summit’s focus should be on ensuring that guidelines are met while promoting open-source AI development, and are drafted in broad enough terms to take evolving models into account.
Bernd Greifeneder, founder and CTO at Dynatrace, told ITPro that leaders need to have a broader discussion around alternative AI models such as causal AI, rather than solely focusing on frontier models.
“I think this brings the wrong perspective to the industries, it gives them a wrong perspective of actually leveraging AI in a responsible way,” he said.
“We need to classify not only the uses, which the EU AI Act is attempting and is actually on a good path, far from perfect but with good aspects and thinking. But what the EU AI Act is missing is categorizing the types of AI, so that the actual industries get a bit more differentiated on those.”
In addition to these discussions, Greifeneder emphasized that it is “mandatory to look at the horizon” when it comes to risk, particularly those in the medium-to-long term such as deepfake scams, the potential for AI to be used for cyber crime, and how destructive cyber attacks could be to national security.
Yann LeCun, chief AI scientist at Meta, has publicly aired his concerns that the AI Safety Summit’s expected focus on existential threats could lead to a scenario in which a handful of companies control the majority of the AI market.
“Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment,” LeCun wrote in a post on X (formerly Twitter).
“They are the ones who are attempting to perform a regulatory capture of the AI industry.[Max Tegmark], Geoff [Hinton], and Yoshua [Bengio] are giving ammunition to those who are lobbying for a ban on open AI R&D,” LeCun continued.
“If your fear-mongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI.”
The Founders Forum Group, which brings together business leaders, sent Sunak an open letter calling for regulatory clarity alongside a commitment to provide researchers with open access to datasets from organizations such as the NHS and Companies House.
In its approach to AI legislation to date, the UK government has underlined the risks while stating that it is reluctant to overregulate AI firms to avoid crushing innovation in the space. For example, the government’s white paper A pro-innovation approach to AI regulation argued against enforcing safety principles on a statutory basis.
While many in the industry have praised the government for its focus on AI innovation, calls for more precise regulation show clear dividing lines in the sector that could come to the fore during the Summit.
“The suggestion that regulating AI stifles innovation is misguided,” said Stephen Ferrell, VP IT governance and software assurance at Ideagen.
“The aim of regulation is to safeguard and protect, not to control. The pitfall of jumping straight into discussions about the potential of frontier AI for the Summit is that we don’t have a regulatory structure in place that adequately addresses how we control AI and human interactions.
“While it is true that most countries will want to develop their own capabilities in such a lucrative space, different regulatory controls could stifle AI’s positive potential. A single set of standards that are robust enough, but not punitive is critical, and the backbone for achieving such an aim is international collaboration.”
What does ‘success’ look like for the AI Safety Summit?
Hopes that concrete legislation could arise from the talks could be doused in the immediate aftermath of the Summit, according to Chris Royles, EMEA field CTO at Cloudera, who argues that any form of legislation will be years in the making.
“Although there are hopes that the AI Safety Summit will kickstart discussions around making AI safer, we know any government-led regulation or guidance around the use of AI is likely to take years to develop – and the speed of adoption will outpace legislation,” he said.
“As such, the onus will be on organizations to ensure that AI is used responsibly and safely in the interim.”
Many nation-states are progressing with their own approaches to AI outside of the UK’s AI Safety Summit. The Biden administration announced a new executive order requiring firms to be more transparent about the data used for training and requiring AI content to be tagged.
G7 member nations have also agreed to a new “code of conduct” ahead of the Summit, in the form of a non-binding code that sets out broad recommendations on developing ethical AI while also encouraging innovation.
It is clear that AI regulation across different regions is already moving at different speeds and with different emphasis. For example, the EU’s approach emphasizes categorizing AI model types by their potential risk, an approach that the UK has avoided pursuing.
“Getting everyone on the same page and thinking about this is one critical success factor,” Cindi Howson, chief data strategy officer at ThoughtSpot, told ITPro.
“I do like that the EU has risk categories, but I look at some of the things they describe as ‘low risk’ and I question some of the ways in which it’s categorized. Job applications are low risk, and yet classes of people are not able to get jobs because of the way that algorithms work and that lack of transparency.
“One thing in the UK that I really like is the education element, which goes back to primary schools. This is where, when you look at survey data on who trusts AI and who doesn't, there is a difference between youth and older people.”
The UK government has also announced £8 million for 800 AI scholarships across the country, and will spend £118 million on AI training to support projects such as the creation of 12 more Centers for Doctoral Training on AI development.
Howson added that calls for an AI moratorium, including by the CEO of AI firm Conjecture and Elon Musk, are “ridiculous”.
“Sorry, things are already progressing and we have to look at the global implications. If one of the countries with better humanitarian [records] paused, one of the countries that don’t have a good record will advance further, faster.”
Rory Bathgate is a staff writer at ITPro covering the latest news on artificial intelligence and business networks. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.