Google CEO says it's time to regulate AI

Sundar Pichai argues that guardrails are needed in an age of deepfakes and facial recognition

Google and Alphabet CEO Sundar Pichai has called for the regulation of artificial intelligence in order to avoid the misuse of the technology and the spread of misinformation.

In a recent opinion piece published in the Financial Times, Pichai argued that, despite positive work in AI, history is full of examples of tech missteps that should serve as a warning.

"The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread," said Pichai.

Such lessons should make it clear that tech could go wrong again, he argued, pointing to "real concerns about the potential negative consequences of AI", including deepfakes and facial recognition. Market forces shouldn't be left to decide how such technologies are used, he added.

"Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it."

Perhaps unsurprisingly, Pichai points to Google's own AI principles — which say AI should be used for social benefit, avoid bias, be safe and accountable, be grounded in science, and incorporate privacy by design — as well as the company's efforts to develop tools for testing AI.

However, it's worth noting that Google's AI principles and its ethical AI board were established following backlash from privacy groups and its own employees, a year after Google signed a deal with the Pentagon to use AI to analyse drone footage. Google has since declined to renew that contract and said it won't use AI to build weapons, though it will continue to work with the military.

In-house guidance alone won't be enough, Pichai admits. He calls for government regulation of AI and suggests the EU's GDPR can serve as a "strong foundation" for developing new frameworks.

"Regulation can provide broad guidance while allowing for tailored implementation in different sectors," argued Pichai. "For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits."

Google may be calling for AI regulation, but it continues to fight lawmakers and regulators in other quarters. Indeed, while Pichai praises GDPR, Google has been hit with fines for breaching data protection rules. And in 2018 the company outspent every other tech company on lobbying governments, even as several other tech giants set their own records for such spending that year.
