Google "upends" internal teams to counter threat posed by ChatGPT

Google CEO Sundar Pichai at Lazienki Palace in Warsaw, Poland.
(Image credit: Getty Images)

Google is in a state of ‘code red’ as the tech giant looks to meet the challenge posed by recent ChatGPT developments, according to sources inside the company.

The firm has reportedly reassigned a number of internal departments to help develop and release new AI products to keep pace with the rapid acceleration of generative AI systems rolled out in recent months.

Central to this renewed charge at Google is the threat ChatGPT could pose to its core products and services.

Since its release three weeks ago, ChatGPT has been a source of intense excitement across the global technology ecosystem amid claims that the technology could become a major disruptor across a host of industry verticals.

CEO Sundar Pichai appears to be leading this rapid pivot, according to internal memos and audio obtained by the New York Times. Sources close to the matter said he has “upended” the work of several groups inside the company to “respond to the threat” that ChatGPT poses.

“From now until a major conference expected to be hosted by Google in May, teams within Google’s research, trust and safety, and other departments have been reassigned to help develop and release new AI prototypes and products,” the NYT reported.

It is claimed that employees have also been directed to build generative AI products comparable to OpenAI’s DALL-E, which has been widely used to create artwork and other digitally generated images.

DALL-E has been used by more than three million people since its release in January 2021.

These competing products could be made available as part of the tech giant’s AI Test Kitchen, according to the report.

Agile competitors

A key motivating factor in this shift at Google, the NYT suggested, is that the company is seriously concerned about the prospect of competing with smaller, more agile rivals in the artificial intelligence space.

OpenAI was founded in late 2015 by Sam Altman, Elon Musk, and a host of investors. Since its launch, the company has grown significantly and positioned itself as a key industry player.

And with the release of ChatGPT in late November, the company looks set to accelerate that growth and is actively courting investors.

A recent report from Reuters revealed that the company expects revenues to grow to around $200 million in 2023 and surpass the $1 billion mark in 2024.

More broadly, the generative AI space has witnessed significant investment over the last two years.

Data from PitchBook shows that venture capital investment in generative AI has surged 425% since 2020, with $2.1 billion invested in 2022 alone.

This rapid growth, combined with the emergence of dynamic new industry players, could pose a serious threat to established organisations such as Google, which has invested heavily in AI products.

Google has spent the last several years developing chatbots, and earlier this year was embroiled in a controversial incident involving its Language Model for Dialogue Applications system, known as ‘LaMDA’.

Viewed as a potential long-term competitor to ChatGPT, LaMDA was a source of enormous interest – and concern – after a Google engineer claimed it was sentient.

While Google rejected the claims made by engineer Blake Lemoine, the incident did serve to highlight the significant leaps made in chatbot technology in recent years.

According to audio from a meeting obtained by the NYT, executives said the company intends to release its LaMDA chat technology as a cloud computing service for external clients.

Similarly, in the meeting, it was also suggested that the tech giant could incorporate the technology into “simple customer support tasks”.

Accuracy concerns

This in itself could raise future ethical issues for Google: executives noted the company may limit prototype products to just 500,000 users and warn them that the technology could produce false or offensive content.

The accuracy of generative AI products and systems has been a lingering concern for several years, and one executive warned in the meeting that AI “can make stuff up”, employ toxic language, and contain bias.

In 2016, Microsoft famously released its ‘Tay’ chatbot prototype, which was found to produce racist and xenophobic language. The chatbot was subsequently taken down.

Incidents such as this are a key reason why Google has previously been reluctant to share its technology. The company has grown increasingly concerned that AI prototypes may harm users or society, according to an internal memo.

However, in a recent meeting, one executive warned that smaller organisations have “fewer concerns” about releasing such tools.

This presents Google with a high-stakes decision: capitalise on current trends and begin releasing prototypes, or risk being left behind by an industry that is developing at a rapid pace.

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.