Meta ditching its responsible AI team doesn't bode well

With Meta reportedly breaking up its Responsible AI (RAI) team, the tech giant has become the latest in a growing list of firms to dance dangerously with AI safety concerns. 

Members of the RAI team will be distributed among other areas of Meta, including its generative AI product team, with others set to focus on AI infrastructure projects, The Information reported last week. 

A Meta spokesperson told Reuters that the move is intended to "bring the staff closer to the development of core products and technologies".

This does, to a degree, make sense. Embedding those responsible for ethical AI within specific teams could offer alternative voices within the development process that will consider potential harms.  

However, the move means Meta’s responsible AI team, which was tasked with fine-tuning the firm’s AI training practices, has effectively been gutted. Earlier this year, the division was restructured, leaving it a “shell of a team”, according to Business Insider.

The team was hamstrung from the get-go, reports suggest, with the publication noting that it had “little autonomy” and was bogged down by bureaucratic red tape.

Ordinarily, restructuring and redistributing staff from a team like this would raise eyebrows; given the intensity of discussions over AI safety in recent months, Meta’s decision seems perplexing to say the least.

Concerns over AI safety and ethical development have been growing in intensity amidst claims that generative AI could have an adverse impact on society.

AI-related job losses, the use of generative AI tools for nefarious purposes such as disinformation and cyber attacks, and the potential for discriminatory bias have all been flagged as lingering concerns. 

Lawmakers and regulators on both sides of the Atlantic have been vocal on the topic in a bid to get ahead of the curve. The European Union (EU), for example, has taken an aggressive stance on AI regulation with the EU AI Act.

The US government has also been pushing heavily for AI safeguards in recent weeks, with President Biden signing an executive order aimed specifically at forcing companies to establish AI safety rules. 

This confluence of external pressure has prompted big tech firms to act in anticipation of pending legislation, suggesting they are willing to bow to pressure rather than risk heightened regulatory scrutiny.

In July, a host of major players in the AI space, including Anthropic, Google, Microsoft, and OpenAI, launched the Frontier Model Forum, a coalition aimed specifically at developing AI safety standards.

Yet despite this, Meta appears intent on completely swerving safety concerns as it looks to double down on AI development, disregarding its “pillars of responsible AI” which include transparency, safety, privacy, and accountability. 

Jon Carvill, senior director of communications for AI at Meta, told The Information that the company will still “prioritize and invest in safe and responsible AI development” despite the decision. 

He added that team members redistributed throughout the business will “continue to support relevant cross-Meta efforts on responsible AI development and use”.

While these comments appear aimed at alleviating concerns over AI safety, they are unlikely to put minds at ease in the long term, and instead serve to highlight that Meta is disregarding safety in its rush to keep pace with industry competitors.

Playing catch-up

Meta’s sharpened focus on generative AI development appears to be the key reason behind gutting its responsible AI team.

While Microsoft, Google, and other big tech names went all in on generative AI, Meta was left playing catch-up, prompting CEO Mark Zuckerberg to scrap the company’s metaverse pipe dream and pivot to this new focus.

Stiff competition in the AI space has forced the firm to pour money, resources, and staff into its generative AI push, which so far has delivered positive results.

Earlier this year, Meta released its own large language model (LLM), dubbed LLaMA, in sizes ranging from 7 billion to 65 billion parameters. The model was its first major foray into the space, and was followed by the more powerful Llama 2 and Code Llama, Meta’s answer to GitHub Copilot.

Meta isn’t alone in cutting resources for responsible AI development. X (formerly Twitter) cut staff responsible for ethical AI development following Elon Musk’s takeover in November last year, right around the time the generative AI ‘boom’ ignited with the launch of ChatGPT.

Microsoft, too, cut staff in its Ethics and Society team, one of the key divisions that led research on responsible AI at the tech giant. 

Meta is no stranger to criticism and has found itself in repeated battles with regulators on both sides of the Atlantic on topics such as data privacy in recent years, racking up astronomical fines in the process. 

This latest move from the tech giant should have alarm bells ringing. A company willing to disregard its own internal AI safety teams in a bid to drive innovation at all costs isn’t the best look and may create long-term headaches for the firm. 

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.