Does Meta know where it's going with AI?
After a rocky start to 2025, Meta's new AI subdivisions could set the company on a path to AI success


Meta will split its AI division into four subdivisions, according to reports from The Information, marking the latest overhaul of the tech giant’s generative AI strategy.
The social media giant formed Meta Superintelligence Labs in June, to be headed up by former GitHub CEO Nat Friedman and ex-Scale AI chief executive Alexandr Wang, and backed with billions of dollars in funding.
Under the rumored changes, the new departments are expected to be an infrastructure team, a product team focused on offerings such as Meta’s AI assistant for consumers, Meta’s Fundamental AI Research (FAIR) lab, and a ‘TBD’ lab for unspecified projects.
It is likely that the last of the four will turn its attention to new frontier AI models aimed at exceeding the performance of its current Llama range.
Meta’s AI offerings have been widely embraced by the industry over the last two years. The models, which boast performance comparable to lighter models by Google and OpenAI, have also been used by the likes of DeepSeek to train its own models.
But Llama 4, currently available as its lightweight Scout and mid-tier Maverick variants, was the subject of some scorn in the developer community when it was released.
Specialist communities on Reddit criticized the latest Llama models for their poor performance relative to competitor models, and users have accused Meta of burning through its goodwill with the locally-run AI community.
The backlash was such that Meta’s VP of generative AI Ahmad Al-Dahle felt the need to publicly deny claims the company skewed performance metrics for Llama 4 Maverick on public leaderboards - such as LMArena - by training it on test sets.
The firm has also faced accusations of ‘open washing’, with the Open Source Initiative noting in February that despite being advertised as open source, Llama models fall short of that definition.
Meta targets stronger focus on AI development
Superintelligence Labs, with its newly-formed subdivisions, could mark a major reset for Meta, moving away from its iterative free model releases and closer to a focus on AGI alongside other frontier labs such as Google DeepMind, OpenAI, and Anthropic.
Steve Grant, managing director at Figment, told ITPro the overhaul shows a “deliberate shift” toward a more focused approach on AI development.
“Each team handles a specific area, from developing new language models to integrating AI into products, scaling infrastructure, and pursuing long-term research,” he said.
“This structure is designed to make innovation more manageable and help the company respond faster to internal challenges, including staff turnover and previous model releases that did not meet expectations.”
Grant added that by allocating specialist teams, Meta could benefit from a direct route to competing in the AI space, but that this would depend on targeted investment and harnessing the right talent - the latter of which Meta has focused on heavily in recent months.
The tech giant has embarked on a major AI hiring spree across 2025, with reports of attempts to poach top AI industry experts with pay packages reportedly reaching more than a billion dollars.
For example, the Wall Street Journal recently reported that Meta was targeting OpenAI researchers with attractive pay offers to fill positions at TBD Lab.
Elsewhere, a reported pay package worth more than a billion dollars for an executive at former OpenAI CTO Mira Murati’s startup shocked the industry.
Meta appears more than content to splash the cash on this front and across other key areas, such as infrastructure. In its Q2 earnings report, Meta said it expects capital expenditure for 2025 to hit between $64 billion and $72 billion, with higher figures expected in 2026.
The firm said it is “aggressively pursuing opportunities to bring additional capacity online to meet the needs of our artificial intelligence efforts and business operations”.
This includes a 5GW data center intended to significantly boost its AI training and inference capacity.
Is Meta sweating about catching up?
Aggressive hiring and investment on the AI front may suggest the company is aware that it’s fallen behind key competitors in the space, according to industry figures.
In late June, Google DeepMind CEO Demis Hassabis suggested during a podcast appearance that Meta’s hiring spree shows it’s lagging behind in the AI race. Hassabis added that researchers could think twice about accepting a position at Meta if they’re seeking to help shape AI safety.
If Meta is conscious of its failings on this front, bringing FAIR directly into the fold at Superintelligence Labs could also signal a shift designed to ease fears about loose guardrails or lack of ethical oversight when it comes to Meta’s AI development.
More broadly, Superintelligence Labs will work toward Meta CEO Mark Zuckerberg’s stated aim of producing “personal superintelligence that empowers everyone”.
Though Zuckerberg hasn’t explained what this means in practical terms, he has predicted widespread upheaval in tech jobs, stating in January that mid-level engineers at Meta could be matched by AI before the end of 2025.