'Local' machine learning promises to cut the cost of AI development in 2024


2024 could see a concerted shift toward a 'local' machine learning development and training approach in a bid to cut prohibitive costs, according to a leading industry figure. 

With 2023 dominated by headlines on powerful large language models (LLMs) and the steep energy and compute costs of AI development, Hugging Face CTO Julien Chaumond suggested “local machine learning” could become an emerging trend in the year ahead.

Chaumond’s comments came in a year-end prediction post on LinkedIn in which he outlined a key area to watch.

“Local ML is going to be huge,” he said. “It will be in part driven by the adoption of Apple Silicon and other innovative hardware, but also on raw CPU and mobile devices.”

“In many cases except for the largest of LLMs, local inference will become a viable alternative to hosted inference.”
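For a sense of what local inference looks like in practice, the sketch below runs a small open model entirely on-device using Hugging Face's transformers library. The model choice and parameters here are illustrative only, and the example assumes transformers and PyTorch are installed locally.

    # Minimal local inference sketch (assumes `pip install transformers torch`).
    # distilgpt2 is an illustrative small model: it downloads once, after
    # which generation runs entirely on the local CPU or GPU.
    from transformers import pipeline

    generator = pipeline("text-generation", model="distilgpt2")

    # No prompt data leaves the machine; all computation happens on-device.
    result = generator("Local inference means", max_new_tokens=20)
    print(result[0]["generated_text"])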

Chaumond’s comments regarding Apple Silicon specifically follow the recent launch of a batch of machine learning tools designed primarily for use on Apple hardware.

In December 2023, the tech giant quietly launched Apple MLX, a machine learning framework developed by its internal ML research group. The framework enables Apple device users to harness in-house silicon for AI inferencing.

The framework is available through open source channels such as PyPI and GitHub, and could represent a step change in how developers build AI tools and platforms.
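By way of illustration, a minimal sketch of the MLX Python API might look like the following. The array sizes are arbitrary, and the example assumes MLX has been installed via pip on an Apple Silicon machine.

    # Minimal MLX sketch (assumes `pip install mlx` on Apple Silicon).
    # Arrays live in unified memory shared by the CPU and GPU, and
    # operations are evaluated lazily.
    import mlx.core as mx

    a = mx.random.normal((1024, 1024))
    b = mx.random.normal((1024, 1024))

    c = a @ b      # queued lazily, not yet computed
    mx.eval(c)     # forces evaluation on the default device
    print(c.shape)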

Ben Wood, chief analyst and CMO at CCS Insight, echoed Chaumond’s comments on local machine learning inferencing, adding that he expects this to be an “essential step” for developers moving forward.

A key factor here, he told ITPro, will be lingering privacy concerns surrounding the development and deployment of AI tools and services. Organizations such as Apple appear to be framing a more localized approach to inferencing as a way to curtail the risk of data exposure, whether during model training or through active exploitation.

In early 2023, OpenAI’s ChatGPT came under intense scrutiny due to a flaw that exposed user conversations. A slew of organizations globally implemented rules to prevent staff from using the platform due to the risk of data exposure.

“Privacy will be a major focus with companies such as Apple extolling the virtues of using on-device AI to keep sensitive data on the device rather than having to send it to the cloud,” he explained.

Local machine learning could cut costs

AI training and development costs have been a key hurdle for many organizations over the last year, with analysis in May 2023 showing that the cost of training an LLM such as GPT-3 could surpass $4 million.

Research from CCS Insight in October, for example, warned that generative AI could experience a “cold shower” in 2024 due to these prohibitive development costs.

Wood suggested that developers could sidestep these prohibitive costs and computing requirements through on-device training methods.

“The intense computing requirements for cloud-based AI will mean the cost of deployment will be a factor in the shift towards more on-device AI processing,” he said.


In recent weeks, Google has emerged as a leading champion of on-device AI capabilities, Wood added, with the announcement that its Gemini Nano model will be available on certain devices.

This has been specifically designed for on-device tasks, he added, and marks a “surefire sign” that the tech giant views the practice as the way forward.

Major industry players such as Google and Apple aren’t alone in this sharpened focus on local, on-device machine learning inferencing. Pointing to Qualcomm, Wood noted that the manufacturer has been “working hard” to support on-device AI through its latest Snapdragon platform generation.

Samsung, too, is rumored to be exploring such capabilities on its upcoming Galaxy S24 Ultra smartphone.

This, Wood said, is tipped to be powered by Qualcomm silicon and will have “numerous on-device AI functions”.

Ross Kelly
News and Analysis Editor
