Dell to bring model tuning capabilities to AI platform

(Image: the Dell Technologies logo displayed at Mobile World Congress in Barcelona, March 2, 2023. Credit: Getty Images/Josep Lago)

Dell Technologies has unveiled new features for fine-tuning generative AI models within its AI solutions and a new multi-cloud data lakehouse to help businesses harness their data more effectively.

Through Dell Validated Design for Generative AI with Nvidia for Model Customization, the firm will help customers tailor pre-trained models to be more effective for their use cases or to meet sector-specific demands.

“Over one-third of enterprises are already considering building their enterprise-specific LLMs,” said Carol Wilder, VP of ISG cross-portfolio software and solutions at Dell.

“They’re already finding that their pre-trained models are not sufficient for their success; they’re having to customize those models.”

Dell has also committed to providing customers with guidance and examples for deriving value from tuning and prompt engineering through Dell Professional Services for Generative AI. 

Through its partnership with Nvidia, Dell will continue to provide customers with models via Nvidia NeMo through Dell Validated Designs for Generative AI. Its preparation, implementation, management, and education services have also been updated to include information and assistance on fine-tuning.

Preparing data is key to fine-tuning, and to meet this need, Dell and data analytics firm Starburst have announced a new modern data lakehouse solution. The partnership follows a previous announcement at Big Data London 2023, in which the two firms committed to collaborating on data lakehouse technology.

The data lakehouse solution will be open and sit on top of existing data sources across a customer’s hybrid environment, powered by Dell Object and File Storage, Dell PowerEdge, and Starburst’s own platform.

“Many of our customers feel like they’re on a treadmill where they need to consolidate all of their data in one place before their data scientists can start to use it,” said Greg Findlen, SVP of ISG data management at Dell.

“With this solution, customers can leverage the data where it exists and as their data scientists are using that data they can also get a better understanding of what data is the most important to consolidate.”

“The number one priority is accelerating how quickly the data science teams and the AI developer teams can get access to that data from across the organization.”
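Starburst’s platform is built on the open source Trino query engine, which federates SQL queries across disparate sources. As a rough illustration of the pattern Findlen describes, the hedged sketch below uses the trino Python client to join object-storage data with a relational source in place; the host, catalog, schema, and table names are illustrative assumptions, not details of the Dell solution.

```python
# A hedged sketch of federated querying in the Trino/Starburst style:
# one SQL statement spans sources that stay where they live.
# Host, catalog, schema, and table names are illustrative assumptions.
import trino

conn = trino.dbapi.connect(
    host="starburst.example.com",  # hypothetical coordinator address
    port=8080,
    user="analyst",
)
cur = conn.cursor()

# Join object-storage data with an existing warehouse table in place,
# without consolidating either source first.
cur.execute("""
    SELECT c.region, count(*) AS orders
    FROM lake.sales.orders AS o
    JOIN warehouse.crm.customers AS c ON o.customer_id = c.id
    GROUP BY c.region
""")
print(cur.fetchall())
```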

Clear data structure requirements

At Dell Technologies World 2023, Dell global CTO John Roese told ITPro that firms need a clear structure for their data in order to make the best use of large language models (LLMs), and branded the lack of awareness around this issue “disturbing”. 

Findlen clarified that the firm aims to create a fully integrated set of tools for data management, so that customers do not have to stitch technologies together manually.

This modern data lakehouse solution will be made available in H1 2024. Dell has no plans at present to release the software through a preview or beta.

Dell Validated Design for Generative AI with Nvidia for Model Customization will be made available globally through Dell’s traditional channels, and will also be accessible through Dell APEX from the end of October.

Benefits of fine-tuning

Inferencing refers to the process through which a pre-built model responds to information a company feeds it. For example, a model can infer context from a company’s knowledge base and use that information to respond to user inputs in a way consistent with its training.
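To make the distinction concrete, here is a minimal inference sketch using the Hugging Face transformers library; the model and prompt are illustrative stand-ins rather than anything specific to Dell’s designs.

```python
# A minimal inference sketch using Hugging Face transformers.
# The model and prompt are illustrative, not specific to Dell or Nvidia.
from transformers import pipeline

# Load a pre-trained text-generation model as-is: no weights are changed.
generator = pipeline("text-generation", model="gpt2")

# Inference: the frozen model responds to the input it is given,
# drawing only on what it learned during training.
prompt = "The first step in resetting a corporate VPN password is"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```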


Customization and fine-tuning require developers to understand how a pre-built model works, and to run it through additional rounds of training on new data to improve its performance at a given task.

This can be used to make models more efficient, or more accurate at producing company-aligned outputs. If a company uses a specific in-house programming language, for example, it could train a pre-built code generation model to become more effective at producing code in that language, as in the sketch below.
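As a rough sketch of what those additional rounds of training look like in practice, the example below fine-tunes a small pre-trained causal language model on a text file of in-house code using Hugging Face transformers and datasets; the base model, data file, and hyperparameters are illustrative assumptions.

```python
# A hedged fine-tuning sketch with Hugging Face transformers/datasets.
# Base model, data file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # stand-in for any pre-trained causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# New task-specific data, e.g. snippets of an in-house programming
# language (hypothetical file name).
dataset = load_dataset("text", data_files={"train": "inhouse_code.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

# Additional rounds of training update the pre-trained weights.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned")  # the customized model, ready for inference
```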

Meta used fine-tuning to produce a Python-specific version of its code completion LLM Code Llama, named Code Llama - Python. Code Llama was itself created by training Meta’s LLM Llama 2 on 500 billion tokens of code and programming information; Meta then trained it on an additional 100 billion tokens of Python data.

In August 2023, OpenAI announced that developers could fine-tune its LLM GPT-3.5 Turbo, with fine-tuning for GPT-4 to follow in the final months of the year.
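For reference, that flow looks roughly like the sketch below, written against the current openai Python SDK (a v1-style client that postdates the August announcement); the JSONL training file is a hypothetical set of chat-formatted examples.

```python
# A sketch of OpenAI's fine-tuning flow for GPT-3.5 Turbo, using the
# v1-style openai Python SDK; the JSONL file name is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload task-specific training examples in OpenAI's chat JSONL format.
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against the pre-trained base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```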

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.
