Fine-tuning for GPT-3.5 Turbo opens door for company-specific models, GPT-4 level performance
OpenAI has stated customer data will not be used for its own training purposes
Developers are now able to fine-tune OpenAI’s model GPT-3.5 Turbo using their own data, to make models work better for their specific use case or brand.
This could benefit firms that currently use the OpenAI API for their internal artificial intelligence (AI) needs, such as powering a client-facing chatbot or for generating coding advice.
By refining GPT-3.5 Turbo with good quality data, OpenAI says developers can produce a fine-tuned model that matches or even exceeds the capabilities of GPT-4, its most powerful large language model, on certain narrow tasks.
It cited the benefits of fine-tuning models including more reliable output for specific formats such as code, better control over the tone of text output, and improved steerability – otherwise known as the capacity for the model to accurately follow user instructions.
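As a rough illustration of the workflow, fine-tuning data is supplied as JSONL chat transcripts before a job is started against the model. The sketch below, which assumes the `openai` Python SDK (v1.x) and an API key, uses invented example records; only the data-preparation step runs locally, with the upload and job-creation calls shown in comments.

```python
import json

# One chat transcript per JSONL line, in the format OpenAI's
# fine-tuning documentation describes. The records are illustrative.
examples = [
    {"messages": [
        {"role": "system", "content": "You are the Acme support bot."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# With the `openai` package installed and OPENAI_API_KEY set, the file
# is then uploaded and a fine-tuning job started roughly like this:
#
#   from openai import OpenAI
#   client = OpenAI()
#   upload = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-3.5-turbo")
```

Once the job completes, the resulting fine-tuned model is addressed by its own model ID in subsequent chat completion calls.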
Businesses could also seek to improve the efficiency of models and reduce the amount of time that workers have to put into each prompt.
In its announcement post, OpenAI stated that baking common instructions into a model through fine-tuning cut prompt sizes by up to 90% in early tests.
Training a fine-tuned GPT-3.5 Turbo model will cost businesses $0.008 per 1,000 tokens, while input and output usage of the resulting model costs $0.012 and $0.016 per 1,000 tokens respectively.
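A back-of-the-envelope calculation makes these rates concrete. The workload figures below are hypothetical; only the per-1,000-token prices come from OpenAI's announcement.

```python
# Per-1,000-token rates for fine-tuned GPT-3.5 Turbo, in USD.
TRAIN_RATE, INPUT_RATE, OUTPUT_RATE = 0.008, 0.012, 0.016

def cost(train_tokens, input_tokens, output_tokens):
    """Total USD cost for a given number of training and usage tokens."""
    return (train_tokens * TRAIN_RATE
            + input_tokens * INPUT_RATE
            + output_tokens * OUTPUT_RATE) / 1000

# e.g. 1M training tokens, then 500K input and 200K output tokens of usage:
print(round(cost(1_000_000, 500_000, 200_000), 2))  # → 17.2
```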
The firm has committed to bringing fine-tuning capabilities for GPT-4 to customers in the final quarter of the year.
OpenAI has stated that data customers submit for fine-tuning is not used by OpenAI, or any other organization, to train other models. It is, however, passed through its Moderation API and a GPT-4-powered moderation system to screen for unsafe content.
This data is retained for up to 30 days to check for abuse, meaning that OpenAI holds onto potentially sensitive data every time it is submitted.
Ruth McGuinness, data and AI practice lead at Kainos, told ITPro that the ability to fine-tune GPT-3.5 Turbo is welcomed and comes with a range of benefits, but that OpenAI's approach to data privacy and security calls for scrutiny.
“Organizations should establish risk-based boundaries for fine-tuning data,” said McGuinness.
“Avoiding sensitive content, particularly PII, is still advised. Other methods that enhance responses without extensive data sharing should be explored. For example, the ability to incorporate cloud vendor tools like search (vector stores) and code could also help in better customization of the model - offering domain context for organizations without direct fine-tuning.”
“When using fine-tuning services, it’s still advisable for organizations to consider broader data security concerns. Cloud vendors might offer enhanced data protection, although reviewing privacy statements and terms before integration is recommended.”
OpenAI’s track record on data storage and processing has come under scrutiny in recent months.
In March, the company revealed that a bug that caused a temporary outage of ChatGPT had inadvertently exposed the chat history titles of some users to other users.
A further investigation revealed that payment information, names, and email addresses of 1.2% of ChatGPT Plus subscribers active in a nine-hour period on the day of the outage were exposed.
Apple has banned its staff from using ChatGPT over concerns that data could be leaked, and that employees could pass proprietary information into the chatbot.
The private AI boom
A growing number of vendors are catering to businesses that want to train their own AI models for specific use cases or to ensure that they adhere to an in-house company style.
Not all firms can afford the likes of Nvidia’s largest chips for AI, or to build supercomputers as Microsoft has done for its own AI pursuits. But trainable models and accelerator hardware can be accessed through Azure AI, as well as Amazon Bedrock and Google’s Vertex AI platform.
VMware and Nvidia have also announced a slew of new AI solutions to support firms training AI models, under the banner of VMware Private AI Foundation with Nvidia.
VMware specifically highlighted the legal problems around collecting sensitive data for AI training, with CEO Raghu Raghuram noting the added legal exposure that many firms could face from AI models trained using less secure methods.
HPE’s GreenLake for Large Language Models service allows customers to train AI models on their own data, via a remote private cloud. The firm has also aimed to address concerns over the carbon intensity of AI systems, which draw on a great deal of energy, by using almost 100% renewable power for the service and reusing wastewater for cooling.
Dell has also announced Dell Validated Design for Generative AI with Nvidia, through which customers can train their own models using Dell infrastructure, either from a pre-built default model or entirely from their own data.
IBM’s watsonx platform brings together data and AI, aiming to help businesses structure data in a way that makes sense for AI training. In practice, this often means keeping both structured and unstructured data in order, so that models can be trained without relying on human labels that the algorithm would treat as arbitrary.
They can then use the platform to train a foundation model using their data, while retaining oversight and ultimate control of their data.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.