NetApp has unveiled a raft of portfolio updates amid its ongoing focus on supporting AI development.
The cloud storage firm announced it aims to become the “best data pipeline” for firms exploring the development and integration of AI technologies at its annual Insight conference, held in Las Vegas.
NetApp announced its AFF C-Series capacity flash storage options will now be integrated with its ONTAP AI architecture.
The ONTAP AI infrastructure stack draws upon the NVIDIA DGX supercomputing system to enable enterprises to develop and scale AI workloads in hybrid multi-cloud environments.
NetApp said the move will help clients tackle “the most complex AI workloads” in an efficient and streamlined manner.
“There are two major disruptions today for customers: the opportunity of AI and the threat of ransomware,” said Harv Bhela, chief product officer at NetApp.
“Today, NetApp unveiled new innovations that make the AI data pipeline simple to deploy as well as scalable and performant across your hybrid multi-cloud data estate.
“These solutions position us at the forefront of creating successful business AI outcomes for our customers.”
NetApp said the expansion of its capacity flash option directly addresses current hurdles faced by organizations exploring the development and use of generative AI tools.
In particular, the firm highlighted difficulties encountered in unlocking value from “disparate” datasets and unstructured data by clients that have made rapid shifts to focus on generative AI development.
“The datasets necessary to feed these pipelines may live in data lakes throughout the enterprise, on premises and/or in the cloud,” the firm said. “This increases design and operational complexity where data silos lead to a lack of visibility into data types and locations, making it difficult to manage or use in AI workloads.”
The expansion of NetApp’s cloud storage options will enable customers to build “modern data lakes” capable of accommodating increasingly large AI workloads and of accelerating innovation in the field.
The firm added: “Users can trace multiple AI model versions in production back to their training data to ensure they are using AI responsibly, and storage is integrated with MLOps platforms so data scientists experience easier consumption and increased productivity.”
The announcement from NetApp follows a period of intense focus on support for AI development at the cloud storage firm.
In the opening keynote session at the conference, CEO George Kurian highlighted the close relationship between NetApp and hyperscalers such as Google Cloud, AWS, and Microsoft Azure as a key focus as the company looks to offer hybrid multi-cloud options to customers exploring and scaling AI solutions.
In discussion with the NetApp chief executive, Google Cloud CEO Thomas Kurian highlighted the ongoing partnership between NetApp and the hyperscaler, which has so far included the integration of Vertex AI within the NetApp Cloud Volumes platform.
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.