
Samsung debuts 'industry-first' AI-powered memory

This architecture combines high-bandwidth memory with AI processing power to boost performance and lower power consumption

Samsung has developed a computing architecture that combines memory with artificial intelligence (AI) processing power to double the performance of data centres and high-performance computing (HPC) tasks while reducing power consumption.

Branded an 'industry first', this processor-in-memory (PIM) architecture brings AI computing capabilities to systems normally powered by high-bandwidth memory (HBM), such as data centres and supercomputers. HBM is an existing technology developed by companies including AMD and SK Hynix.

The result, according to Samsung, is double the performance in high-powered systems and a reduction in power consumption of more than 70%. This is driven largely by the fact that the memory and processor components are integrated rather than separated, vastly reducing the latency of data transfers between them.

“Our groundbreaking HBM-PIM is the industry’s first programmable PIM solution tailored for diverse AI-driven workloads such as HPC, training and inference,” said Samsung’s vice president of memory product planning, Kwangil Park.

“We plan to build upon this breakthrough by further collaborating with AI solution providers for even more advanced PIM-powered applications.”


Most computing systems today are based on an architecture which uses separate memory and processor units to carry out data processing tasks, known as von Neumann architecture. 

This approach requires data to move back and forth on a constant basis between the two components, which can result in a bottleneck when handling ever-increasing volumes of data, slowing system performance.

HBM-PIM, developed by Samsung, places a DRAM-optimised AI engine within each memory bank, enabling parallel processing and minimising the movement of data.
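The data-movement saving described above can be illustrated with a toy model. This is a conceptual sketch only, not Samsung's actual design: the bank counts and the idea of reducing each bank's data to a single partial result are illustrative assumptions, but they capture why computing inside the memory banks cuts bus traffic.

```python
# Conceptual sketch (not Samsung's actual HBM-PIM design): a toy model
# of why moving compute into memory banks reduces data transfer.

NUM_BANKS = 16        # illustrative number of DRAM banks
WORDS_PER_BANK = 1024 # illustrative data volume per bank

def von_neumann_transfers(num_banks, words_per_bank):
    # In a conventional architecture, every word crosses the memory
    # bus to the processor, and results travel back: two transfers
    # per word.
    return 2 * num_banks * words_per_bank

def pim_transfers(num_banks):
    # With an AI engine in each bank, data is processed locally and
    # in parallel; only one partial result per bank crosses the bus.
    return num_banks

print(von_neumann_transfers(NUM_BANKS, WORDS_PER_BANK))  # 32768
print(pim_transfers(NUM_BANKS))                          # 16
```

In this simplified model the bus traffic drops from tens of thousands of transfers to one per bank, which is the intuition behind the latency and power savings Samsung cites.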

“I’m delighted to see that Samsung is addressing the memory bandwidth/power challenges for HPC and AI computing,” said Argonne’s associate laboratory director for computing, environment and life sciences, Rick Stevens. Argonne National Laboratory is a US Department of Energy research centre.  

“HBM-PIM design has demonstrated impressive performance and power gains on important classes of AI applications, so we look forward to working together to evaluate its performance on additional problems of interest to Argonne National Laboratory.”

Samsung’s innovation is being tested inside AI accelerators by third parties in the AI sector, with work expected to be completed within the first half of 2021. Early tests with Samsung’s HBM2 Aquabolt memory system demonstrated the performance improvements and power consumption reductions cited above.

