What is cache memory?

Cache memory, sometimes referred to as simply 'cache', is a hardware or software component that stores frequently used data so that it can be readily accessed by the central processing unit (CPU). Cache memory serves a supportive function, giving the CPU a faster way of retrieving data and therefore speeding up processing tasks.

In the context of computing, 'memory' is the term used to describe information storage, but some memory components have uses beyond simply holding data. The encoding and retrieval of data, deciding what to keep and how to find it again quickly, is central to how cache memory works.

On its own, cache memory is almost useless, but it plays an extremely important role alongside other parts of a computer system.

Cache enables a computer to keep recently accessed data close at hand so it can be used again and again, rather than being fetched from slower memory repeatedly. It's for this reason that systems with more cache memory feel faster: they can hold more data within easy reach of the CPU.

Cache memory vs RAM

Given its role as short-term storage, cache may sound very similar to random-access memory (RAM); however, there are key differences. Cache memory stores the data and instructions the CPU is using most frequently, whereas RAM generally holds the working data of running applications, much of which the CPU is not touching at any given moment.

Cache memory is also faster than RAM, thanks to its proximity to the CPU, and is typically far smaller.

Cache memory types

Cache memory is a bit of a complicated beast. It operates differently to the standard RAM that most people will be familiar with, and there are also different kinds of cache memory.

Each type of cache memory has its advantages and disadvantages, which usually show up as higher or lower hit rates: the proportion of requests a cache is able to serve successfully out of the total number it receives. The differences all boil down to the way cache memory is mapped.
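To make that measure concrete, here is a minimal Python sketch of the calculation; the request counts are invented purely for illustration:

```python
# Hit rate: the fraction of requests the cache serves successfully.
# These counts are invented for illustration.
hits = 850      # requests served from the cache
misses = 150    # requests that had to go to slower memory
hit_rate = hits / (hits + misses)
print(f"Hit rate: {hit_rate:.0%}")  # -> Hit rate: 85%
```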

Direct mapping

Direct mapping is the simplest form: each block of main memory is mapped to exactly one line in the cache, selected by the index bits of the memory address.

In this case, if that line is already occupied, the old block is evicted and the new block is loaded in its place.

Alongside the data, each cache line stores the tag field of the memory address; on a lookup, the index field of the address selects the line, and the stored tag is compared against the address's tag to determine whether it's a hit.
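To make the tag and index fields concrete, here is a minimal Python sketch of a direct-mapped cache. It's a toy model, with the line count and block size invented for illustration; real caches do all of this in hardware:

```python
class DirectMappedCache:
    """Toy direct-mapped cache: each memory block maps to exactly one line."""

    def __init__(self, num_lines=8, block_size=64):
        self.num_lines = num_lines
        self.block_size = block_size
        self.tags = [None] * num_lines      # one tag slot per cache line

    def access(self, address):
        """Return True on a hit, False on a miss (loading the block on a miss)."""
        block = address // self.block_size  # which memory block the byte is in
        index = block % self.num_lines      # the single line this block may occupy
        tag = block // self.num_lines       # identifies which block occupies the line
        if self.tags[index] == tag:
            return True                     # hit: the line already holds this block
        self.tags[index] = tag              # miss: evict the old block, load the new
        return False
```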

Advantages of direct mapping

This type of mapping is typically used on simpler machines, as the relative simplicity of its placement policy means it isn't especially power-intensive.

Disadvantages of direct mapping

This simplicity also means there is only ever one possible line for any given block, so two frequently used addresses that happen to map to the same line will repeatedly evict one another, resulting in a lower hit rate.
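Using the toy cache sketched above, this effect is easy to demonstrate. The addresses below are invented, chosen so that both fall in blocks with the same index:

```python
cache = DirectMappedCache(num_lines=8, block_size=64)
# Byte 0 sits in block 0 and byte 512 in block 8; both map to index 0,
# so alternating between them evicts the other every time (conflict misses).
for address in [0, 512, 0, 512, 0, 512]:
    print(cache.access(address))  # prints False six times: never a hit
```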

Fully associative mapping

Instead of restricting each block to a single line, fully associative mapping treats the entire cache as one set containing many lines. This means the block being loaded can occupy any available line.
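Here is a minimal Python sketch of the same idea, again with invented sizes. When the cache is full, something has to be evicted; this sketch uses a least-recently-used (LRU) policy, which is one common choice:

```python
from collections import OrderedDict

class FullyAssociativeCache:
    """Toy fully associative cache: any block can occupy any line."""

    def __init__(self, num_lines=8, block_size=64):
        self.num_lines = num_lines
        self.block_size = block_size
        self.lines = OrderedDict()          # block number -> line (order = recency)

    def access(self, address):
        block = address // self.block_size
        if block in self.lines:
            self.lines.move_to_end(block)   # hit: mark as most recently used
            return True
        if len(self.lines) >= self.num_lines:
            self.lines.popitem(last=False)  # full: evict the least recently used
        self.lines[block] = True            # miss: load into any free line
        return False
```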

Advantages of fully associative mapping

The good thing about fully associative mapping is that it provides far greater flexibility for the placement of blocks, allowing every line to be used before anything has to be evicted. Fewer forced replacements mean the cache can satisfy more of the requests it receives, giving it the highest hit rate of the three schemes.

Disadvantages of fully associative mapping

The downside is that a lookup must check every line in the cache for a matching tag, rather than just one. Doing this quickly requires comparison hardware for every line, which increases power consumption and demands more complex hardware to perform efficiently.

Set associative mapping

Set associative mapping acts as a halfway house between direct and fully associative mapping, in that every block is mapped to a smaller subset of locations within the cache.

Instead of there being only a single line that a block can map to (as in direct mapping), lines are grouped together into sets. A memory block is mapped to a specific set and can then occupy any line within that set.
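Sketching this in the same style as the toy caches above (sizes again invented), the index now selects a set rather than a single line. This sketch evicts the oldest block in a set, a simple first-in-first-out policy:

```python
class SetAssociativeCache:
    """Toy set-associative cache: a block maps to one set, then any line in it."""

    def __init__(self, num_sets=4, ways=2, block_size=64):
        self.num_sets = num_sets
        self.ways = ways                    # lines per set
        self.block_size = block_size
        self.sets = [[] for _ in range(num_sets)]  # each set holds up to `ways` tags

    def access(self, address):
        block = address // self.block_size
        index = block % self.num_sets       # which set the block belongs to
        tag = block // self.num_sets
        lines = self.sets[index]
        if tag in lines:
            return True                     # hit: block found somewhere in its set
        if len(lines) >= self.ways:
            lines.pop(0)                    # set full: evict the oldest block
        lines.append(tag)                   # miss: load into a free line in the set
        return False
```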

Advantages of set associative mapping

This is considered a trade-off between direct and fully associative mapping, as it provides some placement flexibility without excessive power and hardware requirements.

Disadvantages of set associative mapping

The downside is that it's naturally not as efficient as fully associative mapping, resulting in a lower hit rate, though still a higher one than direct mapping achieves.

Cache memory grading

There are three different categories, graded in levels: L1, L2 and L3. L1 cache is generally built into the processor chip and is the smallest in size, ranging from 8KB to 64KB. However, it's also the fastest type of memory for the CPU to read. Multi-core CPUs will generally have a separate L1 cache for each core.

L2 and L3 caches are larger than L1, but take longer to access. On modern processors, L2 cache is usually also built into the CPU, typically with one per core, while the still larger L3 cache is shared between cores; on older systems, L2 was sometimes a separate chip sitting between the CPU and the RAM.
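The practical effect of this hierarchy can be glimpsed even from Python. The sketch below touches the same number of bytes sequentially and then with a large stride; the strided pattern defeats the caches, so almost every access waits on RAM. Interpreter overhead blurs the numbers, so treat this as an illustration rather than a benchmark:

```python
import time

def touch(data, indices):
    """Time how long it takes to read the given byte offsets, in order."""
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return time.perf_counter() - start

data = bytearray(64 * 1024 * 1024)  # 64MB: far larger than any CPU cache
n = 1_000_000
sequential = list(range(n))                           # adjacent bytes: cache-friendly
strided = [(i * 4097) % len(data) for i in range(n)]  # new cache line every touch

print(f"sequential: {touch(data, sequential):.3f}s")
print(f"strided:    {touch(data, strided):.3f}s")
```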

Graphics processing units (GPUs) often have their own cache memory, separate from the CPU's, which ensures the GPU can still speedily complete complex rendering operations without relying on the relatively high-latency system RAM.

Keumars Afifi-Sabet
Contributor

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. Although a regular contributor to other tech sites in the past, these days you will find Keumars on LiveScience, where he runs its Technology section.