
What is cache memory?

We explain the different categories of cache memory and how it differs from RAM

Much like the human brain, computer systems use different types of memory in tandem to keep them running smoothly.

Data-intensive functions are handled by types of long-term memory, whereas shorter memory functions deal with regular and everyday tasks. No matter the type, all memory is vital to the overall performance of the computer's hardware and software.

In the context of computing, 'memory' is the term used to describe information storage, but some memory components have uses beyond that remit, such as the encoding and retrieval of data, which is central to how cache memory works.

On its own, cache memory is almost useless, but it plays an extremely important role alongside other parts of a computer system.

Cache enables computer functions to hold recently accessed data close by so that it can be used again and again, rather than going through the same set of instructions repeatedly. It's for this reason that systems with more cache memory appear faster: they can hold more easily accessible data.

Cache memory vs RAM

It may sound similar in function to random-access memory (RAM), but there is a noticeable difference with cache memory. For a start, data held in cache memory is there for immediate reuse in upcoming operations, whereas application and operational data that is not currently in use is stored in RAM.

Cache memory is also faster, mainly due to its proximity to the central processing unit (CPU) compared to RAM. What's more, cache memory is typically much smaller than RAM, as it only stores the information it needs.

Cache memory types

Cache memory is a bit of a complicated beast. It operates differently to the standard RAM that most people will be familiar with, and there are also different kinds of cache memory.

Each type of cache memory has its advantages and disadvantages, usually resulting in higher and lower hit rate ratios - the measure of how many content requests a cache is able to process successfully against the total number it receives. The various differences all boil down to the way cache memory is mapped.
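The hit rate ratio described above is simply successful hits divided by total requests. A minimal Python sketch, using an invented request trace and cache contents purely for illustration:

```python
def hit_rate(requests, cache_contents):
    """Fraction of requested blocks that were found in the cache."""
    hits = sum(1 for addr in requests if addr in cache_contents)
    return hits / len(requests)

# Hypothetical example: 3 of the 4 requested blocks are resident in the cache.
print(hit_rate([0x10, 0x20, 0x30, 0x40], {0x10, 0x20, 0x30}))  # 0.75
```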

Direct mapping

Direct mapping is the simplest form of cache: each block of memory is mapped to exactly one line in the cache using an index, with the cache organised as many sets of a single line each.

In this case, if a line is already occupied, the new block needing to be mapped is loaded, and the old block is removed.

In this scheme the memory address is split into fields: the index field selects which cache line to check, and the cache stores the tag field alongside the data so the processor can confirm that the line actually holds the requested block.
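The index/tag split and the unconditional eviction can be sketched in Python. This is a toy model rather than a description of real hardware, and plain block numbers stand in for full memory addresses:

```python
class DirectMappedCache:
    """Toy direct-mapped cache: each block maps to exactly one line."""

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = [None] * num_lines  # each line holds one tag, or None

    def access(self, block_addr):
        index = block_addr % self.num_lines   # selects the single candidate line
        tag = block_addr // self.num_lines    # identifies which block occupies it
        if self.lines[index] == tag:
            return "hit"
        self.lines[index] = tag               # old block is evicted unconditionally
        return "miss"

cache = DirectMappedCache(num_lines=4)
# Blocks 0 and 4 share index 0, so they repeatedly evict each other:
print([cache.access(b) for b in [0, 4, 0, 4]])  # every access misses
```

Alternating between two blocks that share an index defeats the cache entirely, which is exactly the low-hit-rate behaviour described below.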

Advantages of direct mapping

This type of mapping is typically used on simple machines, as the relative simplicity of its placement policy means it is not as power intensive.

Disadvantages of direct mapping

This simplicity also means there is only ever one line available in each set within the cache, and the need to replace this line when a new address is mapped results in a lower hit rate ratio.

Fully associative mapping

Instead of having only one line per set available across multiple sets, fully associative mapping sees addresses first mapped together into a single set with multiple cache lines. This means the block being loaded can occupy any available line.
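A fully associative cache needs a replacement policy to decide which line to evict once every line is occupied; least recently used (LRU) is a common choice and is assumed in the sketch below, which reuses the same conflicting access pattern as the direct-mapped example:

```python
from collections import OrderedDict

class FullyAssociativeCache:
    """Toy fully associative cache: any block may occupy any line.

    Assumes least-recently-used (LRU) replacement.
    """

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = OrderedDict()  # block address -> resident, in LRU order

    def access(self, block_addr):
        if block_addr in self.lines:
            self.lines.move_to_end(block_addr)  # mark as most recently used
            return "hit"
        if len(self.lines) >= self.num_lines:
            self.lines.popitem(last=False)      # evict the least recently used
        self.lines[block_addr] = True
        return "miss"

cache = FullyAssociativeCache(num_lines=4)
# Blocks 0 and 4 no longer conflict: each can sit in any free line.
print([cache.access(b) for b in [0, 4, 0, 4]])  # miss, miss, hit, hit
```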

Advantages of fully associative mapping

The good thing about fully associative mapping is that it provides far greater flexibility for the placement of blocks, potentially allowing each block to be fully utilised before expiring. Fewer block replacements also increase the number of content requests the cache is able to handle, leading to a higher hit rate ratio. It's also considered the fastest form of mapping.

Disadvantages of fully associative mapping

The downside is that this process is more time-consuming than direct mapping, as a system needs to search through a greater number of lines in the cache to locate a memory block. This also increases power consumption, and requires more powerful hardware to perform efficiently.

Set associative mapping

Set associative mapping acts as a halfway house between direct and fully associative mapping, in that every block is mapped to a smaller subset of locations within the cache.

Instead of having only a single line that a block can map to (direct mapping), lines are grouped together into sets. Memory blocks are then mapped to specific sets, and then assigned to any line within that set.
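The two-step placement described above, indexing into a set, then choosing any line within it, can be sketched as a small extension of the previous toy models. The set count, the number of ways, and the LRU policy within each set are all assumptions for illustration:

```python
class SetAssociativeCache:
    """Toy N-way set associative cache: a block maps to one set and may
    occupy any of that set's N lines. Assumes LRU replacement per set."""

    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [[] for _ in range(num_sets)]  # tags per set, LRU first

    def access(self, block_addr):
        index = block_addr % self.num_sets    # which set the block maps to
        tag = block_addr // self.num_sets
        lines = self.sets[index]
        if tag in lines:
            lines.remove(tag)
            lines.append(tag)                 # most recently used at the end
            return "hit"
        if len(lines) >= self.ways:
            lines.pop(0)                      # evict the set's LRU line
        lines.append(tag)
        return "miss"

# 2-way: blocks 0 and 8 both map to set 0 but coexist in its two lines.
cache = SetAssociativeCache(num_sets=4, ways=2)
print([cache.access(b) for b in [0, 8, 0, 8]])  # miss, miss, hit, hit
```

Setting ways=1 recovers direct mapping, while a single set whose ways equal the line count recovers fully associative mapping, which is why this scheme sits between the two.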

Advantages of set associative mapping

This is considered a trade-off between direct and fully associative mapping, given it provides some flexibility without excessive power and hardware requirements.

Disadvantages of set associative mapping

The downside is that it’s naturally not as efficient as fully associative mapping, resulting in a lower hit rate.

Cache memory grading


There are three different categories, graded in levels: L1, L2 and L3. L1 cache is generally built into the processor chip and is the smallest in size, ranging from 8KB to 64KB. However, it's also the fastest type of memory for the CPU to read. Multi-core CPUs will generally have a separate L1 cache for each core.

L2 and L3 caches are larger than L1, but take longer to access. L2 cache is occasionally part of the CPU, but often a separate chip between the CPU and the RAM.

Graphics processing units (GPUs) often have a separate cache memory to the CPU, which ensures that the GPU can still speedily complete complex rendering operations without relying on the relatively high-latency system RAM.
