What is cache memory?
We explain the different categories of cache memory and how it differs from RAM
Much like humans, computer systems use various types of memory that work together to keep them running smoothly. Some are forms of long-term memory for data-heavy functions, while others handle shorter, routine tasks. All of them, however, are vital to the operation of both a computer's hardware and software.
'Memory' is a term often used to describe information storage, but some memory components have roles beyond that remit, such as encoding and data retrieval, which are central to cache memory. Cache memory is of little use in isolation, but it plays an extremely important role when interacting with the other parts of a computer system.
It lets the system keep recently accessed data close to the processor so it can be reused, rather than fetched or recomputed each time. This is why systems with a larger cache capacity often seem to operate quicker: they can hold more of this data at hand.
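The idea of reusing recently accessed data can be illustrated with a small software analogy. This is a hypothetical sketch of the caching principle in general, not of how hardware cache is actually built; the function name and the store are invented for illustration.

```python
# Software analogy of caching (illustrative only):
# keep a result close by the first time it is produced,
# then reuse it instead of doing the same work again.

cache = {}

def expensive_square(n):
    """Pretend this computation is slow; cache its result."""
    if n in cache:        # cache hit: reuse the stored value
        return cache[n]
    result = n * n        # cache miss: do the work once
    cache[n] = result     # keep it close by for next time
    return result

expensive_square(12)  # miss: computed and stored
expensive_square(12)  # hit: returned straight from the store
```

A larger `cache` dictionary can hold more results, which mirrors why a bigger hardware cache tends to mean fewer slow trips to main memory.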
Cache memory vs RAM
On paper, random-access memory (RAM) and cache memory serve similar purposes, but there are notable differences. Cache memory holds data the processor expects to need again shortly, so it can be accessed straight away, whereas RAM holds application and operating data that is not currently in use.
Cache memory is also faster, as it sits closer to the central processing unit (CPU) than RAM does. It tends to be much smaller than RAM, too, since it only needs to hold the information the CPU relies on for upcoming operations.
Cache memory types
Cache memory can be complicated, however; not only does it differ from the standard DRAM most people are familiar with, but there are also several different kinds of cache memory.
Cache memory generally operates in one of three configurations: direct mapping, fully associative mapping and set associative mapping.

In direct mapping, each block of memory maps to one specific location within the cache, while fully associative mapping lets any cache location hold a block, rather than requiring the location to be pre-set. Set associative mapping acts as a halfway house between the two: each block maps to a small subset of locations (a set) within the cache.
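The three mapping schemes can be sketched in a few lines of code. This is an illustrative model only; the cache sizes, way count and function names are assumptions chosen to keep the arithmetic easy to follow, not a description of any real processor.

```python
# Hypothetical sketch: where a memory block number may be placed
# under each mapping scheme (sizes chosen purely for illustration).

NUM_LINES = 8                 # total cache lines
WAYS = 2                      # lines per set (set-associative case)
NUM_SETS = NUM_LINES // WAYS  # 4 sets of 2 lines each

def direct_mapped_line(block):
    # Each block maps to exactly one pre-set line.
    return block % NUM_LINES

def fully_associative_lines(block):
    # Any line in the cache may hold the block.
    return list(range(NUM_LINES))

def set_associative_lines(block):
    # The block maps to one set; any line within that set may hold it.
    s = block % NUM_SETS
    return [s * WAYS + w for w in range(WAYS)]

direct_mapped_line(13)     # -> line 5 only
set_associative_lines(13)  # -> lines 2 and 3 (set 1)
```

The trade-off is between simplicity and flexibility: direct mapping needs the least hardware to look up, fully associative placement avoids two blocks fighting over one line, and set associative mapping balances the two.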
Cache memory grading
There are three different categories, graded in levels: L1, L2 and L3. L1 cache is generally built into the processor chip and is the smallest in size, ranging from 8KB to 64KB. However, it's also the fastest type of memory for the CPU to read. Multi-core CPUs will generally have a separate L1 cache for each core.
L2 and L3 caches are larger than L1 but take longer to access. L2 cache is sometimes part of the CPU itself, and otherwise sits on a separate chip between the CPU and the RAM.
Graphics processing units (GPUs) often have cache memory separate from the CPU's, which means the GPU can complete complex rendering operations quickly without relying on the comparatively high-latency system RAM.