What is latency?
We explore the nature of network latency and the steps you can take to reduce it
If you work with cloud services, on-premise servers, or networking, you'll be familiar with the term 'latency'. Where data is concerned, latency refers to the delay in its transfer, and high latency usually suggests things are running slowly.
To put it simply, latency is the measurement of delay between two points, specifically any pauses that data incurs when moving across a network.
The term doesn't just apply to data, though; it can describe the delay in moving anything between two points - radio and sound waves, for example, or even people. The most common use, however, is for data: how long it takes information to travel from one place to another, such as from a website to an endpoint like a smartphone.
When it comes to network latency, the measurement is made by calculating the time it takes data to make the round trip - there and back - such as inputting a command and waiting for the response to arrive.
This delay is measured in milliseconds (ms), with lower numbers producing a more responsive experience for the user. What constitutes low latency depends heavily on the system being used. For example, the average home ethernet connection will normally operate at around 10ms, producing a noticeable performance drop if it exceeds 150ms. For 4G mobile connections, however, normal operations happen at around 45ms to 60ms, while 3G connections can be double this.
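One simple way to get a feel for these numbers is to time the round trip yourself. The sketch below, a minimal illustration using Python's standard socket module, measures TCP connect time as a rough proxy for round-trip latency - it isn't identical to an ICMP ping, and the host and port are placeholders you would swap for your own target:

```python
import socket
import time

def measure_rtt(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the average TCP connect time to host:port in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # Opening a TCP connection involves a full round trip (SYN/SYN-ACK),
        # so connect time approximates network latency to the host.
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; close it immediately
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

# Example (hypothetical host): measure_rtt("example.com", 443)
```

Averaging several samples smooths out one-off spikes caused by transient congestion.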
What contributes to latency?
In an ideal world, every connection would have zero latency; however, so many variables interact across a network that this is unlikely ever to be achieved.
Even in the perfect scenario, the act of transferring a packet of data from one node to another at the speed of light, known as propagation, will produce some delay. What's more, the larger the size of the packet, the longer it will take to travel across a network.
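To put a rough figure on propagation delay: light in optical fibre travels at roughly two-thirds of its speed in a vacuum, around 200,000 km/s. Assuming that figure, and an illustrative 5,600 km London-to-New-York cable route, a back-of-the-envelope calculation looks like this:

```python
# Illustrative assumption: signal speed in optical fibre is roughly
# two-thirds of the speed of light, i.e. about 200,000 km/s.
SPEED_IN_FIBRE_KM_S = 200_000

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds over a fibre link."""
    return distance_km / SPEED_IN_FIBRE_KM_S * 1000

# e.g. an approximate London-to-New-York cable distance of 5,600 km
print(propagation_delay_ms(5_600))  # -> 28.0 ms one way, before any other overhead
```

That 28ms is a physical floor for a single one-way trip; routing hops, queuing, and serialisation all add to it, which is why real-world round trips are considerably higher.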
There's also the role of the infrastructure and hardware. Cable connections will produce varying degrees of latency depending on the type of line used, whether that's coaxial or fibre, and if the packet has to travel over a Wi-Fi connection this will add yet more delay to the process.
Latency vs bandwidth
Latency and bandwidth are not interchangeable terms, but both are important for assessing the effectiveness of a network.
Bandwidth is concerned with the capacity of the network. A line with a high bandwidth is able to support more traffic travelling simultaneously across a network. In the case of a business network, this means more employees can perform network functions at the same time.
However, this doesn't indicate how fast the data travels. For that, you need to assess the network's latency, which needs to be low if you want responsive services.
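The distinction can be made concrete with a simplified model: the total time to deliver a payload is roughly one round of latency plus the time to push the payload through the pipe. The figures below are illustrative assumptions, not measurements:

```python
def transfer_time_ms(payload_bytes: int, bandwidth_mbps: float, latency_ms: float) -> float:
    """Simplified delivery time: one round of latency plus serialisation time."""
    serialisation_ms = payload_bytes * 8 / (bandwidth_mbps * 1_000_000) * 1000
    return latency_ms + serialisation_ms

# A small 10 KB request on a 100Mbps link with 50ms latency: latency dominates
print(transfer_time_ms(10_000, 100, 50))        # -> 50.8 ms
# A large 100 MB download on the same link: bandwidth dominates
print(transfer_time_ms(100_000_000, 100, 50))   # -> 8050.0 ms
```

This is why adding bandwidth does little for small, chatty workloads such as web requests or online gaming, while reducing latency does little for bulk downloads.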
Latency can be very difficult to reduce given how complicated networking can be, so changes both big and small will need to be made to every part of the network a data packet travels through before the problem genuinely shrinks.
Improving the network infrastructure is a good starting point for reducing latency, whether that means replacing legacy wiring with newer cables or something else. Network operators can also help by assessing the network schematics to identify bottlenecks, or servers that need extra capacity, in order to reduce the burden on data packets as they move through the network.
Organisations operating across several regions may also benefit from using content delivery networks (CDNs), which cache content at the edge of the network, by definition closer to end users. These considerably reduce the distance a data packet has to travel, but they can be expensive to maintain and impose limitations on the content they support. It's a balancing act, more often than not, and may not always be worth the investment.
Connecting your infrastructure directly to a provider's data centre can also be a viable alternative, as it cuts out the cloud broker serving as a middle-man. These connections are similarly costly, however, so they aren't the best choice by default. It's also possible to reduce latency by removing unnecessary software or bloatware that may be slowing your connectivity.
It's also worth bearing in mind that network performance can be affected by a number of issues, latency being one of them.
High latency can render a network inoperable, but it's just as likely that poor performance is the result of a poorly designed application or shoddy infrastructure. It's important to ensure that all the applications or edge devices that rely on your network are running correctly and aren't hogging too much of your network's resources.