From edge to cloud – and everywhere in between
How we use IT infrastructure has changed. How we manage it needs to change too

It’s often said that data is the new oil, to the extent that it has become something of a cliché in the tech industry. Perhaps another, less tired comparison is that data is like work in Parkinson’s Law – it expands not to fill the time available for its completion, but into every element of infrastructure and activity an organisation has available.
This has become increasingly true since the Big Data revolution made all data interesting and valuable to a business, not just the information it was accustomed to examining. By analysing these vast data sets, organisations can uncover new value streams and increase both productivity and profits. With this in mind, it’s important to consider where new sources of data may come from and the infrastructure that underpins their collection, analysis and storage.
Data, data everywhere
Nowadays, technology moves rapidly. Even millennials and older members of Generation Z have experienced this incessant churn, with things that once seemed core to computing, such as floppy disks, becoming obsolete in their lifetime.
But it’s worth remembering that the pace of change between the 1950s and the early 1990s was much slower. Even into the 2000s, the data centre and mainframe were the core of companies’ IT infrastructure. Almost without exception, these behemoths comprised hardware that was fully owned by the business and located on premises.
By the turn of the millennium, colocation providers had started to spring up, offering SMBs and enterprises the opportunity to lease hardware from another company, which would also look after software and infrastructure management. Over the following decade, this evolved into what we now know as public cloud computing.
It’s easy to think of this evolution as linear – as tapes were replaced by floppies, floppies were replaced by CDs, and CDs have in turn been displaced by flash thumb drives or direct download, so the humble data centre has been replaced by the public cloud.
That is not the case, however.
Many businesses, particularly from medium-sized up, still have an on-premises data centre. Some still use colocation. Most, if not all, organisations use public cloud services of one kind or another, be that infrastructure as a service, such as that offered by Amazon Web Services (AWS) and Microsoft Azure, or software as a service products like Workday or Box.
What has emerged is a melange of technologies – most recently joined by edge computing – that organisations are using to fulfil different business needs. But this expansion has been carried out in a largely unplanned way, which can make it difficult to manage.
More infrastructure, more problems
While concerns over ‘shadow IT’ in the early cloud years may have been overplayed, there’s no denying that modern IT infrastructure is far more difficult to manage than it was even 20 years ago.
Cloud sprawl, the addition of new technologies and services like edge and multi-cloud, and the rise of the distributed workforce have all made things trickier to manage. What’s more, we can’t expect the pace of change to slow – with increased mobility, the still-growing Internet of Things, and technologies that haven’t even been invented yet, this labyrinth will grow ever harder to navigate and manage.
Yet IT departments have little choice but to accept that all these different elements are needed. If there’s an edge computing instance in their infrastructure, they will know of its existence and its importance; for that particular data source, analysis and feedback have to happen with minimum latency, and there’s no alternative to edge for that. If the development or data science teams are using several AWS instances as well as on-premises infrastructure, it’s because they need them for a given project.
In short, not all data is created equal or performs the same functions. This doesn’t mean that IT departments are doomed to look after an increasingly complex infrastructure that doesn’t always play nicely together and can be both time consuming and costly to manage, though.
Managing a modern set-up
Some vendors have seen the problems IT departments face in confronting this increasing tech sprawl and have moved to resolve it, most often through computing on demand.
Computing on demand helps IT departments manage their companies’ use of infrastructure across all settings, from public cloud to on-premises data centres, to colocation, to branch offices and the edge.
One of the first movers in this area is Hewlett Packard Enterprise (HPE), with its comprehensive GreenLake offering.
GreenLake not only promises better control over these types of mixed environments, it can also offer a more cost-effective solution. Designing and managing on-premises infrastructure, including data centres, edge environments, and branch offices, has long come with concerns over adequate provisioning. How do you ensure you will have enough capacity for what the business will be doing in five or ten years’ time? This question has often led to overprovisioning.
GreenLake, however, allows IT departments to spin up new compute instances, extend memory and more in a cloud-like, consumption-based way, all while keeping data and processes on premises. The capacity is there should you need it, but you don’t actually pay for it unless you use it, which removes the problems associated with overprovisioning.
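The economics of that trade-off can be made concrete with some simple arithmetic. The sketch below compares a traditionally overprovisioned deployment, where the business pays for all provisioned capacity up front, with a consumption-based model, where it pays only for what is used each month. All figures and function names are hypothetical illustrations, not HPE GreenLake’s actual pricing.

```python
# Illustrative cost comparison: overprovisioned vs consumption-based capacity.
# All numbers are hypothetical; they do not reflect any vendor's real pricing.

def overprovisioned_cost(provisioned_units: int, unit_price: float, months: int) -> float:
    """Fixed model: pay for all provisioned capacity, used or not."""
    return provisioned_units * unit_price * months

def consumption_cost(monthly_usage: list, unit_price: float) -> float:
    """Consumption model: pay only for the capacity actually consumed."""
    return sum(used * unit_price for used in monthly_usage)

# A team provisions 100 units for a year, but demand ramps up gradually.
usage = [20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75]

fixed = overprovisioned_cost(100, 10.0, 12)  # pays for full capacity: 12,000
metered = consumption_cost(usage, 10.0)      # pays only for usage: 5,700
print(f"Overprovisioned: {fixed:.0f}, consumption-based: {metered:.0f}")
```

In this toy scenario the consumption-based model costs less than half as much over the year, while the full 100 units remain available at any point should demand spike.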
How we store and use data, and the technology we use to do that, will continue to evolve. Who would have thought, 10 years ago, that the cloud would play the role in business and in life that it does now? And that’s to say nothing of the growth of edge computing. As these new ideas continue to spring up and be added to organisations’ existing IT strategies, only truly adaptive solutions like GreenLake can help IT departments manage their infrastructure from the data centre, to the edge, to the cloud, and beyond.