Move over VMs, the future of app deployment is in containers

Containerisation is fast becoming one of the most popular ways to deploy applications in a virtual environment, and is widely seen as making the traditional virtual machine a thing of the past.

Yet what exactly are containers and why should you bother moving from a tried and trusted VM?

Containers? You mean like boxes for moving our computer gear?

Not exactly: we're not talking about packaging up physical appliances here. But in the IT operational sense, containers are pretty much the same idea, only for applications. Docker, which is the best-known proponent of the technology, defines a container as a "lightweight, standalone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries and settings".
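
To make that definition concrete, here's a minimal sketch of a Dockerfile for a small Python web app. The file names (app.py, requirements.txt) are hypothetical placeholders rather than anything from Docker's documentation:

    # Start from a slim base image that already contains the Python runtime
    FROM python:3.12-slim
    WORKDIR /app

    # System libraries and dependencies live inside the image, not on the host
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # The application code itself
    COPY app.py .

    # Settings travel with the container too
    ENV APP_ENV=production

    # What runs when the container starts
    CMD ["python", "app.py"]

Build it with docker build -t myapp . and the resulting image bundles code, runtime, tools, libraries and settings into a single executable package.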

Is 'container' just a fashionable term for VM?

Containers and virtual machines do share some similarities, particularly when it comes to resource isolation. But they're not the same thing. A virtual machine is primarily an abstraction of a hardware platform: an approach that makes it easy to turn one physical server into lots of independent virtual ones. In a setup like this, each VM runs its own operating system and application stack.

Containers, by contrast, focus on virtualising an operating environment. Multiple containers can run concurrently under a single OS, just like regular applications. It's a more efficient technology, and much more portable.
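
You can see the difference from the command line. In this rough sketch (assuming Docker is installed, and using the stock nginx and redis images purely as examples), each container gets its own isolated process tree, yet all of them report the host's kernel version, because no guest OS has booted:

    # Start two unrelated containers on the same host
    docker run -d --name web nginx
    docker run -d --name cache redis

    # Both share the host's kernel rather than running their own OS
    docker exec web uname -r
    docker exec cache uname -r
    uname -r    # the host reports the same kernel version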

And we need this why, exactly?

Containers are tremendously useful when it comes to moving software between different computing environments: for example, moving an application from testing into production, or from physical hardware into the cloud.

You can be confident that things will continue to work as expected, even if the supporting software environment has a completely different network topology, security policy or hardware configuration. And since containers don't require a complete OS installation, you can fit more containerised apps than VMs onto a single server.
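
In practice, that portability usually flows through an image registry. A simplified sketch, with the registry address and image name as placeholders:

    # On the build machine: package the app and push it to a registry
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0

    # On any other host (test, production or a cloud VM): pull and run
    docker pull registry.example.com/myapp:1.0
    docker run -d --name myapp registry.example.com/myapp:1.0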

So containers are only good for porting applications?

Containers are also very useful for development. They're a great fit for a modular, microservices-style way of doing things. The key is that you don't need to run everything within a single container: you can connect multiple containers together to build an application out of known quantities.

This is a huge help when it comes to management and development, as each module can be updated independently. It's efficient too, as each container is only initiated (in an almost instant, "just in time" fashion) when it's needed.
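
Docker's Compose tool is one common way of wiring containers together like this. The sketch below is illustrative only: the service names and the first two images are hypothetical placeholders:

    # docker-compose.yml: three containers, one application
    services:
      web:
        image: registry.example.com/myweb:1.0
        ports:
          - "8080:8080"
        depends_on:
          - api
      api:
        image: registry.example.com/myapi:1.0
        depends_on:
          - db
      db:
        image: postgres:16

Run docker compose up and each module starts in its own container; updating one service means replacing just that container, not the whole stack.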

That sounds good. But do we have to tie ourselves to Docker?

Not at all: containers have been built directly into Linux for years now, exposed through LXC, a userspace interface to the kernel's containment features. Kubernetes is another big name in free, open source container software, although strictly speaking it orchestrates containers rather than running them itself. However, if flexibility and support are priorities, Docker is probably the biggest and best-known cross-platform container technology vendor.
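
To get a feel for the Linux-native route, the standard LXC userspace tools drive those kernel features directly. Roughly, on a distribution with the lxc package installed (the container name and the distribution, release and architecture values are just examples):

    # Create a container from a prebuilt image via the download template
    lxc-create -t download -n demo -- -d ubuntu -r jammy -a amd64

    # Start it and open a shell inside
    lxc-start -n demo
    lxc-attach -n demo

    # Stop and remove it when finished
    lxc-stop -n demo
    lxc-destroy -n demo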

Will we be locking ourselves into the framework we choose?

You're right to raise the question: app container images can be proprietary. For example, Docker and CoreOS have had differing specifications in the past. However, since 2015 the Linux Foundation's Open Container Initiative (OCI) has been working on a standard container format. Both Docker and CoreOS are sponsors, along with the likes of AWS, Google, HP, IBM, Microsoft, Oracle, Red Hat and VMware. So things are only going to get easier.

So are containers more secure than VMs?

One aspect of container technology that seems to cause endless debate is security. The concern is that, because multiple containers run on one host platform, a single compromise could affect a whole stack of containers. That's less of a concern with virtual machines, since each one is completely isolated from the other VMs running on the same hardware. What's more, a hypervisor exposes a far smaller interface to its guests than the full Linux kernel that containers share, so the attack surface of a VM is smaller, which again reduces the risk.

But containers have security strengths too. The model lends itself to a microservices approach, which breaks an application down into small services with well-defined interfaces and a limited set of packaged dependencies, making it harder for anything to slip through the cracks.

Containers can also be scanned on access, and network segmentation can be used to isolate application clusters. All in all, a well-configured, properly deployed container should be just as secure as a virtual machine; the only catch is that you need to ensure your containers actually meet those standards.
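
A couple of those controls are easy to sketch with Docker's own tooling. The network and image names below are placeholders, and the hardening flags are illustrative rather than a complete policy:

    # Segment the network: containers on 'backend' are isolated
    # from containers on other Docker networks
    docker network create backend
    docker run -d --network backend --name db postgres:16

    # Reduce the attack surface: read-only filesystem, no Linux
    # capabilities, and no privilege escalation inside the container
    docker run -d --network backend \
      --read-only \
      --cap-drop ALL \
      --security-opt no-new-privileges:true \
      registry.example.com/myapp:1.0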

So, the big question: how do we get management to buy in?

As we've mentioned, containers can save money versus virtual machines, as their hardware demands are lower. There's also the potential for quicker deployment: when you need to roll out application updates, it's much easier to replace a few containers than to patch an entire virtual machine.
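
That update story amounts to a pull and a swap. A rough sketch, reusing the placeholder image name from earlier:

    # Fetch the new version of the image
    docker pull registry.example.com/myapp:1.1

    # Swap the running container: seconds of work, not a VM patch cycle
    docker stop myapp && docker rm myapp
    docker run -d --name myapp registry.example.com/myapp:1.1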

Containers also bring flexibility to the party: your developers can write in almost any language, and deploy painlessly to both Windows and Linux, so they're not wasting time adapting to the idiosyncrasies of your environment. And, of course, since test, staging and deployment environments are identical, bugs are much less likely to make it into the final production code.

Should we just ditch our VMs and switch entirely to containers?

If you need to run a big stack of apps on a modest allocation of resources then containers probably make more sense than VMs. But even the container vendors admit that virtualisation and containers work best when used together.

One option is to run your containers within VMs: this provides even stronger isolation and better security, while still letting you easily manage your virtual hardware infrastructure, so for many scenarios it's the best of both worlds.

Davey Winder

Davey is a three-decade veteran technology journalist specialising in cybersecurity and privacy matters and has been a Contributing Editor at PC Pro magazine since the first issue was published in 1994. He's also a Senior Contributor at Forbes, and co-founder of the Forbes Straight Talking Cyber video project that won the ‘Most Educational Content’ category at the 2021 European Cybersecurity Blogger Awards.

Davey has also picked up many other awards over the years, including the Security Serious ‘Cyber Writer of the Year’ title in 2020. As well as being the only three-time winner of the BT Security Journalist of the Year award (2006, 2008, 2010), Davey was also named BT Technology Journalist of the Year in 1996 for a forward-looking feature in PC Pro Magazine called ‘Threats to the Internet’. In 2011 he was honoured with the Enigma Award for a lifetime contribution to IT security journalism which, thankfully, didn't end his ongoing contributions, or his life for that matter.

You can follow Davey on Twitter @happygeek, or email him at davey@happygeek.com.