Containers have been a hot topic lately; many suggest they are the next step in the virtualization evolution and spell doom for vSphere, Hyper-V, and other “traditional” virtualization platforms. For those who are not aware of containers, a container is a packaged application that runs isolated from other applications. It is not too dissimilar from ThinApp in a virtual desktop environment, but focused on server applications such as Apache, Tomcat, or other custom applications running on top of a single operating system.
Traditional OS level virtualization generally looks something like this:
Some, such as Linux Journal, suggest containers are the future and that traditional OS-level virtualization and the hypervisor are not needed. With containers, you could remove the hypervisor layer, run your OS directly on bare metal, and run the “virtualized” applications on top, helping to save resources (including budget), which would look something like this:
You can read more about Linux Journal’s take on Containers here: Containers—Not Virtual Machines—Are the Future Cloud | Linux Journal
There seems to be a small group who think, as is generally the case with most technology, that traditional OS/hypervisor virtualization and containers are complementary. Scott Lowe has a blog post here walking through the initial setup along with some thoughts on LXC: A Brief Introduction to Linux Containers with LXC – blog.scottlowe.org – The weblog of an IT pro specializing in virtual…
My question is, why does it have to be Containers versus Virtualization? Why can’t it be a marriage of the two technologies to offer the best of each, or something like this:
With this model, you can still isolate and manage resources at the OS level, which has been proven over and over again, as well as drop containers on top of the virtualized OS. Not only can I still leverage hypervisor features such as high availability, but I can reduce the number of VMs needed by adding more containers to a single VM. Scale up resources when needed to add containers, or scale out resources when needed for redundancy or additional throughput. What are your thoughts?
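To make the scale-up versus scale-out trade-off concrete, here is a minimal capacity-planning sketch of the hybrid model: containers packed into VMs, VMs packed onto hypervisor hosts. The density figures are purely illustrative assumptions, not benchmarks from any real environment.

```python
import math

# Assumed densities -- hypothetical numbers for illustration only.
CONTAINERS_PER_VM = 8   # containers one guest OS comfortably hosts
VMS_PER_HOST = 10       # VMs one physical host comfortably runs

def hosts_needed(total_containers: int) -> int:
    """Scale-out view: physical hosts required for a container count."""
    vms = math.ceil(total_containers / CONTAINERS_PER_VM)
    return math.ceil(vms / VMS_PER_HOST)

def containers_per_host() -> int:
    """Scale-up view: containers a single physical host can absorb."""
    return CONTAINERS_PER_VM * VMS_PER_HOST

print(hosts_needed(200))      # 200 containers -> 25 VMs -> 3 hosts
print(containers_per_host())  # 80 containers on one fully packed host
```

The point of the arithmetic is that adding containers to existing VMs raises density without adding VMs, while adding hosts buys redundancy and throughput; the hybrid model lets you pick whichever lever fits the workload.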
You can find out more about popular container projects at the following sites:
lmctfy (Google): https://github.com/google/lmctfy
Recently Gigaom published an article called The sorry state of server utilization and the impending post-hypervisor era. While I hate sending traffic to such a blatantly hit-targeted article, I just need to finally call out the writers at Gigaom, a site I used to really enjoy reading, and one that I officially stop visiting as a matter of habit starting now. At least one of their writers is clearly anti-VMware, as many of her articles are misinformed at best, to put it nicely (to be fair, she has had a couple that were conversation worthy). This article suggests that all server virtualization vendors have failed. And why? Because of low CPU utilization.
As most of my readers, I think, would agree, the bottleneck in modern x86 server virtualization is not the CPU; it is more likely to be storage (depending, of course, on your workload). To say that server virtualization is a failure strictly by pointing to low CPU utilization rates is either a bold acknowledgement that this person should not be writing about virtualization, or a clear play to get hits and stir up controversy… which unfortunately I am playing right into… DAMN.
The main point of the article, in another fairly clear anti-big-vendor angle, is that Linux containers will be the new breed of server virtualization. If you are wondering what containers are, head over to Scott Lowe’s blog for a great introductory post. If you are a Linux shop (and not everyone should be), containers are a great piece of technology, and coupling them with the enterprise management features from VMware, Citrix, or Microsoft is a great way for IT departments to respond to demand. But that container, like its traditional virtual server relatives before it, still needs to read and write data from somewhere; the bottleneck of the entire stack will continue to be storage.
Flash-based arrays and the various server-side caching vendors such as Infinio and PernixData will start to improve CPU utilization by enabling a higher density of VMs on a single physical host. In addition, CPU is generally not the main expense when it comes to x86 servers and virtual infrastructure; it’s, you guessed it, storage!
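To illustrate why low CPU utilization is a symptom of a storage ceiling rather than a failure of virtualization, here is a quick sketch with entirely hypothetical host capacities and per-VM demands; the numbers are assumptions chosen to show the shape of the problem, not measurements.

```python
# Hypothetical host capacity -- illustrative figures only.
HOST_CPU_GHZ = 64.0   # total CPU capacity of one host
HOST_IOPS = 5000      # storage IOPS available to that host

# Hypothetical average demand per VM -- also assumptions.
VM_CPU_GHZ = 1.0
VM_IOPS = 250

# How many VMs fit before each resource saturates.
max_vms_by_cpu = int(HOST_CPU_GHZ // VM_CPU_GHZ)   # 64 VMs
max_vms_by_iops = int(HOST_IOPS // VM_IOPS)        # 20 VMs

# Storage saturates first, capping density well below the CPU limit.
vms = min(max_vms_by_cpu, max_vms_by_iops)
cpu_utilization = vms * VM_CPU_GHZ / HOST_CPU_GHZ

print(vms)                       # 20 VMs before storage is saturated
print(f"{cpu_utilization:.0%}")  # ~31% CPU busy -- "low utilization"
```

Under these assumed demands the host idles at roughly a third of its CPU while its storage is fully consumed, which is exactly why faster arrays and server-side caching, not a new virtualization layer, are what raise utilization.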