**Disclaimer: I have previously published a book with Packt Publishing. This book review was not approved or seen in advance by Packt Publishing and is my own opinion. The book was provided to me at no cost to read and review.**
Long book title, long blog post title! Packt Publishing has given me the opportunity to review Disaster Recovery using VMware vSphere Replication and vCenter Site Recovery Manager (http://bit.ly/1kosrhz).
The book is very straightforward and easily consumed. It covers the installation and configuration of VMware Site Recovery Manager and vSphere Replication. While the book is mostly a step-by-step guide, the author includes design and installation considerations where appropriate, as well as a review of background tasks that might not otherwise be controlled by the administrator — for example, the cleanup tasks that run after testing a recovery.
Even though the book is on the shorter side, it is a worthy read for anyone interested in implementing VMware SRM, vSphere Replication, or both.
Packt has also provided me with 2 eBook copies to give away to readers of my blog. To participate, please follow me on Twitter @jfrappier and re-tweet this article by July 3rd (be sure to include my Twitter handle, @jfrappier, so that I can track the RTs if you are using something like Buffer, which may not show an RT on the original tweet). On July 4th, I will select two winners.
Containers have been a hot topic of late; many are suggesting they are the next step in the virtualization evolution and spell doom for vSphere, Hyper-V, and other “traditional” virtualization platforms. For those who are not aware of containers, a container is a packaged application that can run isolated from other applications. It is not too dissimilar from ThinApp in a virtual desktop environment, but focused on server applications such as Apache, Tomcat, or other custom applications running on top of a single operating system.
Traditional OS level virtualization generally looks something like this:
Some, such as Linux Journal, suggest containers are the future and that traditional OS-level virtualization and the hypervisor are not needed. With containers, you could remove the hypervisor layer, run your OS directly on bare metal, and run the “virtualized” applications, helping to save resources (including budget), which would look something like this:
You can read more about Linux Journal’s take on Containers here: Containers—Not Virtual Machines—Are the Future Cloud | Linux Journal
There seems to be a small group who think — as is generally the case with most technology — that traditional OS/hypervisor virtualization and containers are complementary. Scott Lowe has a blog post walking through the initial setup along with some thoughts on LXC: A Brief Introduction to Linux Containers with LXC – blog.scottlowe.org
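If you want to kick the tires on LXC yourself, the basic lifecycle looks something like the following. This is a minimal sketch assuming an Ubuntu host with the LXC userspace tools installed; the container name `web1` and the `ubuntu` template are illustrative choices, not anything prescribed by the posts linked above.

```shell
# Install the LXC userspace tools (Ubuntu/Debian package name; adjust for your distro)
sudo apt-get install lxc

# Create a container named "web1" from the Ubuntu template
sudo lxc-create -t ubuntu -n web1

# Start it in the background, then list containers to confirm it is running
sudo lxc-start -n web1 -d
sudo lxc-ls --fancy

# Get a shell inside the running container
sudo lxc-attach -n web1

# Stop and destroy the container when you are done
sudo lxc-stop -n web1
sudo lxc-destroy -n web1
```

Note that these same commands work whether the host OS is on bare metal or inside a VM, which is what makes the hybrid model below possible.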
My question is, why does it have to be Containers versus Virtualization? Why can’t it be a marriage of the two technologies to offer the best of each, or something like this:
With this model, you can still isolate and manage resources at the OS level — an approach that has been proven over and over again — and drop containers on top of the virtualized OS. Not only can I still leverage hypervisor features such as high availability, but I can reduce the number of VMs needed by adding more containers to a single VM. Scale up resources when needed to add containers, or scale out when needed for redundancy or additional throughput. What are your thoughts?
You can find out more about popular container projects at the following sites:
lmctfy (Google): https://github.com/google/lmctfy