Today Ravello announced a new version of their platform. I first met Ravello at VMworld and was impressed with what they were building. Ravello, in my opinion, makes it easy to explore public cloud solutions and get comfortable with a range of technologies outside the traditional virtualization admin world – specifically, I am looking at them as I continue my journey in learning DevOps methodologies and tools.
With their new release, Ravello is also close to being able to run ESXi as a virtual machine on AWS – for lab enthusiasts this means no more expensive home lab equipment. Fire up your AWS-hosted ESXi virtual machines, run your lab for as long as you need, and power it off when you’re done.
Ravello also provides the ability to run native ESXi virtual machines on AWS. I can think of several situations where I could have leveraged this functionality for disaster recovery and/or disaster recovery testing.
Full press release after the break…
Official Press Release:
Ravello Systems, Inc. today announced that it has released a major version of its nested virtualization technology, HVX, which wraps complex application environments in self-contained capsules that can run on any cloud. Founded in 2011 by the team that created the KVM hypervisor, Ravello is driving a fundamental increase in pace for companies by instantly cloud-enabling any application.
Delivered as a service, Ravello is a breakthrough offering that enables entire application environments, with existing VMware or KVM virtual machines and complex networking, to be deployed on any cloud without any changes. In addition to seamless cloud usage, Ravello has enabled enterprises to reduce provisioning time for complex application environments from months to minutes. Ravello’s cloud-based smart labs enable enterprises to accelerate their development, test, training, sales and support processes. With the new major release today, Ravello has further enhanced all components of its technology:
1. HVX: nested hypervisor – the nested hypervisor now includes nested^2 functionality through support for virtualization extensions such as Intel VT and AMD SVM. This means, in addition to running unmodified VMware or KVM virtual machines on public clouds, Ravello can also run third-party hypervisors such as KVM today, and soon ESXi, on top of AWS or Google cloud. This enables hardware-less hypervisor labs and OpenStack labs in the public cloud.
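As an aside, the nested^2 functionality depends on those CPU virtualization extensions being visible to the guest. On Linux, they show up as the `vmx` (Intel VT-x) or `svm` (AMD-V) flags in `/proc/cpuinfo`. A minimal sketch of the check (the helper name and sample string are mine, not Ravello's):

```python
# Hypothetical helper: check whether a cpuinfo dump advertises the
# hardware virtualization extensions (Intel VT-x -> "vmx", AMD SVM -> "svm")
# that nested hypervisors rely on. On Linux you would pass in the
# contents of /proc/cpuinfo.
def supports_hw_virt(cpuinfo_text: str) -> bool:
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# Example usage with a trimmed-down flags line:
sample = "flags\t\t: fpu vme de pse tsc msr vmx sse2"
print(supports_hw_virt(sample))  # → True
```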
2. HVX: overlay networking and storage – the overlay networking technology now includes full support for VLANs as well as mirror ports on top of AWS or Google Cloud. When combined with the unique ability to support broadcast and multicast in public clouds, the new networking functionality enables applications to have full layer 2 access and use the cloud just like the data center.
3. Management – the Ravello management UI has undergone a complete refresh. It now has a new look and feel, with improved user experience and a unified private library that serves as a repository of all resources such as VMs, application blueprints, and disk images.
Ravello Systems went into a successful public beta in February 2013 and launched the product globally in August 2013. Since then Ravello’s technology has been adopted by a wide variety of companies ranging from the Fortune 500 to midsize and smaller companies.
I’d like to state up front I have no inside knowledge on CodeSpaces, these opinions were formed based on the information they posted at http://www.codespaces.com/
I don’t know any of the people at CodeSpaces; however, what I do know of them drives home the point that developers are not operations people (operations people are also not developers, just to be fair), and the mindset that a developer can and should be in charge of operations decisions is wrong (no offense – go do what you are good at).
I’ve seen many companies that think that because developers are technical, they can do any technical job. This is simply not true. Developers are good at writing code; systems administrators and engineers are good at operations. Maybe it’s that clash of opposing forces that has led businesses to listen to the people writing the software instead of the people trying to keep the lights on (IT is a utility, right? Wrong!). To again be fair, there are quite a few admins/engineers who still, to this day, do not realize they are service providers, there to help the business run efficiently.
The first item that jumped out at me was the fact that their backup systems and production systems were stored, essentially, together. Any sysadmin worth their salt knows you need to keep your backups offsite. Well, you say, their backups were offsite at Amazon! True, but they were not offsite from their production systems. A massive failure at Amazon would impact their ability to operate normally as well as their ability to recover, so in this case the backups may as well have been sitting on top of the server they were running on.
Second, a “cloud” provider, in this case a SaaS-type SVN/code-hosting service, should have operated on multiple IaaS or PaaS providers, not just Amazon. At the very least, a disaster recovery site should have been set up on some other service. Had they set it up in just another availability zone, they would have been just as easily and critically impacted.
CodeSpaces’ fate also shows the need for multifactor authentication. While I would consider their failure to place their backups with a separate provider unfortunate and based on poor design, not having multifactor authentication in place was downright lazy. Amazon offers a virtual “fob” which generates random codes as the second level of authentication for…wait for it…FREE. Thankfully they were not storing private keys at Amazon, so customer data was presumably not accessed – just completely and totally lost.
So, what can you learn from CodeSpaces?
– Offsite to you does not mean offsite to your data. If you are using a 100% cloud service for running your business, you need a SEPARATE vendor for backup and DR.
– Use other providers for general high availability, running your application across multiple providers, not just multiple availability zones with the same provider.
– Use two-factor authentication everywhere possible, at the very least where ever customer data and production systems are stored.
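The first rule above is simple enough to encode as a deployment-time sanity check. A toy sketch (the function and provider names are placeholders of my own, not a real API):

```python
# Hypothetical sanity check for backup placement: backups only count as
# offsite when they live with a DIFFERENT vendor than production.
def backups_are_offsite(production_provider: str, backup_provider: str) -> bool:
    return production_provider.strip().lower() != backup_provider.strip().lower()

# CodeSpaces' setup: production and backups both at Amazon
print(backups_are_offsite("aws", "AWS"))           # → False
# A separate vendor for backups passes the check
print(backups_are_offsite("aws", "other-vendor"))  # → True
```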
While it’s sad to see all the hard work that went into building CodeSpaces lost, as well as all the hard work their customers lost, let this be a lesson to current and future startups as well as operations teams. It may also be a handy article to give to the CFO if your request for an offsite backup or multifactor auth budget is denied!
Amazon recently released an AWS vCenter plugin that allows direct provisioning of AWS instances from vCenter. You can read more about the plugin here: http://aws.amazon.com/ec2/vcenter-portal/
Shortly after, VMware responded with a post pointing out the negatives of a solution that does not allow for true hybrid management, which you can read here: http://www.theregister.co.uk/2014/06/02/vmware_amazon_counter_marketing
My question is, so what? Doesn’t VMware’s own vCloud Automation Center (vCAC) provide this exact same functionality – provisioning AWS instances without the ability to migrate workloads? I don’t think this plugin from Amazon takes away from the vCAC market; in fact, it probably hurts smaller vendors such as CloudBolt more than it hurts VMware. While you could argue that it might take away from vCloud Hybrid Service (vCHS) sales, doesn’t that really depend on your use case? In some cases, instances provisioned on AWS, such as test/dev, may never need to be migrated, so AWS would be fine, whereas VMs provisioned in response to increased demand may warrant moving back and forth between vCHS and a private vSphere/vCAC cloud.
What are your thoughts?