That's a wrap on getting the basics of a home lab up and running in VMware Workstation. Within Workstation we have a working Windows Server 2012 Domain Controller, two virtual ESXi hosts (both capable of running nested 64-bit virtual machines thanks to the RVI support in the processor of the 8-core home lab build), and vCenter running. In vCenter we have our datacenter and cluster created with both virtual ESXi hosts added. The cluster has DRS enabled, and a virtual distributed switch is set up with both hosts attached, complete with a port group and VMkernel interface; we even demonstrated vMotion by moving a virtual machine from one host to another. Not bad for 4 virtual machines barely consuming any memory on the host computer!
All of that, though, was leading up to this: setting up vRealize Automation and Application Services. In my next series I will go over the basics of getting vRealize Automation set up in your home lab so you can start to get a feel for the various roles, requirements and setup steps. Here is some handy reading in the interim (ignore anything that says vSphere SSO can't be used).
Thank you for following along with the home lab series. I know there may be a few holes, but again the goal was to get an environment set up as the foundation for testing other tools.
One of the first steps in setting up vCloud Automation Center or vRealize Automation is generally to deploy the Identity Appliance; however, you can also use an existing vSphere SSO implementation. In fact, VMware has gone so far as to publish a technical white paper on how to configure vSphere SSO for high availability for use with vCloud Automation Center. I won't be doing that just yet, but know it's possible.
Here is the support matrix of currently supported authentication providers for vCloud Automation Center 6.1; as you can see, vSphere SSO 5.5 U2b, U1c and U1b are all certified and supported. A quick check of the vCenter Server version (vSphere Web Client >> vCenter >> vCenter Servers >> vxprt-vc01; Version Information portlet) shows that the vCenter Server Appliance that was deployed is 5.5.0 build 2183111; cross-referencing the VMware KB (1014508) shows that this build number correlates to vSphere 5.5 Update 2b, which is certified to work with vCloud Automation Center / vRealize Automation 6.1.
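If you have PowerCLI handy, the same version check can be scripted instead of clicking through the Web Client; a minimal sketch, assuming a vCenter reachable at 192.168.6.6 (the server name and IP are lab-specific assumptions):

```powershell
# Connect to vCenter (prompts for credentials); the IP matches this lab's scheme
Connect-VIServer -Server 192.168.6.6

# The connection object exposes the version and build number to
# cross-reference against VMware KB 1014508
$vc = $global:DefaultVIServer
"{0} is running vCenter {1} build {2}" -f $vc.Name, $vc.Version, $vc.Build
```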
Given we are running in a lab environment with limited resources, I am going to go ahead and use the vSphere SSO implementation bundled with the vCenter Server Appliance. If you would like more information on deploying the vCloud Automation Center / vRealize Automation Identity Appliance, check out Emad Younis’ post on his blog.
Whether or not to use your vSphere SSO implementation for production deployments depends on several factors (if resource constraints are one of them, please talk to your boss about why you're deploying vRA in the first place). For enterprise deployments it may make sense to leverage an existing vSphere SSO deployment, since your employees likely already share a common Active Directory. Yes, it becomes a single point of failure for both vCenter and vCloud Automation Center / vRealize Automation (unless you deploy the HA solution referenced above), but if SSO for vSphere is down, or the Identity Appliance is down, you're not getting much done until it is restored either way. It may make sense to simplify support in that regard.
For organizations supporting external users, you may want to separate the authentication domains so they are not co-mingled, creating a separate security domain. Another reason may be politics; maybe one group manages vCenter and another manages vCloud. If that is the case, again, please talk to your boss.
I’d love to hear some of your pros and cons for separate identity appliance or using vSphere SSO. Up next we’ll deploy the vCloud Automation Center / vRealize Automation appliance.
vCenter is built, so now we can start doing some of the cooler things VMware vSphere has to offer; up first is the Distributed Resource Scheduler (DRS). DRS can run in manual, partially automated or fully automated mode. Partially automated mode makes initial placements of virtual machines during power-on operations and suggests how to rebalance the cluster. Fully automated mode, well, it's fully automated: it balances cluster resources based on how aggressive you want it to be. For a deeper dive into DRS, check out the Clustering Deep Dive book, basically the bible for all things HA and DRS.
To enable DRS, log into the vSphere web client and perform the following steps:
So enabling DRS is not too hard; understanding all of the settings and how they impact your environment is typically the harder part. As for our home lab setup, we are ready to set up vMotion, a requirement for DRS to be fully automated!
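The Web Client steps above can also be done in one line of PowerCLI; a minimal sketch, assuming a cluster named cluster01 (your cluster name will differ):

```powershell
# Enable DRS on the cluster in fully automated mode; cluster01 is a lab-specific assumption
Get-Cluster -Name cluster01 |
    Set-Cluster -DrsEnabled:$true -DrsAutomationLevel FullyAutomated -Confirm:$false
```

Swap `FullyAutomated` for `PartiallyAutomated` or `Manual` if you want DRS to only make suggestions.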
Posted in Tech Tagged with: automation, cluster, drs, dynamic resource scheduler, ESXI, HOL, Home, home lab, lab, Lab Series, nested esxi, Shared, Technology, Training, vcenter, vcsa, Vendors, Virtualization, VMware, vSphere, vsphere cluster, web client
So you've got vCenter up and running and hosts added; it's time to enable the cool things vCenter can do, namely vMotion, HA and DRS. I've gone back and forth on how I wanted to present vMotion and networking in the home lab. On one hand, many existing deployments are likely running 1Gbps, though newer ones are likely to start with 10Gbps as prices have dropped. After a quick Twitter chat I decided to move forward as I would with 10Gbps networking, without separate physical interfaces in my host for different traffic types.
When we set up our ESXi templates there was only a single NIC, so let's add a 2nd NIC to the VMs. For the purposes of this lab (and maybe I'm still old like this) I will keep my management network on a standard switch and my VM network and vMotion traffic on a distributed switch.
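If you prefer editing the .vmx file directly (with the nested ESXi VM powered off) rather than using the Workstation GUI, a second NIC looks roughly like the fragment below; the device type and connection type shown are assumptions, so mirror whatever your first NIC (`ethernet0`) uses:

```
ethernet1.present = "TRUE"
ethernet1.virtualDev = "e1000"
ethernet1.connectionType = "nat"
ethernet1.addressType = "generated"
```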
Once the ESXi virtual machine has been restarted, you should see two interfaces in the vSphere Web Client. Repeat for your 2nd host.
In the vSphere Web Client, click on the Networking tab in the navigator so we can create the VDS.
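For reference, the VDS, port group and vMotion VMkernel adapters can also be created with PowerCLI; a sketch, assuming the datacenter, switch, port group, host names and IP addresses shown (all lab-specific assumptions):

```powershell
# Create the distributed switch in the datacenter and a port group for vMotion traffic
$vds = New-VDSwitch -Name vds01 -Location (Get-Datacenter -Name dc01)
New-VDPortgroup -VDSwitch $vds -Name pg-vmotion

# Attach each nested ESXi host and create a vMotion-enabled VMkernel adapter;
# host names and IPs are assumptions matching this lab's scheme
$ip = @{ 'vxprt-esxi01' = '192.168.6.21'; 'vxprt-esxi02' = '192.168.6.22' }
foreach ($name in $ip.Keys) {
    $vmhost = Get-VMHost -Name $name
    Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup pg-vmotion `
        -IP $ip[$name] -SubnetMask 255.255.255.0 -VMotionEnabled:$true
}
```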
Your hosts will be added to the VDS and vMotion will be enabled on the newly created VMkernel adapter. To test, I have created an empty virtual machine on vxprt-esxi02 in the silver datastore; I am going to vMotion and Storage vMotion that virtual machine to vxprt-esxi01, as you can see in the screenshot.
You can see the progress of the vMotion in the Running Tasks window. After a few minutes you should see your virtual machine on vxprt-esxi01.
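The same combined vMotion / Storage vMotion can be kicked off from PowerCLI; a sketch, assuming the host and datastore names from this lab and a hypothetical VM name of empty-vm:

```powershell
# Migrate the VM to the other host and onto a different datastore in one operation;
# 'empty-vm' is a placeholder name for the test VM
Move-VM -VM (Get-VM -Name empty-vm) `
        -Destination (Get-VMHost -Name vxprt-esxi01) `
        -Datastore (Get-Datastore -Name silver) -Confirm:$false
```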
The home lab is getting close! With the vCenter Server Appliance deployed and basic configuration done, it's time to get vCenter set up: AD permissions, data center, cluster and adding hosts to the cluster. While there are only 2 hosts so far in the home lab, it's still good to get an idea of all of the functions and features, so here we go.
vCenter can be a bit of a memory hog, and given our limited resources I really don't want to force my home lab box into a memory crunch. With 32GB of RAM in the host, 2x ESXi virtual machines each with 4GB of RAM and 1x Domain Controller with 2GB of RAM, I am using roughly 19% of my total physical memory (according to Windows). That is pretty efficient given that I have assigned about 32% of the total system memory to virtual machines alone, never mind Windows 8.1 running, my anti-virus, etc. However, when I boot the VCSA, which has 8GB assigned, my utilization jumps to almost 50%, and I have a lot more virtual machines to deploy! After finishing the VCSA setup wizard I shut the virtual machine off and edited the VM settings to assign only 4GB of RAM; thanks again to William Lam for the research on that. Powering the VCSA back on, my environment went from 50% down to 33% utilization, a nice savings for sure.
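If you would rather make that memory change by editing the VCSA's .vmx file in Workstation (with the appliance powered off) than through the GUI, it is a single setting; 4096 here reflects the 4GB used in this lab:

```
memSize = "4096"
```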
So, time to log in and get vCenter set up. Navigate to https://192.168.6.6:9443, assuming you are using the same IP scheme as me.
A quick aside: you may be wondering why I am using IPs instead of FQDNs. My host OS, where I am doing all my work from, is not using my lab DC for DNS, so I have no way to resolve the names without editing my hosts file. If you want to access vCenter by name, simply add an entry to your hosts file or point your DNS at the IP of your domain controller.
When the vSphere Web Client login page comes up, log in as [email protected] with the password you set during the VCSA configuration wizard. Once logged in, the first thing you probably want to do is get rid of that pesky welcome screen; however, if you are new to the vSphere Web Client, take some time to explore it before disabling it, and check out my other tips on the vSphere Web Client. So here we go…
Now that we are logged into vCenter with an administrative user, it's time to set up our data center. A data center is really just a construct/container in vCenter. If I wanted, I could create multiple data centers inside vCenter even if all of the compute resources were in the same physical location, even in the same rack, even sharing the same network. What I would not be able to do, as of vSphere 5.5, is vMotion or "live migrate" virtual machines from one data center to another. In my lab I will be creating a single data center.
A quick aside here: keep in mind when naming components in vCenter that down the road you may need to integrate other products. For example, in the past I have seen problems when trying to use Vagrant to bring up virtual machines in vSphere because of spaces in the names of datacenters or clusters. Keep your naming simple but identifiable. I also prefer all lower case for my naming scheme.
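With those naming caveats in mind, creating the data center via PowerCLI is one line; a sketch, where dc01 is an assumed name following the simple lowercase scheme:

```powershell
# Create a data center at the root of the vCenter inventory; dc01 is an assumed name
New-Datacenter -Name dc01 -Location (Get-Folder -NoRecursion)
```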
With the datacenter created, we can now create our cluster. Clusters play a big role in vSphere, so while they are simple to create, understanding and managing the settings at the cluster level is vital. We'll get into that in a future post, but just know a cluster is where you add resources such as compute/servers and configure VSAN, EVC, HA and DRS, assuming you have the appropriate licensing.
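In PowerCLI, a cluster with HA and DRS enabled can be created in a single command; a sketch, reusing the assumed dc01 and cluster01 names for illustration:

```powershell
# Create a cluster under the data center with HA and DRS enabled; names are assumptions
New-Cluster -Name cluster01 -Location (Get-Datacenter -Name dc01) `
    -HAEnabled -DrsEnabled -DrsAutomationLevel FullyAutomated
```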
With different physical hosts, EVC is one item you may want to enable right away. EVC "standardizes" on a processor feature set so that virtual machines can move between physical hosts with different processors, provided those processors can share a common feature set baseline. You cannot mix Intel and AMD processors. In this home lab build with all virtual ESXi hosts, our hosts' processors will all be identical.
With our cluster built, we can now add the two virtual ESXi hosts we created in Workstation earlier. There will be warnings/errors after adding the hosts; don't worry, those are expected and we'll take care of them!
If you added local storage to your first ESXi host like we did in a previous post, you should see that host added with no warnings. The 2nd ESXi host has no storage available for logs because it was only set up with a 1GB drive for the OS.
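Adding the hosts can also be scripted; a sketch, assuming the host names and cluster name from this lab and a placeholder root password (`-Force` accepts the hosts' self-signed certificates):

```powershell
# Add both nested ESXi hosts to the cluster; names and the root password are
# lab-specific assumptions, substitute your own
$cluster = Get-Cluster -Name cluster01
foreach ($name in 'vxprt-esxi01', 'vxprt-esxi02') {
    Add-VMHost -Name $name -Location $cluster -User root -Password 'YourRootPassword' -Force
}
```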
You now have the basics of a working vCenter setup; up next are some of the more advanced features like vMotion and HA!