April 23rd, 2015 by JFrappier

Jonathan Frappier Virtxpert

**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

One of the upcoming tools I will be working with is ViPR SRM. ViPR SRM is a storage management tool that allows for monitoring the end-to-end health of your environment. I know what you’re thinking, “C’mon now Frapp that sounds awfully marketingy” and you’re right – it does, BUT let me give you an example of why some of the tools in ViPR SRM interest me.

Have you ever gone over to a friend’s cube to chat and they say the app it ain’t no good? The reports are slow, the app keeps crashing, and the chicken taste like wood. Okay, but seriously, how many times has someone walked over and said “my application is slow/down/broken” with no further detail, leaving it up to you to isolate what is going on? It has happened to me often. Worse is when you are the person responsible for storage and someone else, responsible for networking, does the Jedi hand wave and says the network is fine, it must be storage.


That is where ViPR SRM comes in: it can show you the relationship from virtual machine, through the hypervisor, datastore, and data path, to the storage array hosting the virtual machine. Further, it is heterogeneous, supporting multiple types of applications, operating systems, hypervisors, and storage arrays. Of course it supports more EMC products than others, since it is an EMC product, but you don’t necessarily have to run an EMC array to leverage ViPR SRM.

Below are some of the systems supported by ViPR SRM, an updated list can always be found at emc.com.


While getting ready for the installation, know that you can deploy either a pre-packaged vApp or install the application on 64-bit versions of RedHat, CentOS, SUSE, or Windows; in these posts I will be deploying the vApp version, which includes 4 virtual machines. The 4 virtual machines each have a unique role, as in a typical multi-tier application: a web front end for the UI and reporting, a database backend for storing data, and a collector for, well, collecting data. In large environments with multiple arrays you may deploy multiple collectors.


In my next few blog posts I’ll be reviewing the installation of ViPR SRM, as well as some of the dashboards and how they might help you in the day-to-day monitoring and troubleshooting of your environment. If you’d like to learn along with me, check out the ViPR SRM free e-Learning on ECN.

Getting to know ViPR SRM

Posted in Tech

April 4th, 2015 by JFrappier


Boy, social media is proving tough for some discussions lately – here is the first of three blog posts to set my position straight. Yesterday a conversation got started on Twitter based on a tweet shared by John Troyer, “Devs Rool, IT Droolz” – in fact, here is another point of view on the conversation from Rynardt Spies. Now as an “infrastructure” person you may think my take here is about saving my job, or staying relevant, or some such thing, but it is nothing at all about that – it’s about working together. In fact, those who know me well know that I am trying to push “infrastructure people” into a more application-focused role – not necessarily development, but to stop being infrastructure focused and work on being able to deliver applications and value to the business. I’ve never had a CFO walk into my office and say a virtual machine was down or a VLAN was misconfigured, but they know when their application is down.

I was fortunate enough in my first IT job to work for a manager who “got it,” as well as several other thought leaders in the technology management space such as Gary Beach. I came out of that job thinking that all IT shops must run like this; I mean, after all, it only makes sense that IT’s role is to enable technology for the business and the people in it. My job as an infrastructure person is to understand the business needs, translate those into functional technology requirements, and make sure they are working. In some cases it might be working with sales to ensure an SFA/CRM fits the business model, or that developers have access to the tools and systems to do their jobs, as well as working with them to ensure access and security requirements are met.

Where I got into trouble in both cases recently (and another blog post is to follow) is with “general” statements that some people are taking as “blanket” statements. When I say devs shouldn’t work on infrastructure, I’m not saying they can’t, or don’t have the skills; I’m saying that as a business, I don’t want them to. Now yes, advances in technology allow some level of orchestration at the network layer, letting people instantiate networks on the fly – things like NSX or Neutron (I’m still pissed at you, Neutron, by the way) – but those technologies still need a strong hardware foundation to be built upon to function and perform properly in support of the business requirements, not just software requirements. A well-built network allows orchestration at the software layer to enable the needs of the business.

As I have said many times, there is no silver bullet or magical piece of technology that supports all businesses and all workloads. “Generally” there will need to be several things working together to provide everything a business needs. The same holds true for people; if IT is not talking to the business – Finance, Sales, Executives, Legal, Marketing, and engineering teams (development, QA, security), etc. – then IT droolz. But the same holds true for developers; if devs are not talking to the business, not talking with IT, not talking with security, then those bad devs drool just as much as the bad IT people do.

I have worked at enough software companies to see firsthand what can be accomplished when people work together – it’s not about devs vs. IT, it’s about devs AND IT working together in support of the business. Can developers set up a switch, VLAN, or ACL on a firewall? Sure, but it’s “generally” not their primary skill set, just as my primary skill set is not coding or automation. However, working together, each person can learn something from the other. Our skills, our experience, and our roles provide us with a unique view that no other person can have. So when you work together you may find that the way you interpreted a requirement needs to be adjusted, either because of technology or business; say, for example, a business security requirement could be met by coding the application a certain way, but there may be other requirements on the business that require additional layers of security. Again, it’s all about working together. If you are a CIO who isn’t talking to the business and CTO every day, then shame on you. If you are a CTO who isn’t talking to the business and CIO every day, then shame on you as well.

Some of the smartest people I have ever had the privilege to work with and for were developers (Chris, Michael, Igor, Sebby, Cayla, Sarah). Those people have helped me grow professionally in ways I am so grateful for – they probably don’t even realize the impact they had on me; still don’t think I’d want them setting up my switches, servers, and domain controllers though :)

Devs Rool, IT Roolz

Posted in Tech

November 11th, 2014 by JFrappier


The home lab is getting close!  With the vCenter Server Appliance deployed and basic configuration done, it’s time to get vCenter set up – AD permissions, Data Center, Cluster, and adding hosts to the cluster.  While there are only 2 hosts so far in the home lab, it’s still good to get an idea of all of the functions/features, so here we go.

vCenter can be a bit of a memory hog, and given our limited resources I really don’t want to force my home lab box into memory crunching.  With 32GB of RAM in the host, 2x ESXi virtual machines each with 4GB of RAM, and 1x Domain Controller with 2GB of RAM, I am using roughly 19% of my total physical memory available (according to Windows) – that is pretty efficient given that I have assigned about 32% of the total system memory to virtual machines alone, never mind Windows 8.1 running, my anti-virus, etc.  However, when I boot the VCSA, which has 8GB assigned, my utilization jumps to almost 50%, and I have a lot more virtual machines to deploy!  After finishing the VCSA setup wizard I shut the virtual machine off and edited the VM settings to assign only 4GB of RAM – thanks again to William Lam for the research on that.  Now, powering the VCSA back on, in my environment I went from 50% down to 33% – a nice savings for sure.
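The assigned-memory math above is easy to sanity-check with a few lines of Python, using the VM sizes from this lab. Note these are *allocation* percentages, which is why they run a bit higher than the utilization Windows actually reports – not every assigned page is resident:

```python
# Assigned-memory math for the 32 GB home lab host.
host_gb = 32
vms_gb = {"esxi01": 4, "esxi02": 4, "dc01": 2}

def assigned_pct():
    """Fraction of host RAM assigned to lab VMs."""
    return sum(vms_gb.values()) / host_gb

print(f"VMs only:  {sum(vms_gb.values())} GB = {assigned_pct():.0%}")  # 31%

vms_gb["vcsa"] = 8  # VCSA at its default allocation
print(f"8 GB VCSA: {sum(vms_gb.values())} GB = {assigned_pct():.0%}")  # 56%

vms_gb["vcsa"] = 4  # trimmed per William Lam's findings
print(f"4 GB VCSA: {sum(vms_gb.values())} GB = {assigned_pct():.0%}")  # 44%
```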

So, time to log in and get vCenter set up; navigate to the vSphere Web Client URL of your VCSA, assuming you are using the same IP scheme as me.

A quick aside: you may be wondering why I am using IPs instead of FQDNs – my host OS, where I am doing all my work from, is not using my lab DC for DNS, thus I have no way to resolve the names without editing my hosts file.  If you want to access vCenter by name, simply add an entry to your hosts file or point your DNS to the IP of your domain controller.
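For reference, a hosts-file entry is just an IP followed by one or more names. Here is a sketch using the lab hostnames from this series – the IP addresses and the vxprt.local domain are placeholders; substitute whatever your lab actually uses:

```text
# Windows: C:\Windows\System32\drivers\etc\hosts
# Linux/OS X: /etc/hosts
192.168.1.20   vxprt-vc01     vxprt-vc01.vxprt.local
192.168.1.21   vxprt-esxi01   vxprt-esxi01.vxprt.local
192.168.1.22   vxprt-esxi02   vxprt-esxi02.vxprt.local
```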

When the vSphere Web Client login page comes up, log in as administrator@vsphere.local with the password you set during the VCSA configuration wizard.  Once logged in, the first thing you will probably want to do is get rid of that pesky welcome screen; however, if you are new to the vSphere Web Client, take some time to check it out before disabling it, and check out my other tips on the vSphere Web Client.  So here we go…

vCenter Setup:  Permissions

  • Once logged into the vSphere Web Client click on Administration >> Configuration
  • If your domain is not listed, click on the green + icon
  • Select the first radio button under Identity Source Type:  Active Directory (Integrated Windows Authentication)
  • Ensure use machine account is selected and click the OK button
vCenter Setup: vSphere Web Client – AD added to SSO

  • Select your domain and click the CD-looking icon with the blue arrow; this will set your domain as the default so you can log in as adusername instead of the full adusername@domain form
  • Click on Users and Groups, change the Domain pull down menu to your domain; ensure you can see users in your Active Directory and do not receive any error messages
  • Click on the home link >> vCenter >> vCenter Servers; click on your vCenter server – in my case vxprt-vc01
  • Click on the manage tab >> permissions
  • Click on the green + icon, then click the Add button
  • Ensure the domain pull down is your Active Directory Domain
  • In my AD I have created a group called vcAdmins and an administrative user for myself, jfadmin, which is a member of the vcAdmins group.  In the search box type vcAdmins, or whichever group name you wish to assign full administrative privileges to.
  • Highlight the group, click the Add button then click OK
  • In the Assigned Role pane select Administrator
vCenter Setup: assign AD group permission

  • Click the OK button and ensure the group was added
  • Log out of vCenter and log back in with a user account that is part of the vcAdmins group
  • Once signed back in, click on vCenter and confirm you can see the vCenter server in inventory

vCenter Setup:  Create Datacenter

Now that we are logged into vCenter with an administrative user, it’s time to set up our datacenter.  A data center is really just a construct/container in vCenter.  If I wanted, I could create multiple data centers inside vCenter even if all of the compute resources were in the same physical location, even in the same rack, even sharing the same network.  What I would not be able to do, as of vSphere 5.5, is vMotion or “live migrate” virtual machines from one data center to another.  In my lab I will be creating a single data center.

  • Click on Hosts and Clusters
  • Ensure you are on the summary tab and see vCenter in the Navigation pane
  • Right click on your vCenter and select New Datacenter…
  • As you can see from the wizard, there isn’t a lot we can do in terms of settings at the data center level.  Name your datacenter – I prefer short names, so I am going with dc01 – and click the OK button

A quick aside here: keep in mind when naming components in vCenter that down the road you may need to integrate other products.  For example, in the past I have seen problems when trying to use Vagrant to bring up virtual machines in vSphere because of spaces in the names of datacenters or clusters.  Keep your naming simple but identifiable.  I also prefer all lower case for my naming scheme.
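One cheap way to keep yourself honest on naming is a little check like the one below. This is a hypothetical helper reflecting my own convention (lowercase, alphanumeric and hyphens, no spaces), not any VMware rule:

```python
import re

def is_safe_name(name: str) -> bool:
    """Flag vSphere inventory names likely to trip up automation
    tools later: lowercase letters, digits, and hyphens only."""
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9-]*", name))

# Names from this lab's scheme pass; spaces and capitals do not.
print(is_safe_name("dc01"))           # True
print(is_safe_name("cl01"))           # True
print(is_safe_name("My Datacenter"))  # False
```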

vCenter Setup:  Create Cluster

With the datacenter created, we can now create our clusters.  Clusters play a big role in vSphere, so while they are simple to create, understanding the settings managed at the cluster level is vital.  We’ll get into that in a future post, but just know a cluster is where you add resources such as compute/servers and configure VSAN, EVC, HA, and DRS, assuming you have the appropriate licensing.

  • Right click on your newly created datacenter and select new cluster
  • You can see that there are quite a few options here for the cluster; we won’t enable them all now, but have a peek through and see what is available
  • Name your new cluster, again I prefer simple so cl01 will be my cluster name; click the OK button

With different physical hosts, EVC is one item you may want to enable right away.  EVC “standardizes” on a processor feature set so that virtual machines can move between physical hosts with different processors, assuming those processors can share a common feature set.  You cannot mix Intel and AMD processors.  In this home lab build with all virtual ESXi hosts, our hosts’ processors will all be identical.
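Conceptually, an EVC baseline is the common denominator of CPU features across all hosts in the cluster: anything above the intersection gets masked from the guests. A toy sketch of that idea (the feature names are illustrative, not the actual CPUID flags EVC evaluates):

```python
# Toy model of an EVC baseline: the cluster exposes only the
# intersection of every host's CPU feature set.
hosts = {
    "esxi01": {"sse4_2", "aes", "avx"},
    "esxi02": {"sse4_2", "aes"},  # older CPU, no AVX
}

evc_baseline = set.intersection(*hosts.values())
print(sorted(evc_baseline))  # ['aes', 'sse4_2'] - avx is masked
```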

vCenter Setup:  Add hosts to cluster

Now with our cluster built, we can add the two virtual ESXi hosts we created in Workstation earlier.  There will be warnings/errors after adding the hosts; don’t worry, those are expected and we’ll take care of them!

  • Right click on the cluster you just created and select Add hosts…
  • Type the hostname of one of the ESXi hosts you created in DNS, in my case vxprt-esxi01 and vxprt-esxi02, and click next
  • Type the username (root) and password you set during setup, click next
  • When prompted to accept the certificate, click yes
  • Review the information and click next
  • On the license screen you can use the trial license or, if you have license keys, enter the key here; click next
  • Click next on the lockdown mode screen without selecting the Enable lockdown mode checkbox (lockdown mode disables access to the DCUI)
  • Click finish, the host will be added to the cluster
  • Repeat for the remaining virtual ESXi hosts you created

If you added local storage to your first ESXi host like we did in a previous post, you should see that host added with no warnings.  The 2nd ESXi host has no storage available for logs because it was only set up with a 1GB drive for the OS.

You now have the basics of a working vCenter setup, up next is some of the more advanced features like vMotion and HA!

Home lab vCenter setup – datacenter, cluster, hosts

vCenter Setup – VMware Workstation Home Lab Setup Part 11

Posted in Tech

November 5th, 2014 by JFrappier


I have seen an uptick in the Pets vs Cattle arena of late and wanted to add to the discussion.  As a technologist, I am not religious about any one tool, product, operating system, or vendor – I will always choose the right platform based on system requirements.  To take an independent stance in that regard, you have to first accept that no vendor has a product that meets the demands of every possible business scenario.  Not only do I not care about all those things I mentioned above, I don’t care about my servers or virtual servers (or whatever platform they are running on).

So, what do I care about then? Data.  Step out of your role for a second and ask what people – your organization, your CEO – really care about: it’s not server uptime, although uptime plays a role, whether you are using containers, or whether you are “DevOps” – again, everyone from your CEO to your office manager couldn’t care less if you have organized around the characteristics/tenets/culture of DevOps.  What they care about is that they have access to the data they need to do their jobs.  All of the other things we do are simply a means to provide access to that data.

The goal is to engineer an environment where data is protected and accessible according to all of your business requirements so that people can do their jobs.  Hopefully your organization has purchased or built applications that deliver that data in such a way that allows you to simply treat them like cattle – a web or application server not responding?  No worries, delete it and spin up a new one – just make sure you are treating your data like a pet!

Data is my only pet, everything else is cattle

Posted in Tech

July 4th, 2014 by JFrappier


I was DevOps before it was cool and had trendy tools like Ansible and Docker

This example precedes the DevOps movement we are currently in. I was working at an HR SaaS company in 2007; deployment times were in hours, migrations and backups were days. Our problem: nothing was standardized. Some customers were on servers with one OS and one version of the software, others were on another server with a different OS, and very few were configured the same way. Same with our database layer: some were on default instances, some on named instances, some with DB and log backups, some without, and none of the DB dumps were in the same location.

After talking to the VP of IT and CTO we decided to start standardizing, a key step in my opinion in achieving ITaaS/Automation (those steps, by the way: 1. Understand business requirements, 2. Document business requirements, 3. Standardize by creating standard operating procedures from 1 and 2, 4. Automate – and everything after that starts to fall into place).

We (release engineering, which today would be called DevOps) worked with the development team to identify application requirements and best practices, and started manually applying them by following the SOPs we created during a data center migration. As physical servers were moved, we reconfigured the application so that everything was set up in exactly the same manner – application files were in the same spot, and web and database server settings were exactly the same. Over the course of the data center migration, we validated these settings – fewer application errors, as well as quicker response to alerts and application errors when they occurred. Life was starting to look good.

In addition to a more stable environment that was easier to monitor and troubleshoot, we also saw deployment times go to minutes instead of hours, and migrations in many cases drop to < 1 hour from days. Another side effect of all of this: we were able to drastically decrease our backup window because we no longer had to account for non-standard systems; we knew exactly where everything was all the time – our multi-day full backups were now achieved in about 6 hours.

While doing this manually was working, it wasn’t sustainable. Engineering then built tools that automated the configuration of the web servers as well as the application deployment. We added new tools to centralize automated processes that had been spread across various batch files on various servers – we knew when they worked and when they failed, and we could fix them with one or two button clicks. This wasn’t achieved overnight, however; it was a working process conceived and supported across development, QA, IT, and release engineering. We had to change the mindset of people – no more cowboys just “getting stuff done” – you did it our way, the same way, all the time, and we were all better for it in the end.

When servers started to have problems, we didn’t spend hours troubleshooting them anymore; we spun up a new image, kicked off the automated installation procedures, and our application could be back online in minutes.  The longest part of the process was copying files from point A to point B.  My VMs were cattle years ago.

Life was good, and the people I worked with were all unicorns – we weren’t the smartest people in the world, or the most talented, but we took pride in getting the job done, and we cared about one another in a way, both personally and professionally, that I’ve never experienced since. We hung together even on holiday weekends when things weren’t going well, all making sure our pieces were in place so we could help each other get over hurdles even if they weren’t our own (Igor, Jay, John, and so many more – I love you guys).

DevOps is more about people than it is about the tools you use; while we have tools today like Ansible and Docker that make achieving DevOps/ITaaS/Cloud… whatever you call it easier, you can’t do it without people.

I was DevOps before it was cool… then we got sold (FWIW – the company that bought us was much larger, with lots of money and swagger, and their IT and release group couldn’t touch our 6-person team)!

I was DevOps before it was cool – My Pets and Cattle story

Posted in Tech