April 23rd, 2015 by JFrappier

A couple of weeks ago a question was posted on the Ansible LinkedIn group, stemming from an Ansible role for securing CentOS. The question: is automation the only way to ensure security? My brief, social-media-shortened response was:

Completely agree. If you aren’t automating then you can’t really claim to be secure

This caused some fuss on the post, with most disagreeing with me. I stand by my answer: you cannot be secure if you are not automating. To extend it, though, you are not necessarily secure just because you are automating. Security is not something you simply turn on; said another way, it is not binary. Security consists of many layers, not the least of which is truly understanding your company's business, goals, requirements, processes, and people. With that understanding, you can then apply whatever specific security measures you need to abide by. For example, if you accept credit cards, appropriate safeguards need to be taken to ensure data is encrypted and that certain elements, such as the card validation number, are never stored.

Now, if you are not adhering to those requirements, there is no automation process in the world that can secure you. However, even with the most specific of run books, and with security teams, engineers, and auditors ensuring you have done everything technically possible, you cannot truly say you are secure without a means to automate the installation and configuration as you have defined them.

Another argument in the group discussion was that automation can also spread vulnerabilities widely by opening security holes at scale. While this is true, my previous statements still hold: you need to have the proper security processes and details in place before you automate them. Now say, for example, something like Heartbleed comes along again. How long would it take you to patch even 10 systems by hand? What about 100, or 1,000? Much longer than it would take to patch those systems automatically.
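
To make the scale argument concrete, here is a rough sketch (mine, not from the original discussion) of what "patching automatically" can look like at its most basic. It assumes key-based SSH access, a yum-based distro such as CentOS, and a hypothetical hosts.txt file with one hostname per line; a real shop would reach for Ansible or similar rather than a raw SSH loop.

```python
#!/usr/bin/env python3
"""Rough sketch: push an OpenSSL update to every host listed in hosts.txt.

Assumptions (not from the original post): key-based SSH is already in place,
the hosts run a yum-based distro, and hosts.txt holds one hostname per line.
A real environment would use Ansible, Salt, etc. instead of a raw SSH loop.
"""
import subprocess
from concurrent.futures import ThreadPoolExecutor


def patch(host):
    # Run the update non-interactively and capture the exit code for reporting.
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", host, "sudo yum update -y openssl"],
        capture_output=True,
        text=True,
    )
    return host, result.returncode


if __name__ == "__main__":
    with open("hosts.txt") as f:
        hosts = [line.strip() for line in f if line.strip()]

    # Patching 10, 100, or 1,000 hosts is just a longer list and more threads.
    with ThreadPoolExecutor(max_workers=20) as pool:
        for host, rc in pool.map(patch, hosts):
            status = "patched" if rc == 0 else "FAILED (exit %d)" % rc
            print("%s: %s" % (host, status))
```

Even this crude loop beats touching a thousand servers by hand, and a proper configuration management tool adds the idempotence, inventory, and reporting this sketch lacks.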

Automation, configuration management, DevOps: none of these things is a panacea. However, security teams should be relying on automation, not manual effort, to configure and secure systems.

Security needs automation, but automation does not mean you are secure

November 1st, 2014 by JFrappier

In what should be a multi-part series (unless work gets insane) I will be setting up the supporting infrastructure for my home lab.  For this lab I will be using the 8-core home lab build I wrote about in the past.  I am currently running Windows 8.1 with VMware Workstation 10.  I have two volumes set up in Windows that will be dedicated to VMs: one is a single 120GB Neutron SSD that I will use for some of the "heavier" VMs, such as SQL Server and the vRealize IaaS server.  The other is a ~1.3TB RAID0 dynamic volume built in Windows on 3x 500GB Seagate hybrid drives, which will be used for common VMs such as the domain controller I am setting up here.

I will be starting with all the VMs using NAT in VMware Workstation.  First I am setting up a Windows VM that we will use throughout the lab build. Why am I doing this first? Mostly because of how long Windows patches are going to take, to be totally honest; you could just as easily start with your virtual ESXi boxes (which should be the next post), but alas, here you are reading this.

First, create a new virtual machine in VMware Workstation

VMware Workstation – Create a New Virtual Machine

  • On the New Virtual Machine wizard page select Custom (I prefer control over which settings I choose) and click Next
  • Select Workstation 10.0 and click Next
  • Select I will install the operating system later radio button (old habit I’m hanging onto from old Workstation and Ubuntu days) and click Next
  • Select Microsoft Windows and select the version from the pull down menu. I am using Windows Server 2012; click Next.
  • If you have set your drives up like me, click the browse button and select your preferred Windows volume; in my case I have selected the "V" drive where my RAID0 volume is. I have also created a folder on this drive called VMs, because OCD.
  • Name your virtual machine, then paste that name (with a leading backslash) into the location field after V:\VMs so the VM is created in its own folder, and click the Next button.  In my setup I am actually using vxprt-win-tmp01:
VMware Workstation VM destination folder and virtual machine name

  • I am staying with a single processor, single core; after all, we don't have unlimited resources in this home lab. Click the Next button
  • I’m also sticking to 2GB of RAM (Next), NAT (Next), LSI Logic SAS (Next), SCSI (Next), and creating a new virtual disk (Next)
  • On the Specify Disk Capacity page, I typically choose to store my virtual disks as a single file; this is up to you, but I don't like having a bunch of files in my VM folder – it feels messy. Also leave Allocate all disk space now unchecked to thin-provision your disk and click Next
  • Optionally you can rename your disk file; again, I prefer this to be the same as my VM name. Click Next and Finish. Your VM will be created, albeit with no OS yet.
  • Right-click on your new VM and select Settings
  • Click on CD/DVD and select the appropriate option to install Windows; in my case I have a downloaded ISO, so I selected the Use ISO image file radio button and chose the desired ISO image.  Click OK to close the settings window.
  • Right-click on your VM, go to Power, and click on Start Up Guest.

From here on out you've got a standard Windows install wizard to follow.  Once Windows is installed and you have set your password, install VMware Tools by right-clicking on the VM and selecting Install VMware Tools; follow that wizard, reboot, and patch your Windows VM.  Next up is a quick post on cloning VMs in Workstation so we can get to the fun part.
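
As an aside, once the VM exists most of the power operations above can be scripted with vmrun, the command line tool that ships with VMware Workstation. Here is a minimal, hypothetical Python wrapper; the .vmx path mirrors my V:\VMs naming but is purely illustrative, and it assumes vmrun is on your PATH.

```python
#!/usr/bin/env python3
"""Tiny helper around VMware Workstation's vmrun CLI (sketch only).

Assumes vmrun is on PATH and the VM was already created in the GUI; the
.vmx path below mirrors the V: drive layout above and is illustrative only.
"""
import subprocess

VMX = r"V:\VMs\vxprt-win-tmp01\vxprt-win-tmp01.vmx"  # assumed path


def vmrun(*args):
    # -T ws tells vmrun it is talking to Workstation (not ESX or Fusion).
    subprocess.run(["vmrun", "-T", "ws"] + list(args), check=True)


if __name__ == "__main__":
    vmrun("start", VMX)         # same as Power > Start Up Guest
    vmrun("installTools", VMX)  # mounts the VMware Tools installer in the guest
```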

VMware Workstation Home Lab Setup Part 1 – Windows VM

June 24th, 2014 by JFrappier

I'd like to state up front that I have no inside knowledge of CodeSpaces; these opinions were formed based on the information they posted at http://www.codespaces.com/

I don't know any of the people at CodeSpaces; however, what I do know of them drives home the point that developers are not operations people (operations people are also not developers, to be fair), and the mindset that a developer can and should be in charge of operational decisions is wrong (no offense – go do what you are good at).

I've seen many companies who think that because developers are technical, they can do any technical job. This is simply not true. Developers are good at writing code; systems administrators and engineers are good at operations. Maybe it's that clash of opposing forces that has led businesses to listen to the people writing the software instead of the people trying to keep the lights on (IT is a utility, right? Wrong!).  To again be fair, there are quite a few admins/engineers who still, to this day, do not realize they are service providers, there to help the business run efficiently.

The first item that jumped out at me was the fact that their backup systems and production systems were stored essentially together. Any sysadmin worth their salt knows you need to keep your backups offsite. Well, you say, their backups were offsite at Amazon! True, but they were not offsite from their production systems. A massive failure at Amazon would impact their ability to operate normally as well as their ability to recover, so in this case the backups may as well have been sitting on top of the server they were running on.

Second, a "cloud" provider, in this case a SaaS-type SVN/code-hosting service, should have operated on multiple IaaS or PaaS providers, not just Amazon. At the very least a disaster recovery site should have been set up on some other service. Had they set it up in just another availability zone, they would have been just as easily and critically impacted.

CodeSpaces' fate also shows the need for multifactor authentication. While I would consider their lack of foresight in not placing their backups with a separate provider unfortunate and a result of poor design, not having multifactor authentication in place was downright lazy. Amazon offers a virtual "fob" which generates random codes as the second factor for…wait for it…FREE.  Thankfully they were not storing private keys at Amazon, so customer data was presumably not accessed – just completely and totally lost.

So, what can you learn from CodeSpaces?

– Offsite to you does not mean offsite to your data. If you are using a 100% cloud service to run your business, you need a SEPARATE vendor for backup and DR.
– Use other providers for general high availability, running your application across multiple providers, not just multiple availability zones with the same provider.
– Use two-factor authentication everywhere possible, at the very least wherever customer data and production systems are stored (a quick audit sketch follows this list).
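
If you want to check where you stand on that last point, here is a small, hypothetical boto3 sketch (my illustration, not anything CodeSpaces used) that lists AWS IAM users without an MFA device and checks the root account as well. It assumes boto3 is installed and credentials with IAM read access are already configured.

```python
#!/usr/bin/env python3
"""Sketch: flag AWS identities with no MFA device enabled (illustrative only).

Assumes boto3 is installed and credentials with IAM read permissions are
configured (environment variables, ~/.aws/credentials, etc.).
"""
import boto3

iam = boto3.client("iam")

# The root account is the one that matters most, so check it first.
summary = iam.get_account_summary()["SummaryMap"]
if not summary.get("AccountMFAEnabled"):
    print("root account: no MFA device enabled")

# Then walk every IAM user and flag anyone without at least one MFA device.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print("%s: no MFA device enabled" % name)
```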

While it's sad to see all the hard work that went into building CodeSpaces lost, as well as all the hard work their customers lost, let this be a lesson to current and future startups as well as operations teams. It may also be a handy article to give to the CFO if your request for an offsite backup or multifactor auth budget is denied!

What to learn from CodeSpaces and how it could have been avoided

June 11th, 2014 by JFrappier

Containers have been a hot topic of late; many are suggesting they are the next step in the virtualization evolution and spell doom for vSphere, Hyper-V, and other "traditional" virtualization platforms.  For those who are not aware of containers, a container is a packaged application that can run isolated from other applications.  It is not too dissimilar from ThinApp in a virtual desktop environment, but focused on server applications such as Apache, Tomcat, or other custom applications running on top of a single operating system.

Traditional OS level virtualization generally looks something like this:

Diagram: ESXi hypervisor – guest OS – applications

Some, such as Linux Journal, suggest containers are the future and that traditional OS-level virtualization and the hypervisor are not needed.  With containers, you could remove the hypervisor layer, run your OS directly on bare metal, and run the "virtualized" applications on top, helping to save resources (including budget), which would look something like this:

Diagram: bare-metal OS – containers – applications

You can read more about Linux Journal’s take on Containers here:  Containers—Not Virtual Machines—Are the Future Cloud | Linux Journal

There seems to be a small group who think, as is generally the case with most technology, that traditional OS/hypervisor virtualization and containers are complementary.  Scott Lowe has a blog post walking through the initial setup along with some thoughts on LXC:  A Brief Introduction to Linux Containers with LXC – blog.scottlowe.org

My question is, why does it have to be Containers versus Virtualization?  Why can’t it be a marriage of the two technologies to offer the best of each, or something like this:

Diagram: hypervisor – virtualized guest OS – containers – applications

With this model, you can still isolate and manage resources at the OS level, which has been proven over and over again, while dropping containers on top of the virtualized OS.  Not only can I still leverage hypervisor features such as high availability, but I can reduce the number of VMs needed by adding more containers to a single VM.  Scale up resources when needed to add containers, or scale out resources when needed for redundancy or additional throughput.  What are your thoughts?
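
To make the "containers on top of a VM" picture concrete, here is a small, hypothetical sketch that drives the Docker CLI from Python to pack two isolated apps onto a single (virtualized) guest OS. The container names, images, and port numbers are illustrative; it only assumes Docker is installed inside the VM.

```python
#!/usr/bin/env python3
"""Sketch: run two isolated app containers on one guest OS (illustrative).

Assumes Docker is installed inside the VM; names, images, and ports are
placeholders standing in for your real applications.
"""
import subprocess


def run_container(name, image, host_port, container_port):
    # Start a detached container, mapping a unique host port to each app.
    subprocess.run(
        ["docker", "run", "-d", "--name", name,
         "-p", "%d:%d" % (host_port, container_port), image],
        check=True,
    )


if __name__ == "__main__":
    run_container("web01", "httpd", 8080, 80)     # Apache in one container
    run_container("app01", "tomcat", 8081, 8080)  # Tomcat in another
```

The VM underneath still gets hypervisor features like HA and vMotion, while each additional app costs a container rather than another full guest OS.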

You can find out more about popular container platforms at the following sites:

Parallels: OS Virtualization Solution for Windows and Linux — Parallels Virtuozzo Containers – Parallels

Docker:  Docker – Build, Ship, and Run Any App, Anywhere

LXC: https://linuxcontainers.org/

lmctfy (Google):  https://github.com/google/lmctfy

Containers vs Virtualization or Containers + Virtualization

April 1st, 2014 by JFrappier

During a recent vCenter deployment using the VCSA I ran into an error I hadn't run into before with the VCSA (or vCenter/SSO on Windows, for that matter).  After an error-free install and setup wizard, I logged in to vCenter as administrator@vsphere.local to set up my roles and assign my AD groups permissions.  However, I noticed that there was no identity source for my Active Directory domain. No problem – add it in, boom, now hop on over to my vCenter permissions tab and get people vCentering.  This is where I ran into errors.  When trying to search for a user I received a pop-up that said:

Cannot load users for the selected domain

Before I ran the setup wizard, I had SSH'd to the VCSA and done some pings and digs to make sure the network bits were flowing properly, and everything seemed fine.  I could ping and dig both local and remote AD resources, so I was confident that was all working.  Easy fix, I assumed, so I headed over to the global KB search tool known as Google and was led to this KB, http://kb.vmware.com/kb/2033742, which suggested checking DNS, time synchronization, and leaving and re-joining the domain.  I manually re-checked that DNS records were present, that the AD join process had worked correctly, and that the account was still enabled.

Looking through /storage/log/vmware/sso/ssoAdminServer.log, I saw several exceptions with the following error (excess text stripped):

Failed to establish server connection

I searched the KB again for this error, but all I found were problems related to accent characters, which wasn't my issue.  At this point it seemed worthwhile to open a support case.  I wanted to make sure I had all my tier 1 support boxes ticked, so I rebooted, verified I could ping/dig records, went back to the Identity Sources page, removed and re-added the domain, and went to look up an AD group to get the error message and… all my users were listed.  I've not quite tracked down what was wrong, but if you are getting this error and you know your DNS is square, try just re-adding the identity source.
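
For what it's worth, these are the two checks I kept re-running while troubleshooting, wrapped in a small illustrative Python sketch. The domain name is a hypothetical lab domain, and it assumes you are on the VCSA shell with dig available.

```python
#!/usr/bin/env python3
"""Sketch of the DNS and log checks described above (illustrative only).

Assumes you are on the VCSA shell, dig is installed, and DOMAIN is replaced
with your own Active Directory domain name.
"""
import subprocess

DOMAIN = "vxprt.local"  # hypothetical lab domain - substitute your own
LOG = "/storage/log/vmware/sso/ssoAdminServer.log"

# 1. Confirm the domain controllers are discoverable via AD's SRV records.
subprocess.run(["dig", "+short", "_ldap._tcp.%s" % DOMAIN, "SRV"], check=True)

# 2. Pull the recent SSO connection failures without scrolling the whole log.
with open(LOG) as f:
    hits = [line.rstrip() for line in f
            if "Failed to establish server connection" in line]
print("%d connection failure(s) logged" % len(hits))
for line in hits[-5:]:
    print(line)
```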

vCenter Active Directory SSO Error – Cannot load users for the selected domain
