April 23rd, 2015 by JFrappier

Jonathan Frappier Virtxpert

I’ve spent a fair amount of time over the last two years preparing for my VCDX. While the VCAP-DCA/VCIX-DCA is the last step before I fully dive in, I’ve been preparing myself not just for an exam, but to be the best possible architect I can be. To that end, preparation is an ongoing, ever-evolving process.

During this time, I have seen many blog posts about doing designs, talking about application requirements in terms of CPU, memory, disk (space, I/O, throughput), network, and security. I have seen some posts (even by current VCDXs) from vendors talking about how their solutions solve problems, even if they might not be the best solution. I’ve long said that most organizations need different products to support their infrastructure, and that no one solution is a fit for all workloads. Since this has gotten me in trouble in the past, I’ll add the caveat: nothing is 100% valid, just as no product is a 100% fit for all workloads. There very well could be environments that run, fully supported, on one “stack” of vendor hardware.

For example, maybe a solution is preferred for application A but ranks lower for application B. Some people’s take: is it worth deploying different solutions for different applications and adding complexity to your environment? The answer, as I think any good architect would say, is “It depends” – and not just on raw performance statistics like IOPS or CPU utilization. Let’s take a look.

If I were starting a design today, I would gather requirements from the business (including things like local, state, regional, or industry regulations), application owners, application performance requirements, team/training considerations, space, cooling, availability – it’s a long list. I would then compile an initial list of those requirements, known risks, and constraints before I even begin considering hardware. If you are engaging a vendor or VAR to help you in this process, and they are leading with a hardware solution and fitting your business onto it, that could be a red flag that your organization’s best interests are not front and center, but rather just making a sale of a product.

Now that I have all that information gathered, I need to once again step back and dive further into the applications the organization relies upon. Let’s say, for example, that from a performance and business perspective, 3x Isilon S210 nodes will provide sufficient near-term performance and capacity, and since adding additional nodes is easy, the organization can expand as needed. Great, right? Well, not if we failed to look at the application requirements. In my earlier example I mentioned application A and application B; let’s put names to those – call them Microsoft SQL Server 2014 and Microsoft Exchange 2013. Looking at the SQL Server requirements, it would appear our hardware selection is a fit – there are some considerations for Analysis Services and for clustering, but we can adhere to those requirements with the Isilon S210.

Now, let’s take a look at Exchange; hmmm interesting note here. It seems as though Exchange is NOT supported on NFS storage – even if it is presented and managed via a hypervisor:

A network-attached storage (NAS) unit is a self-contained computer connected to a network, with the sole purpose of supplying file-based data storage services to other devices on the network. The operating system and other software on the NAS unit provide the functionality of data storage, file systems, and access to files, and the management of these functionalities (for example, file storage).

All storage used by Exchange for storage of Exchange data must be block-level storage because Exchange 2013 doesn’t support the use of NAS volumes, other than in the SMB 3.0 scenario outlined in the topic Exchange 2013 virtualization. Also, in a virtualized environment, NAS storage that’s presented to the guest as block-level storage via the hypervisor isn’t supported.

So here is a scenario where the proposed hardware solution, which is NAS/NFS based, is not supported by one of the applications. In this case, the application vendor has a specific requirement above and beyond things like CPU, memory, or I/O. There is now an additional design decision: leverage two different solutions – one NFS-based, such as Isilon or NetApp, for applications that support NFS, and a block solution for Exchange? This dives into another level of “it depends” – if certain applications expect NFS storage somewhere, is it worth having two hardware solutions, or making changes to how the application works? For example, if you decided to go all block, either the application requiring NFS needs to change, or you add another layer of complexity and management by virtualizing an NFS server. On the flip side, do we run Windows servers to present SMB 3.0, since there is “limited support” for SMB 3.0? Is that really any less complex than two hardware solutions? There are many possibilities; it is all about building a resilient, reliable, and SUPPORTED solution. Make sure all OTS and custom applications are documented, and considerations such as supported configurations are taken into account before you move on to design and hardware selection.
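
This kind of support check boils down to a lookup against vendor documentation. Here is a purely illustrative sketch – the protocol sets reflect my reading of the notes above (Exchange 2013: no NFS, limited SMB 3.0; SQL Server appearing to fit the NFS-based Isilon), and all names and structure are hypothetical, not a real tool:

```python
# Toy support-matrix check: which apps rule out a proposed storage protocol?
# Protocol sets are illustrative, based on the vendor notes quoted in this post.
SUPPORTED_PROTOCOLS = {
    "Microsoft SQL Server 2014": {"block", "nfs"},
    "Microsoft Exchange 2013": {"block", "smb3"},  # NFS explicitly unsupported
}

def unsupported_apps(proposed_protocol, apps=SUPPORTED_PROTOCOLS):
    """Return the applications that do not support the proposed protocol."""
    return [app for app, protocols in apps.items()
            if proposed_protocol not in protocols]

# An all-NFS design (e.g., the Isilon proposal) flags Exchange:
print(unsupported_apps("nfs"))    # ['Microsoft Exchange 2013']
print(unsupported_apps("block"))  # []
```

The point isn’t the code – it’s that the check has to happen against documented, supported configurations before hardware is selected.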

Having this level of information helps you properly identify all the actual project requirements. After all, hardware, virtualization, storage, SDN, SDS, etc… are all about application availability and supportability, not about what kind of metal things run on! If your VAR/vendor is trying to sell you an unsupported solution, it is probably in your best interest to move along.

**Now that I’ve said that applications are the MOST important consideration, I’m sure someone will find the use case where they’re not, but I’m standing by this one – I’ve never had a CFO ask me if a virtual machine was having problems – it’s all about the apps!**

The MOST important infrastructure design consideration – the Applications!

Posted in Tech

April 23rd, 2015 by JFrappier


**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

In my last two posts I touched on what ViPR SRM can do, and the quick installation.

With the ViPR SRM installation out of the way, it’s time to start adding Solution Packs. Solution Packs are used to connect to various systems, such as VMware vCenter, so ViPR SRM can collect information about virtual machines, ESXi hosts, datastores, and HBAs. Additionally, you connect ViPR SRM to your switches and storage for, quite literally, an end-to-end view of your environment.

  • First, log into http://<your ViPR SRM server>:58080/APG and click on Administration (upper right corner)
  • Once you are in the Administration interface, click on Centralized Management on the left navigation menu, a new window or tab will open
  • In the new window, click on Solution Pack Center (back in the upper right corner)

vipr-srm-solution-packs

  • In the search box in the upper right corner, type vCenter to filter the results, and click on VMware vCenter
  • When the vCenter box opens, click on the install button.

virp-srm-vcenter-install-pack

  • Follow the wizard and review the options; it’s a basic next-next wizard. If you are using PowerPath, for example, check Enable the Host PowerPath alerts. Click through the remaining screens and finally click Install. ViPR SRM will go through and install the selected components.

vipr-srm-solutions-pack-vcenter-installed

  • Click OK. Repeat the above steps for the other packs your environment needs; at the very least, the Storage Compliance pack is useful. Here is the EMC XtremIO Solution Pack, which I will install and use for the examples that follow.

vipr-srm-solution-pack-xtremio

  • With the solution packs installed, we need to give each one connection information. Expand Discovery Center in the left navigation menu, expand Devices Management, and click on VMware vCenter
  • Click on the Add new device… button and fill in the information to connect to vCenter. I suggest using dedicated accounts for external services; for example, here is my app_viprsrm user account, which has admin privileges in vCenter. Click the Test button to confirm the account has access, then click OK. Repeat for additional vCenters or for any storage in your environment you added a pack for.


Don’t forget to click the Save button!


vcenter-vipr-srm-creds

Depending on your environment, you may also want to add your FC switches. Switch monitoring is done by adding a Solution Pack for your switch and connecting to it via SNMP. While logged in as admin, go to http://<your ViPR SRM server>:58080/device-discovery, click Collectors, click New Collector, and click Save. This will add an SNMP collector to the local VM. Once the collector is added, click on Devices, then New Device, and fill in the appropriate information.

vipr-srm-snp-device-discovery

With all switches added, check the box next to each one and click the magnifying glass icon under Actions; this will discover the switch.

ViPR SRM will now start collecting data. To expedite the process, click on Scheduled Tasks (left navigation menu), check the box for the “import-properties-default” task, and click the Run Now button. If you return to the User Interface (back on the Administration page, click User Interface) and go to Explore >> Hosts, you should see your vCenter hosts as well as virtual machines.

vipr-srm-vcenter-hosts

If you navigate to Explore >> Storage you should also see the storage devices you added.

vipr-srm-storage

With the configuration out of the way, I can now start to explore my environment with the various reports available, which I will do in the next post!

ViPR SRM Solution Packs for vCenter and XtremIO

Posted in Tech

November 15th, 2014 by JFrappier


I was going to do a post on NFS versus iSCSI, but to be honest, that debate is such old hat in my opinion that it doesn’t really matter.  Whether you use iSCSI or NFS is up to you, your application and business requirements, and any constraints in your infrastructure that may force you to lean one way or another.  Since I am an NFS networking ninja, clearly I am going to go the NFS route.  Let’s get started setting up NFS; if you haven’t already, log into your Synology DSM.

  • Click on the main menu button on the upper left and open Control Panel
  • Click on the File Services icon
  • I have no need for CIFS or AFP at this time, so I am going to disable them; expand the Windows File Service section and uncheck Enable Windows File Service; repeat for Mac File Service
  • Expand NFS service and check enable NFS
  • Click the Apply button
  • In the left navigation window click on Shared Folder
  • Click the create button
  • Provide the necessary details for your folder; I am naming mine vxprt-silver01-ds01, which will be on the SATA drives; click OK
  • Click on the NFS permissions tab and click the Create button
  • In the hostname/IP field enter the range for your ESXi hosts; in my case it’s all the same network, so 192.168.0.0/16
  • Click OK twice
  • Make note of the mount path value, we’ll need that later
  • Repeat for the folder on the SSD volume; I am naming this folder vxprt-gold01-ds01
  • You should now have two folders created

Synology NFS shares created in DSM
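
Since the NFS permission rule is just a CIDR match, you can sanity-check that your ESXi VMkernel addresses fall inside the allowed range with Python’s standard-library ipaddress module. The 192.168.0.0/16 range is from this post; the host IPs below are made-up examples:

```python
import ipaddress

# NFS permission rule created on the Synology (from this post).
export_net = ipaddress.ip_network("192.168.0.0/16")

# Hypothetical ESXi VMkernel addresses to validate against the export.
esxi_hosts = ["192.168.0.21", "192.168.0.22", "10.0.0.5"]

for host in esxi_hosts:
    allowed = ipaddress.ip_address(host) in export_net
    print(f"{host}: {'allowed' if allowed else 'NOT allowed by export'}")
```

A host outside the range (like the 10.0.0.5 example) would fail to mount the datastore, so a quick check like this can save a troubleshooting session.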

Next I need to connect to my NFS share from the ESXi hosts.  Typically I’d have NFS on its own VLAN, but without a switch in my home lab that supports VLANs, it will be riding with all my other network traffic.

  • Log into the vCenter Web Client
  • Click on vCenter >> Hosts and Clusters
  • Select your cluster, click on the Related Objects tab >> Datastores
  • Click the icon to add a new datastore, click Next
  • Select NFS and click Next
  • Enter the datastore name, in my case vxprt-silver01-ds01
  • Enter the server IP address and the path you noted in the previous section, in my case /volume1/vxprt-silver01-ds01 – click Next
  • Select both/all hosts in the cluster you want to have access, and click Next, then Finish

The datastore should now be available on both hosts (Click on the host >> related objects >> datastores) as seen below.  Repeat for the gold datastore.

synology-nfs-datastore

Now that the datastores are created, I am going to create an “ISO” folder on the silver datastore to hold my Linux ISOs and build virtual machines in vCenter.

Setting up NFS on the Synology Diskstation 1513+ for ESXi

Posted in Tech

November 14th, 2014 by JFrappier


In order to provide shared storage for my home lab, I am going to use a Synology DS1513+.  In my lab, the DS1513+ is connected to a switch, which is connected to my home router; this allows me to use http://find.synology.com to start configuring it.

Synology DS1513+

My Synology is configured with 2x 120GB Corsair Neutron SSDs and 3x 2TB Seagate SATA drives.  On the https://find.synology.com page, click on the Connect button to get started.

  • Log in as admin with no password
  • Click on the Main Menu button in the upper left corner and start Control Panel
  • The Synology used DHCP to find an address on your network so we could connect and set it up.  We do not want DHCP to keep assigning the address, especially since we will be using this for ESXi host storage (at least I will)
  • In Control Panel click on Network >> Network Interface, select the connected port, and click the Edit button
  • Choose manual configuration, enter an IP address outside the scope of your DHCP server (or home router), and click the OK button
  • With the networking configuration done, it’s time to start configuring storage!

My Corsair drives do not seem to be compatible with Synology SSD cache; I don’t have the option to create it even though I should have enough memory for at least a portion of the SSDs to be used as cache.  In any case, given what I had for parts, I’ll just use the 2x SSDs as an all-flash volume for my hosts and the 3x SATA drives as another.

  • Click on the Main Menu button in the upper left corner and start Storage Manager
  • When Storage Manager opens, click on Volumes (depending on your SSDs, you could poke around and see whether SSD cache is an option)

If your Synology shipped with drives already installed, it likely had a volume created, which is now unavailable because you removed two of the drives.  In that scenario, remove any existing volumes.  If it was ordered with no drives, then (as older models did for me) you can just create the new volumes and do not need to delete anything.

Synology Storage Manager


  • Click on the Volume menu, then click the Create button
  • For general-purpose use I put my trust in Synology SHR volumes, but in this case I want a bit more control and am not so concerned about data loss since it’s just a lab.  I am going to choose Custom in the wizard to select my own RAID type
  • Choose either single or multiple volumes on RAID (I’ve selected single)
  • Select the 3x 2TB drives, and click OK when prompted about erasing the disks
  • On the RAID selection screen, choose the RAID type you are most comfortable with given what you are running…for me – RAID0 across all 3 drives
  • In most cases choose Yes to check the disks; these shipped with the Synology and are new, so I’ve selected No here for time’s sake
  • Click Apply – your volume will be created
  • If, like me, you still have drives in your Synology to use, repeat for the remaining drives.  Once the volume is created for the SSDs, click on the SSD TRIM button to enable it.
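
The usable-capacity trade-off behind that RAID choice is easy to reason about. A rough back-of-the-envelope sketch for the drives above (3x 2TB SATA, 2x 120GB SSD), ignoring filesystem overhead and base-2/base-10 differences:

```python
def usable_tb(drive_tb, count, raid):
    """Approximate usable capacity for a few common RAID levels."""
    if raid == "raid0":   # striping, no redundancy - any drive failure loses all
        return drive_tb * count
    if raid == "raid1":   # mirroring across two drives
        return drive_tb
    if raid == "raid5":   # one drive's worth of capacity lost to parity
        return drive_tb * (count - 1)
    raise ValueError(f"unhandled RAID level: {raid}")

print(usable_tb(2, 3, "raid0"))  # the RAID0 choice above: 6 TB, zero fault tolerance
print(usable_tb(2, 3, "raid5"))  # the safer alternative: 4 TB, survives one failure
```

For a lab where data loss is acceptable, the extra 2TB from RAID0 is a reasonable trade; for anything production-like, it isn’t.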

And there you have it, the Synology volumes are created.  Up next: iSCSI or NFS? (Hint: I passed the Chris Wahl NFS Ninja training at the Boston VMUG)

Setting Up the Synology DS1513+

Posted in Tech

November 14th, 2014 by JFrappier


I just got a Synology DS1513+ and wanted to try out the SSD cache.  Having never powered it on, I pulled two of the 2TB Seagate drives and installed 2x Corsair SSDs.  Once I powered on the device, it started beeping and wouldn’t stop.  It turns out that when the unit ships with drives, there is an existing volume already created; the beeping was an error because I had essentially broken that volume by removing the two 2TB drives.  To turn off the beeping, do the following:

  • Log into DSM, since I am assuming this is a new deployment you can find the IP at https://find.synology.com
  • Log in as admin with no password
  • The control panel window will open
  • Click on Beep off, take aspirin to fix the headache
  • Close the control panel window
  • In Storage Manager you will see Volume 1 in a crashed state; highlight it and click Remove
  • Click OK, then Yes to confirm deleting the volume
  • You should now see no volumes in Storage Manager, and the DiskStation’s health will change to Good
  • You can now go about creating volumes as you see fit

Having purchased other Synology units with no drives in them, I didn’t expect the volume to already exist.  If your Synology is beeping, log in and check it out!

Synology DS1513+ beeping after installing SSDs (New deployment)

Posted in Tech