I’ve spent a fair amount of time over the last two years preparing for my VCDX. While the VCAP-DCA/VCIX-DCA is the last step before I fully dive in, I’ve been preparing myself not just for an exam, but to be the best architect I can be. To that end, preparation is an ongoing, ever-evolving process.
During this time, I have seen many blog posts about doing designs, talking about application requirements in terms of CPU, memory, disk (space, I/O, throughput), network, and security. I have also seen posts (even by current VCDXs) from vendors talking about how their solutions solve problems, even when they might not be the best solution. I’ve long said that most organizations need different products to support their infrastructure, and that no one solution fits all workloads. This has gotten me in trouble in the past, so to be clear: nothing is 100% valid, just as no product is a 100% fit for every workload. There may well be environments that run, fully supported, on one “stack” of vendor hardware.
For example, maybe a solution is preferred for application A but ranks lower for application B. Some people ask: is it worth deploying different solutions for different applications and adding complexity to your environment? The answer, as I think any good architect would say, is “It depends” – and not just on raw performance statistics like IOPS or CPU utilization. Let’s take a look.
If I were starting a design today, I would gather requirements from the business (including things like local, state, regional, or industry regulations), application owners, application performance requirements, team/training considerations, space, cooling, availability – it’s a long list. I would then compile an initial list of those requirements, known risks, and constraints before I even begin considering hardware. If you are engaging a vendor or VAR to help you in this process and they lead with a hardware solution, fitting your business onto it, that could be a red flag that your organization’s best interests are not front and center, and that the priority is simply making a sale.
Now that I have all that information gathered, I need to once again step back and dive further into the applications the organization relies upon. Let’s say, for example, that from a performance and business perspective, 3x Isilon S210 nodes will provide sufficient near-term performance and capacity, and since adding additional nodes is easy, the organization can expand as needed. Great, right? Well, not if we failed to look at the application requirements. Earlier I mentioned application A and application B; let’s put names to those – Microsoft SQL Server 2014 and Microsoft Exchange 2013. Looking at the SQL Server requirements, it would appear our hardware selection is a fit – there are some considerations for analysis services and for clustering, but we can adhere to those requirements with the Isilon S210.
Now, let’s take a look at Exchange; hmm, there is an interesting note here. It seems Exchange is NOT supported on NFS storage – even if it is presented and managed via a hypervisor:
A network-attached storage (NAS) unit is a self-contained computer connected to a network, with the sole purpose of supplying file-based data storage services to other devices on the network. The operating system and other software on the NAS unit provide the functionality of data storage, file systems, and access to files, and the management of these functionalities (for example, file storage).
All storage used by Exchange for storage of Exchange data must be block-level storage because Exchange 2013 doesn’t support the use of NAS volumes, other than in the SMB 3.0 scenario outlined in the topic Exchange 2013 virtualization. Also, in a virtualized environment, NAS storage that’s presented to the guest as block-level storage via the hypervisor isn’t supported.
So, here is a scenario where the proposed hardware solution, which is NAS/NFS based, is not supported by one of the applications. In this case, the application vendor has a specific requirement above and beyond things like CPU, memory, or I/O. There is now an additional design decision: do we leverage two different solutions – an NFS platform such as Isilon or NetApp for the applications that support NFS, and a block solution for Exchange? This leads to another level of “it depends” – if certain applications expect NFS storage somewhere, is it worth having two hardware solutions, or changing how the application works? For example, if you decided to go all block, either the application requiring NFS has to change, or you add another layer of complexity and management by virtualizing an NFS server. On the flip side, do we run Windows servers to present SMB 3.0, since there is “limited support” for SMB 3.0? Is that really any less complex than two hardware solutions? There are many possibilities; it is all about building a resilient, reliable, and SUPPORTED solution. Make sure all off-the-shelf (OTS) and custom applications are documented, and that considerations such as supported configurations are taken into account before you move on to design and hardware selection.
Having this level of information helps you properly identify all the actual project requirements. After all, hardware, virtualization, storage, SDN, SDS, etc. are all about application availability and supportability, not about what kind of metal things run on! If your VAR/vendor is trying to sell you an unsupported solution, it is probably in your best interest to move along.
**Now that I’ve said that applications are the MOST important consideration, I’m sure someone will find a use case where they’re not, but I’m standing by this one – I’ve never had a CFO ask me if their virtual machine was having problems – it’s all about the apps!**
**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**
In my last two posts I touched on what ViPR SRM can do, and the quick installation.
With the ViPR SRM installation out of the way, it’s time to start adding Solution Packs. Solution Packs are used to connect to various systems, such as VMware vCenter, so ViPR SRM can collect information about virtual machines, ESXi hosts, datastores, and HBAs. Additionally, you connect ViPR SRM to your switches and storage for, quite literally, an end-to-end view of your environment.
Don’t forget to click the Save button!
Depending on your environment, you may also want to add your FC switches. Switch monitoring is done by adding a Solution Pack for your switch and connecting to it via SNMP. While logged in as admin, go to http://:58080/device-discovery, click Collectors, click New Collector, and Save. This will add an SNMP collector to the local VM. Once the collector is added, click Devices, then New Device, and fill in the appropriate information.
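Before registering a switch as a device, it can save some back and forth to confirm it actually answers SNMP from the collector VM. A minimal sketch using Net-SNMP’s snmpwalk – the IP address and community string below are placeholders, not values from my lab:

```shell
# Query the switch's sysDescr (OID 1.3.6.1.2.1.1.1.0) over SNMP v2c.
# Replace 10.0.0.5 with your switch's management IP and "public" with
# the read-only community string configured on the switch.
snmpwalk -v2c -c public 10.0.0.5 1.3.6.1.2.1.1.1.0
```

If this times out, fix SNMP on the switch (or the network path) before adding the device in ViPR SRM; the discovery step will fail for the same reason otherwise.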
With all switches added, click the check box next to each switch, and click the magnifying glass icon under Actions; this will discover the switch.
ViPR SRM will now start collecting data. To expedite the process, click Scheduled Tasks (left navigation menu), check the box for the “import-properties-default” task, and click the Run Now button. If you return to the User Interface (back on the Administration page, click User Interface) and go to Explore >> Hosts, you should see your vCenter hosts as well as virtual machines.
If you navigate to Explore >> Storage you should also see the storage devices you added.
With the configuration out of the way, I can now start to explore my environment with the various reports available, which I will do in the next post!
I was going to do a post on NFS versus iSCSI, but to be honest that debate is such old hat in my opinion that it doesn’t really matter. Whether you use iSCSI or NFS is up to you, your application and business requirements, and any constraints in your infrastructure that may force you to lean one way or another. Since I am an NFS networking ninja, clearly I am going to go the NFS route. Let’s get started on setting up NFS; if you are not already, log into your Synology DSM.
Next I need to connect to my NFS share from the ESXi hosts. Typically I’d have NFS on its own VLAN, but without a switch in my home lab that supports VLANs, it will be riding with all my other network traffic.
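The same mount can be done from an ESXi SSH session instead of clicking through the web client; handy when you have several hosts. A minimal sketch with esxcli – the Synology IP, export path, and datastore name below are placeholders for whatever your lab uses:

```shell
# Mount the Synology NFS export as a datastore on this ESXi host.
# 192.168.1.50, /volume1/silver, and "silver" are placeholder values.
esxcli storage nfs add --host=192.168.1.50 --share=/volume1/silver --volume-name=silver

# Verify the datastore mounted successfully.
esxcli storage nfs list
```

Run the same command on each host so the datastore shows up everywhere; repeat with the gold export for the second datastore.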
The datastore should now be available on both hosts (Click on the host >> related objects >> datastores) as seen below. Repeat for the gold datastore.
Now that the datastores are created, I am going to create an “ISO” folder on the silver datastore to hold my Linux ISOs, and build virtual machines in vCenter.
In order to provide shared storage to my home lab, I am going to use a Synology DS1513+. In my lab, the DS1513+ is connected to a switch, which is connected to my home router; this allows me to use http://find.synology.com to start configuring it.
My Synology is configured with 2x 120GB Corsair Neutron SSDs and 3x 2TB Seagate SATA drives. On the https://find.synology.com page, click the Connect button to get started.
My Corsair drives do not seem to be compatible with Synology SSD cache; I don’t have the option to create it, even though I should have enough memory for at least a portion of the SSDs to be used as cache. In any case, given what I had for parts, I’ll just use the 2x SSDs as an all-flash volume for my hosts and the 3x SATA drives as another volume.
If your Synology ships with drives already installed, it likely has a volume created, which will now be unavailable because you removed two of the drives. In that scenario, remove any existing volumes. If it was ordered with no drives, then I believe, as with older models I’ve used, you can just create the new volumes and do not need to delete anything.
And there you have it, Synology volumes are created. Up next, iSCSI or NFS? (Hint: I passed the Chris Wahl NFS Ninja training at the Boston VMUG.)
I just got a Synology DS1513+ and wanted to try out the SSD cache. Having never powered it on I pulled two of the 2TB Seagate drives and installed 2x Corsair SSDs. Once I powered on the device, it started beeping and wouldn’t stop. Turns out that when shipped with drives there is an existing volume already created. The beeping was an error because I basically broke the volume removing the two 2TB drives. To turn off the beeping, do the following:
Having purchased other Synology units with no drives in them, I didn’t expect the volume to already exist. If your Synology is beeping, log in and check it out!