**Disclaimer: I am an EMC employee. This post was not sponsored or in any way required by my employer; it reflects my own experience getting to know this particular product.**
There were two software-related announcements at EMC World this week that I found very exciting. Building on the free-for-non-production-use release of RecoverPoint for Virtual Machines at VMworld 2014, EMC announced the same for ScaleIO. ScaleIO allows you to build your own hyperconverged infrastructure (HCI) solution. It is the same software used in the new VxRack from VCE, which was also announced at EMC World.
In addition to ScaleIO, EMC also announced CoprHD (@coprhd), an open source version of EMC ViPR. ViPR (which is also free for non-production use) is a solution that allows you to manage multiple arrays and present them as virtual volumes to hosts. In addition to managing the arrays, it also provides self-service and automation at the storage layer. EMC ViPR also supports ScaleIO; assuming this carries over to CoprHD, you could deploy a fully managed and automated storage solution on commodity hardware for test/dev or QA (I hope they publish more specific guidelines on just what they mean by "non-production").
Last, but not least, there is the vVNX, a community version of the VNXe that you can use to provide full block and file services on commodity hardware. The vVNX will later come in supported ROBO and cloud editions.
My hope is that CoprHD, ScaleIO, and the community edition of the vVNX will lead to more solutions being open sourced and offered under a free-to-use model. CoprHD should be available on GitHub by June and ScaleIO by the end of May, while the vVNX is available for download now.
If you are running EMC ViPR SRM and your license key expires, you will no longer be able to log into the UI, where you would normally install a new license key. Instead, you will need to update the license(s) via the command line. The directions I found had a mistake and were unclear, so I thought I'd publish the steps that worked for me here.
First and foremost, obtain your new license key by submitting an SR (Service Request) via support.emc.com, then follow the steps below.
/opt/APG/bin/manage-licenses.sh install /opt/APG/licenses.zip
/opt/APG/bin/manage-modules.sh service restart tomcat
You should now be able to log in.
If you are a customer, partner, or EMC employee and you are subscribed to the ETA notifications list, you probably got a heads up about potential incompatibility between the VNX, RecoverPoint, and VAAI under certain conditions. For those of you who are not subscribed, fellow blogger Cormac Hogan wrote a quick little post about the issue.
I’m proud to announce that a fix is available for this: VNX Block OE 05.32.000.5.206 (released this week). Simply apply the update (you can do it yourself using USM). For those of you with a FILE front end, make sure you update to 22.214.171.124 as well.
If you are an EMC customer (with Support Zone credentials), you can read the full description in solution emc327099 (now stored in the new knowledgebase powered by Salesforce). If the direct link is not working, simply log in to http://support.emc.com and search for “emc327099”; your first result should be the solution.
With a new year comes a HUGE update to the VNX family. As Chad Sakac revealed earlier in the year, INYO was the code name for the VNX FILE OE 7.1 and BLOCK OE 05.32 code release that surfaced last year. Now the time has come for a major update to the code, and with it some exciting new features.
On the FILE side, the biggest (and what I think is the most exciting) feature is support for SMB 3.0, and the VNX is the first array to support it. Back in October of 2012, Microsoft released its latest versions of the Windows operating system (Windows 8 and Server 2012). With those came the latest enhancements to the SMB protocol (for more information, click here to read a great blog post by Microsoft). With this upgrade (and the use of the SMB 3.0 protocol) you get much less disruptive failover, which includes keeping the open state of a file and its file lock. You will also notice enhanced throughput, as you can take advantage of multipath I/O over SMB 3.0 without needing to configure LACP or EtherChannel.
On the BLOCK side, the VNX gains ODX support and the ability to offload copies to the array. This cuts down on host CPU as well as SAN bandwidth, since the transfers never leave the array. It works by breaking the copy down into a series of tokens that are passed between the hosts while the data is moved between LUNs on the array (as demonstrated in the chart below).
A couple of things to note. This does require an enabler, but installing it does not require an SP reboot. You will, however, have to reboot the host (a Microsoft limitation, not an EMC one). You will also need to use Microsoft MPIO or the latest version of PowerPath, as well as an NTFS file system (with an allocation unit size of 8 KB or larger for better performance).
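The token exchange described above can be sketched roughly as follows. This is a toy Python model of the idea, not EMC code; the class and method names are my own invention, and real ODX operates on block ranges rather than whole LUNs:

```python
# Toy model (not EMC code) of a token-based offloaded copy: the host
# exchanges only small tokens with the array, never the data payload.

class ToyArray:
    """Hypothetical array that copies data internally when given a token."""
    def __init__(self):
        self.luns = {}        # LUN name -> contents
        self.tokens = {}      # token -> source data reference
        self.host_bytes = 0   # bytes that crossed the host/SAN link

    def populate_token(self, lun):
        # Host asks the array to represent a source region as a token.
        token = f"token-{len(self.tokens)}"
        self.tokens[token] = self.luns[lun]
        return token          # only this tiny token travels to the host

    def write_using_token(self, token, dest_lun):
        # Host hands the token back; the array moves the data internally.
        self.luns[dest_lun] = self.tokens[token]

array = ToyArray()
array.luns["lun0"] = b"x" * 1_000_000   # 1 MB source LUN

# Offloaded copy: note that no payload bytes ever touch the host.
t = array.populate_token("lun0")
array.write_using_token(t, "lun1")

print(array.luns["lun1"] == array.luns["lun0"])  # True
print(array.host_bytes)                          # 0
```

In a traditional host-based copy, every byte of the source would be read up to the host and written back down; with the token approach, only the token crosses the wire, which is where the CPU and bandwidth savings come from.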
Also included in this release were several enhancements revolving around VAAI support on the VNX. Most notably, these included the XCOPY fix described in solution emc313487 as well as a big performance improvement to VAAI Fast Clones. Chad has more on that subject here.
Finally, there is another enhancement that I wish I had had when I was in tech support. Starting with the new version of Unisphere Service Manager (126.96.36.199.0068), you will find a one-click health check available after the main login. You may remember a previous blog post I did on how to run health checks on the VNX. Now you can run a single check to verify the health of your array (BLOCK, FILE, or Unified); just click the health check link on the right-hand side. I have attached a screenshot below showing what the output of a healthy array looks like.
So what are you waiting for? Get out there and enjoy these new enhancements. Remember, you don’t have to wait for EMC to upgrade your array, you can do it yourself using USM.
Source: Thulin Around
I recently sat in on an internal VNX (and CLARiiON) performance crash course that was put together to help our new hires get up to speed. One of the things that stuck out to me was the subject of iSCSI and how it interacts with host TCP delayed acknowledgement (delayed ACK).
So what is delayed ACK? As part of TCP, for every packet sent to a destination server, that server must send an acknowledgement back to the source. This is how the source knows the information was successfully transmitted. It adds a fair amount of overhead, so in an effort to improve performance, TCP delayed acknowledgement (RFC 1122) was created, which allows a destination server to respond to every second packet instead. This has become so popular that delayed ACK is enabled by default in many popular client operating systems, including Microsoft Windows and VMware ESX/i.
The problem is that many storage arrays do not support delayed ACK for one reason or another (usually having to do with chipset drivers). What happens in this case is that the array sends a packet and then waits for an acknowledgement before sending a second packet. Meanwhile, the host is waiting for a second packet before sending an acknowledgement. This standoff between the array and the host lasts until the acknowledgement timeout (usually around 200 ms) expires, and only then does traffic continue. This wreaks havoc on performance if every packet has to wait 200 milliseconds before the next one is sent. So if you've set up iSCSI and you are immediately seeing a performance issue, check your hosts to see if delayed ACK is enabled, and turn it off to see if performance improves.
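To put that 200 ms timeout in perspective, here is a rough back-of-the-envelope calculation. The packet size and healthy round-trip time are my own illustrative assumptions, not measurements:

```python
# Back-of-the-envelope sketch of why a ~200 ms delayed-ACK stall is so
# damaging to iSCSI throughput. Numbers are illustrative assumptions.

def throughput_mb_s(transfer_bytes, per_transfer_delay_s):
    """Effective throughput if every transfer waits per_transfer_delay_s."""
    return transfer_bytes / per_transfer_delay_s / 1_000_000

transfer = 64 * 1024          # assume a 64 KB transfer per exchange
healthy_rtt = 0.0005          # assume a healthy 0.5 ms round trip
stall = 0.200                 # the ~200 ms delayed-ACK timeout

print(f"healthy: {throughput_mb_s(transfer, healthy_rtt):.0f} MB/s")
print(f"stalled: {throughput_mb_s(transfer, stall):.2f} MB/s")
```

Even with generous assumptions, stalling every exchange for the full timeout drops throughput by several orders of magnitude, which matches the "immediate, obvious performance problem" symptom described above.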
Disabling Delayed ACK in Microsoft Windows
In Microsoft Windows operating systems, you can simply set the TcpAckFrequency registry value to 1. More information can be found in Microsoft KB 328890. On a side note, I found that if the registry value is missing, you can create it in the path specified in the KB and reboot the host.
Disabling Delayed ACK in VMware ESX and ESXi
VMware has created KB 1002598 to address this as well. The adjustment is made per adapter instance; you can change the setting on a discovery address, on a target, or (in my case) globally. Once you've made the change, reboot the host and enjoy the performance boost.
I hope you’ve found this information useful. It may not solve your iSCSI performance problem, but it is a good place to start.