With the impending release of Windows Server 2012, the Windows-based hypervisor will finally move beyond some of the embarrassing limitations found in the R2 release – such as 4 virtual CPUs per VM. Consequently, for the past year we have seen the Windows virtualization team beating the specsmanship drum with a mighty fervor. Yes, specsmanship has run rampant as PowerPoint decks around the globe extol the wonders: 4TB RAM! 1024 VMs per host! "Hyper-V has no limits!" proclaimed the marketeers last month at Microsoft TechEd, where we witnessed the latest round of specsmanship in the form of an industry-standard storage benchmark.
Standardized benchmarks are essential when comparing two different entities — provided the test parameters are, in fact, comparable. When measuring IO performance, one crucial element of an experiment is block size: measurements taken using different IO block sizes cannot be meaningfully compared, and amount to the proverbial apples-to-oranges comparison.
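The arithmetic behind that point is simple enough to sketch. The numbers below are hypothetical, chosen only for illustration: the same IOPS figure implies twice the data actually moved when the blocks are twice as large, which is why a 4KB result cannot be stacked against an 8KB one.

```python
def throughput_mb_s(iops: int, block_size_kb: int) -> float:
    """Sustained throughput (MB/s) implied by an IOPS figure at a given block size."""
    return iops * block_size_kb / 1024.0

# An identical 1,000,000 IOPS score means very different work depending on block size:
print(throughput_mb_s(1_000_000, 4))  # 4KB blocks  -> 3906.25 MB/s
print(throughput_mb_s(1_000_000, 8))  # 8KB blocks  -> 7812.5  MB/s
```

Put another way, hitting a given IOPS number with 8KB blocks requires the storage stack to push twice the bandwidth it would with 4KB blocks.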
With that in mind, it was surprising, and arguably inappropriate, to see Jeff Woolsey on stage repeatedly comparing the latest Windows Hyper-V IOPS capabilities with results from a paper published by the VMware performance team over a year ago, based on a completely different set of parameters. For some reason, the Microsoft virtualization marketing machine felt it was just fine to cherry-pick an old vSphere 5.0 8KB result and put it up against a 4KB test on their latest hypervisor.
To clarify, the VMware paper from 2011 was intended to highlight differences between certain configurations and not to demonstrate absolute performance limitations, explicitly stating:
The results presented in this paper are by no means the upper limit for the I/O operations achievable through any of the components used for the tests. The intent is to show that a vSphere virtual infrastructure, such as the one used for this study, can easily handle even the most extreme I/O demands that exist in datacenters today.
There is no denying that Microsoft was very proud of its new Windows-based hypervisor, bragging that the extra-large VM with 64 vCPUs was pushing the quad-socket hardware to the limit, as seen in demos from TechEd North America and Europe:
Evidently, it did not occur to the Hyper-V gang to ask VMware to provide an updated result using a more recent product release and equivalent Iometer benchmark specifications. Fortunately for everyone else, the VMware performance team recently had occasion to put together a quick IOPS performance demonstration with vSphere 5.1 in support of VMworld 2012. The results are intriguing, and appear to reveal why Microsoft preferred to showcase their new product against the year-old vSphere score that happened to be based on IO blocks double the size.
Here is what the scorecard looks like today:
One might draw an obvious conclusion from these results: Although Hyper-V 3 has substantially higher configuration maximums than it did in the paltry R2 release, it takes up to eight times more virtual resources to approach the awesome performance of VMware ESXi 5.1! Actual results trump specsmanship every single time.
While few VMs will ever need a million IOPS, it’s clear that storage can be a bottleneck in any serious virtual environment. Couple that with the dynamic nature of cloud computing and one can easily see why it’s more important than ever to be able to prioritize and fairly distribute access to storage resources across entire clusters — not just within the confines of an individual host. Storage I/O Control is the innovative technology designed to do that very thing, and it’s only available in VMware vSphere, not Windows Hyper-V 3. For efficient, predictable, and reliable storage performance for any workload, trust VMware vSphere for the foundation of your cloud.