Hyper-V not ready yet

A while ago, I came up with a list of reasons why Hyper-V isn’t yet competitive with vSphere. Following on from TA 8188 Competitive Platform Shootout, I thought I’d update the list.

It’s now up to 35 items:

  1. Limited support for guest OSes (e.g. RHEL4 not supported)
  2. Paravirtualised drivers, not full virtualisation
  3. Only uniprocessor Linux VMs supported.
  4. 4-way VMs only supported in Windows 2008
  5. No 8-way or above VMs
  6. 20% lower performance across 1-, 2- and 4-way VMs (compared with ESXi 4)
  7. No emulated SCSI driver for Linux (boot from IDE only; SCSI is supported only as a paravirtualised driver)
  8. Lack of support in the wider ecosystem (such as Lab Manager, Site Recovery Manager, NetApp SMVI).
  9. No support for running VMs on Network filesystems (NTFS on FC or iSCSI only)
  10. vMotion equivalent is “Quick Migration only” (small outage when changing hosts).
  11. No Storage vMotion
  12. NTFS isn’t designed as multi-write filesystem (Cluster Shared Volumes do add a shim on top, via a coordinator node)
  13. Limited support from third-party DR and backup apps to support CSVs.
  14. No equivalent at this stage of VMware DRS, DPM, HA, FT etc.
  15. No Memory over-commitment to VMs
  16. Requires Vista, Windows 7 or Windows 2008 to administer (No client for Windows XP).
  17. NIC Teaming in Hyper-V isn’t supported
  18. Can’t remove VHDs with SCVMM
  19. Cannot do simultaneous Live Migrations [vs 8 at once with ESX/ESXi 4.1]
  20. Hyper-V has a huge disk footprint compared with ESXi
  21. Hyper-V has a large memory footprint
  22. VMware drivers are specific to the task in hand – virtualisation, and not general purpose OS drivers (fewer drivers, with more attention to correctness and stability).
  23. All IO in Hyper-V is passed through the parent partition, rather than going direct.
  24. No support for large and nested memory pages
  25. Can’t patch Hyper-V online (e.g. with Update Manager)
  26. Can’t hot-add resources to a VM
  27. No storage and network resource maps.
  28. Can’t cleanly shut down a Linux VM
  29. No single pane of glass for Hyper-V management [eg Cluster Manager, VMM, Perfmon, NIC Teaming … ]
  30. No upgrade path from Hyper-V R1 to R2
  31. NAS Support lacking from Hyper-V
  32. Can’t extend virtual disks in Hyper-V
  33. Thin-provisioning not supported in Hyper-V
  34. Poor snapshot management
  35. No resource controls in Hyper-V

7 Responses to Hyper-V not ready yet

  1. ian0x0r says:

    Point 10 is incorrect, you can Live Migrate with Hyper-V R2 (and you don’t need a specific version of ESX/i for it either)
    Point 14, you can get some of this functionality by integrating with Operations Manager and PRO tips for SCVMM, probably System Center Essentials as well
    Point 15. TPS for newer OSes isn’t as effective on ESX. Dynamic Memory on Hyper-V allows a different kind of memory over-commit.
    Point 16, there are ways of getting the client to work on XP, although granted it is a PITA
    Point 25. Thought it was best practice to put an ESX host into maintenance mode before patching, thus evacuating the VMs before remediation? Hyper-V would be the same.
    Point 26. You CAN hot add a VHD attached to the SCSI bus (a quick sketch follows below)
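
    For what it’s worth, a minimal sketch of that hot-add, driving the Add-VMHardDiskDrive cmdlet (which ships with the later Hyper-V PowerShell module, not the 2008 R2 bits themselves) out of Python; the VM name and VHD path are made-up placeholders:

        import subprocess

        def hot_add_vhd(vm_name: str, vhd_path: str) -> None:
            """Attach a VHD to a running VM's synthetic SCSI controller.

            Add-VMHardDiskDrive is from the later Hyper-V PowerShell
            module; on 2008 R2 the same operation goes through SCVMM
            or the WMI provider instead.
            """
            cmd = (
                f"Add-VMHardDiskDrive -VMName '{vm_name}' "
                f"-ControllerType SCSI -Path '{vhd_path}'"
            )
            subprocess.run(["powershell.exe", "-Command", cmd], check=True)

        hot_add_vhd("sql01", r"C:\VHDs\sql01-data02.vhd")  # hypothetical names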

  2. Jeff Wouters says:

    Alright, here I go:
    10.vMotion equivalent is “Quick Migration only” (small outage when changing hosts).
    Ever heard of “Live Migration”? No outage!
    11.No Storage vMotion
    It’s a hypervisor, not a storage tool. VMware is more than just a hypervisor; it also has storage tooling in it… but you pay for it as well…
    14.No equivalent at this stage of VMware DRS, DPM, HA, FT etc.
    Use System Center.
    15.No Memory over-commitment to VMs
    Dynamic Memory is in Hyper-V 2008 R2 SP1… it accomplishes the same thing in another way (see the sketch after this list).
    16.Requires Vista, Windows 7 or Windows 2008 to administer (No client for Windows XP).
    Half correct, use tooling from http://www.utharam.com
    17.NIC Teaming in Hyper-V isn’t supported
    Use Hyper-V 2008 R2 or later.
    18.Can’t remove VHDs with SCVMM
    Again, use Hyper-V 2008 R2 or later.
    21.HyperV has a large memory footprint
    Take a look at this nice post: http://blogs.technet.com/b/virtualization/archive/2009/08/12/hypervisor-footprint-debate-part-1-microsoft-hyper-v-server-2008-vmware-esxi-3-5.aspx
    22.VMware drivers are specific to the task in hand – virtualisation, and not general purpose OS drivers (fewer drivers, with more attention to correctness and stability).
    26.Can’t hot-add resources to a VM
    What resources? Memory? Use Dynamic Memory. Hot-add CPUs? Correct, not possible.
    27.No storage and network resource maps.
    Again, it’s a hypervisor! Use System Center for this.
    30.No upgrade path from Hyper-V R1 to R2
    Look at “Method one” in http://support.microsoft.com/kb/957256 . The error is only in the product to protect system administrators from themselves…
    32.Can’t extend virtual disks in Hyper-V
    It is possible in Hyper-V 2008 R2 SP1, don’t know for sure about earlier versions.
    34.Poor snapshot management
    Be more specific! What functionality do you find missing here?

    In general, I like posts from people who say what they are missing in a product. It makes the lives of product managers easier, since they hear from the world what to improve.
    *) But please, if you write an article, be sure to base it upon the latest version of the products. If I were to write that Active Directory doesn’t have a recycle bin, it would surely not be based upon Windows Server 2008 R2 (SP1).
    *) Elaborate your opinions: don’t just make a huge list of examples, explain why each item ended up on that list!
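
    To make the Dynamic Memory point concrete, here is a minimal sketch, again driving the later Hyper-V PowerShell module from Python; the VM name and memory sizes are made-up placeholders:

        import subprocess

        def enable_dynamic_memory(vm_name: str) -> None:
            """Switch a VM to Dynamic Memory: it boots with 1 GB and the
            hypervisor balloons it between 512 MB and 4 GB on demand.

            Set-VMMemory is from the later Hyper-V PowerShell module; on
            2008 R2 SP1 itself this was set through Hyper-V Manager.
            """
            cmd = (
                f"Set-VMMemory -VMName '{vm_name}' -DynamicMemoryEnabled $true "
                "-MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB"
            )
            subprocess.run(["powershell.exe", "-Command", cmd], check=True)

        enable_dynamic_memory("app01")  # hypothetical VM name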

    • Chris Wells says:

      @10 – Ok – Good – Live migration now exists. How many can you do at once?
      @11 – for a large datacenter, storage migration is a hugely beneficial tool
      @15 – Isn’t R2 SP1 still beta?
      @17 – Is heterogeneous NIC teaming supported?
      @18 – I’ll take your word for that
      @21 – Yeah – read that ages ago. ESXi 4 can be cut down to about 60MB. The majority of the installation image is actually the client. Also, have a look at http://www.vcritical.com/2009/08/the-vmware-esxi-4-64mb-hypervisor-challenge/
      @22 – and that’s a positive thing. By limiting your support to Enterprise-only hardware, you can ensure that more time and effort is spent getting the drivers right, rather than supporting every esoteric piece of hardware under the sun.
      @26 – Yes. vCPUs. But it’s moot anyway, since it’s easy to add and remove vCPUs while a VM is off, then power it up again quickly.
      @11/27 – No. Those features are part of vCenter – the administration GUI. The monitor and VMM are part of the ESX VMkernel.
      @30 – I think you’ve proved the point; hardly easy.
      @34 – I believe (from the vcritical.com guy) that you can’t remove snapshots whilst the VM is on. Also – and I’m just repeating what he said here – that snapshots are discouraged by Microsoft.

      Thanks for the feedback. It’s a list, not an essay, but I’m glad you had a read.

      *) For what it’s worth, we have 3,000 VMs, so we’re only going to install an Enterprise-ready system; Hyper-V didn’t exist 7 years ago, and other than price**, there’s no competitive advantage in switching.
      **) Not that that’s a bad thing. Always good to keep vendors honest.

  3. @1 Hyper-V Integrations are now a native part of the Linux distribution. This is not something VMware can boast of. This makes the number of supported Linux guests no longer an issue
    @2 A virtualised driver uses emulation = sloooowww. Paravirtualised uses the very fast VMBus. No interaction with user mode in the parent partition, in contrast to emulated devices
    @3 With the Linux Integrations for Hyper-V v2.1, 4 virtual processors are now supported
    @4 Windows Server 2008, Server 2008 R2, Linux RH 5 and up
    @5 Correct, will probably only need a Windows Update. Unlike VMware, Microsoft does not have experimental status for new features.
    @6 Difficult to sustain without unbiased performance tests
    @7 There is SCSI support for Linux, but as a synthetic (paravirtualised) device. SCSI is not supported at boot time until the Integrations are loaded. After the synthetic driver is loaded, the VM is very fast. No real limitation.
    @8 Very wide support on the other side of vFront. HP, Citrix and many others are working with ever-increasing enthusiasm. We see great benefit from their partnership.
    @9 Fibre Channel, iSCSI and now SAS are added for great storage support. NAS proved to be unreliable. Don’t know about NFS, but we don’t see that a lot in the Windows world
    @10 vMotion is NOT equivalent to Quick Migration. Live Migration is available from R2 onwards and supports zero-downtime migration of VMs, often without losing a single ping.
    @11 Virtual Machine Manager adds Quick Storage Migration, which takes VSS snapshots to capture the changes during the storage migration. Yes, a short downtime is incurred during the switchover.
    @12 CSV is a very clever construct which builds on years of experience, compatibility and reliability, and allows Direct I/O for multiple VMs spread across multiple hosts to a single CSV. It can grow and shrink online and fail over without downtime to VMs. Yes, it requires an owner (coordinator), but only for metadata updates (change name, create folder, change properties etc.).
    @13 HP has native support for CSV with HP P4000 storage. HP Continuous Access does not support CSV yet. So yes, partly true
    @14 DRS=PRO, DPM=Core Parking, HA is a default function of Hyper-V Failover Clustering. FT is not available, but let’s be honest: the list of limitations (1 processor, no live migration, no snapshots…) is too long to mention here. Besides, FT is still experimental, and MS does not do experimental
    @16 XP is extinct, and if you still use XP, you can RDP to the servers
    @17 Not by MS but by the different NIC vendors or their OEMs. HP fully supports NIC Teaming and we use it a lot in enterprise implementations. If you have an end-to-end SMB2 connection, you can achieve fault tolerance and load balancing by using two or more virtual NICs in the VM which are mapped to physical NICs in the host.
    @18 It is absolutely no problem to hot add or remove VHDs with both Hyper-V Manager and SCVMM
    @19 Only one LM per host pair. In a 16-node cluster you can do 8 simultaneous Live Migrations. SCVMM queues the migrations for you (see the scheduling sketch after this list)
    @20 The hypervisor itself is less than 64KB. The parent partition size depends on its implementation. Core and Hyper-V Server are much smaller. Not really important. Many blogs have shown that VMware must be patched in much greater portions than Hyper-V. Since its birth, there has been only one security update, which is a great accomplishment.
    @21 Well, 512KB for the parent and up to 2GB if you want to add antivirus, backup agents, monitoring agents etc. I’d call that very decent. In SP1 we will be able to set a fixed amount of memory for the parent partition.
    @22 On the other hand Hyper-V can be installed on any server. Flexibility is nice to have too. The majority use HP or similar hardware vendors anyway. Not really an issue
    @23 Not true. VMQ goes straight to the NIC. CPU processing goes straight to the CPU. As mentioned earlier, only emulated drivers depend on processing in the parent. VMBus is a superfast channel for communication between virtual devices and physical devices. Direct I/O will be a trend in Hyper-V as well.
    @24 Hyper-V supports Large Memory Pages by default
    @25 Neither can VMware. Both require a reboot after patching. SCVMM can place a host in maintenance mode so VMs are evacuated off the server. VMST 3.0 is the tool for patching VMs in a controlled fashion.
    @26 Yes you can. With synthetic SCSI controllers you can hot add or remove disks.
    @27 Don’t know about that or what it is used for
    @28 Yes you can. Linux VMs with Hyper-V Integrations (add-on or native) can be shut down gracefully from SCVMM, Hyper-V Manager or Failover Cluster Manager
    @29 System Center is the single pane of glass, not only for Hyper-V but also for the physical server hardware and the virtual machines, including all the software that is running inside the VMs. It has a much deeper reach and knowledge about what is going on than anything VMware currently has to offer.
    @30 There are documented upgrade paths for Hyper-V R1 to R2 upgrade, although not in-place.
    @31 NAS is not lacking but left out intentionally. It was actually taken out, as in R1 we were able to place VMs on SMB network shares
    @32 You have to take the VM offline to extend the VHD. If you use pass-through disks you can shrink and grow disks online.
    @33 Dynamic disks are a form of thin provisioning. Storage vendors support thin-provisioned storage with Hyper-V. Dynamic Memory in SP1 will be a thin-provisioned method of allocating memory
    @34 Snapshots are discouraged in VMs because they lead to downtime when AVHDs have to be merged back into the VHDs, as the VM must be shut down. Apart from this, several types of VMs should never be used with snapshots, such as Domain Controllers, Exchange Server and probably others. The same applies to VMware.
    @35 Cannot think of anything but the dynamic way resources are controlled. You can set thresholds and ceilings, but this is not often used. Resources can be controlled by Operations Manager and PRO (Performance & Resource Optimization) integration with Virtual Machine Manager.
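
    As an aside, the one-LM-per-host-pair rule from @19 is easy to picture in code. A toy Python sketch of the kind of queueing SCVMM does for you (the planner and the host/VM names are hypothetical):

        def plan_live_migrations(moves):
            """Toy scheduler: group requested migrations into rounds so
            that no host takes part in more than one live migration at
            a time -- the one-LM-per-host-pair rule described above.

            `moves` is a list of (vm, source_host, destination_host);
            each returned round can run simultaneously.
            """
            rounds = []
            for move in moves:
                _, src, dst = move
                for rnd in rounds:
                    busy = {h for _, s, d in rnd for h in (s, d)}
                    if src not in busy and dst not in busy:
                        rnd.append(move)
                        break
                else:
                    rounds.append([move])
            return rounds

        # In a 16-node cluster, 8 disjoint host pairs fit in one round:
        moves = [(f"vm{i}", f"host{2*i}", f"host{2*i+1}") for i in range(8)]
        assert len(plan_live_migrations(moves)) == 1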

    • Chris Wells says:

      Hi Hans. Thanks for the reply.

      @1 – By “Hyper-V now native part of Linux Distribution”, I’m assuming you’re talking about the *experimental* feature that appeared and disappeared in kernel 2.6.33 (citation: http://www.kroah.com/log/linux/staging-status-12-2009.html)
      @1 – Actually it is an issue. RHEL4 isn’t supported (citation: http://www.microsoft.com/windowsserver2008/en/us/hyperv-supported-guest-os.aspx)
      @2 – Please explain why a paravirtualised driver is any faster than a transparent driver
      @3 – I won’t comment on a BETA product.
      @6 – in-house tests. Sorry, I can’t release these.
      @7 – PV SCSI driver not included in the RedHat installation medium. This will make recovery difficult, if not impossible. Also, last time I checked, it didn’t run in vSMP mode
      @8 – I welcome it. Let me know when you have a virtual appliance marketplace up and running, though I suspect this will be difficult for MS to bear, since it would mean giving away copies of Windows within the appliances.
      @8 – Is there an SRM equivalent? What about a Cisco Nexus 1000V? What about VAAI support from storage array vendors (array-side provisioning, copying, block zeroing)?
      @9 – NAS may be unreliable on Hyper-V. It’s a first-class citizen in vSphere
      @10 – same as vMotion then.
      @11 – What about guest OSes that don’t have VSS, or are you talking about VSS in Hyper-V itself?
      @12 – fundamentally, though, the problem is that NTFS is a proprietary filesystem which was never designed to be multi-writer, unlike VMFS2/VMFS3 (and vSphere 4.1 looks to be even better – locking now occurs at the block level, not the file or filesystem level).
      @13 – what about the EMCs and NetApps of the world. Do they support CSVs on Hyper-V?
      @14 – DPM isn’t about core parking (by which you actually mean Intel SpeedStep and Turbo Boost).
      @16 – perhaps you’d like to explain to our 40,000 XP users that XP is extinct.
      @17 – Yeah, but heterogeneous NIC teaming isn’t supported (e.g. teaming an Intel e1000 with a Broadcom BCM57xx). And the reason you want to do that is to eliminate the single point of failure in the PCI nexus tree hierarchy.
      @18 – Whilst the guest is on???
      @19 – great.
      @20 – ESXi patching, not ESX. VUM shows that only 5 patches have ever come out for ESXi 4.0. You still can’t get away from the fact that Hyper-V Server core is 3.2GB.
      @21 – I think you mean 512MB. ESXi hypervisor footprint is 30MB.
      @23 – My mistake; let me clarify. All disk IO to the NTFS partition has to go through the parent partition
      @24 – Please can you give a citation?
      @25 – Yes. You can patch an entire ESX/ESXi cluster (up to 32 nodes) with zero interruption to any of the VMs.
      @26 – If PV SCSI is your thing, then ok.
      @27 – seeing the relationships between hosts, VMs, datastores, network switches etc etc.
      @28 – I think this is still BETA. Let me know if this has gone GA.
      @29 – Can SC do performance monitoring, MSCS cluster management and NIC teaming?
      @30 – yuck – or so I’ve heard
      @32 – pity
      @33 – dynamic disks. Yuck. Yet another bastardised Microsoft proprietary reimplementation of MBR partition tables.
      @34 – vSphere doesn’t have that limitation
      @35 – I was thinking in terms of vSphere’s resource guarantees, limits and shares, not to mention the new features such as NetIOC and SIOC [for the latter two, see http://vpivot.com/2010/07/22/vsphere-4-1-performance-improvements/]. A minimal sketch of those controls follows below.
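
      To show what I mean by guarantees, limits and shares, here’s a minimal sketch using the (much later) pyVmomi library; the vCenter host, credentials and VM name are made-up placeholders:

          import ssl
          from pyVim.connect import SmartConnect, Disconnect
          from pyVmomi import vim

          # Placeholder connection details, purely for illustration.
          ctx = ssl._create_unverified_context()
          si = SmartConnect(host="vcenter.example.com",
                            user="administrator", pwd="secret", sslContext=ctx)

          # Locate the VM by name via a container view.
          content = si.RetrieveContent()
          view = content.viewManager.CreateContainerView(
              content.rootFolder, [vim.VirtualMachine], True)
          vm = next(v for v in view.view if v.name == "db01")

          # Guarantee 500 MHz, cap at 2000 MHz, and weight the VM 'high'
          # under CPU contention -- the three knobs mentioned above.
          alloc = vim.ResourceAllocationInfo()
          alloc.reservation = 500
          alloc.limit = 2000
          alloc.shares = vim.SharesInfo(level=vim.SharesInfo.Level.high,
                                        shares=0)  # shares ignored unless 'custom'
          vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(cpuAllocation=alloc))

          Disconnect(si)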

      Cheers.

      • Timothy Mukaibo says:

        @34 – when you say “vSphere doesn’t have that limitation”, I assume you mean that vSphere will still allow you to take a snapshot of a DC, Exchange box, etc. The same is true in Hyper-V, XenServer, KVM – whatever hypervisor you choose. However, if you revert to an earlier snapshot in a production environment, you will almost certainly have issues with data corruption.

        If you’re not sure as to why this may be the case Chris, I suggest that you do some reading into how Windows Domain Controllers operate, and why snapshots (regardless of vendor) will mess things up.
