VMware vSphere can virtualize itself + 64-bit nested guests

Running VMware ESXi inside a virtual machine is a great way to experiment with different configurations and features without building out a whole lab full of hardware and storage. Virtualization enthusiasts everywhere have benefited from the ability to run ESXi on ESXi, first introduced with the vSphere 4 release.

VMware vSphere 5 makes it easier than ever to virtualize hypervisor hosts. With new capabilities to run nested 64-bit guests and take snapshots of virtual ESXi VMs, the sky is the limit for your cloud infrastructure development lab. Heck, you can even run Hyper-V on top of vSphere 5 — not that you’d want to.

Physical Host Setup

The physical host running VMware ESXi 5 requires just a few configuration changes; here is a guide:

  • Install VMware ESXi 5 on a physical host and configure networking, storage, and other aspects as needed
  • Configure a vSwitch and/or Port Group to have Promiscuous Mode enabled
  • Create a second Port Group named “Trunk” with VLAN ID All (4095) if you want to use VLANs on virtual hypervisors
  • Log in to Tech Support Mode (iLO or ssh) and make the following tweak to enable nested 64-bit guests
    echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
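
The config-file tweak above can be made idempotent so that re-running it never duplicates the line. A minimal sketch (it works on a scratch copy here; on a real ESXi host you would point CONF at /etc/vmware/config instead):

```shell
# Sketch of the vhv.allow tweak, made safe to re-run.
CONF=$(mktemp)                     # stand-in for /etc/vmware/config
echo 'vhv.allow = "TRUE"' >> "$CONF"
# a second run should not duplicate the line:
grep -q '^vhv.allow' "$CONF" || echo 'vhv.allow = "TRUE"' >> "$CONF"
grep -c '^vhv.allow' "$CONF"       # prints 1
```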

Virtual VMware ESXi Machine (vESXi) Creation

For various reasons, it’s not feasible to clone virtual ESXi VMs. As an alternative, create a fully-configured shell VM to use as a template — it can be cloned before ESXi is installed.

Create a new VM with the following guidance:

  • Guest OS: Linux / Red Hat Enterprise Linux 5 (64-bit)
  • 2 virtual sockets, 2+ GB RAM
  • 4 NICs — connect NIC 1 to the management network and the rest to the “Trunk” network
  • Thin provisioned virtual disks work fine
  • Finish creating the VM, then edit the following settings
    • Options/General Options: change Guest Operating System to Other – VMware ESXi 5.x
    • CPU/MMU Virtualization: select the bottom radio button to use Intel VT-x/AMD-V for CPU virtualization and Intel EPT/AMD RVI for MMU virtualization
  • Don’t power this VM on — keep it to act as a template
  • Clone and install VMware ESXi via ISO image or PXE boot
  • Add to vCenter and configure virtual ESXi hosts for action
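
After the GUI steps above, the relevant lines in the template VM's .vmx file should look roughly like the fragment below. The key names are my reading of how those GUI options land in the file — verify against a VM you configure by hand before relying on them:

```text
guestOS = "vmkernel5"
monitor.virtual_exec = "hardware"
monitor.virtual_mmu = "hardware"
```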

Nested 64-bit Guests

With the release of VMware vSphere 5, nested guests can be 64-bit operating systems. Just be sure to make the change to /etc/vmware/config on the physical host as indicated above.

Nested guests can be migrated with vMotion between virtual or physical VMware ESXi hosts; this requires a vMotion network and shared storage.

Nested Hyper-V Virtual Machines

It is possible to run other hypervisors as vSphere virtual machines, and even power on nested VMs. Here you can see Hyper-V running a CentOS virtual machine — all on VMware ESXi. Talk about disrupting the space-time continuum!

A couple of extra tweaks are needed to enable this, and performance is not great. Nevertheless, an amazing feat of engineering from VMware!

Do the following to enable Hyper-V on VMware ESXi:

  • Add hypervisor.cpuid.v0 = "FALSE" to the VM configuration (.vmx file)

  • Add ----:----:----:----:----:----:--h-:---- to the CPU mask for Level 1 ECX (Intel)
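
In the .vmx file, the two tweaks above should come out roughly as follows. The cpuid.1.ecx key name is my assumption of how the GUI's Level 1 ECX mask is stored — check against a VM you mask via the GUI:

```text
hypervisor.cpuid.v0 = "FALSE"
cpuid.1.ecx = "----:----:----:----:----:----:--h-:----"
```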

For another take, check out William Lam’s post on this topic.

Parting Thoughts

Given the right hardware, it is possible to create a fully-functional VMware test lab that is completely virtual. Go ahead and experiment with the Distributed Virtual Switch, vShield, vCloud Director, and everything else without deploying a ton of servers and storage.

How are you taking advantage of a virtual vSphere environment?

Related posts:

  1. VMware ESX 4 can even virtualize itself
  2. What are you doing with nested virtualization?
  3. VMware ESXi 5 Interactive PXE Installation Improvements
  4. PowerShell Prevents Datastore Emergencies
  5. Taking snapshots of VMware ESX 4 running in a VM

  1. ram’s avatar

    Going to try everything you stated above. Can’t wait to get hold of vSphere 5. Having a hard time finding the beta version on the net. I expect VMware to release vSphere 5 to the general public soon…

    I have 2 PE 1950 with 16GB, 1PE 2950 with 16GB (Xeon 5410 & 5355), storcenter 4TB EMC NAS and 1GB 24 port switch.

  2. hussain’s avatar

    Very enthusiastic to try all the new v5 features

  3. Rickard Nobel’s avatar

    Very cool to do the ESXi in ESXi nesting; it did work on 4.x, but nice to see that it is supported with its own “Guest operating system” option.

    But what about the infamous vRAM licensing for this? If running a physical ESXi 5.0 host with 512 GB of RAM and inside this running two virtual ESXi 5.0 hosts with 256 GB each, how much vRAM do you need? :)

  4. Steve Flanders’s avatar

    In addition to being supported, vSphere 5.0 allows nested 64-bit VMs (thus the title of the post). In vSphere 4.x only 32-bit “worked”. While this may seem trivial to some, this is a major change and one that is necessary to fully leverage the power of nested ESXi/VMs.

  5. ram’s avatar

    I was able to create vesx but cannot put them into CLUSTER – I am getting this error
    ” The host’s CPU hardware should support the cluster’s current Enhanced vMotion Compatibility mode, but some of the necessary CPU features are missing from the host. Check the host’s BIOS configuration to ensure that no necessary features are disabled (such as XD, VT, AES, or PCLMULQDQ for Intel, or NX for AMD). For more information, see KB article
    1003212.”

    I followed all the steps while creating vesx1 – made necessary change to options – cpu and other stuff.

    Added vhv.allow = “TRUE” command to /etc/vmware/config

    Look forward to your input pls….

  6. ram’s avatar

    Ignore above post… Disabled EVC on the cluster – Was able to add vesx1 into the cluster.

    I am trying to run VSA manager. In order to configure VSA (vSphere Storage Appliance), a new feature in vSphere 5, I am getting an unsupported hardware error. I have 2 physical ESX hosts and 2 virtual ESX hosts. The virtual ESX hosts have 4 NICs and the physical hosts have 2 NICs each.

    In order to configure VSA the requirement is 2 or more hosts + 4 nics. Since, I have just 2 nics on the physical host, I thought of using 2 virtual esx host with 4 nics, yet, I cannot proceed with the VSA configuration. Keep getting unsupported hardware error for virtual esx host and for physical host the error is insufficient nics.

    1. Bjørn’s avatar

      I am trying to do the same thing but as far as I can tell it isn’t possible because VSA creates an EVC enabled cluster.

  7. Vladan’s avatar

    Virtually everything is possible… :-). Great read.

    Best
    Vladan

  8. Yoda’s avatar

    Hi,

    Do you have a trick to do the same on VMware workstation 7.1 or workstation 8.0 beta ?

    Thanks :)

    1. Nishant’s avatar

      Yes… you can do the same on VMware Workstation 8.0, but you will not be able to run 64-bit guest machines.

  9. Yoda’s avatar

    For information, in VMware Workstation 8.0 we have to choose “ESXi 5.0” in “Guest Operating System” but the CPU must have Intel EPT or AMD RVI… mine does not :(

  10. chris’s avatar

    I have tried the above steps, but I still cannot enable the Hyper-V role on Windows 2008 R2. Any ideas?

  11. Hussain’s avatar

    Hi,
    I have successfully installed ESXi 5 on ESX 4.1 without any issue. I will play around with it, and I will install vCenter as a VM and an additional ESXi 5…

    1. Peter’s avatar

      Hi,
      Any extra tips to get this working.
      I want to test ESXi 5 on top of an ESXi 4.1 vCenter cluster.

    2. french’s avatar

      I am having issues with creating an ESXi 5 host in ESXi 4. I have added the host in vCenter but am unable to create a new VM inside the new host. It tells me there are hardware incompatibility issues. Any suggestions?

  12. JR’s avatar

    Great post!

    Do you know if XenServer can be nested in the same way that Hyper-V can be?

    Thanks

    JR

  13. Datto’s avatar

    DATTO’S TOP TEN SUGGESTIONS FOR RUNNING NESTED HYPER-V UNDER ESX 5.0

    Quick overall notes before starting the Top Ten — vcritical.com seems to be the home of nested hypervisor information so I’m posting my information here to continually centralize the location about nesting. When I reference ESX below I’m referencing ESXi 5.0 only and when I reference Hyper-V below I’m referencing Hyper-V 2008 R2 SP1 only (specific flavors of Hyper-V 2008 R2 SP1, such as Full Installation, Server Core or the free Hyper-V Server, are called out when applicable). The information below may not be applicable to any other versions although you may be able to spend time getting them to work for yourself. Note that if your physical server equipment is AMD based, you’re going to need 3rd Generation AMD CPUs on your physical ESX 5.0 host (these would be Opteron 13xx, 23xx and 83xx series CPUs or one of the recent retail variety AMD 3rd generation CPUs for use in white boxes).

    10) Nesting Hyper-V VMs that run their own VMs can be made to work while running under ESX 5.0. I suggest you use the ESX 5.0 Virtual Machine Version 8 method rather than the Virtual Hardware Version 7 Method in order to get the nesting to work — mentioned here on vcritical.com and on virtuallyghetto.com and elsewhere on the Internet — use Google to search on the string hypervisor.cpuid.v0 — since the Virtual Hardware Version 8 method requires the least amount of effort and setup. Do this Google search first before diving into your nested Hyper-V experiment so you can become familiar with the steps necessary to use the Virtual Machine Version 8 method.

    9) Get your complete Hyper-V Windows VM set up in its entirety, including multiple vNICs if applicable (but without the Hyper-V Role engaged), prior to switching the nested Hyper-V OS type over to ESX’s “ESXi 5.x” choice (note that this choice won’t show up until you’ve actually built the VM). Then, once you’re sure it’s ready for Hyper-V, shut down your intended Hyper-V VM, switch the VM OS type to ESXi 5.x (under the Options tab of the VM), and hand-add hypervisor.cpuid.v0 = “FALSE” to the bottom of the vmx file so Hyper-V doesn’t know it’s running on a virtualization platform. Once you change the OS type over to ESXi 5.x you may be pretty much committed to making all future configuration changes to the Hyper-V VM directly in the vmx by hand, so make sure your VM is ready. Then boot up the Hyper-V VM and engage the Hyper-V role. My experience has been that some (not all) changes made to the nested Hyper-V VM from that point forward via the ESX 5.0 GUI might hang the VM.

    8) My experience has been that it’s much easier to get the Hyper-V Full Installation working as a nested Hyper-V VM than the Server Core and Free Hyper-V Server versions. One thing I had to do with Server Core is to not boot up a bunch of nested Hyper-V Server Core VMs all at the same time — the VMware Tools might not start correctly inside the Server Core VM for unknown reasons if you concurrently boot up a bunch of nested Server Core Hyper-V VMs. If I waited and gave each Server Core Hyper-V VM some time after bootup before booting up the next Server Core VM the VMware Tools seemed to run more reliably. Note that I didn’t have this problem under the Hyper-V Full Installation and that’s why I suggest you use the Hyper-V Full Installation to start rather than trying to make the Server Core or the Free Hyper-V Server work as your first nested Hyper-V VM.

    7) Although nested VMs running on a nested Hyper-V running on an ESX 5.0 physical server works, the speed of the nested VMs running on the nested Hyper-V server is no more than about half of the speed that you would experience running that same VM right on ESX 5.0 alone. So don’t get your hopes up that any of this will go into production. Instead, look at it as an experiment and a learning exercise.

    6) Hyper-V R2 SP1 Dynamic Memory does work and display correctly when used in a nested Hyper-V R2 SP1 VM (and ESX 5.0 does seem to employ its own memory conservation techniques on the nested Hyper-V VM) so if you’re wanting to see Hyper-V R2 SP1 dynamic memory in action it’s possible with a nested Hyper-V setup (you’d need R2 SP1 to see it though and the VMs running on that nested Hyper-V R2 SP1 also will likely have to be Windows 7 SP1 or 2008 R2 SP1 — don’t go using the Home varieties of Windows in any of your setup — you’d be asking for a bigger time sink than this experiment already is).

    5) The nested Hyper-V VMs can be a part of a Microsoft Failover Cluster and you can utilize Cluster Shared Volumes if you choose to do so (assuming you have that all setup properly from a Windows viewpoint that is). As a suggestion, don’t try to use NFS for VM disk storage — I used iSCSI storage for my setup and it worked fine.

    4) You can perform Live Migrations of VMs running on nested Hyper-V VMs (Live Migrations are Microsoft’s feature similar to a light version of VMotion) assuming you have the correct versions of Microsoft infrastructure in place that support Live Migration. You may be able to use Live Migration to migrate VMs between nested Hyper-V VMs and physical Hyper-V hosts but you’ll likely need to engage some kind of CPU masking to do so (look first at CPU masking inside the VMs that are running on the nested Hyper-V VM rather than messing around with masking the Hyper-V VM itself). If you have vCenter 5.0 set up to manage your ESX 5.0 host, you may be able to mask the CPU at the ESX cluster level but I didn’t do that as part of my experiment. Also note that you can utilize SCVMM 2008 R2 to manage your nested Hyper-V VMs. You can also manage your nested Hyper-V VMs with plain vanilla Hyper-V Manager.

    3) You can engage multiple vCPUs for your Hyper-V VM and ESX 5.0 seems to allow all that to work for Hyper-V. If you have to add a vCPU to your Hyper-V VM after the VM has already been changed over to the “ESXi 5.x” OS type, you can edit the vmx file of the Hyper-V VM to make that change rather than using the GUI that might hang your Hyper-V VM.

    2) I suggest you take full advantage of snapshots while you’re getting this setup on your own equipment so you can more easily revert back to a stable setup if something goes awry.

    1) I saw a noticeable performance boost in the nested Hyper-V VMs if I manually chose the type of CPU virtualization in the ESX GUI setup of the VM (prior to engaging the Hyper-V VM role inside the VM) rather than letting the CPU virtualization type remain on Automatic which is the default. I’ve seen this same thing occur on some Dell D830 laptops when I ran nested ESX hosts under Workstation on those Dell laptops. You might get a noticeable performance boost like I did with this setup change or you might not depending upon your own setup and your own hardware.

    Datto

    1. Eric Gray’s avatar

      Excellent, thanks for posting.

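Datto's tips 9) and 3) above both come down to hand-editing the VM's .vmx file while the VM is powered off. A minimal sketch of doing that safely — the path and key below are examples, and GNU tools are assumed (ESXi's busybox utilities may behave slightly differently):

```shell
VMX=${VMX:-./hyperv01.vmx}             # hypothetical .vmx path; adjust to your VM
touch "$VMX" && cp "$VMX" "$VMX.bak"   # always keep a backup before editing
KEY='hypervisor.cpuid.v0'              # example setting from the tips above
VAL='"FALSE"'
if grep -q "^$KEY" "$VMX"; then
  sed -i "s/^$KEY.*/$KEY = $VAL/" "$VMX"   # update the existing line in place
else
  echo "$KEY = $VAL" >> "$VMX"             # or append it if absent
fi
grep "$KEY" "$VMX"                     # prints: hypervisor.cpuid.v0 = "FALSE"
```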
  14. ram’s avatar

    I am unable to run 2008 r2 nested. I have 2 vesxi5 running. I have made necessary changes on the physical host for vhv.allow. Created virtual cluster and added 2 vesxi5 host. Was able to create virtual machine and installed 2008 r2. After finishing the install, I restarted the machine. The OS boot but BLACK screen. Why, I am unable to run nested virtual machine on vesxi5 host. No clue. Still searching for an answer. Hope somebody here can help me

  15. Michael Armstrong’s avatar

    Hi Eric,

    I’ve just downloaded the official release today and it looks like the option has been removed. I’m using a 60-day license but I would have thought that should not matter.

    Can you confirm?

    Michael

  16. Datto’s avatar

    If you’re meaning the ESXi 5.0 and ESX 4.0 options under the Option Tab, you might want to build the VM first, then look at the Option Tab and see if the ESXi 5.0 option appears under the Other radio button choices.

    Datto

  17. Michael Armstrong’s avatar

    Datto,

    Yes, you are correct. Thanks very much. Seems strange that they didn’t add it to the original list.

    Thanks

    Michael

  18. Datto’s avatar

    ESXi 5.0 also runs nested Citrix XenServer 5.6.1 SP2 as a VM and that nested VM can run its own 64 bit Windows nested VMs under XenServer. I successfully installed and ran 64 bit Win2008 R2 SP1 in a VM under XenServer while XenServer was running as a VM under ESXi 5.0. For AMD based servers, you’ll need 3rd generation AMD Opteron CPUs (13xx, 23xx, 83xx series) or recently manufactured retail AMD CPUs made for white boxes.

    To create the nested XenServer VM I used the Virtual Machine Version 8 method again (as was used for successfully running nested Hyper-V) except I didn’t need the hypervisor.cpuid.v0 line in the VMX file since XenServer doesn’t prevent itself from running in a virtual environment.

    Datto

  19. Datto’s avatar

    Also, if you have multiple nested Citrix XenServer 5.6.1 SP2 VMs (free version) in a XenServer Pool on the same ESXi 5.0 physical host, you can perform live XenMotion (similar to a light VMotion) of nested 64 bit Windows VMs between your nested XenServer hosts (assuming you have shared storage and the correct XenServer configuration of course). Use the Citrix XenCenter Management App (free download) running in a Windows VM to get the XenServer pool of multiple nested XenServer hosts setup so they’re easily managed and allow you to more easily perform XenMotions of live VMs.

    Datto

  20. Keyser’s avatar

    Hi Datto

    I would like to do some test and dev running Hyper-V inside my ESXi box, but you keep writing: “For AMD based servers, you’ll need 3rd generation AMD Opteron CPUs (13xx, 23xx, 83xx series) or recently manufactured retail AMD CPUs made for white boxes”

    Is it not possible at all to run a Hyper-V under ESXi 5 on Opteron 82xx series CPU’s?
    If so, do you know why?

    Keyser

  21. Datto’s avatar

    Keyser — It’s possible to do all I’ve written about regarding running 64 bit guests under a nested hypervisor with *some* Intel CPUs. The challenge is determining whether your Intel CPU has the required instructions/features built into the CPU. Intel only puts the features you’d need for running 64 bit guests under nested hypervisors running on ESXi 5.0 into certain Intel CPUs rather than into their entire line of CPUs. Much of my recent work on the subject of nesting has been in my AMD lab because that was the equipment I had available at the time I needed it and quite a few people seem to be trying out nesting with AMD CPUs. But that same capability is available in certain Intel CPUs.

    For your Intel CPUs you’ll likely need EPT capability in the CPU (Extended Page Table capability) if you want to run 64 bit Windows guests under a nested Hyper-V VM or a nested Citrix XenServer VM using ESXi 5.0. Your motherboard BIOS must also support this capability. If you have an Intel Nehalem or Intel Westmere CPU you likely have the required CPU instructions built into the CPU. If you have a Core2Duo CPU you probably don’t have the capability for running 64 bit guests in a nested Hyper-V or Citrix XenServer VM under ESXi 5.0. Between those two Intel CPU families (between C2D and Nehalem/Westmere) is a gray area where Intel didn’t put the EPT capability into all CPUs, thus the confusion in the IT world and the higher profit for Intel (since historically you’d need to buy a fancier Intel server to get the EPT capability). Also, Intel’s retail CPUs — some have EPT and some don’t — you can’t assume any recently made Intel CPU is going to have the required EPT instruction set.

    Note this EPT required feature of the Intel CPU is different than just plain Intel VT capability or plain hardware virtualization. The feature you need — the EPT capablity in Intel CPUs — is beyond plain hardware virtualization and from an Intel standpoint, only some of the modern day Intel CPUs have that capability.

    Datto

  22. Datto’s avatar

    Keyser — more on point with your question — AMD Opteron 12xx, 22xx and 82xx series CPUs don’t have the built-in AMD instructions needed to run 64 bit VMs under a nested hypervisor under ESXi 5.0. The feature in the AMD world is RVI, and the Opteron 12xx, 22xx, 82xx CPUs don’t have that capability. The feature only started being introduced into the Opteron line of CPUs with the 13xx, 23xx and 83xx CPUs (and some retail CPUs for use in white boxes).

    Datto

  23. Keyser’s avatar

    Datto, thanks for the answer to my question.

    Just one clarifying followup. Does this missing RVI support on the AMD 82xx series prevent me from installing and running a virtual Hyper-V host or does it only prevent me from running a 64bit VM inside that virtual Hyper-V Host (nested 64bit VM)?

    If it is the latter, can i still run a 32bit virtual machine inside the virtual Hyper-V host?

    Keyser

  24. stan’s avatar

    So it’s not possible at all to run nested 64 bit VMs in ESX 4.1? I need 5 then? Any workarounds?

  25. stan’s avatar

    also will ESX 5 work on ESX 4.1 host?

  26. ricky’s avatar

    Anyone got a nested Hyper-V VM on ESX5i GA? All I get is a black screen when Hyper-V boots.

  27. Datto’s avatar

    >> If it is the latter, can i still run a 32bit virtual
    >> machine inside the virtual Hyper-V host?

    Keyser — I could not get 32 bit virtual machines to run under a nested Hyper-V R2 SP1 VM that was running under ESXi 5.0 when the physical server had 2nd generation AMD Opteron CPUs in it. I used the Virtual Hardware Version 8 method.

    Here’s how I tested — I have an existing nested Hyper-V R2 SP1 VM (Full Installation) that runs 32 bit and 64 bit VMs correctly under ESXi 5.0 that is located on a physical server that has dual Opteron 2384 CPUs in it (these CPUs are 3rd generation AMD Opterons and have RVI as well as AMD-V capability). I moved that existing nested Hyper-V R2 SP1 VM over to an ESXi 5.0 physical server that has 2nd generation Opteron 2220 CPUs in it. Although the nested Hyper-V R2 SP1 VM booted up successfully on the 2220 CPUs and the Hyper-V services show to have started correctly, when I tried to start or even create a 32 bit VM under the nested Hyper-V VM using the physical 2nd Gen AMD CPUs the VM wouldn’t start and said in an on-screen error that the Hyper-V Virtualization Services weren’t started.

    I moved the test VM setup back to the original 3rd Gen AMD Opteron ESXi 5.0 server and that 32 bit VM started successfully.

    So it looks like, for running any VMs on a nested Hyper-V on ESXi 5.0, you will need 3rd generation Opteron CPUs (13xx, 23xx, 83xx) or recently made retail desktop CPUs used in white boxes. Or, there is some secret sauce that is needed in one of the VMX files that is unknown at this time.

    Datto

  28. Datto’s avatar

    >> Anyone got a nested Hyper-V VM on ESX5i GA?
    >> All I get is a black screen when Hyper-V boots.

    Ricky — yes, I am running nested Hyper-V 2008 R2 SP1 VMs that run 32 bit and 64 bit VMs — all that is running fine for me under ESXi 5.0 GA.

    Datto

  29. Datto’s avatar

    Ricky –

    Suggestions:

    1) Make sure the physical CPU you’re using has RVI capability (for AMD CPUs) or EPT capability (for Intel CPUs). If you’re trying to do this on a laptop, it’s unlikely the physical CPUs in your laptop have the capability. You’ll need to visit the CPU manufacturer’s website to get info about your CPU to determine whether your CPUs have this capability. Note this is beyond just having Intel VT-x or AMD-V capability. You likely need that *PLUS* EPT (Intel) or RVI (AMD) capability to run VMs on a nested Hyper-V running on ESXi 5.0.

    2) If you’re absolutely sure you have the capability described above and you have virtualization engaged in your system BIOS, then make sure you read and understand Tips 10), 9) and 8) further above, especially the part about having to setup the Hyper-V VM completely without the Hyper-V Role engaged (make sure the VM boots and networking for the VM is working properly — make sure you can ping the Hyper-V VM from another physical computer) before switching the O/S type for the VM to the ESXi 5.0 Type and inserting the extra information into the VMX file and subsequently engaging the Hyper-V role inside the VM. Otherwise, you’ll not likely be able to get a nested Hyper-V working properly under ESXi 5.0. To make life easy (assuming your CPUs have RVI or EPT capability) use the Full Installation of Windows 2008 R2 SP1 (not Server Core or Hyper-V Server versions).

    Datto

  30. Keyser’s avatar

    Datto.

    Thanks for the Info and effort.

  31. Ricky’s avatar

    Update: tried nested Hyper-V on fairly new hardware (Nehalem/Core i7)… my colleague tried it on a Cisco UCS server with no joy either. However, something new: I managed to get it working on the beta of the next version of VMware Workstation.

  32. Ricky’s avatar

    Update: My lab servers definitely have EPT enabled… because when I put Workstation on there instead of ESXi 5, Hyper-V works, which suggests that Workstation is passing EPT through properly. In ESXi 5 my nested Windows 2008 R2 complains that there is no EPT when trying to enable the Hyper-V role.

  33. Ricky’s avatar

    OK I’ve pretty much mastered the art of this now…and have a consistent method…thanks for the pointers but I think I will create a blog entry with my findings on how to do it.

  34. Datto’s avatar

    Keyser — you may want to look at the following document for more secret sauce that might allow 2nd gen Opterons to run nested Hyper-V and get some VMs powered on under nested Hyper-V — I don’t know whether the vhv.allow = TRUE line in the /etc/vmware/config file of your physical ESXi 5.0 host would make an Opteron 22xx work with VMs under Hyper-V or not but you might start with that suggestion if you wanted to pursue this further:

    http://communities.vmware.com/docs/DOC-8970

    Datto

  35. Datto’s avatar

    Keyser — I tried the vhv.allow line in the physical ESXi 5.0 AMD Gen 2 host and it didn’t make any difference. Hyper-V VMs upon startup under Hyper-V still report that the hypervisor isn’t running. Also tried a few other things and those didn’t work either for AMD Gen 2 hosts.

    At this point, it still looks like RVI is required for running VMs on nested hypervisors under AMD CPUs — you’ll need Gen 3 AMD CPUs (13xx, 23xx, 83xx Opteron or recently manufactured retail desktop AMD CPUs) to get any kind of nested Hyper-V to work that can also run VMs under the nested Hyper-V.

    Ah well, it was worth a shot.

    Datto

  36. patrick’s avatar

    Now how can we support EVC in nested esxi?

    Have not been able to find out how

  37. Shawn’s avatar

    I have two nested ESXi5 host VMs running on a physical ESXi5 host. I followed the instructions above and everything seemed to work perfectly for my lab environment. However, when I try to vMotion a (double nested?) VM between the two nested ESXi5 host VMs I get the following error:

    “A general system error occurred: The VMotion failed because the ESX hosts were not able to connect over the VMotion network. Please check your VMotion network settings and physical network configuration.”

    “The vMotion migrations failed because the ESX hosts were not able to connect over the vMotion network. Check the vMotion network settings and physical network configuration.
    vMotion migration [167772929:1316093575509377] failed to create a connection with remote host : The ESX hosts failed to connect over the VMotion network
    Migration [167772929:1316093575509377] failed to connect to remote host from host : Timeout”

    I have double and triple checked that the networks were setup properly, including turning on promiscuous on the vSwitch on physical ESXi5 host. The Windows VMs running on the same physical ESXi5 hosts have no problem communicating, and the nested vCenter has no problem managing the two nested ESXi5 host VMs. It seems that the problem only occurs when the two nested ESXi5 host VMs need to communicate for vMotion.

    Help?

    1. Ricky’s avatar

      Shawn, did you set the port group Promiscuous mode to Accept?

      1. Shawn’s avatar

        Yessir, “including turning on promiscuous on the vSwitch on physical ESXi5 host.”

        1. ed kaczmarek’s avatar

          Did you clone one of those ESXi VMs in question ?

          If so, try again with a fresh install of that second esxiVM.

          Something gets screwy with cloned ESXi VMs on the same vswitch….

          1. B. G.’s avatar

            I set every single port on the vSwitch (where the hosts vNICs reside and in the nested hosts’ vSwitches/dvSwitches).

            The issue for me is that the ip on the vmkernel port MUST be on the same exact network as everything else. The “observed IP ranges” of the vNIC on the nested host are “192.168.1.x.”

            I wanted to use 192.168.10.x for the vMotion-enabled vmkernel ports in the nested ESXi hosts, but I simply cannot get it to work. Any suggestions?

            1. Tom’s avatar

              B.G – Did you ever solve this when using another network other than the observed ranges? I am having the same problem..I am using the mgmt nics on the 172.16.10.X subnet and the two nested machines can communicate. However, I have configured another nic on each of the nested hosts to communicate over 10.10.10.X subnet but they cannot communicate…Something is funky here..

  38. Ricky’s avatar

    BTW I gave vCritical full creds for this post, but I created a blog post of my own findings and think I have a concrete step-by-step:

    http://www.veeam.com/blog/nesting-hyper-v-with-vmware-workstation-8-and-esxi-5.html#comment-8634

  39. Datto’s avatar

    Just wanted to mention that the skipMicrocodeCompatCheck fix is still effective in ESXi 5.0 if you’re using an AMD processor that has the TLB bug. That will allow ESX to install and work properly without the PSOD on the install or subsequent bootup. Intel procs don’t have the TLB bug.

    Also note that if you’re going to install a nested ESXi 5.0 host into a physical ESXi 5.0 host that has procs that have the TLB bug, you’ll need to use the skipMicrocodeCompatCheck fix for the nested ESXi 5.0 server as well as the physical ESXi 5.0 host.

    If you think your AMD procs might have the TLB bug, you can Google the term skipMicrocodeCompatCheck and get the steps needed to make ESX work.

    Datto

  40. Datto’s avatar

    I performed considerable experimentation this weekend on the effects of running nested 32 bit and 64 bit Windows VMs on nested ESXi 5.0 VMs on various branded rack servers with Opteron 23xx and 83xx CPUs (sorry, I don’t have any 13xx CPUs or 24xx series Opterons in my lab right now to test against).

    What I found is that my brand name rack servers that have 23xx and 83xx Opteron CPUs of B3 and C2 stepping do relatively well when running nested ESXi 5.0 VMs that have 64 bit and 32 bit Windows VM guests. The term “well” means the Windows VMs running on nested ESXi 5.0 VMs will be somewhere shy of 50% of the performance you’d get if you ran the Windows VMs as guests on the physical ESXi 5.0 host.

    The brand name rack servers that have 23xx and 83xx Opteron CPUs of B2 stepping (and thus have the TLB bug) have considerable performance problems when running nested ESXi 5.0 VMs hosting 64 bit and 32 bit Windows VMs. It appears the necessary settings required to get the ESXi 5.0 bootup past the TLB bug in the B2 stepping 23xx and 83xx Opteron CPUs makes nested Windows VMs running on nested ESXi 5.0 VMs perform very poorly (I could get the 64 bit and 32 bit Windows VMs to boot up but it took 4x longer for the 32 bit VMs and 10x longer for the 64 bit VMs to boot up than with a similar Opteron CPU that doesn’t have the TLB bug requiring necessary setting changes for ESXi 5.0 to boot successfully).

    So, it appears to me that if you’re going to buy/create servers with 23xx and/or 83xx series Opteron CPUs and you want to run Windows VMs on nested ESXi 5.0 VMs, you should choose 23xx or 83xx CPUs that are B3 stepping or above. This avoids the TLB bug in the Opteron CPUs that VMware is protecting you from with the default ESXi 5.0 settings.

    If you’re not intending to ever run Windows VMs on nested ESXi 5.0 hosts, some virtual machine performance lag will still exist between TLB-affected Opteron servers and non-TLB-affected Opteron servers, but the lag won’t be nearly as noticeable.

    Datto

    Reply

  41. Datto’s avatar

    I found a box around here with an Opteron 1354 CPU to test against for nesting purposes.

    I used the same test I previously ran against the 23xx and 83xx Opteron CPUs: nested 64 bit Windows VMs running on nested ESXi 5.0 VMs that run on a physical ESXi 5.0 server. This 1354 Opteron has B3 stepping (it appears the 1354 and 1356 Opterons were only released with B3 stepping, whereas released 1352 Opterons may have either the problematic B2 TLB-bug stepping or the B3 stepping).

    This Opteron 1354 seems to operate as expected when running nested 64 bit Windows VMs running on nested ESXi 5.0 VMs and doesn’t exhibit the outrageous slowness of the TLB-bug-affected CPUs. The term “as expected” means the performance of the nested Windows 64 bit VM running on a nested ESXi 5.0 VM is somewhere shy of 50% of the performance you’d experience if that same Windows 64 bit VM ran on the physical ESXi 5.0 host. The TLB bug affected Opteron CPUs seem to have a much worse performance level when used in nesting situations and make the TLB bug Opteron CPUs unusable for nested VMs on nested ESXi 5.0 VMs.

    Might make a good component for an ultra-low-budget single-server white box learning lab if at least 8GB of system memory can be put into the white box.

    Datto

    Reply

  42. Datto’s avatar

    Just as a quick example of how superior ESX’s memory management is, I nested on an ESXi 5.0 physical server three 16GB XenServer 5.6 VMs each running several Linux and Windows VMs. After XenServer WLB got done (tuned for maximum performance — the default) and Dynamic Memory was engaged on the VMs, each XenServer was using about 12GB of memory of the 16GB allotted to each nested XenServer VM.

    The ESXi 5.0 physical host was able to economize the physical memory with transparent page sharing and other methods and was only using 14GB of physical memory to host three 16GB XenServers each using 12GB of memory.

    Oh, and within that 14GB of physical memory used by ESXi 5.0 on the physical server was also a running nested Hyper-V R2 SP1 VM and the Windows-based XenCenter management console VM used to manage the three nested XenServer VMs.

    Pretty amazing memory economization by ESXi 5.0.
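    For the curious, here's the back-of-the-envelope consolidation ratio from Datto's figures above (guest-active memory versus physical memory consumed); the arithmetic is mine, the numbers are from the comment:

    ```python
    # Three nested 16 GB XenServer VMs, each actively using ~12 GB,
    # all hosted by ESXi 5.0 in ~14 GB of physical RAM.
    guest_active_gb = 3 * 12
    physical_used_gb = 14
    print(f"{guest_active_gb / physical_used_gb:.2f}x consolidation")  # -> 2.57x consolidation
    ```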

    In a ~2000 VM VMware environment we found the cost of the hypervisor software (Enterprise Plus Licensing) to be less than 5% of the total cost of the virtualization environment in the datacenters. The really big costs were #1 Salaries/Building and #2 Storage Related Costs. The cost of VMware licensing was way down the list.

    Datto

    Reply

  43. ela2014’s avatar

    hello

    In my network I have vSphere 4.1 and I want to upgrade to 5. What happens to my VMs' datastores if I upgrade them from VMFS-3 to VMFS-5?

    Reply

  44. Brad’s avatar

    So I have completed all your steps (made sure /etc/vmware/config is updated), but during the installation of vSphere, after the hardware scan, I am getting the following error – “. I am trying this on Cisco UCS B200 M1 blades that have the Intel E5540 CPU. I have verified that the CPU supports Intel VT and EPT and that they are enabled. Another question: during the setup of the VM I don’t have the option to set the Guest Operating System to ESXi 5.0. What am I overlooking? Any help would be appreciated.

    Reply

    1. Ricky El-Qasem’s avatar

    Brad, the Guest OS option for ESXi 5 only appears after you’ve set up the VM. Go back into the VM settings and you’ll now see it.

      Reply

  45. Brad’s avatar

    OK, I think I got everything set right. When running a 64 bit Windows nested VM on a VM running ESX, I am getting a “Host CPU is incompatible with the virtual machine’s requirements at CPUID level 0x80000001 register ‘edx’” error. Any thoughts?

    Reply

    1. Datto’s avatar

      Is your nested ESX host inserted into a VC cluster where EVC has been engaged? If so, EVC may have masked away the CPUs ability to run nested 64 bit VMs on a nested ESX host.
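      For reference, the leaf named in that error is the one that advertises 64-bit support: bit 29 (the LM, long-mode bit) of register EDX at CPUID leaf 0x80000001 is what gets hidden by EVC baselines or per-VM CPU masks. A tiny illustration of the check (the example EDX values here are made up):

      ```python
      # CPUID leaf 0x80000001, register EDX, bit 29 = LM (long mode / 64-bit support).
      LM_BIT = 1 << 29

      def supports_long_mode(edx: int) -> bool:
          """Return True if the LM bit is set in an EDX value from CPUID 0x80000001."""
          return bool(edx & LM_BIT)

      print(supports_long_mode(0x20000000))  # LM bit set   -> True
      print(supports_long_mode(0x00000000))  # LM bit clear -> False
      ```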

      Datto

      Reply

      1. Brad’s avatar

        Thanks for the response. No, it is not; it’s actually a single ESX host in a cluster.

        Reply

        1. Datto’s avatar

          If possible, you may want to temporarily remove the physical server from the cluster and from VC and see if the problem goes away.

          Datto

          Reply

  46. Datto’s avatar

    Also, double check that your physical ESX host has vhv.allow = "TRUE" in the /etc/vmware/config file.
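    A quick, idempotent way to check for and add that line (demonstrated against a scratch file so it runs anywhere; on the host, point CONFIG at /etc/vmware/config instead):

    ```shell
    # Scratch copy for the demo; use CONFIG=/etc/vmware/config on a real host.
    CONFIG=/tmp/vmware-config-demo
    : > "$CONFIG"
    # Only append the setting if no vhv.allow line exists yet.
    grep -q '^vhv.allow' "$CONFIG" || echo 'vhv.allow = "TRUE"' >> "$CONFIG"
    grep -q '^vhv.allow' "$CONFIG" || echo 'vhv.allow = "TRUE"' >> "$CONFIG"  # re-running adds nothing
    cat "$CONFIG"
    # vhv.allow = "TRUE"
    ```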

    Datto

    Reply

  47. ChrisH’s avatar

    Hi
    I’ve followed the steps and all is good until I go to install ESXi 5 onto the VM. The installation asks “Select a Disk to Install or Upgrade” and there are no storage devices listed.

    I’ve missed something but don’t know what.

    My host system is an HP ML350 with 475GB disk space and 8 GB RAM.

    Reply

  48. Datto’s avatar

    Questions:

    1) When you say you’ve followed the instructions, are you using VMware Workstation 8.0 on Windows 7 for the host where your ESXi 5.0 virtual machine will be deployed or are you running something else on the host?

    2) If you inspect the properties of the VM where ESXi 5.0 is going to be installed, what does it say for the size of the virtual disk in the VM?

    3) Which controller is used for the virtual disk in the VM used for installing ESXi 5.0?

    4) Which generation of ML350 do you have (there should be a “G” number associated with it)?

    Datto

    Reply

  49. ChrisH’s avatar

    Hi Datto

    1) I am running the vSphere client from a Windows 7 32-bit machine to access the ESXi host (the ML350).

    2) Provisioned storage 40Gb

    3) Paravirtual – None

    4) The ML350 is a G5.

    I hope the above makes sense. I am fairly new to VMware and don’t have much experience.

    Cheers

    Chris

    Reply

  50. Datto’s avatar

    Here’s what I would do.

    1) Create a new VM with an OS type of Red Hat 5.x 64-bit (this is likely the problem — you probably created it with Red Hat 6.x).

    2) After the VM is created, edit the properties of the VM and change the OS type to Other – ESXi 5.x

    3) Start the VM and start the install of ESXi 5.0. Write back and tell us whether you can see the vdisk in the newly created VM. If you can, go ahead and complete the installation and write back so we know it worked.

    Datto

    Reply

    1. Simon’s avatar

      Hi Datto

      Do you have a motherboard, CPU & memory combo that will do everything in the VMware lab? Getting this info is very hard.

      Please advise
      cheers

      Reply

      1. Datto’s avatar

        Simon:

        You might want to check these links for motherboards and white box combos — further updates for ESXi 5.0 compatibility should be coming soon on these websites:

        http://www.vm-help.com//esx40i/esx40_whitebox_HCL.php

        http://ultimatewhitebox.com/

        I haven’t yet gotten back to checking the white box motherboards that I have in the lab as far as checking ESXi 5.0 compatibility. I’m still busy with checking the rack servers and lab SANs for compatibility with ESXi 5.0 and running the ESXi 5.0 work on those rack servers/SANs.

        Datto

        Reply

  51. ChrisH’s avatar

    Hi Datto

    Well I’ve got it going!! The problem was I selected Red Hat 6x not 5x.. Once 5x was selected all was good.
    Just in the process of downloading vCenter Server.

    Thanks for the help.

    Cheers

    Chris

    Reply

    1. Datto’s avatar

      That’s great to hear. Glad I could help.

      Datto

      Reply

  52. Simon’s avatar

    Hi Datto.

    Can you give me a list of parts needed to build a Windows 7 Ultimate box on which I can use VMware Workstation 8 to build a fully working lab that is not hindered in any way?
    I want to be able to do everything that ESXi 5 & vCenter can do.

    I was looking at this set but I am sure it won’t do everything.

    http://www.overclockers.co.uk/showproduct.php?prodid=BU-033-OP&groupid=43&catid=2051&subcat

    Please advise
    thanks
    Simon

    Reply

  53. simon’s avatar

    I am looking at the following hardware. Will this run all aspects of ESXi inside VMware Workstation?

    http://ark.intel.com/products/52213/Intel-Core-i7-2600-Processor-(8M-Cache-3_40-GHz)

    The above CPU has support for the following :-

    Intel Turbo Boost Technology
    Intel vPro Technology
    Intel Hyper-Threading Technology
    Intel Virtualization Technology ( VT-X )
    Intel Virtualization Technology for Directed I/O ( VT-d )

    Intel Desktop Board DQ67SQ as it supports the following

    Intel vPro Technology
    Intel Active Management Technology (Intel AMT) 7.0
    Intel Trusted Execution Technology (Intel TXT)
    Intel Fast Call for Help (Intel FCFH)
    Intel Virtualization Technology (Intel VT)
    Intel Virtualization for Directed I/O (Intel VT-d)
    Hardware-based Keyboard-Video-Mouse (KVM)

    I am looking for this to also have 16 Gig of ram

    please advise
    thanks
    Simon

    Reply

  54. Jo’s avatar

    My host has just one physical NIC. Is that still going to work or do I need a 2nd NIC?

    Reply

    1. Datto’s avatar

      If you’re using this in a lab environment and not in production, you should be able to set up a simple virtualization lab with most VMware vSphere 5.0 features using a single NIC. Just make sure your NIC is a model that’s on the VMware HCL for ESXi 5.0 components. Also, make sure your hard drive controller is one that VMware ESXi 5.0 understands (ESXi 5.0 probably won’t understand any inexpensive software RAID, so don’t set up cheap software RAID for your disks).

      If your components aren’t on the VMware HCL for ESXi 5.0, you still may be able to get them to work by checking the comments of people at these two websites to see what experiences they had with your components:

      http://www.vm-help.com//esx40i/esx40_whitebox_HCL.php

      http://ultimatewhitebox.com/

      Datto

      Reply

  55. TooMeeK’s avatar

    Lol? so.. can I run KVM on ESXi 5.0?
    I’m going to try this..

    Reply

  56. ap’s avatar

    I have installed ESX 4.1 as a virtual machine on an ESXi 5 machine, and I want to create VMs on the virtual ESX host (i.e. ESX 4.1). I created a 64-bit VM, and when I power it on it gives the following error: This virtual machine is configured for 64-bit guest OS. However, 64-bit operation is not possible. This host does not support VT.

    Please help.
    Thanks in advance

    Reply

    1. Datto’s avatar

      Assuming you have turned on VT in your computer BIOS and have followed the written instructions above, what you’re experiencing is the symptom of not having SLAT capability in your physical CPU or that it’s not engaged for some reason. Nesting 64 bit VMs running on an ESXi VM requires SLAT capability in the physical CPU (for Intel CPUs this would be EPT and for AMD CPUs this would be RVI). Note that Core 2 Duo CPUs don’t have SLAT capability — some i series CPUs do have SLAT (i7 CPUs for instance).

      Try running a 32 bit VM on your nested ESXi 4.1 VM — if a 32 bit VM boots fine then you’ll know that you have VT engaged but SLAT capability is missing. If so, then verify your physical CPU actually has SLAT capability and inspect the BIOS options in your computer and see if there is an option to engage it (might be called IOMMU or Enhanced VT or something like that in the BIOS).
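      One quick heuristic (my sketch, not from the comment): booting a Linux live CD on the same hardware, the kernel exposes SLAT as the `ept` flag (Intel) or `npt` flag (AMD) in /proc/cpuinfo. The example below runs against a sample flags string so it works anywhere; on real hardware, pull the string from /proc/cpuinfo as shown in the comment.

      ```shell
      # Example flags string; on real Linux hardware use:
      #   flags=$(grep -m1 '^flags' /proc/cpuinfo)
      flags="fpu vme de pse ept vpid"
      case " $flags " in
        *" ept "*|*" npt "*) echo "SLAT capable" ;;         # Intel EPT or AMD NPT/RVI present
        *)                   echo "no SLAT detected" ;;
      esac
      # prints: SLAT capable
      ```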

      Datto

      Reply

  57. Eric Gray’s avatar

    Please take a moment to share what you’re doing with nested virtualization at:

    http://www.vcritical.com/2012/02/what-are-you-doing-with-nested-virtualization/

    Reply

  58. JC’s avatar

    Thank you, Eric, for such a great post. Much more helpful than VMware’s doc (#8970).

    I was hoping (praying, begging, groveling) perhaps someone in this thread could shed some light on what I might be doing wrong…

    I’m trying to nest ESXi 5.0 inside ESXi 5.0. I haven’t had any issues creating the guest hypervisor; however, my attempts at 64-bit nested guests keep falling short.

    I’ve added vhv.allow = "TRUE" to /etc/vmware/config on the physical boxes and confirmed HV Support is enabled, and yet I still get the “…64-bit operation is not possible…” error message when I try to create an x64 nested guest on my vESXi host.

    When making the vESXi guest, I was not able to choose the OS as “Other: VMware ESXi 5.x” as outlined in the post, not in the initial config, nor after creating the machine. It just isn’t showing up as an option.

    My only thought currently is that perhaps my (physical) hosts’ processors are not beefy enough to support it? Is there a minimum requirement?

    Thank you in advance; if more information on my setup is needed to help troubleshoot, please don’t hesitate to ask!

    Reply

    1. Datto’s avatar

      Your physical CPUs need to have SLAT capability or you will only be able to run 32 bit nested VMs on your nested ESXi 5.0 hosts. Try running 32 bit nested VMs on your nested ESXi 5.0 installation — if those work and 64 bit don’t then you’re likely having a SLAT problem.

      SLAT capable CPUs for Intel are generally i7 CPUs in the home/laptop variety and Nehalem and Westmere class CPUs in the server-class variety. There may be some i5 CPUs that ended up getting SLAT capability also but it’s not clear which i5 CPUs got it.

      For AMD SLAT capable server-class CPUs you’d need 23xx or 83xx CPUs or higher that are preferably C1 or higher stepping (some B3 stepping may also have usable SLAT capability).

      Datto

      Reply

      1. JC’s avatar

        Figured it wouldn’t work on the cluster with Xeon 5140s, but my other cluster is all X5570 and X5550, and I get the same results there.

        32-bit nested guests work fine. I get errors when I try to build an x64 guest, and I never get the option to choose “VMware ESXi 5.x” in the drop-down.

        Any suggestions on where to start troubleshooting? I’ve built and re-built over a dozen vESXi guests with various tweaks, to no avail.

        Reply

  59. Entrails’s avatar

    Hi here!

    I have a little bit of a problem.
    I’m currently running ESXi 4.1 with vCenter 5.0, to which I don’t have full admin rights. On top of that I’m running a XenServer VM on the ESXi 4.1 host, and I’m trying to install a Windows 7 32-bit OS on it, but it keeps saying that I need to activate HVM even though it is activated. I tried adding the vhv.allow = TRUE parameter and the other settings written in the guide, but it still isn’t working.

    Reply

  60. William’s avatar

    While VMware claims that they support nested virtualization with vSphere 5.x, this configuration is completely and totally unsupported by Microsoft.

    Reply

  61. Bjørn’s avatar

    Of course it is unsupported by Microsoft. Hyper-V is not able to expose hardware assisted virtualization to a VM, so if you want to virtualize Hyper-V you have to do it in a VMware product (workstation or ESXi). It works perfectly even though it is not supported.

    Reply

  62. Datto’s avatar

    A few late tips on running Hyper-V R3 Beta (the one released in February 2012) under ESXi 5.0 Update 1 — note: make sure you’ve updated to ESXi 5.0 Update 1 to make it easy on yourself (you could put individual fixes from VMware into plain Jane regular ESXi 5.0 GA and make it work but ESXi 5.0 Update 1 has the fixes in it)

    1) I had to remove the quotes around the word TRUE for vhv.allow = TRUE that goes into the physical ESXi 5.0 /etc/vmware/config file — previously the TRUE worked with quotes around it for Hyper-V 2.0 nested and Hyper-V R3 Developer Preview

    2) I had to remove the quotes around the word FALSE for hypervisor.cpuid.v0 = FALSE that goes into the VMX file associated with the Hyper-V R3 Beta VM (the outside VM as it is called) — just a note — the v0 in that line is a zero, not an Oh — previously the FALSE worked with quotes around it for Hyper-V 2.0 nested and Hyper-V R3 Developer Preview

    3) I had to add the line mce.enable = TRUE (without any quotes around TRUE again) to the VMX file associated with the Hyper-V R3 Beta VM (the outside VM as it is called)

    4) I had to manually select the bottom choice in the CPU/MMU Virtualization choice under the Options Tab in the Edit Settings of the Hyper-V R3 VM (this is nested on a 16 core AMD server that has SLAT capable CPUs which is required in order to make Hyper-V R3 Beta work as a nested VM)

    5) I had to leave the O/S type in the Edit Settings of the Hyper-V R3 Beta VM at Microsoft Windows 8 Server (64 bit) and then copy the following CPU mask into the VMX file for the Hyper-V R3 Beta VM (I couldn’t get it to work if I chose the O/S type to be ESXi 5.0 or ESXi 4.0 in the Edit Settings of the Hyper-V R3 Beta VM, as was done with previous versions of Hyper-V running nested):

    cpuid.1.ecx="----:----:----:----:----:----:--h-:----"
    cpuid.80000001.ecx.amd="----:----:----:----:----:----:----:-h--"
    cpuid.8000000a.eax.amd="hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"
    cpuid.8000000a.ebx.amd="hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"
    cpuid.8000000a.edx.amd="hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"

    Do all of the above, and the other written notes in the above formal text of this webpage prior to engaging the Hyper-V role in Hyper-V R3 Beta.
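    Pulling tips 1 through 5 together, the two files end up with roughly these additions (a sketch assembled from this comment, not independently verified; the cpuid mask lines are reproduced as posted, with quote characters normalized):

    ```
    # /etc/vmware/config on the physical ESXi 5.0 Update 1 host:
    vhv.allow = TRUE

    # Added to the .vmx file of the Hyper-V R3 Beta VM (guest OS type left at
    # Microsoft Windows 8 Server (64 bit); CPU/MMU Virtualization set to the
    # bottom radio button):
    hypervisor.cpuid.v0 = FALSE
    mce.enable = TRUE
    cpuid.1.ecx="----:----:----:----:----:----:--h-:----"
    cpuid.80000001.ecx.amd="----:----:----:----:----:----:----:-h--"
    cpuid.8000000a.eax.amd="hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"
    cpuid.8000000a.ebx.amd="hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"
    cpuid.8000000a.edx.amd="hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"
    ```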

    Hope this helps others out there.

    Datto

    Reply

    1. Datto’s avatar

      Hyper-V Server 2012 RTM and KVM on CentOS 6.3 — I’m back from long-distance hiking and am able to confirm that Hyper-V Server 2012 RTM and KVM on CentOS 6.3 both run as nested hypervisors that run their own 64-bit Windows VMs. I used ESXi 5.0u1 and AMD Opteron 8382 CPUs that have SLAT (RVI) capability in some HP DL rack servers.

      Also wanted to mention that RemoteFX 3.0 under Hyper-V Server 2012 with relatively inexpensive VisionTek HD5450 2GB video cards (RemoteFX 3.0 requires DirectX 11) runs fine (not with nested hypervisors but running on the actual physical hardware — requires SLAT capability in the CPUs for RemoteFX). Late-model VSphere may also have a similar capability of directly addressing video card hardware with VMware View using high end NVidia cards but I haven’t tried that out.

      Some of you might find the #NotSupported session documentation from VMworld (made available on the Internet recently) quite interesting when considering how many levels deep you can now go with nesting and the changes to vhv.allow / vhv.enable settings in ESXi 5.1.

      Datto

      Reply

  63. Hussain’s avatar

    Hello,
    I have a complete lab running virtualized ESXi 5 on ESXi 5 (build 5.0.0,623860). The lab and the ESXi hosts are working fine without any issues, and so are the 32-bit nested VMs, but I can’t see what’s required to run a 64-bit nested VM on the second virtualization layer.

    The lab was created for educational and testing purposes. What is required to be able to run 64-bit nested VMs?

    Reply

    1. Bjørn’s avatar

      There are several reasons why you might want to run 64 bit nested VM’s.
      You might want to try the new vCenter Appliance, the vMA, or just run a Windows Server 2008 R2 VM. All of these require 64 bit.
      The only thing you need is a CPU that supports SLAT (AMD RVI or Intel EPT).

      Reply

  64. Hussain’s avatar

    hello Bjørn,
    I accept the vCenter Appliance (or a 64-bit Windows vCenter) as a nested VM. But running other 64-bit Windows guests just to experiment with Windows roles, or 32-bit nested VMs for things like DHCP, DNS or IIS applications, feels like too deep a level of experimentation for nested VMs. (It’s my feeling.)

    Thanks,

    Reply

  65. Vannda thai’s avatar

    I got the error “Number of NICs is less than 4: Unsupported Hardware” when installing the VSA. Can we install VSA with only 2 NICs?

    Reply

  66. venkat’s avatar

    Hi,
    Nice post. I managed to do the installation & configuration as mentioned above. I connected with the vSphere client from my laptop, but when I try to access a VM’s console the vSphere client goes unresponsive for a few seconds and displays the following error: “unable to connect to the mks, failed to connect to serverport:902”. After a couple of tries I’ll get the console.

    If I select some other VM the issue comes up again… can you help me here?

    Reply

    1. vDesktop’s avatar

      Do you have DNS resolving your hostnames of your ESXi host?

      Reply

  67. Joe’s avatar

    I have followed the instruction and created vESXi with 2 virtual sockets.
    I’m puzzled why vCPU 0 is forever 100% utilised.

    Reply

  68. Datto’s avatar

    I was able to do 4th level nesting of VMs earlier today. This is:

    ESXi 5.0U1 physical machine hosting an
    ESXi 5.1 VM hosting an
    ESXi 5.1 VM hosting a
    Windows XP 32 bit VM.

    The 4th level VM is faster than I thought it would be considering all the nesting going on (a little less than 50% of what the speed of a normal VM would be but of course, you gain quite a bit of flexibility with nesting).

    Requires SLAT capability in your CPUs (EPT from Intel CPUs or RVI from AMD CPUs). I used Opteron 8354 CPUs. By the way, if you haven’t figured it out by now, never buy another CPU in your servers, blades, laptops, desktops that doesn’t have SLAT capability — you’ll be wasting your money.

    Details of 4th level nesting are described in the slides from the #NotSupported VMworld presentation found on the Internet nowadays.

    Datto

    Reply

    1. B’s avatar

      Datto,

      I am running ESXi5.1 on my hardware with vCenter 5.1 and I am trying to install VMTools inside of a nested ESXi VM’s guest OS (ESXi). Is this possible?

      Also, is there a forum you know of (outside of this) where nested virtualization is a primary focus?

      Thanks,
      B

      Reply

      1. Datto’s avatar

        Here’s a link to the Nested Forum on the VMware Community Forums:

        http://communities.vmware.com/community/vmtn/bestpractices/nested

        Datto

        Reply

      2. Datto’s avatar

        In case you missed it, there is now a VMware Fling that allows you to get VMware Tools installed into a nested ESXi 5.5 host. Worked for me — I think it’s ESXi 5.5 only as I remember.

        Details are here:

        http://labs.vmware.com/flings/vmware-tools-for-nested-esxi

        Datto

        Reply

    2. Datto’s avatar

      I was able to get x64 Win2008 R2 running as a 4th level nested VM. I used ESXi 5.1a for all levels of ESXi this time and that appears to have made the difference. The outline is:

      ESXi 5.1a physical machine hosting an
      ESXi 5.1a VM hosting an
      ESXi 5.1a VM hosting a
      Windows 2008 R2 64 bit VM

      It’s a little slow down through all those layers of nesting to get to the Win2008 R2 VM, but it’s still usable for some cases where the physical box is a super beast (training room sessions, developers that need their own VSphere environment, service providers whose customers come over a slow WAN connection anyhow and don’t care about speed but need a sandbox VSphere environment).

      The above requires EPT (for Intel) or RVI (for AMD) in the physical CPUs in order to do the nesting properly.

      My vCloud Director lab runs as a nested setup so this probably allows me to put ESXi 5.1a into the catalog and deploy a pre-built VSphere lab that can be brought up new relatively quickly (requires a bit of scripting to pull it off). I seem to remember there’s someone who already had this idea going (ESXi coming off a VCD catalog).

      Anyhow, thought some might be interested to know that 4th level nesting with x64 VMs at the 4th level is possible.

      Datto

      Reply

  69. Rob’s avatar

    I just figured out the problem for when a guest gets stuck, and won’t boot. You must use the VMXNET 3 Adapter for nested hosts to boot correctly. This will also require VMware Tools to be installed. I was having a problem nesting Windows 7 64bit as the first layer, then switching it over to ESXi as the Guest OS.

    Reply

    1. Simon’s avatar

      Thanks for your Solution !!!
      That was EXACTLY what i was looking for !

      Now I can go to bed earlier :D

      Reply

  70. Datto’s avatar

    Just wanted to make some of you with non-EPT/non-RVI CPUs aware of restrictions of running Hyper-V under VSphere 5.1. Apparently, VSphere 5.1 requires EPT/RVI in the physical CPU in order to allow Hyper-V to run properly in a VM under VSphere 5.1.

    So, if you don’t have EPT or RVI capable CPUs and you’re currently using Hyper-V running in a VM under ESXi 5.0 you might want to think this through before you upgrade your server to ESXi 5.1.

    The discussion is at this link — ask there if you have questions:

    http://communities.vmware.com/message/2159987#2159987

    Datto

    Reply

  71. Datto’s avatar

    Also wanted to add — it appears VMware is supporting physical ESXi 5.1 hosts that are running regular VMs when the physical ESXi 5.1 host is also running nested ESXi (the physical host evidently has to be running ESXi 5.1 and not any earlier versions of ESXi in order to get support from VMware).

    See this thread:

    http://communities.vmware.com/message/2166303#2166303

    That may be interesting to those of you with high-powered gronking physical ESXi servers. Note though that the vSwitches coming off the physical ESXi 5.1 host have to be put into Promiscuous Mode in order for the nested inner VMs running on the nested ESXi VM hosts to be able to see the outside world.

    I’m not sure I’d want to do that to a production physical ESXi 5.1 host even if it was supported by VMware. Your call.

    But it sure would make for a great training room setup or for a great corporate lab setup where say developers or trainees need their own ESXi and their own VCenter for some reason.

    Datto

    Reply

  72. Eric Gray’s avatar

    Dear Readers:

    A big Thank You to all who have stopped by to ask questions or share expertise on nested virtualization over the past few years.

    I would like to ask for a favor: please consider voting for VCritical in the 2014 vSphere Land top blog contest. Details available here: http://www.vcritical.com/2014/02/top-vmware-blog-contest-2014-voting-now-open/

    Thanks again,
    Eric Gray

    Reply
