Hello Dynamic Memory?

Looks like I was on to something a few weeks back when I showed how Microsoft had tried, but failed, to implement a feature that would allow Hyper-V R2 to accommodate the use of more virtual machine memory than is available on the underlying physical host.

A screenshot of Dynamic Memory configuration in a post-RTM build of Windows Server 2008 R2 has surfaced in an article at Softpedia.

Listen… Do you hear that?

It’s the sound of Microsoft Virtualization curbing their criticism of memory overcommit.

Hat tip to Aidan Finn.

16 comments

  1. roflmao:

    Dynamic memory is not overcommitment.

    1. Fernando:

      So, what is it then?

      1. Anton Zhbankov:

        It’s dynamic memory! 😉

        1. Fernando:

          Brilliant explanation 😀

        2. NiTRo:

          sort of actually 🙂

        3. Mark Wilson:

          I’m pretty sure this was demonstrated before, back around September/October 2008, and the press were actually briefed that it would be in the Windows Server 2008 R2 version of Hyper-V, but it was pulled from the beta. Consequently, it shouldn’t be a surprise to see it back on the table for a future release.

          Please excuse the self-promotion but here’s a quote from a blog post I wrote in October 2010:

          “Microsoft also spoke to me about a dynamic memory capability (just like the balloon model that competitors offer). I asked why the company had been so vocal in downplaying competitive implementations of this technology yet was now implementing something similar and Ward Ralston explained to me that this is not the right solution for everyone but may help to handle memory usage spikes in a VDI environment. Since then, I’ve been advised that dynamic memory will not be in the beta release of Windows Server 2008 R2 and Microsoft is evaluating options for inclusion (or otherwise) at release candidate stage. These apparently conflicting statements, within just a few days of one another, should not be interpreted as indecisiveness on the part of Microsoft – we’re not even at beta stage yet and features/functionality may change considerably before release.”

          Source: http://www.markwilson.co.uk/blog/2008/10/just-a-few-of-the-new-features-to-expect-in-windows-server-2008-r2.htm

          1. Mark Wilson:

            Spot the deliberate mistake… I wrote that in October 2008… but could well be writing the same thing again in October 2010 😉

          2. Anton Zhbankov:

            So VMware has terrible “Memory Overcommitment” that can lead to serious service degradation, while Microsoft has wonderful “Dynamic Memory” to handle memory usage spikes in VDI.

            This is overcommitment, but you should not call it overcommitment, because we want you to think that black is white. And we want you to change your mind any time our marketing invents a new name for a technology our engineers finally implemented five years later than the competition.

          3. James O'Neill:

            @Anton.
            Overcommitment means:
            (sum of working sets) > (total available memory) -> paging -> poor performance.
            When VMware products share pages:
            (sum of allocated memory) > (total memory) > (sum of working sets).
            That’s not overcommitment: the working sets fit in memory, so there is no paging -> good performance.

            Unfortunately, people don’t always distinguish between the two.

            The scheme Mark describes had dynamic allocation, so:
            (sum of maximum memory) > (total memory)
            (sum of allocated memory) < (total memory)

            In other words, once all memory is allocated, the memory available to one VM can only be increased if the memory available to another is reduced by a balloon driver. [Eric quoted Mark’s explanation of that – just follow the link in the original post above.]

            If this technology reaches the market as it was when Mark saw it (and it would take a brave person to extrapolate from a single screenshot to a release schedule), VMware folks will stop saying “Microsoft is copying what we do” and start saying “Microsoft doesn’t do all the things we do”, and the argument will move to how good or bad paging in the virtualization layer really is (VMware do it, Microsoft don’t).
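[Editor's aside: James's two inequalities can be put side by side in a toy calculation. The numbers below are invented purely for illustration and are not taken from any product or benchmark.]

```python
# Toy numbers, invented for illustration: three VMs on a 32 GB host.
physical_gb = 32
allocated_gb = [16, 16, 8]   # memory granted to each VM; sum = 40
working_set_gb = [6, 5, 4]   # memory each VM actively touches; sum = 15

# "Overcommitted" in the sense of granted memory exceeding physical memory:
granted_exceeds_physical = sum(allocated_gb) > physical_gb

# Overcommitted in James's sense, the point where paging must begin:
working_sets_exceed_physical = sum(working_set_gb) > physical_gb

print(granted_exceeds_physical, working_sets_exceed_physical)  # True False
```

With page sharing, the first condition can hold while the second does not, which is exactly the situation James says should not be called overcommitment.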

          4. Anton Zhbankov:

            James,
            AFAIR, VMware defines memory overcommitment as (sum of memory granted) > (physical memory). So if we have 32GB of memory and 40GB granted to running VMs, memory is overcommitted, even if the working set is only 15GB.

            TPS (transparent page sharing) is counted among memory overcommitment techniques, along with ballooning. So actually it’s just a question of terminology.

            Marketing will always find something to say, no matter what the engineers do.

          5. James O'Neill:

            Anton, you’re right we will probably see lots of arguments about the correct usage of “commit” and “Overcommit”. Overcommitment (at least when I was introduced to the term in the context of non-virtualized OSes) is the point where paging begins.

            Far be it from me to advise VMware on how to market their products, but it would seem to me that the whole upside of page sharing is that it allows (sum of granted) > (physical memory) without actually being overcommitted and needing to page (bad).

            Microsoft’s dynamic memory (as seen in 2008) didn’t page – so it reduced what was granted to one VM to increase what was granted to another. Different beasts.
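[Editor's aside: the redistribution James describes is zero-sum, and can be sketched with a minimal balloon model. The function and VM names below are hypothetical, invented for illustration; this is not Microsoft's actual implementation.]

```python
# Hypothetical balloon model: total granted memory never exceeds physical
# memory, so giving one VM more requires reclaiming that amount from another.
def rebalance(grants_gb, donor, recipient, amount_gb):
    """Inflate a balloon in `donor` to hand `amount_gb` GB to `recipient`."""
    if grants_gb[donor] < amount_gb:
        raise ValueError("donor VM cannot release that much memory")
    grants_gb[donor] -= amount_gb      # balloon inflates inside the donor guest
    grants_gb[recipient] += amount_gb  # hypervisor regrants the freed pages
    return grants_gb

grants_gb = {"vm1": 12, "vm2": 12}    # a 24 GB host, fully granted
rebalance(grants_gb, "vm1", "vm2", 4)
print(grants_gb)  # {'vm1': 8, 'vm2': 16} -- the total is still 24 GB
```

The invariant is that the sum of grants never changes: no paging happens, but one VM's gain is always another VM's loss.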

          6. Anton Zhbankov:

            James, I know how you can give a VM more memory – Windows 2003 Enterprise and above support memory hot-add. But how can you hot-remove memory?
            I see only one way to do it – ballooning = swapping.

          7. Mark Chuang:

            @James

            It feels like saying that “VMware’s approach can lead to paging” while “MS’s approach won’t” misses the bigger picture.

            Let’s look at the scenario that would lead to paging for VMware, i.e. the VMs on a host cumulatively need more memory than exists on the host. In the MS scheme (if I’m interpreting your explanation correctly), the MS balloon driver will try to free up memory, but it won’t be able to, because all of the memory is actively being used. So you end up with Hyper-V VMs that need more memory but can’t get it, and the performance of those VMs is essentially capped as well.

            So in both approaches, the bigger-picture solution is that you’ll need to move one or more VMs to another host (since, AFAIK, neither company has figured out how to defy the laws of physics and create more physical memory out of thin air). That’s where VMware DRS comes in…

            (Disclosure: Yes, I work for VMware.)

          8. Phil:

            Mark: “That’s where VMware DRS comes in…”

            or PRO Tips in a Hyper-V environment.
