[Openstack-operators] Hypervisor free memory recommendations

Matt Van Winkle mvanwink at rackspace.com
Fri Sep 5 02:52:47 UTC 2014


Based on experience with some of our various platforms, I'd agree with
Jay's recommendations as a great starting point.  On the larger hosts, if
you allow smaller instances - 1 GB, for example - you may find you end up
limiting total VMs per host for other performance reasons before you
use up all the available tenant RAM.
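
If it helps to turn Jay's tiers (quoted below) into an actual number
for nova's reserved_host_memory_mb, here's a rough Python sketch - the
function name and tier encoding are mine, the GB figures are straight
from his table:

    # Map total host RAM to a reserved_host_memory_mb value (in MB),
    # following Jay's tiers quoted below. Names here are illustrative.
    def reserved_host_memory_mb(host_ram_gb):
        tiers = [
            (32, 2.0),     # 16-32 GB hosts   -> 2 GB reserved
            (64, 2.75),    # 32-64 GB hosts   -> 2.75 GB
            (128, 3.5),    # 64-128 GB hosts  -> 3.50 GB
            (256, 4.25),   # 128-256 GB hosts -> 4.25 GB
        ]
        for upper_gb, reserved_gb in tiers:
            if host_ram_gb <= upper_gb:
                return int(reserved_gb * 1024)
        return int(5.5 * 1024)   # 256+ GB hosts -> 5.50 GB

    # e.g. reserved_host_memory_mb(64) returns 2816 (2.75 GB in MB)

Nova takes the value in megabytes, which is why the sketch multiplies
by 1024.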

I'd also add that, over time, you may find reasons to add more items -
processes, tools, etc. - to hypervisors, so it might be worth rounding up
a gig or two for long-term flexibility.  This is obviously easier on the
larger-RAM boxes, but I can think of a few times that we thought "this
would be a lot easier if we had a little more RAM for dom0" on our 32 gig
hosts.
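
On the config side, that extra headroom is just the same knob turned up
a bit. If I remember right, on Grizzly-era nova the option lives in
nova.conf under [DEFAULT] and takes megabytes, so a 32 gig host using
Jay's 2 GB tier plus a gig of headroom would look something like:

    [DEFAULT]
    # 2 GB from Jay's table + 1 GB of long-term headroom, in MB
    reserved_host_memory_mb = 3072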

Just some thoughts...

Thanks!
Matt

On 9/4/14 1:14 PM, "Jay Pipes" <jaypipes at gmail.com> wrote:

>On 09/04/2014 01:51 PM, Juan José Pavlik Salles wrote:
>> Hi Jay, I do agree about 10% being too much memory on big nodes, but
>> right now we are using small ones (too small, if you ask me). These new
>> nodes are 16 GB, so if I reserve 4 GB for the dom0 I'd be losing 25% of
>> the available RAM. I was thinking about something like: if you have
>> less than 32 GB, give 10% of it to the dom0, and if you have more than
>> 32 GB, go with 4 GB for the dom0.
>
>I'd go with something like this:
>
>  Host RAM           dom0/reserved host RAM
>  ================== ======================
>  16 - 32 GB         2 GB
>  32 - 64 GB         2.75 GB
>  64 - 128 GB        3.50 GB
>  128 - 256 GB       4.25 GB
>  256+ GB            5.50 GB
>
>If you have heavy packing of VMs (lots of tiny or small VMs), you may
>want to add a half GB to the above, but not much more than that, IMO.
>
>> Maybe different environments will need
>> different rules, but this should work in most standard deployments, I'd
>> say. Jay, you mentioned that big nodes running many VMs don't need more
>> than 4 GB of dedicated RAM; haven't you ever had any swapping situation
>> in that kind of scenario?
>
>No, not on the compute nodes, no. On the controller nodes, yes, but
>that's a totally different thing :)
>
>Best,
>-jay
>
>> 2014-09-04 14:26 GMT-03:00 Jay Pipes <jaypipes at gmail.com>:
>>
>>     There's not really any need for 10% in my experience. Giving
>>     dom0/bare metal around 3-4GB is perfectly fine for the vast majority
>>     of scenarios, even when there's a hundred or more VMs on the box.
>>     Most compute node server hardware nowadays should have 128-512GB of
>>     RAM available, and 4GB for the host is more than enough.
>>
>>     -jay
>>
>>
>>     On 09/04/2014 12:45 PM, Juan José Pavlik Salles wrote:
>>
>>         Hi Tomasz, thanks for your answer. I'll start with 10% and
>>         see what happens. Thanks again!
>>
>>
>>         2014-09-04 13:37 GMT-03:00 Tomasz Napierala
>>         <tnapierala at mirantis.com>:
>>
>>              On 04 Sep 2014, at 18:04, Juan José Pavlik Salles
>>              <jjpavlik at gmail.com> wrote:
>>
>>              > Hi guys, I'm running a Grizzly cloud with Ubuntu
>>              > 12.04+KVM. I'd like to know if there's any kind of
>>              > recommended free RAM for the hypervisor. I know there's
>>              > a nova variable called "reserved_host_memory_mb" but I
>>              > don't know what a proper value would be.
>>
>>              Check on a deployed compute node that has no running VMs,
>>              add some margin - say 10% - and you should be fine.
>>              Usually compute nodes are not consuming extra memory
>>              besides the VMs.
>>
>>              Regards,
>>              --
>>              Tomasz 'Zen' Napierala
>>              Sr. OpenStack Engineer
>>              tnapierala at mirantis.com
>>
>>
>>         --
>>         Pavlik Salles Juan José
>>         Blog - http://viviendolared.blogspot.com
>>
>> --
>> Pavlik Salles Juan José
>> Blog - http://viviendolared.blogspot.com
>
>_______________________________________________
>OpenStack-operators mailing list
>OpenStack-operators at lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



