[Openstack-operators] consolidation options for nova scheduler
Sylvain Bauza
sbauza at redhat.com
Tue Aug 26 15:49:15 UTC 2014
On 26/08/2014 17:35, Simon Pasquier wrote:
> Hi,
> IIUC you probably want to set ram_weight_multiplier to a negative
> number. From the OpenStack documentation [1]:
>
> By default, the scheduler spreads instances across all hosts
>     evenly. Set the ram_weight_multiplier option to a negative
> number if you prefer stacking instead of spreading. Use a
> floating-point value.
>
>
> Simon
>
> [1]
> http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html
>
>
>
Simon is absolutely right: you need to use the RAMWeigher with a
negative multiplier.
At the moment that's only possible for RAM; there is another patch in
progress for consolidating on CPUs as well [2].
[2] https://review.openstack.org/109325
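For reference, a minimal nova.conf sketch of the setting Simon quoted
(the value -1.0 is only an example; any negative float turns the
default spreading behaviour into stacking):

```ini
[DEFAULT]
# A negative value makes the RAMWeigher prefer hosts with *less* free RAM,
# i.e. stack new instances on already-used hosts instead of spreading them.
ram_weight_multiplier = -1.0
```

Restart nova-scheduler after changing this for it to take effect.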
> On Tue, Aug 26, 2014 at 5:01 PM, George Shuklin
> <george.shuklin at gmail.com <mailto:george.shuklin at gmail.com>> wrote:
>
> Good day.
>
>     I can't find any option for the nova scheduler to consolidate new
>     instances on a few hosts instead of spreading them across all
>     available hosts.
>
>     Simple example: let's say we have 10 hosts, each with 10 GB of
>     memory for instances, and flavors with 3 GB and 5 GB of RAM. If we
>     run 20 new instances, they will consume about 6 GB per host and we
>     will not be able to run a new instance with 6 GB of RAM (even
>     though we have 10*4 = 40 GB of free memory across the computes,
>     none of the hosts has more than 4 GB free).
>
>     Is there any nice way to tell OpenStack to 'consolidate'? Thanks!
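The arithmetic in that example can be sketched as follows (a toy
illustration, not nova code: it assumes only the 3 GB flavor is used,
and a hypothetical first-fit loop stands in for the stacking placement):

```python
# 10 hosts with 10 GB each, twenty 3 GB instances.
HOSTS, HOST_RAM, INSTANCES, FLAVOR = 10, 10, 20, 3

# Spreading: instances are balanced evenly across all hosts.
spread = [HOST_RAM] * HOSTS
for i in range(INSTANCES):
    spread[i % HOSTS] -= FLAVOR

# Stacking: each host is filled before the next one is touched (first fit).
stack = [HOST_RAM] * HOSTS
for _ in range(INSTANCES):
    host = next(i for i, free in enumerate(stack) if free >= FLAVOR)
    stack[host] -= FLAVOR

print(max(spread))  # 4  -> no host can take a 6 GB instance
print(max(stack))   # 10 -> whole hosts stay free for larger instances
```

With spreading every host ends up with 4 GB free, so the 40 GB of total
free memory is stranded; with stacking three hosts remain untouched.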
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> <mailto:OpenStack-operators at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>