[Openstack] Which nova scheduler for different hardware sizes?

Christian Wittwer wittwerch at gmail.com
Tue Nov 1 08:38:51 UTC 2011


Lorin,
Thanks for your reply. The least cost scheduler with those cost
functions looks interesting.
Unfortunately there is not much documentation about it. Can somebody
give me an example of how to switch to that scheduler using the memory
cost function that already exists?
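[Editor's note: a minimal sketch of the flags involved, assuming the Diablo-era flag names in nova/scheduler/least_cost.py; verify the exact names against your release before using them.]

```
# nova.conf flags (Diablo-era names; check your release's source):
--scheduler_driver=nova.scheduler.least_cost.LeastCostScheduler
--least_cost_scheduler_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
# Negative weight spreads instances onto hosts with more free memory;
# a positive weight would fill the fullest host first.
--compute_fill_first_cost_fn_weight=-1.0
```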

Cheers,
Christian

2011/10/24 Lorin Hochstein <lorin at isi.edu>:
> Christian:
> You could use the least cost scheduler, but I think you'd have to write your
> own cost function to take into account the different numbers of cores.
> Looking at the source, the only cost function it ships with considers just
> the amount of free memory, not load in terms of total physical cores versus
> allocated virtual cores. (We use a custom scheduler at our site, so I don't
> have any firsthand experience with the least-cost scheduler.)
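
[Editor's note: a hypothetical sketch of the kind of custom cost function Lorin describes. The `host_info` tuple and capability keys here are assumptions for illustration, not the actual Nova scheduler API; a real cost function would be registered via the least-cost scheduler flags.]

```python
# Hypothetical cost function: weight hosts by the ratio of allocated
# vCPUs to physical cores, so a lightly loaded small host and a big
# host compete fairly regardless of absolute core count.

def core_ratio_cost_fn(host_info):
    """Return a cost in [0, 1]: 0.0 = idle, 1.0 = fully allocated."""
    hostname, capabilities = host_info
    total = capabilities.get('vcpus', 1)       # physical cores reported
    used = capabilities.get('vcpus_used', 0)   # vCPUs already allocated
    return float(used) / max(total, 1)

hosts = [
    ('old-node', {'vcpus': 2, 'vcpus_used': 1}),    # 50% allocated
    ('big-node', {'vcpus': 16, 'vcpus_used': 4}),   # 25% allocated
]

# The least-cost idea: pick the host with the lowest cost.
best = min(hosts, key=core_ratio_cost_fn)
```

With these numbers the scheduler would place the next instance on
big-node, even though old-node has fewer instances in absolute terms.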
> Lorin
> --
> Lorin Hochstein, Computer Scientist
> USC Information Sciences Institute
> 703.812.3710
> http://www.east.isi.edu/~lorin
>
>
>
> On Oct 22, 2011, at 3:17 AM, Christian Wittwer wrote:
>
> I'm planning to build an OpenStack Nova installation with older
> hardware. These servers obviously don't all have the same hardware
> configuration in terms of memory and cores.
> It ranges from 2 cores and 4 GB of memory to 16 cores and 64 GB of
> memory. I know that there are different schedulers, but I'm not sure
> which one to choose.
> The simple scheduler tries to find the least used host, but the
> maximum number of cores per host (max_cores) is a constant, which
> doesn't work for me.
> Maybe the least cost scheduler would be the right one? But I'm not
> sure, because I did not find any documentation about how to use it.
>
> Cheers,
> Christian
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack at lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
