[Openstack] Openstack+KVM+overcommit, VM priority

Adam Lawson alawson at aqorn.com
Thu Jan 12 17:09:04 UTC 2017

This is a scheduler thing. KVM/Linux has its own scheduler built in for
any process that needs to share CPU cycles, separate from the filter scheduler
used by OpenStack. My understanding is that any optimizations around
resource consumption will not be handled by OpenStack but through manual
tuning on the hypervisor host.
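To make that concrete, here is a sketch of the kind of manual host-side tuning I
mean, using libvirt's CFS weighting (the domain names vm1/vm2 are made up for the
example):

```shell
# Inspect the current CPU scheduling parameters for a libvirt domain
virsh schedinfo vm1

# Give vm1 twice the default CPU weight (default cpu_shares is 1024),
# so under contention it gets roughly twice the CPU time of vm2
virsh schedinfo vm1 --set cpu_shares=2048
virsh schedinfo vm2 --set cpu_shares=1024
```

Note these are cgroup weights, not hard caps - they only matter when the host
CPUs are actually contended.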

And I may be wrong about it not being handled within Nova. Also, I know you're
doing it now, but over-committing RAM is way against best practice, my
friend.
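If you want to stop Nova from over-committing RAM in the first place, the
allocation ratio is the knob for that (set on each compute node; a sketch, not
your exact config):

```ini
# /etc/nova/nova.conf on each compute node
[DEFAULT]
# 1.0 = never schedule more RAM than the host physically has
# (the default has historically been 1.5, i.e. 50% overcommit)
ram_allocation_ratio = 1.0
```

Restart nova-compute after changing it; already-running instances are not
rebalanced.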


On Jan 12, 2017 4:46 AM, "Ivan Derbenev" <ivan.derbenev at tech-corps.com> wrote:

> What are the facilities for CPU? That's the initial question - how can I
> do it, if this is possible?
> Best Regards
> Tech-corps IT Engineer
> Ivan Derbenev
> Phone: +79633431774
> -----Original Message-----
> From: James Downs [mailto:egon at egon.cc]
> Sent: Thursday, January 12, 2017 12:56 AM
> To: Ivan Derbenev <ivan.derbenev at tech-corps.com>
> Cc: openstack at lists.openstack.org
> Subject: Re: [Openstack] Openstack+KVM+overcommit, VM priority
> On Wed, Jan 11, 2017 at 09:34:32PM +0000, Ivan Derbenev wrote:
> > if both vms start using all 64gb memory, both of them start using swap
> Don't overcommit RAM.
> > So, the question is - is it possible to prioritize the 1st VM above the 2nd, so
> the second one will fail before the 1st, to leave maximum possible
> performance to the most important one?
> Do you mean CPU prioritization? There are facilities to allow one VM or
> another to have CPU priority, but what if a high-priority VM wants RAM -
> you want to OOM the other? That doesn't exist, AFAIK.
> Cheers,
> -j
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
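P.S. On James's point about CPU-priority facilities: with the libvirt driver you
can actually drive this from Nova itself via flavor extra specs, so you don't
have to touch each host by hand (the flavor names here are hypothetical):

```shell
# Instances of the "important" flavor get double CPU weight under contention
nova flavor-key m1.important set quota:cpu_shares=2048
nova flavor-key m1.normal set quota:cpu_shares=1024
```

This maps to the same libvirt cpu_shares weighting underneath, applied at
instance boot. Like any shares-based scheme, it prioritizes CPU only - it does
nothing for the RAM/swap problem.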
