[ops][nova] Different quotas for different SLAs ?

Arne Wiebalck arne.wiebalck at cern.ch
Wed Oct 30 07:05:12 UTC 2019


Another use case where per-flavor quotas would be helpful is bare metal provisioning:
since the flavors are tied via their resource class to specific classes of physical machines,
the usual instance/cores/RAM quotas do not help the user see how many instances
of which type can still be created.

Having per-flavor (i.e. per resource class / hardware type) projects is what we do for larger chunks of
identical hardware, but this is less practical for users who have access to only a few machines of
many different types.
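
For illustration, here is roughly how such a flavor gets tied to a resource class via its
extra specs: a minimal sketch using openstacksdk and the compute API, where the flavor ID,
the CUSTOM_BAREMETAL_GOLD class and the cloud name are placeholders rather than anything
from our actual setup.

# Minimal sketch: tie a flavor to a custom resource class the way bare metal
# deployments typically do; all names below are placeholders.
import openstack

conn = openstack.connect(cloud='mycloud')      # credentials from clouds.yaml

flavor_id = 'FLAVOR_UUID'                      # hypothetical flavor
extra_specs = {
    # request exactly one unit of the custom resource class ...
    'resources:CUSTOM_BAREMETAL_GOLD': '1',
    # ... and zero out the standard resources so only that class is counted
    'resources:VCPU': '0',
    'resources:MEMORY_MB': '0',
    'resources:DISK_GB': '0',
}

# POST /flavors/{id}/os-extra_specs on the compute API
conn.session.post(
    '/flavors/%s/os-extra_specs' % flavor_id,
    endpoint_filter={'service_type': 'compute'},
    json={'extra_specs': extra_specs},
)

A per-flavor quota would essentially have to count units of such custom resource classes per project.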

Cheers,
 Arne


> On 30 Oct 2019, at 06:36, Massimo Sgaravatto <massimo.sgaravatto at gmail.com> wrote:
> 
> Thanks a lot for your feedback.
> The possibility to set quotas per flavour would indeed also address my use case.
> 
> Cheers, Massimo
> 
> 
> On Tue, Oct 29, 2019 at 6:14 PM Tim Bell <Tim.Bell at cern.ch> wrote:
> We’ve had similar difficulties with a need to set quotas on flavours. Cinder has a nice feature for this, but with Nova I think we ended up creating two distinct projects and exposing the different flavours to the different projects, each with the related quota. From a user-interface perspective it means users are switching projects more often than is ideal, but it does control the limits.
> 
> Tim
> 
> > On 29 Oct 2019, at 17:17, Sean Mooney <smooney at redhat.com> wrote:
> > 
> > The normal way to achieve this in the past would have been to create host aggregates and then 
> > use the AggregateTypeAffinityFilter to map flavors to specific host aggregates.
> > 
> > So you could, for example, have 2xOvercommit and 4xOvercommit flavors and map them to different host aggregates
> > that have different overcommit ratios set on their compute nodes.
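> > 
> > roughly something like this (a sketch only: the aggregate, flavor, host and cloud names are made up,
> > and it assumes AggregateTypeAffinityFilter matches the requested flavor name against the aggregate's
> > 'instance_type' metadata key; the actual ratios still come from cpu_allocation_ratio in nova.conf on
> > the compute nodes):
> > 
> > import openstack
> > 
> > conn = openstack.connect(cloud='mycloud')
> > compute = {'service_type': 'compute'}
> > 
> > def make_aggregate(name, flavor_names, hosts):
> >     # POST /os-aggregates creates the aggregate
> >     resp = conn.session.post('/os-aggregates', endpoint_filter=compute,
> >                              json={'aggregate': {'name': name}})
> >     agg_id = resp.json()['aggregate']['id']
> > 
> >     # metadata key checked by AggregateTypeAffinityFilter (comma-separated flavor names)
> >     conn.session.post('/os-aggregates/%s/action' % agg_id, endpoint_filter=compute,
> >                       json={'set_metadata': {'metadata': {'instance_type': flavor_names}}})
> > 
> >     # put the compute nodes with the matching allocation ratio into the aggregate
> >     for host in hosts:
> >         conn.session.post('/os-aggregates/%s/action' % agg_id, endpoint_filter=compute,
> >                           json={'add_host': {'host': host}})
> > 
> > make_aggregate('2xOvercommit', 'm1.2x.small,m1.2x.large', ['compute-01'])
> > make_aggregate('4xOvercommit', 'm1.4x.small,m1.4x.large', ['compute-02'])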
> > 
> > On Tue, 2019-10-29 at 10:45 -0500, Eric Fried wrote:
> >> Massimo-
> >> 
> >>> Deciding whether an instance should go to a compute node with or without
> >>> overcommitment is easy; e.g. it could be done with host aggregates plus
> >>> setting metadata on the relevant flavors/images.
> > Yeah, that's basically the same as what I said above.
> >> 
> >> You could also use custom traits.
> > Traits would work, yes; it would be effectively the same, but with the advantage of having placement
> > do most of the filtering, so it should perform better.
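> > 
> > a rough sketch of that trait variant (the trait name, provider UUID and cloud name are placeholders):
> > 
> > import openstack
> > 
> > conn = openstack.connect(cloud='mycloud')
> > placement = {'service_type': 'placement'}
> > headers = {'OpenStack-API-Version': 'placement 1.6'}   # traits API needs >= 1.6
> > 
> > # 1. create the custom trait (PUT is idempotent)
> > conn.session.put('/traits/CUSTOM_NO_OVERCOMMIT',
> >                  endpoint_filter=placement, headers=headers)
> > 
> > # 2. tag the compute node's resource provider with it
> > rp_uuid = 'COMPUTE_NODE_RP_UUID'
> > current = conn.session.get('/resource_providers/%s/traits' % rp_uuid,
> >                            endpoint_filter=placement, headers=headers).json()
> > conn.session.put(
> >     '/resource_providers/%s/traits' % rp_uuid,
> >     endpoint_filter=placement, headers=headers,
> >     json={'traits': current['traits'] + ['CUSTOM_NO_OVERCOMMIT'],
> >           'resource_provider_generation': current['resource_provider_generation']},
> > )
> > 
> > # 3. the flavor then requests it with the extra spec
> > #    trait:CUSTOM_NO_OVERCOMMIT=required, so placement pre-filters the hosts.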
> >> 
> >>> But is it in some way possible to decide that a certain project has a
> >>> quota of x VCPUs without overcommitment, and y VCPUs with overcommitment?
> >> 
> >> I'm not sure whether this helps, but it's easy to detect the allocation
> >> ratio of a compute node's VCPU resource via placement with GET
> >> /resource_providers/$cn_uuid/inventories/VCPU [1].
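> >> 
> >> A minimal sketch of that call (the provider UUID and cloud name are placeholders):
> >> 
> >> import openstack
> >> 
> >> conn = openstack.connect(cloud='mycloud')
> >> cn_uuid = 'COMPUTE_NODE_RP_UUID'
> >> resp = conn.session.get(
> >>     '/resource_providers/%s/inventories/VCPU' % cn_uuid,
> >>     endpoint_filter={'service_type': 'placement'},
> >> )
> >> # the inventory record carries the ratio, e.g. 16.0 on an overcommitted node
> >> print(resp.json()['allocation_ratio'])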
> >> 
> >> But breaking down a VCPU quota into different "classes" of VCPU
> >> sounds... impossible to me.
> > This is something that is not intended to be supported with unified limits, at least not initially, if ever.
> >> 
> >> But since you said
> >> 
> >>> In particular I would like to use some compute nodes without
> >>> overcommitment
> >> 
> >> ...perhaps it would help you to use PCPUs instead of VCPUs for these. We
> >> started reporting PCPUs in Train [2].
> > Yeah, PCPUs are a good choice for the Nova CPU overcommit case;
> > hugepages are the equivalent for memory.
> > Ideally you should avoid disk overcommit, but if you have to do it, use Cinder when you
> > need overcommit and local storage when you do not.
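> > 
> > a sketch of what that looks like on the flavor side (the flavor ID and cloud name are placeholders):
> > 
> > import openstack
> > 
> > conn = openstack.connect(cloud='mycloud')
> > # hw:cpu_policy=dedicated makes nova request PCPU instead of VCPU in placement
> > # (Train and later), so these instances only land on hosts exposing PCPU inventory.
> > conn.session.post(
> >     '/flavors/%s/os-extra_specs' % 'FLAVOR_UUID',
> >     endpoint_filter={'service_type': 'compute'},
> >     json={'extra_specs': {'hw:cpu_policy': 'dedicated'}},
> > )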
> >> 
> >> efried
> >> 
> >> [1]
> >> 
> > https://docs.openstack.org/api-ref/placement/?expanded=show-resource-provider-inventory-detail#show-resource-provider-inventory
> >> [2]
> >> http://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html
> >> 
> > 
> > 
> 
