[ops][nova] Different quotas for different SLAs?
Dear all,

I would like to set different overcommitment factors for the compute nodes. In particular I would like to use some compute nodes without overcommitment, and some compute nodes with a cpu_allocation_ratio equal to 2.0.

To decide if an instance should go to a compute node with or without overcommitment is easy; e.g. it could be done with host aggregates + setting metadata to the relevant flavors/images.

But is it in some way possible to decide that a certain project has a quota of x VCPUs without overcommitment, and y VCPUs with overcommitment? Or is the only option to use 2 different projects for the 2 different SLAs (which is something that I would like to avoid)?

Thanks, Massimo
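A minimal sketch of the aggregate + flavor-metadata routing described above, assuming the AggregateInstanceExtraSpecsFilter is enabled in nova's scheduler; host, aggregate, and flavor names are illustrative:

    # Aggregate for hosts that run without overcommitment
    # (those hosts would carry cpu_allocation_ratio = 1.0 in nova.conf).
    openstack aggregate create dedicated-hosts
    openstack aggregate set --property overcommit=none dedicated-hosts
    openstack aggregate add host dedicated-hosts compute-01

    # A flavor that may only land on hosts in that aggregate.
    openstack flavor create --vcpus 4 --ram 8192 --disk 40 m1.dedicated
    openstack flavor set \
        --property aggregate_instance_extra_specs:overcommit=none \
        m1.dedicated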
Massimo-
To decide if an instance should go to a compute node with or without overcommitment is easy; e.g. it could be done with host aggregates + setting metadata to the relevant flavors/images.
You could also use custom traits.
But is it in some way possible to decide that a certain project has a quota of x VCPUs without overcommitment, and y VCPUs with overcommitment?
I'm not sure whether this helps, but it's easy to detect the allocation ratio of a compute node's VCPU resource via placement with GET /resource_providers/$cn_uuid/inventories/VCPU [1]. But breaking down a VCPU quota into different "classes" of VCPU sounds... impossible to me. But since you said
In particular I would like to use some compute nodes without overcommitments
...perhaps it would help you to use PCPUs instead of VCPUs for these. We started reporting PCPUs in Train [2].

efried

[1] https://docs.openstack.org/api-ref/placement/?expanded=show-resource-provide...
[2] http://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-res...
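For illustration, the placement query Eric describes could look like the following sketch; $CN_UUID and the response values are illustrative:

    # $CN_UUID is the compute node's resource provider UUID
    # (e.g. from `openstack resource provider list`).
    TOKEN=$(openstack token issue -f value -c id)
    PLACEMENT_URL=$(openstack endpoint list --service placement \
        --interface public -f value -c URL)
    curl -s -H "X-Auth-Token: $TOKEN" \
        "$PLACEMENT_URL/resource_providers/$CN_UUID/inventories/VCPU"
    # -> {"allocation_ratio": 2.0, "total": 32, "reserved": 0, ...}

With the osc-placement plugin installed, `openstack resource provider inventory show $CN_UUID VCPU` returns the same information.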
On Tue, 2019-10-29 at 10:45 -0500, Eric Fried wrote:

Massimo-
To decide if an instance should go to a compute node with or without overcommitment is easy; e.g. it could be done with host aggregates + setting metadata to the relevant flavors/images.

Yeah, that is basically the same as what I said above. The normal way to achieve this in the past would have been to create host aggregates and then use the AggregateTypeAffinityFilter to map flavors to specific host aggregates. So you can have a 2xOvercommit and a 4xOvercommit aggregate, map different flavors to each, and set different overcommit ratios on the compute nodes in each aggregate.
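A sketch of that setup, assuming AggregateTypeAffinityFilter matches the aggregate's instance_type metadata key against the flavor; names and values are illustrative:

    # nova.conf on the scheduler node:
    #   [filter_scheduler]
    #   enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateTypeAffinityFilter
    #
    # nova.conf on the compute nodes grouped below:
    #   [DEFAULT]
    #   cpu_allocation_ratio = 2.0

    # The filter only lets flavors named in instance_type land here.
    openstack aggregate create 2xOvercommit
    openstack aggregate set --property instance_type=m1.2x 2xOvercommit
    openstack aggregate add host 2xOvercommit compute-02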
You could also use custom traits.
Traits would work, yes. It would be effectively the same, but with the advantage of having placement do most of the filtering, so it should perform better.
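A sketch of the trait variant, assuming the osc-placement plugin is installed; the trait and flavor names are illustrative:

    # Define a custom trait and tag the no-overcommit compute nodes.
    # (Note: `trait set` replaces the provider's existing custom traits.)
    openstack --os-placement-api-version 1.6 trait create CUSTOM_NO_OVERCOMMIT
    openstack --os-placement-api-version 1.6 \
        resource provider trait set --trait CUSTOM_NO_OVERCOMMIT $CN_UUID

    # Flavors that require those hosts; placement filters out all other
    # candidates before the nova scheduler filters even run.
    openstack flavor set --property trait:CUSTOM_NO_OVERCOMMIT=required m1.dedicated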
But is it in some way possible to decide that a certain project has a quota of x VCPUs without overcommitment, and y VCPUs with overcommitment?
I'm not sure whether this helps, but it's easy to detect the allocation ratio of a compute node's VCPU resource via placement with GET /resource_providers/$cn_uuid/inventories/VCPU [1].
But breaking down a VCPU quota into different "classes" of VCPU sounds... impossible to me.
This is something that is not intended to be supported with unified limits, at least not initially, and possibly not ever.
But since you said
In particular I would like to use some compute nodes without overcommitments
...perhaps it would help you to use PCPUs instead of VCPUs for these. We started reporting PCPUs in Train [2].
Yeah, PCPUs are a good choice for the nova overcommit case for CPUs; hugepages are the equivalent for memory. Ideally you should avoid disk overcommit, but if you have to do it, use cinder when you need overcommit and local storage when you do not.
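A sketch of the PCPU route on Train or later; the CPU range and flavor name are illustrative:

    # nova.conf on the dedicated compute nodes (Train+):
    #   [compute]
    #   cpu_dedicated_set = 2-31

    # A flavor with dedicated CPUs consumes PCPU inventory in placement
    # instead of VCPU, so it can never land on an overcommitted core.
    openstack flavor create --vcpus 4 --ram 8192 --disk 40 m1.pinned
    openstack flavor set --property hw:cpu_policy=dedicated m1.pinned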
efried
[1] https://docs.openstack.org/api-ref/placement/?expanded=show-resource-provide...
[2] http://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-res...
We’ve had similar difficulties with a need to quota flavours. Cinder has a nice feature for this, but with nova I think we ended up creating two distinct projects and exposing the different flavours to the different projects, each with the related quota. From a user-interface perspective it means users are switching projects more often than is ideal, but it does control the limits.

Tim
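Presumably the cinder feature meant here is its per-volume-type quotas; a sketch with the legacy cinder client, where the volume type and numbers are illustrative:

    # Cap the "fast" volume type separately from the project's
    # overall volume quota.
    cinder quota-update --volumes 10 --gigabytes 500 --volume-type fast $PROJECT_ID
    cinder quota-show $PROJECT_ID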
On 29 Oct 2019, at 17:17, Sean Mooney <smooney@redhat.com> wrote:
the normal way to achieve this in the past would have been to create host aggregates and then use the AggregateTypeAffinityFilter to map flavors to specific host aggregates. So you can have a 2xOvercommit and a 4xOvercommit aggregate, map different flavors to each, and set different overcommit ratios on the compute nodes in each aggregate.
Thanks a lot for your feedback. The possibility to quota flavours would indeed address my use case as well.

Cheers, Massimo

On Tue, Oct 29, 2019 at 6:14 PM Tim Bell <Tim.Bell@cern.ch> wrote:
We’ve had similar difficulties with a need to quota flavours. Cinder has a nice feature for this, but with nova I think we ended up creating two distinct projects and exposing the different flavours to the different projects, each with the related quota. From a user-interface perspective it means users are switching projects more often than is ideal, but it does control the limits.
Tim
Another use case where per-flavour quotas would be helpful is bare metal provisioning: since the flavors are tied via the resource class to specific classes of physical machines, the usual instance/cores/RAM quotas do not help the user to see how many instances of which type can still be created.

Having per-flavor (resource class, h/w type) projects is what we do for larger chunks of identical hardware, but this is less practical for users with access to fewer machines of many different types.

Cheers, Arne
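For context, the coupling Arne describes comes from the usual ironic flavor pattern, where the flavor zeroes out the standard resources and requests one unit of a custom resource class matching the node's hardware class; the names here are illustrative:

    openstack flavor create --vcpus 8 --ram 65536 --disk 500 bm.gold
    openstack flavor set \
        --property resources:CUSTOM_BAREMETAL_GOLD=1 \
        --property resources:VCPU=0 \
        --property resources:MEMORY_MB=0 \
        --property resources:DISK_GB=0 \
        bm.gold

An instance/cores/RAM quota therefore says nothing about how many such nodes a project can still claim.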
On 30 Oct 2019, at 06:36, Massimo Sgaravatto <massimo.sgaravatto@gmail.com> wrote:
Thanks a lot for your feedback. The possibility to quota flavours would indeed address my use case as well.
Cheers, Massimo
On Wed, 2019-10-30 at 08:05 +0100, Arne Wiebalck wrote:
Another use case where per-flavour quotas would be helpful is bare metal provisioning: since the flavors are tied via the resource class to specific classes of physical machines, the usual instance/cores/RAM quotas do not help the user to see how many instances of which type can still be created.

Having per-flavor (resource class, h/w type) projects is what we do for larger chunks of identical hardware, but this is less practical for users with access to fewer machines of many different types.
Flavor quotas are not the direction we are currently pursuing with quotas and unified limits. It has been discussed in the past, but we are actually moving in the direction of allowing quotas based on placement resource classes: https://review.opendev.org/#/c/602201/

I personally think that in the long run the unified-limits approach based on placement resource classes is a better approach than flavor quotas, so I would prefer to spend our energy completing that work rather than designing a flavor-based quota mechanism that nova would have to maintain. That said, I would encourage you to review that spec and consider whether it addresses your use cases.

For the ironic case I think it does, quite nicely. For the SLA case I don't think it does, but there may be a way to extend it after the initial version is complete to allow that, e.g. by allowing the quota to be placed on a resource class + trait instead of just a resource class. That would complicate things, however, so I think it would be best left out of scope of v1 of unified limits.
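To make the direction concrete, a speculative sketch of what resource-class-based limits might look like via keystone's unified limits API; the class:* resource names follow the draft naming in the spec and were not implemented at the time of this thread:

    # Deployment-wide default: 100 VCPUs per project.
    openstack registered limit create --service nova \
        --default-limit 100 class:VCPU

    # Per-project override: at most 20 GOLD bare metal nodes,
    # which would cover the ironic use case directly.
    openstack limit create --service nova --project $PROJECT_ID \
        --resource-limit 20 class:CUSTOM_BAREMETAL_GOLD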
participants (5)

- Arne Wiebalck
- Eric Fried
- Massimo Sgaravatto
- Sean Mooney
- Tim Bell