[openstack-dev] [ironic][nova] Indivisible Resource Providers

Sam Betts (sambetts) sambetts at cisco.com
Wed Jul 27 14:48:24 UTC 2016


While discussing with Jim on IRC the proposal to add a resource_class to Ironic nodes for interacting with the resource provider system in Nova, I voiced my concern about having a resource_class per node. My thought was that we could achieve the behaviour we require by having every Ironic node resource provider expose a "baremetal" resource class of which it can own a maximum of 1. Flavors that are required to land on a baremetal node would then define that they require at least 1 baremetal resource, along with any other resources they require. For example:

Resource Provider 1 Resources:
    Baremetal: 1
    RAM: 256
    CPUs: 4

Resource Provider 2 Resources:
    Baremetal: 1
    RAM: 512
    CPUs: 4

Resource Provider 3 Resources:
    Baremetal: 0
    RAM: 0
    CPUs: 0

(Resource Provider 3 has been used, so it has zero resources left)
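To make the consumption behaviour explicit, here is a tiny Python sketch (purely illustrative, not real placement code; the dict layout and function name are mine): once an instance lands, the whole node is handed over and its provider reports nothing left, which is why Resource Provider 3 shows all zeros and why no flavor asking for Baremetal: 1 can ever land there a second time.

def consume_node(inventory):
    # When an instance lands, the whole node is handed over, so the
    # provider reports zero of everything (like Resource Provider 3).
    return {resource: 0 for resource in inventory}

rp3 = {"Baremetal": 1, "RAM": 512, "CPUs": 4}
rp3 = consume_node(rp3)
print(rp3)                    # {'Baremetal': 0, 'RAM': 0, 'CPUs': 0}
print(rp3["Baremetal"] >= 1)  # False - no second instance can claim the node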

Given this thought experiment, it seems like this would work great, with one exception. If you define two flavors:

Flavor 1 Required Resources:
    Baremetal: 1
    RAM: 256

Flavor 2 Required Resources:
    Baremetal: 1
    RAM: 512

Flavor 2 will only schedule onto Resource Provider 2, because that is the only resource provider that can supply the amount of resources required. However, Flavor 1 could potentially end up landing on Resource Provider 2, even though that provider offers more RAM than is actually required. The Baremetal resource class would prevent a second instance from ever being scheduled onto that resource provider, so scheduling more instances doesn't result in two instances on the same node, but it is an inefficient use of resources.
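To make the concern concrete, here is a rough Python sketch (illustrative only, not the actual Nova/placement code; the dict layout and function names are mine) of how a plain "provider has at least the requested resources" check treats the providers and flavors above:

providers = {
    "rp1": {"Baremetal": 1, "RAM": 256, "CPUs": 4},
    "rp2": {"Baremetal": 1, "RAM": 512, "CPUs": 4},
    "rp3": {"Baremetal": 0, "RAM": 0, "CPUs": 0},   # already consumed
}

flavors = {
    "flavor1": {"Baremetal": 1, "RAM": 256},
    "flavor2": {"Baremetal": 1, "RAM": 512},
}

def can_host(inventory, request):
    # Plain ">= what was asked for" check.
    return all(inventory.get(resource, 0) >= amount
               for resource, amount in request.items())

for name, request in flavors.items():
    matches = [rp for rp, inv in providers.items() if can_host(inv, request)]
    print(name, "->", matches)

# flavor1 -> ['rp1', 'rp2']   (can land on, and waste, the bigger node)
# flavor2 -> ['rp2']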

To combat this inefficient use of resources, I wondered if it would be possible to add a flag to a resource provider marking it as an indivisible resource provider, which would prevent flavors that don't use up all of the resources the provider offers from landing on that provider.
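As a rough illustration of what I have in mind (again sketch Python with made-up names, not a real placement API, and assuming the flavor spells out every resource the node has, including CPUs), the check for a provider flagged as indivisible would only pass when the flavor's request consumes the provider's inventory exactly:

def can_host(provider, request):
    inventory = provider["inventory"]
    # The provider still needs at least what was asked for.
    if any(inventory.get(resource, 0) < amount
           for resource, amount in request.items()):
        return False
    if provider.get("indivisible"):
        # An indivisible provider only accepts requests that use up its
        # whole inventory, so nothing is left stranded on the node.
        return all(inventory[resource] == request.get(resource, 0)
                   for resource in inventory)
    return True

rp2 = {"inventory": {"Baremetal": 1, "RAM": 512, "CPUs": 4}, "indivisible": True}
flavor1 = {"Baremetal": 1, "RAM": 256, "CPUs": 4}
flavor2 = {"Baremetal": 1, "RAM": 512, "CPUs": 4}

print(can_host(rp2, flavor1))   # False - would strand 256 of RAM
print(can_host(rp2, flavor2))   # True  - consumes the node exactly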

Sam

