[openstack-dev] [nova][ironic] Concerns over rigid resource class-only ironic scheduling

Wan-yen Hsu wanyenhsu at gmail.com
Thu Sep 14 23:21:40 UTC 2017


>> Nisha is raising the question about whether or not we're making
>> incorrect assumptions about how people are using nova/ironic and they
>> want to use the non-Exact filters for VCPU/MEMORY_MB/DISK_GB, which as
>> far as I have ever heard is not recommended/supported upstream as it can
>> lead to resource tracking issues in Nova that eventually lead to
>> scheduling failures later because of the scheduler thinking a node is
>> available for more than one instance when it's really not.
>This came up in the Nova PTG room yesterday and I wanted to reply on the
>thread with what I understood about it, for those who weren't in the
>session. In general, it's recommended to use the exact filters (1 flavor
>per Ironic node hardware config) as there's no concept of partially
>claiming a baremetal node.

>But, with the old non-exact filters, you _could_ get away with creating
>fewer flavors than you have hardware configs and get "fuzzy matching" on
>Ironic nodes, to get nodes whose configs are "close enough" but not
>exact. This might be helpful in situations where you have some oddball
>configs you don't want to have separate flavors for.
>I was thinking, if it's possible to assign more than one resource class
>to an Ironic node, maybe you could get similar behavior to the old
>non-exact filters. So if you have an oddball config, you could tag it as
>multiple resource classes that it's "close enough" to for a match. But
>I'm not sure whether it's possible for an Ironic node to be tagged with
>more than one resource class.
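
To make the exact vs. "fuzzy" matching being discussed here concrete,
here is a rough sketch of the two semantics (illustrative only, not
Nova's actual filter code; the node/flavor keys are just placeholders):

    def non_exact_match(node, flavor):
        # RamFilter/DiskFilter/CoreFilter style: a node passes if it has
        # at least the requested resources, so "close enough" hardware
        # can also match a smaller flavor.
        return (node["memory_mb"] >= flavor["ram_mb"]
                and node["local_gb"] >= flavor["disk_gb"]
                and node["cpus"] >= flavor["vcpus"])

    def exact_match(node, flavor):
        # Exact*Filter style: only a node whose sizing matches the
        # flavor exactly is a candidate, hence one flavor per hardware
        # config.
        return (node["memory_mb"] == flavor["ram_mb"]
                and node["local_gb"] == flavor["disk_gb"]
                and node["cpus"] == flavor["vcpus"])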


> Nisha is raising the question about whether or not we're making
> incorrect assumptions about how people are using nova/ironic and they
> want to use the non-Exact filters for VCPU/MEMORY_MB/DISK_GB, which as
> far as I have ever heard is not recommended/supported upstream as it can
> lead to resource tracking issues in Nova that eventually lead to
> scheduling failures later because of the scheduler thinking a node is
> available for more than one instance when it's really not.
The concern I have with a single custom resource class per Ironic node
is that it takes away options that were available before, such as
scheduling based on resource quantity and the non-exact match filters
(RamFilter, DiskFilter, and CoreFilter).  Nova scheduling becomes too
restrictive for Ironic.
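
For reference, my understanding of the Pike flow is that a node's
single resource_class is normalized by Nova into a CUSTOM_* placement
resource class, and a flavor then requests exactly one unit of it.  A
rough Python approximation of that normalization (based on the
documented naming convention, not Nova's actual code):

    import re

    def to_placement_resource_class(ironic_resource_class):
        # Approximation: upper-case the name, replace anything that is
        # not A-Z, 0-9 or underscore with an underscore, prefix CUSTOM_.
        name = re.sub(r"[^A-Z0-9_]", "_", ironic_resource_class.upper())
        return "CUSTOM_" + name

    # A node with resource_class "baremetal-gold" would then be requested
    # by a flavor via the property resources:CUSTOM_BAREMETAL_GOLD=1.
    print(to_placement_resource_class("baremetal-gold"))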

I know some users were using these options before Pike without issues,
so it is still unclear to me whether the non-exact filters really cause
problems for Ironic.  Even if they do, it seems to me there are ways to
address them.  For instance, the Ironic virt driver could report a node
as unavailable while it is in the active state (if it does not already
do so), or report all of its resources as consumed once the node is
claimed (a rough sketch of that idea is below).  Alternatively, the
Nova scheduler could treat all resources on an Ironic node as consumed,
if Nova is willing to make such a change.  Thanks!
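
As a rough sketch of the "report all resources as consumed" idea
(a hypothetical helper with made-up field names in the style of a virt
driver's resource view, not the actual Ironic driver code):

    def node_resource_view(props, node_is_claimed):
        # props is assumed to carry the node's cpus/memory_mb/local_gb.
        used = props if node_is_claimed else {"cpus": 0, "memory_mb": 0,
                                              "local_gb": 0}
        # A baremetal node cannot be partially consumed: once an instance
        # lands on it, everything is reported as used so the scheduler
        # never places a second instance there.
        return {
            "vcpus": props["cpus"], "vcpus_used": used["cpus"],
            "memory_mb": props["memory_mb"],
            "memory_mb_used": used["memory_mb"],
            "local_gb": props["local_gb"],
            "local_gb_used": used["local_gb"],
        }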



On Thu, Sep 7, 2017 at 12:48 PM, Nisha Agarwal <agarwalnisha1980 at gmail.com>
wrote:

> Hi Ironic Operators,
>
> From Pike, ironic nodes get scheduled based on just the resource class
> from nova. Do you guys see any concerns over this "rigid resource class
> only ironic scheduling"?
>
> To be more specific, at your datacentre/production environment what all
> filters are configured in nova.conf (configuration file for nova) for
> scheduling an ironic node? Do you use RamFilter/DiskFilter/CoreFilter in
> the "use_baremetal_filters" for ironic nodes scheduling from nova?
>
> Thanks and Regards
> Nisha