[openstack-dev] [nova][ironic] Concerns over rigid resource class-only ironic scheduling

Nisha Agarwal agarwalnisha1980 at gmail.com
Thu Sep 7 20:06:15 UTC 2017


>> Nisha is raising the question about whether or not we're making incorrect
>> assumptions about how people are using nova/ironic and they want to use
>> the non-Exact filters for VCPU/MEMORY_MB/DISK_GB, which as far as I have
>> ever heard is not recommended/supported upstream as it can lead to
>> resource tracking issues in Nova that eventually lead to scheduling
>> failures later because of the scheduler thinking a node is available for
>> more than one instance when it's really not.

Just to clarify, I haven't heard about this issue lately when we use the
non-Exact filters (i.e. on releases before Pike).
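
For concreteness, by non-Exact filters I mean a scheduler configuration
roughly along these lines (a sketch only; option names are the
[filter_scheduler] ones and the defaults may differ per release, so please
check against your own nova.conf):

    # nova.conf on the scheduler host -- illustrative only
    [filter_scheduler]
    # apply the baremetal filter list when scheduling ironic nodes
    use_baremetal_filters = True
    # non-Exact variant: a flavor smaller than the node can still match
    baremetal_enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,RamFilter,DiskFilter,CoreFilter
    # the upstream-recommended variant uses the Exact* filters instead:
    # baremetal_enabled_filters = ...,ExactRamFilter,ExactDiskFilter,ExactCoreFilter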

Regards
Nisha


On Fri, Sep 8, 2017 at 1:27 AM, Matt Riedemann <mriedemos at gmail.com> wrote:

> On 9/7/2017 2:48 PM, Nisha Agarwal wrote:
>
>> Hi Ironic Operators,
>>
>>  From Pike, ironic nodes get scheduled based on just the resource class
>> from nova. Do you guys see any concerns over this "rigid resource class
>> only ironic scheduling"?
>>
>> To be more specific, at your datacentre/production environment what all
>> filters are configured in nova.conf (configuration file for nova) for
>> scheduling an ironic node? Do you use RamFilter/DiskFilter/CoreFilter in
>> the "use_baremetal_filters" for ironic nodes scheduling from nova?
>>
>> Thanks and Regards
>> Nisha
>>
>>
>>
>>
>>
> Some more background information is in the ironic spec here:
>
> https://review.openstack.org/#/c/500429/
>
> Also, be aware of these release notes for Pike related to baremetal
> scheduling:
>
> http://docs-draft.openstack.org/77/501477/1/check/gate-nova-releasenotes/1dc7513//releasenotes/build/html/unreleased.html#id2
>
> In Pike, nova is using a combination of VCPU/MEMORY_MB/DISK_GB resource
> class amounts from the flavor during scheduling as it always has, but it
> will also check for the custom resource_class which comes from the ironic
> node. The custom resource class is optional in Pike but will be a hard
> requirement in Queens, or at least that was the plan. The idea being that
> long-term we'd stop consulting VCPU/MEMORY_MB/DISK_GB from the flavor
> during scheduling and just use the atomic node.resource_class since we want
> to allocate a nova instance to an entire ironic node, and this is also why
> the Exact* filters were used too.
>
> There are more details on using custom resource classes for scheduling
> here:
>
> https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/custom-resource-classes-in-flavors.html
>
> Nisha is raising the question about whether or not we're making incorrect
> assumptions about how people are using nova/ironic and they want to use the
> non-Exact filters for VCPU/MEMORY_MB/DISK_GB, which as far as I have ever
> heard is not recommended/supported upstream as it can lead to resource
> tracking issues in Nova that eventually lead to scheduling failures later
> because of the scheduler thinking a node is available for more than one
> instance when it's really not.
>
> --
>
> Thanks,
>
> Matt
>
>

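For anyone skimming the thread, the resource-class-only flow Matt describes
above looks roughly like the following (node and flavor names are made-up
placeholders; the spec linked above is the authoritative reference):

    # tag the ironic node with a resource class; nova exposes it as
    # CUSTOM_BAREMETAL_GOLD (uppercased, CUSTOM_ prefix)
    openstack baremetal node set <node-uuid> --resource-class baremetal-gold

    # have the flavor request exactly one unit of that class...
    openstack flavor set bm-gold --property resources:CUSTOM_BAREMETAL_GOLD=1

    # ...and, once resource-class-only scheduling is the sole mechanism,
    # stop requesting the standard classes from the flavor:
    openstack flavor set bm-gold --property resources:VCPU=0
    openstack flavor set bm-gold --property resources:MEMORY_MB=0
    openstack flavor set bm-gold --property resources:DISK_GB=0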


-- 
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If you do that, you are in control
of your life. If you don't, life controls you.