[Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

Sylvain Bauza sbauza at redhat.com
Fri Sep 28 09:11:19 UTC 2018


On Fri, Sep 28, 2018 at 12:50 AM melanie witt <melwittt at gmail.com> wrote:

> On Thu, 27 Sep 2018 17:23:26 -0500, Matt Riedemann wrote:
> > On 9/27/2018 3:02 PM, Jay Pipes wrote:
> >> A great example of this would be the proposed "deploy template" from
> >> [2]. This is nothing more than abusing the placement traits API in order
> >> to allow passthrough of instance configuration data from the nova flavor
> >> extra spec directly into the nodes.instance_info field in the Ironic
> >> database. It's a hack that is abusing the entire concept of placement
> >> traits, IMHO.
> >>
> >> We should have a way *in Nova* of allowing instance configuration
> >> key/value information to be passed through to the virt driver's spawn()
> >> method, much the same way we provide for user_data that gets exposed
> >> after boot to the guest instance via configdrive or the metadata service
> >> API. What this deploy template thing is is just a hack to get around the
> >> fact that nova doesn't have a basic way of passing through some collated
> >> instance configuration key/value information, which is a darn shame and
> >> I'm really kind of annoyed with myself for not noticing this sooner. :(
> >
> > We talked about this in Dublin though, right? We said a good thing to do
> > would be to have some kind of template/profile/config/whatever stored
> > off in glare where schema could be registered on that thing, and then
> > you pass a handle (ID reference) to that to nova when creating the
> > (baremetal) server, nova pulls it down from glare and hands it off to
> > the virt driver. It's just that no one is doing that work.
>
> If I understood correctly, that discussion was around adding a way to
> pass a desired hardware configuration to nova when booting an ironic
> instance. And that it's something that isn't yet possible to do using
> the existing ComputeCapabilitiesFilter. Someone please correct me if I'm
> wrong there.
>
> That said, I still don't understand why we are talking about deprecating
> the ComputeCapabilitiesFilter if there's no supported way to replace it
> yet. If boolean traits are not enough to replace it, then we need to
> hold off on deprecating it, right? Would the
> template/profile/config/whatever in glare approach replace what the
> ComputeCapabilitiesFilter is doing or no? Sorry, I'm just not clearly
> understanding this yet.
>
>
I just feel some new traits have to be defined, like Jay said, and some
work has to be done on the Ironic side to make sure nodes expose them as
traits rather than through the old capabilities mechanism.
That still leaves a question, though: does Ironic support custom
capabilities? If so, that leads to Jay's point about key/value information
that isn't intended for traits. If we all agree that traits shouldn't be
used for key/value pairs, could we somehow imagine Ironic changing its
customization mechanism to be boolean only?
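To make the distinction concrete, here is a minimal sketch of the two flavor extra spec styles being contrasted; the capability name "raid_level" and the trait CUSTOM_RAID1 are illustrative examples, not real Ironic capabilities:

```python
# Illustrative flavor extra specs; "raid_level" and CUSTOM_RAID1 are
# made-up names for this example.

# Key/value capability: carries configuration data, matched by the
# ComputeCapabilitiesFilter against what the node advertises.
capability_style = {"capabilities:raid_level": "1"}

# Boolean trait: placement can only answer "does the provider have it?"
trait_style = {"trait:CUSTOM_RAID1": "required"}
```

The boolean form carries no value at all, which is exactly why it cannot replace key/value passthrough on its own.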

Also, I'm not sure whether operators make use of Ironic capabilities for
fancy operational queries, like the ones we support in
https://github.com/openstack/nova/blob/3716752/nova/scheduler/filters/extra_specs_ops.py#L24-L35
and whether Ironic correctly documents how to express such things as
traits (e.g. CUSTOM_I_HAVE_MORE_THAN_2_GPUS).
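For context, a rough sketch (not the actual Nova code) of the kind of operator matching extra_specs_ops.py provides, which a purely boolean trait cannot express without baking the comparison into the trait name:

```python
def match(value, req):
    """Evaluate a requirement string like '>= 2' against a capability value.

    A simplified imitation of the operator matching in nova's
    extra_specs_ops module; the real module supports more operators
    ('<in>', '<or>', 's==', ...).
    """
    if req.startswith(">= "):
        return float(value) >= float(req[3:])
    if req.startswith("<= "):
        return float(value) <= float(req[3:])
    # Bare string: exact match
    return value == req

# A node reporting gpus=4 satisfies a flavor asking for '>= 2', whereas a
# trait like CUSTOM_I_HAVE_MORE_THAN_2_GPUS can only ever be present or
# absent on the node.
```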

All of the above makes me a bit worried about a possible
ComputeCapabilitiesFilter deprecation if we aren't yet able to provide a
clear upgrade path for our users.

-Sylvain

> -melanie

