[openstack-dev] [ironic] ironic and traits

Dmitry Tantsur dtantsur at redhat.com
Tue Oct 17 16:11:15 UTC 2017


Hi!

Answering both Eric and John inline.

On 10/16/2017 07:26 PM, John Garbutt wrote:
> On 16 October 2017 at 17:55, Eric Fried <openstack at fried.cc> wrote:
>
>     * Adding references to the specs: ironic side [1]; nova side [2] (which
>     just merged).
>
>     * Since Jay is on vacation, I'll tentatively note his vote by proxy [3]
>     that ironic should be the source of truth - i.e. option (a).  I think
>     the upshot is that it's easier for Ironic to track and resolve conflicts
>     than for the virt driver to do so.
>
>
> As I see it, all of these options have Ironic as the source of truth for Nova.
>
> Driver here is about the Ironic drivers, not Nova virt driver.

This is correct, sorry for the confusion.

>
>     > The downside is obvious - with a lot of deploy templates
>     > available it can be a lot of manual work.
>
>     * How does option (b) help with this?
>
>
> The operator defines the configuration templates. The driver could then report
> traits for any configuration templates that it knows a given node can support.

Yeah, this avoids having to run an explicit

  openstack baremetal node trait set <UUID> CUSTOM_RAID_5

on many nodes.
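
To illustrate what I mean by (b) - this is only a sketch, every class and
method name below is made up, nothing like it exists in ironic today:

  # Hypothetical option (b): the RAID interface advertises the traits it
  # knows how to configure, so operators don't run "trait set" per node.
  class FakeRAIDInterface(object):
      supported_levels = (0, 1, 5)

      def get_configurable_traits(self, node):
          # one custom trait per RAID level this driver could build
          return {'CUSTOM_RAID_%d' % level for level in self.supported_levels}

  def effective_traits(node, operator_traits, raid_interface):
      # b.1 would return the union of operator- and driver-provided traits
      return set(operator_traits) | raid_interface.get_configurable_traits(node)

  print(effective_traits('node-1', {'CUSTOM_GPU'}, FakeRAIDInterface()))
  # e.g. {'CUSTOM_GPU', 'CUSTOM_RAID_0', 'CUSTOM_RAID_1', 'CUSTOM_RAID_5'}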

>
> But I suspect a node would have to boot up an image to check if a given set of
> RAID or BIOS parameters are valid. Is that correct? I am sure there are ways to
> cache things that could help somewhat.

BIOS - no. RAID - well, some drivers do RAID in-band, but I think we can leave
only driver-side validation here to simplify things.

>
>     * I suggested a way to maintain the "source" of a trait (operator,
>     inspector, etc.) [4] which would help with resolving conflicts.
>     However, I agree it would be better to avoid this extra complexity if
>     possible.
>
>
> That is basically (b.2).
>
>
>     * This is slightly off topic, but it's related and will eventually need
>     to be considered: How are you going to know whether a
>     UEFI-capable-but-not-enabled node should have its UEFI mode turned on?
>     Are you going to parse the traits specified in the flavor?  (This might
>     work for Ironic, but will be tough in the general case.)

We have a nova spec approved for passing the matched traits to ironic. Ironic
will then use them to figure out what to configure. Currently it works the same
way with capabilities.
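
Roughly, the data flow could look like this (only a sketch: the trait extra
spec syntax is from the nova specs, while the instance_info 'traits' field and
the template mapping are my assumptions, not anything implemented):

  # The flavor requests a trait; nova passes the matched required traits
  # down to ironic (similar to how capabilities are handled today), and
  # ironic maps each one to a configuration action before deployment.
  flavor_extra_specs = {'trait:CUSTOM_RAID_5': 'required'}

  required_traits = [key.split(':', 1)[1]
                     for key, value in flavor_extra_specs.items()
                     if key.startswith('trait:') and value == 'required']

  instance_info = {'traits': required_traits}  # assumed field, see above

  deploy_templates = {'CUSTOM_RAID_5': {'raid_level': 5}}  # made-up mapping

  for trait in instance_info['traits']:
      if trait in deploy_templates:
          print('would apply %s for %s' % (deploy_templates[trait], trait))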

>
>     [1] https://review.openstack.org/504531
>
> Also the other ironic spec: https://review.openstack.org/#/c/504952
>
>     [2] https://review.openstack.org/507052
>     [3]
>     https://review.openstack.org/#/c/507052/4/specs/queens/approved/ironic-traits.rst@88
>     [4]
>     https://review.openstack.org/#/c/504531/4/specs/approved/node-traits.rst@196
>
>     On 10/16/2017 11:24 AM, Dmitry Tantsur wrote:
>      > Hi all,
>      >
>      > I promised John to dump my thoughts on traits to the ML, so here we go :)
>      >
>      > I see two roles of traits (or kinds of traits) for bare metal:
>      > 1. traits that say what the node can do already (e.g. "the node is
>      > doing UEFI boot")
>      > 2. traits that say what the node can be *configured* to do (e.g. "the
>      > node can boot in UEFI mode")
>      >
>      > This seems confusing, but it's actually very useful. Say, I have a flavor
>      > that requests UEFI boot via a trait. It will match both the nodes that are
>      > already in UEFI mode and the nodes that can be put in UEFI mode.
>      >
>      > This idea goes further with deploy templates (a new concept we've been
>      > thinking about). A flavor can request something like CUSTOM_RAID_5, and it
>      > will match the nodes that already have RAID 5, or, more interestingly, the
>      > nodes on which we can build RAID 5 before deployment. The UEFI example
>      > above can be treated in a similar way.
>      >
>      > This ends up with two sources of knowledge about traits in ironic:
>      > 1. Operators setting something they know about hardware ("this node is
>      > in UEFI mode"),
>      > 2. Ironic drivers reporting something they
>      >   2.1. know about hardware ("this node is in UEFI mode" - again)
>      >   2.2. can do about hardware ("I can put this node in UEFI mode")
>      >
>      > For case #1 we are planning on a new CRUD API to set/unset traits for a node.
>      > Case #2 is more interesting. We have two options, I think:
>      >
>      > a) Operators still set traits on nodes, drivers are simply validating
>      > them. E.g. an operator sets CUSTOM_RAID_5, and the node's RAID interface
>      > checks if it is possible to do. The downside is obvious - with a lot of
>      > deploy templates available it can be a lot of manual work.
>      >
>      > b) Drivers report the traits, and they get somehow added to the traits
>      > provided by an operator. Technically, there are sub-cases again:
>      >   b.1) The new traits API returns a union of operator-provided and
>      > driver-provided traits
>      >   b.2) The new traits API returns only operator-provided traits;
>      > driver-provided traits are returned e.g. via a new field
>      > (node.driver_traits). Then nova will have to merge the lists itself.
>
>
> As an alternative, we could enable a configuration template by Resource Class.
> That way it's explicit, but you don't have to set it on every node?

This assumes that every resource class corresponds to only one template. We
already have people upset by having only one resource class per node :)
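
To make the limitation concrete (an assumed mapping format, nothing real): with
one template per resource class, a node that should offer both RAID 1 and RAID 5
has nowhere to go, because a node can only have a single resource class.

  # One deploy template per resource class (hypothetical mapping).
  templates_by_resource_class = {
      'baremetal-db': 'raid-5',
      'baremetal-compute': 'raid-1',
  }

  def reported_trait(node_resource_class):
      # the driver would report exactly one extra trait per node
      template = templates_by_resource_class.get(node_resource_class)
      return 'CUSTOM_' + template.upper().replace('-', '_') if template else None

  print(reported_trait('baremetal-db'))  # CUSTOM_RAID_5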

>
> I think we would then need a version of (b.1) to report that extra trait up to
> Nova, based on the given Resource Class.
>
>      > My personal favorite is the last option: I'd like a clear distinction between
>      > different "sources" of traits, but I'd also like to reduce manual work for
>      > operators.
>
>
> I am all for making operators' lives easier, but personally I lean towards
> explicitly enabling things, hence my current preference for (a).

This is certainly easier to implement.

>
> I would be tempted to add (b.2) as a second step, after we get (a) working and
> tested.

I'm not sure how it will work with both approaches at once, to be honest...

>
>      > A valid counter-argument is: what if an operator wants to override a
>      > driver-provided trait? E.g. a node can do RAID 5, but I don't want this
>      > particular node to do it for any reason. I'm not sure if it's a valid
>      > case, and what to do about it.
>
>
> I could claim some horrid performance bug in a RAID controller might mean you
> want that, but I am just making that up.

Well, it sounds like a very valid case actually.

>
> I was imagining that for a given set of nodes, you want to QA only a certain set of
> RAID configs, and those are the ones you offer and that is a different set to
> some other set of nodes, even if they could support the configs supplied for the
> other nodes. Right now those restrictions will just map to Nova flavors you
> create, but longer term that might cause problems. Maybe it is to have six disks
> configured the same way as five disks, just with one disk unused - maybe you
> don't want that?

I think you've convinced me to go with the explicit approach.

>
> I am curious, can we validate if the params are valid for RAID and BIOS config
> without trying it out on a given host? How would we do that for all nodes once a
> new configuration template is added?

We can have only limited validation, and that's probably fine. I'm mostly
worried about operators adding CUSTOM_RAID_5 to e.g. a node whose driver does
not support RAID. Or RAID 5.
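
The kind of limited validation I have in mind, as a rough sketch (the interface
and method names are invented for illustration):

  # When an operator sets a RAID trait, ask the node's RAID interface
  # whether it can ever deliver it, so that CUSTOM_RAID_5 on a driver
  # without RAID 5 support is rejected up front.
  class BasicRAID(object):
      def supports_trait(self, trait):
          return trait in ('CUSTOM_RAID_0', 'CUSTOM_RAID_1')  # no RAID 5

  def validate_trait(raid_interface, trait):
      if trait.startswith('CUSTOM_RAID_') and not raid_interface.supports_trait(trait):
          raise ValueError('%s is not supported by this node' % trait)

  validate_trait(BasicRAID(), 'CUSTOM_RAID_1')  # passes
  try:
      validate_trait(BasicRAID(), 'CUSTOM_RAID_5')
  except ValueError as exc:
      print(exc)  # CUSTOM_RAID_5 is not supported by this node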

>
> Thanks,
> John
>