[openstack-dev] [placement] The "intended purpose" of traits

Mark Goddard mark at stackhpc.com
Sat Sep 29 09:51:20 UTC 2018


To add some context around what I suspect is the reason for the most recent
incarnation of this debate, many Ironic users have a requirement to be able
to influence the configuration of a server at deploy time, beyond the
existing supported mechanisms. The classic example is hardware RAID - the
ability to support workloads with different requirements is important,
since if you're paying for bare metal cloud resources you'll want to make
sure you're getting the most out of them. Another example that comes up is
hyperthreading - often this is disabled for HPC workloads but enabled for
HTC.

We've had a plan to support deploy-time configuration for a few cycles
now. It began with adding support for traits [1] in Queens, and
continued with the deploy steps framework [2] in Rocky. At the Stein PTG we
had a lot of support [3] for finishing the job by implementing the deploy
templates [4] spec that is currently in review.

At a very high level, deploy templates allow us to map a required trait
specified on a flavor to a set of deploy steps in ironic. These deploy
steps are based on the existing cleaning steps framework that has existed
in ironic for many releases, and should feel familiar to users of ironic.
This scheme is conceptually quite simple, which I like.
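To make this concrete, here is a rough sketch of how the pieces could
fit together, based on the spec as currently proposed. The trait name,
BIOS setting name and priority below are purely illustrative and may
not match what eventually lands:

    # Nova flavor extra spec requesting the trait (existing syntax):
    extra_specs = {"trait:CUSTOM_HYPERTHREADING_ON": "required"}

    # Hypothetical ironic deploy template matching that trait, mapping
    # it to a deploy step that enables hyperthreading via the BIOS
    # interface:
    deploy_template = {
        "name": "CUSTOM_HYPERTHREADING_ON",
        "steps": [
            {
                "interface": "bios",
                "step": "apply_configuration",
                "args": {"settings": [{"name": "LogicalProc",
                                       "value": "Enabled"}]},
                "priority": 150,
            },
        ],
    }

When a node is deployed using this flavor, ironic would look up deploy
templates whose names match the traits requested by the flavor, and run
the associated steps during deployment.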

After a negative review on the spec from Jay on Thursday, I added a design
to the alternatives section of the spec that I thought might align better
with his view of the world. Essentially, it decouples scheduling from
configuration: flavors may specify required traits as they can today, but
also a more explicit list of names or UUIDs of ironic deploy templates
(sketched below). I'm still not sure how I feel about this.
Architecturally it's cleaner and more flexible, but from a usability
perspective it feels a little clunky.
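For illustration, the decoupled alternative might look something like
this on the flavor. The "ironic:deploy_templates" property name is
hypothetical, just to show the shape of the idea:

    # Traits used only for scheduling; configuration selected
    # explicitly via a separate (hypothetical) extra spec listing
    # deploy template names or UUIDs:
    extra_specs = {
        "trait:CUSTOM_HYPERTHREADING_ON": "required",
        "ironic:deploy_templates": "hyperthreading-on",
    }

Here the scheduler would continue to place the instance using the
required trait, while ironic would apply the listed templates
regardless of which traits were requested.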

There was a discussion [5] in ironic's IRC yesterday that I missed, in
which Jay offered to write up an alternative spec that uses glance metadata
[6]. There were some concerns about adding a hard requirement on glance for
the standalone use case, but we may be able to provide an alternative
solution analogous to manual cleaning that fills that gap.

I'm certainly interested to see what Jay comes up with. If there is a
better way of doing this, I'm all ears. That said, this is something the
ironic community has been wanting for a long time now, and I can't see us
waiting for a multi-cycle feature to land in nova, given that deploy
templates currently require no changes in nova.

[1]
http://specs.openstack.org/openstack/ironic-specs/specs/10.1/node-traits.html
[2]
https://specs.openstack.org/openstack/ironic-specs/specs/11.1/deployment-steps-framework.html
[3] https://etherpad.openstack.org/p/ironic-stein-ptg-goals
[4] https://review.openstack.org/#/c/504952/
[5]
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2018-09-28.log.html#t2018-09-28T14:22:57
[6] https://docs.openstack.org/glance/pike/user/metadefs-concepts.html

On Sat, 29 Sep 2018 at 03:15, Alex Xu <soulxu at gmail.com> wrote:

> Sorry to append another email for something I missed mentioning.
>
> Alex Xu <soulxu at gmail.com> wrote on Sat, 29 Sep 2018 at 10:01 AM:
>
>>
>>
>> Jay Pipes <jaypipes at gmail.com> wrote on Sat, 29 Sep 2018 at 5:51 AM:
>>
>>> On 09/28/2018 04:42 PM, Eric Fried wrote:
>>> > On 09/28/2018 09:41 AM, Balázs Gibizer wrote:
>>> >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried <openstack at fried.cc>
>>> >> wrote:
>>> >>> It's time somebody said this.
>>> >>>
>>> >>> Every time we turn a corner or look under a rug, we find another use
>>> >>> case for provider traits in placement. But every time we have to have
>>> >>> the argument about whether that use case satisfies the original
>>> >>> "intended purpose" of traits.
>>> >>>
>>> >>> That's the only reason I've ever been able to glean: that it
>>> >>> (whatever "it" is) wasn't what the architects had in mind when they
>>> >>> came up with the idea of traits. We're not even talking about
>>> >>> anything that would require changes to the placement API. Just,
>>> >>> "Oh, that's not a *capability* - shut it down."
>>> >>>
>>> >>> Bubble wrap was originally intended as a textured wallpaper and a
>>> >>> greenhouse insulator. Can we accept the fact that traits have (many,
>>> >>> many) uses beyond marking capabilities, and quit with the arbitrary
>>> >>> restrictions?
>>> >>
>>> >> How far are we willing to go? Is an arbitrary (key: value) pair
>>> >> encoded in a trait name like key_`str(value)` (e.g.
>>> >> CURRENT_TEMPERATURE: 85 encoded as CUSTOM_TEMPERATURE_85) something
>>> >> we would be OK to see in placement?
>>> >
>>> > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but
>>> > TEMPERATURE_<specific_number> is not.
>>>
>>> That's correct, because you're encoding >1 piece of information into the
>>> single string (the fact that it's a temperature *and* the value of that
>>> temperature are the two pieces of information encoded into the single
>>> string).
>>>
>>> Now that there are multiple pieces of information encoded in the
>>> string, the reader of the trait string needs to know how to decode
>>> those bits of information, which is exactly what we're trying to avoid
>>> doing (because we can see from the ComputeCapabilitiesFilter, the
>>> extra_specs mess, and the giant hairball that is the NUMA and CPU
>>> pinning "metadata requests" how that turns out).
>>>
>>
>> Do I understand correctly that one of Jay's complaints is that the
>> metadata API is undiscoverable? That is, the extra_specs mess and the
>> ComputeCapabilitiesFilter mess?
>>
>
> If yes, then we resolve the discoverability with the "/traits" API.
>
>
>>
>> Another complaint is about the information in the string. I agree that
>> TEMPERATURE_<specific_number> is terrible.
>> I prefer the approach I used in the nvdimm proposal now: I don't want
>> to use traits like NVDIMM_DEVICE_500GB and NVDIMM_DEVICE_1024GB. I want
>> to put the devices into different resource providers, and use min_size
>> and max_size to limit the allocation. The user will then request a
>> resource class like RC_NVDIMM_GB=512.
>>
>
> TEMPERATURE_<specific_number> is wrong, given the way it is used. But I
> don't think a BIOS version trait is wrong: I don't expect the end user
> to read the information from the trait directly; there should be
> documentation from the admin to explain more. The BIOS version should
> be something the admin understands, and then it is enough.
>
>
>>
>>>
>>> > This thread isn't about setting these parameters; it's about getting
>>> > us to a point where we can discuss a question just like this one
>>> > without running up against:
>>> > "That's a hard no, because you shouldn't encode key/value pairs in
>>> traits."
>>> >
>>> > "Oh, why's that?"
>>> >
>>> > "Because that's not what we intended when we created traits."
>>> >
>>> > "But it would work, and the alternatives are way harder."
>>> >
>>> > "-1"
>>> >
>>> > "But..."
>>> >
>>> > "-I
>>>
>>> I believe I've articulated a number of times why traits should remain
>>> unary pieces of information, and not just said "because that's what we
>>> intended when we created traits".
>>>
>>> I'm tough on this because I've seen the garbage code and unmaintainable
>>> mess that not having structurally sound data modeling concepts and
>>> information interpretation rules leads to in Nova, and I don't want to
>>> encourage any more of it.
>>>
>>> -jay
>>>