[openstack-dev] [TripleO] Introspection rules aka advances profiles replacement: next steps
Ben Nemec
openstack at nemebean.com
Wed Oct 14 14:57:50 UTC 2015
On 10/14/2015 06:38 AM, Dmitry Tantsur wrote:
> Hi OoO'ers :)
>
> It's going to be a long letter, fasten your seat-belts (and excuse my
> bad, as usual, English)!
>
> In RDO Manager we used to have a feature called advanced profiles
> matching. It's still there in the documentation at
> http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/profile_matching.html
> but the related code needed reworking and didn't quite make it upstream
> yet. This mail is an attempt to restart the discussion on this topic.
>
> Short explanation for those unaware of this feature: we used detailed
> data from introspection (acquired using hardware-detect utility [1]) to
> provide scheduling hints, which we called profiles. A profile is
> essentially a flavor, but calculated using much more data. E.g. you
> could sat that a profile "foo" will be assigned to nodes with 1024 <=
> RAM <= 4096 and with GPU devices present (an artificial example).
> Profile was put on an Ironic as a capability as a result of
> introspection. Please read the documentation linked above for more details.
>
> This feature had a bunch of problems with it, to name a few:
> 1. It didn't have an API
> 2. It required a user to modify files by hand to use it
> 3. It was tied to a pretty specific syntax of the hardware [1] library
>
> So we decided to split this thing into 2 parts, which are of value one
> their own:
>
> 1. Pluggable introspection ramdisk - so that we don't force dependency
> on hardware-detect on everyone.
> 2. User-defined introspection rules - some DSL that will allow a user to
> define something like a specs file (see link above) via an API. The
> outcome would be something, probably capabilit(y|ies) set on a node.
> 3. Scheduler helper - an utility that will take capabilities set by the
> previous step, and turn them into exactly one profile to use.
>
> Long story short, we got 1 and 2 implemented in appropriate projects
> (ironic-python-agent and ironic-inspector) during the Liberty time
> frame. Now it's time to figure out what we do in TripleO about this, namely:
>
> 1. Do we need some standard way to define introspection rules for
> TripleO? E.g. a JSON file like we have for ironic nodes?
Yes, please.
>
> 2. Do we need a scheduler helper at all? We could use only capabilities
> for scheduling, but then we can end up with the following situation:
> node1 has capabilities C1 and C2, node2 has capability C1. First we
> deploy a flavor with capability C1, it goes to node1. Then we deploy a
> flavor with capability C2 and it fails, despite us having 2 correct
> nodes initially. This is what state files were solving in [1] (again,
> please refer to the documentation).
It sounds like the answer is yes. If the existing scheduler can't
handle a valid use case then we need some sort of solution.
>
> 3. If we need, where does it go? tripleo-common? Do we need an HTTP API
> for it, or do we just do it in place where we need it? After all, it's a
> pretty trivial manipulation with ironic nodes..
I think that would depend on what the helper ends up being. I can't see
it needing a REST API, but presumably it will have to plug into Nova
somehow. If it's something that would be generally useful (which it
sounds like it might be - Ironic capabilities aren't a TripleO-specific
thing), then it belongs in Nova itself IMHO.
>
> 4. Finally, we need an option to tell introspection to use
> python-hardware. I don't think it should be on by default, but it will
> require rebuilding of IPA (due to a new dependency).
Can we not just build it in always, but only use it when desired? Is
the one extra dependency that much of a burden?
>
> Looking forward to your opinions.
> Dmitry.
>
> [1] https://github.com/redhat-cip/hardware
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
More information about the OpenStack-dev
mailing list