[openstack-dev] [Ironic] Node groups and multi-node operations

Lucas Alvares Gomes lucasagomes at gmail.com
Thu Jan 23 12:57:32 UTC 2014


> So, a conversation came again up today around whether or not Ironic will,
> in the future, support operations on groups of nodes. Some folks have
> expressed a desire for Ironic to expose operations on groups of nodes;
> others want Ironic to host the hardware-grouping data so that eg. Heat and
> Tuskar can make more intelligent group-aware decisions or represent the
> groups in a UI. Neither of these have an implementation in Ironic today...
> and we still need to implement a host of other things before we start on
> this. FWIW, this discussion is meant to stimulate thinking ahead to things
> we might address in Juno, and aligning development along the way.
>
> There's also some refactoring / code cleanup which is going on and worth
> mentioning because it touches the part of the code which this discussion
> impacts. For our developers, here is additional context:
> * our TaskManager class supports locking >1 node atomically, but both the
> driver API and our REST API only support operating on one node at a time.
> AFAIK, nowhere in the code do we actually pass a group of nodes.
>

I think it's fine and we should keep things as-is, locking 1 node at a
time. Supporting locking a group of nodes at the same time can be very
tricky and hard to debug: what if you try to lock a group of nodes and
one of them is already locked by another conductor? Should you release
all the nodes you already hold a lock on and return a failure? Should
you skip that one and try to lock the rest of the nodes in the list?
etc... I think it's over-complex.

We can add some logic on top of Ironic to do multiple operations on
the nodes; I don't think we need to have it baked into Ironic itself.
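
Something like the sketch below is what I mean by doing it on top of
Ironic: loop over the nodes and call the single-node API, so a node
that is already locked only fails by itself. The URL, endpoint and
payload are only my assumption of what the v1 API looks like, and auth
handling is omitted.

    import requests

    IRONIC_URL = "http://ironic.example.com:6385/v1"
    HEADERS = {"X-Auth-Token": "..."}  # auth handling omitted


    def set_power_state(node_uuids, target="power on"):
        """Apply the same power action to each node, one lock at a time."""
        failures = {}
        for uuid in node_uuids:
            resp = requests.put(
                "%s/nodes/%s/states/power" % (IRONIC_URL, uuid),
                json={"target": target},
                headers=HEADERS,
            )
            if resp.status_code >= 400:
                # e.g. a node already locked by another conductor: only
                # that node fails, the rest of the group is unaffected
                failures[uuid] = resp.status_code
        return failures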


>  * for historical reasons, our driver API requires both a TaskManager and
> a Node object be passed to all methods. However, the TaskManager object
> contains a reference to the Node(s) which it has acquired, so the node
> parameter is redundant.
>

Yeah, that needs to be fixed.
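
Just to illustrate what the cleanup could look like (names here are
only examples, not an agreed design): the driver method drops the
redundant node argument and reaches the node through the task that
acquired it.

    class FakePowerDriver(object):

        # today: both the task and the node are passed in, even though
        # the task already holds a reference to the node(s) it acquired
        def reboot_today(self, task, node):
            node.power_state = 'rebooting'

        # after the cleanup: the node is reached through the task that
        # locked it (assuming TaskManager grows something like task.node)
        def reboot_after_cleanup(self, task):
            task.node.power_state = 'rebooting'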


> * we've discussed cleaning this up, but I'd like to avoid refactoring the
> same interfaces again when we go to add group-awareness.
>

+1 to discuss it first


I'll try to summarize the different axis-of-concern around which the
> discussion of node groups seem to converge...
>
> 1: physical vs. logical grouping
> - Some hardware is logically, but not strictly physically, grouped. Eg, 1U
> servers in the same rack. There is some grouping, such as failure domain,
> but operations on discrete nodes are independent. This grouping should be
> modeled somewhere, and sometimes a user may wish to perform an operation
> on that group. Is a higher layer (tuskar, heat, etc) sufficient? I think so.
>

Right, couldn't this be achieved today by using something like
metadata? All our resources do have an "extra" attribute that can be
used to store arbitrary key=value pairs, so maybe this logical
grouping could be just a matter of adding a value to the "extra"
attribute. One problem I can see with that is that currently we don't
offer a way to GET a group of nodes by looking at a specific metadata
value; maybe we need to improve our filtering to do that.
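
A rough sketch of what I mean, assuming we tag nodes with a "group"
key in "extra" via JSON Patch and, for now, filter on the client side
(the /nodes/detail listing, the key name and the URLs are just my
assumptions for illustration):

    import requests

    IRONIC_URL = "http://ironic.example.com:6385/v1"
    HEADERS = {"X-Auth-Token": "..."}  # auth handling omitted


    def tag_node(uuid, group):
        # "extra" is free-form, so a JSON Patch add is all we need
        patch = [{"op": "add", "path": "/extra/group", "value": group}]
        requests.patch("%s/nodes/%s" % (IRONIC_URL, uuid),
                       json=patch, headers=HEADERS)


    def nodes_in_group(group):
        # no server-side filter on "extra" today, so fetch the detailed
        # listing and filter locally
        nodes = requests.get("%s/nodes/detail" % IRONIC_URL,
                             headers=HEADERS).json()["nodes"]
        return [n for n in nodes
                if n.get("extra", {}).get("group") == group]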


> - Some hardware _is_ physically grouped. Eg, high-density cartridges which
> share firmware state or a single management end-point, but are otherwise
> discrete computing devices. This grouping must be modeled somewhere, and
> certain operations can not be performed on one member without affecting all
> members. Things will break if each node is treated independently.
>

I think this is important. I always thought about our Chassis resource
as a way to group this kind of node. Also, regardless of whether it's
modeled in the API or not, we need to know internally that operations
on some nodes might affect others, so we definitely need this kind of
grouping in Ironic (even if only internally somehow) in the future.
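
For example (just a sketch, the /chassis/<uuid>/nodes sub-resource and
the URLs are my assumptions for illustration), we could at least
enumerate the siblings of a node through its chassis before doing
something that touches shared firmware or a shared management
end-point:

    import requests

    IRONIC_URL = "http://ironic.example.com:6385/v1"
    HEADERS = {"X-Auth-Token": "..."}  # auth handling omitted


    def chassis_siblings(chassis_uuid):
        """Return every node UUID registered under the same chassis."""
        resp = requests.get("%s/chassis/%s/nodes" % (IRONIC_URL,
                                                     chassis_uuid),
                            headers=HEADERS)
        return [n["uuid"] for n in resp.json()["nodes"]]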


> 2: performance optimization
> - Some operations may be optimized if there is an awareness of concurrent
> identical operations. Eg, deploy the same image to lots of nodes using
> multicast or bittorrent. If Heat were to inform Ironic that this deploy is
> part of a group, the optimization would be deterministic. If Heat does not
> inform Ironic of this grouping, but Ironic infers it (eg, from timing of
> requests for similar actions) then optimization is possible but
> non-deterministic, and may be much harder to reason about or debug.
>
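
Just to illustrate why inferring the group from the timing of requests
is hard to reason about, a toy sketch (nothing like this exists in
Ironic; all the names and the window are invented): identical deploy
requests that happen to land within a short window get batched,
anything slower does not.

    import time
    from collections import defaultdict

    COALESCE_WINDOW = 5.0  # seconds, completely arbitrary

    _pending = defaultdict(list)  # image_ref -> [(timestamp, node_uuid)]


    def request_deploy(node_uuid, image_ref):
        now = time.time()
        batch = _pending[image_ref]
        # forget requests that fell outside the window; they were (or
        # will be) deployed on their own
        batch[:] = [(t, n) for (t, n) in batch if now - t < COALESCE_WINDOW]
        batch.append((now, node_uuid))
        # whether this node ends up in a shared (e.g. multicast) batch
        # depends purely on timing, which is why it is non-deterministic
        return [n for (_, n) in batch]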

> 3: APIs
> - Higher layers of OpenStack (eg, Heat) are expected to orchestrate
> discrete resource units into a larger group operation. This is where the
> grouping happens today, but already results in inefficiencies when
> performing identical operations at scale. Ironic may be able to get around
> this by coalescing adjacent requests for the same operation, but this would
> be non-deterministic.
> - Moving group-awareness or group-operations into the lower layers (eg,
> Ironic) looks like it will require non-trivial changes to Heat and Nova,
> and, in my opinion, violates a layer-constraint that I would like to
> maintain. On the other hand, we could avoid the challenges around
> coalescing. This might be necessary to support physically-grouped hardware
> anyway, too.
>
>
> If Ironic coalesces requests, it could be done in either the
> ConductorManager layer or in the drivers themselves. The difference would
> be whether our internal driver API accepts one node or a set of nodes for
> each operation. It'll also impact our locking model. Both of these are
> implementation details that wouldn't affect other projects, but would
> affect our driver developers.
>
> Also, until Ironic models physically-grouped hardware relationships in
> some internal way, we're going to have difficulty supporting that class of
> hardware. Is that OK? What is the impact of not supporting such hardware?
> It seems, at least today, to be pretty minimal.
>
>
> Discussion is welcome.
>
> -Devananda
>