[openstack-dev] [Ironic] Node groups and multi-node operations

Robert Collins robertc at robertcollins.net
Sat Jan 25 10:47:42 UTC 2014


On 25 January 2014 19:42, Clint Byrum <clint at fewbar.com> wrote:
> Excerpts from Robert Collins's message of 2014-01-24 18:48:41 -0800:

>> > However, in looking at how Ironic works and interacts with Nova, it
>> > doesn't seem like there is any distinction of data per-compute-node
>> > inside Ironic.  So for this to work, I'd have to run a whole bunch of
>> > ironic instances, one per compute node. That seems like something we
>> > don't want to do.
>>
>> Huh?
>>
>
> I can't find anything in Ironic that lets you group nodes by anything
> except chassis. It was not a serious discussion of how the problem would
> be solved, just a point that, without some way to tie Ironic nodes to
> compute nodes, I'd have to run multiple Ironics.

I don't understand the point. There is no tie between ironic nodes and
compute nodes. Why do you want one?

>> What makes you think this? Ironic runs in the same data centre as Nova...
>> If it takes 20,000 API calls to boot 10,000 physical machines, is that
>> really a performance problem? And when, other than first power-on, would
>> you do that anyway?
>>
>
> The API calls are meh. The image distribution and power fluctuations
> may not be.

But there isn't a strong connection between API calls and image
distribution. Take, for example (and this is my current favourite for
'when we get to optimising'), a glance multicast service: Ironic would
add nodes to the relevant group as deploys are requested and remove them
as they complete, and glance would take care of stopping the service
once the group has no members.
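
To make the lifecycle concrete, here's a rough sketch in Python of the
group bookkeeping I mean. None of these names exist in glance or Ironic
today; it's purely illustrative of who starts and stops the stream:

    class MulticastGroup(object):
        """Nodes currently fetching one image over a multicast stream."""

        def __init__(self, image_id, start_stream, stop_stream):
            self.image_id = image_id
            self.members = set()
            self._start_stream = start_stream  # e.g. spawn the sender
            self._stop_stream = stop_stream    # e.g. tear the sender down

        def add_member(self, node_id):
            # Ironic adds a node when its deploy is requested; the
            # first member starts the stream.
            if not self.members:
                self._start_stream(self.image_id)
            self.members.add(node_id)

        def remove_member(self, node_id):
            # Ironic removes the node when its deploy completes; glance
            # stops the stream once the group is empty.
            self.members.discard(node_id)
            if not self.members:
                self._stop_stream(self.image_id)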


>> > I actually think that the changes to Heat and Nova are trivial. Nova
>> > needs to have groups for compute nodes and the API needs to accept those
>> > groups. Heat needs to take advantage of them via the API.
>>
>> The changes to Nova would be massive and invasive, as they would be
>> redefining the driver API and all the logic around it.
>>
>
> I'm not sure I follow you at all. I'm suggesting that the scheduler have
> a new thing to filter on, and that compute nodes push their unique ID
> down into the Ironic driver so that while setting up nodes in Ironic one
> can assign them to a compute node. That doesn't sound massive and
> invasive.
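
If I'm reading that right, it amounts to a new host filter keyed on a
capability each compute node reports. As a rough sketch (nothing below
exists in Nova today; 'ironic_group' is an invented name for both the
hint and the reported stat):

    from nova.scheduler import filters

    class IronicGroupFilter(filters.BaseHostFilter):
        """Pass only hosts whose Ironic group matches the requested hint."""

        def host_passes(self, host_state, filter_properties):
            hints = filter_properties.get('scheduler_hints') or {}
            wanted = hints.get('ironic_group')
            if wanted is None:
                return True  # no constraint requested
            # The compute node would push its unique ID down into its
            # Ironic driver and report it back as a stat (hypothetical).
            return host_state.stats.get('ironic_group') == wanted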

That said, I think we're perhaps talking about different things. In the
section you were answering, I thought he was asking whether the API
should offer operations on arbitrary sets of nodes at once, or whether
each operation should be a separate API call. What I now think you were
talking about is whether operations should be able to describe logical
relations to other instances/nodes. Perhaps if we use the term 'batch'
for the multiple-things-at-once aspect, and 'group' for the primarily
scheduler-related problems of affinity / anti-affinity etc., we can
avoid future confusion.
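
To illustrate the distinction in code (the batch call is hypothetical,
since Ironic's API is per-node today; 'different_host' is an existing
Nova scheduler hint):

    def power_on_rack(ironic, rack_nodes):
        # 'Batch': one API request acting on many nodes at once.
        # Hypothetical call; today this is N separate per-node requests.
        ironic.nodes.set_power_state_batch(rack_nodes, 'power on')

    def boot_away_from(nova, image, flavor, existing_instances):
        # 'Group': a per-instance scheduling constraint (anti-affinity
        # here), expressed as a hint and evaluated by the scheduler.
        return nova.servers.create('db-1', image, flavor,
                                   scheduler_hints={
                                       'different_host': existing_instances})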

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


