[openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and "ready state" orchestration

Dmitry Tantsur dtantsur at redhat.com
Wed Sep 17 07:24:18 UTC 2014


On Tue, 2014-09-16 at 15:42 -0400, Zane Bitter wrote:
> On 16/09/14 15:24, Devananda van der Veen wrote:
> > On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter <zbitter at redhat.com> wrote:
> >> On 16/09/14 13:56, Devananda van der Veen wrote:
> >>>
> >>> On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy <shardy at redhat.com> wrote:
> >>>>
> >>>> For example, today, I've been looking at the steps required for driving
> >>>> autodiscovery:
> >>>>
> >>>> https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
> >>>>
> >>>> Driving this process looks a lot like application orchestration:
> >>>>
> >>>> 1. Take some input (IPMI credentials and MAC addresses)
> >>>> 2. Maybe build an image and ramdisk (could drop credentials in)
> >>>> 3. Interact with the Ironic API to register nodes in maintenance mode
> >>>> 4. Boot the nodes, monitor state, wait for a signal back containing some
> >>>>      data obtained during discovery (same as WaitConditions or
> >>>>      SoftwareDeployment resources in Heat..)
> >>>> 5. Shutdown the nodes and mark them ready for use by nova
> >>>>
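For concreteness, a minimal sketch of what steps 3 and 4 above could look like when driven directly against the Ironic API with python-ironicclient; the endpoint, credentials, driver name and MAC address below are placeholders, and the "wait for a signal back" part is whatever the discovery ramdisk reports to, which is out of scope here:

    from ironicclient import client

    # Keystone credentials/endpoint are illustrative placeholders.
    ironic = client.get_client(1,
                               os_username='admin',
                               os_password='secret',
                               os_tenant_name='admin',
                               os_auth_url='http://keystone:5000/v2.0')

    # Step 3: register the node, knowing only its IPMI credentials and MAC
    # address, and put it into maintenance mode.
    node = ironic.node.create(driver='pxe_ipmitool',
                              driver_info={'ipmi_address': '10.1.2.3',
                                           'ipmi_username': 'root',
                                           'ipmi_password': 'calvin'})
    ironic.port.create(node_uuid=node.uuid, address='52:54:00:aa:bb:cc')
    ironic.node.set_maintenance(node.uuid, 'true')

    # Step 4 (partially): power the node on so it can PXE boot a discovery
    # ramdisk; waiting for the data it sends back happens elsewhere.
    ironic.node.set_power_state(node.uuid, 'on')
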
> >>>
> >>> My apologies if the following sounds snarky -- but I think there are a
> >>> few misconceptions that need to be cleared up about how and when one
> >>> might use Ironic. I also disagree that 1..5 looks like application
> >>> orchestration. Step 4 is a workflow, which I'll go into in a bit, but
> >>> this doesn't look at all like describing or launching an application
> >>> to me.
> >>
> >>
> >> +1 (Although step 3 does sound to me like something that matches Heat's
> >> scope.)
> >
> > I think it's a simplistic use case, and Heat supports a lot more
> > complexity than is necessary to enroll nodes with Ironic.
> >
> >>
> >>> Step 1 is just parse a text file.
> >>>
> >>> Step 2 should be a prerequisite to doing -anything- with Ironic. Those
> >>> images need to be built and loaded in Glance, and the image UUID(s)
> >>> need to be set on each Node in Ironic (or on the Nova flavor, if going
> >>> that route) after enrollment. Sure, Heat can express this
> >>> declaratively (ironic.node.driver_info must contain key:deploy_kernel
> >>> with value:NNNN), but are you suggesting that Heat build the images,
> >>> or just take the UUIDs as input?
> >>>
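Concretely, the imperative equivalent of that declarative statement is a JSON-patch update on the node; a sketch, reusing the `ironic` client and `node` from the earlier snippet, with placeholder Glance UUIDs (the exact driver_info key names depend on the deploy driver in use):

    # Placeholder Glance image UUIDs for the deploy kernel/ramdisk.
    deploy_kernel = '11111111-2222-3333-4444-555555555555'
    deploy_ramdisk = '66666666-7777-8888-9999-aaaaaaaaaaaa'
    ironic.node.update(node.uuid, [
        {'op': 'add', 'path': '/driver_info/deploy_kernel',
         'value': deploy_kernel},
        {'op': 'add', 'path': '/driver_info/deploy_ramdisk',
         'value': deploy_ramdisk},
    ])
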
> >>> Step 3 is, again, just parse a text file
> >>>
> >>> I'm going to make an assumption here [*], because I think step 4 is
> >>> misleading. You shouldn't "boot a node" using Ironic -- you do that
> >>> through Nova. And you _don't_ get to specify which node you're booting.
> >>> You ask Nova to provision an _instance_ on a _flavor_ and it picks an
> >>> available node from the pool of nodes that match the request.
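For completeness, the "boot through Nova" path reads roughly like this with python-novaclient; the flavor and image names are placeholders, and the scheduler, not the caller, picks the matching node:

    from novaclient import client as nova_client

    # Credentials/endpoint are illustrative placeholders.
    nova = nova_client.Client(2, 'admin', 'secret', 'admin',
                              'http://keystone:5000/v2.0')

    # You ask for an instance of a flavor; Nova picks any available Ironic
    # node whose properties match that flavor.
    flavor = nova.flavors.find(name='baremetal')
    image = nova.images.find(name='overcloud-image')
    server = nova.servers.create(name='instance-0', image=image,
                                 flavor=flavor)
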
> >>
> >>
> >> I think your assumption is incorrect. Steve is well aware that provisioning
> >> a bare-metal Ironic server is done through the Nova API. What he's
> >> suggesting here is that the nodes would be booted - not Nova-booted, but
> >> booted in the sense of having power physically applied - while in
> >> maintenance mode in order to do autodiscovery of their capabilities,
> >
> > Except simply applying power doesn't, in itself, accomplish anything
> > besides causing the machine to power on. Ironic will only prepare the
> > PXE boot environment when initiating a _deploy_.
> 
> From what I gather elsewhere in this thread, the autodiscovery stuff is 
> a proposal for the future, not something that exists in Ironic now, and 
> that may be the source of the confusion.
> 
> In any case, the etherpad linked at the top of this email was written by 
> someone in the Ironic team and _clearly_ describes PXE booting a 
> "discovery image" in maintenance mode in order to obtain hardware 
> information about the box.
It was written by me, and it seems to be my fault that I didn't state
more clearly there that this work is not, and probably will not be,
merged into Ironic upstream. Sorry for the confusion.

That said, my experiments proved it quite possible (though not without
some network-related hacks as of now) to follow these steps and collect
(i.e. discover) the hardware information required for scheduling from a
node, knowing only its IPMI credentials.
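
To make "hardware information required for scheduling" concrete: whatever the discovery ramdisk reports eventually has to land in the node's properties, since that is what the Nova scheduler matches against flavors. A sketch, assuming the `ironic` client and `node` from the snippets above and made-up discovered values:

    # Made-up values standing in for whatever discovery reported back.
    discovered = {'cpus': 8, 'memory_mb': 16384, 'local_gb': 500,
                  'cpu_arch': 'x86_64'}
    ironic.node.update(node.uuid,
                       [{'op': 'add', 'path': '/properties/%s' % key,
                         'value': value}
                        for key, value in discovered.items()])

    # Step 5: power the node off and drop maintenance mode so Nova can
    # schedule onto it.
    ironic.node.set_power_state(node.uuid, 'off')
    ironic.node.set_maintenance(node.uuid, 'false')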

> 
> cheers,
> Zane.
> 
> >> which
> >> is presumably hard to do automatically when they're turned off.
> >
> > Vendors often have ways to do this while the power is turned off, eg.
> > via the OOB management interface.
> >
> >> He's also
> >> suggesting that Heat could drive this process, which I happen to disagree
> >> with because it is a workflow not an end state.
> >
> > +1
> >
> >> However the main takeaway
> >> here is that you guys are talking completely past one another, and have been
> >> for some time.
> >>
> >
> > Perhaps more detail in the expected interactions with Ironic would be
> > helpful and avoid me making (perhaps incorrect) assumptions.
> >
> > -D
> >
> 
> 




