[openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and "ready state" orchestration
Dmitry Tantsur
dtantsur at redhat.com
Wed Sep 17 11:28:36 UTC 2014
On Wed, 2014-09-17 at 10:36 +0100, Steven Hardy wrote:
> On Tue, Sep 16, 2014 at 02:06:59PM -0700, Devananda van der Veen wrote:
> > On Tue, Sep 16, 2014 at 12:42 PM, Zane Bitter <zbitter at redhat.com> wrote:
> > > On 16/09/14 15:24, Devananda van der Veen wrote:
> > >>
> > >> On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter <zbitter at redhat.com> wrote:
> > >>>
> > >>> On 16/09/14 13:56, Devananda van der Veen wrote:
> > >>>>
> > >>>>
> > >>>> On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy <shardy at redhat.com> wrote:
> > >>>>>
> > >>>>>
> > >>>>> For example, today, I've been looking at the steps required for driving
> > >>>>> autodiscovery:
> > >>>>>
> > >>>>> https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
> > >>>>>
> > >>>>> Driving this process looks a lot like application orchestration:
> > >>>>>
> > >>>>> 1. Take some input (IPMI credentials and MAC addresses)
> > >>>>> 2. Maybe build an image and ramdisk (could drop credentials in)
> > >>>>> 3. Interact with the Ironic API to register nodes in maintenance mode
> > >>>>> 4. Boot the nodes, monitor state, wait for a signal back containing
> > >>>>>    some data obtained during discovery (same as WaitConditions or
> > >>>>>    SoftwareDeployment resources in Heat...)
> > >>>>> 5. Shutdown the nodes and mark them ready for use by nova
> > >>>>>
> > >>>>
> > >>>> My apologies if the following sounds snarky -- but I think there are a
> > >>>> few misconceptions that need to be cleared up about how and when one
> > >>>> might use Ironic. I also disagree that 1..5 looks like application
> > >>>> orchestration. Step 4 is a workflow, which I'll go into in a bit, but
> > >>>> this doesn't look at all like describing or launching an application
> > >>>> to me.
> > >>>
> > >>>
> > >>>
> > >>> +1 (Although step 3 does sound to me like something that matches Heat's
> > >>> scope.)
> > >>
> > >>
> > >> I think it's a simplistic use case, and Heat supports a lot more
> > >> complexity than is necessary to enroll nodes with Ironic.
> > >>
> > >>>
> > >>>> Step 1 is just parse a text file.
> > >>>>
> > >>>> Step 2 should be a prerequisite to doing -anything- with Ironic. Those
> > >>>> images need to be built and loaded in Glance, and the image UUID(s)
> > >>>> need to be set on each Node in Ironic (or on the Nova flavor, if going
> > >>>> that route) after enrollment. Sure, Heat can express this
> > >>>> declaratively (ironic.node.driver_info must contain key:deploy_kernel
> > >>>> with value:NNNN), but are you suggesting that Heat build the images,
> > >>>> or just take the UUIDs as input?
> > >>>>
> > >>>> Step 3 is, again, just parse a text file
> > >>>>
> > >>>> I'm going to make an assumption here [*], because I think step 4 is
> > >>>> misleading. You shouldn't "boot a node" using Ironic -- you do that
> > >>>> through Nova. And you _dont_ get to specify which node you're booting.
> > >>>> You ask Nova to provision an _instance_ on a _flavor_ and it picks an
> > >>>> available node from the pool of nodes that match the request.
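
For reference, the Nova-side call being described here is just the ordinary
instance boot, roughly like the following (the flavor and image names are
purely illustrative):

    # provision a bare-metal instance via Nova; the scheduler then picks an
    # available Ironic node whose properties match the requested flavor
    nova boot --flavor baremetal --image $IMAGE_UUID my-instance
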
> > >>>
> > >>>
> > >>>
> > >>> I think your assumption is incorrect. Steve is well aware that
> > >>> provisioning
> > >>> a bare-metal Ironic server is done through the Nova API. What he's
> > >>> suggesting here is that the nodes would be booted - not Nova-booted, but
> > >>> booted in the sense of having power physically applied - while in
> > >>> maintenance mode in order to do autodiscovery of their capabilities,
> > >>
> > >>
> > >> Except simply applying power doesn't, in itself, accomplish anything
> > >> besides causing the machine to power on. Ironic will only prepare the
> > >> PXE boot environment when initiating a _deploy_.
> > >
> > >
> > > From what I gather elsewhere in this thread, the autodiscovery stuff is a
> > > proposal for the future, not something that exists in Ironic now, and that
> > > may be the source of the confusion.
> > >
> > > In any case, the etherpad linked at the top of this email was written by
> > > someone in the Ironic team and _clearly_ describes PXE booting a "discovery
> > > image" in maintenance mode in order to obtain hardware information about the
> > > box.
> > >
> >
> > Huh. I should have looked at that earlier in the discussion. It is
> > referring to out-of-tree code whose spec was not approved during Juno.
> >
> > Apparently, and unfortunately, throughout much of this discussion,
> > folks have been referring to potential features Ironic might someday
> > have, whereas I have been focused on the features we actually support
> > today. That is probably why it seems we are "talking past each other."
>
> FWIW I think a big part of the problem has been that you've been focussing
> on the fact that my solution doesn't match your preconceived ideas of how
> Ironic should interface with the world, while completely ignoring the
> use-case, i.e. the actual problem I'm trying to solve.
>
> That is why I'm referring to features Ironic might someday have - because
> Ironic currently does not solve my problem, so I'm looking for a workable
> way to change that.
>
> When I posted the draft Ironic resources, I did fail to provide detailed
> use-case info, so my bad there, but since I've posted the spec I don't
> really feel like the discussion has been much more productive - I've tried,
> repeatedly, to get you to understand my use-case, and you've tried,
> repeatedly, to tell me my implementation is wrong (without providing any
> fully-formed alternative; I call this "unqualified your-idea-sucks", a
> common and destructive review anti-pattern, IMO).
>
> It wasn't until Jay Faulkner's message earlier in this thread that someone
> actually proposed a possible (partial) alternative solution to the "ready
> state" use case, and that isn't implemented at all in Ironic yet.
>
> Maybe referring to stuff like autodiscovery was a mistake, but I was just
> trying to highlight that there are some interesting and potentially
> innovative possibilities which could be explored, if we had some Ironic Heat
> resources. It sounds like doing the whole autodiscovery thing in bash is
> what folks prefer, which is fine, nothing stops them doing that regardless
> of anything we do in Heat.
>
> Anyway, let's try to summarize the key points and capture the main
> work-items:
>
> 1. Not everyone will have an enterprise CMDB, so there should be some way
> to input inventory without one (even if it is a text file fed into
> ironicclient). The bulk-loading format to do this is TBD.
>
> 2. A way to generate that inventory automatically is desirable for some
> folks, but looks likely to be out of scope for Ironic. Folks are -1 on
> using Heat to drive this process, so we'll probably end up with some
> scary shell scripts instead, or maybe a Mistral workflow in the future.
Well, IMO we need it at least for TripleO, whether it lives in Ironic or not.
What's the point of having OpenStack deploy OpenStack if, in the middle,
we ask operators to use scripts or some CMDB to make the Ironic node
database ready to use?
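
For illustration, the "text file fed into ironicclient" approach from point 1
might look roughly like this sketch (the CSV layout of
mac,ipmi_address,ipmi_user,ipmi_password and the pxe_ipmitool driver are just
assumptions here; the actual bulk-loading format is still TBD):

    #!/bin/bash
    # Sketch only: enroll nodes listed one per line in nodes.csv as
    #   mac,ipmi_address,ipmi_user,ipmi_password
    # Assumes the usual OS_* auth variables are already exported for the CLI.
    while IFS=, read -r mac ipmi_address ipmi_user ipmi_password; do
        # register the node with its IPMI credentials
        node_uuid=$(ironic node-create -d pxe_ipmitool \
            -i ipmi_address="$ipmi_address" \
            -i ipmi_username="$ipmi_user" \
            -i ipmi_password="$ipmi_password" | awk '/ uuid /{print $4}')
        # attach the MAC so the deploy ramdisk can be PXE booted on this node
        ironic port-create -n "$node_uuid" -a "$mac"
    done < nodes.csv

This is exactly the sort of "scary shell script" glue point 2 refers to; the
open question is whether that glue should live in plain scripts, Mistral, or
Heat.
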
>
> 3. Vendor-specific optimization of nodes for particular roles will be
> handled via Ironic drivers, which expose capabilities that can be selected
> via Nova flavors. (Is there a BP for this?)
>
> 4. Stuff like RAID configuration will be handled via in-band config
> management tools; nobody has offered any solution for using management
> interfaces to do this, and drac-raid-mgmt is unlikely to land in Ironic.
> (Where would such an interface be appropriate, then?)
>
> 5. Nobody has offered any solution for management and convergence of BIOS
> and firmware levels (would this be part of the Ironic driver mentioned in
> (3), or are we punting the entire problem to in-band provision-time tooling?)
By the way, these three points are important for multi-tenant bare metal in
the future. Can Ironic rely on some tooling to ensure that a tenant has
left the hardware in a usable state? I don't think so. For me, it's part
of Ironic's lifecycle management.
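
As a rough illustration of the capability/flavor matching in point 3, with
today's tooling it might look something like this (the boot_mode capability is
only an example; exactly which capabilities drivers will expose is still open):

    # advertise a capability on an enrolled node
    ironic node-update $NODE_UUID add properties/capabilities='boot_mode:uefi'

    # have the baremetal flavor request that capability, so the Nova scheduler
    # only places instances of this flavor on nodes exposing it
    nova flavor-key baremetal set capabilities:boot_mode="uefi"
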
>
> If anyone can help by providing existing BPs related to the above (which I
> can follow and/or contribute to), that would be great - I'm happy to drop
> the whole Heat resource thing, but only if there's a clear path to solving
> the problems in some other/better way.
>
> Thanks,
>
> Steve
>