[openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

Jim Rollenhagen jim at jimrollenhagen.com
Thu Jun 9 16:12:52 UTC 2016


> >>>>>1.) Nova <-> ironic interactions generally seem terrible?
> >>>>I don't know if I'd call it terrible, but there's friction. Things that
> >>>>are unchangeable on hardware are just software configs in VMs (like MAC
> >>>>addresses, overlays, etc), and things that make no sense in VMs are
> >>>>pretty standard on servers (trunked VLANs, bonding, etc).
> >>>>
> >>>>One way we've gotten around it is by using Ironic standalone via
> >>>>Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
> >>>>and includes playbooks to build config drives and deploy images in a
> >>>>fairly rudimentary way without Nova.
> >>>>
> >>>>I call this the "better than Cobbler" way of getting a toe into the
> >>>>Ironic waters.
> >>>>
> >>>>[1] https://github.com/openstack/bifrost
> >>>Out of curiosity, why Ansible vs turning
> >>>https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py
> >>>(or something like it) into a tiny-wsgi-app (pick a useful name here) that
> >>>has its own REST API (one that looks pretty similar to the public functions
> >>>in that driver file)?
> >>
> >>That's an interesting idea. I think a reason Bifrost doesn't just import
> >>nova virt drivers is that they're likely _not_ a supported public API
> >>(despite not having _'s at the front). Also, a lot of the reason Bifrost
> >>exists is to enable users to get the benefits of all the baremetal
> >>abstraction work done in Ironic without having to fully embrace all of
> >>OpenStack's core. So while you could get a little bit of the stuff from
> >>nova (like config drive building), you'd still need to handle network
> >>address assignment, image management, etc. etc., and pretty soon you
> >>start having to run a tiny glance and a tiny neutron. The Bifrost way
> >>is the opposite: I just want a tiny Ironic, and _nothing_ else.
> >>
> >
> >Ya, I'm just thinking that at a certain point
> 
> Oops, forgot to fill this out; I was just thinking that at a certain point
> it might be easier to figure out how to extract that API (meh, whether it's
> public or private) and just have someone make an executive decision around ironic
> being a stand-alone thing or not (and a capable stand-alone thing, not a
> sorta-standalone-thing).
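
For the record, the "tiny-wsgi-app" being floated above would be shaped
roughly like this. It's a sketch only; the route and names are made up,
and the driver calls it would wrap are exactly the
not-a-supported-public-API problem mentioned:

import json
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Hypothetical route: POST /nodes/<uuid>/spawn
    if (environ['REQUEST_METHOD'] == 'POST'
            and environ['PATH_INFO'].endswith('/spawn')):
        # A real version would construct nova's IronicDriver and call
        # driver.spawn(context, instance, image_meta, ...) here; all
        # of that plumbing is nova-internal and unsupported.
        start_response('202 Accepted',
                       [('Content-Type', 'application/json')])
        return [json.dumps({'status': 'spawning (stub)'}).encode()]
    start_response('404 Not Found',
                   [('Content-Type', 'application/json')])
    return [json.dumps({'error': 'not found'}).encode()]

if __name__ == '__main__':
    make_server('127.0.0.1', 8089, app).serve_forever()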

So, I've been thinking about this quite a bit. We've also talked about
doing a v2 API (as evil as that may be) in Ironic here and there. We've
learned a lot of lessons from the v1 API, mostly that our API is
absolutely terrible for humans. I'd love to fix that (whether that
requires a v2 API or not is unclear, so don't focus on that).

I've noticed that people keep talking about the Nova driver API
not being public/stable/whatever in this thread - let's ignore that and
think bigger.

So, there are two large use cases for ironic that we support today:

* Ironic as a backend to nova. Operators still need to interact with the
  Ironic API for management, troubleshooting, and fixing issues that
  computers do not handle today.

* Ironic standalone - by this I mean ironic without nova. The primary
  deployment method here is using Bifrost, and I also call it the
  "better than cobbler" case. I'm not sure if people are using this
  without bifrost, or with other non-nova services, today. Users in this
  model, as I understand things, do not interact with the Ironic API
  directly (except maybe for troubleshooting).

There are other use cases I would like to support:

* Ironic standalone, without Bifrost. I would love for a deployer to be
  able to stand up Ironic as an end-user facing API, probably with
  Keystone, maybe with Neutron/Glance/Swift if needed. This would
  require a ton of discussion and work (e.g. ironic has no concept of
  tenants/projects today, we might want a scheduler, a concept of an
  instance, etc) and would be a very long road. The ideal solution to
  this is to break out the Compute API and scheduler to be separate from
  Nova, but that's an even longer road, so let's pretend I didn't say
  that and not devolve this thread into that conversation (yet).

* Ironic as a backend to other things. Josh pointed out kubernetes
  somewhere; I'd love for ironic to be an official backend there. Heat
  today goes through Nova to get an ironic instance, and it seems
  reasonable to have heat talk directly to ironic. Things like that. The
  amount of work here might depend on the application using ironic
  (e.g. I think k8s has its own scheduler, heat does not, right?).

So all that said, I think there is one big step we can take in the
short-term that works for all of these use cases: make our API better.
Make it simpler. Take a bunch of the logic in the Nova driver, and put
it in our API instead. spawn() becomes /v1/nodes/foo/deploy or
something, etc (I won't let us bikeshed those specifics in this thread).
Just doing that allows us to remove a bunch of code from a number of
places (nova, bifrost, shade, tempest(?)) and make those simpler. It
allows direct API users to more easily deploy things, making one API
call instead of a bunch (we could even create Neutron ports and such for
them). It allows k8s and friends to write less code. Oh, let's also stop
directly exposing state machine transitions as API actions, that's
crazy, kthx.
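
To make that concrete, here's a rough sketch of the difference against a
standalone ironic. The two-step flow below is today's actual v1 API; the
single /deploy call is hypothetical (the name, the body, all of it; see
above re: bikeshedding):

import requests

IRONIC = 'http://127.0.0.1:6385'
HEADERS = {'X-OpenStack-Ironic-API-Version': '1.11',
           'Content-Type': 'application/json'}

def deploy_today(node_uuid, image_url, image_checksum):
    # Step 1: stash the deploy info on the node with a JSON patch.
    patch = [
        {'op': 'add', 'path': '/instance_info/image_source',
         'value': image_url},
        {'op': 'add', 'path': '/instance_info/image_checksum',
         'value': image_checksum},
        {'op': 'add', 'path': '/instance_info/root_gb', 'value': 40},
    ]
    requests.patch('%s/v1/nodes/%s' % (IRONIC, node_uuid),
                   json=patch, headers=HEADERS).raise_for_status()
    # Step 2: drive the state machine by hand; the caller has to know
    # that "active" is the magic provision target.
    requests.put('%s/v1/nodes/%s/states/provision' % (IRONIC, node_uuid),
                 json={'target': 'active'},
                 headers=HEADERS).raise_for_status()
    # Step 3 (not shown): poll the node until provision_state settles,
    # and interpret the state machine yourself.

def deploy_proposed(node_uuid, image_url, image_checksum):
    # Hypothetical single call: say what you want, not how to walk
    # the state machine. Endpoint and body are placeholders.
    body = {'image': {'source': image_url, 'checksum': image_checksum},
            'root_gb': 40}
    requests.post('%s/v1/nodes/%s/deploy' % (IRONIC, node_uuid),
                  json=body, headers=HEADERS).raise_for_status()

Everything nova, bifrost, and shade do today to babysit the first flow
could collapse into something like the second.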

I think this is what Josh is trying to get at, except maybe with a
separate API service in between, which doesn't sound very desirable to
me.

Thoughts on this?

Additionally, in the somewhat-short term, I'd like us to try to
enumerate the major use cases we're trying to solve, and make those use
cases ridiculously simple to deploy. Ironic is quickly becoming a
tangled mess of configuration options, and deploying it means tweaking
the surrounding services (nova, neutron) as well. Once it's figured out,
it works very well.
However, it's incredibly difficult to figure out how to get there.

Ultimately, I'd like someone that wants to deploy ironic in a common use
case, with off-the-shelf hardware, to be able to get a POC up and
running in a matter of hours, not days or weeks.

Who's in? :)

// jim


