[openstack-dev] [openstack-tc] Splitting the Baremetal driver out of Nova

Rafi Khardalian rafi at metacloud.com
Sat Apr 27 00:02:56 UTC 2013


Another +1 here on splitting out the bare metal code.

As has already been covered quite thoroughly in this thread, the problems
which need to be solved for managing bare metal are entirely different from
those of virtual machines.  The structure of the current bare metal code
within Nova is evidence of that (separate schema, services, etc.).  It just
becomes more problematic as functionality for bare metal is extended.

We are still early enough in the initiative to make this split now,
before we end up with a project inside a project, which would ultimately
slow both efforts.

- Rafi


On Fri, Apr 26, 2013 at 4:04 PM, Michael Still <mikal at stillhq.com> wrote:

> I think this is true -- especially as there are relatively few users at
> the moment. The longer we leave the current implementation in nova, the
> harder it gets to remove.
>
>
> On Sat, Apr 27, 2013 at 1:56 AM, Russell Bryant <rbryant at redhat.com> wrote:
>
>> On 04/26/2013 10:26 AM, Monty Taylor wrote:
>> >
>> >
>> > On 04/26/2013 09:56 AM, Russell Bryant wrote:
>> >> On 04/26/2013 06:29 AM, Mark McLoughlin wrote:
>> >>> Hey
>> >>>
>> >>> On Thu, 2013-04-25 at 11:57 -0700, Devananda van der Veen wrote:
>> >>>> In the Nova "Baremetal Next Steps" design session last Thursday,
>> >>>> I proposed that we split the baremetal driver out into its own
>> >>>> top-level project - this was met with support from everyone in
>> >>>> the room. We then discussed the project's plans and how such a
>> >>>> split should be done. I have written up that proposal in more
>> >>>> detail here, and would like to bring this before the TC.
>> >>>>
>> >>>> https://wiki.openstack.org/wiki/BaremetalSplitRationale
>> >>>
>> >>> I'm all for the code being in a separate project - I think this thing
>> >>> could have users outside of Nova.
>> >>>
>> >>> I did work up an idea for how it could be done during Folsom
>> >>> development, but I actually was thinking of it just being a library:
>> >>>
>> >>>   https://gist.github.com/markmc/5466295
>> >>>
>> >>> Not much detail there, but BareMetalConnection/BareMetalNode would
>> >>> be in the library and BareMetalDriver would be the Nova part. I
>> >>> hadn't figured on this being a service with its own REST API, but
>> >>> perhaps that does make sense.
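>> >>>
>> >>> To make that concrete, here's the rough shape I was picturing - the
>> >>> class names are the ones I mentioned above, but the method
>> >>> signatures below are invented purely for illustration:
>> >>>
>> >>>     class BareMetalConnection(object):
>> >>>         """Lives in the library; knows how to talk to hardware."""
>> >>>         def deploy(self, node, image):
>> >>>             raise NotImplementedError()
>> >>>
>> >>>         def power_off(self, node):
>> >>>             raise NotImplementedError()
>> >>>
>> >>>     class BareMetalDriver(object):
>> >>>         """Stays in Nova; adapts the library to the virt driver API."""
>> >>>         def __init__(self, connection):
>> >>>             self._connection = connection
>> >>>
>> >>>         def spawn(self, context, instance, image_meta):
>> >>>             # pick a node however the scheduler decides; stubbed here
>> >>>             node = instance.get('node')
>> >>>             self._connection.deploy(node, image_meta)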
>> >>
>> >> I'm +1 on splitting it out.  I wasn't too sure about library vs.
>> >> service with an API because I'm not hugely familiar with this code,
>> >> anyway.  I think Devananda did a nice job documenting the rationale,
>> >> though.  I think this bullet from the doc helps push me toward the
>> >> idea of a service:
>> >>
>> >>     Operational teams often perform tasks on hardware which do not
>> >>     apply to virtual machines (eg, discovery, HW RAID configuration,
>> >>     firmware updates, burn-in). These could be added as Nova API
>> >>     extensions, but again, it seems like the wrong approach. Instead,
>> >>     a separate API could manage hardware before exposing it via the
>> >>     Nova API. For example, after a node is discovered, configured,
>> >>     updated, and burned in by Ironic, it could then be enrolled with
>> >>     Nova and provisioned like any other cloud instance (eg, via
>> >>     "nova boot").
>> >>
>> >> It seems that a separate API makes sense here to avoid making awkward
>> >> extensions to the compute API.
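>> >>
>> >> To illustrate that hand-off (every call below is hypothetical, since
>> >> the new service's API is still undefined):
>> >>
>> >>     # Hypothetical client calls -- illustration only.
>> >>     node = ironic.discover_node('00:25:90:aa:bb:cc')  # new hardware
>> >>     ironic.configure_raid(node, level=10)
>> >>     ironic.update_firmware(node)
>> >>     ironic.burn_in(node, hours=48)
>> >>
>> >>     # Hardware is ready: enroll it with Nova, after which it can be
>> >>     # provisioned like any other instance (eg, via "nova boot").
>> >>     nova.baremetal_node_enroll(node)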
>> >>
>> >>> However, that does make me wonder whether we'd see desire to plug
>> >>> alternative baremetal provisioning technologies into that API, in the
>> >>> same way we see a desire for alternative backends to the Swift API.
>> >>
>> >> Would that be a problem?  Actually, is that different from how it works
>> >> already?
>> >>
>> >>     cfg.StrOpt('driver',
>> >>                default='nova.virt.baremetal.pxe.PXE',
>> >>                help='Baremetal driver back-end (pxe or tilera)'),
>> >>     cfg.StrOpt('power_manager',
>> >>                default='nova.virt.baremetal.ipmi.IPMI',
>> >>                help='Baremetal power management method'),
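>> >>
>> >> (For reference, that dotted string just gets imported and
>> >> instantiated at load time. The real Nova code goes through its
>> >> importutils helper, but a simplified sketch of the effect is:
>> >>
>> >>     import importlib
>> >>
>> >>     def load_backend(dotted_path):
>> >>         # split "nova.virt.baremetal.pxe.PXE" into module + class
>> >>         module_name, class_name = dotted_path.rsplit('.', 1)
>> >>         module = importlib.import_module(module_name)
>> >>         return getattr(module, class_name)()
>> >>
>> >>     driver = load_backend('nova.virt.baremetal.pxe.PXE')
>> >>
>> >> so swapping in an alternative backend is just a matter of pointing
>> >> the option at a different class.)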
>> >>
>> >>> Finally, making this a service (with an as-yet undefined API) rather
>> >>> than a library makes me think the new project should go through an
>> >>> incubation period, rather than bypassing incubation the way Cinder
>> >>> did.
>> >>
>> >> I agree here.
>> >>
>> >> I'm actually generally concerned with the fast-track approach.  I do
>> >> not think that is the case here, but it could be abused in the future
>> >> as an alternative, perhaps easier, method of getting a project
>> >> integrated.  I have actually heard this exact strategy mentioned in
>> >> conversation (growing something in a project and then splitting it
>> >> off, because it seems as if it will be easier that way).
>> >
>> > Good point.
>> >
>> > As it pertains to incubated splits and the deprecation of the feature
>> > in the original project - what do we think the cutover should be? If,
>> > for instance, we moved forward with Ironic as an incubated project,
>> > and then accepted it as integrated for the "I" release - since there
>> > would already have been the Havana cycle with the external project
>> > and the incubated project, do we pull the code from nova in I? Or do
>> > we wait until J?
>>
>> If there is a sufficiently documented migration path, I think pulling
>> baremetal out of Nova in the same release that Ironic becomes integrated
>> would be acceptable (so, theoretically I in this case).
>>
>> --
>> Russell Bryant
>>