[openstack-dev] [ironic] reducing tftp usage and separating boot control from dhcp config

Jarrod B Johnson jbjohnso at us.ibm.com
Thu May 9 18:08:06 UTC 2013




Jay Pipes <jaypipes at gmail.com> wrote on 05/09/2013 11:13:25 AM:

> From: Jay Pipes <jaypipes at gmail.com>
> To: openstack-dev at lists.openstack.org
> Date: 05/09/2013 11:56 AM
> Subject: Re: [openstack-dev] [ironic] reducing tftp usage and
> separating boot control from dhcp config
>
> On 05/09/2013 09:39 AM, Jarrod B Johnson wrote:
> > Hello, I wanted to measure the relative interest in two related efforts
> > I could work on.  In general, I'm considering bringing much of the
> > capability of xCAT's deployment facility over to Ironic, but wanted to
> > highlight a couple to start with.
> >
> > -An openstack tailored proxydhcp server.  This would mean the boot
> > control aspect would be cleanly split from network identity control.
> >  This would allow the bootstrap program to be more adaptive to UEFI and
> > BIOS style boot.  Given the limitations of python, I'd probably
> > implement this as a moderately standalone C program (e.g. the ability
> > to get at IP_PKTINFO afaict isn't cleanly possible until Python 3.3).
> > I can only be confident about x86; ARM I think would work with some
> > different logic, and POWER would not work.
> > -If a system does a PXE request, then send down a second-stage
> > bootloader.  That bootloader would then download a third-stage bootloader
> > (pxelinux.0/elilo/grub2/efibootmgfw.efi/pxeboot.n12/esxboot.c32/esxboot.efi)
> > and would provide/download kernel/initrd/wim/multiboot modules over https
> > or http.  This would be iPXE based (perhaps the xNBA branch that I
> > established for xCAT).  A patched esxboot would also be relevant.  On
> > EFI Linux, elilo could work with a patch (as it does for xCAT), but I
> > might see about grub2 depending on whether it will do something with
> > the Simple File System protocol or not.  The impetus for not using elilo
> > would be figuring out the most straightforward path to target-side
> > initrd concatenation.  I presume there is a desire to ultimately not
> > have many copies of the same initrd data, but I plan to also take
> > advantage of initrd injection for other features.  I can do server-side
> > injection into initrds, but it would mean unique initrds per target.
>
> Hi Jarrod, I have a few starter questions for you.
>
> 1) What benefit would xCAT bring to OpenStack deployments that do not
> use IBM hardware?
>
> 2) How is xCAT different from IPMI + Cobbler/PXE/tftp?
>
> 3) Given that neither of the above proposed solutions is Python-based,
> what plans are you thinking about for making the eventual solution
> packageable and installable via the methods that most folks use for
> deployment (Chef/Puppet/etc)?
>
> Thanks!
> -jay
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

1) xCAT works fine with generic IPMI/PXE systems.  I have even put in
workarounds for spec deviations by Supermicro that never had any relevance
to IBM equipment.  The hardware control does present additional capability
when it hits IBM hardware (e.g. LED status, remote video, firmware
inventory), but the remote OS boot has nearly zero affinity for our
hardware as it stands (bog-standard PXE, with the one exceedingly trivial
exception of the ability to specify the client IQN in an IBM way in
addition to the standard one).  Some other things I am considering (like
ISO IPL payload delivery) might require vendor-specific backends, but
these ideas don't.

2) Perhaps I should have broached the general xCAT topic separately from
these specific ideas.  However, to briefly distinguish: we have driver
injection and OS deployment and imaging for Red Hat, SUSE, Ubuntu,
Windows, and ESXi platforms.  Our IPMI implementation can hit 4,000
servers in less than 10 seconds using a single process with a single
filehandle.  TFTP is of course used in PXE flows, but to the extent
possible we immediately jump to HTTP for transfer of even the kernel and
initrd.  For now this at least means a higher-performance protocol to
improve scalability and boot time.  It also paves the way (in conjunction
with other concepts) for end-to-end verified HTTPS transport of material,
with strong integrity assurance and privacy.  However, none of this is
critical for OpenStack, as the plan would be a reimplementation of
capability, not a direct injection of code (since xCAT is mostly Perl);
as such, each piece can be evaluated on its own merits rather than by
its relation to xCAT.
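To illustrate the single-process, single-filehandle fan-out pattern
described above: a minimal Python sketch (xCAT itself is Perl, and the
function names here are invented for illustration).  One UDP socket fires
requests at every target and a single select() loop collects whatever
replies come back, rather than dedicating a connection or process per
server.

```python
# Sketch of a single-socket UDP fan-out: one socket, one event loop,
# many targets.  This is an illustration of the pattern, not xCAT code.
import select
import socket

def fan_out(sock, targets, payload=b"ping", timeout=2.0):
    """Send payload to every target from one socket and collect replies."""
    for addr in targets:
        sock.sendto(payload, addr)          # fire all requests up front
    replies = {}
    pending = set(targets)
    while pending:
        readable, _, _ = select.select([sock], [], [], timeout)
        if not readable:
            break                           # remaining targets timed out
        data, addr = sock.recvfrom(4096)
        if addr in pending:
            replies[addr] = data
            pending.discard(addr)
    return replies
```

Because requests are all in flight before the first reply is read, total
wall-clock time is bounded by the slowest responder rather than the sum of
round trips, which is what makes thousands of targets per process feasible.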

3) I must confess to not having my sea legs yet with respect to OpenStack
(I have looked at the code, but haven't even so much as installed it yet)
and thus am not yet certain of the best approach to packaging logistics.
However, the proxyDHCP solution would be roughly analogous to
dnsmasq/dhcpd in terms of being a C utility leveraged by the broader
OpenStack; the iPXE payload would be similarly analogous to pxelinux.0.

