[openstack-dev] [nova][baremetal] Next step of Bare-Metal Provisioning

Ken Igarashi 50barca at gmail.com
Mon Nov 26 12:39:04 UTC 2012


Hi Devananda and all,

I put those discussions at
https://etherpad.openstack.org/GrizzlyBareMetalCloud.
Based on your comments, the DOCOMO team has started discussing the next step.
We will let you know once we decide on a direction.

As for the "disk image builder project", the DOCOMO team is OK with having a
"stackforge project" if there is no overlap with other projects (e.g.
https://blueprints.launchpad.net/heat/+spec/prebuilding-images).
The DOCOMO team can contribute bare-metal related ramdisks (e.g. clearing the
ephemeral disk, discovering BM nodes).

Thanks,
Ken


On Tue, Nov 20, 2012 at 4:08 AM, Devananda van der Veen <
devananda.vdv at gmail.com> wrote:

> Hi Ken, all,
>
> The HP team has several goals we are working on. At the top right now is
> getting all the tooling for CI in place so that we can be testing the
> baremetal driver properly. I am adding support to devstack now, and expect
> to finish in the next few days.
>
> (There are a few other bits that need to be done to tie this into
> devstack-gate and jenkins.o.o which aren't related to the driver
> development, so I won't talk about them here.)
>
>  I've added some comments inline ...
>
> On Mon, Nov 19, 2012 at 2:44 AM, Ken Igarashi <50barca at gmail.com> wrote:
>
>> Hi,
>>
>> Since several patches have started being merged, I would like to start a
>> discussion about new features for bare-metal provisioning.
>>
>> The following items are what I understand so far (most of them are listed
>> at https://etherpad.openstack.org/GrizzlyBareMetalCloud). If someone has
>> already started implementing some of them, or if I have missed any
>> important items, please let me know. I want to avoid duplicate work and
>> make Bare-Metal Provisioning better.
>>
>> 1. Security group support (we will use
>> https://review.openstack.org/#/c/10664/)
>>
>
> ++
>
>
>> 2. One nova-compute supports multiple CPU archs (x86, arm, tilera, …),
>> power managers (IPMI, PDU, …) and capabilities
>>
>
> This is slightly important for us, but not in the short term. In the long
> term, we would also like to see nova-compute understand multiple
> hypervisors so that baremetal can co-exist with kvm/xen/etc.
>
>
>> 3. Snapshot for a bare-metal instance
>>
>> 4. Access for Nova-Volume (cinder)
>>
>> - without using nova-compute iSCSI proxy.
>>
>> - use iSCSI CHAP?
>>
>> 5. Clear ephemeral disk after an instance termination (e.g. fill with
>> zero)
>>
> This should be a fairly straightforward change, with two steps:
>   a. build a "wipe-ramdisk" based on the current "deploy-ramdisk" but with
> different init scripts
>   b. plug this into the driver, e.g. within destroy()
>
> We can do this. We are also adding a hardware-discovery ramdisk, so that
> one does not need to know all the hardware info in advance.
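The wipe step (a) above could look roughly like the following Python sketch. The function name, the zero-fill approach, and the chunk size are illustrative assumptions, not part of the actual deploy-ramdisk or the baremetal driver:

```python
import os


def wipe_block_device(path, chunk_size=1024 * 1024):
    """Overwrite a block device (or file) with zeros, chunk by chunk.

    A minimal sketch of what a "wipe-ramdisk" init script might do to
    clear the ephemeral disk after instance termination (item 5).
    """
    zeros = b"\x00" * chunk_size
    written = 0
    with open(path, "r+b") as dev:
        size = dev.seek(0, os.SEEK_END)  # total size in bytes
        dev.seek(0)
        while written < size:
            n = min(chunk_size, size - written)
            dev.write(zeros[:n])
            written += n
    return written
```

In a real ramdisk one would more likely shell out to `dd` or a secure-erase tool, but the control flow is the same: find the ephemeral device, overwrite it, then signal the driver that destroy() can complete.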
>
>
>> 6. Improve an image deployment
>>
>> - not using iSCSI (get an AMI directly from Glance without using the "PXE
>> bare-metal provisioning helper server")
>>
> This is very important for us, and we have had several internal
> discussions on how to do it at scale. The general direction here is to
> embed a bittorrent client into the deploy ramdisk, and provide the
> partition layout via cloud meta-data service. This will probably be a fair
> bit of work; we will do this, too.
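One piece of the approach above, providing the partition layout via the metadata service, could be sketched as follows. The layout schema, field names, and the sfdisk-style output are assumptions for illustration; no such metadata key exists in Nova today:

```python
import json

# Hypothetical partition-layout document, as it might be served by the
# cloud metadata service to the deploy ramdisk. The schema is invented
# for this sketch.
EXAMPLE_LAYOUT = json.dumps({
    "disk": "/dev/sda",
    "partitions": [
        {"label": "root",      "size_mb": 10240},
        {"label": "swap",      "size_mb": 2048},
        {"label": "ephemeral", "size_mb": 0},  # 0 = use remaining space
    ],
})


def layout_to_sfdisk(doc):
    """Turn the layout document into sfdisk-style input lines.

    The deploy ramdisk could pipe the result into `sfdisk <disk>`
    before fetching the image (e.g. over bittorrent) onto the root
    partition.
    """
    layout = json.loads(doc)
    lines = []
    for part in layout["partitions"]:
        size = part["size_mb"]
        # An empty size field means "take the rest of the disk".
        lines.append(",%dM" % size if size else ",")
    return "\n".join(lines)
```

The exact sfdisk syntax varies between util-linux versions, so the generated lines here are only meant to show the shape of the transformation.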
>
>
>> 7. Fault-tolerance of bare-metal nova-compute and database -
>> https://blueprints.launchpad.net/nova/+spec/bare-metal-fault-tolerance
>>
>> 8.  More CPU archs (arm) support
>>
>>
>> Others
>>
>> - Use VNC instead of using SOL (Serial over LAN)
>>
>> - Support a bare-metal node with one NIC (we need at least two NICs
>> today).
>>
>
> Actually, what we need is for baremetal to support N NICs, where N > 0, and
> to understand the limitations of a physical network. Unfortunately, we won't
> be able to rely on SDN to solve some of the network issues for us, so we need
> a way to inform nova-network // quantum of the physical topology (how many
> NICs, what each MAC is, what networks each interface is on, what VLANs are
> available or required on which interfaces, and so on). I believe Robert
> Collins sent a long email about this last week, and it may be good for us
> to have a longer discussion about this topic.
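The topology information listed above might be expressed as a per-node document along these lines. Every field name here is hypothetical; nothing like this structure exists in nova-network or quantum, it only illustrates the kind of data that would need to flow from the baremetal driver to the network service:

```python
# Hypothetical per-node physical topology record: NIC count, MAC
# addresses, attached networks, and VLAN availability/requirements.
NODE_TOPOLOGY = {
    "node": "bm-node-01",
    "nics": [
        {
            "name": "eth0",
            "mac": "52:54:00:12:34:56",
            "network": "provisioning",
            "vlans_available": [100, 101],
            "vlan_required": 100,
        },
        {
            "name": "eth1",
            "mac": "52:54:00:ab:cd:ef",
            "network": "tenant",
            "vlans_available": [200, 201, 202],
            "vlan_required": None,
        },
    ],
}


def macs_on_network(topology, network):
    """Return the MACs of every NIC attached to a given physical network."""
    return [nic["mac"] for nic in topology["nics"]
            if nic["network"] == network]
```

With a record like this, the network service could answer questions such as "which interface on this node can carry tenant VLAN 200?" without assuming SDN support.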
>
>
> Regards,
> Devananda
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

