[openstack-dev] [nova][baremetal] Next step of Bare-Metal Provisioning

Osamu Habuka xiu.yushen at gmail.com
Fri Dec 7 05:00:16 UTC 2012


Hi Ken, Devananda and all,

Last year, I and Guan and yoko made dodai-compute as
baremetal provisioning tool that were created with
use of diablo.
https://github.com/nii-cloud/dodai-compute

And also, we think we would like to join to your project
because we have an interest in your baremetal project.

Best Regards,
Osamu

2012/11/27 Robert Collins <robertc at robertcollins.net>:
> On Tue, Nov 27, 2012 at 1:39 AM, Ken Igarashi <50barca at gmail.com> wrote:
>> Hi Devananda and all,
>>
>> I put those discussions at
>> https://etherpad.openstack.org/GrizzlyBareMetalCloud.
>
> Hi - looking at that now.
>
> If I read correctly, these are the 'next step' bits - I've just kept
> the headings, and replying here - I'll feed any consensus we get back
> into the etherpad later.
>
> Security group support
>  - using openflow is ideal, but it would be good to have it work for
> non-openflow too. One way would be by having the security group
> exposed to the instance metadata service (if it is not already) and
> hving the node configure iptables itself based on the metadata. I
> don't think we have any immediate need for that though.
>
> One nova compute supports multiple Hypervisors(kvm/xen/lxr etc), CPU
> archs (x86, arm, tilera,…), power managers(IPMI, PDU,…) and
> capabilities
>  - I don't think this is needed at all if we permit multiple
> nova-compute instances to run on one machine (e.g. by different
> nova.conf files).
>
> Snapshot for a bare-metal instance
>  - This seems possible, but only by rebooting the machine. Certainly
> its not something we need.
>
> Access for Nova-Volume (cinder)
>  - This is quite interesting, being able to run disks on cinder
> volumes. A related thing would be boot-from-volume for bare metal.
> Where the local disk is entirely ephemeral and we boot the machine off
> of cinder.
>  - probably PXE boot the ramdisk and kernel from cinder, iscsi mounted root.
>
> Clear ephemeral disk after an instance termination (e.g. fill with zero)
>  - Definitely interesting.
>  - Two use cases: dispose of instance, and new-tenant
>  - There are two non-wipe use cases I think:
>    - 'redeploy instance' to deploy with the same ephemeral disk
>    - deploying a new instance same tenant, so data security isn't
> important, but time to deploy may matter a lot.
>  - I wonder if (on suitable disks) we can just issue TRIM for the whole disk,
>
> Discover BM Node (collect basic information for BM DB)
>  - We have prototype code coming together for this
>
> Improve an image deployment
>  - This is very important for scaling.
>  - iscsi is ok, but we need to stop needing a full image per bm node,
> rather one image for all the nodes using the same image.
>  - The bittorrent thing is perhaps longer term. Important for fast
> mass deploys, but secondary to getting away from per-node fs
> injection.
>  - The biggest blocker to stopping the fs injection for baremetal is
> getting DHCP available on all baremetal networks - so that all the
> interfaces the physical machine has cabled get good DHCP. I've some
> preliminary work on a quantum provider-network setup for this, but it
> will need in-instance cooperation (again instance metadata seems like
> a sensible handoff strategy).
>
> Fault-tolerance of bare-metal nova-compute and database -
> https://blueprints.launchpad.net/nova/+spec/bare-metal-fault-tolerance
>  - There does not seem to be much on this spec yet, it just loops back
> into the etherpad. I think its a good thing to have though.
>  - A few quick thoughts
>  - multiple -compute nodes listening on AMQP
>  - replace the baremetal helper with nova-API, which has an existing HA story
>  - Redundant DHCP via HA quantum-dhcp-agent
>  - Redundant openflow agent
>
> More CPU archs (arm) support
>  - interesting but not on our immediate roadmap
>
> Support a bare-metal node with one NIC (we need at least two NICs today)
>  - very important to us
>
>> Based on your comment, we docomo team start discussing the next step. We
>> will let you know once we decide the direction for the next step.
>>
>> As for "disk image builder project", DOCOMO team are OK having a "stackforge
>> project" if there is no overlap with other projects (e.g.
>> https://blueprints.launchpad.net/heat/+spec/prebuilding-images).
>> DOCOMO team can contribute bare-metal related ramdisk (e.g. Clear ephemeral
>> disk, Discover BM Node).
>
> Cool, we'll get that done. There is some overlap with HEAT, and we're
> actively discussing with them the best way to collaborate.
>
> -Rob
> --
> Robert Collins <rbtcollins at hp.com>
> Distinguished Technologist
> HP Cloud Services
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



More information about the OpenStack-dev mailing list