[openstack-dev] [Ironic] Communication between Nova and Ironic

Oleg Gelbukh ogelbukh at mirantis.com
Sun Dec 29 19:33:37 UTC 2013


This discussion is very interesting indeed :)

The current approach is that auto-scaling decisions are made by the Heat
service. Heat templates have a dedicated mechanism to trigger auto-scaling
of resources when certain conditions are met.
Combined with Ironic, this has powerful potential for the
OpenStack-on-OpenStack use case you're describing.
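For reference, here is a minimal sketch of that trigger mechanism with
python-heatclient. The endpoint, token, image, and flavor names are
placeholders, and the resource type names follow the newer HOT examples
(older releases ship AWS-compatible equivalents), so treat this as a
sketch rather than a tested template:

    from heatclient.v1.client import Client

    # A scaling group of servers, a policy that adds one server, and a
    # Ceilometer alarm that fires the policy when average CPU
    # utilization stays above 80% for one minute.
    TEMPLATE = '''
    heat_template_version: 2013-05-23
    resources:
      group:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 1
          max_size: 5
          resource:
            type: OS::Nova::Server
            properties: {image: my-image, flavor: baremetal}
      scale_up:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: {get_resource: group}
          scaling_adjustment: 1
      cpu_high:
        type: OS::Ceilometer::Alarm
        properties:
          meter_name: cpu_util
          statistic: avg
          threshold: 80
          period: 60
          evaluation_periods: 1
          comparison_operator: gt
          alarm_actions: [{get_attr: [scale_up, alarm_url]}]
    '''

    heat = Client(endpoint='http://HEAT_API:8004/v1/TENANT_ID',  # placeholder
                  token='AUTH_TOKEN')                            # placeholder
    heat.stacks.create(stack_name='autoscale-demo', template=TEMPLATE)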

Basically, Heat holds all the orchestration functions in OpenStack. I see
it as the natural place for other interesting things like auto-migration
of workloads and so on.

Best regards,
Oleg Gelbukh

On Sun, Dec 29, 2013 at 8:03 AM, LeslieWang <wqyuwss at hotmail.com> wrote:

> Hi Clint,
> The current Ironic call is for adding/deleting a baremetal server, not
> for auto-scaling. As we discussed in another thread, what I'm thinking
> about is auto-scaling baremetal servers. In my mind, the logic can be
> (sketched below):
>   1. The Nova scheduler decides to scale up by one baremetal server.
>   2. The Nova scheduler notifies Ironic (or another API?) to power up
> the server.
>   3. If Ironic (or another service?) returns success, the Nova
> scheduler can call Ironic to add the baremetal server into the cluster.
> Of course, this is not the only way to auto-scale. As you pointed out
> in another thread, auto-scaling can also be triggered from the
> undercloud or another monitoring service. Just trying to spark an
> interesting discussion. :-)
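> A rough sketch of steps 2-3 with python-ironicclient (the node UUID,
> endpoint, and token are placeholders, and the "add into the cluster"
> step is only hinted at, since that API is exactly what we are
> discussing):
>
>     from ironicclient import client as ironic_client
>
>     ironic = ironic_client.get_client(
>         1,                                     # API version
>         os_auth_token='AUTH_TOKEN',            # placeholder
>         ironic_url='http://IRONIC_API:6385/')  # placeholder
>
>     # Step 2: ask Ironic to power up the chosen node via IPMI.
>     node_uuid = 'NODE_UUID'  # placeholder
>     ironic.node.set_power_state(node_uuid, 'on')
>
>     # Step 3: if the power-on succeeded, the node could be marked as
>     # available for scheduling; the exact call is the open question.
>     node = ironic.node.get(node_uuid)
>     if node.power_state == 'power on':
>         print('node %s is up and ready to join the cluster' % node_uuid)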
> Best Regards
> Leslie
> > From: clint at fewbar.com
> > To: openstack-dev at lists.openstack.org
> > Date: Sat, 28 Dec 2013 13:40:08 -0800
> > Subject: Re: [openstack-dev] [Ironic] Communication between Nova and Ironic
> >
> > Excerpts from LeslieWang's message of 2013-12-24 03:01:51 -0800:
> > > Hi Oleg,
> > >
> > > Thanks for your prompt reply and detailed explanation. Merry
> > > Christmas, and I wish you a happy new year!
> > >
> > > At the same time, I think we can discuss more about Ironic as a
> > > backend driver for Nova. I'm new to Ironic. As I understand it,
> > > the purpose of a baremetal backend driver is to solve the problem
> > > that some appliance systems cannot be virtualized, while the
> > > operator still wants the same cloud management system to manage
> > > them. With Ironic, the operator can achieve that goal and use one
> > > OpenStack to manage these systems like VMs: create, delete, deploy
> > > images, etc. This is one typical use case.
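> > > To illustrate "manage like VMs", booting a baremetal machine can
> > > look exactly like booting a VM. A minimal sketch with
> > > python-novaclient (credentials, image, and flavor names are made
> > > up):
> > >
> > >     from novaclient.v1_1 import client as nova_client
> > >
> > >     nova = nova_client.Client('admin', 'PASSWORD', 'admin',
> > >                               'http://KEYSTONE:5000/v2.0')
> > >     # The same servers.create() call as for a VM; the 'baremetal'
> > >     # flavor maps the request onto an Ironic-managed machine
> > >     # instead of a hypervisor.
> > >     nova.servers.create(
> > >         name='appliance-01',
> > >         image=nova.images.find(name='appliance-image'),
> > >         flavor=nova.flavors.find(name='baremetal'))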
> > >
> > > In addition, I'm actually thinking of another interesting use
> > > case. Currently, OpenStack requires operators to pre-install all
> > > servers. TripleO tries to solve this problem by bootstrapping
> > > OpenStack using OpenStack. However, what is missing there is
> > > powering on VMs/switches/storage dynamically, only when needed.
> > > Imagine a lab that initially has only one all-in-one OpenStack
> > > controller. The whole workflow can be (sketched after the list):
> > > 1. A user requests one VM or baremetal server through the portal.
> > > 2. Horizon sends the request to nova-scheduler.
> > > 3. Nova-scheduler finds no available server, so it invokes the
> > > Ironic API to power one on through IPMI and install either a
> > > hypervisor or the appliance image directly.
> > > 4. If a VM needs to be created, nova-scheduler finds a compute
> > > node and sends it a message for further processing.
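> > > For step 3, the power-on that Ironic performs ultimately boils
> > > down to an IPMI command like the following (a sketch of the
> > > equivalent shell-out; the BMC address and credentials are
> > > placeholders):
> > >
> > >     import subprocess
> > >
> > >     # Chassis power-on over the out-of-band management network;
> > >     # this is what Ironic's ipmitool driver effectively runs.
> > >     subprocess.check_call(['ipmitool', '-I', 'lanplus',
> > >                            '-H', 'BMC_ADDRESS',  # placeholder
> > >                            '-U', 'admin', '-P', 'SECRET',
> > >                            'chassis', 'power', 'on'])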
> > >
> > > Based on this use case, I'm wondering whether it makes sense to
> > > embed this Ironic invocation logic in nova-scheduler. The
> > > alternative is that, as the overall orchestration manager, the
> > > TripleO project has a TripleO scheduler that first intercepts the
> > > message, invokes the Ironic API, and then calls the Heat API,
> > > which in turn calls the Nova, Neutron, and storage APIs. In that
> > > case, TripleO only powers on the baremetal servers that run VMs,
> > > while Nova is responsible for powering on the baremetal servers
> > > that run appliance systems. The latter sounds like a good
> > > solution, but the former also works. Can you please comment on
> > > it? Thanks!
> > >
> >
> > I think this basically already works the way you desire. The
> > scheduler _does_ decide to call Ironic; it just does so through
> > nova-compute RPC calls. That is important, as it allows the
> > scheduler to hand off the entire workflow of provisioning a machine
> > to nova-compute in exactly the same way as for regular cloud
> > workloads.
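> > To make that hand-off concrete, here is a toy sketch of the shape
> > of the call chain (invented names, not the actual Nova code):
> >
> >     # The scheduler picks a host and hands the whole provisioning
> >     # workflow to that host's compute service; the compute
> >     # service's configured virt driver may be baremetal instead of
> >     # libvirt.
> >     class BareMetalDriver(object):
> >         def spawn(self, instance):
> >             # The real driver powers the node on via IPMI and
> >             # deploys the image; here we only log the action.
> >             print('deploying %s on a baremetal node' % instance)
> >
> >     class ComputeService(object):
> >         def __init__(self, driver):
> >             self.driver = driver
> >
> >         def run_instance(self, instance):  # invoked via RPC in Nova
> >             self.driver.spawn(instance)
> >
> >     class Scheduler(object):
> >         def schedule(self, hosts, instance):
> >             host = hosts[0]              # trivial host selection
> >             host.run_instance(instance)  # hand off the workflow
> >
> >     Scheduler().schedule([ComputeService(BareMetalDriver())], 'demo')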
> >
