[openstack-dev] [Nova] [Ironic] [TripleO] scheduling flow with Ironic?

Devananda van der Veen devananda.vdv at gmail.com
Wed Nov 13 20:28:18 UTC 2013


On Wed, Nov 13, 2013 at 8:02 AM, Alex Glikson <GLIKSON at il.ibm.com> wrote:

> Hi,
>
> Is there documentation somewhere on the scheduling flow with Ironic?
>
> The reason I am asking is because we would like to get virtualized and
> bare-metal workloads running in the same cloud (ideally with the ability to
> repurpose physical machines between bare-metal workloads and virtualized
> workloads), and would like to better understand where the gaps are (and
> potentially help bridge them).


Hi Alex,

Baremetal uses an alternative host manager,
nova.scheduler.baremetal_host_manager.BaremetalHostManager, so giving that a
read may be helpful. It searches the list of available baremetal nodes for one
that matches the CPU, RAM, and disk capacity of the requested flavor, compares
the node's extra_specs:cpu_arch to that of the requested image, and then
consumes 100% of that node's available resources. Otherwise, I believe the
scheduling flow is basically the same: the HTTP request hits n-api, an RPC
call goes to n-scheduler, which selects a node, and n-conductor & n-cpu then
do the work of spawn()ing the instance.
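
To make that matching concrete, here is a rough, self-contained sketch of the
exact-match-and-consume behaviour (plain Python, not the actual Nova code; the
Node tuple and pick_baremetal_node name are just illustrative):

    from collections import namedtuple

    # Illustrative only -- not the real Nova data structures.
    Node = namedtuple('Node', ['id', 'cpus', 'memory_mb', 'local_gb', 'cpu_arch'])

    def pick_baremetal_node(nodes, flavor, image_cpu_arch):
        """Return the first node that satisfies the flavor and cpu_arch."""
        for node in nodes:
            if (node.cpus >= flavor['vcpus']
                    and node.memory_mb >= flavor['memory_mb']
                    and node.local_gb >= flavor['root_gb']
                    and node.cpu_arch == image_cpu_arch):
                # The baremetal host manager then claims 100% of this node,
                # so no second instance can land on it.
                return node
        return None

    nodes = [Node('node-1', 4, 8192, 100, 'x86_64'),
             Node('node-2', 8, 16384, 200, 'i386')]
    flavor = {'vcpus': 4, 'memory_mb': 8192, 'root_gb': 100}
    print(pick_baremetal_node(nodes, flavor, 'x86_64'))   # -> node-1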

As for the gaps in running both baremetal and virtual workloads -- I have been
told by several folks that it's possible to run both baremetal and virtual
hypervisors in the same cloud by using separate regions or separate host
aggregates, since each can then be served by its own nova-scheduler process. A
single scheduler, today, can't serve both. I haven't seen any docs on how to
do this, though.
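
If it helps, here is a rough python-novaclient sketch of the host-aggregate
half of that setup (the aggregate names, hosts, flavor names, metadata key,
and credentials are all made up for illustration, and the separate
nova-scheduler processes mentioned above are nova.conf work not shown here):

    # Rough sketch only. Assumes the scheduler honours aggregate metadata
    # via something like AggregateInstanceExtraSpecsFilter.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'password', 'admin',
                         'http://keystone.example.com:5000/v2.0')

    # Partition the compute hosts into two aggregates.
    bm = nova.aggregates.create('baremetal-hosts', None)
    kvm = nova.aggregates.create('kvm-hosts', None)
    nova.aggregates.set_metadata(bm.id, {'hypervisor_type': 'baremetal'})
    nova.aggregates.set_metadata(kvm.id, {'hypervisor_type': 'kvm'})
    nova.aggregates.add_host(bm.id, 'bm-compute-01')
    nova.aggregates.add_host(kvm.id, 'kvm-compute-01')

    # Key flavors to the matching aggregate so baremetal flavors only land
    # on baremetal hosts and vice versa.
    nova.flavors.find(name='baremetal.small').set_keys(
        {'hypervisor_type': 'baremetal'})
    nova.flavors.find(name='m1.small').set_keys(
        {'hypervisor_type': 'kvm'})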

As for moving a workload between them, the TripleO team has discussed this
and, afaik, decided to hold off working on it for now. It would be better
for them to fill in the details here -- my memory may be wrong, or things
may have changed.

Cheers,
Devananda

