[openstack-dev] [Nova] [Ironic] [TripleO] scheduling flow with Ironic?
Alex Glikson
GLIKSON at il.ibm.com
Thu Nov 14 06:11:06 UTC 2013
Thanks, I understand the Nova scheduler part. One of the gaps there is
related to the blueprint we are working on [1]. I was also wondering
about the role of Ironic, and the exact interaction between the user,
Nova and Ironic.
In particular, I initially thought that Ironic was going to have its own
scheduler, resolving some of the issues and complexity within Nova (which
could then focus on VM management, maybe even getting rid of hosts versus
nodes, etc.). But it seems that Ironic aims to stay at the level of the
virt driver API. It is a bit unclear to me what the desired architecture
going forward is -- e.g., if the idea is to standardize virt driver APIs
but keep the scheduling centralized, maybe we should split the rest of
the virt drivers into separate projects as well, and extend Nova to
schedule beyond just compute (if it is already doing so for virt +
bare metal). Alternatively, each of them could have its own scheduler
(like the approach we took when splitting out Cinder, for example), and
then something on top (e.g., Heat) would need to do the cross-project
logic. Taking different architectural approaches in different cases
confuses me a bit.
Thanks,
Alex
[1] https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
From: Devananda van der Veen <devananda.vdv at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>,
Date: 13/11/2013 10:29 PM
Subject: Re: [openstack-dev] [Nova] [Ironic] [TripleO] scheduling
flow with Ironic?
On Wed, Nov 13, 2013 at 8:02 AM, Alex Glikson <GLIKSON at il.ibm.com> wrote:
Hi,
Is there documentation somewhere on the scheduling flow with Ironic?
The reason I am asking is that we would like to get virtualized and
bare-metal workloads running in the same cloud (ideally with the ability
to repurpose physical machines between bare-metal and virtualized
workloads), and would like to better understand where the gaps are (and
potentially help bridge them).
Hi Alex,
Baremetal uses an alternative host manager,
nova.scheduler.baremetal_host_manager.BaremetalHostManager, so giving
that a read may be helpful. It searches the list of available baremetal
nodes for one that matches the CPU, RAM, and disk capacity of the
requested flavor, compares the node's extra_specs:cpu_arch to that of
the requested image, and then consumes 100% of that node's available
resources. Otherwise, I believe the scheduling flow is basically the
same: an HTTP request goes to n-api, RPC passes it to n-scheduler, which
selects a node and calls n-conductor & n-cpu to do the work of
spawn()ing it.
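To make that behaviour concrete, here is a rough Python sketch of the
match-then-consume-everything logic. The names (Node, node_matches,
claim) are illustrative, not the actual BaremetalHostManager code:

    class Node(object):
        """Illustrative stand-in for a baremetal node's free resources."""
        def __init__(self, vcpus, ram_mb, disk_gb, cpu_arch):
            self.free_vcpus = vcpus
            self.free_ram_mb = ram_mb
            self.free_disk_gb = disk_gb
            self.cpu_arch = cpu_arch  # stored as extra_specs:cpu_arch

    def node_matches(node, flavor, image_cpu_arch):
        """A node qualifies only if the whole flavor fits and the
        node's cpu_arch matches the requested image's arch."""
        return (node.free_vcpus >= flavor['vcpus']
                and node.free_ram_mb >= flavor['memory_mb']
                and node.free_disk_gb >= flavor['root_gb']
                and node.cpu_arch == image_cpu_arch)

    def claim(node):
        """Unlike the virt case, a baremetal node is never shared:
        a successful claim consumes 100% of its resources."""
        node.free_vcpus = node.free_ram_mb = node.free_disk_gb = 0

    # Pick the first node that fits a 4-core/8GB/80GB x86_64 request.
    nodes = [Node(2, 4096, 40, 'x86_64'), Node(8, 16384, 160, 'x86_64')]
    flavor = {'vcpus': 4, 'memory_mb': 8192, 'root_gb': 80}
    winner = next(n for n in nodes if node_matches(n, flavor, 'x86_64'))
    claim(winner)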
As for the gaps in running both baremetal and virtual -- I have been
told by several folks that it's possible to run both baremetal and
virtual hypervisors in the same cloud by using separate regions, or
separate host aggregates, for the simple reason that the two require
distinct nova-scheduler processes. A single scheduler, today, can't
serve both. I haven't seen any docs on how to do this, though.
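For what it's worth, the single-scheduler limitation follows from the
host manager being one global nova.conf option. A second nova-scheduler
(and nova-compute) dedicated to baremetal would be configured roughly
like this -- paraphrased from memory of the Havana-era baremetal setup
docs, so please double-check the option names against your release:

    # nova.conf for the services dedicated to baremetal
    [DEFAULT]
    scheduler_host_manager = nova.scheduler.baremetal_host_manager.BaremetalHostManager
    compute_driver = nova.virt.baremetal.driver.BareMetalDriver
    # baremetal nodes are never overcommitted or shared
    ram_allocation_ratio = 1.0
    reserved_host_memory_mb = 0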
As for moving a workload between them, the TripleO team has discussed this
and, afaik, decided to hold off working on it for now. It would be better
for them to fill in the details here -- my memory may be wrong, or things
may have changed.
Cheers,
Devananda