[openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler
Gil Rapaport
GILR at il.ibm.com
Thu Feb 6 14:33:06 UTC 2014
Mike, exactly: we would like to allow flexibility & complexity at the
Advisor level without it affecting the placement computation.
Advisors are expected to manifest complex behavior as suggested by these
BPs and gather constraints from multiple sources (users and providers).
The idea is indeed to define a protocol that can express placement
requests without exposing the engine to
complex/high-level/rapidly-changing/3rd-party semantics.
I think BPs such as the group API and flexible-evacuation combined with
the power of LP solvers Yathiraj mentioned do push the scheduler towards
being a more generic placement oracle, so the protocol should probably not
be limited to the current "deploy one or more instances of the same kind"
request.
Here's a more detailed description of our thoughts on how such a protocol
might look:
https://wiki.openstack.org/wiki/Nova/PlacementAdvisorAndEngine
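To make this concrete, here's a rough sketch of what an advisor might
hand to the engine; the field names below are just an illustration, not
the protocol as written up on the wiki:

    # Illustrative sketch only: field names are assumptions, not the
    # protocol defined on the wiki page.
    placement_request = {
        "items": [
            # not limited to instances of the same kind
            {"id": "vm-1", "flavor": "m1.large"},
            {"id": "vm-2", "flavor": "m1.small"},
        ],
        "constraints": [
            # gathered by advisors from both users and providers
            {"type": "anti-affinity", "items": ["vm-1", "vm-2"]},
        ],
        "objective": {"type": "minimize", "metric": "hosts_used"},
    }

The engine only ever sees such a declarative problem; the complex,
rapidly-changing semantics stay on the advisor side.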
We've concentrated on the Nova scheduler; it would be interesting to see
whether this protocol aligns with Yathiraj's thoughts on a global
scheduler addressing compute+storage+network.
Feedback is most welcome.
Regards,
Gil
From: Mike Spreitzer <mspreitz at us.ibm.com>
To: "OpenStack Development Mailing List \(not for usage questions\)"
<openstack-dev at lists.openstack.org>,
Date: 02/04/2014 10:10 AM
Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based
Scheduler and Solver Scheduler
> From: Khanh-Toan Tran <khanh-toan.tran at cloudwatt.com>
...
> There is an unexpected line break in the middle of the link, so I post
> it again:
>
> https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bqolOriIQB2Y
The mailing list software keeps inserting that line break. I
reconstructed the URL and looked at the document. As you point out at
the end, the way you attempt to formulate load balancing as a linear
objective does not work. I think load-balancing is a non-linear thing.
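To see why: a linear objective sum_i c_i*u_i over host utilizations u_i
is always optimized at an extreme point of the feasible region, so it
drives the solution toward corner cases rather than toward an even
spread, while a genuine balance measure such as the variance
sum_i (u_i - mean)^2 is quadratic, not linear, in the u_i. (A convex
surrogate like minimizing max_i u_i can be re-expressed with an
auxiliary variable, but it is still not a linear objective in the u_i
themselves.)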
I also doubt that simple load balancing is what cloud providers want; I
think cloud providers want to bunch up load, within limits, for example to
keep some hosts idle so that they can be powered down to save on costs or
left available for future exclusive use.
> From: Gil Rapaport <GILR at il.ibm.com>
...
> As Alex Glikson hinted a couple of weekly meetings ago, our approach
> to this is to think of the driver's work as split between two entities:
> -- A Placement Advisor, that constructs placement problems for
> scheduling requests (filter-scheduler and policy-based-scheduler)
> -- A Placement Engine, that solves placement problems (HostManager
> in get_filtered_hosts() and solver-scheduler with its LP engine).
Yes, I see the virtue in that separation. Let me egg it on a little. What
Alex and KTT want is more structure in the Placement Advisor, where there
is a multiplicity of plugins, each bound to some fraction of the whole
system, and a protocol for combining the advice from the plugins. I would
also like to remind you of another kind of structure: some of the
placement desiderata come from the cloud users, and some from the cloud
provider.
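A minimal sketch of that kind of structure, with every name below
hypothetical:

    # Hypothetical sketch: each advisor plugin is bound to some fraction
    # of the system and contributes advice; each piece of advice is
    # tagged by its origin (cloud user vs. cloud provider) before being
    # combined into one placement problem.
    class AdvisorPlugin(object):
        def advise(self, request):
            raise NotImplementedError

    class UserAntiAffinityAdvisor(AdvisorPlugin):
        def advise(self, request):
            groups = request.get("anti_affinity_groups", [])
            return [("user", {"type": "anti-affinity", "items": group})
                    for group in groups]

    class ProviderConsolidationAdvisor(AdvisorPlugin):
        def advise(self, request):
            # provider-side desideratum: bunch up load, within limits
            return [("provider",
                     {"type": "minimize", "metric": "hosts_used"})]

    def build_placement_problem(request, plugins):
        advice = []
        for plugin in plugins:
            advice.extend(plugin.advise(request))
        return advice  # handed as one problem to the placement engine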
> From: "Yathiraj Udupi (yudupi)" <yudupi at cisco.com>
...
> Like you point out, I do agree on the two entities of the placement
> advisor and the placement engine, but I think there should be a
> third one – the provisioning engine, which should be responsible for
> whatever it takes to finally create the instances, after the
> placement decision has been taken.
I'm not sure what you mean by "whatever it takes to finally create the
instances", but that sounds like what I had assumed everybody meant by
"orchestration" (until I heard that there is no widespread agreement) ---
and I think we need to take a properly open approach to that. I think the
proper API for cross-service whole-pattern scheduling should primarily
focus on conveying the placement problem to the thing that will make the
joint decision. After the joint decision is made comes the time to create
the individual resources. I think we can NOT mandate one particular agent
or language for that. We will have to allow general clients to make calls
on Nova, Cinder, etc. to do the individual resource creations (with some
sort of reference to the decision that was already made). My original
position was that we could use Heat for this, but I think we have gotten
push-back saying it is NOT OK to *require* that. For example, note that
some people do not want to use Heat at all, they prefer to make individual
calls on Nova, Cinder, etc. Of course, we definitely want to support,
among others, the people who *do* use Heat.
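To sketch the call flow I have in mind (everything here is hypothetical;
the solve_placement call and the placement_decision hint do not exist
today):

    # Hypothetical flow, not an existing API: a general client first
    # asks for a joint placement decision, then makes the individual
    # resource-creation calls itself (directly or via Heat), each call
    # carrying a reference to the decision that was already made.
    def provision(scheduler_client, nova, cinder, placement_request):
        decision = scheduler_client.solve_placement(placement_request)
        for item in decision["items"]:
            hints = {"placement_decision": decision["id"]}
            if item["service"] == "nova":
                nova.servers.create(name=item["id"], image=item["image"],
                                    flavor=item["flavor"],
                                    scheduler_hints=hints)
            elif item["service"] == "cinder":
                cinder.volumes.create(size=item["size"],
                                      scheduler_hints=hints)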
> From: "Yathiraj Udupi (yudupi)" <yudupi at cisco.com>
...
> The solver-scheduler is designed to solve for an arbitrary list of
> instances of different flavors. We need some updated APIs in
> the scheduler to be able to pass on such requests. The instance group
> API is an initial effort to specify such groups.
I'll remind the other readers of our draft of such a thing, at
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA
(you will have to re-assemble the URL if the ML software broke it for you)
My take-aways from the Icehouse summit's review of that are as follows.
(1) do NOT put orchestration under the covers (as I said above), allow
general clients to make the calls to create the individual resources.
(2) The community was not convinced that this would scale as needed.
(3) There were some remarks about "too complicated" but I am not clear on
whether the issue(s) were: (a) there was not clear/compelling motivation
for some of the expressiveness offered, (b) there is a simpler way to
accomplish the same things, (c) it's all good but more than we can take on
now, and/or (d) something else I did not get.
Regards,
Mike