[openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload
Mike Spreitzer
mspreitz at us.ibm.com
Tue Oct 29 20:18:37 UTC 2013
John Garbutt <john at johngarbutt.com> wrote on 10/29/2013 07:29:19 AM:
> ...
> It's looking good, but I was thinking about a slightly different
> approach:
>
> * I would like to see instance groups be used to describe all
> scheduler hints (including, please run on cell X, or please run on
> hypervisor Y)
I think Yathi's proposal is open in the sense that any type of policy can
appear (we only have to define the policy types :-). Removing old
features from the existing API is something that would have to be done
over time, if at all.
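To make that concrete, here is a minimal sketch of the kind of payload I
have in mind; the field names are purely illustrative, not the ones in
Yathi's document:

    # Hypothetical instance-group request payload, illustrative only.
    group_request = {
        "instance_group": {
            "name": "web-tier",
            "policies": [
                # any registered policy type could appear here
                {"type": "anti-affinity", "level": "host"},
                {"type": "network-proximity", "level": "switch"},
            ],
        }
    }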
> * passing old scheduler hints to the API will just create a new
> instance group to persist the request
Yes, an implementation re-org is easier than retiring the old API.
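As a sketch of what that re-org might look like (the function and the
group schema below are hypothetical; "same_host"/"different_host" are the
existing hint names), the old hints would just be rewritten into an
anonymous group:

    # Hypothetical translation of legacy scheduler hints into an
    # anonymous instance group, so every request is persisted uniformly.
    def group_from_legacy_hints(hints):
        policies = []
        if "same_host" in hints:
            policies.append({"type": "affinity", "level": "host",
                             "peers": hints["same_host"]})
        if "different_host" in hints:
            policies.append({"type": "anti-affinity", "level": "host",
                             "peers": hints["different_host"]})
        return {"instance_group": {"name": None, "policies": policies}}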
> * ensure live-migrate/migrate never lets you violate the rules in the
> user hints, at least don't allow it to happen by accident
Right, that's why we are persisting the policy information.
> * I was expecting to see hard and soft constraints/hints, like: try
> keep in same switch, but make sure on separate servers
Good point; I forgot to mention that in my earlier reviews of the model!
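Concretely, that could be just one more attribute on each policy (again,
illustrative names only):

    # Illustrative only: each policy carries a hard/soft flag.
    policies = [
        {"type": "anti-affinity", "level": "host", "hard": True},
        {"type": "network-proximity", "level": "switch", "hard": False},
    ]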
> * Would be nice to have admin-defined global options, like: "ensure
> tenant does not have two servers on the same hypervisor" or soft
That's the second time I have seen that idea in a week; there might be
something to it.
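If we pursued it, I imagine something like an operator-level default that
gets merged into every tenant's effective policies; to be clear, nothing
like this exists today, and the sketch below is pure speculation:

    # Purely hypothetical operator-defined global policy, implicitly
    # merged into each tenant's effective group policies.
    GLOBAL_POLICIES = [
        {"type": "anti-affinity", "level": "host",
         "scope": "tenant", "hard": False},  # soft, per John's "or soft"
    ]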
> * I expected to see the existing boot server command simply have the
> addition of a reference to a group, keeping the existing methods of
> specifying multiple instances
That was my expectation too, for how a 2-stage API would work. (A 1-stage
API would not have the client making distinct calls to create the
instances.)
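In other words, something like this (request body illustrative; only the
group reference is new):

    # Hypothetical 2-stage flow: create the group first, then boot
    # servers that reference it; the boot call is otherwise unchanged.
    boot_request = {
        "server": {
            "name": "web-1",
            "imageRef": "<image-uuid>",        # as today
            "flavorRef": "<flavor-id>",        # as today
            "instance_group": "<group-uuid>",  # id from the group create
        }
    }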
> * I agree you can't change a group's spec once you have started some
> VMs in that group, but you could then simply launch more VMs keeping
> to the same policy
Not if a joint decision was already made based on the totality of the
group.
> ...
>
> * augment the server details (and group?) with more location
> information saying where the scheduler actually put things, obfuscated
> on a per-tenant basis. So imagine nova, cinder, neutron exposing ordered
> (arbitrarily tagged) location metadata like nova: (("host_id", "foo"),
> ("switch_group_id", "bar"), ("power_group", "bas"))
+1
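Restating John's example as a response fragment (the values being opaque,
per-tenant-obfuscated identifiers, ordered from most to least specific):

    # Sketch of location metadata added to server details, following
    # John's example; field names and ordering are illustrative.
    server_location = [
        ("host_id", "foo"),
        ("switch_group_id", "bar"),
        ("power_group", "bas"),
    ]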
> * the above should help us define the "scope" of a constraint relative
> to either a nova, cinder or neutron resource.
I am lost. What "above", and what scope-definition problem?
> * Consider a constraint that includes constraints about groups, like
> must be separate to group X, in the scope of the switch, or something
> like that
I think Yathi's proposal, with the policy types I suggested, already does
a lot of stuff like that. But I do not know what you mean by "in the
scope of the switch". I think you mean a location constraint, but I am
not sure which switch you have in mind. I would approach this perhaps a
little more abstractly, as a collocation constraint between two resources
that are known to and meaningful to the client (yes, we are starting with
Nova only in Icehouse, and hope to go holistic later).
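For example (names purely illustrative, not from any current draft), I
would state the relationship directly between two client-visible
resources at a named location level, rather than naming a particular
switch:

    # Hypothetical shape of such a constraint; nothing here is the
    # actual proposed schema.
    constraint = {
        "type": "affinity",              # or "anti-affinity"
        "level": "switch",               # location level it applies at
        "resources": ["server-A", "server-B"],
        "hard": True,                    # per the hard/soft idea above
    }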
> * Need more thought on constraints between volumes, servers and
> networks, I don't think edges are the right way to state that, I think
> it would be better as a cross group constraint, where the scope of the
> constraint is related to neutron.
I need more explanation or concrete examples to understand what problem(s)
you are thinking of. We are explicitly limiting ourselves to Nova at
first; other services will be added later.
Thanks,
Mike