[openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

Stein, Manuel (Manuel) manuel.stein at alcatel-lucent.com
Wed Sep 18 13:35:03 UTC 2013


Mike,

Interesting document.

What would be your approach to regions/zones/ensembles? Does "holistic" mean scheduling with respect to I-specific constraints across _all_ hosts?

From your naming and description I understand, on the one hand, that the infrastructure orchestrator would not do any of the colocation-style constraint evaluation. On the other hand, the holistic scheduler would leave some freedom in host selection to the infrastructure scheduler, since the latter tries to align the real state with the target state by tracking the observed state. Do you split placement across the two levels, e.g. the holistic scheduler decides on the zone and the infrastructure level decides on the host (see the sketch below)?
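
To make the question concrete, here is a minimal, hypothetical sketch of what I mean by split placement; all names (holistic_place, infrastructure_place, the dict shapes) are mine, not from your document:

    # Hypothetical two-level placement: the holistic level pins each
    # resource to a zone (honouring colocation groups), the
    # infrastructure level later picks a concrete host in that zone.

    def holistic_place(resources, zones):
        target, group_zone = {}, {}      # group_zone: colocation group -> zone
        for res in resources:
            group = res.get('colocate')
            if group in group_zone:
                zone = group_zone[group]     # keep the group in one zone
            else:
                zone = min(zones, key=lambda z: z['load'])  # placeholder heuristic
                if group:
                    group_zone[group] = zone
            target[res['id']] = {'zone': zone['name'], 'host': None}
        return target

    def infrastructure_place(res_id, target, hosts):
        # Only per-host capacity matters at this level; no
        # cross-resource constraints are evaluated here.
        zone = target[res_id]['zone']
        candidates = [h for h in hosts if h['zone'] == zone]
        return max(candidates, key=lambda h: h['free_ram'])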

It seems that both the holistic scheduler and the infrastructure orchestrator use the observed state, but wouldn't they consume different (i.e. non-overlapping) pieces of information? What information is shared, i.e. used by both the scheduler and the orchestrator?

As you mention Boris' take on scheduling efficiency: his approach is direct notification, circumventing the DB. Effectively, this would also affect the synchronized state among scheduler instances. What's your take on this?

My humble understanding is that your holistic scheduling design (zoom in https://docs.google.com/drawings/d/1o2AcxO-qe2o1CE_g60v769hx9VNUvkUSaHBQITRl_8E/edit?pli=1) somewhat resembles the current nova DB approach, with one central observed state (currently kept in the nova DB) and a synchronized/synthesized effective state in each scheduler instance (like the compute_node_get_all() call). However, why the separation between the holistic scheduler and the infrastructure orchestrator? Once the scheduling decision has been taken based on the effective state, the result could be handed to the next level -- why the orchestration? Multiple regions/service endpoints?
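
For reference, this is roughly how I picture the current flow (simplified and paraphrased from memory; compute_node_get_all() exists, the surrounding code is my sketch):

    # Compute nodes periodically persist their observed state to the
    # central DB; each scheduler instance then synthesizes an in-memory
    # "effective state" from one bulk read per scheduling pass.

    def build_effective_state(db):
        effective = {}
        for node in db.compute_node_get_all():      # bulk read, as in nova
            effective[node['hypervisor_hostname']] = {
                'free_ram_mb':  node['free_ram_mb'],
                'free_disk_gb': node['free_disk_gb'],
                'vcpus_free':   node['vcpus'] - node['vcpus_used'],
            }
        return effective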

In case the holistic scheduler's "target state", decided on the effective state, turns out to be a target that recurring infrastructure orchestration cannot achieve: when would you requeue the I-CFN and re-evaluate the holistic scheduler's decision against an updated effective state? And when do you decide that the target state cannot be met at all with the holistic scheduler's decisions? (A sketch of the loop I have in mind follows.)
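
Purely to illustrate the question, a hypothetical convergence loop; every name and constant here is made up:

    # Orchestrate toward the target state; after too many failed
    # attempts, requeue the I-CFN so the holistic scheduler re-decides
    # on a fresh effective state. At some point we must give up.

    MAX_ATTEMPTS = 3     # per holistic decision
    REDECISIONS  = 2     # how often to requeue before rejecting

    def converge(i_cfn, holistic, orchestrator, observe):
        for _ in range(REDECISIONS + 1):
            target = holistic.decide(i_cfn, observe())   # current effective state
            for _ in range(MAX_ATTEMPTS):
                if orchestrator.apply(target):           # align real with target
                    return target
        raise RuntimeError('target state cannot be met; reject the I-CFN?')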

I would somehow expect a "first come, first served" policy from a provider. Is there a point of serialization for I-CFN deployments through a single holistic scheduler instance, or do you plan to run multiple instances of it? When parallel holistic schedulers pass decisions to parallel orchestrated deployments, the pursuit of a complex application topology's/pattern's/template's target state may be repeatedly interrupted by the decisions/pursuits of smaller applications coming in, delaying the complex deployment. Where would you prevent that? (One possible serialization point is sketched below.)
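
Again hypothetical: one way to get FCFS would be to serialize only the decision step through a single queue while deployments still run in parallel:

    # I-CFNs enter a single queue in arrival order; the loop below takes
    # decisions one at a time (FCFS), then hands each decided target
    # state to a pool that orchestrates deployments concurrently.

    import queue
    from concurrent.futures import ThreadPoolExecutor

    decision_queue = queue.Queue()

    def decision_loop(holistic, observe, orchestrate):
        pool = ThreadPoolExecutor(max_workers=8)
        while True:
            i_cfn = decision_queue.get()                 # FCFS order
            target = holistic.decide(i_cfn, observe())   # serialized decision
            pool.submit(orchestrate, target)             # parallel deployment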

Best, Manuel

PS: Though I'm neither a developer nor a sub-group or board member yet, I very much welcome the idea of the deployment phases (S-CFN, I-CFN, CFN) and the referencing levels, as we took exactly that approach in an EU research project (IRMOS), applying ontologies and interlinking RDF entities. However, we made heavy use of asynchronous chained transactions to request/reserve/book resources, which is heavy on the transaction-state side and doesn't fit RESTful request/response and eventual consistency (see the sketch below). The key observation in your suggestion, IMHO, is the call for a somewhat clearer separation of software policy, infrastructure policy, and the actually demanded virtual infrastructure.
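
To show what I mean by the transaction-state burden, a heavily simplified, from-memory sketch of such a reserve/book chain (names and timeouts are illustrative only):

    # Every request leaves server-side reservation state behind until
    # it is either booked or expires -- the part that clashes with a
    # stateless req/res model.

    import time, uuid

    reservations = {}    # txn_id -> (resource, expiry): the state burden

    def request_reserve(resource):
        txn = str(uuid.uuid4())
        reservations[txn] = (resource, time.time() + 30)   # hold for 30 s
        return txn                                         # caller books later

    def book(txn):
        resource, expiry = reservations.pop(txn)
        if time.time() > expiry:
            raise TimeoutError('reservation expired; restart the chain')
        return resource                                    # committed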

________________________________
From: Mike Spreitzer [mailto:mspreitz at us.ibm.com]
Sent: Tuesday, 17 September 2013 07:00
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

I have written a brief document, with pictures.  See https://docs.google.com/document/d/1hQQGHId-z1A5LOipnBXFhsU3VAMQdSe-UXvL4VPY4ps

Regards,
Mike

