[openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

Khanh-Toan Tran khanh-toan.tran at cloudwatt.com
Wed Nov 13 12:58:37 UTC 2013

Hi all, 

Having no info from the HK summit on this topic, I would like to re-open it in the mailing list. 

My concern with the API is that it does not refer to a particular (scheduling) component. In my 
understanding, it comes along with the SolverScheduler proposal. However, we do not know where 
this component would be located. The first implementation suggests that it would live within Nova, 
but, as the discussion goes on, I believe that it will also have interactions with Cinder & Neutron. 
Thus its position, and its (possible) relation with Heat and the other *-schedulers, is unclear. 

The reason I want to point this out is that normally we would design a global architecture first, and 
then, depending on the functionality of a component and its interactions with other components, we 
would define its API. This API, however, does not refer to any particular component. In my 
understanding, it is designed so that users can express their request as a whole, as opposed to the 
current OpenStack model where users have to make requests to Nova, Cinder and Neutron independently. 
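For illustration (this is purely a hypothetical sketch, not a proposed schema; all field names are invented), such a "request as a whole" might bundle the per-service pieces that today go to Nova, Cinder and Neutron separately into a single instance-group payload:

```python
# Hypothetical combined request payload -- field names are invented for
# illustration; the actual Instance Group API schema is still under discussion.
instance_group_request = {
    "name": "web-tier",
    "policies": ["anti-affinity"],  # placement constraint across all members
    "members": [
        {
            "server": {"flavor": "m1.small", "image": "cirros"},  # Nova part
            "volume": {"size_gb": 10, "type": "ssd"},             # Cinder part
            "port": {"network": "app-net"},                       # Neutron part
        }
        for _ in range(3)
    ],
}

# Whichever component ends up owning this API could then split the payload
# per service while driving the placement decision jointly.
print(len(instance_group_request["members"]))  # 3 members share one constraint
```

The point of the sketch is only that the cross-service constraint ("anti-affinity") lives at the group level, not inside any one service's request.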

With that purpose in mind, I wonder if the component that should receive this request is Heat, 
because users will not send two requests for the same application. In that case we should extend the 
Heat API to accept this kind of request, rather than define a completely new one. If, however, this 
API is not user-facing (i.e. it is designed for some meta-scheduler which receives requests 
generated by another component, such as Heat), then the API should be designed in a way that 
reflects this aspect. 

Anyway, before we go further, I would like to see a global architecture that would benefit 
from such a functionality, and which component would receive and make use of this request. 


----- Original Message -----

From: "Mike Spreitzer" <mspreitz at us.ibm.com> 
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org> 
Sent: Wednesday, October 30, 2013 6:34:51 PM 
Subject: Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload 

Alex Glikson <GLIKSON at il.ibm.com> wrote on 10/30/2013 02:26:08 AM: 

> Mike Spreitzer <mspreitz at us.ibm.com> wrote on 30/10/2013 06:11:04 AM: 
> > Date: 30/10/2013 06:12 AM 
> > 
> > Alex also wrote: 
> > ``I wonder whether it is possible to find an approach that takes 
> > into account cross-resource placement considerations (VM-to-VM 
> > communicating over the application network, or VM-to-volume 
> > communicating over storage network), but does not require delivering 
> > all the intimate details of the entire environment to a single place 
> > -- which probably can not be either of Nova/Cinder/Neutron/etc.. but 
> > can we still use the individual schedulers in each of them with 
> > partial view of the environment to drive a placement decision which 
> > is consistently better than random?'' 
> > 
> > I think you could create a cross-scheduler protocol that would 
> > accomplish joint placement decision making --- but would not want 
> > to. It would involve a lot of communication, and the subject matter 
> > of that communication would be most of what you need in a 
> > centralized placement solver anyway. You do not need "all the 
> > intimate details", just the bits that are essential to making the 
> > placement decision. 
> Amount of communication depends on the protocol, and what exactly 
> needs to be shared.. Maybe there is a range of options here that we 
> can potentially explore, between what exists today (Heat talking to 
> each of the components, retrieving local information about 
> availability zones, flavors and volume types, existing resources, 
> etc, and communicates back with scheduler hints), and having a 
> centralized DB that keeps the entire data model. 
> Also, maybe different points on the continuum between 'share few' 
> and 'share a lot' would be a good match for different kinds of 
> environments and different kinds of workload mix (for example, as 
> you pointed out, in an environment with flat network and centralized 
> storage, the sharing can be rather minimal). 
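As a concrete illustration of the "share few" end of that continuum (a sketch only; the UUID and image/flavor ids are made up), an orchestrator like Heat can already influence Nova placement without sharing any topology data, simply by attaching `os:scheduler_hints` to a server-create request:

```python
# Minimal sketch of the existing scheduler-hints mechanism: the caller shares
# almost nothing with Nova beyond a per-request constraint. The
# "different_host" hint (honored by Nova's DifferentHostFilter) asks the
# scheduler to avoid hosts already running the listed instances.
existing_instance = "e4b9f3a0-0000-0000-0000-000000000001"  # made-up UUID

create_server_body = {
    "server": {
        "name": "web-2",
        "imageRef": "cirros",      # placeholder image id
        "flavorRef": "m1.small",   # placeholder flavor id
    },
    "os:scheduler_hints": {
        "different_host": [existing_instance],
    },
}

# The orchestrator would POST this body to Nova's /servers endpoint; Nova's
# filter scheduler applies the hint locally, with no global data model shared.
print(sorted(create_server_body))  # ['os:scheduler_hints', 'server']
```

The "share a lot" end of the continuum would instead replicate host, network and storage state into one solver, which is exactly the centralized data model under debate here.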

I'm not going to claim the direction you're heading in is impossible; I am not good at impossibility proofs. But I do wonder about the why of it. This came up in the context of the issues around the fact that orchestration is downstream from joint decision making. Even if that joint decision making is done in a distributed way, orchestration will still be downstream from it. 

