[openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

Alex Glikson GLIKSON at il.ibm.com
Tue Oct 29 22:05:17 UTC 2013


If we envision the main benefits only after (parts of) this logic moves 
outside of Nova (and starts addressing other resources) -- would it still 
be worth maintaining on the order of 5K LOC in Nova to support this 
feature? Why not go for the 'ultimate' solution in the first place, 
keeping in Nova only the mandatory enablement (TBD)?
Alternatively, if we think there is value in having this just in Nova 
-- it would be good to understand the exact scenarios that do not require 
awareness of other resources (and see whether they are important enough to 
justify maintaining those 5K LOC), and how exactly this can gradually evolve 
into the 'ultimate' solution.
Or am I missing something?

Alex




From:   "Yathiraj Udupi (yudupi)" <yudupi at cisco.com>
To:     "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev at lists.openstack.org>, 
Date:   29/10/2013 11:46 PM
Subject:        Re: [openstack-dev] [nova][scheduler] Instance Group Model 
and APIs - Updated document with an example request payload



Thanks Alex, Mike, Andrew, Russell for your comments.  This ongoing API 
discussion started in our scheduler meetings as a first step toward the 
smarter resource placement ideas - see this doc for reference - 
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit 
 This roadmap calls for unified resource placement decisions to be taken 
covering resources across services, starting from a complete topology 
request with all the necessary nodes/instances/resources, their 
connections, and the policies. 

However, we agreed to first address defining the required APIs, and to 
start the effort to make this happen within Nova, using VM instance 
groups with policies. 
Hence this proposal for instance groups. 
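
For illustration, here is a minimal sketch of what such a group request 
payload could look like, written as a Python dict; the field names are 
placeholders for discussion, not a finalized schema:

    # A hypothetical instance-group request payload, sketched as a Python dict.
    # Field names ("name", "policies", "members", ...) are illustrative only;
    # the actual schema is whatever the linked document settles on.
    instance_group_request = {
        "instance_group": {
            "name": "web-tier",
            "policies": [
                # spread the members across racks
                {"type": "anti-affinity", "scope": "rack"},
            ],
            "members": [
                {"name": "web-1", "flavor": "m1.small", "image": "cirros"},
                {"name": "web-2", "flavor": "m1.small", "image": "cirros"},
            ],
        }
    }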

The entire group needs to be placed as a whole; at the very least, the first 
step is to find ideal placement choices for the entire group.  Once the 
placement has been identified (using a smart resource placement engine 
that solves for the entire group), we then focus on ways to 
schedule the members as a whole.  This is not part of the API discussion, 
but it is important for the smart resource placement ideas.  It definitely 
involves concepts such as reservation, etc.  Heat or the Heat APIs could be 
one choice for the final orchestration, but I am not commenting on that 
here.
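
To make "placing the group as a whole" concrete, here is a toy sketch (not 
the proposed engine) that searches for a host assignment satisfying a rack 
anti-affinity policy for all members at once, rather than scheduling one 
instance at a time:

    from itertools import permutations

    def place_group(members, hosts, host_rack):
        """Toy all-at-once placement: return a member -> host map in which
        no two members share a rack, or None if no such assignment exists.
        A real engine would solve capacity, affinity and network constraints
        for the whole group simultaneously."""
        for assignment in permutations(hosts, len(members)):
            racks = [host_rack[h] for h in assignment]
            if len(set(racks)) == len(racks):   # rack anti-affinity holds
                return dict(zip(members, assignment))
        return None

    # Example: two members, three hosts spread over two racks.
    print(place_group(["web-1", "web-2"],
                      ["host-a", "host-b", "host-c"],
                      {"host-a": "rack-1", "host-b": "rack-1",
                       "host-c": "rack-2"}))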

The API effort here is an attempt to provide clean interfaces now to 
represent this instance group, to save it, and also to define APIs to 
create it.  The actual implementation will have to rely on one or 
more services to 1. make the resource placement decisions, and 2. then 
actually provision the instances and orchestrate them in the right order, etc. 
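
As a rough sketch of what "represent, save and create" could mean in 
practice, here is a hypothetical two-call flow using python-requests; the 
endpoint URLs, token handling and the save/create split are assumptions for 
discussion, not a settled design:

    import requests

    BASE = "http://nova-api:8774/v2/<tenant-id>"        # hypothetical endpoint
    HEADERS = {"X-Auth-Token": "<token>"}
    payload = {"instance_group": {                      # as sketched earlier
        "name": "web-tier",
        "policies": [{"type": "anti-affinity", "scope": "rack"}],
        "members": [{"name": "web-1"}, {"name": "web-2"}]}}

    # 1. Register (save) the group definition; nothing is provisioned yet.
    resp = requests.post(BASE + "/os-instance-groups",
                         json=payload, headers=HEADERS)
    group_id = resp.json()["instance_group"]["id"]

    # 2. Ask for the group to be created; behind this call one or more
    #    services would compute the placement and then provision and
    #    orchestrate the members.
    requests.post(BASE + "/os-instance-groups/%s/action" % group_id,
                  json={"create": {}}, headers=HEADERS)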

The placement decision itself can happen in a module that can be a 
separate service, reusable by different services, and it also 
needs a global view of all the resources.  (Again, all of this is 
part of the scope of the smart resource placement topic.) 
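
As a sketch of the kind of boundary being described, such a module might 
expose something as small as the interface below; the names are purely 
illustrative:

    import abc

    class PlacementEngine(abc.ABC):
        """Illustrative boundary for a reusable placement service: callers
        hand over the whole group plus a global view of resources, and get
        back a member -> location mapping, without any provisioning."""

        @abc.abstractmethod
        def place(self, group, resource_view):
            """Return {member_name: host} satisfying the group's policies."""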

Thanks,
Yathi. 


On 10/29/13, 2:14 PM, "Andrew Laski" <andrew.laski at rackspace.com> wrote:

On 10/29/13 at 04:05pm, Mike Spreitzer wrote:
Alex Glikson <GLIKSON at il.ibm.com> wrote on 10/29/2013 03:37:41 AM:

1. I assume that the motivation for rack-level anti-affinity is to
survive a rack failure. Is this indeed the case?
This is a very interesting and important scenario, but I am curious
about your assumptions regarding all the other OpenStack resources
and services in this respect.

Remember, we are just starting on the roadmap: Nova in Icehouse, holistic
later.
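
(As an aside, the failure-survival motivation can be stated very directly; 
a toy check, assuming the checker has a host-to-rack mapping available:

    def survives_single_rack_failure(placement, host_rack):
        """True if losing all hosts in any one rack still leaves at least
        one group member running.  'placement' maps member -> host and
        'host_rack' maps host -> rack."""
        racks_used = {host_rack[host] for host in placement.values()}
        return len(racks_used) >= 2

    print(survives_single_rack_failure(
        {"web-1": "host-a", "web-2": "host-c"},
        {"host-a": "rack-1", "host-c": "rack-2"}))   # True: spans two racks
)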

2. What exactly do you mean by "network reachability" between the
two groups? Remember that we are in Nova (at least for now), so we
don't have much visibility into the topology of the physical or
virtual networks. Do you have some concrete thoughts on how such a
policy can be enforced, in the presence of a potentially complex
environment managed by Neutron?

I am aiming for the holistic future, and Yathi copied that from an example
I drew with the holistic future in mind.  While we are only addressing
Nova, I think a network reachability policy is inappropriate.

3. The JSON somewhat reminds me of the interface of Heat, and I would
assume that certain capabilities required to implement it would be
similar too. What is the proposed approach to 'harmonize' between the
two, in environments that include Heat? What would the end-to-end flow
be? For example, who would do the orchestration of the individual
provisioning steps? Would the "create" operation delegate back to Heat
for that? Also, how would other relationships managed by Heat (e.g.,
links to storage and network) be incorporated in such an end-to-end
scenario?

You raised a few interesting issues.

1. Heat already has a way to specify resources; I do not see why we should
invent another.

2. Should Nova call Heat to do the orchestration?  I would like to see an
example where ordering is an issue.  IMHO, since OpenStack already has a
solution for creating resources in the right order, I do not see why we
should invent another.

Having Nova call into Heat is backwards IMO.  If there are specific 
pieces of information that Nova can expose, or API capabilities to help 
with orchestration/placement that Heat or some other service would like 
to use, then let's look at that.  Nova has placement concerns that extend 
to finding a capable hypervisor for the VM that someone would like to 
boot, and just slightly beyond.  If there are higher-level placement 
decisions to be made, I think that belongs outside of Nova; that service 
can then just tell Nova where to put it.
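
For what it's worth, Nova already has a primitive for the "just tell Nova 
where to put it" direction: an admin can force a specific host at boot via 
the zone:host form of the availability-zone parameter.  A rough sketch with 
python-novaclient (credentials, endpoint and IDs below are placeholders, and 
whether this is the right mechanism for an external placement service is 
exactly the open question):

    from novaclient import client

    # Placeholder credentials/endpoint; forcing a host this way is admin-only.
    nova = client.Client("2", "admin", "secret", "demo",
                         "http://keystone:5000/v2.0")

    # An external placement engine decided web-1 belongs on host-a; Nova is
    # simply told where to put it rather than making that decision itself.
    nova.servers.create(name="web-1",
                        image="<image-id>", flavor="<flavor-id>",
                        availability_zone="nova:host-a")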



Thanks,
Mike

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


