[openstack-dev] [heat] Comments/questions on the instance-group-api-extension blueprint

shalz shalz at hotmail.com
Thu Sep 12 01:49:21 UTC 2013


Mike,

You mention: "We are now extending that example to include storage, and we are also working examples with Hadoop."

In the context of your examples / scenarios, do these placement decisions consider storage performance and capacity on a physical node?

For example: based on an application's IOPS and latency requirements, carving out an SSD-backed volume rather than a traditional spinning-disk block volume? Or, for cost-efficiency, using SSD caching on Hadoop name nodes?

I'm investigating a) per-node PCIe SSD deployment in OpenStack/Hadoop environments and b) SSD caching on selected nodes, specifically for OpenStack Cinder. I hope this is the right forum to ask this question.
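To make the question concrete, here is a minimal sketch (hypothetical — not an existing OpenStack API) of the kind of storage-aware decision being asked about: picking the cheapest backend tier that meets an application's IOPS and latency needs, along the lines of what Cinder volume types with extra specs let operators model. All names and capability figures below are invented for illustration.

```python
# Hypothetical backend catalog, keyed by tier name, with rough capability
# figures (invented numbers, for illustration only).
BACKENDS = {
    "pcie-ssd": {"max_iops": 100_000, "latency_ms": 0.1, "cost_per_gb": 1.00},
    "sata-ssd": {"max_iops": 30_000,  "latency_ms": 0.5, "cost_per_gb": 0.40},
    "spinning": {"max_iops": 200,     "latency_ms": 8.0, "cost_per_gb": 0.05},
}

def pick_backend(required_iops, max_latency_ms):
    """Return the cheapest backend meeting the IOPS/latency needs, or None."""
    candidates = [
        (spec["cost_per_gb"], name)
        for name, spec in BACKENDS.items()
        if spec["max_iops"] >= required_iops
        and spec["latency_ms"] <= max_latency_ms
    ]
    return min(candidates)[1] if candidates else None
```

For instance, a workload needing 10,000 IOPS at under 1 ms would land on the SATA SSD tier rather than the pricier PCIe tier, while a cold-data volume would fall through to spinning disk. The open question above is whether the group placement machinery would take such per-node storage capabilities into account.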

rgds,
S

On Sep 12, 2013, at 12:29 AM, Mike Spreitzer <mspreitz at us.ibm.com> wrote:

> Yes, I've seen that material.  In my group we have worked larger and more complex examples.  I have a proposed breakout session at the Hong Kong summit to talk about one; you might want to vote for it.  The URL is http://www.openstack.org/summit/openstack-summit-hong-kong-2013/become-a-speaker/TalkDetails/109 and the title is "Continuous Delivery of Lotus Connections on OpenStack".
> 
> We used our own technology to do the scheduling (make placement decisions) and orchestration, calling Nova and Quantum to carry out the decisions our software made.  Above the OpenStack infrastructure we used two layers of our own software, one focused on infrastructure and one adding concerns for the software running on that infrastructure.  Each used its own language for a whole topology AKA pattern AKA application AKA cluster.
> 
> For example, our pattern has 16 VMs running the WebSphere application server, organized into four homogeneous groups (members are interchangeable) of four each.  For each group, we asked that it both (a) be spread across at least two racks, with no more than half the VMs on any one rack, and (b) have no two VMs on the same hypervisor.  You can imagine how this would involve multiple levels of grouping and relationships between groups (and you will probably be surprised by the particulars).
> 
> We also included information on licensed products, so that the placement decision can optimize license cost (for the IBM "sub-capacity" licenses, placement of VMs can make a cost difference).  Thus, multiple policies per thing.  We are now extending that example to include storage, and we are also working examples with Hadoop. 
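The two per-group constraints described above (rack spread with at most half the VMs per rack, and hypervisor anti-affinity) can be sketched as a simple placement validator. This is a hypothetical illustration, not IBM's actual scheduler or any OpenStack API; the data shapes are invented.

```python
from collections import Counter

def placement_ok(placement):
    """Check one group's placement against the two policies above.

    placement: dict mapping vm_name -> (rack, hypervisor).
    (a) the group spans at least two racks, with at most half the
        group's VMs on any one rack;
    (b) no two VMs in the group share a hypervisor.
    """
    racks = Counter(rack for rack, _ in placement.values())
    hypervisors = Counter(hv for _, hv in placement.values())
    half = len(placement) / 2
    spread_ok = len(racks) >= 2 and max(racks.values()) <= half
    anti_affinity_ok = max(hypervisors.values()) == 1
    return spread_ok and anti_affinity_ok
```

For a group of four, a 2+2 split across two racks on four distinct hypervisors passes, while a 3+1 rack split, or any shared hypervisor, fails.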
> 
> Regards, 
> Mike 
> 
> 
> 
> From:        Gary Kotton <gkotton at vmware.com> 
> To:        OpenStack Development Mailing List <openstack-dev at lists.openstack.org>, 
> Date:        09/11/2013 06:06 AM 
> Subject:        Re: [openstack-dev] [heat] Comments/questions on the instance-group-api-extension blueprint 
> 
> 
> 
> 
> 
> From: Mike Spreitzer <mspreitz at us.ibm.com>
> Reply-To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> Date: Tuesday, September 10, 2013 11:58 PM
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [heat] Comments/questions on the instance-group-api-extension blueprint 
> 
> First, I'm a newbie here, wondering: is this the right place for comments/questions on blueprints?  Supposing it is... 
> 
> [Gary Kotton] Yeah, as Russell said, this is the correct place 
> 
> I am referring to https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension
> 
> In my own research group we have experience with a few systems that do something like that, and more (as, indeed, that blueprint explicitly states that it is only the start of a longer roadmap).  I would like to highlight a couple of differences that alarm me.
> 
> One is the general overlap between groups.  I am not saying this is wrong, but as a matter of natural conservatism we have shied away from unnecessary complexities.  The only overlap we have done so far is hierarchical nesting.  As the instance-group-api-extension explicitly contemplates groups of groups as a later development, this would cover the overlap that we have needed.
> 
> On the other hand, we already have multiple "policies" attached to a single group.  We have policies for a variety of concerns, so some can combine completely or somewhat independently.  We also have relationships (of various sorts) between groups (as well as between individuals, and between individuals and groups).  The policies and relationships, in general, are not simply names but also have parameters. 
> 
> [Gary Kotton] The instance groups work was meant to be the first step towards what we had presented in Portland. Please look at the presentation that we gave, and this may highlight what the aims were: https://docs.google.com/presentation/d/1oDXEab2mjxtY-cvufQ8f4cOHM0vIp4iMyfvZPqg8Ivc/edit?usp=sharing. Sadly, for this release we did not manage to get the instance groups through (it was an issue of timing and bad luck). We will hopefully get this through in the first stages of the I cycle and then carry on building on it, as it has a huge amount of value for OpenStack. It would be great if you could also participate in the discussions. 
> 
> Thanks, 
> Mike
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


