[openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

Adam Young ayoung at redhat.com
Tue Sep 30 20:25:05 UTC 2014


On 09/30/2014 12:10 PM, John Griffith wrote:
>
>
> On Tue, Sep 30, 2014 at 7:35 AM, John Garbutt <john at johngarbutt.com 
> <mailto:john at johngarbutt.com>> wrote:
>
>     On 30 September 2014 14:04, joehuang <joehuang at huawei.com
>     <mailto:joehuang at huawei.com>> wrote:
>     > Hello, Dear TC and all,
>     >
>     > Large cloud operators prefer to deploy multiple OpenStack
>     instances (as different zones) rather than a single monolithic
>     OpenStack instance, for these reasons:
>     >
>     > 1) Multiple data centers distributed geographically;
>     > 2) Multi-vendor business policy;
>     > 3) Server nodes scale modularly from hundreds up to a million;
>     > 4) Fault and maintenance isolation between zones (only REST
>     interface);
>     >
>     > At the same time, they also want to integrate these OpenStack
>     instances into one cloud. Instead of a proprietary orchestration
>     layer, they want to use the standard OpenStack framework for
>     northbound API compatibility with Heat/Horizon or other third-party
>     ecosystem apps.
>     >
>     > We call this pattern "OpenStack Cascading", with the proposal
>     described in [1][2]. PoC live demo videos can be found at [3][4].
>     >
>     > Nova, Cinder, Neutron, Ceilometer, and Glance (optional) are
>     involved in OpenStack cascading.
>     >
>     > We kindly ask for a cross-program design summit session to discuss
>     OpenStack cascading and the contribution to Kilo.
>     >
>     > We kindly invite those who are interested in OpenStack
>     cascading to work together and contribute it to OpenStack.
>     >
>     > (I applied for the “other projects” track [5], but it would be
>     better to have the discussion as a formal cross-program session,
>     because many core programs are involved.)
>     >
>     >
>     > [1] wiki:
>     https://wiki.openstack.org/wiki/OpenStack_cascading_solution
>     > [2] PoC source code: https://github.com/stackforge/tricircle
>     > [3] Live demo video at YouTube:
>     https://www.youtube.com/watch?v=OSU6PYRz5qY
>     > [4] Live demo video at Youku (low quality, for those who can't
>     access YouTube): http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
>     > [5]
>     http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html
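The cascading pattern proposed above (standard OpenStack API northbound, full child OpenStack instances southbound, selected per zone) can be sketched as a minimal request dispatcher. This is an illustrative sketch only, not the tricircle PoC code; the endpoint URLs and the `dispatch` helper are hypothetical:

```python
# Hypothetical sketch of the cascading idea: the top-level OpenStack
# exposes the standard API and routes each request to the child
# OpenStack instance responsible for the requested availability zone.

CHILD_ENDPOINTS = {  # placeholder child regions, not real deployments
    "az1": "http://child-az1.example.com:8774/v2",
    "az2": "http://child-az2.example.com:8774/v2",
}

def dispatch(request):
    """Route a northbound API request to the child cloud for its AZ."""
    az = request.get("availability_zone")
    endpoint = CHILD_ENDPOINTS.get(az)
    if endpoint is None:
        raise LookupError("no child OpenStack registered for zone %r" % az)
    # A real cascading layer would re-issue the REST call against
    # `endpoint`; here we only return the routing decision.
    return {"endpoint": endpoint, "body": request}

print(dispatch({"availability_zone": "az1", "name": "vm-1"})["endpoint"])
# → http://child-az1.example.com:8774/v2
```

Because each child is a complete OpenStack instance reachable only over REST, a failure or upgrade in one zone stays isolated behind its endpoint, which is the fault-isolation property claimed in point 4 above.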
>
>     There are etherpads for suggesting cross project sessions here:
>     https://wiki.openstack.org/wiki/Summit/Planning
>     https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
>
>     I am interested in comparing this to Nova's cells concept:
>     http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html
>
>     Cells basically scales out a single datacenter region by aggregating
>     multiple child Nova installations with an API cell.
>
>     Each child cell can be tested in isolation, via its own API, before
>     being joined up to an API cell, which adds it into the region. Each cell
>     logically has its own database and message queue, which helps get more
>     independent failure domains. You can use cell level scheduling to
>     restrict people or types of instances to particular subsets of the
>     cloud, if required.
>
>     It doesn't attempt to aggregate between regions; they are kept
>     independent, except for the usual assumption that you have a common
>     identity service across all regions.
>
>     It also keeps a single Cinder, Glance, Neutron deployment per region.
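For reference, the API-cell/child-cell split described above corresponds roughly to the following nova.conf fragments. This is a sketch based on the cells (v1) configuration reference linked earlier; the cell names are placeholders:

```ini
# API cell (top of the tree) -- routes requests to child cells
[DEFAULT]
compute_api_class = nova.compute.cells_api.ComputeCellsAPI
[cells]
enable = True
name = api
cell_type = api

# Child cell -- runs its own database and message queue
[cells]
enable = True
name = cell1
cell_type = compute
```

Each child having its own database and message queue is what gives the independent failure domains mentioned above: an outage in one child's queue does not take down scheduling in its siblings.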
>

I'm starting work on supporting a comparable mechanism to share data 
between Keystone servers.

http://adam.younglogic.com/2014/09/multiple-signers/


>     It would be great to get some help hardening, testing, and building
>     out more of the cells vision. I suspect we may form a new Nova subteam
>     to try and drive this work forward in Kilo, if we can build up
>     enough people wanting to work on improving cells.
>
>     Thanks,
>     John
>
>     _______________________________________________
>     OpenStack-dev mailing list
>     OpenStack-dev at lists.openstack.org
>     <mailto:OpenStack-dev at lists.openstack.org>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> Interesting idea. To be honest, when TripleO was first announced, what 
> you have here is more along the lines of what I envisioned. It seems 
> that this would have some interesting wins in terms of upgrades, 
> migrations, and scaling in general. Anyway, you should propose it on 
> the etherpad as John G (the other John G :) ) recommended; I'd love 
> to dig deeper into this.

>
> Thanks,
> John
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
