[openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

henry hly henry4hly at gmail.com
Sat Oct 4 11:33:59 UTC 2014


Hi Monty and Cellers,

I understand that there is an installed base for Cells: these clouds
are still running, and some issues need to be addressed in their daily
operation. Improving Cells certainly deserves to be treated as
first-class work, in line with the community's commitment.

The introduction of OpenStack cascading is not meant to divide the
community; it addresses other interests that Cells was not designed
for: heterogeneous cluster integration based on the established REST
APIs, and fully distributed scalability (not only Nova, but also
Cinder/Neutron/Ceilometer...). Full distribution is essential for
large cloud operators who have many geographically distributed data
centers, and heterogeneous cluster integration is a basic business
requirement (different versions, different vendors, and even
non-OpenStack back ends such as vCenter).

So cascading is not an alternative to Cells; both solutions can
co-exist and complement each other. Nor do I think the Cells
developers need to shift their work to OpenStack cascading; they can
keep their focus on Cells, and there would be no conflict between the
Cells code and the cascading code.
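To make the routing idea concrete, here is a toy Python sketch of the cascading pattern as I have described it (all class and field names are invented for illustration; this is not the tricircle code): a top layer exposes the standard API and forwards each request, unchanged, to the child OpenStack site that owns the target availability zone.

```python
# Toy sketch only: a "cascading" top layer that speaks the standard API
# and delegates to child sites by availability zone. Names are hypothetical.

class ChildCloud:
    """Stands in for one child OpenStack site reached via its REST API."""
    def __init__(self, name):
        self.name = name
        self.servers = []

    def create_server(self, spec):
        # A real child site would receive this via its own Nova endpoint.
        self.servers.append(spec)
        return {"site": self.name, "name": spec["name"], "status": "BUILD"}


class CascadingProxy:
    """Routes standard API calls to child sites keyed by availability zone."""
    def __init__(self):
        self.children = {}

    def register(self, az, child):
        self.children[az] = child

    def create_server(self, spec):
        # The top layer does no scheduling of its own here; it only
        # forwards the request to whichever site owns the requested zone.
        child = self.children[spec["availability_zone"]]
        return child.create_server(spec)


proxy = CascadingProxy()
proxy.register("az-east", ChildCloud("east-dc"))
proxy.register("az-west", ChildCloud("west-dc"))

result = proxy.create_server({"name": "vm1", "availability_zone": "az-west"})
```

Because each child is addressed only through its (standard) API, a child could in principle run a different OpenStack version, a different vendor's distribution, or a non-OpenStack back end wrapped behind the same interface.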

Best Regards,
Wu Hongning


On Sat, Oct 4, 2014 at 5:44 AM, Monty Taylor <mordred at inaugust.com> wrote:
>
> On 09/30/2014 12:07 PM, Tim Bell wrote:
> >> -----Original Message-----
> >> From: John Garbutt [mailto:john at johngarbutt.com]
> >> Sent: 30 September 2014 15:35
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
> >> cascading
> >>
> >> On 30 September 2014 14:04, joehuang <joehuang at huawei.com> wrote:
> >>> Hello, Dear TC and all,
> >>>
> >>> Large cloud operators prefer to deploy multiple OpenStack instances (as
> >>> different zones) rather than a single monolithic OpenStack instance, for
> >>> these reasons:
> >>>
> >>> 1) Multiple data centers distributed geographically;
> >>> 2) Multi-vendor business policy;
> >>> 3) Server node counts scale modularly from the hundreds up to a million;
> >>> 4) Fault and maintenance isolation between zones (only REST
> >>> interface);
> >>>
> >>> At the same time, they also want to integrate these OpenStack instances into
> >>> one cloud. Instead of a proprietary orchestration layer, they want to use the
> >>> standard OpenStack framework, keeping the northbound API compatible with
> >>> Heat/Horizon and other third-party ecosystem applications.
> >>>
> >>> We call this pattern "OpenStack cascading"; the proposal is described in
> >>> [1][2], and PoC live demo videos can be found at [3][4].
> >>>
> >>> Nova, Cinder, Neutron, Ceilometer, and (optionally) Glance are involved in
> >>> OpenStack cascading.
> >>>
> >>> We kindly ask for a cross-program design summit session to discuss OpenStack
> >>> cascading and the contribution to Kilo.
> >>>
> >>> We kindly invite those who are interested in OpenStack cascading to work
> >>> together and contribute it to OpenStack.
> >>>
> >>> (I applied for the "other projects" track [5], but it would be better to
> >>> have the discussion as a formal cross-program session, because many core
> >>> programs are involved.)
> >>>
> >>>
> >>> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> >>> [2] PoC source code: https://github.com/stackforge/tricircle
> >>> [3] Live demo video at YouTube:
> >>> https://www.youtube.com/watch?v=OSU6PYRz5qY
> >>> [4] Live demo video at Youku (low quality, for those who can't access
> >>> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> >>> [5]
> >>> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
> >>> .html
> >>
> >> There are etherpads for suggesting cross project sessions here:
> >> https://wiki.openstack.org/wiki/Summit/Planning
> >> https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
> >>
> >> I am interested in comparing this to Nova's cells concept:
> >> http://docs.openstack.org/trunk/config-reference/content/section_compute-
> >> cells.html
> >>
> >> Cells basically scales out a single datacenter region by aggregating multiple child
> >> Nova installations with an API cell.
> >>
> >> Each child cell can be tested in isolation, via its own API, before joining it up to
> >> an API cell, which adds it into the region. Each cell logically has its own database
> >> and message queue, which helps get more independent failure domains. You can
> >> use cell level scheduling to restrict people or types of instances to particular
> >> subsets of the cloud, if required.
> >>
> >> It doesn't attempt to aggregate between regions; they are kept independent,
> >> except for the usual assumption that you have a common identity service
> >> across all regions.
> >>
> >> It also keeps a single Cinder, Glance, Neutron deployment per region.
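(As a rough inline illustration of that layout -- option names as I read them in the cells configuration reference linked above, cell names invented, so please check the docs for your release -- an API cell and a child cell are distinguished in each node's nova.conf along these lines:)

```ini
# API cell: the region's public Nova endpoint
[cells]
enable = True
name = api
cell_type = api

# Child cell: its own database and message queue, joined up to the API cell
[cells]
enable = True
name = cell1
cell_type = compute
```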
> >>
> >> It would be great to get some help hardening, testing, and building out more of
> >> the cells vision. I suspect we may form a new Nova subteam to try to drive
> >> this work forward in Kilo, if we can build up enough people wanting to work on
> >> improving cells.
> >>
> >
> > At CERN, we've deployed cells at scale but are finding a number of architectural issues that need resolution in the short term to attain feature parity. A vision of "we all run cells, but some of us have only one" is not there yet. Typical examples are flavors, security groups and server groups, none of which is yet implemented to the necessary level across the cell parent/child split.
> >
> > We would be very keen on agreeing on the strategy in Paris, so that we can ensure the gap is closed, test it in the gate, and make sure future features cannot relegate cell support to the 'wishlist'.
>
> I agree with this. I know that there are folks who don't like cells -
> but I think that ship has sailed. It's there - which means we need to
> make it first class.
>
> > Resources can be made available if we can agree on the direction, but current reviews are not progressing (such as https://bugs.launchpad.net/nova/+bug/1211011)
> >
> > Tim
> >
> >> Thanks,
> >> John
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>


