[openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

joehuang joehuang at huawei.com
Tue Sep 30 23:56:43 UTC 2014


Hello, Joshua,

Thank you very much for your deep thinking.

1. This is quite different from Cells. Let me copy the relevant content from my mail to John Garbutt:

The major difference between Cells and OpenStack cascading is the problem domain:
OpenStack cascading: integrating multi-site / multi-vendor OpenStack instances into one cloud, with the OpenStack API exposed.
Cells: a scale-up methodology for a single OpenStack instance.

2. Quota is controlled by the cascading OpenStack (the parent OpenStack), because only the cascading OpenStack holds all the logical objects (see the sketch after point 6).

3. Race conditions: could you describe the concrete race condition issue you have in mind?

4. Inconsistency: because there is a uuid mapping between objects in the cascading OpenStack and the cascaded OpenStacks, tracking consistency is possible and straightforward to solve, although we did not implement it in the PoC source code (see the sketch after point 6).

5. "I'd rather stick with the less scalable distributed system we have", no conflict, no matter OpenStack cascading introduced or not, we need a solid, stable and scalable OpenStack.

6. "How I imagine this working out (in my view)", all these things are good, I also like it.

Best Regards

Chaoyi Huang ( joehuang )

________________________________________
From: Joshua Harlow [harlowja at outlook.com]
Sent: 01 October 2014 3:17
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

So this does seem a lot like cells, but it makes the cells pattern appear in the other projects as well.

IMHO the same problems that occur in cells appear here, in that we sacrifice consistency in already problematic systems to gain scale (and gain more inconsistency in the process). Every time I see 'the parent OpenStack manage many child OpenStacks by using standard OpenStack API' in that wiki, I wonder how the parent will resolve inconsistencies that exist in the children (likely it can't). How do quotas work across parent and children? How do race conditions get resolved?

IMHO I'd rather stick with the less scalable distributed system we have, iron out its quirks, fix quotas (via whatever that project is named now), split out the nova/... drivers so they can be maintained in their own projects, fix the various already inconsistent state machines that exist, and split the scheduler into its own project so that it can be shared. All of these things improve scale and improve tolerance to individual failures, rather than creating a whole new level of 'pain' via a tightly bound set of proxies and cascading hierarchies. Managing these cascading clusters also seems like an operational nightmare that I'm not sure is justified at the current time (when operators already have enough trouble with the current code bases).

How I imagine this working out (in my view):

* Split out the shared services (gantt, scheduler, quotas...) into real SOA services that everyone can use.
* Have cinder-api, nova-api, neutron-api integrate with the split out services to obtain consistent views of the world when performing API operations.
* Have cinder, nova, and neutron provide 'workers' (nova-compute is a basic worker) that can be scaled out across all your clusters and connected to some kind of conductor node in some manner (MQ?), and have the outcome of cinder-api, nova-api, neutron-api... be a workflow that some service (conductor(s)?) ensures occurs reliably (or aborts). This lets cinder-api, nova-api... scale at will, conductors scale at will, and worker nodes scale at will (a rough sketch follows this list).
* Profit!
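
One way (of several) the 'workflow that occurs reliably (or aborts)' part could look is taskflow-style tasks with revert paths; the flow, task names, arguments and return values below are entirely made up, not a real nova/cinder flow.

# Purely illustrative: a boot "workflow" built from tasks that each know how
# to revert, so a conductor-like engine can run it reliably or abort cleanly.
from taskflow import engines, task
from taskflow.patterns import linear_flow


class ReserveQuota(task.Task):
    def execute(self, project_id):
        print("reserving quota for %s" % project_id)

    def revert(self, project_id, **kwargs):
        print("releasing quota reservation for %s" % project_id)


class AllocateVolume(task.Task):
    default_provides = 'volume_id'

    def execute(self, size_gb):
        print("allocating %dGB volume" % size_gb)
        return 'vol-1234'

    def revert(self, size_gb, **kwargs):
        print("deleting any allocated volume")


class SpawnInstance(task.Task):
    def execute(self, volume_id):
        print("spawning instance from %s" % volume_id)


# cinder-api/nova-api would hand a flow like this to a conductor service,
# which runs it (possibly distributing tasks to workers over MQ).
flow = linear_flow.Flow('boot-from-volume').add(
    ReserveQuota(),
    AllocateVolume(),
    SpawnInstance(),
)
engines.run(flow, store={'project_id': 'demo', 'size_gb': 10})

If SpawnInstance fails, the engine walks back through the revert() methods of the tasks that already ran, which is roughly the 'occurs reliably (or aborts)' property; distributing those tasks to remote workers over MQ is the scale-out part.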

TL;DR: it would seem this adds more complexity, not less, and I'm not sure complexity is what OpenStack needs more of right now...

-Josh

On Sep 30, 2014, at 6:04 AM, joehuang <joehuang at huawei.com> wrote:

> Hello, Dear TC and all,
>
> Large cloud operators prefer to deploy multiple OpenStack instances (as different zones) rather than a single monolithic OpenStack instance, for these reasons:
>
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server node count scales modularly from hundreds up to a million;
> 4) Fault and maintenance isolation between zones (only REST interface);
>
> At the same time, they also want to integrate these OpenStack instances into one cloud. Instead of a proprietary orchestration layer, they want to use the standard OpenStack framework, for northbound API compatibility with Heat/Horizon and other third-party ecosystem apps.
>
> We call this pattern "OpenStack cascading"; the proposal is described in [1][2], and PoC live demo videos can be found at [3][4].
>
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in OpenStack cascading.
>
> We kindly ask for a cross-program design summit session to discuss OpenStack cascading and the contribution to Kilo.
>
> We kindly invite those who are interested in OpenStack cascading to work together and contribute it to OpenStack.
>
> (I applied for the "other projects" track [5], but it would be better to have the discussion as a formal cross-program session, because many core programs are involved.)
>
>
> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [2] PoC source code: https://github.com/stackforge/tricircle
> [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
> [4] Live demo video at Youku (low quality, for those who can't access YouTube): http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> [5] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html
>
> Best Regards
> Chaoyi Huang ( Joe Huang )

