Some evaluation aspects were added to the etherpad https://etherpad.openstack.org/p/massively-distributed_WG_description for massively distributed edge clouds, so that we can evaluate each proposal. Your comments on these considerations are welcome:

- Security management over the WAN: how to manage inter-site communication and each edge cloud securely.
- Fail-safe: each edge cloud should be able to run independently; a crash in one edge cloud should not impact the running and operation of the others.
- Maintainability: the installation/upgrade/patching of each edge cloud should be manageable independently; there should be no need to upgrade all edge clouds at the same time.
- Manageability: no edge cloud should become an unmanageable island, even if some links are broken.
- Easy integration: multi-vendor integration must be easy for hundreds or thousands of edge clouds.
- Consistency: eventually consistent information (a stable status) should be achieved for the distributed system; see the sketch after this list.
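
To make the consistency aspect more concrete, below is a minimal sketch
(my own illustration, not code from any of the proposals; the class names
and the last-writer-wins rule are assumptions) of how a coordinator could
merge status reports that arrive late or out of order from many edge
clouds and still converge on a stable view:

# Hypothetical sketch: an eventually consistent status view over many
# edge clouds. Reports may arrive late or out of order over the WAN;
# keeping only the newest report per cloud makes every replica that sees
# the same set of reports converge to the same view.
from dataclasses import dataclass

@dataclass
class StatusReport:
    cloud_id: str     # e.g. "edge-0042"
    status: str       # e.g. "ACTIVE", "DEGRADED", "UNREACHABLE"
    timestamp: float  # set by the reporting edge cloud

class StatusView:
    def __init__(self):
        self._latest = {}  # cloud_id -> newest StatusReport seen so far

    def merge(self, report):
        current = self._latest.get(report.cloud_id)
        # Last-writer-wins: stale or duplicate reports are ignored, so
        # the merge order does not matter and replicas converge.
        if current is None or report.timestamp > current.timestamp:
            self._latest[report.cloud_id] = report

    def status_of(self, cloud_id):
        report = self._latest.get(cloud_id)
        return report.status if report else "UNKNOWN"

view = StatusView()
view.merge(StatusReport("edge-0042", "ACTIVE", 100.0))
view.merge(StatusReport("edge-0042", "DEGRADED", 90.0))  # stale, ignored
assert view.status_of("edge-0042") == "ACTIVE"

Merging in any order gives the same final result, which is exactly the
stable status this criterion asks for.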

I also prepared a skeleton for the candidate proposal discussion: https://etherpad.openstack.org/p/massively-distributed_WG_candidate_proposals_ocata, and linked it into the etherpad mentioned above.

Note that Tricircle is moving to split into two projects, TricircleNetworking and TricircleGateway: https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E.
I listed both sub-projects in the etherpad; they can work together or separately, roughly as sketched below.
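
Just to illustrate the "together or separately" point, here is a
hypothetical sketch (mine, not TricircleGateway code; the endpoint map
and function names are assumptions) of the gateway half: a stateless
forwarder that only needs a region-to-endpoint routing table, which a
separate cross-site networking service could reuse or ignore:

# Hypothetical sketch (not Tricircle code): a stateless API gateway
# that forwards each request to the OpenStack endpoint of the chosen
# edge cloud. Because no state is kept here, any number of gateway
# instances can run behind a load balancer.
import requests  # assuming the common 'requests' HTTP library

# region name -> API endpoint of that edge cloud (illustrative values)
ENDPOINTS = {
    "edge-east-1": "https://edge-east-1.example.net:8774",
    "edge-west-1": "https://edge-west-1.example.net:8774",
}

def forward(region, method, path, token, **kwargs):
    """Forward one API call to one edge cloud over plain REST."""
    base = ENDPOINTS[region]
    return requests.request(
        method, base + path,
        headers={"X-Auth-Token": token},
        timeout=10,  # a broken WAN link should fail fast, not hang
        **kwargs,
    )

# e.g. list servers in one edge cloud without touching the others:
# resp = forward("edge-east-1", "GET", "/v2.1/servers", token="...")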

Best Regards
Chaoyi Huang (joehuang)

________________________________________
From: lebre.adrien@free.fr [lebre.adrien@free.fr]
Sent: 01 September 2016 1:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

As promised, I just wrote a first draft at https://etherpad.openstack.org/p/massively-distributed_WG_description
I will try to add more content tomorrow, in particular pointers to articles/ETSI specifications/use-cases.

Comments/remarks welcome.
Ad_rien_

PS: Chaoyi, your proposal for f2f sessions in Barcelona sounds good. It is probably a bit too ambitious for one summit, because point 3, "Gaps in OpenStack", looks to me like a major action that will probably last more than just one summit, but I think you gave the right directions!

----- Original Message -----
> From: "joehuang" <joehuang@huawei.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Sent: Wednesday, 31 August 2016 08:48:01
> Subject: Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs
>
> Hello, Joshua,
>
> According to Peter's message, "However that still leaves us with the
> need to manage a stack of servers in thousands of telephone
> exchanges, central offices or even cell-sites, running multiple work
> loads in a distributed fault tolerant manner", the number of edge
> clouds may even reach the thousands.
>
> These clouds may be disjoint, but some may need to provide
> interconnection for tenant networks; for example, to support a
> database cluster distributed across several clouds, interconnection
> for data replication is needed.
>
> There are different thoughts, proposals, and projects to tackle this
> challenge; an architecture-level discussion is necessary to see
> whether these designs and proposals can fulfill the demands. If there
> are lots of proposals, it's good to compare their pros and cons, and
> to see in which scenarios each proposal works and in which it doesn't
> work very well.
>
> So I suggest having at least two successive dedicated design summit
> sessions to discuss this f2f. All thoughts, proposals, and projects
> to tackle this kind of problem domain could be collected now; the
> topics to be discussed could be as follows:
>
> 0. Scenarios
> 1. Use cases
> 2. Requirements in detail
> 3. Gaps in OpenStack
> 4. Proposals to be discussed
>
> Architecture-level proposal discussion:
> 1. Proposals
> 2. Pros and cons comparison
> 3. Challenges
> 4. Next steps
>
> Best Regards
> Chaoyi Huang (joehuang)
> ________________________________________
> From: Joshua Harlow [harlowja@fastmail.com]
> Sent: 31 August 2016 13:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
>
> joehuang wrote:
> > Cells is a good enhancement for Nova scalability, but there are
> > some issues in deploying Cells for massively distributed edge
> > clouds:
> >
> > 1) Using RPC for inter-data-center communication makes inter-DC
> > troubleshooting and maintenance difficult and raises critical
> > operational issues. There is no CLI, RESTful API, or other tool to
> > manage a child cell directly. If the link between the API cell and
> > a child cell is broken, the child cell in the remote edge cloud is
> > unmanageable, whether locally or remotely.
> >
> > 2) The challenge of security management for inter-site RPC
> > communication. Please refer to the slides [1] for challenge 3,
> > "Securing OpenStack over the Internet": over 500 pinholes had to be
> > opened in the firewall to allow this to work, including ports for
> > VNC and SSH for CLIs. Using RPC in Cells for edge clouds will face
> > the same security challenges.
> >
> > 3) Only Nova supports cells, but Nova is not the only service that
> > needs to support edge clouds; Neutron and Cinder should be taken
> > into account too. How would Neutron support service function
> > chaining in edge clouds? Using RPC? How would it address the
> > challenges mentioned above? And Cinder?
> >
> > 4) Using RPC for the production integration of hundreds of edge
> > clouds is quite a challenging idea; a basic requirement is that
> > these edge clouds may be bought from multiple vendors, for
> > hardware, software, or both.
> >
> > That means using Cells in production for massively distributed
> > edge clouds is quite a bad idea. If Cells provided a RESTful
> > interface between the API cell and child cells, it would be much
> > more acceptable, but still not enough; the same applies to Cinder
> > and Neutron. Alternatively, just deploy a lightweight OpenStack
> > instance in each edge cloud, for example one rack. The question is
> > then how to manage the large number of OpenStack instances and
> > provision services.
> >
> > [1] https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf
> >
> > Best Regards
> > Chaoyi Huang (joehuang)
> >
>
> Very interesting questions.
>
> I'm starting to think that the API you want isn't really Nova,
> Neutron, or Cinder at this point, though. At some point it feels like
> the effort you are spending on things like service chaining (there is
> a South Park episode I almost linked here, but decided I probably
> shouldn't) would almost be better served by a top-level API that
> knows how to communicate with the more isolated silos (edge clouds, I
> guess you are calling them).
>
> It just starts to feel that the architecture you want and the one I
> see being built are quite different, and I haven't seen the latter
> shift to something different, so maybe it's time to turn the problem
> on its head and accept that a solution may/will have to figure out
> how to unify a bunch of disjoint clouds (as best it can)?
>
> I can say that I'd like such a thing as well, because though GoDaddy
> doesn't have hundreds of edge clouds, it is approaching more than a
> handful of disjoint clouds (across the world), and a way to join them
> behind something that can unify them (across just Nova) as much as it
> can would be welcome.
>
> -Josh
>

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev