[openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

joehuang joehuang at huawei.com
Thu Sep 1 03:47:47 UTC 2016


Some evaluation aspects were added to the etherpad https://etherpad.openstack.org/p/massively-distributed_WG_description for massively distributed edge clouds, so that we can evaluate each proposal. Your comments on these considerations are welcome:

- Security management over the WAN: how to manage inter-site communication and the edge clouds securely.
- Fail-safe: each edge cloud should be able to run independently; the crash of one edge cloud should not impact the running and operation of the other edge clouds.
- Maintainability: installation/upgrade/patching of each edge cloud should be manageable independently, without having to upgrade all edge clouds at the same time.
- Manageability: no isolated island even if some links are broken.
- Easy integration: need to support easy multi-vendor integration for hundreds or thousands of edge clouds.
- Consistency: eventually consistent information (stable status) should be achievable across the distributed system (see the sketch after this list).
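
To make the fail-safe and consistency criteria concrete, here is a minimal sketch (the per-site status URLs are hypothetical; this is not an existing OpenStack API) of a central reconciliation loop that polls every edge cloud and keeps the last known state when a site is unreachable, so one broken link never blocks the others:

    import time
    import requests

    # Hypothetical status endpoints, one per edge cloud; in practice
    # this list would come from an inventory service.
    EDGE_CLOUDS = {
        "edge-001": "https://edge-001.example.net/status",
        "edge-002": "https://edge-002.example.net/status",
    }

    last_known = {}  # site -> last successfully fetched status

    def reconcile_once():
        for site, url in EDGE_CLOUDS.items():
            try:
                resp = requests.get(url, timeout=5)
                resp.raise_for_status()
                # Eventual consistency: the central view converges to
                # each site's stable status whenever the link allows.
                last_known[site] = resp.json()
            except requests.RequestException:
                # Fail-safe: an unreachable site keeps its last known
                # state and does not block polling of the other sites.
                pass

    while True:
        reconcile_once()
        time.sleep(30)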

I have also prepared a skeleton for the candidate proposal discussion: https://etherpad.openstack.org/p/massively-distributed_WG_candidate_proposals_ocata, and linked it into the etherpad mentioned above.

Considering that Tricircle is being divided into two projects, TricircleNetworking and TricircleGateway (https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E),
I listed these two sub-projects in the etherpad; the two projects can work together or separately.

Best Regards
Chaoyi Huang (joehuang)

________________________________________
From: lebre.adrien at free.fr [lebre.adrien at free.fr]
Sent: 01 September 2016 1:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

As promised, I just wrote a first draft at https://etherpad.openstack.org/p/massively-distributed_WG_description
I will try to add more content tomorrow, in particular pointers towards articles, ETSI specifications and use cases.

Comments/remarks welcome.
Ad_rien_

PS: Chaoyi, your proposal for f2f sessions in Barcelona sounds good. It is probably a bit too ambitious for one summit, because point 3, ''Gaps in OpenStack'', looks to me like a major action that will probably last more than just one summit, but I think you gave the right directions!

----- Original Message -----
> From: "joehuang" <joehuang at huawei.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Sent: Wednesday, 31 August 2016 08:48:01
> Subject: Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs
>
> Hello, Joshua,
>
> According to Peter's message, "However that still leaves us with the
> need to manage a stack of servers in thousands of telephone
> exchanges, central offices or even cell-sites, running multiple work
> loads in a distributed fault tolerant manner", the number of edge
> clouds may even reach the thousands.
>
> These clouds may be disjoint, but some may need to provide
> inter-connection for the tenants' networks; for example, to support a
> database cluster distributed across several clouds, inter-connection
> for data replication is needed (see the configuration sketch below).
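>
> As an illustration (the host names are hypothetical, not a statement
> about any particular deployment), a Galera-style database cluster
> stretched over several clouds needs its replication group address to
> reach a node in every site, which is exactly the inter-connection
> mentioned above:
>
>     # Hypothetical galera.cnf fragment: one node per edge cloud.
>     [galera]
>     wsrep_on = ON
>     wsrep_cluster_name = "tenant-db"
>     wsrep_cluster_address = "gcomm://db1.edge-a.example.net,db1.edge-b.example.net,db1.edge-c.example.net"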
>
> There are different thoughts, proposals and projects to tackle the
> challenge; an architecture-level discussion is necessary to see
> whether these designs and proposals can fulfill the demands. If there
> are many proposals, it is good to compare their pros and cons, and to
> identify in which scenarios each proposal works well and in which it
> does not.
>
> So I suggest having at least two successive dedicated design summit
> sessions to discuss this face to face. All thoughts, proposals or
> projects that tackle this kind of problem domain could be collected
> now; the topics to be discussed could be as follows:
>
> 0. Scenarios
> 1. Use cases
> 2. Requirements in detail
> 3. Gaps in OpenStack
> 4. Proposals to be discussed
>
> Architecture-level proposal discussion:
> 1. Proposals
> 2. Pros and cons comparison
> 3. Challenges
> 4. Next steps
>
> Best Regards
> Chaoyi Huang (joehuang)
> ________________________________________
> From: Joshua Harlow [harlowja at fastmail.com]
> Sent: 31 August 2016 13:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs
>
> joehuang wrote:
> > Cells is a good enhancement for Nova scalability, but there are
> > some issues in deploying Cells for massively distributed edge
> > clouds:
> >
> > 1) Using RPC for inter-data-center communication brings difficulty
> > in inter-DC troubleshooting and maintenance, and some critical
> > issues in operation. There is no CLI, RESTful API or other tool to
> > manage a child cell directly. If the link between the API cell and
> > a child cell is broken, the child cell in the remote edge cloud is
> > unmanageable, whether locally or remotely.
> >
> > 2) The challenge of security management for inter-site RPC
> > communication. Please refer to the slides[1] for challenge 3,
> > "Securing OpenStack over the Internet": over 500 pinholes had to be
> > opened in the firewall to allow this to work, including ports for
> > VNC and SSH for CLIs. Using RPC in Cells for edge clouds will face
> > the same security challenges.
> >
> > 3) Only Nova supports Cells, but Nova is not the only project that
> > needs to support edge clouds; Neutron and Cinder should be taken
> > into account too. How would Neutron support service function
> > chaining in edge clouds? Using RPC? How would the challenges
> > mentioned above be addressed? And Cinder?
> >
> > 4) Using RPC for the production integration of hundreds of edge
> > clouds is quite a challenging idea; it is a basic requirement that
> > these edge clouds may be bought from multiple vendors, for
> > hardware, software or both.
> >
> > That means using Cells in production for massively distributed edge
> > clouds is quite a bad idea. If Cells provided a RESTful interface
> > between the API cell and child cells, it would be much more
> > acceptable, but still not enough; the same applies to Cinder and
> > Neutron. Alternatively, just deploy a lightweight OpenStack instance
> > in each edge cloud, for example one rack. The question then is how
> > to manage the large number of OpenStack instances and provision
> > services; a rough sketch of such REST-based management follows.
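> >
> > As a rough sketch of what per-site RESTful management could look
> > like (the host name is hypothetical; the point is that each
> > lightweight OpenStack instance already exposes HTTP APIs such as
> > Nova's), a plain HTTPS call replaces an RPC message over the WAN,
> > needs a single firewall pinhole, and can also be issued locally at
> > the site if the central link is down:
> >
> >     import requests
> >
> >     # Hypothetical endpoint of one lightweight OpenStack instance
> >     # (one rack) deployed in an edge cloud.
> >     EDGE_NOVA = "https://edge-042.example.net:8774/v2.1"
> >
> >     def edge_alive(token):
> >         # Standard Nova REST call with a Keystone token; no broker
> >         # or message bus has to span the WAN.
> >         resp = requests.get(EDGE_NOVA + "/servers",
> >                             headers={"X-Auth-Token": token},
> >                             timeout=10)
> >         return resp.status_code == 200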
> >
> > [1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf
> >
> > Best Regards
> > Chaoyi Huang (joehuang)
> >
>
> Very interesting questions,
>
> I'm starting to think that the API you want isn't really Nova,
> Neutron, or Cinder at this point, though. At some point it feels like
> the efforts you are spending on things like service chaining (there
> is a South Park episode I almost linked here, but decided I probably
> shouldn't) would almost be better served by a top-level API that
> knows how to communicate with the more isolated silos (edge clouds, I
> guess you are calling them).
>
> It just starts to feel that the architecture you want and the one I
> see being built are quite different, and I haven't seen the latter
> shift to something different, so maybe it's time to turn the problem
> on its head and accept that a solution may/will have to figure out
> how to unify a bunch of disjoint clouds (as best you can)?
>
> I know I would like such a thing as well, because though GoDaddy
> doesn't have hundreds of edge clouds, it is approaching more than a
> handful of disjoint clouds (across the world), and a way to join them
> behind something that can unify them (even if just across Nova) as
> much as it can would be welcome; something like the sketch below.
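>
> Just a sketch of what I mean by a thin top-level layer (cloud names,
> endpoints and tokens are made up): fan a request out to each disjoint
> cloud's Nova endpoint and merge whatever comes back, skipping the
> clouds you can't reach:
>
>     from concurrent.futures import ThreadPoolExecutor
>     import requests
>
>     # Hypothetical per-cloud Nova endpoints and pre-fetched tokens.
>     CLOUDS = {
>         "us-west": ("https://nova.us-west.example.com/v2.1", "TOKEN1"),
>         "eu-east": ("https://nova.eu-east.example.com/v2.1", "TOKEN2"),
>     }
>
>     def list_servers(name, endpoint, token):
>         try:
>             resp = requests.get(endpoint + "/servers",
>                                 headers={"X-Auth-Token": token},
>                                 timeout=10)
>             resp.raise_for_status()
>             return name, resp.json().get("servers", [])
>         except requests.RequestException:
>             return name, []  # tolerate an unreachable cloud
>
>     def unified_server_list():
>         # One unified view over many disjoint clouds, as best we can.
>         with ThreadPoolExecutor(max_workers=len(CLOUDS)) as pool:
>             results = pool.map(
>                 lambda item: list_servers(item[0], *item[1]),
>                 CLOUDS.items())
>         return dict(results)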
>
> -Josh

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev