[openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

joehuang joehuang at huawei.com
Wed Aug 31 01:52:13 UTC 2016


Cells is a good enhancement for Nova scalability, but there are some issues with deploying Cells for massively distributed edge clouds:

1) Using RPC for inter-data-center communication makes inter-DC troubleshooting and maintenance difficult and introduces critical operational issues. There is no CLI, RESTful API, or other tool to manage a child cell directly. If the link between the API cell and a child cell is broken, the child cell in the remote edge cloud becomes unmanageable, either locally or remotely.
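For illustration only, here is a minimal sketch (not something Cells offers today) of why a RESTful interface matters operationally: if each edge site exposed its own Keystone and Nova endpoints, an operator on site could still authenticate and manage servers locally even while the WAN link to the API cell is broken. The endpoint URLs and credentials below are hypothetical.

    import requests

    # Hypothetical endpoints local to one edge site; with RPC-only child
    # cells there is no equivalent locally consumable management interface.
    KEYSTONE = "https://edge-site-01.example.net:5000"
    NOVA = "https://edge-site-01.example.net:8774/v2.1"

    # Standard Keystone v3 password authentication request.
    auth = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "edge-admin",
                        "domain": {"id": "default"},
                        "password": "secret",
                    }
                },
            },
            "scope": {"project": {"name": "admin", "domain": {"id": "default"}}},
        }
    }
    resp = requests.post(f"{KEYSTONE}/v3/auth/tokens", json=auth)
    token = resp.headers["X-Subject-Token"]

    # Plain HTTPS call to the local Nova API, independent of the API cell.
    servers = requests.get(f"{NOVA}/servers",
                           headers={"X-Auth-Token": token}).json()
    for server in servers["servers"]:
        print(server["id"], server["name"])

The point is not the code itself, but that every management operation above is a simple HTTPS request that can be firewalled, proxied, and invoked from anywhere, which is not true of an RPC-only child cell.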

2) The challenge of security management for inter-site RPC communication. Please refer to the slides [1], challenge 3, "Securing OpenStack over the Internet": over 500 pinholes had to be opened in the firewall to allow this to work, including ports for VNC and SSH for CLIs. Using RPC in Cells for edge clouds will face the same security challenges.

3) Only Nova supports Cells, but Nova is not the only project that needs to support edge clouds; Neutron and Cinder should be taken into account too. How would Neutron support service function chaining in edge clouds? Using RPC? How would it address the challenges mentioned above? And what about Cinder?

4) Using RPC for the production integration of hundreds of edge clouds is a very challenging idea; it is a basic requirement that these edge clouds may be bought from multiple vendors, for hardware, software, or both.

That means using Cells in production for massively distributed edge clouds is quite a bad idea. If Cells provided a RESTful interface between the API cell and child cells it would be much more acceptable, but still not enough, and the same applies to Cinder and Neutron. Alternatively, deploy a lightweight OpenStack instance in each edge cloud, for example one rack. The question then is how to manage the large number of OpenStack instances and how to provision services across them.
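As a rough sketch of the "one lightweight OpenStack per edge rack" option (assuming openstacksdk, a shared Keystone with each edge rack registered as its own region, and hypothetical cloud/region names), central tooling could walk the edge sites over plain REST APIs:

    import openstack

    # Hypothetical: "edge-central" is a clouds.yaml entry pointing at the
    # shared Keystone; each edge rack is registered there as a region.
    central = openstack.connect(cloud="edge-central")

    # Enumerate every edge region known to Keystone and poll it directly.
    for region in central.identity.regions():
        edge = openstack.connect(cloud="edge-central", region_name=region.id)
        # A simple inventory check against the local Nova API of that edge.
        servers = list(edge.compute.servers())
        print(f"{region.id}: {len(servers)} servers")

How such a loop scales to hundreds of sites, and which component owns it, is exactly the management question raised above.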

[1] https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf

Best Regards
Chaoyi Huang(joehuang)

________________________________________
From: Andrew Laski [andrew at lascii.com]
Sent: 30 August 2016 21:03
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

On Tue, Aug 30, 2016, at 05:36 AM, lebre.adrien at free.fr wrote:
> Dear all
>
> Sorry for my lack of reactivity, I've been out for the last few days.
>
> According to the different replies, I think we should enlarge the
> discussion and not stay on the vCPE use-case, which is clearly specific
> and represents only one use-case among the ones we would like to study.
> For instance, we are in touch with NRENs in France and Poland that are
> interested in deploying up to one rack in each of their largest PoPs in order
> to provide a distributed IaaS platform (for further information you can
> take a look at the presentation we gave during the last summit [1] [2]).
>
> The two questions were:
> 1./ Understand whether the fog/edge computing use case is in the scope of
> the Architecture WG and if not, do we need a massively distributed WG?

Besides the question of which WG this might fall under is the question
of how any of the work groups are going to engage with the project
communities. There is a group of developers pushing forward on cellsv2
in Nova, and there should be some level of engagement between them and
whoever is discussing the fog/edge computing use case. To me it seems
like there's some level of overlap between the efforts even if cellsv2
is not a full solution. But whatever conversations are taking place
about fog/edge or large-scale distributed use cases seem to be
happening in channels that I am not aware of, and I haven't heard any
other cells developers mention them either.

So let's please find a way for people who are interested in these use
cases to talk to the developers who are working on similar things.


> 2./ How can we coordinate our actions with the ones performed in the
> Architecture WG?
>
> Regarding 1./, according to the different reactions, I propose to write a
> first draft in an etherpad to present the main goal of the Massively
> Distributed WG and how people interested in such discussions can interact
> (I will paste the link to the etherpad by tomorrow).
>
> Regarding 2./,  I mentioned the Architecture WG because we do not want to
> develop additional software layers like Tricircle or other solutions (at
> least for the moment).
> The goal of the WG is to conduct studies and experiments to identify to
> what extent current mechanisms can satisfy the needs of such massively
> distributed use-cases and what the missing elements are.
>
> I don't want to give too many details in the present mail in order to stay
> as concise as possible (details will be given in the proposal).
>
> Best regards,
> Adrien
>
> [1] https://youtu.be/1oaNwDP661A?t=583 (please just watch the use-case
> introduction; the distribution of the DB was one possible revision of
> Nova and, according to the cells v2 changes, it is probably now deprecated).
> [2] https://hal.inria.fr/hal-01320235
>
> ----- Mail original -----
> > De: "Peter Willis" <p3t3rw11115 at gmail.com>
> > À: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> > Envoyé: Mardi 30 Août 2016 11:24:00
> > Objet: Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs
> >
> >
> >
> > Colleagues,
> >
> >
> > An interesting discussion; the only question appears to be whether
> > vCPE is a suitable use case, as the others do appear to be cloud use
> > cases. Lots of people assume CPE == small residential devices;
> > however, CPE covers a broad spectrum of appliances. Some of our
> > customers' premises are data centres, some are HQs, some are
> > campuses, some are branches. For residential CPE we use the
> > Broadband Forum's CPE Wide Area Network management protocol
> > (TR-069), which may be easier to modify to handle virtual
> > machines/containers etc. than to get OpenStack to scale to millions
> > of nodes. However, that still leaves us with the need to manage a
> > stack of servers in thousands of telephone exchanges, central
> > offices or even cell sites, running multiple workloads in a
> > distributed, fault-tolerant manner.
> >
> >
> > Best Regards,
> > Peter.
> >
> >
> > On Tue, Aug 30, 2016 at 4:48 AM, joehuang < joehuang at huawei.com >
> > wrote:
> >
> >
> > Hello, Jay,
> >
> > > The Telco vCPE and Mobile "Edge cloud" (hint: not a cloud) use
> > > cases
> >
> > Do you mean Mobile Edge Computing for Mobile "Edge cloud"? If so,
> > it's a cloud. The introduction slides [1] can help you learn the
> > use cases quickly; there is lots of material on the ETSI website [2].
> >
> > [1]
> > http://www.etsi.org/images/files/technologies/MEC_Introduction_slides__SDN_World_Congress_15-10-14.pdf
> > [2]
> > http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing
> >
> > And when we talk about massively distributed clouds, vCPE is only one
> > of the scenarios (currently being argued about), but we can't forget that
> > there are other scenarios like vCDN, vEPC, vIMS, MEC, IoT, etc.
> > Architecture-level discussion is still necessary to see whether the
> > current design and new proposals can fulfill the demands. If there are
> > lots of proposals, it's good to compare the pros and cons, and to identify
> > which scenarios each proposal works for and which scenarios it can't
> > handle very well.
> >
> > ( Hope this reply in the thread :) )
> >
> > Best Regards
> > Chaoyi Huang(joehuang)
> > ________________________________________
> > From: Jay Pipes [ jaypipes at gmail.com ]
> > Sent: 29 August 2016 18:48
> > To: openstack-dev at lists.openstack.org
> >
> >
> > Subject: Re: [openstack-dev] [all][massively
> > distributed][architecture]Coordination between actions/WGs
> >
> > On 08/27/2016 11:16 AM, HU, BIN wrote:
> > > The challenge in OpenStack is how to enable the innovation built on
> > > top of OpenStack.
> >
> > No, that's not the challenge for OpenStack.
> >
> > That's like saying the challenge for gasoline is how to enable the
> > innovation of a jet engine.
> >
> > > So telco use cases are not only the innovation built on top of
> > > OpenStack. Instead, telco use cases, e.g. Gluon (NFV networking),
> > > vCPE Cloud, Mobile Cloud, Mobile Edge Cloud, bring the needed
> > > requirements for innovation in OpenStack itself. If OpenStack doesn't
> > > address those basic requirements,
> >
> > That's the thing, Bin, those are *not* "basic" requirements. The
> > Telco
> > vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases are asking
> > for fundamental architectural and design changes to the foundational
> > components of OpenStack. Instead of Nova being designed to manage a
> > bunch of hardware in a relatively close location (i.e. a datacenter
> > or
> > multiple datacenters), vCPE is asking for Nova to transform itself
> > into
> > a micro-agent that can be run on an Apple Watch and do things in
> > resource-constrained environments that it was never built to do.
> >
> > And, honestly, I have no idea what Gluon is trying to do. Ian sent me
> > some information a while ago on it. I read it. I still have no idea
> > what
> > Gluon is trying to accomplish other than essentially bypassing
> > Neutron
> > entirely. That's not "innovation". That's subterfuge.
> >
> > > the innovation will never happen on top of OpenStack.
> >
> > Sure it will. AT&T and BT and other Telcos just need to write their
> > own
> > software that runs their proprietary vCPE software distribution
> > mechanism, that's all. The OpenStack community shouldn't be relied
> > upon
> > to create software that isn't applicable to general cloud computing
> > and
> > cloud management platforms.
> >
> > > An example is - self-driving car is built on top of many
> > > technologies, such as sensor/camera, AI, maps, middleware etc. All
> > > innovations in each technology (sensor/camera, AI, map, etc.)
> > > bring together the innovation of self-driving car.
> >
> > Yes, indeed, but the people who created the self-driving car software
> > didn't ask the people who created the cameras to write the software
> > for
> > them that does the self-driving.
> >
> > > WE NEED INNOVATION IN OPENSTACK in order to enable the innovation
> > > built on top of OpenStack.
> >
> > You are defining "innovation" in an odd way, IMHO. "Innovation" for
> > the
> > vCPE use case sounds a whole lot like "rearchitect your entire
> > software
> > stack so that we don't have to write much code that runs on set-top
> > boxes."
> >
> > Just being honest,
> > -jay
> >
> > > Thanks
> > > Bin
> > > -----Original Message-----
> > > From: Edward Leafe [mailto: ed at leafe.com ]
> > > Sent: Saturday, August 27, 2016 10:49 AM
> > > To: OpenStack Development Mailing List (not for usage questions) <
> > > openstack-dev at lists.openstack.org >
> > > Subject: Re: [openstack-dev] [all][massively
> > > distributed][architecture]Coordination between actions/WGs
> > >
> > > On Aug 27, 2016, at 12:18 PM, HU, BIN < bh526r at att.com > wrote:
> > >
> > >>> From a telco perspective, those are the areas that allow
> > >>> innovation, and provide telco customers with new types of
> > >>> services.
> > >>
> > >> We need innovation, starting with not limiting ourselves from
> > >> bringing new ideas and new use cases, and bringing those
> > >> impossibilities to reality.
> > >
> > > There is innovation in OpenStack, and there is innovation in things
> > > built on top of OpenStack. We are simply trying to keep the two
> > > layers from getting confused.
> > >
> > >
> > > -- Ed Leafe
> > >
> > >
> > >
> > >
> > >
> > >

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


