[openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward
Joe Gordon
joe.gordon0 at gmail.com
Fri Dec 12 03:28:53 UTC 2014
On Thu, Dec 11, 2014 at 6:25 PM, joehuang <joehuang at huawei.com> wrote:
> Hello, Joe
>
>
>
> Thank you for your good question.
>
>
>
> Question:
>
> How would something like flavors work across multiple vendors? The
> OpenStack API doesn't have any hard-coded names and sizes for flavors,
> so a flavor such as m1.tiny may actually be very different from vendor
> to vendor.
>
>
>
> Answer:
>
> The flavor is defined by the cloud operator in the cascading OpenStack.
> Nova-proxy (the driver for “Nova as hypervisor”) syncs the flavor to a
> cascaded OpenStack the first time it is used there. If the flavor is
> changed before a new VM is booted, the updated flavor is pushed to the
> cascaded OpenStack just before the boot request. Through this
> synchronization mechanism, every flavor used in the multi-vendor
> cascaded OpenStacks stays identical to the one defined at the cascading
> level, providing a consistent view of flavors.
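>
> A minimal sketch of this sync logic in Python with python-novaclient
> (a hypothetical helper; the real Nova-proxy code in the tricircle PoC
> differs in detail):
>
>     from novaclient import exceptions
>
>     def ensure_flavor(cascading_nova, cascaded_nova, name):
>         # The flavor defined at the cascading level is the source of
>         # truth; cascaded copies are created or refreshed lazily.
>         src = cascading_nova.flavors.find(name=name)
>         try:
>             dst = cascaded_nova.flavors.find(name=name)
>             if (dst.ram, dst.vcpus, dst.disk) != \
>                     (src.ram, src.vcpus, src.disk):
>                 # Changed at the cascading level: replace the copy.
>                 cascaded_nova.flavors.delete(dst)
>                 dst = None
>         except exceptions.NotFound:
>             dst = None  # first use in this cascaded OpenStack
>         if dst is None:
>             cascaded_nova.flavors.create(name=src.name, ram=src.ram,
>                                          vcpus=src.vcpus, disk=src.disk)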
>
I don't think this is sufficient. If the underlying hardware differs
between vendors, setting the same values for a flavor will result in
different performance characteristics. For example, nova lets you set the
number of VCPUs, but it doesn't provide an easy way to define how powerful
a VCPU is. Flavors are also commonly hardware dependent; take what
Rackspace offers:
http://www.rackspace.com/cloud/public-pricing#cloud-servers
Rackspace has "I/O Optimized" flavors:
* High-performance, RAID 10-protected SSD storage
* Option of booting from Cloud Block Storage (additional charges apply for
Cloud Block Storage)
* Redundant 10-Gigabit networking
* Disk I/O scales with the number of data disks, up to ~80,000 4K random
read IOPS and ~70,000 4K random write IOPS.
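Hardware-dependent flavors like these are typically realized by pinning a
flavor to a host aggregate via extra specs, which the API treats as opaque
strings. A sketch with python-novaclient (the aggregate, flavor name, and
metadata values are illustrative; USER, PASSWORD, PROJECT, AUTH_URL are
placeholders):

    from novaclient import client

    nova = client.Client('2', USER, PASSWORD, PROJECT, AUTH_URL)

    # Group the hosts that actually have local SSDs into an aggregate.
    agg = nova.aggregates.create('ssd-io-optimized', 'nova')
    nova.aggregates.set_metadata(agg, {'ssd': 'true'})

    # An "I/O Optimized" flavor that only lands on those hosts (requires
    # the AggregateInstanceExtraSpecsFilter scheduler filter).
    flavor = nova.flavors.create(name='io1.small', ram=4096, vcpus=2,
                                 disk=40)
    flavor.set_keys({'aggregate_instance_extra_specs:ssd': 'true'})

Nothing in those extra specs is portable: another cloud without a matching
aggregate and scheduler-filter setup will schedule the "same" flavor onto
completely different hardware.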
How would cascading support something like this?
>
>
> Best Regards
>
>
>
> Chaoyi Huang ( joehuang )
>
>
>
> *From:* Joe Gordon [mailto:joe.gordon0 at gmail.com]
> *Sent:* Friday, December 12, 2014 8:17 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells -
> summit recap and move forward
>
>
>
>
>
>
>
> On Thu, Dec 11, 2014 at 1:02 AM, joehuang <joehuang at huawei.com> wrote:
>
> Hello, Russell,
>
> Many thanks for your reply. See inline comments.
>
> -----Original Message-----
> From: Russell Bryant [mailto:rbryant at redhat.com]
> Sent: Thursday, December 11, 2014 5:22 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells -
> summit recap and move forward
>
> >> On Fri, Dec 5, 2014 at 8:23 AM, joehuang <joehuang at huawei.com> wrote:
> >>> Dear all & TC & PTL,
> >>>
> >>> In the 40-minute cross-project summit session “Approaches for
> >>> scaling out”[1], almost 100 people attended, and the conclusion
> >>> was that cells cannot cover the use cases and requirements which
> >>> the OpenStack cascading solution[2] aims to address; the
> >>> background, including use cases and requirements, is also
> >>> described in this mail.
>
> >I must admit that this was not the reaction I came away from the
> >discussion with. There was a lot of confusion, and as we started
> >looking closer, many (or perhaps most) people speaking up in the room
> >did not agree that the requirements being stated are things we want to
> >try to satisfy.
>
> [joehuang] Could you please confirm your position: 1) cells cannot
> cover the use cases and requirements which the OpenStack cascading
> solution aims to address; 2) whether to satisfy those use cases and
> requirements needs further discussion.
>
> On 12/05/2014 06:47 PM, joehuang wrote:
> >>> Hello, Davanum,
> >>>
> >>> Thanks for your reply.
> >>>
> >>> Cells can't meet the use cases and requirements described in the
> >>> mail.
>
> >You're right that cells doesn't solve all of the requirements you're
> >discussing. Cells addresses scale in a region. My impression from the
> >summit session and other discussions is that the scale issues addressed
> >by cells are considered a priority, while the "global API" bits are
> >not.
>
> [joehuang] Agreed, cells is the first-class priority.
>
> >>> 1. Use cases
> >>> a). Vodafone use case[4] (OpenStack summit speech video from 9'02"
> >>> to 12'30"), establishing globally addressable tenants, which results
> >>> in efficient service deployment.
>
> >Keystone has been working on federated identity. That part makes
> >sense, and is already well under way.
>
> [joehuang] The major challenge for the VDF use case is cross-OpenStack
> networking for tenants. A tenant's VMs/volumes may be allocated in
> geographically different data centers, but the virtual network
> (L2/L3/FW/VPN/LB) should be built for each tenant automatically and
> isolated between tenants. Keystone federation can help automate
> authorization, but the cross-OpenStack network automation challenge
> remains. A proprietary orchestration layer could solve the automation
> issue, but VDF doesn't want a proprietary API on the north-bound
> interface, because no ecosystem is available for it. Other issues, for
> example image distribution, also cannot be solved by Keystone
> federation.
>
> >>> b). Telefonica use case[5], creating a virtual DC (data center)
> >>> across multiple physical DCs with a seamless experience.
>
> >If we're talking about multiple DCs that are effectively local to each
> >other with high bandwidth and low latency, that's one conversation. My
> >impression is that you want to provide a single OpenStack API on top of
> >globally distributed DCs. I honestly don't see that as a problem we
> >should be trying to tackle. I'd rather continue to focus on making
> >OpenStack work *really* well split into regions. I think some people
> >are trying to use cells in a geographically distributed way, as well.
> >I'm not sure that's a well understood or supported thing, though.
> >Perhaps the folks working on the new version of cells can comment
> >further.
>
> [joehuang] 1) The split-region approach cannot provide cross-OpenStack
> networking automation for tenants. 2) Exactly; the motivation for
> cascading is a "single OpenStack API on top of globally distributed
> DCs". Of course, cascading can also be used for DCs close to each other
> with high bandwidth and low latency. 3) Comments from the cells folks
> are welcome.
>
> >>> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6,
> >>> and #8. An NFV cloud is by nature distributed across, but
> >>> inter-connected between, many data centers.
>
> >I'm afraid I don't understand this one. In many conversations about
> >NFV, I haven't heard this before.
>
> [joehuang] This is the ETSI requirements and use cases specification
> for NFV. ETSI is the home of the Industry Specification Group for NFV.
> In Figure 14 (virtualization of EPC) of that document, you can see that
> the operator's cloud includes many data centers providing connection
> service to end users through inter-connected VNFs. The requirements
> listed at https://wiki.openstack.org/wiki/TelcoWorkingGroup are mainly
> about running specific VNFs (like IMS, SBC, MME, HSS, S/P-GW, etc.)
> over a cloud, e.g. migrating traditional telco applications from
> proprietary hardware to the cloud. Not all NFV requirements have been
> covered yet. Forgive me, there are so many telco terms here.
>
> >>
> >>> 2. Requirements
> >>> a). The operator has a multi-site cloud; each site can use one or
> >>> multiple vendors' OpenStack distributions.
>
> >Is this a technical problem, or is it a business problem of vendors
> >not wanting to support a mixed environment that you're trying to work
> >around with a technical solution?
>
> [joehuang] Please refer to the VDF use case; the multi-vendor policy is
> stated very clearly: 1) Local relationships: operating companies also
> have long-standing relationships with their own choice of vendors; 2)
> Multi-vendor: each site can use one or multiple vendors, which leads to
> better use of local resources and capabilities. A technical solution
> must be provided for multi-vendor integration and verification; in the
> past, for mobile networks, that was usually an ETSI standard. But how
> do we do that in a multi-vendor cloud infrastructure? Cascading
> provides a way to use the OpenStack API as the integration interface.
>
>
>
> How would something like flavors work across multiple vendors? The
> OpenStack API doesn't have any hard-coded names and sizes for flavors,
> so a flavor such as m1.tiny may actually be very different from vendor
> to vendor.
>
>
>
>
> >> b). Each site has its own requirements and upgrade schedule while
> >> maintaining the standard OpenStack API. c). The multi-site cloud must
> >> provide unified resource management with a global open API exposed,
> >> for example creating a virtual DC across multiple physical DCs with a
> >> seamless experience.
>
> >> Although a proprietary orchestration layer could be developed for
> >> the multi-site cloud, that puts a proprietary API on the north-bound
> >> interface. The cloud operators want an ecosystem-friendly global open
> >> API for the multi-site cloud, for global access.
>
> >I guess the question is, do we see a "global API" as something we want
> >to accomplish? What you're talking about is huge, and I'm not even sure
> >how you would expect it to work in some cases (like networking).
>
> [joehuang] Yes, the most challenging part is networking. In the PoC, L2
> networking across OpenStack instances leverages the L2 population
> mechanism. The L2proxy for DC1 in the cascading layer detects that the
> new VM1's port (in DC1) is up, and ML2 L2 population is then activated:
> VM1's tunneling endpoint (host IP or L2GW IP in DC1) is populated to
> the L2proxy for DC2, which creates an external port in the DC2 Neutron
> carrying that endpoint. The external port is then attached to the L2GW,
> or, if no L2GW is used, only the external port is created and L2
> population inside DC2 is activated to notify all VMs located in DC2 on
> the same L2 network. For L3 networking, what the PoC implements is
> extra routes over GRE to serve local VLAN/VxLAN networks located in
> different DCs. Of course, other L3 networking methods can be developed,
> for example through a VPN service. There are 4 or 5 BPs discussing edge
> network gateways to connect an OpenStack tenant network to an outside
> network; all of these technologies can be leveraged for cross-OpenStack
> networking in different scenarios. To experience the cross-OpenStack
> networking, please try the PoC source code:
> https://github.com/stackforge/tricircle
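>
> A rough sketch of that L2proxy behavior in Python with
> python-neutronclient (the client setup, callback, and field values are
> illustrative; see the tricircle source above for the real code):
>
>     from neutronclient.v2_0 import client as neutron_client
>
>     # Neutron client pointed at the cascaded OpenStack in DC2
>     # (USER, PASSWORD, PROJECT, DC2_AUTH_URL are placeholders).
>     dc2_neutron = neutron_client.Client(
>         username=USER, password=PASSWORD,
>         tenant_name=PROJECT, auth_url=DC2_AUTH_URL)
>
>     def on_port_active(port, dc2_net_id, dc1_endpoint_ip):
>         # Called by the L2proxy for DC1 when VM1's port goes ACTIVE.
>         # Mirror the port into the DC2 Neutron so L2 population there
>         # can program tunnels toward VM1's endpoint (host IP or L2GW
>         # IP in DC1).
>         dc2_neutron.create_port({'port': {
>             'network_id': dc2_net_id,           # DC2 side of the L2 net
>             'mac_address': port['mac_address'],
>             'binding:profile': {'host_ip': dc1_endpoint_ip},
>             'device_owner': 'compute:external', # marks a remote VM port
>         }})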
>
> >In any case, to be as clear as possible, I'm not convinced this is
> >something we should be working on. I'm going to need to see much more
> >overwhelming support for the idea before helping to figure out any
> >further steps.
>
> [joehuang] If you or anyone else has any doubts, please feel free to
> start a discussion thread. Because of the time difference, we (working
> in China) are not able to join most IRC meetings, so the mailing list
> is a good way to discuss.
>
> Russell Bryant
>
>
> Best Regards
>
> Chaoyi Huang ( joehuang )
>