[openstack-dev] [magnum]swarm + compose = k8s?

Jay Lau jay.lau.513 at gmail.com
Fri Oct 2 02:59:25 UTC 2015


Hi sdake and Joshua,

I think that current Kubernetes already has some notion of "multi-tenancy"
via "namespaces"; you can get more detail here:
https://github.com/kubernetes/kubernetes/tree/master/docs/admin/namespaces
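
To make this concrete, here is a minimal sketch (not Magnum code; just an
illustration against the plain Kubernetes REST API, with a made-up endpoint
and cert paths) of creating a per-tenant namespace and listing the pods in
it:

    import requests

    # Illustrative values: in practice the API endpoint and TLS material
    # would come from the bay that Magnum provisioned.
    K8S_API = "https://10.0.0.5:6443"
    CA_CERT = "bay-ca.pem"
    CLIENT_CERT = ("bay-cert.pem", "bay-key.pem")

    def create_namespace(name):
        """Create a Kubernetes namespace (the unit of "multi-tenancy")."""
        body = {"apiVersion": "v1",
                "kind": "Namespace",
                "metadata": {"name": name}}
        resp = requests.post(K8S_API + "/api/v1/namespaces",
                             json=body, cert=CLIENT_CERT, verify=CA_CERT)
        resp.raise_for_status()
        return resp.json()

    def list_pods(namespace):
        """Pods are namespace-scoped, so each tenant only sees its own."""
        resp = requests.get(K8S_API + "/api/v1/namespaces/%s/pods" % namespace,
                            cert=CLIENT_CERT, verify=CA_CERT)
        resp.raise_for_status()
        return resp.json()["items"]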

IMHO, this is an important feature for sharing a bay/baymodel across
different tenants.

For baymodel sharing, we already have a bp tracking this:
https://blueprints.launchpad.net/magnum/+spec/public-baymodels

For bay sharing, this may need more discussion, but at a minimum sharing a
bay reduces the overhead of provisioning one bay per tenant, and we can
leverage "namespaces" in Kubernetes to support this. I also filed a bp here:
https://blueprints.launchpad.net/magnum/+spec/tenant-shared-model
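
A rough sketch of the glue that bp is proposing (purely hypothetical, reusing
create_namespace() from the sketch above; the admin token and endpoint are
placeholders):

    from keystoneclient.v3 import client as ks_client

    keystone = ks_client.Client(token="ADMIN_TOKEN",
                                endpoint="http://keystone:35357/v3")

    # One namespace per Keystone project that is allowed to use a shared bay.
    for project in keystone.projects.list():
        # Namespace names must be DNS-safe; project IDs are a simple choice.
        create_namespace("tenant-%s" % project.id)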

Thanks!

On Thu, Oct 1, 2015 at 6:55 AM, Joshua Harlow <harlowja at outlook.com> wrote:

> Totally get it,
>
> And it's interesting to see the boundaries that are being pushed,
>
> Also interesting to know the state of the world, and the state of magnum
> and the state of COE systems. I'm somewhat surprised that they lack
> multi-tenancy in any kind of manner (but I guess I'm not too surprised,
> it's a feature that many don't add on until later, for better or
> worse...), especially kubernetes (coming from google), but not entirely
> shocked by it ;-)
>
> Insightful stuff, thanks :)
>
> Steven Dake (stdake) wrote:
> > Joshua,
> >
> > If you share resources, you give up multi-tenancy.  No COE system has the
> > concept of multi-tenancy (kubernetes has some basic implementation but it
> > is totally insecure).  Not only does multi-tenancy have to “look like” it
> > offers isolation to multiple tenants, it actually has to deliver the
> > goods.
> >
> > I understand that at first glance a company like Yahoo may not want
> > separate bays for their various applications because of the perceived
> > administrative overhead.  I would then challenge Yahoo to go deploy a COE
> > like kubernetes (which has no multi-tenancy or a very basic
> > implementation of such) and get it to work with hundreds of different
> > competing applications.  I would speculate the administrative overhead of
> > getting all that to work would be greater than the administrative overhead
> > of simply doing a bay create for the various tenants.
> >
> > Placing tenancy inside a COE seems interesting, but no COE does that
> > today.  Maybe in the future they will.  Magnum was designed to present an
> > integration point between COEs and OpenStack today, not five years down
> > the road.  It's not as if we took shortcuts to get to where we are.
> >
> > I will grant you that density is lower with the current design of Magnum
> > vs. a full-on integration with OpenStack within the COE itself.  However,
> > that model, which is what I believe you proposed, is a huge design change
> > to each COE which would overly complicate the COE for the gain of
> > increased density.  I personally don’t feel that pain is worth the gain.
> >
> > Regards,
> > -steve
> >
> >
> > On 9/30/15, 2:18 PM, "Joshua Harlow" <harlowja at outlook.com> wrote:
> >
> >> Wouldn't that limit the ability to share/optimize resources then and
> >> increase the number of operators needed (since each COE/bay would need
> >> its own set of operators managing it)?
> >>
> >> If all tenants are in a single openstack cloud, and under say a single
> >> company, then there isn't much need for management isolation (in fact I
> >> think said feature is actually an anti-feature in a case like this).
> >> Especially since that management is already handled by keystone and the
> >> project/tenant & user associations and such there.
> >>
> >> Security isolation I get, but if the COE is already multi-tenant aware
> >> and that multi-tenancy is connected into the openstack tenancy model,
> >> then it seems like that point is nil?
> >>
> >> I get that the current tenancy boundary is the bay (aka the COE right?)
> >> but is that changeable? Is that ok with everyone? It seems oddly matched
> >> to say a company like yahoo, or other private cloud, where one COE would
> >> I think be preferred and tenancy should go inside of that; vs. an
> >> eggshell-like solution that seems like it would create more management and
> >> operability pain (now each yahoo internal group that creates a bay/coe
> >> needs to figure out how to operate it? and resources can't be shared
> >> and/or orchestrated across bays; hmmmm, seems like not fully using a COE
> >> for what it can do?)
> >>
> >> Just my random thoughts, not sure how much is fixed in stone.
> >>
> >> -Josh
> >>
> >> Adrian Otto wrote:
> >>> Joshua,
> >>>
> >>> The tenancy boundary in Magnum is the bay. You can place whatever
> >>> single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
> >>> Swarm). This allows you to use native tools to interact with the COE in
> >>> that bay, rather than using an OpenStack specific client. If you want to
> >>> use the OpenStack client to create bays, pods, and containers, you
> >>> can do that today. You also have the choice, for example, to run kubectl
> >>> against your Kubernetes bay, if you so desire.
> >>>
> >>> Bays offer both a management and security isolation between multiple
> >>> tenants. There is no intent to share a single bay between multiple
> >>> tenants. In your use case, you would simply create two bays, one for
> >>> each of the yahoo-mail.XX tenants. I am not convinced that having an
> >>> uber-tenant makes sense.
> >>>
> >>> Adrian
> >>>
> >>>> On Sep 30, 2015, at 1:13 PM, Joshua Harlow <harlowja at outlook.com> wrote:
> >>>>
> >>>> Adrian Otto wrote:
> >>>>> Thanks everyone who has provided feedback on this thread. The good
> >>>>> news is that most of what has been asked for from Magnum is actually
> >>>>> in scope already, and some of it has already been implemented. We
> >>>>> never aimed to be a COE deployment service. That happens to be a
> >>>>> necessity to achieve our more ambitious goal: We want to provide a
> >>>>> compelling Containers-as-a-Service solution for OpenStack clouds in a
> >>>>> way that offers maximum leverage of what’s already in OpenStack,
> >>>>> while giving end users the ability to use their favorite tools to
> >>>>> interact with their COE of choice, with the multi-tenancy capability
> >>>>> we expect from all OpenStack services, and simplified integration
> >>>>> with a wealth of existing OpenStack services (Identity,
> >>>>> Orchestration, Images, Networks, Storage, etc.).
> >>>>>
> >>>>> The areas we have disagreement are whether the features offered for
> >>>>> the k8s COE should be mirrored in other COE’s. We have not attempted
> >>>>> to do that yet, and my suggestion is to continue resisting that
> >>>>> temptation because it is not aligned with our vision. We are not here
> >>>>> to re-invent container management as a hosted service. Instead, we
> >>>>> aim to integrate prevailing technology, and make it work great with
> >>>>> OpenStack. For example, adding docker-compose capability to Magnum is
> >>>>> currently out-of-scope, and I think it should stay that way. With
> >>>>> that said, I’m willing to have a discussion about this with the
> >>>>> community at our upcoming Summit.
> >>>>>
> >>>>> An argument could be made for feature consistency among various COE
> >>>>> options (Bay Types). I see this as a relatively low value pursuit.
> >>>>> Basic features like integration with OpenStack Networking and
> >>>>> OpenStack Storage services should be universal. Whether you can
> >>>>> present a YAML file for a bay to perform internal orchestration is
> >>>>> not important in my view, as long as there is a prevailing way of
> >>>>> addressing that need. In the case of Docker Bays, you can simply
> >>>>> point a docker-compose client at it, and that will work fine.
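> >>>>>
> >>>>> As a rough sketch (illustrative only; this is docker-py pointed
> >>>>> straight at a bay's native Docker/Swarm endpoint, with made-up
> >>>>> addresses and TLS paths, not a Magnum API):
> >>>>>
> >>>>>     import docker
> >>>>>     from docker import tls
> >>>>>
> >>>>>     # The swarm endpoint and TLS material would come from the bay that
> >>>>>     # Magnum created; the values here are placeholders.
> >>>>>     tls_config = tls.TLSConfig(
> >>>>>         client_cert=("bay-cert.pem", "bay-key.pem"),
> >>>>>         ca_cert="bay-ca.pem")
> >>>>>     cli = docker.Client(base_url="tcp://10.0.0.6:2376", tls=tls_config)
> >>>>>
> >>>>>     # Plain docker API calls: this is all docker-compose itself needs.
> >>>>>     container = cli.create_container(image="nginx", name="web")
> >>>>>     cli.start(container=container.get("Id"))
> >>>>>     print([c["Names"] for c in cli.containers()])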
> >>>>>
> >>>> So an interesting question, but how is tenancy going to work, will
> >>>> there be a keystone tenancy <-> COE tenancy adapter? From my
> >>>> understanding a whole bay (COE?) is owned by a tenant, which is great
> >>>> for tenants that want to ~experiment~ with a COE but seems disjoint
> >>>> from the end goal of an integrated COE where the tenancy model of both
> >>>> keystone and the COE is either the same or is adapted via some adapter
> >>>> layer.
> >>>>
> >>>> For example:
> >>>>
> >>>> 1) Bay that is connected to uber-tenant 'yahoo'
> >>>>
> >>>> 1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
> >>>> 1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
> >>>> ...
> >>>>
> >>>> All that tenancy information is in keystone, not replicated/synced
> >>>> into the COE (or in some other COE specific disjoint system).
> >>>>
> >>>> Thoughts?
> >>>>
> >>>> This one becomes especially hard if said COE(s) don't even have a
> >>>> tenancy model in the first place :-/
> >>>>
> >>>>> Thanks,
> >>>>>
> >>>>> Adrian
> >>>>>
> >>>>>> On Sep 30, 2015, at 8:58 AM, Devdatta Kulkarni
> >>>>>> <devdatta.kulkarni at RACKSPACE.COM> wrote:
> >>>>>>
> >>>>>> +1 Hongbin.
> >>>>>>
> >>>>>> From the perspective of Solum, which hopes to use Magnum for its
> >>>>>> application container scheduling requirements, deep integration of
> >>>>>> COEs with OpenStack services like Keystone will be useful.
> >>>>>> Specifically, I am thinking that it will be good if Solum can
> >>>>>> depend on Keystone tokens to deploy and schedule containers on the
> >>>>>> Bay nodes instead of having to use COE specific credentials. That
> >>>>>> way, container resources will become first class components that
> >>>>>> can be monitored using Ceilometer, access controlled using
> >>>>>> Keystone, and managed from within Horizon.
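> >>>>>>
> >>>>>> Something like this is what I have in mind (a hand-wavy sketch; the
> >>>>>> Magnum endpoint and container-create path are assumptions on my part,
> >>>>>> the point being that only a Keystone token is involved):
> >>>>>>
> >>>>>>     import requests
> >>>>>>     from keystoneclient.auth.identity import v3
> >>>>>>     from keystoneclient import session
> >>>>>>
> >>>>>>     # Authenticate against Keystone and get a scoped token.
> >>>>>>     auth = v3.Password(auth_url="http://keystone:5000/v3",
> >>>>>>                        username="solum", password="secret",
> >>>>>>                        project_name="demo",
> >>>>>>                        user_domain_name="Default",
> >>>>>>                        project_domain_name="Default")
> >>>>>>     sess = session.Session(auth=auth)
> >>>>>>     token = sess.get_token()
> >>>>>>
> >>>>>>     # Hypothetical call against a Magnum endpoint; no docker/k8s
> >>>>>>     # credentials are needed, just the Keystone token.
> >>>>>>     requests.post("http://magnum-api:9511/v1/containers",
> >>>>>>                   headers={"X-Auth-Token": token},
> >>>>>>                   json={"name": "app-1", "image": "nginx",
> >>>>>>                         "bay_uuid": "BAY_UUID_HERE"})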
> >>>>>>
> >>>>>> Regards, Devdatta
> >>>>>>
> >>>>>>
> >>>>>> From: Hongbin Lu <hongbin.lu at huawei.com>
> >>>>>> Sent: Wednesday, September 30, 2015 9:44 AM
> >>>>>> To: OpenStack Development Mailing List (not for usage questions)
> >>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >>>>>>
> >>>>>>
> >>>>>> +1 from me as well.
> >>>>>>
> >>>>>> I think what makes Magnum appealing is the promise to provide
> >>>>>> container-as-a-service. I see coe deployment as a helper to achieve
> >>>>>> the promise, instead of the main goal.
> >>>>>>
> >>>>>> Best regards, Hongbin
> >>>>>>
> >>>>>>
> >>>>>> From: Jay Lau [mailto:jay.lau.513 at gmail.com]
> >>>>>> Sent: September-29-15 10:57 PM
> >>>>>> To: OpenStack Development Mailing List (not for usage questions)
> >>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> +1 to Egor, I think that the final goal of Magnum is container as a
> >>>>>> service but not coe deployment as a service. ;-)
> >>>>>>
> >>>>>> Especially since we are also working on the Magnum UI, the Magnum UI
> >>>>>> should export some interfaces to enable end users to create container
> >>>>>> applications, not only coe deployments.
> >>>>>>
> >>>>>> I hope that Magnum can be treated as another "Nova" which is
> >>>>>> focusing on container service. I know it is difficult to unify all
> >>>>>> of the concepts in different coes (k8s has pod, service, rc; swarm
> >>>>>> only has container; nova only has VM, PM with different
> >>>>>> hypervisors), but this deserves some deep dive and thinking to see
> >>>>>> how we can move forward.....
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <EGuz at walmartlabs.com>
> >>>>>> wrote:
> >>>>>>
> >>>>>> Definitely ;), but there are some thoughts on Tom’s email.
> >>>>>>
> >>>>>> I agree that we shouldn't reinvent apis, but I don’t think Magnum
> >>>>>> should only focus on deployment (I feel we will become another
> >>>>>> Puppet/Chef/Ansible module if we do that :)). I believe our goal should
> >>>>>> be to seamlessly integrate Kub/Mesos/Swarm into the OpenStack ecosystem
> >>>>>> (Neutron/Cinder/Barbican/etc) even if we need to step into the
> >>>>>> Kub/Mesos/Swarm communities for that.
> >>>>>>
> >>>>>> — Egor
> >>>>>>
> >>>>>> From: Adrian Otto <adrian.otto at rackspace.com>
> >>>>>> Reply-To: "OpenStack Development Mailing List (not for usage
> >>>>>> questions)" <openstack-dev at lists.openstack.org>
> >>>>>> Date: Tuesday, September 29, 2015 at 08:44
> >>>>>> To: "OpenStack Development Mailing List (not for usage
> >>>>>> questions)" <openstack-dev at lists.openstack.org>
> >>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >>>>>>
> >>>>>> This is definitely a topic we should cover in Tokyo.
> >>>>>>
> >>>>>> On Sep 29, 2015, at 8:28 AM, Daneyon Hansen
> >>>>>> (danehans) <danehans at cisco.com> wrote:
> >>>>>>
> >>>>>>
> >>>>>> +1
> >>>>>>
> >>>>>> From: Tom Cammann <tom.cammann at hpe.com>
> >>>>>> Reply-To: "openstack-dev at lists.openstack.org"
> >>>>>> <openstack-dev at lists.openstack.org>
> >>>>>> Date: Tuesday, September 29, 2015 at 2:22 AM
> >>>>>> To: "openstack-dev at lists.openstack.org"
> >>>>>> <openstack-dev at lists.openstack.org>
> >>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >>>>>>
> >>>>>> My thinking over the last couple of months has been to completely
> >>>>>> deprecate the COE-specific APIs such as pod/service/rc and
> >>>>>> container.
> >>>>>>
> >>>>>> As we now support Mesos, Kubernetes and Docker Swarm, it's going to
> >>>>>> be very difficult and probably a wasted effort trying to
> >>>>>> consolidate their separate APIs under a single Magnum API.
> >>>>>>
> >>>>>> I'm starting to see Magnum as COEDaaS - Container Orchestration
> >>>>>> Engine Deployment as a Service.
> >>>>>>
> >>>>>> On 29/09/15 06:30, Ton Ngo wrote:
> >>>>>>
> >>>>>> Would it make sense to ask the opposite of Wanghua's question: should
> >>>>>> pod/service/rc be deprecated if the user can easily get to the k8s
> >>>>>> api? Even if we want to orchestrate these in a Heat template, the
> >>>>>> corresponding heat resources can just interface with k8s instead of
> >>>>>> Magnum.
> >>>>>>
> >>>>>> Ton Ngo,
> >>>>>>
> >>>>>>
> >>>>>> From: Egor Guz <EGuz at walmartlabs.com>
> >>>>>> To: "openstack-dev at lists.openstack.org"
> >>>>>> <openstack-dev at lists.openstack.org>
> >>>>>> Date: 09/28/2015 10:20 PM
> >>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> Also I believe docker compose is just a command-line tool which
> >>>>>> doesn’t have any api or scheduling features. But during the last
> >>>>>> DockerCon hackathon PayPal folks implemented a docker compose executor
> >>>>>> for Mesos (https://github.com/mohitsoni/compose-executor) which can
> >>>>>> give you a pod-like experience.
> >>>>>>
> >>>>>> — Egor
> >>>>>>
> >>>>>> From: Adrian Otto <adrian.otto at rackspace.com>
> >>>>>> Reply-To: "OpenStack Development Mailing List (not for usage
> >>>>>> questions)" <openstack-dev at lists.openstack.org>
> >>>>>> Date: Monday, September 28, 2015 at 22:03
> >>>>>> To: "OpenStack Development Mailing List (not for usage
> >>>>>> questions)" <openstack-dev at lists.openstack.org>
> >>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >>>>>>
> >>>>>> Wanghua,
> >>>>>>
> >>>>>> I do follow your logic, but docker-compose only needs the docker
> >>>>>> API to operate. We are intentionally avoiding re-inventing the
> >>>>>> wheel. Our goal is not to replace docker swarm (or other existing
> >>>>>> systems), but to complement them. We want to offer users of
> >>>>>> Docker the richness of native APIs and supporting tools. This way
> >>>>>> they will not need to compromise features or wait longer for us to
> >>>>>> implement each new feature as it is added. Keep in mind that our
> >>>>>> pod, service, and replication controller resources pre-date this
> >>>>>> philosophy. If we started out with the current approach, those
> >>>>>> would not exist in Magnum.
> >>>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>> Adrian
> >>>>>>
> >>>>>> On Sep 28, 2015, at 8:32 PM, 王华 <wanghua.humble at gmail.com> wrote:
> >>>>>>
> >>>>>> Hi folks,
> >>>>>>
> >>>>>> Magnum now exposes service, pod, etc. to users for the kubernetes coe,
> >>>>>> but only exposes container for the swarm coe. As far as I know, swarm
> >>>>>> is only a container scheduler, which is like nova in openstack. Docker
> >>>>>> compose is an orchestration program, which is like heat in openstack.
> >>>>>> k8s is the combination of scheduler and orchestration. So I think it
> >>>>>> would be better to expose the compose apis to users, which are at the
> >>>>>> same level as k8s.
> >>>>>>
> >>>>>>
> >>>>>> Regards Wanghua
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> Thanks, Jay Lau (Guangya Liu)
> >>>>>>



-- 
Thanks,

Jay Lau (Guangya Liu)