[openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

Hongbin Lu hongbin.lu at huawei.com
Thu Apr 21 23:16:06 UTC 2016


Hi Monty,

I respect your position, but I want to point out that it is not just one person who wants this; there is a group of people who want it. I have been working on Magnum for about a year and a half. Along the way, I have been researching how to attract users to Magnum. My observation is that there are two groups of potential users. The first group generally works within the domain of individual COEs and wants to use the native COE APIs. The second group is generally outside that domain and wants an OpenStack way to manage containers. Below are the specific use cases:
* Some people want to migrate the workload from VM to container
* Some people want to support hybrid deployment (VMs & containers) of their application
* Some people want to bring containers (in Magnum bays) to a Heat template, and enable connections between containers and other OpenStack resources
* Some people want to bring containers to Horizon
* Some people want to send container metrics to Ceilometer
* Some people want a portable experience across COEs
* Some people just want a container and don't want the complexity of the rest (COEs, bays, baymodels, etc.)
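To make the idea concrete, here is a minimal sketch of what such an LCD API could look like: a small common interface with one adapter per COE. This is purely illustrative; all class and function names here are hypothetical, not actual Magnum code, and real drivers would call the native Swarm/Kubernetes APIs.

```python
# Hypothetical sketch of an LCD container API (illustrative only, not
# real Magnum code). One small common interface, one adapter per COE.
import abc


class ContainerDriver(abc.ABC):
    """Minimal LCD operations that every COE could support."""

    @abc.abstractmethod
    def create(self, name, image):
        """Start a container and return an opaque identifier."""

    @abc.abstractmethod
    def delete(self, container_id):
        """Stop and remove the container."""


class SwarmDriver(ContainerDriver):
    # A real driver would call the Docker Swarm API here.
    def create(self, name, image):
        return "swarm-%s" % name

    def delete(self, container_id):
        pass


class KubernetesDriver(ContainerDriver):
    # A real driver would create a Kubernetes pod here.
    def create(self, name, image):
        return "k8s-%s" % name

    def delete(self, container_id):
        pass


def driver_for(bay_coe):
    """Pick the adapter for a bay's COE; callers see only the LCD API."""
    drivers = {"swarm": SwarmDriver, "kubernetes": KubernetesDriver}
    return drivers[bay_coe]()
```

Users who need COE-specific features would still go to the native APIs directly; the LCD layer would only cover the small common subset, which is exactly the trade-off being debated in this thread.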

I think we need to research how large the second group of users is. Then, based on that data, we can decide whether the LCD API should be part of Magnum, a Magnum plugin, or should not exist at all. Thoughts?

Best regards,
Hongbin 

> -----Original Message-----
> From: Monty Taylor [mailto:mordred at inaugust.com]
> Sent: April-21-16 4:42 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> On 04/21/2016 03:18 PM, Fox, Kevin M wrote:
> > Here's where we disagree.
> 
> We may have to agree to disagree.
> 
> > You're speaking for everyone in the world now, and all you need is
> > one counter example. I'll be that guy. Me. I want a common
> > abstraction for some common LCD stuff.
> 
> We also disagree on this. Just because one human wants something does
> not make implementing that feature a good idea. In fact, good design is
> largely about appropriately and selectively saying no.
> 
> Now I'm not going to pretend that we're good at design around here...
> we seem to very easily fall into the trap that your assertion presents.
> But in almost every one of those cases, having done so winds up having
> been a mistake.
> 
> > Both Sahara and Trove have LCD abstractions for very common things.
> > Magnum should too.
> >
> > You are falsely assuming that if an LCD abstraction is provided,
> > then users can't use the raw API directly. This is false. There is
> > no either/or. You can have both. I would be against it too if they
> > were mutually exclusive. They are not.
> 
> I'm not assuming that at all. I'm quite clearly asserting that the
> existence of an OpenStack LCD is a Bad Idea. This is a thing we
> disagree about.
> 
> I think it's unfriendly to the upstreams in question. I think it does
> not provide significant enough value to the world to justify that
> unfriendliness. And also, https://xkcd.com/927/
> 
> > Thanks,
> > Kevin
> > ________________________________________
> > From: Monty Taylor [mordred at inaugust.com]
> > Sent: Thursday, April 21, 2016 10:22 AM
> > To: openstack-dev at lists.openstack.org
> > Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs
> >
> > On 04/21/2016 11:03 AM, Tim Bell wrote:
> >>
> >>
> >> On 21/04/16 17:38, "Hongbin Lu" <hongbin.lu at huawei.com> wrote:
> >>
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Adrian Otto [mailto:adrian.otto at rackspace.com]
> >>>> Sent: April-21-16 10:32 AM
> >>>> To: OpenStack Development Mailing List (not for usage questions)
> >>>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs
> >>>>
> >>>>
> >>>>> On Apr 20, 2016, at 2:49 PM, Joshua Harlow <harlowja at fastmail.com>
> >>>> wrote:
> >>>>>
> >>>>> Thierry Carrez wrote:
> >>>>>> Adrian Otto wrote:
> >>>>>>> This pursuit is a trap. Magnum should focus on making
> >>>>>>> native container APIs available. We should not wrap APIs
> >>>>>>> with leaky abstractions. The lowest common denominator of
> >>>>>>> all COEs is a remarkably low-value API that adds
> >>>>>>> considerable complexity to Magnum that will not
> >>>>>>> strategically advance OpenStack. If we instead focus our
> >>>>>>> effort on making the COEs work better on OpenStack, that
> >>>>>>> would be a winning strategy. Support and complement our
> >>>>>>> various COE ecosystems.
> >>>>>
> >>>>> So I'm all for avoiding 'wrap APIs with leaky abstractions'
> >>>>> and 'making COEs work better on OpenStack', but I do dislike
> >>>>> the part about COEs (plural) because it is once again the old
> >>>>> non-opinionated problem that we (as a community) suffer from.
> >>>>>
> >>>>> Just my 2 cents, but I'd almost rather we pick one COE and
> >>>>> integrate it deeply/tightly with openstack, and yes, if this
> >>>>> causes some part of the openstack community to be annoyed,
> >>>>> meh, too bad. Sadly I have a feeling we are hurting ourselves
> >>>>> by continuing to try to be everything and not picking anything
> >>>>> (it's a general thing we, as a group, seem to be good at, lol).
> >>>>> I mean, I get the reason to just support all the things, but it
> >>>>> feels like we as a community could just pick something, work
> >>>>> together on figuring out how to pick one, using all these
> >>>>> bright leaders we have to help make that possible (and yes,
> >>>>> this might piss some people off, too bad). Then work toward
> >>>>> making that something great and move on...
> >>>>
> >>>> The key issue preventing the selection of only one COE is that
> >>>> this area is moving very quickly. If we had decided what to
> >>>> pick at the time the Magnum idea was created, we would have
> >>>> selected Docker. If you look at it today, you might pick
> >>>> something else. A few months down the road, there may be yet
> >>>> another choice that is more compelling. The fact that a cloud
> >>>> operator can integrate services with OpenStack, and have the
> >>>> freedom to offer support for a selection of COE's is a form of
> >>>> insurance against the risk of picking the wrong one. Our
> >>>> compute service offers a choice of hypervisors, our block
> >>>> storage service offers a choice of storage hardware drivers,
> >>>> our networking service allows a choice of network drivers.
> >>>> Magnum is following the same pattern of choice that has made
> >>>> OpenStack compelling for a very diverse community. That design
> >>>> consideration was intentional.
> >>>>
> >>>> Over time, we can focus the majority of our effort on deep
> >>>> integration with COEs that users select the most. I'm convinced
> >>>> it's still too early to bet the farm on just one choice.
> >>>
> >>> If Magnum wants to avoid the risk of picking the wrong COE, that
> >>> means the risk is passed on to all our users. They might pick a
> >>> COE and explore its complexities, then find out another COE is
> >>> more compelling and their integration work is wasted. I wonder
> >>> if we can do better by taking the risk and providing insurance
> >>> for our users. I am trying to understand the rationale that
> >>> prevents us from improving the integration between COEs and
> >>> OpenStack. Personally, I don't want to end up in a situation
> >>> where "this is the pain for our users, but we cannot do
> >>> anything".
> >>
> >> We're running Magnum and have requests from our user communities
> >> for Kubernetes, Docker Swarm and Mesos. The use cases are
> >> significantly different and can justify the selection of different
> >> technologies. We're offering Kubernetes and Docker Swarm now and
> >> adding Mesos. If I was only to offer one, they'd build their own at
> >> considerable cost to them and the IT department.
> >>
> >> Magnum allows me to make them all available under the single
> >> umbrella of quota, capacity planning, identity and resource
> >> lifecycle. As experience is gained, we may make a recommendation
> >> for those who do not have a strong need but I am pleased to be able
> >> to offer all of them under the single framework.
> >>
> >> Since we're building on the native APIs for the COEs, the effort
> >> on the operator side to add new engines is really very small
> >> (compared to trying to explain to the user that they're wrong in
> >> choosing something different from the IT department).
> >>
> >> BTW, our users also really appreciate using the native APIs.
> >>
> >> Some more details at
> >> http://superuser.openstack.org/articles/openstack-magnum-on-the-cern-production-cloud
> >> and we'll give more under the hood details in a further blog.
> >
> >
> > Yes!!!
> >
> > This is 100% where the value of magnum comes from to me. It's about
> > end-user choice, and about a sane way for operators to enable that
> > end-user choice.
> >
> > I do not believe anyone in the world wants us to build an
> > abstraction layer on top of the _use_ of swarm/k8s/mesos. People who
> > want to use those technologies know what they want.
> >
> >>>>
> >>>> Adrian
> >>>>
> >>>>>> I'm with Adrian on that one. I've attended a lot of
> >>>>>> container-oriented conferences over the past year and my
> >>>>>> main takeaway is that this new crowd of potential users is
> >>>>>> not interested (at all) in an OpenStack-specific lowest
> >>>>>> common denominator API for COEs. They want to take
> >>>>>> advantage of the cool features in the Kubernetes API or the
> >>>>>> versatility of Mesos. They want to avoid caring about the
> >>>>>> infrastructure provider bit (and not deploy Mesos or
> >>>>>> Kubernetes themselves).
> >>>>>>
> >>>>>> Let's focus on the infrastructure provider bit -- that is
> >>>>>> what we do and what the ecosystem wants us to provide.
> >>>>>>
> >>>>>
> >>>>>
> >>>>
> >>>>> __________________________________________________________________________
> >>>>> OpenStack Development Mailing List (not for usage questions)
> >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>>
> >>>>
> _______________________________________________________________________
> >>>>
> >>>>
> ___
> >>>> OpenStack Development Mailing List (not for usage questions)
> >>>> Unsubscribe: OpenStack-dev-
> >>>> request at lists.openstack.org?subject:unsubscribe
> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>>>
> _______________________________________________________________________
> ___
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>>
> _______________________________________________________________________
> ___
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> _______________________________________________________________________
> ___
> >
> >
> OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> _______________________________________________________________________
> ___
> >
> >
> OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> _______________________________________________________________________
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


