[openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

Hongbin Lu hongbin.lu at huawei.com
Sat Apr 23 21:59:33 UTC 2016



> -----Original Message-----
> From: Flavio Percoco [mailto:flavio at redhat.com]
> Sent: April-23-16 12:23 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> On 21/04/16 23:34 +0000, Fox, Kevin M wrote:
> >Amrith,
> >
> >Very well thought out. Thanks. :)
> >
> >I agree a Nova driver that lets you treat containers the same way as
> >VMs, bare metal, and LXC containers would be a great thing, and if it
> >could plug into Magnum-managed clusters well, it would be awesome.
> >
> >I think a bit of the conversation around it gets muddy when you start
> >talking about the difference in philosophy between LXC and Docker
> >containers. They are very different. LXC containers are typically
> >heavyweight; you think of them more as a VM without the kernel.
> >Multiple daemons run in them, you have a regular init system, etc.
> >This isn't bad. It has some benefits, but it also has some drawbacks.
> >
> >Docker's philosophy of software deployment has typically been much
> >different than that, and doesn't lend itself to launching that way
> >with Nova. With Docker, each individual service gets its own
> >container, and they are co-scheduled -- not at the IP level but even
> >lower, at the Unix socket/filesystem level.
> >
> >For example, with Trove, architected with the Docker philosophy, you
> >might have two containers: one for MySQL, which exports its Unix
> >socket to a second container for the guest agent, which talks to MySQL
> >over the shared socket. The benefit of this is that you only need one
> >guest agent container for all of your different types of databases
> >(MySQL, Postgres, MongoDB, etc.). Your db and your guest agent can
> >even be different distros and still work. It's then also very easy to
> >upgrade just the guest agent container without affecting the db
> >container at all. You just delete/recreate that container, leaving the
> >other container alone.
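
To make the two-container pattern concrete: below is a minimal sketch
using the Docker SDK for Python. It is only illustrative (the
guest-agent image name is hypothetical), but it shows the co-scheduling
at the Unix/filesystem level described above -- both containers mount
the same volume, which carries the MySQL socket, and the agent can be
deleted/recreated without touching the database container.

    import docker

    client = docker.from_env()

    # A named volume that will hold the MySQL Unix socket, visible to
    # both containers.
    client.volumes.create(name="trove-mysql-socket")

    # Database container: MySQL writes its socket into the shared volume.
    db = client.containers.run(
        "mysql:5.7",                # example database image
        name="trove-db",
        environment={"MYSQL_ALLOW_EMPTY_PASSWORD": "yes"},
        volumes={"trove-mysql-socket": {"bind": "/var/run/mysqld",
                                        "mode": "rw"}},
        detach=True,
    )

    # Guest-agent container: talks to the database only over the shared
    # socket. The image name is a placeholder, not a real Trove artifact.
    agent = client.containers.run(
        "trove-guestagent:latest",
        name="trove-agent",
        volumes={"trove-mysql-socket": {"bind": "/var/run/mysqld",
                                        "mode": "rw"}},
        detach=True,
    )

    # Upgrading just the agent: stop and remove it, then run a new one;
    # the database container keeps running untouched.
    agent.stop()
    agent.remove()
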
> 
> There are ways to make the agent and the database run in the same
> Docker container. In my local tests, I've tried both ways, and I
> certainly prefer the one that provides separate containers for the
> agent and the database instance.
> As you mentioned below, this can't be architected under the same Nova
> API, but it could likely be architected under the same containers API,
> as this philosophy is shared by other COEs.
> 
> >So, when you architect Docker containers using this philosophy, you
> >can't really use Nova as-is as an abstraction. You can't share Unix
> >sockets between container instances... But this kind of functionality
> >is very common in all the COEs and should be easy to turn into an
> >abstraction that all current COEs can launch. Hence thinking it might
> >be best put in Magnum. A Nova extension may work too... not sure. But
> >it seems more natural in Magnum to me.
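
And the same layout maps directly onto a COE primitive. A rough sketch
of the equivalent as a single Kubernetes pod, built with the Kubernetes
Python client (again only illustrative; image names are made up), where
an emptyDir volume shared by the two containers carries the socket:

    from kubernetes import client

    # Shared emptyDir volume that carries the database's Unix socket.
    socket_vol = client.V1Volume(
        name="mysql-socket",
        empty_dir=client.V1EmptyDirVolumeSource(),
    )
    mount = client.V1VolumeMount(name="mysql-socket",
                                 mount_path="/var/run/mysqld")

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="trove-db"),
        spec=client.V1PodSpec(
            volumes=[socket_vol],
            containers=[
                client.V1Container(name="mysql", image="mysql:5.7",
                                   volume_mounts=[mount]),
                # Hypothetical guest-agent image, co-scheduled in the
                # same pod and sharing the socket directory.
                client.V1Container(name="guestagent",
                                   image="trove-guestagent:latest",
                                   volume_mounts=[mount]),
            ],
        ),
    )

    # client.CoreV1Api().create_namespaced_pod("default", pod) would then
    # schedule both containers together on one node, under one primitive.
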
> 
> I'm in the same boat as the folks who think it'd be nice to have a
> single API to manage *container lifecycle*. TBH, I'd hate to see yet
> another service/project being created for this, BUT I could see why
> some folks would prefer that over adding these APIs to Magnum.

I don't want to see another service/project either. IMHO, Magnum claims to be a container service, so this should be within its scope. However, if the majority wants to push it out, that is fine with me.

> 
> For a long time I pitched what Amrith said below. Most users shouldn't
> care whether their compute resource is a container, a bare metal node
> or a VM.
> Unfortunately, the current APIs don't allow for abstracting compute
> resources well enough. For those users that do care, access to the
> native API must be provided (YAY, Magnum does this).
> 
> As far as the Trove use case goes, I believe a common API would do most
> of the job but there are things that access to the native API would do
> better. For example:
> 
> - Managing one-agent-to-many-instances deployments.
> - Managing container upgrades (whether it's a db instance upgrade or
> an agent upgrade).
> 
> Flavio
> 
> >Thanks,
> >Kevin
> >
> >________________________________________
> >From: Amrith Kumar [amrith at tesora.com]
> >Sent: Thursday, April 21, 2016 2:21 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> >abstraction for all COEs
> >
> >As I was preparing some thoughts for the Board/TC meeting on Sunday
> >that will discuss this topic, I made the notes below and was going to
> >post them on a topic-specific etherpad, but I didn't find one.
> >
> >I want to represent the viewpoint of a consumer of compute services
> >in OpenStack. Trove is a consumer of many OpenStack services, and one
> >of the things I sometimes (infrequently) get asked is whether Trove
> >supports containers. I have wondered about the utility of running
> >databases in containers, and after quizzing the people who asked for
> >container support, I was able to put them into three buckets, ranked
> >roughly by frequency.
> >
> >2. containers are a very useful packaging construct; unionfs for VMs
> >   would be a great thing
> >3. containers launch faster than VMs
> >4. container performance is in some cases better than VMs
> >
> >That's weird -- what is #1, you may ask. Well, that was
> >
> >1. containers are cool; it is currently the highest-grossing buzzword
> >
> >OK, so I ignored #1 and focused on #2-#4, and these are very relevant
> >for Trove, I think.
> >
> >While I realize that containers offer many capabilities, from the
> >perspective of Trove I have not found a compelling reason to treat
> >them differently from any other compute capability. As a matter of
> >fact, Trove works fine with bare metal (using the Ironic driver) and
> >with VMs using the various VM drivers. I even had all of Trove working
> >with containers using nova-docker. I had to make some specific choices
> >on my Docker images, but I got it all to work as a prototype.
> >
> >My belief is that there is a group of use-cases where a common
> >compute abstraction would be beneficial. In an earlier message on one
> >of these threads, Adrian made a very good point[1]: "I suppose you
> >were imagining an LCD approach. If that's what you want, just use the
> >existing Nova API, and load different compute drivers on different
> >host aggregates. A single Nova client can produce VM, BM (Ironic), and
> >Container (libvirt-lxc) instances all with a common API (Nova) if it's
> >configured in this way. That's what we do. Flavors determine which
> >compute type you get."
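
For reference, a rough sketch of the flavor/host-aggregate approach
Adrian describes, using python-novaclient. This is only an illustration
of the idea: the credentials, aggregate metadata key, flavor names and
IDs below are placeholders, compute hosts still have to be added to
each aggregate (nova.aggregates.add_host), and the scheduler needs
AggregateInstanceExtraSpecsFilter enabled so the flavor extra specs
steer placement.

    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    # Placeholder credentials; use your own cloud's auth details.
    auth = v3.Password(auth_url="http://keystone:5000/v3",
                       username="admin", password="secret",
                       project_name="admin",
                       user_domain_id="default",
                       project_domain_id="default")
    nova = nova_client.Client("2", session=ks_session.Session(auth=auth))

    # One host aggregate per compute type; metadata tags the aggregate.
    vm_agg = nova.aggregates.create("vm-hosts", None)
    lxc_agg = nova.aggregates.create("container-hosts", None)
    nova.aggregates.set_metadata(vm_agg, {"compute_type": "kvm"})
    nova.aggregates.set_metadata(lxc_agg, {"compute_type": "lxc"})

    # Flavors carry matching extra specs, so picking a flavor picks the
    # compute type.
    vm_flavor = nova.flavors.create("m1.vm.small", ram=2048, vcpus=1,
                                    disk=20)
    vm_flavor.set_keys({"aggregate_instance_extra_specs:compute_type":
                        "kvm"})
    lxc_flavor = nova.flavors.create("m1.lxc.small", ram=2048, vcpus=1,
                                     disk=20)
    lxc_flavor.set_keys({"aggregate_instance_extra_specs:compute_type":
                         "lxc"})

    # A single boot call; the flavor decides whether you get a VM or a
    # container. Image and network IDs are placeholders.
    nova.servers.create(name="db-1", image="IMAGE_UUID",
                        flavor=lxc_flavor.id,
                        nics=[{"net-id": "NETWORK_UUID"}])
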
> >
> >He then went on to say, "If what you meant is that you could tap into
> the power of all the unique characteristics of each of the various
> compute types (through some modular extensibility framework) you'll
> likely end up with complexity in Trove that is comparable to
> integrating with the native upstream APIs, along with the disadvantage
> of waiting for OpenStack to continually catch up to the pace of change
> of the various upstream systems on which it depends. This is a recipe
> for disappointment."
> >
> >I've pondered this a while, and it is still my belief that there is a
> >class of use-cases, and I submit to you that I believe that Trove is
> one of them, where the LCD is sufficient in the area of compute. I say
> this knowing full well that in the area of storage this is likely not
> the case and we are discussing how we can better integrate with storage
> in a manner akin to what Adrian says later in his reply [1].
> >
> >I submit to you that there are likely other situations where an LCD
> approach is sufficient, and there are most definitely situations where
> an LCD approach is not sufficient, and one would benefit from "tap[ping]
> into the power of all the unique characteristics of each of the various
> compute types".
> >
> >I'm not proposing that we must have only one or the other.
> >
> >I believe that OpenStack should provide both. It should equally
> provide Magnum, a mechanism to tap into all the finer aspects of
> containers, should one want it, and also a common compute abstraction
> through some means whereby a user could get an LCD.
> >
> >I don't believe that Magnum can (or intends to) allow a user to
> provision VMs or bare-metal servers (nor should it). But I believe
> that a common compute API that provides the LCD and determines in some
> way (potentially through flavors) whether the request should be
> a bare-metal server, a VM, or a container, has value too.
> >
> >Specifically, what I'm wondering is why there isn't interest in a
> driver/plugin for Nova that will provide an LCD container capability
> from Magnum. I am sure there's a good reason for this; that's
> one of the things I was definitely looking to learn in the course of
> the board/TC meeting.
> >
> >Thanks, and my apologies for writing long emails.
> >
> >-amrith
> >
> >[1]
> >http://lists.openstack.org/pipermail/openstack-dev/2016-April/091982.html
> >
> >
> >
> >> -----Original Message-----
> >> From: Monty Taylor [mailto:mordred at inaugust.com]
> >> Sent: Thursday, April 21, 2016 4:42 PM
> >> To: openstack-dev at lists.openstack.org
> >> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> >> abstraction for all COEs
> >>
> >> On 04/21/2016 03:18 PM, Fox, Kevin M wrote:
> >> > Here's where we disagree.
> >>
> >> We may have to agree to disagree.
> >>
> >> > You're speaking for everyone in the world now, and all you need is
> >> > one counterexample. I'll be that guy. Me. I want a common
> >> > abstraction for some common LCD stuff.
> >>
> >> We also disagree on this. Just because one human wants something does
> >> not make implementing that feature a good idea. In fact, good design
> >> is largely about appropriately and selectively saying no.
> >>
> >> Now I'm not going to pretend that we're good at design around here...
> >> we seem to very easily fall into the trap that your assertion
> >> presents. But in almost every one of those cases, having done so
> >> winds up having been a mistake.
> >>
> >> > Both Sahara and Trove have LCD abstractions for very common things.
> >> > Magnum should too.
> >> >
> >> > You are falsely assuming that if an LCD abstraction is provided,
> >> > then users can't use the raw API directly. This is false. There is
> >> > no either/or. You can have both. I would be against it too if they
> >> > were mutually exclusive. They are not.
> >>
> >> I'm not assuming that at all. I'm quite clearly asserting that the
> >> existence of an OpenStack LCD is a Bad Idea. This is a thing we
> >> disagree about.
> >>
> >> I think it's unfriendly to the upstreams in question. I think it does
> >> not provide significant enough value to the world to justify that
> >> unfriendliness. And also, https://xkcd.com/927/
> >>
> >> > Thanks, Kevin
> >> > ________________________________________
> >> > From: Monty Taylor [mordred at inaugust.com]
> >> > Sent: Thursday, April 21, 2016 10:22 AM
> >> > To: openstack-dev at lists.openstack.org
> >> > Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build
> >> > unified abstraction for all COEs
> >> >
> >> > On 04/21/2016 11:03 AM, Tim Bell wrote:
> >> >>
> >> >>
> >> >> On 21/04/16 17:38, "Hongbin Lu" <hongbin.lu at huawei.com> wrote:
> >> >>
> >> >>>
> >> >>>
> >> >>>> -----Original Message-----
> >> >>>> From: Adrian Otto [mailto:adrian.otto at rackspace.com]
> >> >>>> Sent: April-21-16 10:32 AM
> >> >>>> To: OpenStack Development Mailing List (not for usage questions)
> >> >>>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build
> >> >>>> unified abstraction for all COEs
> >> >>>>
> >> >>>>
> >> >>>>> On Apr 20, 2016, at 2:49 PM, Joshua Harlow
> >> >>>>> <harlowja at fastmail.com>
> >> >>>> wrote:
> >> >>>>>
> >> >>>>> Thierry Carrez wrote:
> >> >>>>>> Adrian Otto wrote:
> >> >>>>>>> This pursuit is a trap. Magnum should focus on making native
> >> >>>>>>> container APIs available. We should not wrap APIs with leaky
> >> >>>>>>> abstractions. The lowest common denominator of all COEs is a
> >> >>>>>>> remarkably low-value API that adds considerable complexity to
> >> >>>>>>> Magnum that will not strategically advance OpenStack. If we instead
> >> >>>>>>> focus our effort on making the COEs work better on OpenStack,
> >> >>>>>>> that would be a winning strategy. Support and complement our
> >> >>>>>>> various COE ecosystems.
> >> >>>>>
> >> >>>>> So I'm all for avoiding 'wrap APIs with leaky abstractions'
> >> >>>>> and 'making COEs work better on OpenStack', but I do dislike the
> >> >>>>> part about COEs (plural) because it is once again the old
> >> >>>>> non-opinionated problem that we (as a community) suffer from.
> >> >>>>>
> >> >>>>> Just my 2 cents, but I'd almost rather we pick one COE and
> >> >>>>> integrate that deeply/tightly with OpenStack, and yes, if this
> >> >>>>> causes some part of the OpenStack community to be annoyed, meh,
> >> >>>>> too bad. Sadly I have a feeling we are hurting ourselves by
> >> >>>>> continuing to try to be everything and not picking anything
> >> >>>>> (it's a general thing we, as a group, seem to be good at, lol).
> >> >>>>> I mean I get the reason to just support all the things, but it
> >> >>>>> feels like we as a community could just pick something, work
> >> >>>>> together on figuring out how to pick one, using all these bright
> >> >>>>> leaders we have to help make that possible (and yes this might
> >> >>>>> piss some people off, too bad). Then work toward making that
> >> >>>>> something great and move on...
> >> >>>>
> >> >>>> The key issue preventing the selection of only one COE is that
> >> >>>> this area is moving very quickly. If we had decided what
> >> >>>> to pick at the time the Magnum idea was created, we would have
> >> >>>> selected Docker. If you look at it today, you might pick
> >> >>>> something else. A few months down the road, there may be yet
> >> >>>> another choice that is more compelling. The fact that a cloud
> >> >>>> operator can integrate services with OpenStack, and have the
> >> >>>> freedom to offer support for a selection of COE's is a form of
> >> >>>> insurance against the risk of picking the wrong one. Our compute
> >> >>>> service offers a choice of hypervisors, our block storage
> >> >>>> service offers a choice of storage hardware drivers, our
> >> >>>> networking service allows a choice of network drivers.
> >> >>>> Magnum is following the same pattern of choice that has made
> >> >>>> OpenStack compelling for a very diverse community. That design
> >> >>>> consideration was intentional.
> >> >>>>
> >> >>>> Over time, we can focus the majority of our effort on deep
> >> >>>> integration with COEs that users select the most. I'm convinced
> >> >>>> it's still too early to bet the farm on just one choice.
> >> >>>
> >> >>> If Magnum wants to avoid the risk of picking the wrong COE, that
> >> >>> means the risk is propagated to all our users. They might pick a
> >> >>> COE and explore its complexities, then find out another COE is
> >> >>> more compelling and their integration work is wasted. I wonder if
> >> >>> we can do better by taking the risk and providing insurance for
> >> >>> our users? I am trying to understand the rationale that prevents
> >> >>> us from improving the integration between COEs and OpenStack.
> >> >>> Personally, I don't want to end up in a situation where "this is
> >> >>> the pain our users feel, but we cannot do anything about it".
> >> >>
> >> >> We're running Magnum and have requests from our user communities
> >> >> for Kubernetes, Docker Swarm and Mesos. The use cases are
> >> >> significantly different and can justify the selection of different
> >> >> technologies. We're offering Kubernetes and Docker Swarm now and
> >> >> adding Mesos. If I were to offer only one, they'd build their own
> >> >> at considerable cost to them and the IT department.
> >> >>
> >> >> Magnum allows me to make them all available under the single
> >> >> umbrella of quota, capacity planning, identity and resource
> >> >> lifecycle. As experience is gained, we may make a recommendation
> >> >> for those who do not have a strong need, but I am pleased to be
> >> >> able to offer all of them under a single framework.
> >> >>
> >> >> Since we're building on the native APIs for the COEs, the effort
> >> >> on the operator side to add new engines is really very small
> >> >> (compared to trying to explain to the user that they're wrong in
> >> >> choosing something different from the IT department).
> >> >>
> >> >> BTW, our users also really appreciate using the native APIs.
> >> >>
> >> >> Some more details at
> >> >> http://superuser.openstack.org/articles/openstack-magnum-on-the-cern-production-cloud
> >> >> and we'll give more under-the-hood details in a further blog post.
> >> >
> >> >
> >> > Yes!!!
> >> >
> >> > This is 100% where the value of Magnum comes from, to me. It's
> >> > about end-user choice, and about a sane way for operators to
> >> > enable that end-user choice.
> >> >
> >> > I do not believe anyone in the world wants us to build an
> >> > abstraction layer on top of the _use_ of swarm/k8s/mesos. People
> >> > who want to use those technologies know what they want.
> >> >
> >> >>>>
> >> >>>> Adrian
> >> >>>>
> >> >>>>>> I'm with Adrian on that one. I've attended a lot of
> >> >>>>>> container-oriented conferences over the past year and my main
> >> >>>>>> takeaway is that this new crowd of potential users is not
> >> >>>>>> interested (at all) in an OpenStack-specific lowest common
> >> >>>>>> denominator API for COEs. They want to take advantage of the
> >> >>>>>> cool features in the Kubernetes API or the versatility of Mesos.
> >> >>>>>> They want to avoid caring about the infrastructure provider
> >> >>>>>> bit (and not deploy Mesos or Kubernetes themselves).
> >> >>>>>>
> >> >>>>>> Let's focus on the infrastructure provider bit -- that is what
> >> >>>>>> we do and what the ecosystem wants us to provide.
> >> >>>>>>
> 
> --
> @flaper87
> Flavio Percoco

