[openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

Amrith Kumar amrith at tesora.com
Fri Apr 22 11:40:33 UTC 2016


For those interested in one aspect of this discussion (a common compute API for bare metal, VMs, and containers), there's a review of a spec in Trove [1] and a session at the summit [2].

Please join [2] if you are able:

     Trove Container Support
     Thursday, April 28, 9:50am-10:30am
     Hilton Austin - MR 406

Keith, a more detailed answer to one of your questions is below.

Thanks,

-amrith


[1] https://review.openstack.org/#/c/307883/4
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9150

> -----Original Message-----
> From: Keith Bray [mailto:keith.bray at RACKSPACE.COM]
> Sent: Thursday, April 21, 2016 5:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> 100% agreed on all your points… with the addition that the level of
> functionality you are asking for doesn’t need to be baked into an API
> service such as Magnum.  I.e., Magnum doesn’t have to be the thing
> providing the easy-button app deployment — Magnum isn’t and shouldn’t be a
> Docker Hub alternative, a Tutum alternative, etc.  A Horizon UI, App
> Catalog UI, or OpenStack CLI on top of Heat, Murano, Solum, Magnum, etc.
> etc. can all provide this by pulling together the underlying API
> services/technologies to give users the easy app deployment buttons.   I
> don’t think Magnum should do everything (or next thing we know we’ll be
> trying to make Magnum a PaaS, or make it a CircleCI, or … Ok, I’ve gotten
> carried away).  Hopefully my position is understood, and no problem if
> folks disagree with me.  I’d just rather compartmentalize domain concerns
> and scope Magnum to something focused, achievable, agnostic, and easy for
> operators to adopt first. User traction will not be helped by increasing
> service/operator complexity.  I’ll have to go look at the latest Trove and
> Sahara APIs to see how LCD is incorporated, and would love feedback from

[amrith] Trove provides a common, database-agnostic set of APIs for a number of common database workflows, including provisioning and lifecycle management. It also provides abstractions for common database topologies like replication and clustering, and management actions that manipulate those topologies (grow, shrink, failover, ...). It provides abstractions for some common database administration activities like user management, database management, and ACLs. It allows you to take backups of databases and to launch new instances from backups. It provides a simple way for a user to manage the configuration of databases (a subset of the configuration parameters that the database supports, the choice of subset being up to the operator) in a consistent way. Further, it allows users to make configuration changes across a group of databases by associating a 'configuration group' with database instances.

The important thing about this is that there is a desire to provide all of the above capabilities through the Trove API and to make them database agnostic. The actual database-specific implementations are within Trove, largely contained in a database-specific guest agent that performs the database-specific actions needed to achieve the end result the user requested via the Trove API.
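
To make that a bit more concrete, here is a rough, illustrative sketch of a few of those workflows using the python-troveclient bindings. The flavor, volume size, and configuration values are made up, and the exact authentication arguments depend on your deployment; treat it as a sketch rather than copy-paste code.

    from keystoneauth1 import loading, session
    from troveclient.v1 import client

    # Authenticate against Keystone; all credentials here are placeholders.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(auth_url='http://keystone:5000/v3',
                                    username='demo', password='secret',
                                    project_name='demo',
                                    user_domain_id='default',
                                    project_domain_id='default')
    trove = client.Client(session=session.Session(auth=auth))

    # Provision a MySQL instance; the same call shape works for any
    # datastore the operator enables, and the guest agent does the rest.
    instance = trove.instances.create(
        'prod-db', flavor_id='2', volume={'size': 5},
        datastore='mysql', datastore_version='5.6',
        databases=[{'name': 'appdb'}],
        users=[{'name': 'appuser', 'password': 'apppass',
                'databases': [{'name': 'appdb'}]}])

    # Take a backup, and later launch a fresh instance from it.
    backup = trove.backups.create('nightly', instance.id)
    restored = trove.instances.create(
        'prod-db-copy', flavor_id='2', volume={'size': 5},
        restorePoint={'backupRef': backup.id})

    # Define a configuration group (only operator-approved parameters)
    # and apply it when creating another instance.
    config = trove.configurations.create(
        'tuned-mysql', '{"max_connections": 200}',
        'bumped connection limit',
        datastore='mysql', datastore_version='5.6')
    tuned = trove.instances.create(
        'prod-db-2', flavor_id='2', volume={'size': 5},
        configuration=config.id)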

The user interacts directly with the database as well; the application speaks the database's native APIs, and unlike, for example, DynamoDB, Trove does not get into the data path between the application and the database itself. Users and administrators are able to interact with the database through its native management interfaces as well (some restrictions may apply, depending on the level of access that the operator allows).

In short, the value provided is that databases are long-lived things: provisioning and initial configuration are very important, but so are ongoing maintenance and management. The mantra for DBAs is always to automate and standardize the repeated workflows. Trove does that for you through a single set of APIs, which matters because today's data centers run a wide diversity of databases. Hope that helps.

> Trove and Sahara operators on the value vs. customer confusion or operator
> overhead they get from those LCDs if they are required parts of the
> services.
> 
> Thanks,
> -Keith
> 
> On 4/21/16, 3:31 PM, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:
> 
> >There are a few reasons, but the primary one that affects me comes
> >from the app-catalog use case.
> >
> >To gain user support for a product like OpenStack, you need users. The
> >easier you make it to use, the more users you can potentially get.
> >Traditional Operating Systems learned this a while back. Rather than
> >make each OS user have to be a developer and custom deploy every app
> >they want to run, they split the effort in such a way that Developers
> >can provide software through channels that Users that are not skilled
> >Developers can consume and deploy. The "App" culture in the mobile
> >space is the epitome of that at the moment. My grandmother fires up the
> >app store on her phone, clicks install on something interesting, and
> >starts using it.
> >
> >Right now, that's incredibly difficult in OpenStack. You have to find
> >the software you're interested in, figure out which components you're
> >going to consume (Nova, Magnum, which COE, etc.), then use those APIs
> >to launch some resource. Then, after that resource is up, you have to
> >switch tools (Ansible or kubectl or whatever) and use those tools to
> >deploy things further.
> >
> >What I'm looking for is a unified-enough API that a user can go into
> >Horizon, go to the app catalog, find an interesting app, click
> >install/run, and then get a link to a service they can click on and
> >start consuming the app they wanted in the first place. The number of
> >users that could use such an interface and consume OpenStack resources
> >is several orders of magnitude greater than the number that can
> >manually deploy something via the procedure in the previous paragraph.
> >More of that is good for users, developers, and operators.
> >
> >Does that help?
> >
> >Thanks,
> >Kevin
> >
> >
> >________________________________________
> >From: Keith Bray [keith.bray at RACKSPACE.COM]
> >Sent: Thursday, April 21, 2016 1:10 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> >abstraction for all COEs
> >
> >If you don't want a user to have to choose a COE, can't we just offer
> >an option for the operator to mark a particular COE as the "Default
> >COE" that could be defaulted to if one isn't specified in the Bay
> >create call?  If the operator didn't specify a default one, then the
> >CLI/UI must submit one in the bay create call, otherwise it would fail.
> >
> >Kevin, can you clarify why you have to write scripts to deploy a
> >container
> >to the COE?   It can be made easy for the user to extract all the
> >runtime/env vars needed for a user to just do "docker run ..." and poof,
> >container running on Swarm on a Magnum bay.  Can you help me understand
> >the script part of it?   I don't believe container users want an
> >abstraction between them and their COE CLI... but what I believe isn't
> >important.  What I do think is important is that we not require
> >OpenStack operators to run that abstraction layer to be running a
> >"Magnum compliant" service.  It should either be an "optional" API
> >add-on or a separate API or separate project.  If some folks want an
> >abstraction layer, then great, feel free to build it and even propose it
> >under the OpenStack ecosystem.
> >But that abstraction would be a "proxy API" over the COEs, and doesn't
> >need to be part of Magnum's offering, as it would be targeted at the
> >COE interactions and not the bay interactions (which is where Magnum
> >scope is best focused).  I don't think Magnum should play in both these
> >distinct domains (Bay interaction vs. COE interaction).  The former
> >(bay
> >interaction) is an infrastructure cloud thing (fits well with
> >OpenStack), the latter (COE interaction) is an obfuscation of emerging
> >technologies, which gets in to the Trap that Adrian mentioned.  The
> >abstraction layer API will forever and always be drastically behind in
> >trying to keep up with the COE innovation.
> >
> >In summary, an abstraction over the COEs would be best served as a
> >different effort.  Magnum would be best focused on bay interactions and
> >should not try to pick a COE winner or require an operator to run a
> >lowest-common-denominator API abstraction.
> >
> >Thanks for listening to my soap-box.
> >-Keith
> >
> >
> >
> >On 4/21/16, 2:36 PM, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:
> >
> >>I agree with that, and that's why providing some bare-minimum
> >>abstraction will help users avoid having to choose a COE themselves.
> >>If we can't decide, how can they? If all they want to do is launch a
> >>container, they should be able to script up "magnum launch-container
> >>foo/bar:latest" and get one. That script can then be relied upon.
> >>
> >>Today, they have to write scripts to deploy to the specific COE they
> >>have chosen. If they chose Docker, and something better comes out,
> >>they have to go rewrite a bunch of stuff to target the new, better
> >>thing. This puts a lot of work on others.
> >>
> >>Do I think we can provide an abstraction that prevents them from ever
> >>having to rewrite scripts? No. There are a lot of features in the COE
> >>world in flight right now, and we don't want to solidify an API around
> >>them yet. We shouldn't even try that. But can we cover a few common
> >>things now? Yeah.
> >>
> >>Thanks,
> >>Kevin
> >>________________________________________
> >>From: Adrian Otto [adrian.otto at rackspace.com]
> >>Sent: Thursday, April 21, 2016 7:32 AM
> >>To: OpenStack Development Mailing List (not for usage questions)
> >>Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> >>abstraction for all COEs
> >>
> >>> On Apr 20, 2016, at 2:49 PM, Joshua Harlow <harlowja at fastmail.com>
> >>>wrote:
> >>>
> >>> Thierry Carrez wrote:
> >>>> Adrian Otto wrote:
> >>>>> This pursuit is a trap. Magnum should focus on making native
> >>>>>container APIs available. We should not wrap APIs with leaky
> >>>>>abstractions. The lowest common denominator of all COEs is a
> >>>>>remarkably low-value API that adds considerable complexity to
> >>>>>Magnum and will not strategically advance OpenStack. If we
> >>>>>instead focus our effort on making the COEs work better on
> >>>>>OpenStack, that would be a winning strategy. Support and
> >>>>>complement our various COE ecosystems.
> >>>
> >>> So I'm all for avoiding 'wrap APIs with leaky abstractions' and
> >>>'making  COEs work better on OpenStack' but I do dislike the part
> >>>about COEs
> >>>(plural) because it is once again the old non-opinionated problem
> >>>that we (as a community) suffer from.
> >>>
> >>> Just my 2 cents, but I'd almost rather we pick one COE and integrate
> >>>that deeply/tightly with OpenStack, and yes, if this causes some part
> >>>of the OpenStack community to be annoyed, meh, too bad. Sadly, I have a
> >>>feeling we are hurting ourselves by continuing to try to be
> >>>everything and not picking anything (it's a general thing we, as a
> >>>group, seem to be good at, lol). I mean I get the reason to just
> >>>support all the things, but it feels like we as a community could
> >>>just pick something, work together on figuring out how to pick one,
> >>>using all these bright leaders we have to help make that possible
> >>>(and yes, this might piss some people off, too bad). Then work toward
> >>>making that something great and move on...
> >>
> >>The key issue preventing the selection of only one COE is that this
> >>area is moving very quickly. If we would have decided what to pick at
> >>the time the Magnum idea was created, we would have selected Docker.
> >>If you look at it today, you might pick something else. A few months
> >>down the road, there may be yet another choice that is more
> >>compelling. The fact that a cloud operator can integrate services with
> >>OpenStack, and have the freedom to offer support for a selection of
> >>COEs is a form of insurance against the risk of picking the wrong
> >>one. Our compute service offers a choice of hypervisors, our block
> >>storage service offers a choice of storage hardware drivers, our
> >>networking service allows a choice of network drivers. Magnum is
> >>following the same pattern of choice that has made OpenStack
> >>compelling for a very diverse community. That design consideration was
> >>intentional.
> >>
> >>Over time, we can focus the majority of our effort on deep integration
> >>with COEs that users select the most. I'm convinced it's still too
> >>early to bet the farm on just one choice.
> >>
> >>Adrian
> >>
> >>>> I'm with Adrian on that one. I've attended a lot of
> >>>>container-oriented  conferences over the past year and my main
> >>>>takeaway is that this new  crowd of potential users is not
> >>>>interested (at all) in an  OpenStack-specific lowest common
> >>>>denominator API for COEs. They want to  take advantage of the cool
> >>>>features in Kubernetes API or the versatility  of Mesos. They want
> >>>>to avoid caring about the infrastructure provider  bit (and not
> >>>>deploy Mesos or Kubernetes themselves).
> >>>>
> >>>> Let's focus on the infrastructure provider bit -- that is what we
> >>>>do and  what the ecosystem wants us to provide.
> >>>>
> >>>
> >>>

