[openstack-dev] [magnum][higgins][all] The first Higgins team meeting

Ihor Dvoretskyi idvoretskyi at mirantis.com
Fri May 13 03:10:35 UTC 2016


Hongbin,

Will the activities within the project be related to container
orchestration systems (Kubernetes, Mesos, Swarm), or will they still live
in the world of Magnum?

Ihor

On Wed, May 11, 2016 at 8:33 AM Hongbin Lu <hongbin.lu at huawei.com> wrote:

> Hi all,
>
>
>
> I am happy to announce that a new project (Higgins [1][2]) has been created
> to provide a container service on OpenStack. The Higgins team will hold its
> first team meeting this Friday at 0300 UTC [3]. At that meeting, we plan to
> collect requirements from interested individuals and drive consensus on the
> project roadmap. Everyone is welcome to join. I hope to see you all there.
>
>
>
> [1] https://review.openstack.org/#/c/313935/
>
> [2] https://wiki.openstack.org/wiki/Higgins
>
> [3] https://wiki.openstack.org/wiki/Higgins#Agenda_for_2016-05-13_0300_UTC
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> From: Hongbin Lu
> Sent: May-03-16 11:31 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [magnum][all] Build unified abstraction
> for all COEs
>
>
>
> Hi all,
>
>
>
> According to the decision at the design summit [1], we are going to narrow
> the scope of the Magnum project [2]. In particular, Magnum will focus on
> COE deployment and management. The effort of building a unified container
> abstraction will potentially go into a new project. My role here is to
> collect interest in the new project, help create a new team (if there is
> enough interest), and then pass responsibility to that team. An etherpad
> was created for this purpose:
>
>
>
> https://etherpad.openstack.org/p/container-management-service
>
>
>
> If you are interested in contributing to and/or leveraging the new
> container service, please add your name and requirements to the etherpad.
> Your input will be appreciated.
>
>
>
> [1] https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
>
> [2] https://review.openstack.org/#/c/311476/
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> From: Adrian Otto <adrian.otto at rackspace.com>
> Sent: April-23-16 11:27 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
>
>
> Magnum is not a COE installer. It offers multi-tenancy from the ground up,
> is well integrated with OpenStack services, and pre-configures more COE
> features than you would get with an ordinary stock deployment. For example,
> Magnum offers integration with Keystone that gives developers self-service:
> a native container service in a few minutes, with the same ease as getting
> a database server from Trove. It allows cloud operators to set up the COE
> templates so that they fit the policies of that particular cloud.
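>
> For illustration, a minimal sketch of that self-service flow with the
> magnum CLI (the template name, image, keypair, and flavor below are
> placeholders for whatever a given cloud's operator has set up):
>
>     # Operator (or user) registers a COE template once
>     magnum baymodel-create --name k8s-template \
>       --image-id fedora-atomic-latest \
>       --keypair-id testkey \
>       --external-network-id public \
>       --flavor-id m1.small \
>       --coe kubernetes
>
>     # A developer then self-serves a Kubernetes bay in a few minutes
>     magnum bay-create --name k8s-bay --baymodel k8s-template --node-count 2
>
>     # Check status until the bay reaches CREATE_COMPLETE
>     magnum bay-show k8s-bay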
>
>
>
> Keeping a COE working with OpenStack requires expertise that the Magnum
> team has codified across multiple options.
>
> --
>
> Adrian
>
>
> On Apr 23, 2016, at 2:55 PM, Hongbin Lu <hongbin.lu at huawei.com> wrote:
>
> I do not necessarily agree with the viewpoint below, but it was the
> majority viewpoint when I was trying to sell Magnum to potential adopters.
> There are people who were interested in adopting Magnum, but they ran away
> after they figured out that what Magnum actually offers is a COE deployment
> service. My takeaway is that COE deployment is not the real pain point, and
> there are several alternatives available (Heat, Ansible, Chef, Puppet,
> Juju, etc.). Limiting Magnum to a COE deployment service might prolong the
> existing adoption problem.
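>
> To be concrete, the kind of alternative those people reached for looks
> roughly like this (a sketch; the playbook and inventory names here are
> hypothetical):
>
>     # Deploying a COE with a generic config-management tool, no Magnum
>     ansible-playbook -i inventory/hosts deploy-kubernetes.yml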
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> From: Georgy Okrokvertskhov <gokrokvertskhov at mirantis.com>
> Sent: April-20-16 6:51 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
>
>
> If Magnum is focused on installation and management of COEs, it will be
> unclear how it differs from Heat and other generic orchestration tools. It
> looks like most of the current Magnum functionality is already provided by
> Heat. A Magnum focused on deployment will potentially lead to another
> Heat-like API.
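>
> To make the comparison concrete: a COE cluster can already be stood up
> with plain Heat, e.g. (the template name and parameters below are
> hypothetical):
>
>     # Launch a swarm cluster from a Heat template, no Magnum involved
>     heat stack-create swarm-cluster \
>       -f swarm-cluster.yaml \
>       -P node_count=3 \
>       -P external_network=public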
>
> Unless Magnum is really focused on containers, its value will be minimal
> for OpenStack users who already use Heat/Orchestration.
>
>
>
>
>
> On Wed, Apr 20, 2016 at 3:12 PM, Keith Bray <keith.bray at rackspace.com>
> wrote:
>
> Magnum doesn't have to preclude the tight integration for single COEs that
> you speak of.  The heavy lifting of tightly integrating a COE into
> OpenStack (so that it performs optimally with the infra) can be modular,
> where the work is performed by plug-ins to Magnum rather than by Magnum
> itself. That tight integration can be done by leveraging existing
> technologies (Heat and/or your DevOps tool of choice: Chef/Ansible/etc.).
> This allows interested community members to focus on tightly integrating
> whatever COE they want, concentrating specifically on the COE integration
> part and contributing it to Magnum via plug-ins, without having to know
> much about Magnum itself.  Pegging Magnum to one and only one COE means
> there will be a Magnum2, Magnum3, etc. project for every COE of interest,
> all with different ways of kicking off COE management.  Magnum could unify
> that experience for users and operators without picking a winner in the
> COE space; this is just like Nova not picking a winner between VM flavors
> or OS types.  It just facilitates instantiation and management of things.
> Opinion here: the value of Magnum is in being a lightweight/thin API that
> provides modular choice and plug-ability for COE provisioning and
> management, thereby giving operators and users a choice of COE
> instantiation and management (via the bay concept), where each COE can be
> as tightly or loosely integrated as desired by the different plug-ins
> contributed to perform the COE setup and configuration.  So, Magnum could
> have two or more swarm plug-in options contributed to the community: one
> overlays generic swarm on VMs; the other could instantiate swarm tightly
> integrated with Neutron, Keystone, etc. on bare metal, as in the
> hypothetical sketch below.  Magnum just facilitates a plug-in model with a
> thin API to offer a choice of COE instantiation and management.  The
> plug-in does the heavy lifting using whatever methods the curator desires.
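>
> A hypothetical illustration of that plug-in choice (the --plugin flag
> below does not exist in Magnum today; it stands in for whatever selection
> mechanism contributors might add):
>
>     # Same thin Magnum API, two differently-integrated swarm plug-ins
>     magnum baymodel-create --name swarm-on-vms --coe swarm \
>       --plugin generic-heat-vm      # hypothetical: swarm overlaid on Nova VMs
>     magnum baymodel-create --name swarm-on-metal --coe swarm \
>       --plugin neutron-ironic       # hypothetical: tight integration on bare metal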
>
> That's my $0.02.
>
> -Keith
>
>
> On 4/20/16, 4:49 PM, "Joshua Harlow" <harlowja at fastmail.com> wrote:
>
> >Thierry Carrez wrote:
> >> Adrian Otto wrote:
> >>> This pursuit is a trap. Magnum should focus on making native container
> >>> APIs available. We should not wrap APIs with leaky abstractions. The
> >>> lowest common denominator of all COEs is a remarkably low-value API
> >>> that adds considerable complexity to Magnum and will not
> >>> strategically advance OpenStack. If we instead focus our effort on
> >>> making the COEs work better on OpenStack, that would be a winning
> >>> strategy. Support and complement our various COE ecosystems.
> >
> >So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
> >COEs work better on OpenStack' but I do dislike the part about COEs
> >(plural) because it is once again the old non-opinionated problem that
> >we (as a community) suffer from.
> >
> >Just my 2 cents, but I'd almost rather we pick one COE and integrate
> >that deeply/tightly with openstack, and yes if this causes some part of
> >the openstack community to be annoyed, meh, too bad. Sadly I have a
> >feeling we are hurting ourselves by continuing to try to be everything
> >and not picking anything (it's a general thing we, as a group, seem to
> >be good at, lol). I mean I get the reason to just support all the
> >things, but it feels like we as a community could just pick something,
> >work together on figuring out how to pick one, using all these bright
> >leaders we have to help make that possible (and yes this might piss some
> >people off, too bad). Then work toward making that something great and
> >move on...
> >
> >>
> >> I'm with Adrian on that one. I've attended a lot of container-oriented
> >> conferences over the past year and my main takeaway is that this new
> >> crowd of potential users is not interested (at all) in an
> >> OpenStack-specific lowest common denominator API for COEs. They want to
> >> take advantage of the cool features in Kubernetes API or the versatility
> >> of Mesos. They want to avoid caring about the infrastructure provider
> >> bit (and not deploy Mesos or Kubernetes themselves).
> >>
> >> Let's focus on the infrastructure provider bit -- that is what we do and
> >> what the ecosystem wants us to provide.
> >>
> >
>
>
>
>
>
> --
>
> Georgy Okrokvertskhov
> Director of Performance Engineering,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
>
>
-- 
Best regards,

Ihor Dvoretskyi,
OpenStack Operations Engineer

---

Mirantis, Inc. (925) 808-FUEL