[openstack-dev] [magnum] Nesting /containers resource under /bays

Hongbin Lu hongbin.lu at huawei.com
Tue Jan 19 22:10:43 UTC 2016


I don't see why the existence of the /containers endpoint blocks your workflow. However, with /containers gone, the alternative workflows are blocked.

As a counterexample, some users want to manage containers through an OpenStack API for various reasons (e.g. a single integration point, lack of domain knowledge of COEs, or orchestration with other OpenStack resources: VMs, networks, volumes, etc.):

* Deployment of a cluster
* Management of that cluster
* Creation of a container
* Management of that container

As another counterexample, some users just want a container:

* Creation of a container
* Management of that container

Then, should we remove the /bays endpoint as well? Magnum is currently at an early stage, so workflows are diverse, non-static, and hypothetical. It is a risk to have Magnum overfit to a specific workflow by removing others.

For your analogies: Cinder is a block storage service, so it doesn't abstract filesystems. Magnum is a container service [1], so it is reasonable for it to abstract containers. Again, if your logic is applied, should Nova have an endpoint that lets you work with individual hypervisors? Probably not, because Nova is a compute service.

[1] https://github.com/openstack/magnum/blob/master/specs/containers-service.rst

Best regards,
Hongbin

-----Original Message-----
From: Kyle Kelley [mailto:kyle.kelley at RACKSPACE.COM] 
Sent: January-19-16 2:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

With /containers gone, what Magnum offers is a workflow for consuming container orchestration engines:

* Deployment of a cluster
* Management of that cluster
* Key handling (creation, upload, revocation, etc.)

The first two are handled underneath by Nova + Heat; the last is in the purview of Barbican. That doesn't matter, though.

What users care about is getting access to these resources without having to write their own heat template, create a backing key store, etc. They'd like to get started immediately with container technologies that are proven.

If you're looking for analogies Hongbin, this would be more like saying that Cinder shouldn't have an endpoint that lets you work with individual files on a volume. It would be unreasonable to try to abstract across filesystems in a meaningful and sustainable way.

________________________________________
From: Hongbin Lu <hongbin.lu at huawei.com>
Sent: Tuesday, January 19, 2016 9:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

If your logic were applied, should Nova remove the endpoint for managing VMs? Should Cinder remove the endpoint for managing volumes?

I think the best way to deal with the heterogeneity is to introduce a common abstraction layer, not to decouple from it. The real critical functionality Magnum could offer to OpenStack is to provide Container-as-a-Service. If Magnum is Deployment-as-a-Service, it will be less useful and won't bring much value to the OpenStack ecosystem.

Best regards,
Hongbin

-----Original Message-----
From: Clark, Robert Graham [mailto:robert.clark at hpe.com]
Sent: January-19-16 5:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

+1

Doing this, and doing this well, provides critical functionality to OpenStack while keeping said functionality reasonably decoupled from the COE API vagaries that would inevitably encumber a solution that sought to provide ‘one api to control them all’.

-Rob

From: Mike Metral
Reply-To: OpenStack List
Date: Saturday, 16 January 2016 02:24
To: OpenStack List
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Running a fully containerized application optimally and effectively requires the use of a dedicated COE tool such as Swarm, Kubernetes, or Marathon+Mesos.

OpenStack is better suited for managing the underlying infrastructure.

Mike Metral
Product Architect – Private Cloud R&D
email: mike.metral at rackspace.com
cell: +1-305-282-7606

From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, January 15, 2016 at 8:02 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

A reason is the container abstraction brings containers to OpenStack: Keystone for authentication, Heat for orchestration, Horizon for UI, etc.

From: Kyle Kelley [mailto:rgbkrk at gmail.com]
Sent: January-15-16 10:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

What are the reasons for keeping /containers?

On Fri, Jan 15, 2016 at 9:14 PM, Hongbin Lu <hongbin.lu at huawei.com> wrote:
Disagree.

If the container-managing part is removed, Magnum is just a COE deployment tool. This is really a scope mismatch IMO. The middle ground I can see is to have a flag that allows operators to turn off the container-managing part. If it is turned off, containers are not managed by Magnum and requests sent to the /containers endpoint will return a reasonable error code. Thoughts?

Best regards,
Hongbin

From: Mike Metral [mailto:mike.metral at rackspace.com]
Sent: January-15-16 6:24 PM
To: openstack-dev at lists.openstack.org

Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

I too believe that the /containers endpoint is obstructive to the overall goal of Magnum.

IMO, Magnum’s scope should only be concerned with:

  1.  Provisioning the underlying infrastructure required by the Container Orchestration Engine (COE) and
  2.  Instantiating the COE itself on top of said infrastructure from step #1.
Anything further regarding Magnum interfacing or interacting with containers starts to get into a gray area that could easily evolve into:

  *   Potential race conditions between Magnum and the designated COE, and
  *   Design & implementation overhead and debt that could bite us in the long run, seeing how all COEs operate on and are based off various different paradigms for describing & managing containers, and this divergence will only continue to grow with time.
  *   Not to mention, recreating container-management functionality in Magnum seems redundant, as this is the very reason to want to use a COE in the first place – it's the more suitable tool for the task.
If there is low-hanging fruit in terms of common functionality across all COEs, then those generic capabilities could be abstracted and integrated into Magnum, but they have to be carefully examined beforehand to ensure true parity exists for each capability across all COEs.

However, I still worry that going down this route toes the line of Magnum being part of the container-management story to some degree – which again should be the sole responsibility of the COE, not Magnum.

I’m in favor of doing away with the /containers endpoint – continuing with it just looks like a snowball of scope-mismatch and management issues just waiting to happen.

Mike Metral
Product Architect – Private Cloud R&D - Rackspace
________________________________
From: Hongbin Lu <hongbin.lu at huawei.com>
Sent: Thursday, January 14, 2016 1:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

In short, the container IDs assigned by Magnum are independent of the container IDs assigned by the Docker daemon. Magnum does the ID mapping before making a native API call. In particular, here is how it works.

If users create a container through the Magnum endpoint, Magnum does the following:
1. Generate a uuid (if not provided).
2. Call the Docker Swarm API to create a container, with its hostname equal to the generated uuid.
3. Persist the container to the DB with the generated uuid.

If users perform an operation on an existing container, they must provide the uuid (or the name) of the container (if a name is provided, it is used to look up the uuid). Magnum does the following:
1. Call the Docker Swarm API to list all containers.
2. Find the container whose hostname is equal to the provided uuid, and record its “docker_id”, the ID assigned by the native tool.
3. Call the Docker Swarm API with “docker_id” to perform the operation.
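The two flows above can be sketched as follows (a minimal illustration with hypothetical names; the real Magnum code talks to Swarm via a Docker client and persists through its DB layer, so the client and `db` here are stand-ins):

```python
import uuid


class FakeSwarmClient:
    """Stand-in for the Docker Swarm API client. It only models what the
    ID mapping needs: creating containers and listing them with hostnames."""

    def __init__(self):
        self._containers = []

    def create_container(self, image, hostname):
        native_id = uuid.uuid4().hex  # the engine picks its own native ID
        self._containers.append(
            {"Id": native_id, "Hostname": hostname, "Image": image})
        return native_id

    def list_containers(self):
        return list(self._containers)


def magnum_create(client, db, image, container_uuid=None):
    """Creation flow: generate a uuid if absent, create the container with
    hostname == uuid, and persist a record keyed by that uuid."""
    container_uuid = container_uuid or str(uuid.uuid4())
    client.create_container(image, hostname=container_uuid)
    db[container_uuid] = {"image": image}
    return container_uuid


def resolve_docker_id(client, container_uuid):
    """Lookup flow: list all containers and match on hostname to recover
    the native docker_id used for subsequent operations."""
    for c in client.list_containers():
        if c["Hostname"] == container_uuid:
            return c["Id"]
    raise LookupError("container %s is not managed here" % container_uuid)
```

Note that the Magnum-side uuid and the engine-side ID never have to agree; the hostname is the join key between the two namespaces.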

Magnum doesn’t assume all operations to be routed through Magnum endpoints. Alternatively, users can directly call the native APIs. In this case, the created resources are not managed by Magnum and won’t be accessible through Magnum’s endpoints.

Hope it is clear.

Best regards,
Hongbin

From: Kyle Kelley [mailto:kyle.kelley at RACKSPACE.COM]
Sent: January-14-16 11:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

This presumes a model where Magnum is in complete control of the IDs of individual containers. How does this work with the Docker daemon?

> In Rest API, you can set the “uuid” field in the json request body 
> (this is not supported in CLI, but it is an easy add).​

In the Rest API for Magnum or Docker? Has Magnum completely broken away from exposing native tooling - are all container operations assumed to be routed through Magnum endpoints?

> For the idea of nesting the container resource, I prefer not to do that if there are alternatives or it can be worked around. IMO, it sets a limitation that a container must have a bay, which might not be the case in the future. For example, we might add a feature where creating a container automatically creates a bay. If a container must have a bay on creation, such a feature is impossible.

If that's *really* a feature you need and are fully involved in designing for, this seems like a case where creating a container via these endpoints would create a bay and return the full resource+subresource.

Personally, I think these COE endpoints need to not be in the main spec, to reduce the surface area until these are put into further use.



________________________________
From: Hongbin Lu <hongbin.lu at huawei.com>
Sent: Wednesday, January 13, 2016 5:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Hi Jamie,

I would like to clarify several things.

First, a container uuid is intended to be unique globally (not just within an individual cluster). If you create a container with a duplicated uuid, the creation will fail regardless of its bay. Second, you are in control of the uuid of the container that you are going to create. In the Rest API, you can set the “uuid” field in the json request body (this is not supported in the CLI, but it is an easy add). If a uuid is provided, Magnum will use it as the uuid of the container (instead of generating a new uuid).

For the idea of nesting the container resource, I prefer not to do that if there are alternatives or it can be worked around. IMO, it sets a limitation that a container must have a bay, which might not be the case in the future. For example, we might add a feature where creating a container automatically creates a bay. If a container must have a bay on creation, such a feature is impossible.

Best regards,
Hongbin

From: Jamie Hannaford [mailto:jamie.hannaford at rackspace.com]
Sent: January-13-16 4:43 AM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [magnum] Nesting /containers resource under /bays

I've recently been gathering feedback about the Magnum API, and one of the things that people commented on was the global /containers endpoint. One person highlighted the danger of UUID collisions:

"""
It takes a container ID which is intended to be unique within that individual cluster. Perhaps this doesn't matter, considering the surface for hash collisions. You're running a 1% risk of collision on the shorthand container IDs:

In [14]: n = lambda p, H: math.sqrt(2*H * math.log(1/(1-p)))
In [15]: n(.01, 0x1000000000000)
Out[15]: 2378620.6298183016

(this comes from the Birthday Attack - https://en.wikipedia.org/wiki/Birthday_attack)

The main reason I questioned this is that we're not in control of how the hashes are created whereas each Docker node or Swarm cluster will pick a new ID under collisions. We don't have that guarantee when aggregating across.

The use case that was outlined appears to be aggregation and reporting. That can be done in a different manner than programmatic access to single containers.
"""

Representing a resource without reference to its parent resource also goes against the convention of many other OpenStack APIs.

Nesting a container resource under its parent bay would mitigate both of these issues:

/bays/{uuid}/containers/{uuid}​

I'd like to get feedback from folks in the Magnum team and see if anybody has differing opinions about this.

Jamie



________________________________
Rackspace International GmbH a company registered in the Canton of Zurich, Switzerland (company identification number CH-020.4.047.077-1) whose registered office is at Pfingstweidstrasse 60, 8005 Zurich, Switzerland. Rackspace International GmbH privacy policy can be viewed at www.rackspace.co.uk/legal/swiss-privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated.
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Kyle Kelley (@rgbkrk - https://twitter.com/rgbkrk; lambdaops.com)

