[openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

Steven Dake sdake at redhat.com
Mon Jan 19 04:02:20 UTC 2015


On 01/18/2015 07:59 PM, Jay Pipes wrote:
> On 01/18/2015 11:11 AM, Steven Dake wrote:
>> On 01/18/2015 06:39 AM, Jay Lau wrote:
>>> Thanks Steven, just some questions/comments here:
>>>
>>> 1) For native docker support, do we have some project to handle the
>>> network? The current native docker support did not have any logic for
>>> network management; are we going to leverage Neutron or nova-network
>>> for this, just like nova-docker does?
>> We can just use flannel for both of these use cases.  One way to approach
>> using flannel is that we can expect docker networks will always be set up
>> the same way, connecting into a flannel network.
>
> Note that the README on the Magnum GH repository states that one of 
> the features of Magnum is its use of Neutron:
>
> "Integration with Neutron for k8s multi-tenancy network security."
>
> Is this not true?
>
Jay,

We do integrate today with Neutron for multi-tenant network security.  
Flannel runs on top of Neutron networks using vxlan. Neutron provides 
multi-tenant security; Flannel provides container networking.  Together, 
they solve the multi-tenant container networking problem in a secure way.
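
To make that concrete: on each bay node flannel reads a vxlan backend 
config out of etcd and hands containers addresses from an overlay subnet, 
while the packets ride the Neutron tenant network underneath.  A rough, 
untested sketch of that configuration step, written in Python purely for 
illustration - the etcd endpoint, subnet, and use of python-etcd are my 
own illustrative choices, not literally what the Heat templates do:

import json

import etcd  # python-etcd

# flannel reads its configuration from this well-known etcd key.
client = etcd.Client(host='127.0.0.1', port=4001)

flannel_config = {
    # Overlay address space carved up per host for the containers.
    "Network": "10.100.0.0/16",
    # vxlan backend, so the flannel overlay rides on the Neutron network.
    "Backend": {"Type": "vxlan"},
}

client.write('/coreos.com/network/config', json.dumps(flannel_config))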

It's a shame these two technologies can't be merged at this time, but we 
will roll with it until someone invents an integration.

>>> 2) For k8s, swarm, we can leverage the scheduler in those container
>>> management tools, but what about docker native support? How to handle
>>> resource scheduling for native docker containers?
>>>
>> I am not clear on how to handle native Docker scheduling if a bay has
>> more than one node.  I keep hoping someone in the community will propose
>> something that doesn't introduce an agent dependency in the OS.
>
> So, perhaps because I've not been able to find any documentation for 
> Magnum besides the README (the link to the developer docs is a 404), I 
> have quite a bit of confusion around what value Magnum brings to the 
> OpenStack ecosystem versus a tenant just installing Kubernetes on one 
> or more of their VMs and managing container resources using k8s directly.
>
Agreed, documentation is sparse at this point.  The only thing we really 
have at this time is the developer guide here:
https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst

Installing Kubernetes on one or more of their VMs would also work.  In 
fact, you can do this easily today, without Magnum at all, using larsks' 
heat-kubernetes Heat template, which we shamelessly borrowed.
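
For example, you can drive that template straight through python-heatclient 
with no Magnum in the picture.  Rough, untested sketch - the template file 
name and parameter names are illustrative, check the template's own 
parameters section for the real ones:

from heatclient import client as heat_client
from keystoneclient.v2_0 import client as keystone_client

# Authenticate and find the Heat endpoint from the service catalog.
ks = keystone_client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')
heat_url = ks.service_catalog.url_for(service_type='orchestration')
heat = heat_client.Client('1', endpoint=heat_url, token=ks.auth_token)

# Load the heat-kubernetes template and launch a small cluster.
with open('kubecluster.yaml') as f:
    template = f.read()

heat.stacks.create(
    stack_name='k8s',
    template=template,
    parameters={                      # illustrative parameter names
        'ssh_key_name': 'testkey',
        'external_network': 'public',
        'number_of_minions': 2,
    },
)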

We do intend to offer bare metal deployment of Kubernetes as well, which 
should offer a significant I/O performance advantage; performance is, 
after all, what cloud services are all about.

Of course someone could just deploy Kubernetes themselves on bare metal, 
but there isn't at this time an integrated tool providing a 
"Kubernetes-installation-as-a-service" endpoint.  Magnum does that job 
right now on master.  I suspect it can and will do more as we get past 
our 2 month mark of development ;)
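
For the curious, the workflow on master boils down to "create a baymodel, 
then create a bay from it".  A very rough Python sketch follows; the 
client constructor arguments and field names here are my assumptions - the 
dev-quickstart linked above shows the exact CLI flow:

from magnumclient.v1 import client as magnum_client

# Constructor arguments are assumptions; see python-magnumclient for the
# real signature.
mc = magnum_client.Client(username='demo', api_key='secret',
                          project_name='demo',
                          auth_url='http://controller:5000/v2.0')

# A baymodel describes what the bay's nodes look like.
baymodel = mc.baymodels.create(name='k8s-model',
                               image_id='fedora-21-atomic',
                               keypair_id='testkey',
                               flavor_id='m1.small')

# Magnum then stands up the Kubernetes bay via Heat.
mc.bays.create(name='k8sbay', baymodel_id=baymodel.uuid, node_count=2)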


> Is the goal of Magnum to basically be like Trove is for databases and 
> be a Kubernetes-installation-as-a-Service endpoint?
>
I believe that is how the project vision started out.  I'm not clear on 
the long term roadmap - I suspect there is a lot more value that can be 
added.  Some of these things, like manually or automatically scaling the 
infrastructure, show some of our future plans.  I'd appreciate your 
suggestions.

> Thanks in advance for more info on the project. I'm genuinely curious.
>

Always a pleasure,
-steve

> Best,
> -jay
>



