[openstack-dev] [Magnum] Scheduling for Magnum

Jay Lau jay.lau.513 at gmail.com
Tue Feb 10 06:57:07 UTC 2015


Thanks Steve, I just want to discuss this a bit more. Per Andrew's
comments, we need a generic scheduling interface, but if our focus is
native docker, is this still needed? Thanks!
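
For context, here is a rough sketch of what such a generic interface might
look like; the class and method names are made up for illustration and are
not from any existing Magnum code:

    # Sketch of a generic scheduling interface; all names are illustrative.
    import abc

    class ContainerScheduler(abc.ABC):
        """Choose a node in a bay to run a new container."""

        @abc.abstractmethod
        def schedule(self, bay, container_spec):
            """Return the node that should run container_spec."""

    class RoundRobinScheduler(ContainerScheduler):
        """Trivial strategy for a native docker bay: fill nodes in turn."""

        def __init__(self):
            self._next = 0

        def schedule(self, bay, container_spec):
            node = bay.nodes[self._next % len(bay.nodes)]
            self._next += 1
            return node

Even if native docker is the only bay type that needs server-side
scheduling today, an interface like this costs little and keeps the door
open for future bay types.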

2015-02-10 14:52 GMT+08:00 Steven Dake (stdake) <stdake at cisco.com>:

>
>
>   From: Jay Lau <jay.lau.513 at gmail.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Date: Monday, February 9, 2015 at 11:31 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum
>
>    Steve,
>
>  So you mean we should focus on the docker and k8s schedulers? I was a
> bit confused: why do we need to care about k8s? The k8s cluster is
> created by Heat, and once it is up, k8s has its own scheduler for
> placing pods/services/rcs.
>
>  So it seems we only need to care about scheduling for the native docker
> and Ironic bays. Comments?
>
>
>  Ya, the scheduler only matters for native docker.  An Ironic bay can run
> k8s or docker+swarm or something similar.
>
>  But yup, I understand your point.
>
>
> Thanks!
>
> 2015-02-10 12:32 GMT+08:00 Steven Dake (stdake) <stdake at cisco.com>:
>
>>
>>
>>   From: Joe Gordon <joe.gordon0 at gmail.com>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev at lists.openstack.org>
>> Date: Monday, February 9, 2015 at 6:41 PM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev at lists.openstack.org>
>> Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum
>>
>>
>>
>>  On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) <stdake at cisco.com>
>> wrote:
>>
>>>
>>>
>>> On 2/9/15, 3:02 AM, "Thierry Carrez" <thierry at openstack.org> wrote:
>>>
>>> >Adrian Otto wrote:
>>> >> [...]
>>> >> We have multiple options for solving this challenge. Here are a few:
>>> >>
>>> >> 1) Cherry-pick scheduler code from Nova, which already has a working
>>> >>filter scheduler design.
>>> >> 2) Integrate swarmd to leverage its scheduler[2].
>>> >> 3) Wait for Gantt, when the Nova scheduler will be moved out of Nova.
>>> >>This is expected to happen about a year from now, possibly sooner.
>>> >> 4) Write our own filter scheduler, inspired by Nova.
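
For anyone not familiar with the design behind options 1 and 4: a
Nova-style filter scheduler reduces to a filter-then-weigh loop over the
candidate hosts. A rough sketch (this is not actual Nova code):

    # Rough sketch of a Nova-style filter scheduler; not actual Nova code.
    def filter_scheduler(hosts, request, filters, weighers):
        # Filtering pass: drop hosts that cannot satisfy the request.
        candidates = [h for h in hosts
                      if all(f.host_passes(h, request) for f in filters)]
        if not candidates:
            raise RuntimeError('No valid host was found')
        # Weighing pass: score the survivors and pick the best one.
        return max(candidates,
                   key=lambda h: sum(w.weight(h, request) for w in weighers))
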
>>> >
>>> >I haven't looked enough into Swarm to answer that question myself, but
>>> >how much would #2 tie Magnum to Docker containers?
>>> >
>>> >There is value for Magnum to support other container engines / formats
>>> >(think Rocket/Appc) in the long run, so we should avoid early design
>>> >choices that would prevent such support in the future.
>>>
>>> Thierry,
>>> Magnum has a Bay object type, which represents the underlying cluster
>>> architecture used.  This could be kubernetes, raw docker, swarmd, or some
>>> future invention.  This way Magnum can grow independently of the
>>> underlying technology and provide a satisfactory user experience amid
>>> the chaos that is the container development world :)
>>>
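
To make that concrete, a bay is essentially a record tying a cluster of
nodes to the technology backing it. A sketch (the field names here are
illustrative, not Magnum's actual data model):

    # Illustrative sketch of what a bay captures; not Magnum's real model.
    from dataclasses import dataclass, field

    @dataclass
    class Bay:
        name: str
        bay_type: str        # 'kubernetes', 'docker', 'swarm', ...
        node_count: int
        heat_stack_id: str   # the Heat stack that provisioned the cluster
        nodes: list = field(default_factory=list)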
>>
>>  While I don't disagree with anything said here, this does sound a lot
>> like https://xkcd.com/927/
>>
>>
>>
>>     Andrew had suggested offering a unified standard user experience and
>> API.  I think that matches the 927 comic pretty well.  I think we should
>> offer each type of system using APIs that are similar in nature but that
>> offer the native features of the system.  In other words, we will offer
>> integration across the various container landscape with OpenStack.
>>
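
One way to picture "similar in nature but native" is a thin client per bay
type that shares a common shape without hiding system-specific features. A
sketch with made-up names:

    # Sketch of "similar but native" per-system clients; names are made up.
    class DockerBayClient:
        def create_container(self, image, **docker_opts):
            # docker-native options pass straight through, unfiltered
            raise NotImplementedError

    class K8sBayClient:
        def create_pod(self, manifest):
            # a k8s-native resource, not squeezed into a container-level API
            raise NotImplementedError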
>>  We should strive to be conservative and pragmatic in our systems
>> support and only support container schedulers and container managers that
>> have become strongly emergent systems.  At this point that is docker and
>> kubernetes.  Mesos might fit that definition as well.  Swarmd and rocket
>> are not yet strongly emergent, but they show promise of becoming so.  As a
>> result, they are clearly systems we should be thinking about for our
>> roadmap.  All of these systems present very similar operational models.
>>
>>  At some point competition will choke off new system designs, placing an
>> upper bound on the number of systems we have to deal with.
>>
>>  Regards
>> -steve
>>
>>
>>
>>> We will absolutely support relevant container technology, likely through
>>> new Bay formats (which are really just heat templates).
>>>
>>> Regards
>>> -steve
>>>
>>> >
>>> >--
>>> >Thierry Carrez (ttx)
>>> >
>>>
>>
>
>
> --
>   Thanks,
>
>  Jay Lau (Guangya Liu)
>


-- 
Thanks,

Jay Lau (Guangya Liu)