[openstack-dev] [Magnum] Scheduling for Magnum

Adrian Otto adrian.otto at rackspace.com
Tue Feb 10 00:04:17 UTC 2015


Steve,

On Feb 9, 2015, at 9:54 AM, Steven Dake (stdake) <stdake at cisco.com> wrote:



From: Andrew Melton <andrew.melton at RACKSPACE.COM>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Monday, February 9, 2015 at 10:38 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

I think Sylvain is getting at an important point. Magnum is trying to be as backend-agnostic as possible. Because of that, I think the biggest benefit to Magnum would be a generic scheduling interface that each pod type implements. A pod type whose backend provides scheduling could implement a thin scheduler that simply translates the generic requests into something the backend understands, while a pod type requiring outside scheduling could implement something heavier-weight.

If we are careful to keep the heavy scheduling generic enough to be shared between the backends that require it, we could hopefully swap in a Gantt-based implementation once that is ready.
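
To make the thin-versus-heavy split concrete, here is a minimal sketch of what such a generic interface could look like. All names below are hypothetical, invented for illustration, and not taken from actual Magnum code:

    import abc


    class ContainerScheduler(abc.ABC):
        """Select a node for a container, given generic constraints."""

        @abc.abstractmethod
        def schedule(self, container_spec, nodes):
            """Return the node the container should land on."""


    class PassThroughScheduler(ContainerScheduler):
        """Thin scheduler: translate the generic request and let a
        backend with its own scheduler (e.g. Swarm) place the
        container."""

        def __init__(self, backend_client):
            self.backend = backend_client

        def schedule(self, container_spec, nodes):
            # Placement is delegated entirely to the backend.
            return self.backend.place(container_spec)


    class FilterWeightScheduler(ContainerScheduler):
        """Heavier scheduler for backends without placement logic:
        filter out unsuitable nodes, then weigh the survivors."""

        def __init__(self, filters, weighers):
            self.filters = filters
            self.weighers = weighers

        def schedule(self, container_spec, nodes):
            for passes in self.filters:
                nodes = [n for n in nodes if passes(container_spec, n)]
            if not nodes:
                raise RuntimeError("no valid node found")
            # The node with the highest total weight wins.
            return max(nodes, key=lambda n: sum(w(container_spec, n)
                                                for w in self.weighers))

A thin pass-through scheduler would back pod types running on something like Swarm, while the filter/weight variant (or, eventually, a Gantt-backed equivalent) would serve backends with no placement logic of their own.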

Great mid-cycle discussion topic.  Can you add it to the planning etherpad?

Yes, it was listed as #5 here:
https://etherpad.openstack.org/p/magnum-midcycle-topics

We will move it further up the priority list as soon as we feel that list is complete and ready for sorting.

Adrian


Thanks
-steve

--Andrew

________________________________
From: Jay Lau [jay.lau.513 at gmail.com]
Sent: Monday, February 09, 2015 4:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Thanks Sylvain. We have not worked out the API requirements yet, but I think they should be similar to Nova's: we need a select_destinations call to pick the best target host based on filters and weights.
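
To illustrate, here is a toy version of the filter-then-weigh flow that Nova's select_destinations implements; the host attributes and the weighting function are made up for the example:

    # Hosts and their capacities are invented for this sketch.
    hosts = [
        {"name": "node1", "free_ram_mb": 2048, "containers": 9},
        {"name": "node2", "free_ram_mb": 8192, "containers": 2},
        {"name": "node3", "free_ram_mb": 512, "containers": 0},
    ]
    request = {"ram_mb": 1024}

    # Filter phase: drop hosts that cannot satisfy the request.
    candidates = [h for h in hosts
                  if h["free_ram_mb"] >= request["ram_mb"]]

    # Weigh phase: prefer free RAM, penalize crowded hosts.
    best = max(candidates,
               key=lambda h: h["free_ram_mb"] - 100 * h["containers"])
    print(best["name"])  # -> node2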

There are also some discussions here: https://blueprints.launchpad.net/magnum/+spec/magnum-scheduler-for-docker

Thanks!

2015-02-09 16:22 GMT+08:00 Sylvain Bauza <sbauza at redhat.com>:
Hi Magnum team,


On 07/02/2015 at 19:24, Steven Dake (stdake) wrote:


From: Eric Windisch <eric at windisch.us>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Saturday, February 7, 2015 at 10:09 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum


1) Cherry-pick scheduler code from Nova, which already has a working filter scheduler design.

The Gantt team explored that option during the Icehouse cycle, and it failed with a lot of problems. I won't list them all, but we discovered that the scheduler and the Nova compute manager were tightly coupled, which meant that forking the scheduler into a separate repository was very difficult without first reducing that tech debt.

That said, our concerns were quite different from the Magnum team's: our goal was feature parity with and replacement of the current Nova scheduler, while your team just wants something that works for containers.


2) Integrate swarmd to leverage its scheduler[2].

I see #2 not as an alternative but possibly as an "also". Swarm uses the Docker API, although they're only about 75% compatible at the moment. Ideally, the Docker backend would work both with single Docker hosts and with clusters of Docker machines powered by Swarm. It would be nice, however, if scheduler hints could be passed from Magnum to Swarm.
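
For example, Swarm reads placement constraints from container environment variables (e.g. "constraint:storage==ssd"), so Magnum could in principle forward a scheduler hint that way. A sketch using the pre-2.0 docker-py client; the endpoint, image, and constraint are placeholders:

    import docker

    # Connect to the Swarm manager rather than a single Docker host.
    client = docker.Client(base_url="tcp://swarm-manager:2375")

    # Swarm's scheduler honors constraints passed in the container's
    # environment, so a Magnum hint could be translated into one.
    container = client.create_container(
        image="nginx",
        environment=["constraint:storage==ssd"],
    )
    client.start(container)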

Regards,
Eric Windisch

Adrian & Eric,

I would prefer to keep things simple and just integrate directly with Swarm, leaving out any cherry-picking from Nova. It would be better to integrate scheduling hints into Swarm, but I’m sure the Swarm upstream is busy with requests, and this may be difficult to achieve.


I don't want to give my opinion about which option you should take, as I don't really know your needs. If I understand correctly, this is about having a scheduler that provides affinity rules for containers. Do you have a document explaining which interfaces you're looking for, what kind of APIs you want, or what's missing from the current Nova scheduler?

IMHO, the technology shouldn't drive your decision: whatever the backend is (swarmd or an inherited Nova scheduler), your interfaces should be the same.

-Sylvain


Regards
-steve










--
Thanks,

Jay Lau (Guangya Liu)
