[openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

Jay Pipes jaypipes at gmail.com
Mon Jul 14 08:44:06 UTC 2014


Hi Don, comments inline...

On 07/14/2014 12:18 AM, Dugger, Donald D wrote:
> My understanding is the main goal is to get a fully functional gantt
> working before we do the split.  This means we have to clean up the
> nova interfaces so that all of the current scheduler functionality,
> including things like aggregates and resource tracking, is covered,
> so that we can make the split and create the gantt tree, which will
> become the default scheduler.

+1. Clean before cleave.

> This means I see 3 main tasks that need to be done before we do the
> split:
>
> 1)  Create the scheduler client library
> 2)  Complete the isolation of scheduler DB accesses
> 3)  Move the resource tracker out of Nova and into the scheduler
>
> If we can focus on those 3 tasks we should be able to actually split
> the code out into a fully functional scheduler.

While I have little disagreement with the tasks above, I feel the order 
should actually be: 3), then 2), then 1).

My reasoning is that 3) would dramatically change the client interface 
and would also increase the number of DB accesses, so it is more 
important to fix the scheduler's current lack of claim-based resource 
tracking before we move on to either 1) or 2).

Best,
-jay

> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786
>
> -----Original Message-----
> From: Sylvain Bauza [mailto:sbauza at redhat.com]
> Sent: Friday, July 11, 2014 8:38 AM
> To: John Garbutt
> Cc: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)
>
> On 11/07/2014 13:14, John Garbutt wrote:
>> On 10 July 2014 16:59, Sylvain Bauza <sbauza at redhat.com> wrote:
>>> On 10/07/2014 15:47, Russell Bryant wrote:
>>>> On 07/10/2014 05:06 AM, Sylvain Bauza wrote:
>>>>> Hi all,
>>>>>
>>>>> === tl;dr: Now that we agree on waiting for the split prereqs
>>>>> to be done, we are debating whether the ResourceTracker should
>>>>> be part of the scheduler code, and consequently whether the
>>>>> Scheduler should expose ResourceTracker APIs so that Nova would
>>>>> no longer own compute node resources. I'm proposing to first
>>>>> land the RT as a Nova resource in Juno and move the
>>>>> ResourceTracker into the Scheduler in K, so we at least merge
>>>>> some patches by Juno. ===
>>>>>
>>>>> Some debates occurred recently about the scheduler split, so I
>>>>> think it's important to loop back with you all to see where
>>>>> we are and what the discussions are. Again, feel free to
>>>>> express your opinions; they are welcome.
>>>> Where did this resource tracker discussion come up?  Do you
>>>> have any references that I can read to catch up on it?  I would
>>>> like to see more detail on the proposal for what should stay in
>>>> Nova vs. be moved.  What is the interface between Nova and the
>>>> scheduler here?
>>>>
>>>
>>> Oh, I missed the most important question you asked. About the
>>> interface between the scheduler and Nova, the originally agreed
>>> proposal is in the (approved) spec https://review.openstack.org/82133,
>>> where the Scheduler exposes:
>>> - select_destinations(): for querying the scheduler to provide
>>>   candidates
>>> - update_resource_stats(): for updating the scheduler's internal
>>>   state (ie. HostState)
>>>
>>> Here, update_resource_stats() is called by the ResourceTracker,
>>> see the implementations (in review)
>>> https://review.openstack.org/82778 and
>>> https://review.openstack.org/104556.
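For readers catching up, here is a toy sketch of what those two calls amount to. The method names come from the spec cited above; the signatures and payloads are my own guesses, not what the spec defines:

```python
# Rough sketch of the two-call interface from the approved spec.
# Method names are from the thread; signatures and the shape of the
# stats/request dicts are illustrative assumptions only.

class SchedulerClient:
    def __init__(self):
        self.host_state = {}  # scheduler-internal view of each compute node

    def update_resource_stats(self, host, stats):
        """Called by the ResourceTracker to push a node's current usage."""
        self.host_state[host] = dict(stats)

    def select_destinations(self, request_spec):
        """Return candidate hosts with enough free RAM for the request."""
        wanted = request_spec["ram_mb"]
        return [h for h, s in self.host_state.items()
                if s["free_ram_mb"] >= wanted]

client = SchedulerClient()
client.update_resource_stats("node1", {"free_ram_mb": 4096})
client.update_resource_stats("node2", {"free_ram_mb": 512})
assert client.select_destinations({"ram_mb": 1024}) == ["node1"]
```

Note that in this model the compute node still owns the resources; the scheduler only sees whatever was last pushed to it.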
>>>
>>>
>>> The alternative that was just raised this week is to provide
>>> a new interface where the ComputeNode claims and frees
>>> resources, so that all resources are fully owned by the
>>> Scheduler. An initial PoC was posted at
>>> https://review.openstack.org/103598, and I tried to see what a
>>> ResourceTracker proxied by a Scheduler client would look like here:
>>> https://review.openstack.org/105747. As the spec hasn't been
>>> written yet, the interface names are not properly defined, but
>>> I made a proposal:
>>> - select_destinations(): same as above
>>> - usage_claim(): claims a resource amount
>>> - usage_update(): updates a resource amount
>>> - usage_drop(): frees the resource amount
>>>
>>> Again, this is a dummy proposal; a spec has to be written if we
>>> consider moving the RT.
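For illustration, here is how I read that dummy claim interface as a toy sketch. The four method names are Sylvain's; the semantics, signatures, and claim-id bookkeeping are my own guesses:

```python
# Sketch of the claim-style interface named above
# (select_destinations / usage_claim / usage_update / usage_drop).
# Semantics are one reading of the proposal, not a real Gantt API.

class ClaimSchedulerClient:
    def __init__(self, capacities):
        self.free = dict(capacities)   # hostname -> free RAM in MB
        self.claims = {}               # claim id -> (host, amount)
        self._next_id = 0

    def select_destinations(self, ram_mb):
        return [h for h, free in self.free.items() if free >= ram_mb]

    def usage_claim(self, host, ram_mb):
        if self.free[host] < ram_mb:
            return None                # not enough room: claim refused
        self.free[host] -= ram_mb      # scheduler owns the resources now
        self._next_id += 1
        self.claims[self._next_id] = (host, ram_mb)
        return self._next_id

    def usage_update(self, claim_id, new_ram_mb):
        host, old = self.claims[claim_id]
        self.free[host] += old - new_ram_mb   # resize the claim in place
        self.claims[claim_id] = (host, new_ram_mb)

    def usage_drop(self, claim_id):
        host, amount = self.claims.pop(claim_id)
        self.free[host] += amount      # give the resources back

c = ClaimSchedulerClient({"node1": 2048})
cid = c.usage_claim("node1", 1024)
assert c.free["node1"] == 1024       # deducted at claim time, not later
c.usage_drop(cid)
assert c.free["node1"] == 2048
```

The key difference from update_resource_stats() is that the deduction happens atomically inside the scheduler, so a second oversized claim is refused rather than silently overcommitting the host.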
>> While I am not against moving the resource tracker, I feel we
>> could move it to Gantt after the core scheduling code has been moved.
>>
>> I was imagining the extensible resource tracker becoming (sort
>> of) equivalent to Cinder volume drivers. Also, persistent
>> resource claims will give us another plugin point for Gantt. That
>> might not be enough, but I think it will be easier to see once the
>> other elements have moved.
>>
>> But the key thing I like is how the current approach amounts
>> to refactoring, similar to the Cinder move. I feel we should stick
>> to that if possible.
>>
>> John
>
> Thanks John for your feedback. I'm +1 with you: we should stay on the
> path we defined with the whole community, create Gantt once the prereqs
> are done (see my first mail above for these), and see afterwards
> whether the line needs to move.
>
> I think this discussion would also be interesting if we take into
> account the current Cinder and Neutron scheduling needs, so we could
> tell whether this is the right direction.
>
>
> Others ?
>
> Note: The spec https://review.openstack.org/89893 is not yet approved.
> As the spec approval freeze has happened, I would like to discuss
> with the team whether we can get an exception for it so the work could
> happen in Juno.
>
>
> Thanks, -Sylvain
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
