[openstack-dev] [Scheduler] New scheduler feature - VM Ensembles
Gary Kotton
gkotton at redhat.com
Mon Jan 28 09:42:54 UTC 2013
On 01/24/2013 12:14 PM, Gary Kotton wrote:
> On 01/22/2013 05:03 PM, Russell Bryant wrote:
>> On 01/08/2013 08:27 AM, Gary Kotton wrote:
>>> Hi,
>>> The link
>>> https://docs.google.com/document/d/1bAMtkaIFn4ZSMqqsXjs_riXofuRvApa--qo4UTwsmhw/edit
>>> introduces the concept of a VM ensemble, or VM group, into Nova. An
>>> ensemble gives the tenant the ability to group together VMs that
>>> provide a certain service or are part of the same application. More
>>> specifically, it enables configuring scheduling policies per group,
>>> which in turn allows for a more robust and resilient service: a tenant
>>> can deploy a multi-VM application designed for VM fault tolerance in a
>>> way that keeps application availability resilient to physical host
>>> failure.
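>>>
>>> To make the idea concrete, a group request might look roughly like the
>>> sketch below (the names and structure are purely illustrative, not the
>>> proposed API):
>>>
>>>     # Hypothetical group definition: all members of the "web-tier"
>>>     # ensemble should land on different physical hosts.
>>>     ensemble = {
>>>         "name": "web-tier",
>>>         "policy": "anti-affinity",
>>>         "members": ["web-1", "web-2", "web-3"],
>>>     }
>>>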
>>> Any inputs and comments will be greatly appreciated.
>> I just read over the blueprint and document for this.
>>
>> My gut reaction is that this seems to be bleeding some of what Heat does
>> into Nova, which I don't like.
>
> No. This does not take any of Heat's functionality. It actually
> provides Heat with tools that enable better placement.
>
>>
>> The document provides an example using the existing different_host
>> scheduler hint. It shows how it might schedule things in a way that
>> won't work, but only if the compute cluster is basically running out of
>> capacity. I'm wondering how realistic of a problem this example is
>> demonstrating.
>
> This may be a rare case if you are looking at 40%-50% overall
> resource utilization. However, once you push utilization to 70%-80%
> (where most cloud operators would expect it to be), resource
> contention becomes a very common phenomenon that cannot be ignored.
> 70%-80% utilization is not an edge case.
>
> We also foresee that at target utilization levels of 75%+ we will
> need to deal with resource fragmentation, such that de-fragmentation
> of resources will need to be triggered automatically when needed.
> Therefore, VM group scheduling policies need to be stored and used
> for future re-scheduling of group members.
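>
> For reference, with the existing mechanism each boot request carries
> its own hint, roughly along the lines of the following sketch (using
> python-novaclient; the exact call and argument names may differ):
>
>     # Sketch: per-request anti-affinity via the existing hint.
>     # Assumes 'nova' is an authenticated python-novaclient Client and
>     # web1/web2 are previously booted servers.
>     server = nova.servers.create(
>         name="web-3",
>         image=image,
>         flavor=flavor,
>         scheduler_hints={"different_host": [web1.id, web2.id]},
>     )
>
> The scheduler only ever sees one such request at a time, never the
> group as a whole.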
>
>
>>
>> The document mentions that when requesting N instances, specifying that
>> anti-affinity is required for the hosts is not possible. Perhaps that's
>> something that could be added.
>
> This will not work - the example in the document is the best
> illustration of why. In order to provide a complete solution, the
> scheduler needs to see the whole group so that it can select the best
> placement strategy. Per-instance scheduling hints do not provide a
> solution when the scheduler needs to see the whole picture.
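>
> As an illustration (not the proposed implementation), group-aware
> placement takes the whole member list and the candidate hosts in a
> single decision, so it can either satisfy the policy for every member
> or fail the group up front:
>
>     # Illustrative only: greedy anti-affinity placement for a whole group.
>     # 'hosts' maps host name -> free capacity (counted in VMs for
>     # simplicity).
>     def place_group(members, hosts):
>         placement = {}
>         used = set()
>         for vm in members:
>             # pick a host with spare capacity that no group member uses yet
>             candidates = [h for h, free in hosts.items()
>                           if free > 0 and h not in used]
>             if not candidates:
>                 return None  # fail the whole group, nothing half-placed
>             host = candidates[0]
>             placement[vm] = host
>             used.add(host)
>             hosts[host] -= 1
>         return placement
>
>     print(place_group(["web-1", "web-2", "web-3"],
>                       {"host-a": 2, "host-b": 1, "host-c": 1}))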
>
> This is certainly a topic that we need to discuss at the upcoming
> summit. I think it would also be very beneficial if we could have
> this in the G release.
>
>>
>
I did not get a response to the answers above. Following the review
comments, I have added additional implementations. What do we still
need to do in order to get the blueprint approved? I think there is
consensus that this is a very valuable feature and that OpenStack can
benefit from it.
Thanks
Gary