[openstack-dev] [Scheduler] New scheduler feature - VM Ensembles
Andrew Laski
andrew.laski at rackspace.com
Wed Jan 30 20:22:10 UTC 2013
On 01/28/13 at 11:42am, Gary Kotton wrote:
>On 01/24/2013 12:14 PM, Gary Kotton wrote:
>>On 01/22/2013 05:03 PM, Russell Bryant wrote:
>>>On 01/08/2013 08:27 AM, Gary Kotton wrote:
>>>>Hi,
>>>>The link
>>>>https://docs.google.com/document/d/1bAMtkaIFn4ZSMqqsXjs_riXofuRvApa--qo4UTwsmhw/edit
>>>>introduces the concept of a VM ensemble or VM group into Nova. An
>>>>ensemble will provide the tenant with the ability to group together
>>>>VMs that provide a certain service or are part of the same
>>>>application. More specifically, it enables configuring scheduling
>>>>policies per group. This will in turn allow for a more robust and
>>>>resilient service. Specifically, it will allow a tenant to deploy a
>>>>multi-VM application that is designed for VM fault tolerance in such
>>>>a way that application availability is resilient to physical host
>>>>failure.
>>>>Any inputs and comments will be greatly appreciated.
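For concreteness, here is a minimal sketch of the group concept as I
read it from the document. All of the names below are made up for
illustration; they are not the blueprint's actual API.

    # Hypothetical sketch: a named group carries a scheduling policy and
    # tracks its members, so the scheduler can consult the whole group
    # when placing each new member instead of relying on per-VM hints.
    class VMGroup:
        def __init__(self, name, policy):
            self.name = name              # e.g. "web-tier"
            self.policy = policy          # e.g. "anti-affinity"
            self.members = []

        def add_member(self, instance_id):
            self.members.append(instance_id)

    group = VMGroup("web-tier", policy="anti-affinity")
    for i in range(3):
        group.add_member("instance-%d" % i)
    print(group.policy, group.members)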
>>>I just read over the blueprint and document for this.
>>>
>>>My gut reaction is that this seems to be bleeding some of what Heat does
>>>into Nova, which I don't like.
>>
>>No. This does not take any of Heat's functionality. It actually
>>gives Heat tools that enable better placements.
>>
>>>
>>>The document provides an example using the existing different_host
>>>scheduler hint. It shows how it might schedule things in a way that
>>>won't work, but only if the compute cluster is basically running out
>>>of capacity. I'm wondering how realistic a problem this example
>>>demonstrates.
>>
>>This may be a rare case if you are looking at 40%-50% overall
>>resource utilization. However, once you push utilization to
>>70%-80% (where most cloud operators would expect it to be),
>>resource contention becomes a very common phenomenon that
>>cannot be ignored. 70%-80% utilization is not an edge case.
>>
>>We also foresee that at target utilization levels of 75%+ we
>>will need to deal with resource fragmentation, such that
>>de-fragmentation of resources will need to be automatically
>>triggered when needed. Therefore, VM group scheduling policies
>>need to be stored and used for future re-scheduling of VM group
>>members.
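To illustrate the fragmentation point with made-up numbers (a toy
sketch, not Nova code):

    # Aggregate free capacity is sufficient, but no single host can fit
    # the request, so placement fails without de-fragmentation.
    free_ram_mb = {"host-a": 2048, "host-b": 2048, "host-c": 1024}
    requested = 3072  # one VM needing 3 GB

    total_free = sum(free_ram_mb.values())   # 5120 MB in aggregate
    fits_somewhere = any(v >= requested for v in free_ram_mb.values())

    print(total_free >= requested)   # True:  capacity exists...
    print(fits_somewhere)            # False: ...but it is fragmented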
>>
>>
>>>
>>>The document mentions that when requesting N instances, specifying that
>>>anti-affinity is required for the hosts is not possible. Perhaps that's
>>>something that could be added.
>>
>>This will not work - the best example is the one in the
>>document. In order to provide a complete solution, the scheduler
>>would be required to see the whole group in order to select the
>>best placement strategy. The scheduling hints do not provide a
>>solution when the scheduler needs to see the whole picture.
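A toy example of the point about seeing the whole picture
(hypothetical capacities, with a deliberately simple largest-first
ordering standing in for group-aware placement):

    # Not Nova code: shows how placing one VM at a time can strand a
    # later group member, while a whole-group view finds a placement.
    hosts = {"host-a": 4096, "host-b": 2048}   # free RAM in MB
    group = [2048, 4096]                       # two anti-affinity VMs

    def greedy(hosts, vms):
        """Place one VM at a time on the emptiest eligible host."""
        free, used = dict(hosts), set()
        for vm in vms:
            candidates = [h for h in free
                          if h not in used and free[h] >= vm]
            if not candidates:
                return None            # stuck: an earlier choice was bad
            best = max(candidates, key=free.get)
            used.add(best)
            free[best] -= vm
        return used

    def group_aware(hosts, vms):
        """See the whole group up front: place the largest VMs first."""
        return greedy(hosts, sorted(vms, reverse=True))

    print(greedy(hosts, group))        # None: the 2 GB VM grabbed host-a
    print(group_aware(hosts, group))   # {'host-a', 'host-b'}: succeeds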
>>
>>This is certainly a topic that we need to speak about at the
>>upcoming summit. I think that it would also be very beneficial
>>if we could have this in the G version.
>>
>>>
>>
>
>I did not get a response to my answers to the questions. I have
>added additional implementations following the review
>comments. What do we still need to do in order to get approval for
>the blueprint? I think that there is consensus that this is a very
>valuable feature and that OpenStack can benefit from it.
I like the feature overall, but more because it enhances the
scheduler's capabilities than because this particular feature is
something I would use. I haven't come up with the right way to
quantify what feels limiting about the scheduler, and this is getting
off-topic, but it just doesn't seem to sit in the right place IMO. So
providing more flexibility seems to me to be a good thing.
But, just to bring out what we've discussed elsewhere, what I dislike
about this patch is that I already don't like how request_spec is set
up; it's limiting and awkward in some respects, mainly due to its
support of multiple instances. Adding an additional concept of
multiple instances, next to request_spec, seems to make the situation
worse, not better.
For backwards compatibility I understand the need to approach it the
way you have, but I want to see that there's a plan or some discussion
about how to ultimately integrate this with request_spec, and perhaps
create a new abstraction that provides a single path for scheduling
instances rather than always having an if/else in the scheduler
manager (a sketch of what I mean follows).
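To make that concrete, here is a rough sketch of the shape I'm worried
about and one possible direction. This is hypothetical code, not
Nova's actual manager; all names are illustrative.

    # The awkward shape: every new concept grows another branch.
    def schedule_today(request):
        if request.get("group"):                   # new: ensemble path
            return "group scheduling path"
        elif request.get("num_instances", 1) > 1:  # existing request_spec
            return "multi-instance path"
        return "single-instance path"

    # One possible unified shape: every request is a list of instance
    # specs plus group-level constraints, so a single code path handles
    # one VM, N identical VMs, or an ensemble with policies alike.
    class SchedulingRequest:
        def __init__(self, instance_specs, constraints=()):
            self.instance_specs = list(instance_specs)  # one per VM
            self.constraints = list(constraints)   # e.g. "anti-affinity"

    def schedule_unified(request):
        # Placement would consider all specs and constraints at once.
        return len(request.instance_specs), request.constraints

    print(schedule_today({"num_instances": 3}))
    print(schedule_unified(SchedulingRequest([{"ram": 2048}] * 3,
                                             ["anti-affinity"])))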
To summarize, I think what would help get support for this would be to
try to fix what's currently available rather than working around it.
Is backwards compatibility the only challenge there? If so, let's have
a discussion around that; if not, let's discuss the other challenges.
>
>Thanks
>Gary
>