[openstack-dev] Running multiple filter schedulers in parallel
Day, Phil
philip.day at hp.com
Thu May 23 14:32:47 UTC 2013
Hi Chris,
Yep, Conductor can definitely make life better here by breaking the create sequence up into smaller steps.
We could probably do something fairly simple ahead of that (if anyone is interested) by just distinguishing capacity failures from other build errors and having a separate retry limit for each. (I always like to keep simple fixes moving through while waiting for the big changes to land.)
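For the sake of discussion, something along these lines -- the limits, helper, and exception classes below are made up for illustration, not existing Nova code:

# Illustrative sketch only: names and limits are stand-ins, not real Nova code.

class ComputeResourcesUnavailable(Exception):
    """Stand-in for the resource tracker's capacity kick-back."""

class NoValidHost(Exception):
    """Stand-in for nova.exception.NoValidHost."""

MAX_CAPACITY_RETRIES = 10  # cheap to retry: the scheduler just picks another host
MAX_ERROR_RETRIES = 3      # e.g. a bad image: unlikely to succeed on retry

def check_retry(filter_properties, exc):
    """Bump the counter matching exc's type and raise once its limit is hit."""
    retry = filter_properties.setdefault(
        'retry', {'capacity_attempts': 0, 'error_attempts': 0})
    if isinstance(exc, ComputeResourcesUnavailable):
        retry['capacity_attempts'] += 1
        if retry['capacity_attempts'] > MAX_CAPACITY_RETRIES:
            raise NoValidHost('exceeded max capacity retries')
    else:
        retry['error_attempts'] += 1
        if retry['error_attempts'] > MAX_ERROR_RETRIES:
            raise exc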
Phil
From: Chris Behrens [mailto:cbehrens at codestud.com]
Sent: 22 May 2013 18:48
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Running multiple filter schedulers in parallel
On May 22, 2013, at 7:01 AM, "Day, Phil" <philip.day at hp.com> wrote:
Thanks Chris,
Yep, we do have scheduler_host_subset_size set (I think that was one of our contributions), and we are getting the kick-back from the compute nodes when the resource tracker kicks in, so all of that is working as it should. I've been a bit wary of bumping the retry count too high, as we've also seen a number of errors bouncing through hosts due to other problems (such as the quantum port quota issue), but I might look at bumping it up a tad.
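(For context: scheduler_host_subset_size makes the filter scheduler pick randomly among the best N weighted hosts rather than always taking the single top one, which is what cuts down collisions between parallel schedulers. Roughly like this -- a simplified sketch, not the actual Nova code:)

import random

def select_host(weighed_hosts, subset_size=1):
    # weighed_hosts is sorted best-first; choosing randomly from the
    # head of the list makes two schedulers working from the same view
    # less likely to land on the same host.
    subset = weighed_hosts[:max(1, subset_size)]
    return random.choice(subset)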
Yeah. I think we need some better logic based on the type of exception that occurs. If the exception comes from the resource tracker, you may want to retry a lot more than 3 times -- maybe even forever; worst case you loop through all your hosts and they're all full. :) The tracker is doing its job properly and kicking the message back quickly. But other exceptions can occur later -- things like bad images, etc. For those you probably want to retry a few times just to make sure, but you don't want to retry for long, because the build is likely never going to succeed and you don't want an instance sitting in 'building' forever.
As I said, I think this can get better with conductor. If we have conductor responsible for building, we can break up the process to make things easier. We can do things like:
[conductor]

def _get_host_from_scheduler():
    [query the scheduler and return a host we've not yet tried for this request]

def _allocate_networks():
    return nwinfo

def _tell_compute_to_assign_instance():
    [on the compute side, this does resource_tracker checking and assigns instance['host']]

def _tell_compute_to_build():
    [compute downloads the image and builds]

def _tell_compute_to_run_instance():
Then build instance logic could be:
with try_2_times():
    _allocate_networks()
with try_2_times():
    with try_forever():
        _get_host_from_scheduler()
        _tell_compute_to_assign_instance()
    _tell_compute_to_build()
_tell_compute_to_run_instance()
Something like that, if you know what I mean. This would also happen to address one of the other issues with retries: today networks are deallocated and reallocated every time a build is retried due to a resource tracker failure.
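(Aside: try_2_times()/try_forever() above are pure pseudocode -- a plain context manager can't re-run its body, so in real code they'd more naturally be retry loops or decorators. A minimal sketch of that shape, with hypothetical names:)

import functools

def retry(times=None):
    # Re-run fn until it succeeds: up to `times` attempts, or forever
    # when times is None.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            attempt = 0
            while True:
                attempt += 1
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if times is not None and attempt >= times:
                        raise
        return wrapper
    return decorator

@retry(times=2)
def _allocate_networks():
    pass

@retry(times=None)  # capacity kick-backs are cheap, so keep trying
def _schedule_and_assign():
    pass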
I'm also thinking about a small mod to the scheduler_host_subset_size code that adds a scheduler_host_subset_offset, so that you could, for example, have one scheduler picking from the top 10 hosts and another picking from hosts 11-20. That won't guarantee there's never an overlap, but I think it would reduce it considerably. It would also mean that if you do lose a scheduler, the number of hosts no longer being scheduled to is at most scheduler_host_subset_size.
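(To illustrate -- scheduler_host_subset_offset doesn't exist yet, this is just the shape of the idea, sliding the selection window along the best-first host list:)

import random

def select_host(weighed_hosts, subset_size=10, subset_offset=0):
    # e.g. scheduler A runs with subset_offset=0 and scheduler B with
    # subset_offset=10, so they draw from (mostly) disjoint windows of
    # the best-first host list and rarely contend for the same host.
    if not weighed_hosts:
        raise ValueError('no hosts to choose from')
    start = min(subset_offset, len(weighed_hosts) - 1)
    subset = weighed_hosts[start:start + subset_size]
    return random.choice(subset)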
That's an interesting thought. Although I'd like to make things "just work" without having to explicitly configure something like this. :)
- Chris