[OpenStack-Infra] Status of check-tempest-dsvm-f20 job

James E. Blair jeblair at openstack.org
Wed Jun 18 15:18:53 UTC 2014


Ian Wienand <iwienand at redhat.com> writes:

> but eventually, at 30:1, the fedora node gets dropped

I think the formula at work for deciding if a single marginal node
should be allocated as a precise node is:

  (demand_for_precise / total_demand) * available_nodes

E.g., round(20/40 * 1) = 1, so the node is allocated as precise;
round(20/41 * 1) = 0, so it moves on to the next image.

So that's the highly constrained case (available_nodes == 1) at work
here: the image with the highest number of requests always wins until
that number is reduced enough that another image takes its place.  Of
course if there are more nodes available, the proportional aspect
returns, but in the marginal case of one-at-a-time allocation, there's
not a lot to work with.
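
To make that concrete, here's a rough sketch of the proportional grant
described above (the naming is mine, not nodepool's actual allocator
code):

  def grant(demand, total_demand, available_nodes=1):
      # round(share * available_nodes), as in the formula above
      return int(round(float(demand) / total_demand * available_nodes))

  # One node available, precise demand 20 vs. f20 demand 1:
  grant(20, 21)                      # 1 -> the node goes to precise
  grant(1, 21)                       # 0 -> f20 gets nothing

  # With more nodes available, the proportions come back:
  grant(20, 21, available_nodes=21)  # 20
  grant(1, 21, available_nodes=21)   # 1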

> On 06/18/2014 11:32 AM, Dan Prince wrote:
>> Would this fix (or something similar) help nodepool to allocate things
>> more efficiently?
>> 
>> https://review.openstack.org/#/c/88223/

Yes, that looks promising!  Though it does seem to be failing unit
tests.  I suspect it may be hard to unit test due to its
non-deterministic nature, and perhaps that is what's breaking the
existing tests.  The allocator is so complicated that I'd rather not
discard the minimal testing that we do have.

I'm guessing that we could use known PRNG seed values to get consistent
output.
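
Something like this would do it in a test (just a sketch, not the code
under review):

  import random

  def shuffled_requests(requests, seed=None):
      # A fixed seed makes the shuffle reproducible, so a test can
      # assert on a known ordering.
      rng = random.Random(seed)
      result = list(requests)
      rng.shuffle(result)
      return result

  assert (shuffled_requests(['precise', 'f20', 'centos'], seed=42) ==
          shuffled_requests(['precise', 'f20', 'centos'], seed=42))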

Or we could modify that approach: instead of shuffling randomly, sort
by allocation time (so that the request for the least recently
allocated node type comes first).  That would end up round-robining
among images (actually putting nodes like f20 at an advantage), but at
least nothing starves.
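
Very roughly, that ordering might look like this (the last_allocated
timestamps are hypothetical state the allocator would have to track):

  def order_by_staleness(requests, last_allocated):
      # Types never allocated sort first (treated as time 0).
      return sorted(requests,
                    key=lambda label: last_allocated.get(label, 0))

  last_allocated = {'precise': 1403103000, 'f20': 1403099000}
  order_by_staleness(['precise', 'f20'], last_allocated)
  # -> ['f20', 'precise'], so f20 gets the next single-node allocation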

We could also order by allocation percentage, so that if recent
allocations don't match the demand ratios, we favor the types that are
under-allocated.  (This requires tracking a bit more state across
allocation runs.)
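
As a sketch, that could compare each label's recent share of
allocations with its share of demand (the recent_allocations state is
hypothetical and would have to persist across runs):

  def order_by_deficit(demand, recent_allocations):
      total_demand = float(sum(demand.values()))
      total_alloc = float(sum(recent_allocations.values())) or 1.0

      def surplus(label):
          want = demand[label] / total_demand
          got = recent_allocations.get(label, 0) / total_alloc
          return got - want      # most under-served (lowest) first

      return sorted(demand, key=surplus)

  order_by_deficit({'precise': 20, 'f20': 1}, {'precise': 30, 'f20': 0})
  # -> ['f20', 'precise']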

> (p.s. I want to turn this into a test-case, once we know what sort of
> result we're looking for)

Yay!

-Jim


