[openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

Gregory Haynes greg at greghaynes.net
Tue Sep 2 21:19:38 UTC 2014


Excerpts from Nejc Saje's message of 2014-09-01 07:48:46 +0000:
> Hey guys,
> 
> in Ceilometer we're using consistent hash rings to do workload 
> partitioning[1]. We've considered generalizing your hash ring 
> implementation and moving it up to oslo, but unfortunately your 
> implementation is not actually consistent, which is a requirement 
> for our use case.
> 
> Since you divide your ring into a number of equal-sized partitions 
> instead of hashing hosts onto the ring, when you add a new host, 
> an unbounded number of keys gets re-mapped to different hosts (instead 
> of the 1/#nodes remapping guaranteed by a consistent hash ring). I've 
> confirmed this with the test in the aforementioned patch[2].
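
For anyone following along, here is a rough sketch of what "hashing
hosts onto the ring" buys you. This is illustrative Python only (the
class name, replica count, and hash choice are made up for the
example), not Ironic's or Ceilometer's actual code:

import bisect
import hashlib


def _hash(value):
    # Map any string to an integer position on the ring.
    return int(hashlib.md5(value.encode('utf-8')).hexdigest(), 16)


class ConsistentHashRing(object):
    """Toy consistent hash ring: hosts are hashed onto the ring."""

    def __init__(self, hosts, replicas=100):
        self._replicas = replicas
        self._ring = {}          # ring position -> host
        self._sorted_keys = []   # sorted ring positions
        for host in hosts:
            self.add_host(host)

    def add_host(self, host):
        # Each host lands on `replicas` pseudo-random points (virtual
        # nodes), which evens out the key distribution. A new host only
        # claims keys adjacent to its own points, so on average just
        # 1/#hosts of the keys move.
        for i in range(self._replicas):
            position = _hash('%s-%d' % (host, i))
            self._ring[position] = host
            bisect.insort(self._sorted_keys, position)

    def get_host(self, key):
        # Walk clockwise from the key's position to the next host point.
        index = bisect.bisect(self._sorted_keys, _hash(key))
        if index == len(self._sorted_keys):
            index = 0
        return self._ring[self._sorted_keys[index]]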

I am just getting started with the Ironic hash ring code, but this seems
surprising to me. AIUI we do require some rebalancing when a conductor
is removed or added (which is normal use of a consistent hash ring) but
not for every host added. This is supported by the fact that we currently
don't have a rebalancing routine, so I would be surprised if Ironic worked
at all if we required one for each host that is added.
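
For what it's worth, the 1/#nodes property is easy to check
empirically. Here is a rough sketch of the kind of measurement Nejc's
test[2] makes, using the toy ring above (host names and key counts are
made up):

keys = ['node-%d' % i for i in range(10000)]

ring = ConsistentHashRing(['conductor-%d' % i for i in range(4)])
before = dict((key, ring.get_host(key)) for key in keys)

ring.add_host('conductor-4')
after = dict((key, ring.get_host(key)) for key in keys)

moved = sum(1 for key in keys if before[key] != after[key])
print('%.1f%% of keys remapped' % (100.0 * moved / len(keys)))
# A consistent ring should print roughly 20% here (1/5 with five
# hosts); a ring that re-partitions everything on a membership change
# can remap far more.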

Can anyone in Ironic with a bit more experience confirm/deny this?

> 
> If this is good enough for your use case, great; otherwise we can get a 
> generalized hash ring implementation into oslo for use in both projects, 
> or we can both use an external library[3].
> 
> Cheers,
> Nejc
> 
> [1] https://review.openstack.org/#/c/113549/
> [2] 
> https://review.openstack.org/#/c/113549/21/ceilometer/tests/test_utils.py
> [3] https://pypi.python.org/pypi/hash_ring
> 

Thanks,
Greg


