[openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation
Robert Collins
robertc at robertcollins.net
Mon Sep 8 04:22:59 UTC 2014
On 8 September 2014 05:57, Nejc Saje <nsaje at redhat.com> wrote:
>> That generator API is pretty bad IMO - because it means you're very
>> heavily dependent on gc and refcount behaviour to keep things clean -
>> and there isn't (IMO) a use case for walking the entire ring from the
>> perspective of an item. What's the concern with having replicas as
>> part of the API?
>
>
> Because they don't really make sense conceptually. The hash ring itself
> doesn't actually 'make' any replicas. The replicas parameter in the current
> Ironic implementation is used solely to limit the number of buckets returned.
> Conceptually, that seems to me the same as take(<replicas>,
> iterate_nodes()). I don't know Python internals well enough to know what
> problems this would cause, though - can you please clarify?
I could see replicas being a parameter to a function call, but take(N,
generator) has the same poor behaviour - generators that won't be fully
consumed rely on reference counting to be freed. Sometimes that's
absolutely the right tradeoff.
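
To make the generator concern concrete, here is a minimal Python sketch
(iterate_nodes and the node names are illustrative assumptions, not the
actual Ironic or Ceilometer API):

    import itertools

    def iterate_nodes():
        # Hypothetical generator walking every bucket on the ring
        # from the perspective of one item.
        try:
            for node in ['node-a', 'node-b', 'node-c', 'node-d']:
                yield node
        finally:
            # Cleanup here only runs when the generator is closed
            # or garbage-collected.
            pass

    # take(N, generator): only the first N items are consumed, so the
    # generator is left suspended part-way through.
    nodes = list(itertools.islice(iterate_nodes(), 2))

On CPython the abandoned generator is finalized promptly because its
refcount drops to zero; on other interpreters (or if a reference cycle
keeps it alive) the finally block waits for the garbage collector unless
close() is called explicitly - which is exactly the cleanup dependence
objected to above.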
>> it's absolutely a partition of the hash space - each spot we hash a
>> bucket onto is one; that's how consistent hashing works at all :)
>
>
> Yes, but you don't assign the number of partitions beforehand; it depends on
> the number of buckets. What you do assign is the number of times you hash a
> single bucket onto the ring, which is currently named 'replicas' in the
> Ceilometer code, but I suggested 'distribution_quality' or something
> similarly descriptive in an earlier e-mail.
I think you misunderstand the code. We do assign the number of
partitions beforehand - it's approximately fixed and independent of the
number of buckets. More buckets == fewer times we hash each bucket.
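
To illustrate the distinction, a minimal sketch (build_ring, the
exponent default, and the md5-based placement are assumptions for
illustration, not Ironic's actual hash_ring code):

    import hashlib

    def build_ring(buckets, partition_exponent=5):
        # The total partition count is fixed up front, independent of
        # how many buckets there are.
        partitions = 2 ** partition_exponent
        # Each bucket is therefore hashed onto the ring roughly
        # partitions / len(buckets) times: more buckets means fewer
        # hashes per bucket, not more partitions.
        times_per_bucket = partitions // len(buckets)
        ring = {}
        for bucket in buckets:
            for i in range(times_per_bucket):
                key = ('%s-%d' % (bucket, i)).encode('utf-8')
                position = int(hashlib.md5(key).hexdigest(), 16)
                ring[position] = bucket
        return ring

    # 2 buckets -> each hashed 16 times; 8 buckets -> each hashed 4
    # times; the total number of ring spots stays roughly constant.
    ring = build_ring(['conductor-1', 'conductor-2'])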
-Rob
--
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud