[openstack-dev] [ceilometer] about workload partition
gord at live.ca
Fri Dec 1 13:42:37 UTC 2017
On 2017-12-01 05:03 AM, 李田清 wrote:
> we tested workload partitioning and found it much slower than not
> using it.
> After some review, we found that after getting samples, ceilometer
> unpacks them and sends them one by one to the pipe
> ceilometer.pipe.*, which makes the consumer slow. Right now,
> rabbit_qos_prefetch_count is set to 1. If we set it to 10, the connection
> is reset
currently, i believe rabbit_qos_prefetch_count will be set to whatever
value you set batch_size to.
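for reference, the notification agent's batching/partitioning knobs live under [notification] in ceilometer.conf. a sketch of what i mean (the values here are illustrative, not recommendations; check the release you're on for the exact option names):

```
[notification]
# number of notification agent workers (illustrative)
workers = 3
# messages pulled/acked per batch; with the behaviour above this is
# also effectively your rabbit_qos_prefetch_count
batch_size = 50
# seconds to wait for a full batch before flushing a partial one
batch_timeout = 5
# enable workload partitioning across agents
workload_partitioning = True
```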
> regularly. Under this pos, the consumer will be very slow with
> workload partitioning. If you do not use workload partitioning, the
> messages can all be consumed. If you use it, the messages in the pipe
> pile up more and more.
what is "pos"? i'm not sure it means the same thing to both of us... or
well i guess it could :)
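in any case, the slowdown you describe with prefetch 1 is what you'd expect from AMQP QoS: the broker won't push more than prefetch_count unacked messages to a consumer, so a low prefetch plus per-message acks caps throughput at roughly prefetch / round-trip time. a back-of-envelope sketch (the 10 ms round trip is an illustrative assumption, not a measurement):

```python
# Rough model of the AMQP prefetch ceiling: with
# rabbit_qos_prefetch_count = N, at most N unacked messages are in
# flight, so per-message acking bounds a single consumer at about
# N / round_trip_time messages per second.

def max_throughput(prefetch_count: int, round_trip_s: float) -> float:
    """Approximate upper bound on messages/sec for one consumer."""
    return prefetch_count / round_trip_s

# With an assumed 10 ms broker round trip:
print(max_throughput(1, 0.010))   # prefetch 1  -> ~100 msg/s
print(max_throughput(10, 0.010))  # prefetch 10 -> ~1000 msg/s
```

which is why tying prefetch to batch_size (so it rises with your batching) matters so much here.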
> Maybe right now workload partitioning is not a good choice? Or any
i'll give a two-part answer, but first a question: what version of
oslo.messaging do you have?
i see a performance drop as well but the reason for it is because of an
oslo.messaging bug introduced into master/pike/ocata releases. more
details can be found here:
https://bugs.launchpad.net/oslo.messaging/+bug/1734788. we're working on
backporting it. we've also done some work regarding performance/memory
to shrink memory usage of partitioning in master.
with that said, there are only two scenarios where you should have
partitioning enabled. if you have multiple notification agents AND:
1. you have transformations in your pipeline
2. you want to batch efficiently to gnocchi
if you don't have workload partitioning on, your transformed metrics will
probably be wrong or missing values. it also won't batch to gnocchi, so
you'll see a lot more http requests there.
so yes, you do have a choice to disable it, but the above is your tradeoff.
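to make scenario 1 concrete: the classic pipeline.yaml transformation that needs partitioning to be correct is rate_of_change, e.g. deriving cpu_util from the cumulative cpu meter. without partitioning, consecutive samples for one resource can land on different agents, so the rate has no previous sample to diff against. a sketch (the scale expression and names here are illustrative):

```
sources:
    - name: cpu_source
      meters:
          - "cpu"
      sinks:
          - cpu_sink
sinks:
    - name: cpu_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                target:
                    name: "cpu_util"
                    unit: "%"
                    type: "gauge"
                    scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
      publishers:
          - gnocchi://
```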