[openstack-dev] [ceilometer] about workload partition
gordon chung
gord at live.ca
Mon Dec 4 13:07:23 UTC 2017
On 2017-12-03 10:30 PM, 李田清 wrote:
>
>
> On 2017-12-01 05:03 AM, 李田清 wrote:
> >> Hello,
> >> we tested workload partitioning and found it much slower than not
> >> using it.
> >> After some review, we found that after getting samples from
> >> notifications.sample, ceilometer unpacks them and sends them one by
> >> one to the ceilometer.pipe.* queues, which makes the consumer slow.
> >> Right now, rabbit_qos_prefetch_count is set to 1. If we set it to 10,
> >> the connection will be reset.
>
> > currently, i believe rabbit_qos_prefetch_count will be set to whatever
> > value you set batch_size to.
> You mean that in the past it could not be set that way?
> We tested newton and found that, under a load of about 1k VMs, if we
> enable workload partitioning the connection is reset regularly.
>
> > i'll give a two-part answer, but first i'll start with a question: what
> > version of oslo.messaging do you have?
>
> newton 5.10.2
>
i just checked: oslo.messaging==5.10.2 has the offending patch that
decreases performance significantly. as newton is EOL, it seems you have
a few choices:
1. revert to 5.10.1, though i'm not sure whether that reintroduces old
   bugs
2. manually patch your oslo.messaging with:
   https://review.openstack.org/#/c/524099
3. pull in a newer oslo.messaging from another branch (the fix is
   currently unreleased)
4. manually patch the notification agent to use multiple threads[1]
   (but this may add some instability to your transforms; a rough
   sketch of the idea follows the footnote below)
5. disable workload_partitioning (see the config sketch below)
options 2 or 5 are probably the safest choices, depending on your load.
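
to make option 5 concrete, i mean roughly the following in
ceilometer.conf. this is only a sketch, not a drop-in config -- the
option names are the [notification] ones i know from recent ceilometer
releases, so double check them against your newton config reference:

    [notification]
    # option 5: no coordination -- samples are processed by the agent
    # that receives them, so nothing is re-published to ceilometer.pipe.*
    workload_partitioning = False

    # if you keep partitioning on instead, batch_size is what ends up as
    # the rabbit qos prefetch (as mentioned above), so raising it batches
    # the pipe deliveries instead of sending samples one by one.
    # example values only:
    # batch_size = 100
    # batch_timeout = 5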
[1] https://github.com/openstack/ceilometer/blob/newton-eol/ceilometer/notification.py#L305-L307
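
and to be clear on option 4: the sketch below is only an illustration of
the idea, not the actual newton code at [1]. the idea is to run the pipe
listener on a threading executor with a bigger worker pool so the
ceilometer.pipe.* queues are drained by more than one thread. the topic
name, pool size, broker url, and the override_pool_size argument to
start() are assumptions on my part -- verify your oslo.messaging release
actually supports them before copying anything.

    # illustrative sketch only -- not the ceilometer code referenced in [1].
    # assumes oslo.messaging's batch notification listener is available and
    # that start() accepts override_pool_size in your release.
    import oslo_messaging
    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    # example broker url; the real agent takes this from its config
    transport = oslo_messaging.get_notification_transport(
        conf, url='rabbit://guest:guest@localhost:5672/')

    # one target per pipe queue this agent is responsible for
    # (topic name here is just an example)
    targets = [oslo_messaging.Target(topic='ceilometer.pipe.0')]

    class PipeEndpoint(object):
        def sample(self, messages):
            # with batch_size > 1 the listener hands over a list of
            # messages instead of one-by-one deliveries
            for message in messages:
                pass  # hand the payload off to the pipeline here

    listener = oslo_messaging.get_batch_notification_listener(
        transport, targets, [PipeEndpoint()],
        executor='threading', batch_size=10, batch_timeout=5)
    # override_pool_size controls how many worker threads drain the queue
    listener.start(override_pool_size=10)
    # the real agent keeps this running in its service loop; shutdown would
    # eventually call listener.stop() and listener.wait()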
cheers,
--
gord