Hi,

don't know if durable queues help, but they should be enabled by a RabbitMQ policy, which (alone) doesn't seem to fix this (we have this active).

 Fabian

Massimo Sgaravatto <massimo.sgaravatto@gmail.com> wrote on Sat, 8 Aug 2020, 09:36:

We also see the issue. When it happens, stopping and restarting the rabbit cluster usually helps.

I thought the problem was due to a wrong setting in the OpenStack services' conf files: I missed these settings (which I am now going to add):

[oslo_messaging_rabbit]
rabbit_ha_queues = true
amqp_durable_queues = true

Cheers, Massimo

On Sat, Aug 8, 2020 at 6:34 AM Fabian Zimmermann <dev.faz@gmail.com> wrote:

Hi,

we also have this issue.

Our solution was (up to now) to delete the queues with a script or even reset the complete cluster.

We just upgraded rabbitmq to the latest version - without luck.

Anyone else seeing this issue?

 Fabian

Arnaud Morin <arnaud.morin@gmail.com> wrote on Thu, 6 Aug 2020, 16:47:

Hey all,

I would like to ask the community about a rabbit issue we have from time
to time.

In our current architecture, we have a cluster of rabbits (3 nodes) for
all our OpenStack services (mostly nova and neutron).
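
For context, the services are pointed at all three nodes through a
multi-host transport_url, roughly like this (the hosts and credentials
below are only placeholders):

[DEFAULT]
transport_url = rabbit://openstack:secret@rabbit1:5672,openstack:secret@rabbit2:5672,openstack:secret@rabbit3:5672/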

When one node of this cluster is down, the cluster continues working (we
use the pause_minority strategy).
But, sometimes, the third server is not able to recover automatically
and needs a manual intervention.
After this intervention, we restart the rabbitmq-server process, which
is then able to rejoin the cluster.
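
For reference, that strategy is just the standard partition handling
setting, here in the new-style rabbitmq.conf format:

# /etc/rabbitmq/rabbitmq.conf
cluster_partition_handling = pause_minority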

At this time, the cluster looks ok, everything is fine.
BUT, nothing works.
Neutron and nova agents are not able to report back to the servers.
They appear dead.
The servers do not seem to be able to consume messages.
The exchanges, queues, and bindings look good in rabbit.
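
A quick way to see this is something like the following (a sketch: the
queues keep their messages but show zero consumers, and the agents are
reported down):

# queues with messages piling up but nobody consuming them
rabbitmqctl list_queues name messages consumers

# agent/service health as reported over RPC
openstack network agent list
openstack compute service list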

What we see is that removing bindings (using rabbitmqadmin delete
binding or the web interface) and recreating them (using the same
routing key) brings the service back up and running.
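
For a single binding that boils down to something along these lines (the
exchange, queue and routing key are only examples; the real values come
from listing the bindings first):

# see the current bindings
rabbitmqadmin list bindings source destination routing_key

# drop one binding and declare it again with the same routing key
rabbitmqadmin delete binding source=nova destination_type=queue \
    destination=compute.host1 properties_key=compute.host1
rabbitmqadmin declare binding source=nova destination_type=queue \
    destination=compute.host1 routing_key=compute.host1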

Doing this for all queues is really painful. Our next plan is to
automate it, but has anyone in the community already seen this kind of
issue?
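
What we have in mind for the automation is essentially a loop over the
management API, something like this untested sketch (host, credentials
and vhost are placeholders, and it assumes jq and rabbitmqadmin are
available):

# dump every binding of the vhost, then drop and re-declare each one
# (bindings from the default exchange, source "", cannot be deleted)
curl -s -u admin:secret http://rabbit1:15672/api/bindings/%2f \
  | jq -r '.[] | select(.source != "")
           | [.source, .destination, .destination_type, .routing_key, .properties_key] | @tsv' \
  | while IFS=$'\t' read -r src dst dtype key pkey; do
      rabbitmqadmin delete binding source="$src" destination_type="$dtype" \
          destination="$dst" properties_key="$pkey"
      rabbitmqadmin declare binding source="$src" destination_type="$dtype" \
          destination="$dst" routing_key="$key"
    done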

Our bug looks like the one described in [1].
Someone recommends creating an Alternate Exchange.
Has anyone already tried that?
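
If I understand the suggestion correctly, it would be set up roughly
like this (the catch-all exchange/queue names and the policy pattern are
only examples, assuming the default nova/neutron control exchanges):

# a catch-all exchange + queue for messages the main exchange cannot route
rabbitmqadmin declare exchange name=unroutable type=fanout
rabbitmqadmin declare queue name=unroutable
rabbitmqadmin declare binding source=unroutable destination_type=queue destination=unroutable

# attach it as alternate exchange via policy
rabbitmqctl set_policy ae "^(nova|neutron)$" '{"alternate-exchange":"unroutable"}' --apply-to exchanges

Note that only one policy applies per object, so in practice this
definition would have to be merged with any existing ha policy on those
exchanges.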

FYI, we are running rabbit 3.8.2 (with OpenStack Stein).
We had the same kind of issues using older versions of rabbit.

Thanks for your help.

[1] https://groups.google.com/forum/#!newtopic/rabbitmq-users/rabbitmq-users/zFhmpHF2aWk

-- 
Arnaud Morin