[kolla] [rabbitmq] RMQ trouble after config update

Albert Braden ozzzo at yahoo.com
Wed Mar 23 19:24:08 UTC 2022


 That fixed it; thank you! My co-workers will think I'm an RMQ guru.
     On Monday, March 21, 2022, 10:52:16 AM EDT, Felix Hüttner <felix.huettner at mail.schwarz> wrote:  
 
Hi Albert,
 
  
 
I think the following should work (but I have not tested it):

{"vhost": "/", "name": "notifications-expire", "pattern": "^(notifications_designate|versioned_notifications).*", "apply-to": "queues", "definition": {"message-ttl":600000,"expires":1200000, "ha-mode":"all","ha-promote-on-shutdown": "always", "ha-sync-mode":"automatic"}, "priority":1},
{"vhost": "/", "name": "ha-all", "pattern": "^(?!(amq\.)|(.*_fanout_)|(reply_)).*", "apply-to": "all", "definition": {"ha-mode":"all","ha-promote-on-shutdown": "always", "ha-sync-mode":"automatic"}, "priority":0}
 
  
 
Note the “priority”: 1 on the first one. It ensures this policy takes precedence over the second one.
 
This way the notification queues get all the HA settings as well as the expiry settings.
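If you want to try this without a full redeploy, the same thing can be applied at runtime from inside the rabbitmq container (again untested; assumes the default "/" vhost):

rabbitmqctl set_policy -p / --priority 1 --apply-to queues notifications-expire '^(notifications_designate|versioned_notifications).*' '{"message-ttl":600000,"expires":1200000,"ha-mode":"all","ha-promote-on-shutdown":"always","ha-sync-mode":"automatic"}'

rabbitmqctl list_queues -p / name policy

The second command shows which policy each queue picked up, so you can confirm the notification queues got "notifications-expire" and everything else kept "ha-all".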
 
  
 
Best Regards

Felix Hüttner
Schwarz IT KG * Stiftsbergstraße 1 * D-74172 Neckarsulm
 
From: Albert Braden <ozzzo at yahoo.com>
Sent: Monday, March 21, 2022 2:56 PM
To: Openstack-discuss <openstack-discuss at lists.openstack.org>; Felix Hüttner <felix.huettner at mail.schwarz>
Subject: Re: [kolla] [rabbitmq] RMQ trouble after config update
 
  
 
Thank you Felix; that’s very helpful. I think I understand how it works now. The problem is, I want the notifications_designate messages to expire, because if they don’t, unconsumed messages linger forever. Is it possible to combine these two policies so that all durable queues are HA, but notifications_designate still expires? Can I have two definitions applied to different patterns in a single policy?

{"vhost": "/", "name": "notifications-expire", "pattern": "^(notifications_designate|versioned_notifications).*", "apply-to": "queues", "definition": {"message-ttl":600000,"expires":1200000}, "priority":0},
{"vhost": "/", "name": "ha-all", "pattern": "^(?!(amq\.)|(.*_fanout_)|(reply_)).*", "apply-to": "all", "definition": {"ha-mode":"all","ha-promote-on-shutdown": "always", "ha-sync-mode":"automatic"}, "priority":0} ] }
 
On Monday, March 21, 2022, 06:11:23 AM EDT, Felix Hüttner <felix.huettner at mail.schwarz> wrote:

Hi Albert,

TL;DR: you set durable on your OpenStack services, not on RabbitMQ. It is the same for all queues, so either you make them all durable or none of them (at least AFAIK).

The “durable” setting is not defined by the policy but by the client that declares the queue. OpenStack normally uses the following setting for that:

[oslo_messaging_rabbit]
amqp_durable_queues = true
 
 
 
AFAIK this is a global setting and not different for notifications vs. other queues.
 
An exception to this are the fanout and reply queues, which are always non-durable.
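You can check how each queue was actually declared with (a quick sanity check, assuming the default vhost):

rabbitmqctl list_queues -p / name durable

The fanout and reply queues should show false there; everything else should show true once amqp_durable_queues is set.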
 
 
 
The whole durable/HA thing can be summarized as follows:
 
 
 
Durable: If true, messages survive the crash/restart of a RabbitMQ node. If false, messages are lost on crash/restart. Set by the client declaring the queue.
 
HA-mode: If enabled, the queue is replicated across multiple nodes. If disabled, only a single node hosts the queue and all its messages. Set by the policy on RabbitMQ.
 
 
 
And the following is what happens when a single node restarts, in each of the combinations:
 
Durable + HA: The queue fails over to another node. All messages are preserved.

Durable + non-HA: The queue is unavailable (no reads or writes) until the single node comes back. All messages are preserved.

Non-durable + HA: DO NOT DO THIS. The queue should fail over to another node, but the other node does not have any messages. This does not make sense and causes all kinds of strange RabbitMQ issues.

Non-durable + non-HA: The queue is deleted and all messages are lost. It can be redeclared on another node while the first one is down.
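To see which of the four combinations a given queue is in, something like this should work (not tested; slave_pids is only filled in for mirrored queues):

rabbitmqctl list_queues -p / name durable policy slave_pids synchronised_slave_pids

A queue with an empty slave_pids column is not mirrored and lives on a single node.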


 
 
 
 
Best Regards

Felix Hüttner
Schwarz IT KG * Stiftsbergstraße 1 * D-74172 Neckarsulm
 
From: Albert Braden <ozzzo at yahoo.com>
Sent: Friday, March 18, 2022 9:50 PM
To: Openstack-discuss <openstack-discuss at lists.openstack.org>; Felix Hüttner <felix.huettner at mail.schwarz>
Subject: Re: [kolla] [rabbitmq] RMQ trouble after config update
 
 
 
I fixed the typo and redeployed. The filter now reads:

{"vhost": "/", "name": "ha-all", "pattern": "^(?!(amq\.)|(.*_fanout_)|(reply_)|(notifications_designate)|(versioned_notifications)).*", "apply-to": "all", "definition": {"ha-mode":"all","ha-promote-on-shutdown": "always", "ha-sync-mode":"automatic"}, "priority":0} ]

I also tried deleting the RMQ containers and volumes and then redeploying, and deleting the queues and their exchanges, but they are re-created durable.
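For reference, one way to delete a single queue from the CLI is (untested on my side, and delete_queue needs a reasonably recent rabbitmqctl):

rabbitmqctl delete_queue notifications_designate.info

As soon as designate-sink reconnects it redeclares the queue, and with amqp_durable_queues set it comes back durable.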
 
On Friday, March 18, 2022, 04:22:17 PM EDT, Albert Braden <ozzzo at yahoo.com> wrote:

I did not intend to make the notifications_designate queues HA or durable. It
appears that those two queues are treated differently from the others that I
didn't specify as HA. For example, if I look at one of the _fanout queues, it
is not durable:

Features
x-expires: 1800000
Policy
Operator policy
Effective policy definition
Node rabbit at de6-ctrl2

I tried adding the notifications_designate queues to the exclusion list:

{"vhost": "/", "name": "ha-all", "pattern":
"^(?!(amq\.)|(.*_fanout_)|(reply_)(notifications_designate)|(versioned_notifications)).*",
"apply-to": "all", "definition": {"ha-mode":"all","ha-promote-on-shutdown":
"always", "ha-sync

But this doesn't seem to make a difference. The notifications_designate queues
still have " Features: durable: true"

How can I make the notifications_designate queues non-durable? 
 
On Friday, March 18, 2022, 10:04:08 AM EDT, Felix Hüttner <felix.huettner at mail.schwarz> wrote:

Hi Albert,

For the queue "notifications_designate.info", both the "notifications-expire" and the "ha-all" policy would match with their pattern. However, only one policy can be applied to a queue at any given point in time. I assume "notifications-expire" is applied (but you can check that easily in the UI or with "rabbitmqctl list_queues name policy").

In this case "notifications_designate.info" is created as durable (because of amqp_durable_queues) and non-HA (as the policy does not define ha-mode).
When a node with a durable, non-HA queue goes down, this queue is no longer usable until either the node comes back or the queue is deleted.

In your case I would assume that you also want this queue to be HA, so you probably need to set the options of the "ha-all" policy also in the "notifications-expire" policy.

Regarding the last point, the queue being on ctrl2 while you connect to ctrl1: this is normal RabbitMQ behaviour; it forwards messages to other nodes as needed.

Best Regards

Felix Hüttner
Schwarz IT KG * Stiftsbergstraße 1 * D-74172 Neckarsulm
 

-----Original Message-----
From: Albert Braden <ozzzo at yahoo.com>
Sent: Friday, March 18, 2022 1:54 PM
To: Openstack-discuss <openstack-discuss at lists.openstack.org>
Subject: [kolla] [rabbitmq] RMQ trouble after config update

We're running kolla-ansible Train. I followed the recommendations in [1] and ended up with the following config:

definitions.json (in the rabbitmq container):

{
  "vhosts": [
    {"name": "/"}  ],
  "users": [
    {"name": "openstack", "password": "<password>", "tags": "administrator"}  ],
  "permissions": [
    {"user": "openstack", "vhost": "/", "configure": ".*", "write": ".*", "read": ".*"}  ],
  "policies":[
    {"vhost": "/", "name": "notifications-expire", "pattern": "^(notifications_designate|versioned_notifications).*", "apply-to": "queues", "definition": {"message-ttl":600000,"expires":1200000}, "priority":0},
    {"vhost": "/", "name": "ha-all", "pattern": "^(?!(amq\.)|(.*_fanout_)|(reply_)).*", "apply-to": "all", "definition": {"ha-mode":"all","ha-promote-on-shutdown": "always", "ha-sync-mode":"automatic"}, "priority":0}  ] }

etc/kolla/config/global.conf:

[oslo_messaging_rabbit]
amqp_durable_queues = True

This fixed some issues, but we seem to have a new issue so I must be missing a setting. When we stop the RMQ container on ctrl1, designate stops working (DNS records are not created nor deleted) and I see this in designate-sink.log:

2022-03-17 19:21:29.261 28 ERROR oslo.messaging._drivers.impl_rabbit [req-2c0cd9f4-5331-4697-9c3e-eece475a52af - - - - -] Failed to consume message from queue: Queue.declare: (404) NOT_FOUND - home node 'rabbit at de6-ctrl1' of durable queue 'notifications_designate.info' in vhost '/' is down or inaccessible: amqp.exceptions.NotFound: Queue.declare: (404) NOT_FOUND - home node 'rabbit at de6-ctrl1' of durable queue 'notifications_designate.info' in vhost '/' is down or inaccessible

When I look at the notifications_designate.info queue in the web interface, it appears to have moved to ctrl2:

Features
durable:              true
Policy    notifications-expire
Operator policy
Effective policy definition
expires: 1200000
message-ttl:      600000
Node    rabbit at qde3-ctrl2
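The same details are visible from the CLI; as far as I can tell the hosting node is embedded in the pid column:

rabbitmqctl list_queues -p / name durable policy pid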

When I look at designate.conf in the designate_sink containers I don't see anything pointing only to ctrl1:

transport_url = rabbit://openstack:<pwd>@<ctrl1>:5672,openstack:<pwd>@<ctrl2>:5672,openstack:<pwd>@<ctrl3>:5672//

But it appears that Designate still tries to use the queue on ctrl1. After I bring up ctrl1, the notifications_designate.info queue remains on ctrl2, but Designate starts working.

What am I missing?

[1] https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit

This e-mail may contain confidential content and is intended only for use by the designated recipient. If you are not the intended recipient, please notify the sender immediately and delete this e-mail. Information on data protection can be found here: <https://www.datenschutz.schwarz>.
 
  

