On 22/09/2025 16:04, Dan Smith wrote:
Is it possible, with oslo.messaging, to have 3 daemons get the Neutron port.{create,update,delete}.end notification events?
I tried hard, but no matter how I do it, only one of my 3 daemons gets the message. I'm about to add a forwarding feature so the other 2 are notified when one receives an event. Is there a better way?

I think you're looking for the "pool" parameter:
https://docs.openstack.org/oslo.messaging/ocata/notification_listener.html
That is supposed to make it so that receivers in different pools each get their own copy of a notification. I'm not sure, but I think this is implemented in some re-queuing way that is likely not very robust. But if it works for you, it might be good enough.
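Something along these lines should be all it takes (an untested sketch: "mydaemon" is a placeholder, it assumes the default 'notifications' topic, and the transport_url comes from your config file):

    import socket

    import oslo_messaging
    from oslo_config import cfg

    class Endpoint(object):
        # only react to the port lifecycle events in question
        filter_rule = oslo_messaging.NotificationFilter(
            event_type=r'port\.(create|update|delete)\.end')

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            print(event_type, payload)

    cfg.CONF(args=[], project='mydaemon')  # loads transport_url etc.
    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [Endpoint()],
        executor='threading',
        # a unique pool name per daemon means each daemon gets its own
        # copy; the same name on all daemons would load-balance instead
        pool='mydaemon-%s' % socket.gethostname())
    listener.start()
    listener.wait()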
I agree it would be much better for the notification stuff to be (or at least have the option to be) sent in a fanout manner.
I'm not very familiar with it, but newer versions of RabbitMQ have the concept of a stream instead of a queue. Looking at the use cases https://www.rabbitmq.com/docs/streams#use-cases I believe that, long term, that would be the RabbitMQ-native way to support notifications in a more robust multi-consumer fashion. Recent versions of oslo.messaging (Caracal+) now support using streams instead of fanouts: https://github.com/openstack/oslo.messaging/commit/e95f334459d4dfd3778ec9e84...

Julien Cosmao is giving a talk called "Who framed RabbitMQ?" that covers the performance improvements in RabbitMQ and oslo.messaging. It is scheduled for the Paris summit in a few weeks and I believe it will cover some of the more recent features: https://summit2025.openinfra.org/a/schedule# "Who framed RabbitMQ?: Sat, October 18, 5:05pm - 5:35pm | Pierre Faurre"

I agree that using a fan-out, ideally via a stream, would likely be the best way to achieve that, and I think Dan is correct that a fanout is not how the pool parameter works today. I'm not sure if the pool parameter can be used in combination with rabbit_stream_fanout=true to achieve it, but if not, perhaps that could be a future enhancement to oslo.messaging: the entire notification topic could be a stream, with the pool parameter mapped to a partition of that stream, aka Super Streams (Partitioned Streams) https://www.rabbitmq.com/docs/streams#super-streams, or to filtering https://www.rabbitmq.com/docs/streams#filtering on the pool. As I said, I am not familiar enough with these newer features to say either way, but there is at least a direction to evolve this in the future.

Looping back to Dan's suggestion, the pool parameter Dan is referring to is documented here:
https://github.com/openstack/oslo.messaging/blob/4b1941221f1f51a0432cb96bc0b...
https://github.com/openstack/oslo.messaging/blob/4b1941221f1f51a0432cb96bc0b...
and the rabbit implementation is here:
https://github.com/openstack/oslo.messaging/blob/4b1941221f1f51a0432cb96bc0b...
The driver uses callbacks to process the incoming messages, so it's a little hard to follow, but I believe this is the one that is invoked:

"""The *pool* parameter, if specified, should cause the driver to create a subscription that is shared with other subscribers using the same pool identifier. Each pool gets a single copy of the message. For example if there is a subscriber pool with identifier **foo** and another pool **bar**, then one **foo** subscriber and one **bar** subscriber will each receive a copy of the message. The driver should implement a delivery pattern that distributes message in a balanced fashion across the subscribers in a pool."""

Reading that description of the pool parameter: if you want all 3 daemons to always get the message, you should set pool to a unique value per daemon, i.e. its uuid or hostname. If you want to guarantee that exactly 1 of them will always get it, it should be set to the same value on all of them. If you want to leave it up to the admin to choose the behavior, I would make it a config option (rough sketch in the p.s. below).

I can't follow the oslo logic closely enough to say if the rabbit logic is internally requeuing or if it's smarter, but there definitely is requeue logic in NotificationAMQPIncomingMessage (see the p.p.s. below for the listener-side requeue knob):
https://github.com/openstack/oslo.messaging/blob/4b1941221f1f51a0432cb96bc0b...

regards,
sean
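p.s. a rough, untested sketch of the config-option approach mentioned above; the option name, its choices, and the "mydaemon" prefix are made up for illustration:

    import socket

    from oslo_config import cfg

    # hypothetical knob letting the admin pick the delivery behavior
    cfg.CONF.register_opts([
        cfg.StrOpt('notification_pool_mode',
                   default='broadcast',
                   choices=['broadcast', 'shared'],
                   help='broadcast: unique pool per daemon, so every daemon '
                        'receives each notification; shared: one pool for '
                        'all daemons, so exactly one daemon receives each '
                        'notification.'),
    ])

    def notification_pool():
        if cfg.CONF.notification_pool_mode == 'shared':
            # same value on every daemon -> one copy per pool, load-balanced
            return 'mydaemon-workers'
        # unique value per daemon -> each daemon is its own pool and gets
        # its own copy (hostname is stable across restarts; a uuid works
        # too but leaves a stale pool queue behind after every restart)
        return 'mydaemon-%s' % socket.gethostname()

and then pass pool=notification_pool() to get_notification_listener().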
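p.p.s. the requeue logic linked above is internal to the rabbit driver, but there is also a public requeue knob on the listener side, in case it is useful here (again untested; ready_to_process() is a placeholder):

    import oslo_messaging

    class Endpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if not ready_to_process(payload):
                # push the message back to be redelivered later; this only
                # works if the listener was created with allow_requeue=True
                return oslo_messaging.NotificationResult.REQUEUE
            return oslo_messaging.NotificationResult.HANDLED

e.g. oslo_messaging.get_notification_listener(transport, targets, [Endpoint()], allow_requeue=True, pool=...).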
--Dan