[openstack-dev] [oslo] ack(), reject() and requeue() support in rpc ...

Sandy Walsh sandy.walsh at rackspace.com
Thu Aug 15 18:00:18 UTC 2013



On 08/15/2013 02:00 PM, Eric Windisch wrote:
> On Wed, Aug 14, 2013 at 4:08 PM, Sandy Walsh <sandy.walsh at rackspace.com> wrote:
>> At Eric's request in https://review.openstack.org/#/c/41979/ I'm
>> bringing this to the ML for feedback.
> 
> Thank you Sandy.
> 
>> Currently, oslo-common rpc behaviour is to always ack() a message no
>> matter what.
> 
> Actually, the Qpid and Kombu drivers default to this. The behavior and
> the expectation of the abstraction itself are different, in my opinion.
> The ZeroMQ driver doesn't presently support acknowledgements and
> they're not supported or exposed by the abstraction itself.

Hmm, that's interesting. I'd be curious to know how it deals with a
worker that can't process a message and needs to requeue.

> The reason I've asked for a mailing list post is because
> acknowledgements aren't presently baked into the RPC abstraction/API.
> You're suggesting that the idea of acknowledgements leaks into the
> abstraction. It isn't necessarily bad, but it is significant enough I
> felt it warranted visibility here on the list.

Yep, makes sense.

>> Since each notification has a unique message_id, it's easy to detect
>> events we've seen before and .reject() them.
> 
> Only assuming you have a very small number of consumers or
> store/lookup the seen-messages in a global state store such as
> memcache. That might work in the limited use-cases you intend to
> deploy this, but might not be appropriate at the level of a general
> abstraction. I've seen that features we support such as fanout() get
> horribly abused simply because they're available, used outside their
> respective edge-cases, for patterns they don't work well for.

Actually, we're letting the db deal with it via a unique key constraint.
So it'll still work with a large number of consumers. But you're
correct: if we didn't have a simple way of detecting dups, this would be
a problem.
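For illustration, the unique-constraint approach could be sketched
roughly like this (table and function names are hypothetical, and sqlite
stands in for whatever db the collectors actually use):

```python
# Sketch: detect duplicate notifications via a unique key constraint,
# letting the database arbitrate instead of a shared seen-message store.
# Illustrative only; not the actual ceilometer collector code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seen (message_id TEXT PRIMARY KEY)")

def handle(conn, message_id):
    """Return 'ack' for a first-seen message, 'reject' for a duplicate."""
    try:
        with conn:  # commit on success, roll back on error
            conn.execute("INSERT INTO seen (message_id) VALUES (?)",
                         (message_id,))
    except sqlite3.IntegrityError:
        # unique constraint violated -> we've processed this one already
        return "reject"
    return "ack"
```

Because the constraint check happens in the database, any number of
collectors can race to insert the same message_id and exactly one wins.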

The direction I see ceilometer going suggests we'll have a large number
of consumers (Collectors) processing and post-processing messages vs.
just the one or two we need now.

> I suppose there is much to be said about giving people the leverage to
> shoot themselves in their own feet, but I'm interested in knowing more
> about how you intend to implement the rejection mechanism. I assume
> you intend to implement this at the consumer level within a project
> (i.e. Ceilometer), or is this something you intend to put into
> service.py?

Sort of. The consumer decides this is a bad message and wants to kill
it. The current mechanism is for the consumer to throw a
RejectMessageException and have the messaging layer reject it (since
messages themselves are not part of the abstraction either). If we were
to make the message itself an API entity, the consumer could call
.reject()/.requeue() directly.
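A rough sketch of what that message-as-API-entity might look like (the
class and method names here are hypothetical, not the actual oslo
interface):

```python
# Hypothetical sketch: if the message were an API entity, a consumer
# could call reject()/requeue() on it directly instead of raising
# RejectMessageException for the messaging layer to catch.
class Message:
    def __init__(self, message_id, body, backend):
        self.message_id = message_id
        self.body = body
        self._backend = backend  # driver-specific delivery hooks

    def ack(self):
        # message handled; safe to drop from the queue
        self._backend.ack(self.message_id)

    def reject(self):
        # bad message; discard without redelivery
        self._backend.reject(self.message_id, requeue=False)

    def requeue(self):
        # can't handle it here; hand it back for another consumer
        self._backend.reject(self.message_id, requeue=True)
```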

https://review.openstack.org/#/c/40618/

The whole thing falls into a broader set of problems I outline here:
http://lists.openstack.org/pipermail/openstack-dev/2013-August/013710.html

> Also, fyi, I'm not actually terribly opposed to this patch. It makes
> some sense. I just want to make sure we don't foul up the abstraction
> in some way or unintentionally give developers rope they'll inevitably
> strangle themselves on.

That's fair.

I think we can keep this from bothering the rpc side of the fence in the
oslo.common.messaging project if notifications have a separate
abstraction from rpc calls. Sadly, for oslo.common.rpc, we have to live
with supporting both.
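To make that split concrete, here's one way the two abstractions could
be kept apart (purely illustrative; these classes and return values are
assumptions, not the oslo.messaging API):

```python
# Illustrative only: separating the rpc-call abstraction (always acked,
# caller waits on a result) from the notification abstraction (the
# handler's verdict drives ack/reject/requeue).
ACK, REJECT, REQUEUE = "ack", "reject", "requeue"

class RpcEndpoint:
    """RPC side: a method runs and returns a value; delivery is acked."""
    def call(self, ctxt, method, **kwargs):
        raise NotImplementedError

class NotificationEndpoint:
    """Notification side: handlers return a disposition for the message."""
    def handle(self, ctxt, event):
        raise NotImplementedError

class DedupEndpoint(NotificationEndpoint):
    """Example handler: reject any message_id we've already seen."""
    def __init__(self):
        self._seen = set()

    def handle(self, ctxt, event):
        if event["message_id"] in self._seen:
            return REJECT
        self._seen.add(event["message_id"])
        return ACK
```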

Cheers!
-S


> --
> Regards,
> Eric Windisch
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
