<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 12/20/2013 11:18 AM, Herndon, John
Luke wrote:<br>
</div>
<blockquote cite="mid:2F25697B-8012-4FE4-86A7-E9D0587CE606@hp.com"
type="cite">
<pre wrap="">
On Dec 20, 2013, at 10:47 AM, Julien Danjou <a class="moz-txt-link-rfc2396E" href="mailto:julien@danjou.info"><julien@danjou.info></a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On Fri, Dec 20 2013, Herndon, John Luke wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Yeah, I like this idea. As far as I can tell, AMQP doesn’t support grabbing
more than a single message at a time, but we could definitely have the
broker store up the batch before sending it along. Other protocols may
support bulk consumption. My one concern with this approach is error
handling. Currently the executors treat each notification individually. So
let’s say the broker hands 100 messages at a time. When client is done
processing the messages, the broker needs to know if message 25 had an error
or not. We would somehow need to communicate back to the broker which
messages failed. I think this may take some refactoring of
executors/dispatchers. What do you think?
</pre>
</blockquote>
<pre wrap="">
Yeah, this definitely means changing the messaging API a bit to handle
such a case. But in the end, supporting batch consumption will be a good
thing, whether or not it is natively supported by the broker.
For brokers where it's not possible, it may be simple enough to have a
"get_one_notification_nb()" method that returns either a notification or
None if there is none to read, and which would consequently have to be
_non-blocking_.
So if the transport is smart we write:

    # Return up to max_number_of_notifications_to_read notifications
    notifications = transport.get_notifications(
        conf.max_number_of_notifications_to_read)
    storage.record(notifications)

Otherwise we do:

    notifications = []
    for i in range(conf.max_number_of_notifications_to_read):
        notification = transport.get_one_notification_nb()
        if notification:
            notifications.append(notification)
        else:
            break
    storage.record(notifications)

So it's just about having the right primitive in oslo.messaging; we can
then build on top of that wherever we need it.
</pre>
</blockquote>
<pre wrap="">
I think this will work. I was considering adding a timeout and implementing this with blocking calls, so that the broker does not send off all of the messages immediately. If the consumer consumes faster than the publishers publish, batching degenerates into single-notification batches, so it may be beneficial to wait for more messages to arrive before sending off the batch. If the batch fills before the timeout is reached, it is sent off immediately.
</pre>
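The timeout idea above can be sketched roughly as follows. This is a minimal illustration in plain Python, not oslo.messaging API; `collect_batch` and its parameters are hypothetical names:

```python
import queue
import time

def collect_batch(q, max_batch_size, batch_timeout):
    """Block until either max_batch_size messages have arrived on q or
    batch_timeout seconds have elapsed, then return the batch.

    A full batch is returned immediately; when publishers are slow, a
    partial batch is returned once the timeout expires, rather than
    sending each notification individually."""
    batch = []
    deadline = time.monotonic() + batch_timeout
    while len(batch) < max_batch_size:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break  # timeout reached: send off whatever we have
        try:
            batch.append(q.get(timeout=remaining))
        except queue.Empty:
            break  # timed out while waiting for the next message
    return batch
```

With this shape, a fast consumer still sees multi-message batches as long as anything is waiting, and the worst case is one partially filled batch per timeout interval.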
<blockquote type="cite">
<pre wrap="">--
Julien Danjou
/* Free Software hacker * independent consultant
<a class="moz-txt-link-freetext" href="http://julien.danjou.info">http://julien.danjou.info</a> */
</pre>
</blockquote>
<pre wrap="">
-----------------
John Herndon
HP Cloud
<a class="moz-txt-link-abbreviated" href="mailto:john.herndon@hp.com">john.herndon@hp.com</a>
</pre>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
OpenStack-dev mailing list
<a class="moz-txt-link-abbreviated" href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a>
<a class="moz-txt-link-freetext" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a>
</pre>
</blockquote>
A couple of things that I think need to be emphasized here:<br>
1. The mechanism needs to be configurable, so that if you are more
worried about reliability than performance you can turn off bulk
loading.<br>
2. The cache size should also be configurable, so that you can limit
your exposure to lost messages.<br>
3. While you can have the message queue hold the messages until you
acknowledge them, that seems to add a lot of complexity to the
interaction: you would need to propagate acknowledgment information
all the way back from the storage driver.<br>
4. Any integration that is dependent on a specific configuration of
the rabbit server is brittle, since we have seen a lot of variation
between services on this. I would prefer to control the behavior on
the collection side.<br>
<br>
So in general, I would prefer a mechanism that pulls the data in a
default manner, caches on the collection side based on configuration
that lets you determine your own risk level, and then manages
retries in the storage driver or at the cache-controller level.<br>
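A minimal sketch of that collection-side cache (hypothetical names, not an existing API; the batch size is assumed to come from configuration, where batch_size=1 effectively turns bulk loading off and a small value bounds exposure to lost messages):

```python
class CollectionCache:
    """Buffer notifications on the collection side and flush them to
    storage in batches.  batch_size comes from configuration:
    batch_size=1 effectively disables bulk loading, and a small value
    limits how many messages could be lost on a crash."""

    def __init__(self, record_fn, batch_size):
        self._record = record_fn          # e.g. storage.record
        self._batch_size = max(1, batch_size)
        self._pending = []

    def add(self, notification):
        self._pending.append(notification)
        if len(self._pending) >= self._batch_size:
            self.flush()

    def flush(self):
        """Hand the pending batch to the storage driver; retry logic
        would live here or in the driver itself."""
        if self._pending:
            batch, self._pending = self._pending, []
            self._record(batch)
```

Keeping the buffering and retry policy on the collection side like this avoids depending on any particular broker configuration.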
<br>
Dan Dyer<br>
HP cloud<br>
<a class="moz-txt-link-abbreviated" href="mailto:dan.dyer@hp.com">dan.dyer@hp.com</a><br>
<br>
</body>
</html>