[Openstack] Notifications proposal

Eric Day eday at oddments.org
Tue May 10 20:15:06 UTC 2011


For the record, I should also say I think RabbitMQ is awesome and
should be used for deployments where it makes sense. Keeping it
modular and also allowing burrow to be an option will make more sense
for some deployments.

-Eric

On Tue, May 10, 2011 at 07:52:55PM +0000, Matt Dietz wrote:
> For the record, I like the idea of using Burrow at this level. I certainly
> don't expect everyone to go to the trouble of setting up something like
> PSHB to get their notifications. I can look at adding another driver for
> Burrow in addition to Rabbit so there are plenty of options.
> 
> On 5/10/11 2:30 PM, "Eric Day" <eday at oddments.org> wrote:
> 
> >Hi George,
> >
> >Understood, but burrow can act as both. At the core, the difference
> >between SQS and SNS comes down to notification workers and a lower
> >default message TTL. Matt mentioned that Nova will push to RabbitMQ
> >or some other MQ and workers will pull from the queue to translate
> >into PuSH, email, SMS, etc. If this intermediate message queue is
> >burrow, clients could also subscribe directly to the notification
> >queue with their OpenStack credentials and see messages along with
> >the other workers. It's simply opening up the data pipe at another
> >level if that's more convenient or efficient for the event consumers.
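> >To illustrate, a translation worker sitting on that queue might look
> >roughly like this (hypothetical handler names, just a sketch):
> >
> >    import json
> >
> >    def send_email(event):
> >        pass  # strip sensitive fields, hand off to an SMTP relay
> >
> >    def send_sms(event):
> >        pass  # hand off to an SMS gateway
> >
> >    HANDLERS = [send_email, send_sms]
> >
> >    def run_worker(queue):
> >        # 'queue' is any consumer handle (RabbitMQ, burrow, ...) that
> >        # yields raw notification message bodies as they arrive.
> >        for body in queue:
> >            event = json.loads(body)
> >            for handler in HANDLERS:
> >                handler(event)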
> >
> >If we're going through the trouble of building a scalable message
> >queue/notification service for general use, I'm not sure why we
> >wouldn't use it over maintaining other MQ systems. If we don't want to
> >use burrow when it's ready, I should probably reevaluate the purpose
> >of burrow as this was one of the example use cases. :)
> >
> >-Eric
> >
> >On Tue, May 10, 2011 at 02:17:46PM -0500, George Reese wrote:
> >> This isn't a message queue, it's a push system.
> >> 
> >> In other words, consumers don't pull info from a queue; the info is
> >> pushed out to any number of subscribers as the message is generated.
> >> 
> >> Amazon SNS vs. SQS, except this isn't a cloud service but a mechanism
> >> for notifying interested parties of cloud changes.
> >> 
> >> -George
> >> 
> >> On May 10, 2011, at 1:49 PM, Eric Day wrote:
> >> 
> >> > We may also want to put in some kind of version or self-documenting
> >> > URL so it's easier to accommodate message format changes later on.
> >> > 
> >> > As for the issue of things getting backed up in the queues for other
> >> > non-PuSH mechanisms (and fanout), burrow has fanout functionality
> >> > that relies on messages expiring (every message is inserted with
> >> > a TTL). This would allow multiple readers to see the same message
> >> > and for it to disappear after, say, an hour. This allows deployments,
> >> > third-party tools, and clients to write workers that act on events
> >> > from the raw queue.
> >> > 
> >> > With burrow, it will also be possible for clients to pull raw messages
> >> > directly from the queue via a REST API in a secure fashion using
> >> > the same account credentials as other OpenStack services (whatever
> >> > keystone is configured for). So while an email notification will want
> >> > to strip any sensitive information, a direct queue client could see
> >> > more details.
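> >> > 
> >> > A pull might look something like this over the REST API (the URL
> >> > layout and auth header here are assumptions; the burrow API is
> >> > still being finalized):
> >> > 
> >> >     import json
> >> >     import urllib2
> >> > 
> >> >     # Hypothetical account/queue path and keystone token.
> >> >     URL = 'http://burrow.example.com/v1.0/account1/notifications/messages'
> >> >     TOKEN = 'keystone-token-goes-here'
> >> > 
> >> >     def pull_events():
> >> >         # Reading without deleting: the message stays visible to
> >> >         # other readers until its TTL expires, which gives fanout.
> >> >         req = urllib2.Request(URL)
> >> >         req.add_header('X-Auth-Token', TOKEN)
> >> >         return json.loads(urllib2.urlopen(req).read())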
> >> > 
> >> > -Eric
> >> > 
> >> > On Mon, May 09, 2011 at 10:20:04PM +0000, Matt Dietz wrote:
> >> >>   Hey guys,
> >> >>   Monsyne Dragon and I are proposing an implementation for
> >> >>   notifications going forward. My branch currently exists under
> >> >>   https://code.launchpad.net/~cerberus/nova/nova_notifications.
> >> >>   You'll see it's been proposed for merge, but we're currently
> >> >>   refactoring it around changes proposed at the summit during the
> >> >>   notifications discussion, which you can see at
> >> >>   http://etherpad.openstack.org/notifications
> >> >>   At the heart of the above branch is the idea that, because nova
> >> >>   is about compute, we get notifications away from Nova as quickly
> >> >>   as possible. As such, we've implemented a simple modular driver
> >> >>   system which merely pushes messages out. The two sample "drivers"
> >> >>   are for pushing messages into Rabbit, or doing nothing at all.
> >> >>   There's been talk about adding Burrow as a third possible driver,
> >> >>   which I don't think would be an issue.
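> >> >>   As a sketch of the driver idea (hypothetical class names, not
> >> >>   the actual branch code):
> >> >>       import json
> >> >>
> >> >>       class NoopNotifier(object):
> >> >>           """Discard notifications; useful when nothing consumes them."""
> >> >>           def notify(self, message):
> >> >>               pass
> >> >>
> >> >>       class RabbitNotifier(object):
> >> >>           """Serialize the message and push it onto an exchange."""
> >> >>           def __init__(self, connection):
> >> >>               self.connection = connection
> >> >>           def notify(self, message):
> >> >>               # 'publish' stands in for whatever the AMQP client
> >> >>               # library actually exposes.
> >> >>               self.connection.publish(json.dumps(message))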
> >> >>   One of the proposals is to have priority levels for each
> >> >>   notification. What we're proposing is emulating the standard
> >> >>   Python logging module and providing levels like "WARN" and
> >> >>   "CRITICAL" in the notification. Additionally, the message format
> >> >>   we're proposing will be a JSON dictionary containing the
> >> >>   following attributes:
> >> >>   publisher_id - the source worker_type.host of the message
> >> >>   timestamp - the GMT timestamp the notification was sent at
> >> >>   event_type - the literal type of event (ex. Instance Creation)
> >> >>   priority - patterned after the enumeration of Python logging
> >> >>              levels in the set (DEBUG, INFO, WARN, ERROR, CRITICAL)
> >> >>   payload - a Python dictionary of attributes
> >> >>   Message example:
> >> >>       { 'publisher_id': 'compute.host1',
> >> >>         'timestamp': '2011-05-09 22:00:14.621831',
> >> >>         'priority': 'WARN',
> >> >>         'event_type': 'compute.create_instance',
> >> >>         'payload': {'instance_id': 12, ... }}
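> >> >>   Producing one of these is just filling in the dictionary, e.g.
> >> >>   (sketch only; str(datetime.utcnow()) yields the timestamp format
> >> >>   shown above):
> >> >>       import json
> >> >>       from datetime import datetime
> >> >>
> >> >>       message = {
> >> >>           'publisher_id': 'compute.host1',
> >> >>           'timestamp': str(datetime.utcnow()),
> >> >>           'priority': 'WARN',
> >> >>           'event_type': 'compute.create_instance',
> >> >>           'payload': {'instance_id': 12},
> >> >>       }
> >> >>       body = json.dumps(message)  # hand to whichever driver is configured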
> >> >>   There was a lot of concern voiced over messages backing up in any
> >> >>   of the queueing implementations, as well as the intended priority
> >> >>   of one message over another. There are a couple of immediately
> >> >>   obvious solutions to this. We think the simplest is to implement
> >> >>   N queues, where N is equal to the number of priorities.
> >> >>   Afterwards, consuming those queues is implementation specific and
> >> >>   dependent on the solution that works best for the user.
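> >> >>   With that scheme, routing is just a queue name per priority,
> >> >>   something like this (illustrative only):
> >> >>       def queue_for(message):
> >> >>           # One queue per priority, e.g. 'notifications.warn';
> >> >>           # consumers subscribe only to the levels they care about.
> >> >>           return 'notifications.%s' % message['priority'].lower()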
> >> >>   The current plan for the Rackspace-specific implementation is to
> >> >>   use PubSubHubbub, with a dedicated worker consuming the
> >> >>   notification queues and providing the glue necessary to work with
> >> >>   a standard hub implementation. I have a very immature worker
> >> >>   implementation at https://github.com/Cerberus98/yagi if you're
> >> >>   interested in checking that out.
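> >> >>   For reference, the hub side of that glue is basically the
> >> >>   PubSubHubbub 0.3 publish ping, e.g. (sketch; the endpoint URLs
> >> >>   below are made up):
> >> >>       import urllib
> >> >>       import urllib2
> >> >>
> >> >>       HUB = 'http://hub.example.com/'
> >> >>       TOPIC = 'http://feeds.example.com/notifications.atom'
> >> >>
> >> >>       def ping_hub():
> >> >>           # Tell the hub the topic feed changed; the hub then
> >> >>           # fetches the feed and pushes entries to subscribers.
> >> >>           data = urllib.urlencode({'hub.mode': 'publish',
> >> >>                                    'hub.url': TOPIC})
> >> >>           urllib2.urlopen(HUB, data)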
> >> >>   We'll be going forward with this plan immediately, but we'd love
> >> >>   feedback if you have it. Questions, comments, and concerns are
> >> >>   very much welcome!
> >> >>   Matt Dietz
> >> > 
> >> 
> >> --
> >> George Reese - Chief Technology Officer, enStratus
> >> e: george.reese at enstratus.com    t: @GeorgeReese
> >> p: +1.207.956.0217    f: +1.612.338.5041
> >> enStratus: Governance for Public, Private, and Hybrid Clouds -
> >> @enStratus - http://www.enstratus.com
> >> To schedule a meeting with me: http://tungle.me/GeorgeReese
> >> 
> >
> >
> 
> 
> 



