[Openstack] SystemUsageData in Diablo via notification system?

Monsyne Dragon mdragon at RACKSPACE.COM
Wed Oct 26 21:08:35 UTC 2011


We've used the standard Google reference hub.  The Yagi app stores notifications (with an optional expiry) in Redis, and generates feeds from the items in Redis.  Redis can be clustered.  Yagi is composed of two parts: the yagi-event daemon, which reads from the AMQP queue, stores notifications in Redis, and pings the hub; and the yagi-feed WSGI app, which simply pulls from Redis and generates an Atom feed.  Any number of yagi-event daemons can be run on multiple nodes, and they divide messages coming in from the queues between them.  If one failed, the others would keep on.  (And Rabbit would queue the messages until acknowledged anyway, even if all of them stopped.)  The feed app can also have many instances run in parallel and be load-balanced.  Most of the hubs are the same.  If your application goes down and misses a ping from the hub, it can just look at the feed for any events it missed when it comes back up.
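To make the event/feed split concrete, here is a minimal stdlib-only sketch of the flow described above. The function names (`store_notification`, `atom_feed`) are hypothetical, not Yagi's actual API, and an in-memory dict stands in for Redis and AMQP:

```python
import json
import time
from xml.sax.saxutils import escape

# In-memory stand-in for Redis: event_id -> (payload, expiry timestamp).
# The real yagi-event daemon stores these in Redis with an optional TTL.
STORE = {}

def store_notification(event_id, payload, ttl=3600):
    """Roughly what yagi-event does after reading a message off the AMQP queue."""
    STORE[event_id] = (payload, time.time() + ttl)
    # ...it would then ping the PubSubHubbub hub here.

def live_notifications():
    """Drop expired entries and return the rest (Redis handles TTLs natively)."""
    now = time.time()
    return {k: v for k, (v, exp) in STORE.items() if exp > now}

def atom_feed(title="notifications"):
    """Roughly what the yagi-feed WSGI app does: render stored events as Atom."""
    entries = "".join(
        "<entry><id>%s</id><content>%s</content></entry>"
        % (escape(eid), escape(json.dumps(payload)))
        for eid, payload in sorted(live_notifications().items())
    )
    return ('<?xml version="1.0"?>'
            '<feed xmlns="http://www.w3.org/2005/Atom">'
            '<title>%s</title>%s</feed>' % (escape(title), entries))

store_notification("evt-1", {"event_type": "compute.instance.create"})
print(atom_feed())
```

Because the store is shared and each stored event is independent, many reader daemons and many feed renderers can run side by side, which is what makes the N-daemon scaling described above work.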

As far as hub HA setup, performance, etc., we have not gone into it too deeply at the moment. We are currently pushing events to another (internal) system using AtomPub (we have other internal systems that generate Atom feeds, so we have an internal aggregator). We do want to test the various hubs for scalability at some point, but we haven't done that yet.


On Oct 26, 2011, at 12:59 PM, Joseph Heck wrote:

Have you been testing and/or working with a specific hub from the list on that wiki page (http://code.google.com/p/pubsubhubbub/wiki/Hubs)?

What I'm wondering is how we could set up a notification system that would be highly available (i.e. two nodes or a failover mechanism) and wouldn't lose data. I don't have any background with PubSubHubbub as yet, so I'm looking for some insight from someone who has worked with it previously.

-joe

On Oct 26, 2011, at 10:25 AM, Monsyne Dragon wrote:
I answered Roe Lee's question via email, but I figured some other folks on the list might want to know as well...

Begin forwarded message:

Date: October 26, 2011 12:21:34 AM CDT
To: Roe Lee <hrlee.us at gmail.com>
Subject: Re: SystemUsageData in Diablo via notification system?

Hello!  Yes, notifications were mostly added in Diablo, and the usage data has also been expanded in the current trunk (for the Essex release).

I have updated some of the information on the implementation of notifications on the OpenStack wiki here:

http://wiki.openstack.org/NotificationSystem#Implementation
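As a hedged illustration of consuming one of those notifications for billing: the sketch below parses a notification body of the general Diablo-era shape (top-level `event_type` and `payload` keys); the exact payload fields shown are examples, so verify them against your own deployment before relying on them.

```python
import json

# Example notification body, roughly as it would arrive off the AMQP
# notifications queue. Field names are illustrative of the Diablo-era
# format; check your deployment's actual payloads.
raw = json.dumps({
    "event_type": "compute.instance.create",
    "publisher_id": "compute.host1",
    "timestamp": "2011-10-26 21:08:35",
    "payload": {"instance_id": "abc-123", "tenant_id": "t-42",
                "instance_type": "m1.small"},
})

def usage_record(message):
    """Reduce one notification to the fields a billing system might keep."""
    msg = json.loads(message)
    if not msg.get("event_type", "").startswith("compute.instance."):
        return None  # not a usage event
    p = msg.get("payload", {})
    return (msg["event_type"], p.get("tenant_id"), p.get("instance_id"))

print(usage_record(raw))
```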


On Oct 25, 2011, at 10:20 PM, Roe Lee wrote:

Hi-

I am looking for a way to get system usage data for billing purposes
in the Diablo release. Does anyone know how to get event messages
such as compute.instance.create, compute.instance.delete, etc.? I believe
this information can be retrieved via log files or AMQP.

P.S.: I guess system usage data is not available in Cactus.

Hoping to hear any tips.

Thanks,
Roe
--
This message was sent from Launchpad by
Roe Lee (https://launchpad.net/~roe-lee)
to each member of the OpenStack Team team using the "Contact this team" link
on the OpenStack Team team page (https://launchpad.net/~openstack).
For more information see
https://help.launchpad.net/YourAccount/ContactingPeople

--
Monsyne M. Dragon
OpenStack/Nova
cell 210-441-0965
work x 5014190



_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack at lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



