[openstack-dev] Changing period audit to real-time audit

Doug Hellmann doug.hellmann at dreamhost.com
Tue Nov 6 14:09:44 UTC 2012


I posted some of this information on the changeset in gerrit, but I'm
reproducing it here for the mailing list in case not everyone clicked
through...

On Mon, Nov 5, 2012 at 12:17 PM, Monsyne Dragon <mdragon at rackspace.com>wrote:

> Er, I'm somewhat confused at this.
>
> Having the audit run from the volume manager as a periodic task is good
> (Nova already does this), but, first, running it every 60 seconds would
> produce *way* too much notification traffic, and, second, removing the
> audit_period breaks the usage notification model.
>

Yes, I think we agree that's bad. We collect similar data every 10 minutes
by default elsewhere, so we should make this change work the same way (I
think that's what Julien's comments about the ticks_between_runs was
about). Ultimately we want it to be configurable.
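
For what it's worth, the ticks_between_runs mechanism boils down to skipping a
fixed number of manager ticks between runs of a task. A minimal sketch of the
idea (illustrative names only, not the actual Nova/Cinder code):

```python
class PeriodicAuditTask:
    """Run an audit only once every ticks_between_runs + 1 manager ticks.

    With the default periodic_interval of 60s, ticks_between_runs=9
    would make the audit fire every 10 minutes.
    """

    def __init__(self, ticks_between_runs):
        self.ticks_between_runs = ticks_between_runs
        self._ticks_to_skip = 0
        self.run_count = 0

    def tick(self):
        # Called by the manager once per periodic_interval.
        if self._ticks_to_skip > 0:
            self._ticks_to_skip -= 1
            return False
        self.run_count += 1  # the real task would send notifications here
        self._ticks_to_skip = self.ticks_between_runs
        return True
```

Making the interval configurable is then just a matter of exposing
ticks_between_runs (or an equivalent seconds value) as a config option.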


>
> Cinder and Nova already produce immediate notifications (volume.create.*
> and volume.delete.* for Cinder, and many many types
> (compute.instance.create.*, etc) for Nova)
>

Those events are insufficient for the way ceilometer meters usage because
we want to ensure we know about resources even if we miss their creation
event, and we want to ensure we know about the metadata for a resource and
how it changes over time.


>
> The audit events are meant to provide a baseline for each period, so
> at any instant you can know "last period there were x
> volumes/instances, and I just got a create event, so now there are x+1",
> without having to look back through history to the beginning of time
> to get an accurate count.
>
> I do not think this is necessary.
>

Ceilometer works differently than the existing auditing system(s) because
it is doing more than just auditing. In order to support all of the queries
needed for our use cases, we need the events to come more frequently than
once every hour. The receiver will deal with auditing periods, aggregating
data, etc. The sender should not have to worry about any of those
requirements, so it becomes simpler and always sends regular exists
notifications for each resource it knows about.
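
Concretely, the sender side I have in mind is just a loop over the resources
the host knows about. A hypothetical sketch (function and payload names are my
own, not Cinder's actual code):

```python
def emit_exists_notifications(volumes, notify):
    """Send one 'exists' notification per volume this host manages.

    The sender knows nothing about audit periods; the receiver
    aggregates these events however its queries require.
    """
    for volume in volumes:
        payload = {
            'volume_id': volume['id'],
            'size': volume['size'],
            # Shipping the current metadata with every event lets the
            # receiver track how it changes over time, even if a
            # creation event was missed.
            'metadata': volume.get('metadata', {}),
        }
        notify('volume.exists', payload)
```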

In the changeset comments you mentioned that this would break existing
tools looking for the "exists" events, and that's a good point. It looks
like you and Julien are proposing using "volume.usage" instead of
"volume.exists". I'm not sure that's quite the right description of the
purpose of these events, but I don't have a better suggestion, so I can
live with it. :-)

Doug


>
>
> On Nov 5, 2012, at 9:34 AM, Julien Danjou wrote:
>
> > Hi,
> >
> > I've been working to change the current audit code used in Cinder (but
> > also in Nova) to a real-time one, so it's more useful to projects like
> > Ceilometer.
> >
> > The implementation for Cinder is under review at:
> >
> >    https://review.openstack.org/#/c/15115/
> >
> > Once this patch gets accepted, I intend to port it to Nova in the same
> > fashion.
> >
> > Now, Huang Zhiteng raises an issue that might also interest Nova team,
> > so I'm bringing this to this list.
> >
> > My patch plugs the audit in a periodic task under the cinder-volume
> > daemon, since this is the one making more sense for cinder. For Nova,
> > that would probably be nova-compute.
> > The concern is that sending exists notifications might take too long
> > and block the daemon for too much time.
> >
> > From my point of view, since each host only sends notifications about
> > the volumes (in Nova, the instances) it handles, there will not be
> > many of them, and sending them shouldn't take the daemon long or block
> > it for any noticeable time, especially compared to the burden of
> > shipping another daemon.
> >
> > Also, while I'm at it, suggestions about a default auditing rate might
> > be welcome, since the default periodic_interval (60s) might be a little
> > too low. Maybe using ticks_between_runs would be a good idea?
> >
> > --
> > Julien Danjou
> > # Free Software hacker & freelance
> > # http://julien.danjou.info
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>

