[openstack-dev] Quantum usage audit

Doug Hellmann doug.hellmann at dreamhost.com
Wed Oct 3 15:13:33 UTC 2012


On Wed, Oct 3, 2012 at 10:10 AM, Gary Kotton <gkotton at redhat.com> wrote:

>  On 10/03/2012 02:42 PM, Doug Hellmann wrote:
>
> On Wed, Oct 3, 2012 at 8:30 AM, Gary Kotton <gkotton at redhat.com> wrote:
>
>  Hi,
>> I wanted to ask a few additional questions regarding the patch -
>> https://review.openstack.org/#/c/13943/ - aka quantum-usage-audit:
>> 1. Do you guys intend to take care of the router and floating IP support?
>> This seems to be lacking, and I am not sure whether notifications support
>> it at the moment.
>>
>
>  We are currently polling for floating IP status. Julien did that work,
> IIRC, and I think polling was used because there were no notifications. We
> should be able to contribute some help to add notifications during grizzly,
> which would let us eliminate the polling on our side.
>
>
> OK. This is certainly something that is missing in Quantum. I think that
> we should open a bug for tracking.
>

I agree. If you mark the bug as affecting ceilometer, too, we'll be able to
coordinate the work between the projects.


>
>
>
>> 2. I think that the reporting can be optimized. Can you please elaborate
>> a little more on how you see things working? I hope this will be discussed
>> at the summit. Did you guys consider pulling from Quantum via the API,
>> similar to the L3 agent?
>>
>
>  We do have some polling, although I think it's going straight to the
> database right now (fixing that is another goal, all of this is a work in
> progress).
>
>
> Is the polling via the quantum client?
>

No, it's going directly to the database. See
https://github.com/stackforge/ceilometer/blob/master/ceilometer/network/floatingip.py
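Roughly, a database-backed pollster of this kind maps each floating IP row to a
gauge sample. The row and sample shapes below are illustrative assumptions for
the sake of the sketch, not the actual schema or code in floatingip.py:

```python
from dataclasses import dataclass


# Illustrative row shape; the real columns live in quantum's database.
@dataclass
class FloatingIP:
    id: str
    project_id: str
    status: str


def poll_floating_ips(rows):
    """Turn each floating IP row into one gauge sample of volume 1."""
    samples = []
    for fip in rows:
        samples.append({
            "name": "ip.floating",
            "volume": 1,
            "resource_id": fip.id,
            "project_id": fip.project_id,
            "metadata": {"status": fip.status},
        })
    return samples
```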


>
>
>  I'm not sure what you mean by "optimized" though. Having a standalone
> app generate data over the message bus seems like it would have less impact
> on quantum than if we were hitting the API every few minutes. Perhaps
> there's some aspect to the way the data is gathered that I don't understand
> though. The ultimate goal is to have quantum emitting the audit
> notifications on its own in whatever way combines efficiency and accuracy.
> In nova and cinder the audit messages are disabled by default, so a
> deployer has to set a configuration option to enable them (for nova) or
> enable a cron job (for cinder).
>
>
> I was originally thinking of having a timestamp, with updates to ceilometer
> done according to it. For example, a "Last Updated" field could ensure that
> less data is sent every interval. At the moment there will be a network
> burst every time the audit is run.
>

Ah, I see. There are two reasons we don't want a timestamp or other
criteria to throttle those updates. First, we don't want to manage state
between the two systems (or potentially more systems if other clients start
listening for the notifications). Quantum shouldn't have to worry about
what ceilometer may or may not already know to try to decide what to tell
us about next.

Most importantly, though, the entire design for ceilometer depends on
receiving regular updates. We don't treat "create" and "delete" events in
any special way. Every incoming event is just an update to the state of a
resource. The first event we see for a resource/meter combination causes
the new resource to be created in our database, but that event doesn't have
to be a "create" event. Treating all events in the same way simplifies the
message handling logic in the notification handlers, the API
implementation, and client-side use of the API. The API lets the caller ask
questions about resource usage in a lot of different ways, since different
operators will bill for different usage, at different intervals, using
different criteria (possibly even using metadata settings for the resource
itself, with instance flavor being the most obvious example but the
location of the instance may be another factor). Because we don't know
exactly what the users of the API are going to ask for, we cannot aggregate
the data significantly. If we depended on special handling for different
notification event types and missed an event, a resource could be left off
of a customer's bill. But by using regular updates, and not treating any of
the events as special triggers, we're able to be more flexible and more
robust at the same time.
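The "every event is just an update" model above can be sketched as a handler
that upserts state keyed by (resource, meter), so the first event it sees
creates the record regardless of event type. The dict-based store here is a
stand-in for illustration, not ceilometer's actual storage layer:

```python
def record_event(store, event):
    """Treat any incoming event as the latest state of its resource."""
    key = (event["resource_id"], event["counter_name"])
    # The first event for this resource/meter pair creates the record;
    # it does not have to be a "create" event.
    resource = store.setdefault(key, {"samples": []})
    resource["samples"].append({
        "event_type": event.get("event_type", "audit"),
        "volume": event["volume"],
    })
    return resource
```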

If there is going to be too much data to update ceilometer all at once, we
could send the updates in batches. However, quantum should never assume
that ceilometer (or any other listener) knows anything at all, so it should
eventually (in a relatively small amount of time) iterate over all of its
resources and send audit events.
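A minimal sketch of that batching idea: iterate over every resource and emit
audit notifications in fixed-size chunks rather than in one burst. The
function and parameter names are illustrative:

```python
def audit_in_batches(resources, notify, batch_size=100):
    """Send every resource, but in chunks of at most batch_size."""
    batch = []
    for resource in resources:
        batch.append(resource)
        if len(batch) >= batch_size:
            notify(batch)  # one message-bus notification per chunk
            batch = []
    if batch:
        notify(batch)  # flush the final partial chunk
```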

One thing we could do is split up the existing audit job into separate
scripts for each resource type. That way if a deployer doesn't need to
meter ports, for example, they could avoid doing that processing.
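The split could look something like the following, with one entry point per
resource type; the function names are hypothetical, made up here only to
illustrate how a deployer would run just the audits they need:

```python
def audit_networks():
    return "audited networks"


def audit_ports():
    return "audited ports"


# Each key would become its own script/cron entry, so skipping ports
# means simply not scheduling the "port" audit.
AUDITS = {"network": audit_networks, "port": audit_ports}


def run_audit(resource_type):
    """Select and run a single resource type's audit, cron-style."""
    return AUDITS[resource_type]()
```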


>
>
>  We do have several summit sessions planned. Ceilometer has a mini-track
> Monday morning with 3 sessions (
> http://wiki.openstack.org/EfficientMetering/GrizzlySummit), and at Dan's
> request I proposed a session for the Quantum track specifically to talk
> about integrating Ceilometer and Quantum.
>
>
> Thanks
>
>
>
>> 3. Is this something that will be cherry picked to stable quantum and in
>> turn should be packaged? If so I understand that this is a cron job that
>> should be run every X minutes?
>>
>
> That is the goal. It would be up to deployers to set up the cron job, and
> we would need to document that in the ceilometer documentation (probably
> the quantum docs, too, but we'll definitely have it in our docs).
>
>  Doug
>
>
>