[openstack-dev] [Ceilometer] [Metering] BP for new Ceilometer Agent(less) framework ...

Sandy Walsh sandy.walsh at RACKSPACE.COM
Fri Feb 1 18:33:14 UTC 2013


(sorry for top-post, crappy web client)

I suspect you're correct and I've mixed up the components again. I was mostly looking at the Compute Agent and forgot that notifications can be handled elsewhere too.

I'll look at using the existing mechanism for that.

Thanks again!

-S

________________________________
From: Doug Hellmann [doug.hellmann at dreamhost.com]
Sent: Friday, February 01, 2013 12:17 PM
To: Sandy Walsh
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Ceilometer] [Metering] BP for new Ceilometer Agent(less) framework ...



On Thu, Jan 31, 2013 at 8:16 AM, Sandy Walsh <sandy.walsh at rackspace.com> wrote:


On 01/30/2013 02:12 PM, Doug Hellmann wrote:
>
>
> On Wed, Jan 30, 2013 at 8:04 AM, Julien Danjou <julien at danjou.info> wrote:
>
>     On Tue, Jan 29 2013, Sandy Walsh wrote:
>
>     Hi Sandy,
>
>     > https://blueprints.launchpad.net/ceilometer/+spec/new-ceilometer-agent
>     >
>     > Full spec: http://wiki.openstack.org/NewCeilometerAgent
>     >
>     > Video walkthrough of the StackTach worker (basis for proposal):
>     http://www.youtube.com/watch?v=thaZcHuJXhM
>     > (bump resolution and bandwidth, sorry for the um's)
>
>     What you propose is to drop the ceilometer-compute-agent, not the
>     central one, you should make this clearer I think.
>     Dropping the central one is just not going to happen anytime soon, but
>     that doesn't sound like a problem to me for now -- and it doesn't seem
>     to be a problem for you either.
>
>     IIUC your blueprint, what you want to do is to retrieve data in a push
>     model from notifications sent by Nova, rather than polling Nova+libvirt.
>
>     As I already pointed out to you on IRC, this has already been discussed
>     back in November, the thread is at:
>
>
>      http://lists.openstack.org/pipermail/openstack-dev/2012-November/002791.html
>
>     (I imagine you remember since it seems you participated)
>
>     As far as I know, what we have currently in the compute agent is an
>     implementation of the result of this thread, and Eoghan did the
>     necessary work on abstracting the virt driver etc. on our side rather
>     than on the Nova side.
>
>     Now, if your blueprint is about implementing the necessary components
>     into Nova to emit notifications with interesting information that
>     Ceilometer wants (e.g. instance CPU or I/O usage) and use them from
>     Ceilometer to drop the ceilometer-agent-compute, I don't think anybody
>     -- from the Ceilometer side -- will be against this since, IIRC,
>     Ceilometer crew were all for moving all the stuff into Nova back then.
>
>
> I can certainly see nova allowing more notifications with more data for
> instances, but I'm not sure they would include some of the other aspects
> we want to eventually include about the host machine itself. Perhaps the
> data collection being done for the scheduler now can also emit
> notifications?
>

The notification scheme inside Nova today is "notify when something
changes". Deltas. And those notifications are important enough that they
can't be dropped. I think if we go to a periodic_task notification
approach (basically just reporting on hypervisor load, like the
scheduler would use) we should adopt the UDP method discussed elsewhere.
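To make the distinction concrete, here's a minimal sketch of the UDP idea: periodic load samples are fire-and-forget, since a dropped sample is simply replaced by the next tick, unlike state-change deltas which must be delivered reliably. The event name, endpoint address, and function names are all illustrative assumptions, not anything from Nova or Ceilometer.

```python
import json
import socket
import time

# Assumed collector endpoint -- illustrative only.
METERING_ADDR = ("127.0.0.1", 4952)


def emit_hypervisor_stats(sock, stats):
    """Send one periodic load sample over UDP; no ack, no retry.

    Lossy by design: if this datagram is dropped, the next periodic
    tick carries a fresh reading, so nothing needs to be replayed.
    """
    payload = json.dumps({
        "event_type": "compute.metrics.update",  # hypothetical name
        "timestamp": time.time(),
        "payload": stats,
    }).encode("utf-8")
    sock.sendto(payload, METERING_ADDR)


sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
emit_hypervisor_stats(sock, {"cpu_util": 12.5, "memory_mb_used": 2048})
```

This keeps the reliable (AMQP) path reserved for deltas that can't be lost, while cheap periodic telemetry goes over the lossy path.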

>
>
>     What you may want to do to start is likely to create a blueprint in Nova
>     and get it validated. Changing stuff in Ceilometer then isn't likely to
>     be a problem; as you know, we already consume a lot of notifications.
>
>
> The worker processes described in the video sound a lot like the part of
> the collector that processes the incoming notifications. I'm not sure
> exactly what the new implementation buys us over what is already there.
> Can you elaborate on that aspect, too?

I highlight all the issues with the existing collector in the wiki page
http://wiki.openstack.org/NewCeilometerAgent but I'll review the
collector again just to make sure I didn't miss something. Simplicity
and more comprehensive event collection are the big things.

I think you're conflating the different parts of the system again. We are already listening to the notifications in the collector daemon. There are separate plugins for the collector that subscribe to specific notifications and convert them into metering messages. Adding support for new notification types should just mean adding a new plugin. http://docs.openstack.org/developer/ceilometer/contributing/plugins.html#notifications
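The plugin shape Doug describes can be sketched roughly as follows. This is a self-contained illustration, not the actual Ceilometer plugin API: the `Counter` tuple, the class layout, and the `compute.instance.error` event type are all assumptions standing in for what the contributing guide linked above defines.

```python
from collections import namedtuple

# Stand-in for Ceilometer's metering-message type (illustrative).
Counter = namedtuple("Counter", "name type volume resource_id timestamp")


class InstanceErrorPlugin(object):
    """Hypothetical collector plugin for a new notification type.

    A plugin declares which event types it subscribes to and converts
    each incoming notification into one or more metering messages.
    """

    event_types = ["compute.instance.error"]  # hypothetical event

    def process_notification(self, message):
        yield Counter(
            name="instance.error",
            type="delta",
            volume=1,
            resource_id=message["payload"]["instance_id"],
            timestamp=message["timestamp"],
        )


plugin = InstanceErrorPlugin()
counters = list(plugin.process_notification({
    "event_type": "compute.instance.error",
    "timestamp": "2013-02-01T18:33:14Z",
    "payload": {"instance_id": "abc-123"},
}))
```

The point being made in the thread: supporting a new notification means writing one small class like this, not replacing the collector.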

As far as the other points, as we've discussed elsewhere, we're not going to rewrite it to drop the use of the Oslo RPC library because we don't want to be limited to one message bus type. Adding support for listening to the error message queue shouldn't require completely rewriting the collector (although we may need to change it, if we have to do special processing based on the error event).

So, if nova can be updated to provide notifications more frequently (not just when something changes) and to contain all of the same stats currently being collected by the ceilometer compute agent, then there shouldn't be a need for a new process at all -- the collector should just do the right thing.
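The push-instead-of-poll arrangement above can be sketched as follows. All names here are hypothetical: the stats gatherer stands in for the libvirt queries the compute agent performs today, and `notify` stands in for Nova's notifier.

```python
import time


def collect_instance_stats(instance_id):
    # Stand-in for the libvirt query the compute agent polls for today.
    return {"cpu_time": 123456789, "vcpus": 2}


def periodic_usage_notification(notify, instances):
    """Emit one usage notification per instance on each periodic tick.

    Run from a periodic task inside the compute service, this pushes
    the same stats the compute agent currently pulls, so the existing
    collector plugins can consume them with no new process.
    """
    for instance_id in instances:
        notify({
            "event_type": "compute.instance.usage",  # hypothetical
            "timestamp": time.time(),
            "payload": dict(collect_instance_stats(instance_id),
                            instance_id=instance_id),
        })


sent = []
periodic_usage_notification(sent.append, ["abc-123", "def-456"])
```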

Doug


-S

>
> Doug
>
>
>
>     --
>     Julien Danjou
>     # Free Software hacker & freelance
>     # http://julien.danjou.info
>
>     _______________________________________________
>     OpenStack-dev mailing list
>     OpenStack-dev at lists.openstack.org
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


