[openstack-dev] [Openstack] [metering][ceilometer] Unified Instrumentation, Metering, Monitoring ...

Doug Hellmann doug.hellmann at dreamhost.com
Thu Nov 8 17:54:09 UTC 2012


On Wed, Nov 7, 2012 at 10:21 PM, Sandy Walsh <sandy.walsh at rackspace.com> wrote:

> Hey!
>
> (sorry for the top-posting, crappy web client)
>
> There is a periodic task already in the compute manager that can handle
> this:
> https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3021
>
> There seem to be some recent (to me) changes in the manager, with respect
> to the resource_tracker.py and stats.py files, in how this information gets
> relayed. Now it seems the data only goes to the db, but previously it was
> sent to a fanout queue that the schedulers could use.
>
> Regardless, this is done at a high enough level that it doesn't really
> care about the underlying virt layer, so long as the virt layer supports
> the get_available_resource() method.
>
>
> https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L152
>
> https://github.com/openstack/nova/blob/master/nova/virt/xenapi/driver.py#L392
>
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2209
>
> I'd add a hook in there to do what we want with this data. Write it to
> the db, send it over the wire, whatever. If there is additional information
> required, it should go in this dictionary (or we should define a format for
> extensions to it).
>

It looks like that is collecting resource data, but not usage data. For
example, there's no disk I/O information, just disk space. Is that what you
mean by adding extra information to the dictionary?
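
To make the gap concrete, here is a rough sketch of the capacity-style
dictionary the drivers return today versus the usage counters that would
have to be bolted on. The field names below are approximate and for
illustration only, not the exact driver output:

    # Roughly the capacity-style data the virt drivers report today
    # (illustrative field names -- see nova/virt/libvirt/driver.py for
    # the real dictionary).
    resource_data = {
        'vcpus': 8,
        'memory_mb': 16384,
        'local_gb': 250,
        'vcpus_used': 3,
        'memory_mb_used': 6144,
        'local_gb_used': 80,
        'hypervisor_type': 'QEMU',
    }

    # Hypothetical usage-oriented extension a hook could attach before
    # emitting -- these keys do not exist in nova today.
    usage_extension = {
        'disk_read_bytes': 123456789,
        'disk_write_bytes': 987654321,
        'net_rx_bytes': 55555,
        'net_tx_bytes': 44444,
    }

    def build_sample(resources, usage):
        """Merge capacity data and usage counters into one payload."""
        sample = dict(resources)
        sample['usage'] = dict(usage)
        return sample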

Doug


>
> The --periodic_interval value is meant to be the "fastest tick", and the
> individual methods have to decide how many multiples of that base tick
> they should use. So you can have different data reported at different
> intervals.
>
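
For illustration, a toy sketch of that tick-multiple idea: the manager's
periodic task fires once per --periodic_interval, and each sampler decides
how many base ticks to skip between real samples. The class, method, and
callback names here are invented, not actual nova code:

    BASE_TICKS_PER_SAMPLE = 6  # take a real sample every 6th base tick

    class SampleEmitter(object):
        def __init__(self, driver, notify):
            self.driver = driver   # virt driver with get_available_resource()
            self.notify = notify   # push callback: RPC fanout, db write, ...
            self._ticks = 0

        def periodic_tick(self, nodename):
            """Called once per --periodic_interval by the manager."""
            self._ticks += 1
            if self._ticks % BASE_TICKS_PER_SAMPLE:
                return  # not our multiple of the base tick yet
            # Sample once, here, then push -- any other consumer reads
            # the pushed copy rather than re-sampling.
            data = self.driver.get_available_resource(nodename)
            self.notify('compute.resource.sample', data)
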
> Now, the question of polling vs. pushing shouldn't really matter if the
> sampling rate is predetermined. We can push when the sample is taken or we
> can read from some other store from an external process ... but the
> sampling should only be done in one place, once.
>
> Hope I answered your question? If not, just repeat it in another way and
> I'll try again :)
>
> -S
>
>
>
> ________________________________________
> From: openstack-bounces+sandy.walsh=rackspace.com at lists.launchpad.net
> on behalf of Eoghan Glynn [eglynn at redhat.com]
> Sent: Wednesday, November 07, 2012 4:32 PM
> To: OpenStack Development Mailing List
> Cc: openstack at lists.launchpad.net
> Subject: Re: [Openstack] [openstack-dev] [metering][ceilometer] Unified
> Instrumentation, Metering, Monitoring ...
>
> > Here's a first pass at a proposal for unifying StackTach/Ceilometer
> > and other instrumentation/metering/monitoring efforts.
> >
> > It's v1, so bend, spindle, mutilate as needed ... but send feedback!
> >
> > http://wiki.openstack.org/UnifiedInstrumentationMetering
>
> Thanks for putting this together Sandy,
>
> We were debating earlier on IRC (#heat) the merits of moving the
> ceilometer "emission" logic into the services, e.g. directly into the
> nova-compute node. At first sight, this seemed to be what you were
> getting at with the suggestion:
>
>  "Remove the Compute service that Ceilometer uses and integrate the
>   existing fanout compute notifications into the data collected by the
>   workers. There's no need for yet-another-worker."
>
> While this could be feasible for measurements driven directly by
> notifications, I'm struggling with the idea of moving, say, the libvirt
> polling out of the ceilometer compute agent, as this seems to leak too
> many monitoring-related concerns directly into nova (cadence of polling,
> semantics of the libvirt stats reported, etc.).
>
> So I just wanted to clarify whether the type of low-level unification
> you're proposing includes both push & pull (i.e. notification & polling),
> or whether you mainly had just the former in mind when it comes to
> ceilometer.
>
> Cheers,
> Eoghan
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack at lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>