[openstack-dev] [nova][ceilometer] model for ceilo/nova interaction going forward

Sandy Walsh sandy.walsh at RACKSPACE.COM
Fri Nov 16 15:46:20 UTC 2012


________________________________
From: Doug Hellmann [doug.hellmann at dreamhost.com]
Sent: Friday, November 16, 2012 8:53 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova][ceilometer] model for ceilo/nova interaction going forward



On Thu, Nov 15, 2012 at 6:58 PM, Sandy Walsh <sandy.walsh at rackspace.com> wrote:
From: Doug Hellmann [doug.hellmann at dreamhost.com]
Sent: Thursday, November 15, 2012 5:51 PM

On Thu, Nov 15, 2012 at 4:04 PM, Sandy Walsh <sandy.walsh at rackspace.com> wrote:

So this should be a Ceilometer notifier that lives in the Ceilometer code base and is enabled via a nova.conf --notification_driver setting by whoever deploys it. This implies there are two ways to get notifications out of Nova:
1. via the Rabbit Notifier with an external Worker/Consumer (preferred for monitoring/usage/billing)
2. via a specific Notifier (like https://github.com/openstack/nova/blob/master/nova/openstack/common/notifier/log_notifier.py)
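
For illustration, a rough sketch of what option 2's Ceilometer notifier might look like (the module path and exact signature are assumptions modeled on the log_notifier linked above, not a confirmed interface); the deployer would point nova.conf's notification_driver at it:

    # ceilometer_notifier.py -- hypothetical module, loaded by nova via e.g.
    #   notification_driver = ceilometer.compute.nova_notifier   (path illustrative)
    # The existing notifier drivers appear to be modules exposing notify();
    # the signature here is approximated from the log_notifier example.

    def notify(_context, message):
        """Hand the nova notification dict over to ceilometer."""
        # message carries event_type, publisher_id, priority, payload, etc.
        _forward_to_ceilometer(message)

    def _forward_to_ceilometer(message):
        # Placeholder: whatever transport ceilometer chooses (local call, RPC, ...)
        pass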

> We're talking specifically about values like disk I/O, CPU stats, etc. That data isn't generated as part of
>  a notification; that's why we have to poll for it. What we're looking for is a solution that doesn't involve
>  ceilometer importing part of nova's code *in an unsupported way*, as it does now. Some of the options
>  presented involve new network-based communication between the existing ceilometer agent and the
>  compute agent (RPC or REST, in different directions). None of those is really optimal, because we don't
>  want to burden the compute agent with lots of calls asking for stats, either for metering or for monitoring. I
>  think the option the ceilometer team is favoring at the moment is making the hypervisor library in nova a
>  public API, so we can use it without fear of the API changing in unannounced ways. That would let us keep
>  the hypervisor polling in a separate daemon from the hypervisor management. There are details to work out
>  about how such a separate library would be implemented.
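
To make the library idea concrete, here is a minimal sketch (every name is hypothetical; nothing like this exists yet) of a standalone polling daemon consuming a blessed nova virt API instead of reaching into nova internals:

    # Hypothetical only: a separate pollster using a *public* nova virt library.
    from nova.virt import public_api   # imagined stable entry point

    def poll_instance(instance_id):
        # Read-only connection to the local hypervisor via the supported API.
        conn = public_api.get_connection(read_only=True)
        return {
            'cpu': conn.get_cpu_stats(instance_id),
            'disk_io': conn.get_disk_stats(instance_id),
            'net_io': conn.get_network_stats(instance_id),
        }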

I don't know how some shops would feel about putting an API server on their compute nodes.

> It won't be an API server. We just need the nova team to agree that other projects can consume some part of the
> virtualization library so that the API of that library doesn't change without warning.

I think my plug-in proposal at the data collection stage would solve that.


I'd use the same approach we use everywhere else in OpenStack: make the data-collection portion of the hypervisor layer a plug-in. Each plug-in in the chain can add a new data section to the dictionary sent up for transmission. Out of the box we would send the basic data that is sent today; other deployments might add ceilometer- or hypervisor-specific modules to gather other things.
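
A rough sketch of that plug-in chain (all names invented; only the shape matters), where each plug-in adds its own section to the outgoing dictionary:

    # Hypothetical plug-in chain for hypervisor data collection; names made up.
    class StatsPlugin(object):
        def collect(self, instance_id, payload):
            raise NotImplementedError

    class BasicStatsPlugin(StatsPlugin):
        def collect(self, instance_id, payload):
            payload['basic'] = {'state': 'active'}       # what we send today

    class CeilometerStatsPlugin(StatsPlugin):
        def collect(self, instance_id, payload):
            payload['ceilometer'] = {'cpu_time_ns': 0,   # extra, deployment-specific
                                     'disk_io_bytes': 0}

    def gather(instance_id, plugins):
        payload = {'instance_id': instance_id}
        for plugin in plugins:
            plugin.collect(instance_id, payload)   # each adds its own section
        return payload                             # sent up for transmission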

> What event is triggering that transmission, though? There's nothing doing it currently.

Yes, the periodic task does this currently. I wrote about this in another of the integration posts: there is a periodic task that calls down through the virt layers to get data in a hypervisor-independent fashion.
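
Roughly (heavily simplified, with names approximated rather than copied from the tree), the shape of that path is:

    # Sketch of the existing periodic-task path; names are approximate.
    from nova import manager
    from nova.openstack.common.notifier import api as notifier_api

    class ComputeManager(manager.Manager):

        @manager.periodic_task
        def _publish_hypervisor_stats(self, context):
            # self.driver is the configured virt driver (libvirt, xenapi, ...);
            # the manager only talks to the generic driver interface, so the
            # data comes back in a hypervisor-independent form.
            payload = self.driver.get_host_stats(refresh=True)
            notifier_api.notify(context, 'compute.host1',
                                'compute.metrics.update',
                                notifier_api.INFO, payload)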



I was specifically talking about lifecycle notifications, in which case atomic snapshots of state are desired. Regardless, separating notifications for network, storage and compute would generally be a good thing, I think.

>We're getting notifications from quantum for networking and from nova for compute (although not as frequently as we would like).
> We are working with the cinder team on notifications there, too, but I don't know if that patch landed yet.

I think you're missing my point here. The payload is so large and expensive now because it includes everything in one message: storage, network and compute ... and they all stem from compute (currently). Ideally these could be broken down from OneBigMessage into a bunch of smaller messages, in order to reduce the dependency on (and expense of) the db (in compute anyway, where direct db access is going away).
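
As a sketch of the direction I mean (event types invented for illustration, notifier call approximated), instead of one big compute-originated message:

    # Hypothetical: emit several small, domain-scoped messages rather than
    # one payload that bundles compute, network and storage together.
    from nova.openstack.common.notifier import api as notifier_api

    def publish_usage(context, instance_id, compute, network, storage):
        parts = (('compute.instance.usage', compute),
                 ('network.instance.usage', network),
                 ('volume.instance.usage', storage))
        for event_type, payload in parts:
            payload = dict(payload, instance_id=instance_id)
            notifier_api.notify(context, 'compute.host1', event_type,
                                notifier_api.INFO, payload)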

-S

> Doug


-S


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

