[openstack-dev] [cinder] Volume metering in nova

Doug Hellmann doug.hellmann at dreamhost.com
Fri Aug 10 14:49:53 UTC 2012


On Tue, Aug 7, 2012 at 12:59 PM, O'Driscoll, Cian <cian.o-driscoll at hp.com> wrote:

>  Sorry for the delay in replying, but I was out of the office for the
> last couple of days.
>
> The goal is B: to get usage (reads, writes, read MB, write MB) of all
> volumes in status “attached”.
>
> > Compute keeps a record of volumes that are attached via the
> > block_device_mapping table. The method for doing this is to get all
> > instances belonging to the host, iterate through each instance and get a
> > list of attached volumes from block_device_mapping, and log usage for each
> > one. You could do this inside the existing _instance_usage_audit task
> > (which already grabs all active instances on the host) using
> > self._get_instance_volume_bdms(instance_uuid).
>
> We can extend _instance_usage_audit to send volume usage data, similar to
> how it currently sends bandwidth usage data.
>
> _instance_usage_audit reads the bandwidth data from the db, which has
> already been populated by a separate periodic task, _poll_bandwidth_usage.
>
> I think we still need to gather that information through a periodic task
> like _poll_bandwidth_usage, using self._get_instance_volume_bdms(instance_uuid)
> (tested and works fine).
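>
> Roughly, the shape I have in mind is something like this (a sketch only:
> apart from _get_instance_volume_bdms and the existing db hooks, names and
> signatures here are approximate, and get_all_volume_usage is the new
> driver call from the blueprint, fed BDMs instead of cinder volumes):
>
>     @manager.periodic_task
>     def _poll_volume_usage(self, context):
>         start_time = utils.last_completed_audit_period()[1]
>
>         vol_usages = []
>         for instance in self.db.instance_get_all_by_host(context, self.host):
>             # Only volumes recorded as attached in block_device_mapping.
>             bdms = self._get_instance_volume_bdms(instance['uuid'])
>             if not bdms:
>                 continue
>             # The virt driver (libvirt) reports per-volume block stats.
>             vol_usages.extend(
>                 self.driver.get_all_volume_usage(context, bdms, start_time))
>
>         for usage in vol_usages:
>             self.db.vol_usage_update(context, usage['volume'], start_time,
>                                      usage['rd_req'], usage['rd_bytes'],
>                                      usage['wr_req'], usage['wr_bytes'])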
>
> If this were an auditor for the amount of time a volume was mounted, we
> could do it all in cinder. But since in this case we need to query libvirt
> for the usage stats (reads, writes, read MB, write MB), we have to do it
> on the compute host, as we cannot query libvirt remotely (from the volume
> manager).
>

Ceilometer is already running an agent on the compute host to poll libvirt
for other stats about the instance. Maybe we should add a pollster there
for this new info, too?
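
For reference, the underlying libvirt query is the same wherever it ends up
living. Roughly (illustrative only; this is just the stats-gathering part,
not ceilometer's pollster plumbing, and the lookup of the domain by instance
name is an assumption):

    import libvirt
    from xml.etree import ElementTree

    def get_block_stats(instance_name):
        """Read per-device block I/O stats for one domain via libvirt."""
        conn = libvirt.openReadOnly('qemu:///system')
        dom = conn.lookupByName(instance_name)
        stats = {}
        # Walk the domain XML to find the attached block devices.
        xml = ElementTree.fromstring(dom.XMLDesc(0))
        for target in xml.findall('./devices/disk/target'):
            dev = target.get('dev')
            # blockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
            rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
            stats[dev] = {'rd_req': rd_req, 'rd_bytes': rd_bytes,
                          'wr_req': wr_req, 'wr_bytes': wr_bytes}
        return stats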

Doug


> From: Vishvananda Ishaya [mailto:vishvananda at gmail.com]
> Sent: 02 August 2012 17:03
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [cinder] Volume metering in nova
>
> There are two possible solutions, depending on what the goal of usage
> metering is.
>
> Goal A: log usage of all created volumes regardless of attach status.
>
> This should be done on the cinder (or nova-volume/manager.py) side, where
> the list of volumes exists.
>
> Goal B: log usage of all attached volumes.
>
> Compute keeps a record of volumes that are attached via the
> block_device_mapping table. The method for doing this is to get all
> instances belonging to the host, iterate through each instance and get a
> list of attached volumes from block_device_mapping, and log usage for each
> one. You could do this inside the existing _instance_usage_audit task
> (which already grabs all active instances on the host) using
> self._get_instance_volume_bdms(instance_uuid).
>
> Vish
>
> On Aug 2, 2012, at 4:43 AM, "O'Driscoll, Cian" <cian.o-driscoll at hp.com>
> wrote:
>
> I just wanted to start a discussion on an issue I hit yesterday while
> implementing volume usage metering
> (https://blueprints.launchpad.net/nova/+spec/volume-usage-metering).
>
> Here is what I found:
>
> I created a periodic task in nova/compute/manager.py; the function looks
> like this:
>
>     @manager.periodic_task
>     def _poll_volume_usage(self, context, start_time=None, stop_time=None):
>         if not start_time:
>             start_time = utils.last_completed_audit_period()[1]
>
>         curr_time = time.time()
>         if curr_time - self._last_vol_usage_poll > FLAGS.volume_poll_interval:
>             self._last_vol_usage_poll = curr_time
>             LOG.info(_("Updating volume usage cache"))
>
>             volumes = self.volume_api.get_all(context)
>             try:
>                 vol_usage = self.driver.get_all_volume_usage(context, volumes,
>                         start_time, stop_time)
>             except NotImplementedError:
>                 return
>
>             for usage in vol_usage:
>                 self.db.vol_usage_update(context, usage['volume'], start_time,
>                                          usage['rd_req'], usage['rd_bytes'],
>                                          usage['wr_req'], usage['wr_bytes'])
>
> The issue is with the call “self.volume_api.get_all(context)”. In my
> blueprint I was going to use self.db.volume_get_all_by_host(ctxt,
> self.host), but because the nova db doesn’t contain volume info anymore,
> we need to talk to cinder to get it.
>
> _poll_volume_usage is passed an admin context when run. The admin context
> doesn’t contain a service_catalog or any authentication info from
> Keystone; it is basically just a context that says is_admin=True.
>
> So when “self.volume_api.get_all(context)” is called with the admin
> context, the instantiation of a python cinder client fails because the
> service catalog is empty.
>
> Even if the service catalog were populated, I still think we’d fail, as we
> wouldn’t have an auth token to talk to cinder.
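>
> For illustration, my reading of what the periodic task actually gets
> handed is roughly this (untested beyond the traceback below):
>
>     from nova import context
>
>     ctxt = context.get_admin_context()
>     # ctxt.service_catalog is None and there is no auth token, so the
>     # cinderclient wrapper in nova/volume/cinder.py has nothing to build
>     # a volume endpoint from.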
>
> 2012-08-01 14:05:01 ERROR nova.manager [-] Error during ComputeManager._poll_volume_usage: 'NoneType' object is not iterable
> 2012-08-01 14:05:01 TRACE nova.manager Traceback (most recent call last):
> 2012-08-01 14:05:01 TRACE nova.manager   File "/opt/stack/nova/nova/manager.py", line 173, in periodic_tasks
> 2012-08-01 14:05:01 TRACE nova.manager     task(self, context)
> 2012-08-01 14:05:01 TRACE nova.manager   File "/opt/stack/nova/nova/compute/manager.py", line 2639, in _poll_volume_usage
> 2012-08-01 14:05:01 TRACE nova.manager     volumes = self.volume_api.get_all(context)
> 2012-08-01 14:05:01 TRACE nova.manager   File "/opt/stack/nova/nova/volume/cinder.py", line 125, in get_all
> 2012-08-01 14:05:01 TRACE nova.manager     items = cinderclient(context).volumes.list(detailed=True)
> 2012-08-01 14:05:01 TRACE nova.manager   File "/opt/stack/nova/nova/volume/cinder.py", line 45, in cinderclient
> 2012-08-01 14:05:01 TRACE nova.manager     url = sc.url_for(service_type='volume', service_name='cinder')
> 2012-08-01 14:05:01 TRACE nova.manager   File "/opt/stack/python-cinderclient/cinderclient/service_catalog.py", line 53, in url_for
> 2012-08-01 14:05:01 TRACE nova.manager     for service in catalog:
> 2012-08-01 14:05:01 TRACE nova.manager TypeError: 'NoneType' object is not iterable
> 2012-08-01 14:05:01 TRACE nova.manager
>
> One possible solution is to create a cinder admin user in Keystone with
> read-only access (able to read all data from the cinder db). Glance does
> something very similar when talking to swift: it keeps the user
> credentials in a conf file on the glance nodes.
>
> So before we call “self.volume_api.get_all(context)”, we could generate a
> cinder admin context using the info in a conf file.
>
> I think this could be done in another blueprint, as I feel there are other
> use cases where a cinder admin user is required (any cinder auditing in
> nova would need it).
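>
> As a rough illustration of what I mean (the flag names are made up, and
> the cinderclient constructor arguments are from memory, so treat this as
> a sketch):
>
>     from cinderclient.v1 import client as cinder_client
>
>     def _cinder_admin_client():
>         # Credentials for a read-only cinder admin user, read from a conf
>         # file on the compute node (hypothetical flag names).
>         return cinder_client.Client(FLAGS.cinder_admin_user,
>                                     FLAGS.cinder_admin_password,
>                                     FLAGS.cinder_admin_tenant_name,
>                                     FLAGS.cinder_auth_url)
>
>     volumes = _cinder_admin_client().volumes.list(detailed=True)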
>
> For now, to make progress, I propose just adding stats collection on
> detach and not implementing the periodic task yet.
>
> This would be volume usage metering part 1, say, as a proof of concept;
> once a general consensus/implementation is reached around the cinder admin
> user, I can implement the periodic task as a later part.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

