[openstack-dev] [cinder] Volume metering in nova

Doug Hellmann doug.hellmann at dreamhost.com
Thu Aug 2 16:53:42 UTC 2012


On Thu, Aug 2, 2012 at 12:03 PM, Vishvananda Ishaya
<vishvananda at gmail.com> wrote:

> There are two possible solutions depending on what the goal of usage
> metering is.
>
> Goal A: log usage of all created volumes regardless of attach status.
>
> This should be done on the cinder (or nova-volume/manager.py) side, where
> the list of volumes exists.
>

It seems we want all volumes, but we also want to know, for each one, whether
it is attached (and possibly where). Does cinder have that information? If so,
cinder could publish notifications similar to 'compute.instance.exists', which
ceilometer could consume.
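
For illustration, a minimal sketch of what such a cinder-side 'volume.exists'
notifier could look like. None of these names are existing cinder code; it
just assumes a notify(context, publisher_id, event_type, priority, payload)
helper in the style of nova's notifier API:

    # Rough sketch only -- field names and the notify() helper are
    # assumptions, not verified cinder code.
    def notify_volume_exists(notify, context, host, volumes,
                             audit_start, audit_end):
        """Emit a 'volume.exists' notification for every known volume,
        including attach status, so a consumer like ceilometer can meter
        attached and unattached volumes alike."""
        for volume in volumes:
            payload = {
                'volume_id': volume['id'],
                'size': volume['size'],
                'status': volume['status'],  # e.g. 'in-use' or 'available'
                # where the volume is attached, if anywhere
                'instance_uuid': volume.get('instance_uuid'),
                'audit_period_beginning': str(audit_start),
                'audit_period_ending': str(audit_end),
            }
            notify(context, 'volume.%s' % host, 'volume.exists',
                   'INFO', payload)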

Doug


>
> Goal B: log usage of all attached volumes.
>
> Compute keeps a record of volumes that are attached via the
> block_device_mapping table. The method for doing this is to get all
> instances belonging to the host, iterate through each instance and get a
> list of attached volumes from block_device_mapping, and log usage for each
> one. You could do this inside the existing _instance_usage_audit task
> (which already grabs all active instances on the host) using
> self._get_instance_volume_bdms(instance_uuid).
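>
> Roughly (a sketch only -- the surrounding ComputeManager plumbing is an
> assumption based on the description above, not verified code):
>
>     def _log_attached_volume_usage(self, context, instances,
>                                    start_time, end_time):
>         """Log usage for every volume attached to an active instance on
>         this host, using the block_device_mapping records."""
>         for instance in instances:
>             for bdm in self._get_instance_volume_bdms(instance['uuid']):
>                 LOG.info(_("volume %(vol)s attached to instance %(uuid)s "
>                            "during audit period %(start)s - %(end)s"),
>                          {'vol': bdm['volume_id'],
>                           'uuid': instance['uuid'],
>                           'start': start_time,
>                           'end': end_time})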
>
> Vish
>
> On Aug 2, 2012, at 4:43 AM, "O'Driscoll, Cian" <cian.o-driscoll at hp.com>
> wrote:
>
> Just wanted to start a discussion on the issue I hit yesterday when
> implementing volume usage metering (
> https://blueprints.launchpad.net/nova/+spec/volume-usage-metering).
> Here is what I found:
>
> I created a periodic task in nova/compute/manager.py; the function looks
> like this:
>
>     @manager.periodic_task
>     def _poll_volume_usage(self, context, start_time=None, stop_time=None):
>         if not start_time:
>             start_time = utils.last_completed_audit_period()[1]
>
>         curr_time = time.time()
>         if curr_time - self._last_vol_usage_poll > FLAGS.volume_poll_interval:
>             self._last_vol_usage_poll = curr_time
>             LOG.info(_("Updating volume usage cache"))
>
>             volumes = self.volume_api.get_all(context)
>             try:
>                 vol_usage = self.driver.get_all_volume_usage(context, volumes,
>                         start_time, stop_time)
>             except NotImplementedError:
>                 return
>
>             for usage in vol_usage:
>                 self.db.vol_usage_update(context, usage['volume'], start_time,
>                                          usage['rd_req'], usage['rd_bytes'],
>                                          usage['wr_req'], usage['wr_bytes'])
>
>
>
>
> The issue is with the call “self.volume_api.get_all(context)”. In my
> blueprint I was going to use self.db.volume_get_all_by_host(ctxt,
> self.host), but because the nova db doesn’t contain volume info anymore, we
> need to talk to cinder to get the info.
> _poll_volume_usage is passed an admin context when it runs. The admin
> context doesn’t contain a service_catalog or any authentication info from
> Keystone, basically just a context that says is_admin=True.
> So when “self.volume_api.get_all(context)” is called passing in the admin
> context, the instantiation of a python cinder client fails, as the service
> catalog is empty.
> Even if the service catalog were populated, I still think we’d fail, as we
> wouldn’t have an auth token to talk to cinder.
>
>
> 2012-08-01 14:05:01 ERROR nova.manager [-] Error during ComputeManager._poll_volume_usage: 'NoneType' object is not iterable
> 2012-08-01 14:05:01 TRACE nova.manager Traceback (most recent call last):
> 2012-08-01 14:05:01 TRACE nova.manager   File "/opt/stack/nova/nova/manager.py", line 173, in periodic_tasks
> 2012-08-01 14:05:01 TRACE nova.manager     task(self, context)
> 2012-08-01 14:05:01 TRACE nova.manager   File "/opt/stack/nova/nova/compute/manager.py", line 2639, in _poll_volume_usage
> 2012-08-01 14:05:01 TRACE nova.manager     volumes = self.volume_api.get_all(context)
> 2012-08-01 14:05:01 TRACE nova.manager   File "/opt/stack/nova/nova/volume/cinder.py", line 125, in get_all
> 2012-08-01 14:05:01 TRACE nova.manager     items = cinderclient(context).volumes.list(detailed=True)
> 2012-08-01 14:05:01 TRACE nova.manager   File "/opt/stack/python-cinderclient/cinderclient/service_catalog.py", line 53, in url_for
> 2012-08-01 14:05:01 TRACE nova.manager     for service in catalog:
> 2012-08-01 14:05:01 TRACE nova.manager TypeError: 'NoneType' object is not iterable
> 2012-08-01 14:05:01 TRACE nova.manager
>
>
> One possible solution to this is to have a cinder admin user with read-only
> access (read all data from the cinder db) created in Keystone. Glance does
> something very similar when talking to Swift: it keeps the user credentials
> in a conf file on the glance nodes.
> So before we call “self.volume_api.get_all(context)”, we can generate a
> cinder admin context using the info in a conf file.
> I think this could be done in another blueprint, as I feel there are other
> use cases where a cinder admin user is required (any cinder auditing in
> nova would require the admin user).
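>
> To make the idea concrete, a minimal sketch of what that could look like.
> The flag names and conf plumbing are assumptions, not existing nova code;
> the client call mirrors python-cinderclient's v1 Client:
>
>     # Hypothetical flags, e.g. read from a conf file on the compute node:
>     #   cinder_admin_user, cinder_admin_password,
>     #   cinder_admin_tenant_name, cinder_auth_url
>     from cinderclient.v1 import client as cinder_client
>
>     def cinderclient_for_audit(flags):
>         """Build a cinderclient authenticated as a dedicated read-only
>         admin user, instead of relying on the token-less admin context
>         handed to periodic tasks."""
>         return cinder_client.Client(flags.cinder_admin_user,
>                                     flags.cinder_admin_password,
>                                     flags.cinder_admin_tenant_name,
>                                     flags.cinder_auth_url)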
>
> For now, to make progress on my implementation, I propose just adding
> stats collection on detach and not implementing the periodic task.
>
> This would be volume usage metering part 1, say, as a proof of concept;
> once a general consensus/implementation is reached around the cinder admin
> user, I can implement the periodic task as the next part.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

