[openstack-dev] [cinder] Volume metering in nova
O'Driscoll, Cian
cian.o-driscoll at hp.com
Thu Aug 2 11:43:10 UTC 2012
Just wanted to start a discussion on the issue I hit yesterday when implementing volume usage metering (https://blueprints.launchpad.net/nova/+spec/volume-usage-metering)
Here is what I found:
I created a periodic task in nova/compute/manager.py; the function looks like this:
@manager.periodic_task
def _poll_volume_usage(self, context, start_time=None, stop_time=None):
    if not start_time:
        start_time = utils.last_completed_audit_period()[1]

    curr_time = time.time()
    if curr_time - self._last_vol_usage_poll > FLAGS.volume_poll_interval:
        self._last_vol_usage_poll = curr_time
        LOG.info(_("Updating volume usage cache"))

        volumes = self.volume_api.get_all(context)
        try:
            vol_usage = self.driver.get_all_volume_usage(context, volumes,
                                                         start_time, stop_time)
        except NotImplementedError:
            return

        for usage in vol_usage:
            self.db.vol_usage_update(context, usage['volume'], start_time,
                                     usage['rd_req'], usage['rd_bytes'],
                                     usage['wr_req'], usage['wr_bytes'])
The issue is with the call "self.volume_api.get_all(context)". In my blueprint I had planned to use self.db.volume_get_all_by_host(ctxt, self.host), but because the nova DB no longer contains volume info, we need to talk to cinder to get it.
_poll_volume_usage is passed an admin context when it runs. That admin context doesn't contain a service_catalog or any authentication info from Keystone; it is basically just a context that says is_admin=True.
So when "self.volume_api.get_all(context)" is called with that admin context, instantiating the python cinder client fails because the service catalog is empty.
Even if the service catalog were populated, I think we'd still fail, as we wouldn't have an auth token to talk to cinder:
2012-08-01 14:05:01 ERROR nova.manager [-] Error during ComputeManager._poll_volume_usage: 'NoneType' object is not iterable
2012-08-01 14:05:01 TRACE nova.manager Traceback (most recent call last):
2012-08-01 14:05:01 TRACE nova.manager File "/opt/stack/nova/nova/manager.py", line 173, in periodic_tasks
2012-08-01 14:05:01 TRACE nova.manager task(self, context)
2012-08-01 14:05:01 TRACE nova.manager File "/opt/stack/nova/nova/compute/manager.py", line 2639, in _poll_volume_usage
2012-08-01 14:05:01 TRACE nova.manager volumes = self.volume_api.get_all(context)
2012-08-01 14:05:01 TRACE nova.manager File "/opt/stack/nova/nova/volume/cinder.py", line 125, in get_all
2012-08-01 14:05:01 TRACE nova.manager items = cinderclient(context).volumes.list(detailed=True)
2012-08-01 14:05:01 TRACE nova.manager File "/opt/stack/nova/nova/volume/cinder.py", line 45, in cinderclient
2012-08-01 14:05:01 TRACE nova.manager url = sc.url_for(service_type='volume', service_name='cinder')
2012-08-01 14:05:01 TRACE nova.manager File "/opt/stack/python-cinderclient/cinderclient/service_catalog.py", line 53, in url_for
2012-08-01 14:05:01 TRACE nova.manager for service in catalog:
2012-08-01 14:05:01 TRACE nova.manager TypeError: 'NoneType' object is not iterable
2012-08-01 14:05:01 TRACE nova.manager
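For reference, the missing catalog is easy to see on a bare admin context (this is just an illustration, not part of the patch):

from nova import context

ctxt = context.get_admin_context()
# A plain admin context carries no service catalog and no auth token,
# so cinderclient(ctxt) has neither an endpoint to look up nor
# credentials to authenticate with.
print ctxt.service_catalog, ctxt.auth_token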
One possible solution is to create a cinder admin user in Keystone with read-only access (i.e. able to read all data from the cinder DB). Glance does something very similar when talking to swift: it keeps the user credentials in a conf file on the glance nodes.
So before we call "self.volume_api.get_all(context)", we could generate a cinder admin context using the info in a conf file.
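As a rough sketch (the flag names here are made up and this is untested), the credentials could live in nova.conf and be used to build a cinder client directly, bypassing the empty service catalog of the periodic task's context:

from cinderclient.v1 import client as cinder_client
from nova import flags
from nova.openstack.common import cfg

# Hypothetical options for the read-only cinder admin user, mirroring
# what glance keeps in its conf for talking to swift.
cinder_admin_opts = [
    cfg.StrOpt('cinder_admin_user',
               help='Keystone user with read-only access to cinder'),
    cfg.StrOpt('cinder_admin_password',
               help='Password for the cinder admin user'),
    cfg.StrOpt('cinder_admin_tenant_name', default='service',
               help='Tenant the cinder admin user belongs to'),
    cfg.StrOpt('cinder_admin_auth_url',
               default='http://localhost:5000/v2.0',
               help='Keystone auth URL used to get a token for cinder'),
]

FLAGS = flags.FLAGS
FLAGS.register_opts(cinder_admin_opts)


def _cinder_admin_client():
    # Authenticate with the dedicated admin credentials instead of the
    # service catalog carried (or not) by the request context.
    return cinder_client.Client(FLAGS.cinder_admin_user,
                                FLAGS.cinder_admin_password,
                                FLAGS.cinder_admin_tenant_name,
                                FLAGS.cinder_admin_auth_url)

The periodic task could then call _cinder_admin_client().volumes.list(detailed=True) instead of going through self.volume_api.get_all(context).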
I think this could be done in a separate blueprint, as there are other use cases where a cinder admin user is required (any cinder auditing in nova would need it).
For now, to make progress on my implementation, I propose just adding stats collection on detach and leaving out the periodic task.
This would be volume usage metering part 1, as a proof of concept; once a general consensus/implementation is reached around the cinder admin user, I can implement the periodic task as a later part.
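To make the detach-only approach concrete, here is a minimal sketch (the helper name and the shape of the volume argument are my assumptions, not final code); it reuses the same driver call and DB helper as the periodic task above:

def _update_volume_usage_on_detach(self, context, volume):
    # Record the counters for this one volume at detach time.
    start_time = utils.last_completed_audit_period()[1]
    try:
        vol_usage = self.driver.get_all_volume_usage(context, [volume],
                                                     start_time)
    except NotImplementedError:
        return
    for usage in vol_usage:
        self.db.vol_usage_update(context, usage['volume'], start_time,
                                 usage['rd_req'], usage['rd_bytes'],
                                 usage['wr_req'], usage['wr_bytes'])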