[openstack-dev] Cinder: Whats the way to do cleanup during service shutdown / restart ?
Deepak Shetty
dpkshetty at gmail.com
Thu Apr 3 14:14:12 UTC 2014
Hi,
I am looking to unmount the glusterfs shares that are mounted as part of
the gluster driver when c-vol is being restarted or Ctrl-C'ed (as in a
devstack env) or when the c-vol service is being shut down.
I tried using __del__ in GlusterfsDriver(nfs.RemoteFsDriver), and it didn't
work:
    def __del__(self):
        LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s") %
                 self._mounted_shares)
        for share in self._mounted_shares:
            mount_path = self._get_mount_point_for_share(share)
            command = ['umount', mount_path]
            self._do_umount(command, True, share)
Note that self._mounted_shares is defined in the base class
(RemoteFsDriver). Here is what I see when I Ctrl-C the c-vol service:
^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-] Caught SIGINT, stopping children
2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-] Waiting on 2 children to exit
2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-] Child 30185 exited with status 1
2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-] DPKS: Inside __del__ Hurray!, shares=[]
2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-] Child 30186 exited with status 1
Exception TypeError: "'NoneType' object is not callable" in <bound method GlusterfsDriver.__del__ of <cinder.volume.drivers.glusterfs.GlusterfsDriver object at 0x2777ed0>> ignored
[stack@devstack-vm tempest]$
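I suspect the trailing "Exception TypeError ... ignored" message is the
classic Python 2 interpreter-teardown symptom: at exit, module globals
whose names begin with an underscore (like the gettext alias _) are
cleared to None before the remaining objects are finalized, so calling
_(...) from __del__ blows up. A minimal Python 2 sketch that should
reproduce it (my assumption, not verified against c-vol itself):

    # repro.py -- run with python2
    _ = lambda s: s   # stands in for the gettext alias used in the driver

    class Driver(object):
        def __del__(self):
            # By the time this runs at interpreter exit, the module
            # global _ has been cleared to None, so _(...) raises
            # TypeError: 'NoneType' object is not callable, which
            # Python 2 reports as "Exception TypeError ... ignored".
            print(_("cleaning up"))

    d = Driver()   # only finalized when the interpreter shuts down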
So _mounted_shares is empty ([]), which isn't right, since I have 2
glusterfs shares mounted, and when I print _mounted_shares in other parts
of the code it does show the right thing, as below.
From volume/drivers/glusterfs.py @ line 1062:
LOG.debug(_('Available shares: %s') % self._mounted_shares)
which dumps the debug print below:
2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares: [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1'] from (pid=30185) _ensure_shares_mounted /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061
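One plausible explanation for the empty list (my guess): cinder-volume
with multiple backends forks one child process per backend (hence the
"Waiting on 2 children to exit" line above), and the shares get mounted
in the children, so a driver object finalized in the parent would never
have seen them. A toy sketch of the effect:

    import os

    class Driver(object):
        def __init__(self):
            self._mounted_shares = []

    drv = Driver()
    pid = os.fork()
    if pid == 0:
        # Child: does the mounting and records it in its own copy.
        drv._mounted_shares.append('devstack-vm.localdomain:/gvol1')
        os._exit(0)
    os.waitpid(pid, 0)
    print(drv._mounted_shares)   # [] -- the parent's copy was never touched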
This raises a few questions (I am using a devstack env):
1) Is __del__ the right way to do cleanup in a cinder driver? I have 2
gluster backends set up, hence 2 cinder-volume instances, but I see
__del__ being called only once (per the debug prints above).
2) I tried atexit and registered a function to do the cleanup. Ctrl-C'ing
c-vol (from screen) gives the same issue: shares is empty ([]), but this
time I see my atexit handler called twice (once for each backend).
3) In general, what's the right way to do cleanup inside a cinder volume
driver when the service is going down or being restarted?
4) The solution should work in both devstack (Ctrl-C to shut down the
c-vol service) and production (where we restart the c-vol service with
something like "service restart"). One possible approach is sketched below.
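For (3) and (4), the only approach I can think of that covers both Ctrl-C
(SIGINT) and service restart (SIGTERM) is hooking the signals explicitly
instead of relying on interpreter teardown. A rough sketch, assuming a
hypothetical _umount_all() helper that walks _mounted_shares (and assuming
it doesn't fight with the handlers cinder.openstack.common.service already
installs):

    import os
    import signal

    def install_cleanup_handler(driver):
        def _handler(signum, frame):
            try:
                driver._umount_all()   # hypothetical cleanup helper
            finally:
                # Restore the default action and re-deliver the signal so
                # the service still exits the way init/screen expects.
                signal.signal(signum, signal.SIG_DFL)
                os.kill(os.getpid(), signum)
        signal.signal(signal.SIGINT, _handler)
        signal.signal(signal.SIGTERM, _handler)

Each backend child would install its own handler, so the per-process
_mounted_shares state would still be intact when it fires (unlike the
__del__ case above). Does that sound sane, or is there a proper hook?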
Would appreciate a response
thanx,
deepak