[openstack-dev] [Nova] Regarding deleting snapshot when instance is OFF

Deepak Shetty dpkshetty at gmail.com
Wed Apr 8 18:01:40 UTC 2015

    Cinder w/ GlusterFS backend is hitting the below error as part of the
test_volume_boot_pattern tempest testcase (at the end of the testcase,
when it deletes the snapshot):


lib/python2.7/dist-packages/libvirt.py", line 792, in blockRebase
2015-04-08 07:22:44.376 32701 TRACE nova.virt.libvirt.driver     if ret == -1:
2015-04-08 07:22:44.376 32701 TRACE nova.virt.libvirt.driver         raise libvirtError('virDomainBlockRebase() failed', dom=self)
2015-04-08 07:22:44.376 32701 TRACE nova.virt.libvirt.driver libvirtError: Requested operation is not valid: domain is not running
2015-04-08 07:22:44.376 32701 TRACE nova.virt.libvirt.driver
More details in the LP bug [1]

Looking closely at the testcase: it waits for the instance to turn OFF,
after which cleanup starts and tries to delete the snapshot. Since the
cinder volume is in the attached state (in-use), Cinder lets Nova take
control of the snapshot delete operation, and Nova fails because it cannot
do blockRebase while the domain is not running.


1) Is this a valid scenario to test? Some say yes; I am not sure, since
the test makes sure the instance is OFF before the snapshot is deleted,
and this doesn't work for fs-backed drivers because they use
hypervisor-assisted snapshots, which need the domain to be active.

2) If this is a valid scenario, does it mean libvirt.py in Nova should be
modified NOT to raise an error, but to continue with the snapshot delete
(as if the volume were not attached) and take care of the domain XML (so
that the domain is still bootable after snapshot deletion)? Is that the
way to go?
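To make the idea in 2) concrete, here is a minimal sketch (hypothetical code, not actual Nova code) of what such a fallback could look like: if the domain is active, use the existing hypervisor-assisted blockRebase path; if it is shut off, rebase the image chain directly with qemu-img instead of raising. The function name `merge_snapshot`, the `run` parameter, and the paths are all assumptions for illustration only, and updating the domain XML afterwards is left out.

```python
import subprocess

def merge_snapshot(dom, disk_path, backing_path, run=subprocess.check_call):
    """Merge the active snapshot image back into its backing file.

    Hypothetical sketch, not actual Nova code. `dom` is a libvirt
    domain object (only isActive() and blockRebase() are used); the
    offline fallback shells out to qemu-img via `run`.
    """
    if dom.isActive():
        # Online path: hypervisor-assisted merge via libvirt/QEMU.
        dom.blockRebase(disk_path, backing_path, 0, 0)
        return "online"
    # Offline path: virDomainBlockRebase() would fail here with
    # "Requested operation is not valid: domain is not running",
    # so rebase the qcow2 chain directly. The domain XML would then
    # need to be fixed up so the domain is still bootable.
    run(["qemu-img", "rebase", "-b", backing_path, disk_path])
    return "offline"
```

The open question remains whether Nova should own this offline path at all, or whether the testcase should not delete the snapshot of an in-use volume while the instance is OFF.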

Appreciate suggestions/comments



[1]: https://bugs.launchpad.net/cinder/+bug/1441050
