[openstack-dev] [nova] upgrade connection_info when Ceph mon IP changed

zhou.bin9 at zte.com.cn zhou.bin9 at zte.com.cn
Tue May 17 01:39:55 UTC 2016

Hi all: 

      I ran into a problem described in 
and my colleague hit a similar one described in 
Both are about the storage backend IP changing. With storage backends 
such as Ceph and also IP SAN, 
when the backend's IP changes, the volumes attached to VMs become 
unavailable.  Previously 
I proposed auto-checking the consistency between the IP recorded in 
nova's bdm table and the storage backend, which was 
submitted in https://review.openstack.org/#/c/289813/. 
     Reviewers pointed out that it wastes performance in the normal case 
and that a periodic task is not a good place 
to do this checking. I agree with that suggestion, but the bug keeps 
troubling me and my 
colleagues. 
     I think we could instead add an option to the nova API, such as "nova 
reboot --refresh-conn", 
to manually refresh the VM's bdm info when the problem occurs. The 
"--refresh-conn" flag would be parsed and passed down to 
the "reboot_instance" function in nova-compute. Without the 
auto-checking, this would be more flexible and efficient. 
I need all of your valued opinions and would appreciate hearing from you. 
The fake code is like this in nova-compute: 
     def reboot_instance(self, context, instance, block_device_info, 
                         reboot_type, refresh_conn=False): 
         """Reboot an instance on this host.""" 
         if refresh_conn: 
             # Re-fetch connection_info from the backend so stale 
             # IPs in nova's bdm table are replaced before reboot. 
             self._refresh_volume_connections(context, instance) 
         block_device_info = self._get_instance_block_device_info(context, 
                                                                  instance) 

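To illustrate the refresh step itself, here is a self-contained sketch. FakeVolumeAPI, FakeBDM, and refresh_connection_info are made-up stand-ins for this example only; in real nova-compute the fresh connection_info would come from Cinder's initialize_connection, which for an RBD backend carries the current mon IPs:

```python
import json


class FakeBDM:
    """Stand-in for a row in nova's block_device_mapping table."""
    def __init__(self, volume_id, connection_info):
        self.volume_id = volume_id
        self.connection_info = connection_info

    def save(self):
        pass  # the real object would persist back to the bdm table


class FakeVolumeAPI:
    """Stand-in for the Cinder volume API."""
    def initialize_connection(self, volume_id, connector):
        # Returns fresh connection info containing the backend's
        # current IPs (here a hard-coded example value).
        return {"driver_volume_type": "rbd",
                "data": {"hosts": ["10.0.0.5"], "ports": ["6789"]}}


def refresh_connection_info(volume_api, bdms, connector):
    # Replace each volume's stale connection_info with a fresh one
    # from the backend, then persist it.
    for bdm in bdms:
        new_info = volume_api.initialize_connection(bdm.volume_id,
                                                    connector)
        bdm.connection_info = json.dumps(new_info)
        bdm.save()


stale = json.dumps({"driver_volume_type": "rbd",
                    "data": {"hosts": ["10.0.0.1"], "ports": ["6789"]}})
bdm = FakeBDM("vol-1", stale)
refresh_connection_info(FakeVolumeAPI(), [bdm], connector={})
print(json.loads(bdm.connection_info)["data"]["hosts"])  # ['10.0.0.5']
```

The point is that the refresh only runs when the operator asks for it, so the normal reboot path pays no extra cost.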

 Thank you. 

related links are as follows:
