[Openstack-security] [Bug 1419577] Related fix merged to nova (master)
OpenStack Infra
1419577 at bugs.launchpad.net
Tue Oct 18 04:45:54 UTC 2016
Reviewed: https://review.openstack.org/342111
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b83cae02ece4c338e09c3606c6ae69b715bd6f8c
Submitter: Jenkins
Branch: master
commit b83cae02ece4c338e09c3606c6ae69b715bd6f8c
Author: Lee Yarwood <lyarwood at redhat.com>
Date: Thu Jul 14 11:53:09 2016 +0100
block_device: Make refresh_conn_infos py3 compatible
Also add a simple test ensuring that refresh_connection_info is called
for each DriverVolumeBlockDevice derived device provided.
Related-Bug: #1419577
Partially-Implements: blueprint goal-python35
Change-Id: Ib1ff00e7f4f5b599317d7111c322ce9af8a9a2b1
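For context: a typical source of this kind of py3 incompatibility is that map() became lazy in Python 3, so a map() call made only for its side effects silently does nothing. A minimal sketch of that pitfall, assuming the refresh helper iterates with map(); the class below is an illustrative stand-in, not nova's actual block_device code:
    import operator

    class FakeVolumeDevice(object):
        # Stand-in for a DriverVolumeBlockDevice-derived device.
        def __init__(self):
            self.refreshed = False

        def refresh_connection_info(self):
            self.refreshed = True

    devices = [FakeVolumeDevice(), FakeVolumeDevice()]

    # Python 2 style: map() runs eagerly and every device is refreshed.
    # Python 3: map() returns a lazy iterator, so nothing happens.
    map(operator.methodcaller('refresh_connection_info'), devices)
    print([d.refreshed for d in devices])   # [False, False] on Python 3

    # Py3-safe: iterate explicitly so the side effect always happens.
    for device in devices:
        device.refresh_connection_info()
    print([d.refreshed for d in devices])   # [True, True]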
--
You received this bug notification because you are a member of OpenStack
Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1419577
Title:
when live-migrate failed, lun-id couldn't be rolled back in Havana
Status in OpenStack Compute (nova):
In Progress
Status in OpenStack Security Advisory:
Won't Fix
Bug description:
Hi, guys,
When a live migration fails with an error, the lun-id in the connection_info column of Nova's block_device_mapping table is not rolled back,
and the failed VM can end up with access to another instance's volume.
My test environment is as follows:
OpenStack Version: Havana (2013.2.3)
Compute Node OS: 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Compute Node multipath: multipath-tools 0.4.9-3ubuntu7.2
The test steps are:
1) Create 2 compute nodes (host#1 and host#2).
2) Create 1 VM on host#1 (vm01).
3) Create 1 Cinder volume (vol01).
4) Attach the volume to vm01 (/dev/vdb).
5) Live-migrate vm01 from host#1 to host#2.
6) The live migration succeeds.
- Check the mapper on host#1 with the multipath command (# multipath -ll); you will find the mapper was not deleted
and the status of its devices is "failed faulty".
- Check the lun-id of vol01.
7) Live-migrate vm01 back from host#2 to host#1 (vm01 was migrated to host#2 at step 5).
8) The live migration fails.
- Check the mapper on host#1.
- Check the lun-id of vol01; you will find the LUN has "two" igroups.
- Check the connection_info column in Nova's block_device_mapping table; you will find the lun-id was not rolled back (see the sketch after this list).
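To make the connection_info check above concrete: the column holds a JSON blob whose data section typically carries the LUN in a field such as target_lun for iSCSI/FC attachments (the field names and values below are illustrative, not guaranteed for every backend driver). A minimal sketch of comparing the stored lun-id before and after a failed migration:
    import json

    # Example connection_info JSON as it might appear in the
    # block_device_mapping table; values are made up for illustration.
    connection_info_before = '''{
        "driver_volume_type": "iscsi",
        "data": {"target_iqn": "iqn.1992-08.com.example:sn.1", "target_lun": 0}
    }'''
    connection_info_after = '''{
        "driver_volume_type": "iscsi",
        "data": {"target_iqn": "iqn.1992-08.com.example:sn.1", "target_lun": 3}
    }'''

    def lun_id(connection_info):
        """Return the LUN id recorded in a connection_info JSON blob."""
        return json.loads(connection_info)['data'].get('target_lun')

    # After a failed live migration the stored lun-id should still match the
    # pre-migration value; here it does not, which is the bug described above.
    print(lun_id(connection_info_before), lun_id(connection_info_after))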
This bug is a critical security issue because the failed VM can have
access to another instance's volume.
Every backend storage driver used by cinder-volume can hit the same problem,
because the bug is in live migration's rollback process.
I suggest the following methods to solve the issue (an illustrative sketch of method 2 follows the list):
1) When a live migration completes, Nova should delete the mapper devices on the origin host.
2) When a live migration fails, Nova should roll back the lun-id in the connection_info column.
3) When a live migration fails, Cinder should delete the mapping between the LUN and the host (NetApp: igroup, EMC: storage_group, ...).
4) When a volume attach is requested, vendors' Cinder volume drivers should assign lun-ids randomly to reduce the probability of mis-mapping.
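A minimal sketch of what method 2 could look like, assuming a dict-like BDM row and hypothetical stand-ins for the real migration and refresh calls; this is an illustration of the rollback idea, not a proposed nova patch:
    import copy

    def live_migrate_with_rollback(bdm, do_migration, refresh_connection_info):
        """Hypothetical wrapper: restore connection_info if migration fails.

        `bdm` is assumed to be a dict-like row with a 'connection_info' entry;
        `do_migration` and `refresh_connection_info` are stand-ins for the
        real nova/cinder calls.
        """
        saved_connection_info = copy.deepcopy(bdm['connection_info'])
        try:
            # The migration path may rewrite connection_info for the
            # destination host before the migration itself completes.
            refresh_connection_info(bdm)
            do_migration()
        except Exception:
            # Roll back to the pre-migration mapping so the VM cannot end up
            # pointing at someone else's LUN.
            bdm['connection_info'] = saved_connection_info
            raise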
Please check this bug.
Thank you.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1419577/+subscriptions