<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:Wingdings;
panose-1:5 0 0 0 0 0 0 0 0 0;}
@font-face
{font-family:PMingLiU;
panose-1:2 2 5 0 0 0 0 0 0 0;}
@font-face
{font-family:PMingLiU;
panose-1:2 2 5 0 0 0 0 0 0 0;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:PMingLiU;
panose-1:2 2 5 0 0 0 0 0 0 0;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";
mso-fareast-language:ZH-TW;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph
{mso-style-priority:34;
margin-top:0cm;
margin-right:0cm;
margin-bottom:0cm;
margin-left:36.0pt;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";
mso-fareast-language:ZH-TW;}
span.EmailStyle17
{mso-style-type:personal-compose;
font-family:"Calibri","sans-serif";
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri","sans-serif";
mso-fareast-language:ZH-TW;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:72.0pt 90.0pt 72.0pt 90.0pt;}
div.WordSection1
{page:WordSection1;}
/* List Definitions */
@list l0
{mso-list-id:930043955;
mso-list-type:hybrid;
mso-list-template-ids:1435415564 -1977055562 67698691 67698693 67698689 67698691 67698693 67698689 67698691 67698693;}
@list l0:level1
{mso-level-start-at:2015;
mso-level-number-format:bullet;
mso-level-text:-;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Calibri","sans-serif";
mso-fareast-font-family:PMingLiU;
mso-bidi-font-family:"Times New Roman";}
@list l0:level2
{mso-level-number-format:bullet;
mso-level-text:o;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Courier New";}
@list l0:level3
{mso-level-number-format:bullet;
mso-level-text:\F0A7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Wingdings;}
@list l0:level4
{mso-level-number-format:bullet;
mso-level-text:\F0B7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Symbol;}
@list l0:level5
{mso-level-number-format:bullet;
mso-level-text:o;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Courier New";}
@list l0:level6
{mso-level-number-format:bullet;
mso-level-text:\F0A7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Wingdings;}
@list l0:level7
{mso-level-number-format:bullet;
mso-level-text:\F0B7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Symbol;}
@list l0:level8
{mso-level-number-format:bullet;
mso-level-text:o;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Courier New";}
@list l0:level9
{mso-level-number-format:bullet;
mso-level-text:\F0A7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Wingdings;}
ol
{margin-bottom:0cm;}
ul
{margin-bottom:0cm;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal">I am developing a cinder plugin and I recently ran into a problem while running the cinder driver's certification test ( devstack/driver_certs/cinder_driver_cert.sh )<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Here is what happened, along with my understanding of the problem (correct me if I am wrong):<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">From the test logs I can see the following information:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">{2} tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_list_get_volume_attachments [238.283153s] ... FAILED<o:p></o:p></p>
<p class="MsoNormal">…..<o:p></o:p></p>
<p class="MsoNormal">Captured traceback:<o:p></o:p></p>
<p class="MsoNormal">~~~~~~~~~~~~~~~~~~~<o:p></o:p></p>
<p class="MsoNormal"> Traceback (most recent call last):<o:p></o:p></p>
<p class="MsoNormal"> File "tempest/api/compute/volumes/test_attach_volume.py", line 46, in _detach<o:p></o:p></p>
<p class="MsoNormal"> self.volumes_client.wait_for_volume_status(volume_id, 'available')<o:p></o:p></p>
<p class="MsoNormal"> File "tempest/services/volume/json/volumes_client.py", line 182, in wait_for_volume_status<o:p></o:p></p>
<p class="MsoNormal"> raise exceptions.TimeoutException(message)<o:p></o:p></p>
<p class="MsoNormal"> TimeoutException: Request timed out<o:p></o:p></p>
<p class="MsoNormal"> Details: Volume <b>bfcd0dd4-2abd-4c06-b8b8-e05d77306d6a</b> failed to reach available status (current: detaching) within the required time (196 s).<o:p></o:p></p>
<p class="MsoNormal"> Traceback (most recent call last):<o:p></o:p></p>
<p class="MsoNormal"> _StringException: Empty attachments:<o:p></o:p></p>
<p class="MsoNormal"> stderr<o:p></o:p></p>
<p class="MsoNormal"> stdout<o:p></o:p></p>
<p class="MsoNormal">
<o:p></o:p></p>
<p class="MsoNormal">When I check what the test case actually does, I see the following:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">44 if self.attachment:<o:p></o:p></p>
<p class="MsoNormal">45 self.servers_client.detach_volume(server_id, volume_id)<o:p></o:p></p>
<p class="MsoNormal">46 self.volumes_client.wait_for_volume_status(volume_id, 'available')<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">So the volume failed to detach within the timeout period.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">To put it simply, the failing test does the following:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">1. Attach a volume<o:p></o:p></p>
<p class="MsoNormal">2. Wait for the volume to be attached ( by checking that the volume status is 'in-use' )<o:p></o:p></p>
<p class="MsoNormal">3. Detach the volume<o:p></o:p></p>
<p class="MsoNormal">4. Wait for the volume to be detached ( by checking that the volume status is 'available' )<o:p></o:p></p>
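The four steps above boil down to a status-polling loop against the cinder API. A minimal sketch of what the wait amounts to (the function and client names here are illustrative stand-ins, not the actual tempest code):

```python
import time


def wait_for_volume_status(get_volume, volume_id, expected,
                           timeout=196, interval=1):
    """Poll a volume until it reaches `expected` or the timeout expires.

    `get_volume` stands in for the volumes client's GET call and must
    return a dict with a 'status' key.
    """
    status = None
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_volume(volume_id)['status']
        if status == expected:
            return status
        time.sleep(interval)
    raise TimeoutError(
        "Volume %s failed to reach %s status (current: %s) "
        "within the required time (%s s)." %
        (volume_id, expected, status, timeout))
```

The "196 s" in the failure message is exactly such a timeout expiring while the volume is stuck in 'detaching'.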
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">In nova-compute's log I see an exception that appears to be the cause of this timeout. It is raised when the detach request is received by nova-compute:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">1226617609] Exception during message handling: <type 'NoneType'> can't be decoded<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher executor_callback))<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher executor_callback)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/nova/nova/compute/manager.py", line 414, in decorated_function<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher return function(self, context, *args, **kwargs)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/nova/nova/exception.py", line 88, in wrapped<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher payload)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in __exit__<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/nova/nova/exception.py", line 71, in wrapped<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher return f(self, context, *args, **kw)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/nova/nova/compute/manager.py", line 298, in decorated_function<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher pass<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in __exit__<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/nova/nova/compute/manager.py", line 283, in decorated_function<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher return function(self, context, *args, **kwargs)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/nova/nova/compute/manager.py", line 326, in decorated_function<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher kwargs['instance'], e, sys.exc_info())<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in __exit__<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/nova/nova/compute/manager.py", line 314, in decorated_function<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher return function(self, context, *args, **kwargs)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/nova/nova/compute/manager.py", line 4730, in detach_volume<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher self._detach_volume(context, instance, bdm)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/nova/nova/compute/manager.py", line 4671, in _detach_volume<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher connection_info = jsonutils.loads(bdm.connection_info)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py", line 213, in loads<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher return json.loads(encodeutils.safe_decode(s, encoding), **kwargs)<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_utils/encodeutils.py", line 33, in safe_decode<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher raise TypeError("%s can't be decoded" % type(text))<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher TypeError: <type 'NoneType'> can't be decoded<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:01.151 10931 TRACE oslo_messaging.rpc.dispatcher<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">When I run only this specific test case, the problem does not occur.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Hence I added some log statements in nova-compute to find the difference between the two executions. From those I can see that the connection_info field is None when the error is triggered.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">The following log line is printed when nova receives the detach request ( just before parsing the connection_info field ):<o:p></o:p></p>
<p class="MsoNormal">2015-02-26 13:50:00.572 AUDIT nova.compute.manager [req-9930623e-c073-4c5b-bcaf-f4098245277f AttachVolumeTestJSON-1822846528 AttachVolumeTestJSON-1226617609] print connection info : BlockDeviceMapping(boot_index=None,<b>connection_info=None</b>,created_at=2015-02-26T05:49:36Z,delete_on_termination=False,deleted=False,deleted_at=None,destination_type='volume',device_name='/dev/vdb',device_type=None,disk_bus=None,guest_format=None,id=166,image_id=None,instance=<?>,instance_uuid=9aeea261-30af-46eb-9fbb-53d03b65b43f,no_device=False,snapshot_id=None,source_type='volume',updated_at=None,volume_id='bfcd0dd4-2abd-4c06-b8b8-e05d77306d6a',volume_size=None)<o:p></o:p></p>
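As a side note, the crash site ( jsonutils.loads(bdm.connection_info) ) could fail with a clearer message when the field is None. A hypothetical guard, shown only to illustrate the failure mode, not actual nova code:

```python
import json


def load_connection_info(raw, volume_id):
    """Decode a BDM's connection_info, failing with a descriptive error
    instead of the bare "NoneType can't be decoded" TypeError.

    `raw` mirrors bdm.connection_info as read from the database.
    """
    if raw is None:
        raise ValueError(
            "connection_info for volume %s is not yet saved; "
            "attach may still be in progress" % volume_id)
    return json.loads(raw)
```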
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Then I tried to figure out why this field was None and went back to the previous step: attaching the volume. From my basic understanding, when nova attaches a volume, it updates two things ( source: nova/virt/block_device.py ):<o:p></o:p></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l0 level1 lfo1"><![if !supportLists]><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span><![endif]>1 The volume status ( in cinder’s database )<o:p></o:p></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l0 level1 lfo1"><![if !supportLists]><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span><![endif]>2 The connection_info ( after the call to virt_driver.attach_volume )<o:p></o:p></p>
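These two updates are not atomic, and that gap is where the race opens up: cinder reports the volume as 'in-use' before nova persists connection_info, so a detach request arriving in that window reads connection_info=None. A toy model of the ordering (all names here are illustrative; the dicts stand in for cinder's volume table and nova's block_device_mapping table):

```python
import json

# Stand-ins for the two database records involved.
volume = {'status': 'attaching'}
bdm = {'connection_info': None}


def attach_step1_mark_in_use():
    # cinder side: the test's poll loop sees 'in-use' from this point on
    volume['status'] = 'in-use'


def attach_step2_save_connection_info():
    # nova side: persisted only after virt_driver.attach_volume returns
    bdm['connection_info'] = '{"driver_volume_type": "disco"}'


def detach():
    # mirrors _detach_volume: decoding None raises TypeError
    return json.loads(bdm['connection_info'])
```

With the current ordering, a detach issued between step 1 and step 2 fails exactly like the nova-compute traceback above.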
<p class="MsoNormal">From the logs in cinder ( c-api.log ):<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">The first time the volume status appears as 'in-use' is at 13:49:59.491 in cinder-api's log:<o:p></o:p></p>
<p class="MsoNormal">2-26 13:49:59.491 31396 INFO cinder.api.v1.volumes [req-dc56cd34-8ae3-42b0-8c35-8c01cd142955 690795b5c8c74e4fb6d292eeebef12d5 b49a40aa157e460592f3e3ac9a9f16eb - - -] vol={'migration_status': None, 'provider_id': None, 'availability_zone':
u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2015, 2, 26, 5, 49, 58), 'provider_geometry': None, 'replication_extended_status': None, 'replication_status': u'disabled', 'snapshot_id': None, 'ec2_id': None, 'mountpoint': u'/dev/vdb', 'deleted_at':
None, 'id': u'bfcd0dd4-2abd-4c06-b8b8-e05d77306d6a', 'size': 1L, 'user_id': u'690795b5c8c74e4fb6d292eeebef12d5', 'attach_time': u'2015-02-26T05:49:52.128113', 'attached_host': None, 'display_description': None, 'volume_admin_metadata': u'[<cinder.db.sqlalchemy.models.VolumeAdminMetadata
object at 0x7fa32ed8cd10>, <cinder.db.sqlalchemy.models.VolumeAdminMetadata object at 0x7fa32ef789d0>]', 'project_id': u'b49a40aa157e460592f3e3ac9a9f16eb', 'launched_at': datetime.datetime(2015, 2, 26, 5, 49, 35), 'scheduled_at': datetime.datetime(2015, 2,
26, 5, 49, 33), 'status': u'in-use', 'volume_type_id': None, 'deleted': False, 'provider_location': u'999993877', 'host': u'cwisr3s18#disco', 'consistencygroup_id': None, 'source_volid': None, 'provider_auth': None, 'display_name': u'test', 'instance_uuid':
u'9aeea261-30af-46eb-9fbb-53d03b65b43f', 'bootable': False, 'created_at': datetime.datetime(2015, 2, 26, 5, 49, 32), 'attach_status': u'attached', 'volume_type': None, 'consistencygroup': None, 'volume_metadata': [], '_name_id': None, 'encryption_key_id':
None, 'replication_driver_data': None, 'metadata': {u'readonly': u'False', u'attached_mode': u'rw'}}<o:p></o:p></p>
<p class="MsoListParagraph"><o:p> </o:p></p>
<p class="MsoNormal">This matches, approximately, the time at which the test script observed the in-use status:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">2015-02-26 13:49:59,558 31513 DEBUG [tempest_lib.common.rest_client] Request (AttachVolumeTestJSON:test_list_get_volume_attachments): 200 GET http://10.203.18.1:8776/v1/b49a40aa157e460592f3e3ac9a9f16eb/volumes/bfcd0dd4-2abd-4c06-b8b8-e05d77306d6a
0.099s<o:p></o:p></p>
<p class="MsoNormal"> Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}<o:p></o:p></p>
<p class="MsoNormal"> Body: None<o:p></o:p></p>
<p class="MsoNormal"> Response - Headers: {'status': '200', 'content-length': '748', 'content-location': 'http://10.203.18.1:8776/v1/b49a40aa157e460592f3e3ac9a9f16eb/volumes/bfcd0dd4-2abd-4c06-b8b8-e05d77306d6a', 'x-compute-request-id': 'req-dc56cd34-8ae3-42b0-8c35-8c01cd142955',
'connection': 'close', 'date': 'Thu, 26 Feb 2015 05:49:59 GMT', 'content-type': 'application/json', 'x-openstack-request-id': 'req-dc56cd34-8ae3-42b0-8c35-8c01cd142955'}<o:p></o:p></p>
<p class="MsoNormal"> Body: {"volume": {"status": "in-use", "display_name": "test", "attachments": [{"host_name": null, "device": "/dev/vdb", "server_id": "9aeea261-30af-46eb-9fbb-53d03b65b43f", "id": "bfcd0dd4-2abd-4c06-b8b8-e05d77306d6a", "volume_id":
"bfcd0dd4-2abd-4c06-b8b8-e05d77306d6a"}], "availability_zone": "nova", "bootable": "false", "encrypted": false, "created_at": "2015-02-26T05:49:32.000000", "os-vol-tenant-attr:tenant_id": "b49a40aa157e460592f3e3ac9a9f16eb", "os-volume-replication:driver_data":
null, "display_description": null, "os-volume-replication:extended_status": null, "volume_type": null, "snapshot_id": null, "source_volid": null, "metadata": {"readonly": "False", "attached_mode": "rw"}, "id": "bfcd0dd4-2abd-4c06-b8b8-e05d77306d6a", "size":
1}}<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">After that, the test script calls detach volume and we hit the error mentioned above.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Additionally, I added a log statement at the very last line of the attach function in nova/virt/block_device.py in order to check the connection_info:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"> @update_db<o:p></o:p></p>
<p class="MsoNormal"> def attach(self, context, instance, volume_api, virt_driver,<o:p></o:p></p>
<p class="MsoNormal"> do_check_attach=True, do_driver_attach=False):<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">From nova-compute's ( n-cpu ) log I can see that this log line appears at:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">2015-02-26 13:50:02.003 AUDIT nova.virt.block_device [req-fd8124ca-d608-44a0-a1b9-7cb933fd840f AttachVolumeTestJSON-1822846528 AttachVolumeTestJSON-1226617609] test thelo 7 {u'driver_volume_type': u'disco', 'serial': u'bfcd0dd4-2abd-4c06-b8b8-e05d77306d6a',
u'data': {'device_path': u'/dev/dms999993877', u'access_mode': u'rw', u'disco_id': u'999993877', u'qos_specs': None, u'name': u'volume-bfcd0dd4-2abd-4c06-b8b8-e05d77306d6a'}}<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">This means that detach_volume is called before the attach function actually finishes and updates the connection_info field in the database.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">To confirm this theory, I added a "sleep" before detaching the volume in the tempest test and ran it again. This time the test passed.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Does this reasoning seem correct? If so, is this a bug, and shouldn't we guarantee that the connection_info is saved to the database before we update the volume status?<o:p></o:p></p>
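One way to provide such a guarantee would be to persist the BDM's connection_info before the volume is reported as 'in-use'. A sketch of that reordering, under the assumption that the two steps can be sequenced this way; every name here is a stand-in for the real nova/cinder calls, not actual code:

```python
def attach_with_ordering_guarantee(bdm, volume, connect,
                                   save_bdm, mark_in_use):
    """Hypothetical attach flow: save connection_info to the database
    *before* flipping the volume status, so any detach that observes
    'in-use' is guaranteed to find a usable connection_info.
    """
    connection_info = connect(volume)   # e.g. virt_driver.attach_volume
    bdm['connection_info'] = connection_info
    save_bdm(bdm)                       # persist the BDM first ...
    mark_in_use(volume)                 # ... then let cinder say 'in-use'
    return bdm
```

With this ordering, the window in which a detach can read connection_info=None is closed, at the cost of the status flip happening slightly later.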
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Thanks for taking the time to read this,<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Thelo<o:p></o:p></p>
</div>
</body>
</html>
This email may contain confidential information. Please do not use or disclose it in any way and delete it if you are not the intended recipient.