[Openstack] Problems attaching san iSCSI to instances

John Griffith john.griffith at solidfire.com
Fri Dec 23 19:18:04 UTC 2011


Additional info...

It turns out that restarting open-iscsi resolves the problem.  I'm still
curious, however, as to what causes it to get into this state in the first
place.
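
For anyone who hits the same thing, the workaround is just a restart of the
initiator daemon; listing the sessions before and after confirms the stale
state.  Commands are from my notes, and the service name is the Ubuntu one,
so it may differ on other distros:

  # list active iSCSI sessions; the stale session shows up here
  sudo iscsiadm -m session

  # restart the initiator daemon (Ubuntu init script name)
  sudo service open-iscsi restart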

Thanks,
John

On Fri, Dec 23, 2011 at 12:06 PM, John Griffith <john.griffith at solidfire.com> wrote:

> Hi,
>
> I was hoping somebody might be able to help me with an issue I'm having.
>  I have some code I've written to support SolidFire iSCSI devices in
> san.py.  Things work pretty well until I attempt to attach a second volume
> to an instance running on the compute node.
>
> In other words, I can create an instance, create a volume, and attach the
> created volume; everything at this point works fine.  However, when I then
> create another instance, create another volume, and try to attach the
> second volume to the second instance, I run into issues.
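>
> For concreteness, the repro boils down to roughly this sequence
> (novaclient syntax from memory; image/flavor names are placeholders):
>
>   nova boot --image <image> --flavor 1 test-1
>   nova volume-create 1
>   nova volume-attach test-1 1 /dev/vdb    # works
>   nova boot --image <image> --flavor 1 test-2
>   nova volume-create 1
>   nova volume-attach test-2 2 /dev/vdb    # fails with the iscsiadm error below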
>
> Looking at the logs it appears that the issue is an error from iscsiadm
> (shown below):
>
> (nova.compute.manager): TRACE: Stderr: 'iscsiadm: Could not login to
> [iface: default, target: iqn.2010-01.com.solidfire:wmit.volume-00000002.51,
> portal: 192.168.69.211,3260]: \niscsiadm: initiator reported error (15
> - already exists)\n'
>
> A quick check on the compute node to see what it has for entries in
> /etc/iscsi/nodes, shows that it has created two entries as I would expect:
>
> root at devstack-1:/etc/iscsi# ls /etc/iscsi/nodes/
> iqn.2010-01.com.solidfire:wmit.volume-00000001.50
> iqn.2010-01.com.solidfire:wmit.volume-00000002.51
> root at devstack-1:/etc/iscsi#
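>
> One data point on the error itself: if I'm reading the open-iscsi source
> right, error 15 is ISCSI_ERR_SESS_EXISTS, i.e. the initiator thinks it
> already has a session up for that target, which the node records above
> wouldn't show.  Listing the sessions directly should make that visible
> (output format from memory):
>
>   sudo iscsiadm -m session
>   # one line per live session, something like:
>   # tcp: [1] 192.168.69.211:3260,1 iqn.2010-01.com.solidfire:wmit.volume-00000001.50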
>
> I'm not sure what the problem is here.  Is there something that I need to
> manually configure in my iscsid.conf file to prevent this?  Is there
> possibly something wrong with my export_volume method?  I've googled the
> error message from iscsiadm but am not having much luck.
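>
> One band-aid I'm considering in the driver is to guard the login, i.e.
> only call --login when no session for the target exists yet.  A minimal
> sketch (untested; IQN and PORTAL are placeholders):
>
>   IQN=iqn.2010-01.com.solidfire:wmit.volume-00000002.51
>   PORTAL=192.168.69.211:3260
>   # iscsiadm -m session exits non-zero when there are no sessions at all,
>   # so send its stderr to /dev/null and rely on the grep
>   if ! sudo iscsiadm -m session 2>/dev/null | grep -q "$IQN"; then
>       sudo iscsiadm -m node -T "$IQN" -p "$PORTAL" --login
>   fi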
>
> Logs are below; does anybody by chance have experience with this and know
> what I'm doing wrong?
>
> Thanks,
> John
>
>
>
> ~~~~~~~~ nova-compute.log ~~~~~~~~~
>
> 2011-12-23 10:35:53,579 DEBUG nova.virt.libvirt_conn [-] instance
> instance-00000002: is running from (pid=1090) spawn
> /opt/stack/nova/nova/virt/libvirt/connection.py:682
> 2011-12-23 10:35:53,580 DEBUG nova.compute.manager [-] Checking state of
> instance-00000002 from (pid=1090) _get_power_state
> /opt/stack/nova/nova/compute/manager.py:211
> 2011-12-23 10:35:54,245 INFO nova.virt.libvirt_conn [-] Instance
> instance-00000002 spawned successfully.
> 2011-12-23 10:36:18,773 DEBUG nova.rpc [-] received {u'_context_roles':
> [u'Admin', u'Admin', u'KeystoneAdmin', u'KeystoneServiceAdmin'],
> u'_context_request_id': u'd4ef443f-6a9d-479a-b76d-87f6f496f28c',
> u'_context_read_deleted': u'no', u'args': {u'instance_uuid':
> u'ef91a007-bccc-4b3b-a8dc-ec3fb6c3df0d', u'mountpoint': u'/dev/vdb',
> u'volume_id': u'2'}, u'_context_auth_token': u'tokentoken',
> u'_context_strategy': u'keystone', u'_context_is_admin': True,
> u'_context_project_id': u'1', u'_context_timestamp':
> u'2011-12-23T18:36:18.661390', u'_context_user_id': u'admin', u'method':
> u'attach_volume', u'_context_remote_address': u'192.168.68.11'} from
> (pid=1090) __call__ /opt/stack/nova/nova/rpc/impl_kombu.py:607
> 2011-12-23 10:36:18,781 DEBUG nova.rpc [-] unpacked context: {'user_id':
> u'admin', 'roles': [u'Admin', u'Admin', u'KeystoneAdmin',
> u'KeystoneServiceAdmin'], 'timestamp': u'2011-12-23T18:36:18.661390',
> 'auth_token': u'tokentoken', 'msg_id': None, 'remote_address':
> u'192.168.68.11', 'strategy': u'keystone', 'is_admin': True, 'request_id':
> u'd4ef443f-6a9d-479a-b76d-87f6f496f28c', 'project_id': u'1',
> 'read_deleted': u'no'} from (pid=1090) _unpack_context
> /opt/stack/nova/nova/rpc/impl_kombu.py:653
> 2011-12-23 10:36:18,782 INFO nova.compute.manager
> [d4ef443f-6a9d-479a-b76d-87f6f496f28c admin 1] check_instance_lock:
> decorating: |<function attach_volume at 0x26310c8>|
> 2011-12-23 10:36:18,783 INFO nova.compute.manager
> [d4ef443f-6a9d-479a-b76d-87f6f496f28c admin 1] check_instance_lock:
> arguments: |<nova.compute.manager.ComputeManager object at 0x24370d0>|
> |<nova.rpc.impl_kombu.RpcContext object at 0x40b0450>|
> |ef91a007-bccc-4b3b-a8dc-ec3fb6c3df0d|
> 2011-12-23 10:36:18,787 DEBUG nova.compute.manager
> [d4ef443f-6a9d-479a-b76d-87f6f496f28c admin 1] instance
> ef91a007-bccc-4b3b-a8dc-ec3fb6c3df0d: getting locked state from (pid=1090)
> get_lock /opt/stack/nova/nova/compute/manager.py:1407
> 2011-12-23 10:36:18,952 INFO nova.compute.manager
> [d4ef443f-6a9d-479a-b76d-87f6f496f28c admin 1] check_instance_lock: locked:
> |False|
> 2011-12-23 10:36:18,952 INFO nova.compute.manager
> [d4ef443f-6a9d-479a-b76d-87f6f496f28c admin 1] check_instance_lock: admin:
> |True|
> 2011-12-23 10:36:18,953 INFO nova.compute.manager
> [d4ef443f-6a9d-479a-b76d-87f6f496f28c admin 1] check_instance_lock:
> executing: |<function attach_volume at 0x26310c8>|
> 2011-12-23 10:36:19,068 AUDIT nova.compute.manager
> [d4ef443f-6a9d-479a-b76d-87f6f496f28c admin 1] instance
> ef91a007-bccc-4b3b-a8dc-ec3fb6c3df0d: attaching volume 2 to /dev/vdb
> 2011-12-23 10:36:19,129 DEBUG nova.rpc [-] Making asynchronous call on
> volume.devstack-1 ... from (pid=1090) multicall
> /opt/stack/nova/nova/rpc/impl_kombu.py:737
> 2011-12-23 10:36:19,129 DEBUG nova.rpc [-] MSG_ID is
> 56115d8b6a1a464693c29d8775597b26 from (pid=1090) multicall
> /opt/stack/nova/nova/rpc/impl_kombu.py:740
> 2011-12-23 10:36:19,704 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iscsiadm -m node -T iqn.2010-01.com.solidfire:wmit.volume-00000002.51
> -p 192.168.69.211:3260 from (pid=1090) execute
> /opt/stack/nova/nova/utils.py:201
> 2011-12-23 10:36:19,764 DEBUG nova.virt.libvirt.volume [-] iscsiadm ():
> stdout=# BEGIN RECORD 2.0-871
> node.name = iqn.2010-01.com.solidfire:wmit.volume-00000002.51
> node.tpgt = -1
> node.startup = automatic
> iface.hwaddress = <empty>
> iface.ipaddress = <empty>
> iface.iscsi_ifacename = default
> iface.net_ifacename = <empty>
> iface.transport_name = tcp
> iface.initiatorname = <empty>
> node.discovery_address = <empty>
> node.discovery_port = 0
> node.discovery_type = static
> node.session.initial_cmdsn = 0
> node.session.initial_login_retry_max = 8
> node.session.xmit_thread_priority = -20
> node.session.cmds_max = 128
> node.session.queue_depth = 32
> node.session.auth.authmethod = CHAP
> node.session.auth.username = devstack-1-1
> node.session.auth.password = ********
> node.session.auth.username_in = <empty>
> node.session.auth.password_in = <empty>
> node.session.timeo.replacement_timeout = 120
> node.session.err_timeo.abort_timeout = 15
> node.session.err_timeo.lu_reset_timeout = 20
> node.session.err_timeo.host_reset_timeout = 60
> node.session.iscsi.FastAbort = Yes
> node.session.iscsi.InitialR2T = No
> node.session.iscsi.ImmediateData = Yes
> node.session.iscsi.FirstBurstLength = 262144
> node.session.iscsi.MaxBurstLength = 16776192
> node.session.iscsi.DefaultTime2Retain = 0
> node.session.iscsi.DefaultTime2Wait = 2
> node.session.iscsi.MaxConnections = 1
> node.session.iscsi.MaxOutstandingR2T = 1
> node.session.iscsi.ERL = 0
> node.conn[0].address = 192.168.69.211
> node.conn[0].port = 3260
> node.conn[0].startup = manual
> node.conn[0].tcp.window_size = 524288
> node.conn[0].tcp.type_of_service = 0
> node.conn[0].timeo.logout_timeout = 15
> node.conn[0].timeo.login_timeout = 15
> node.conn[0].timeo.auth_timeout = 45
> node.conn[0].timeo.noop_out_interval = 5
> node.conn[0].timeo.noop_out_timeout = 5
> node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
> node.conn[0].iscsi.HeaderDigest = None
> node.conn[0].iscsi.DataDigest = None
> node.conn[0].iscsi.IFMarker = No
> node.conn[0].iscsi.OFMarker = No
> # END RECORD
>  stderr= from (pid=1090) _run_iscsiadm
> /opt/stack/nova/nova/virt/libvirt/volume.py:98
> 2011-12-23 10:36:19,766 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iscsiadm -m node -T iqn.2010-01.com.solidfire:wmit.volume-00000002.51
> -p 192.168.69.211:3260 --op update -n node.session.auth.authmethod -v
> CHAP from (pid=1090) execute /opt/stack/nova/nova/utils.py:201
> 2011-12-23 10:36:19,791 DEBUG nova.virt.libvirt.volume [-] iscsiadm
> ('--op', 'update', '-n', 'node.session.auth.authmethod', '-v', u'CHAP'):
> stdout= stderr= from (pid=1090) _run_iscsiadm
> /opt/stack/nova/nova/virt/libvirt/volume.py:98
> 2011-12-23 10:36:19,792 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iscsiadm -m node -T iqn.2010-01.com.solidfire:wmit.volume-00000002.51
> -p 192.168.69.211:3260 --op update -n node.session.auth.username -v
> devstack-1-1 from (pid=1090) execute /opt/stack/nova/nova/utils.py:201
> 2011-12-23 10:36:19,817 DEBUG nova.virt.libvirt.volume [-] iscsiadm
> ('--op', 'update', '-n', 'node.session.auth.username', '-v',
> u'devstack-1-1'): stdout= stderr= from (pid=1090) _run_iscsiadm
> /opt/stack/nova/nova/virt/libvirt/volume.py:98
> 2011-12-23 10:36:19,819 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iscsiadm -m node -T iqn.2010-01.com.solidfire:wmit.volume-00000002.51
> -p 192.168.69.211:3260 --op update -n node.session.auth.password -v
> DOMB9W14FK8R from (pid=1090) execute /opt/stack/nova/nova/utils.py:201
> 2011-12-23 10:36:19,838 DEBUG nova.virt.libvirt.volume [-] iscsiadm
> ('--op', 'update', '-n', 'node.session.auth.password', '-v',
> u'DOMB9W14FK8R'): stdout= stderr= from (pid=1090) _run_iscsiadm
> /opt/stack/nova/nova/virt/libvirt/volume.py:98
> 2011-12-23 10:36:19,843 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iscsiadm -m node -T iqn.2010-01.com.solidfire:wmit.volume-00000002.51
> -p 192.168.69.211:3260 --login from (pid=1090) execute
> /opt/stack/nova/nova/utils.py:201
> 2011-12-23 10:36:19,855 DEBUG nova.utils [-] Result was 255 from
> (pid=1090) execute /opt/stack/nova/nova/utils.py:217
> 2011-12-23 10:36:19,856 ERROR nova.compute.manager
> [d4ef443f-6a9d-479a-b76d-87f6f496f28c admin 1] instance
> ef91a007-bccc-4b3b-a8dc-ec3fb6c3df0d: attach failed /dev/vdb, removing
> (nova.compute.manager): TRACE: Traceback (most recent call last):
> (nova.compute.manager): TRACE:   File
> "/opt/stack/nova/nova/compute/manager.py", line 1510, in attach_volume
> (nova.compute.manager): TRACE:     mountpoint)
> (nova.compute.manager): TRACE:   File "/opt/stack/nova/nova/exception.py",
> line 115, in wrapped
> (nova.compute.manager): TRACE:     return f(*args, **kw)
> (nova.compute.manager): TRACE:   File
> "/opt/stack/nova/nova/virt/libvirt/connection.py", line 427, in
> attach_volume
> (nova.compute.manager): TRACE:     mount_device)
> (nova.compute.manager): TRACE:   File
> "/opt/stack/nova/nova/virt/libvirt/connection.py", line 419, in
> volume_driver_method
> (nova.compute.manager): TRACE:     return method(connection_info, *args,
> **kwargs)
> (nova.compute.manager): TRACE:   File
> "/opt/stack/nova/nova/virt/libvirt/volume.py", line 128, in connect_volume
> (nova.compute.manager): TRACE:     self._run_iscsiadm(iscsi_properties,
> ("--login",))
> (nova.compute.manager): TRACE:   File
> "/opt/stack/nova/nova/virt/libvirt/volume.py", line 96, in _run_iscsiadm
> (nova.compute.manager): TRACE:     check_exit_code=check_exit_code)
> (nova.compute.manager): TRACE:   File "/opt/stack/nova/nova/utils.py",
> line 225, in execute
> (nova.compute.manager): TRACE:     cmd=' '.join(cmd))
> (nova.compute.manager): TRACE: ProcessExecutionError: Unexpected error
> while running command.
> (nova.compute.manager): TRACE: Command: sudo iscsiadm -m node -T
> iqn.2010-01.com.solidfire:wmit.volume-00000002.51 -p 192.168.69.211:3260 --login
> (nova.compute.manager): TRACE: Exit code: 255
> (nova.compute.manager): TRACE: Stdout: 'Logging in to [iface: default,
> target: iqn.2010-01.com.solidfire:wmit.volume-00000002.51, portal:
> 192.168.69.211,3260]\n'
> (nova.compute.manager): TRACE: Stderr: 'iscsiadm: Could not login to
> [iface: default, target: iqn.2010-01.com.solidfire:wmit.volume-00000002.51,
> portal: 192.168.69.211,3260]: \niscsiadm: initiator reported error (15 -
> already exists)\n'
> (nova.compute.manager): TRACE:
> 2011-12-23 10:36:19,888 DEBUG nova.rpc [-] Making asynchronous call on
> volume.devstack-1 ... from (pid=1090) multicall
> /opt/stack/nova/nova/rpc/impl_kombu.py:737
> 2011-12-23 10:36:19,888 DEBUG nova.rpc [-] MSG_ID is
> 86f9dbb136324b5394a209ab68dbf0af from (pid=1090) multicall
> /opt/stack/nova/nova/rpc/impl_kombu.py:740
> 2011-12-23 10:36:20,064 ERROR nova.rpc [-] Exception during message
> handling
> (nova.rpc): TRACE: Traceback (most recent call last):
> (nova.rpc): TRACE:   File "/opt/stack/nova/nova/rpc/impl_kombu.py", line
> 627, in _process_data
> (nova.rpc): TRACE:     rval = node_func(context=ctxt, **node_args)
> (nova.rpc): TRACE:   File "/opt/stack/nova/nova/exception.py", line 115,
> in wrapped
> (nova.rpc): TRACE:     return f(*args, **kw)
> (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line
> 115, in decorated_function
> (nova.rpc): TRACE:     function(self, context, instance_uuid, *args,
> **kwargs)
> (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line
> 138, in decorated_function
> (nova.rpc): TRACE:     self.add_instance_fault_from_exc(context,
> instance_uuid, e)
> (nova.rpc): TRACE:   File "/usr/lib/python2.7/contextlib.py", line 24, in
> __exit__
> (nova.rpc): TRACE:     self.gen.next()
> (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line
> 133, in decorated_function
> (nova.rpc): TRACE:     return function(self, context, instance_uuid,
> *args, **kwargs)
> (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line
> 1519, in attach_volume
> (nova.rpc): TRACE:     raise exc
> (nova.rpc): TRACE: ProcessExecutionError: Unexpected error while running
> command.
> (nova.rpc): TRACE: Command: None
> (nova.rpc): TRACE: Exit code: -
> (nova.rpc): TRACE: Stdout: None
> (nova.rpc): TRACE: Stderr: None
> (nova.rpc): TRACE:
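>
> Condensed, the attach path in the log above is this command sequence, and
> only the final login fails (<IQN> stands in for the volume-00000002 target
> and <password> for the CHAP secret shown in the log):
>
>   sudo iscsiadm -m node -T <IQN> -p 192.168.69.211:3260
>   sudo iscsiadm -m node -T <IQN> -p 192.168.69.211:3260 --op update -n node.session.auth.authmethod -v CHAP
>   sudo iscsiadm -m node -T <IQN> -p 192.168.69.211:3260 --op update -n node.session.auth.username -v devstack-1-1
>   sudo iscsiadm -m node -T <IQN> -p 192.168.69.211:3260 --op update -n node.session.auth.password -v <password>
>   sudo iscsiadm -m node -T <IQN> -p 192.168.69.211:3260 --login    # exit code 255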
>
>
>
>
> ~~~~ nova-volume.log~~~~~~~~~~~
>
> 2011-12-23 10:54:12,481 DEBUG nova.rpc [-] received {u'_context_roles':
> [u'Admin', u'Admin', u'KeystoneAdmin', u'KeystoneServiceAdmin'],
> u'_msg_id': u'd3203d11b49b4ab28b5a15b13126093b', u'_context_read_deleted':
> u'no', u'_context_request_id': u'90be39e2-b970-4954-afb3-83dd054bc02a',
> u'args': {u'volume_id': u'2', u'address': u'192.168.68.11'},
> u'_context_auth_token': u'tokentoken', u'_context_strategy': u'keystone',
> u'_context_is_admin': True, u'_context_project_id': u'1',
> u'_context_timestamp': u'2011-12-23T18:54:11.923697', u'_context_user_id':
> u'admin', u'method': u'initialize_connection', u'_context_remote_address':
> u'192.168.68.11'} from (pid=1112) __call__
> /opt/stack/nova/nova/rpc/impl_kombu.py:607
> 2011-12-23 10:54:12,482 DEBUG nova.rpc [-] unpacked context: {'user_id':
> u'admin', 'roles': [u'Admin', u'Admin', u'KeystoneAdmin',
> u'KeystoneServiceAdmin'], 'timestamp': u'2011-12-23T18:54:11.923697',
> 'auth_token': u'tokentoken', 'msg_id': u'd3203d11b49b4ab28b5a15b13126093b',
> 'remote_address': u'192.168.68.11', 'strategy': u'keystone', 'is_admin':
> True, 'request_id': u'90be39e2-b970-4954-afb3-83dd054bc02a', 'project_id':
> u'1', 'read_deleted': u'no'} from (pid=1112) _unpack_context
> /opt/stack/nova/nova/rpc/impl_kombu.py:653
> 2011-12-23 10:54:13,088 DEBUG nova.rpc [-] received {u'_context_roles':
> [u'Admin', u'Admin', u'KeystoneAdmin', u'KeystoneServiceAdmin'],
> u'_msg_id': u'6ba61e4083204d55bcf3762cd68b34a6', u'_context_read_deleted':
> u'no', u'_context_request_id': u'90be39e2-b970-4954-afb3-83dd054bc02a',
> u'args': {u'volume_id': u'2', u'address': u'192.168.68.11'},
> u'_context_auth_token': u'tokentoken', u'_context_strategy': u'keystone',
> u'_context_is_admin': True, u'_context_project_id': u'1',
> u'_context_timestamp': u'2011-12-23T18:54:11.923697', u'_context_user_id':
> u'admin', u'method': u'terminate_connection', u'_context_remote_address':
> u'192.168.68.11'} from (pid=1112) __call__
> /opt/stack/nova/nova/rpc/impl_kombu.py:607
> 2011-12-23 10:54:13,088 DEBUG nova.rpc [-] unpacked context: {'user_id':
> u'admin', 'roles': [u'Admin', u'Admin', u'KeystoneAdmin',
> u'KeystoneServiceAdmin'], 'timestamp': u'2011-12-23T18:54:11.923697',
> 'auth_token': u'tokentoken', 'msg_id': u'6ba61e4083204d55bcf3762cd68b34a6',
> 'remote_address': u'192.168.68.11', 'strategy': u'keystone', 'is_admin':
> True, 'request_id': u'90be39e2-b970-4954-afb3-83dd054bc02a', 'project_id':
> u'1', 'read_deleted': u'no'} from (pid=1112) _unpack_context
> /opt/stack/nova/nova/rpc/impl_kombu.py:653
> 2011-12-23 10:54:30,152 DEBUG nova.manager
> [03fbf039-8c6b-4515-a800-a202bc105337 None None] Running periodic task
> VolumeManager._publish_service_capabilities from (pid=1112) periodic_tasks
> /opt/stack/nova/nova/manager.py:152
> 2011-12-23 10:54:30,152 DEBUG nova.manager
> [03fbf039-8c6b-4515-a800-a202bc105337 None None] Running periodic task
> VolumeManager._report_driver_status from (pid=1112) periodic_tasks
> /opt/stack/nova/nova/manager.py:152
>
>