[Openstack] volume cannot be attached to a server

DeadSun mwjpiero at gmail.com
Tue Oct 25 07:47:48 UTC 2011


Razique:
I have fixed it.
I modified three files (actually, only nova/virt/libvirt/volume.py needs to be
modified):

/usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py
nova/volume/driver.py
nova/virt/libvirt/volume.py

The change is lun-0 -> lun-1:

- host_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-0"
+ host_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-1"

It works well now, but are the developers planning to fix this bug?
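For context, the hard-coded LUN shows up where connect_volume() in
nova/virt/libvirt/volume.py builds the udev by-path link and then polls for it
to appear. A minimal sketch of that logic, paraphrased from the current code;
the standalone helper and its name are mine, not nova's:

def _host_device(target_portal, target_iqn, lun=1):
    # Build the /dev/disk/by-path link that connect_volume() waits for.
    # The stock code hard-codes lun-0, but the iSCSI target on my nodes
    # exports the volume as LUN 1, so the lun-0 link never appears and
    # the rescan loop gives up after three tries.
    return ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%d"
            % (target_portal, target_iqn, lun))

# _host_device("10.200.200.4:3260",
#              "iqn.2010-10.org.openstack:volume-00000003")
# -> the -lun-1 link that udev actually creates here

A real fix would presumably take the LUN from the volume connection info
instead of hard-coding either value.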


2011/10/25 Razique Mahroua <razique.mahroua at gmail.com>

> Hi livemoon,
> the error is here:
> *ISCSI volume not yet found at: vdc. Will rescan & retry. Try number: 2*
>
> From the node which is supposed to get the volume, run:
> $ iscsiadm -m discovery -t st -p 10.200.200.4
>
> You should see a list of volumes (aka "targets"), one per line in the form:
> <portal IP>:<port>,<tpgt> <target IQN>
>
> Then run:
> $ iscsiadm -m node --targetname $NAME -p 10.200.200.4 -l
>
> And tell us what the output is.
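>
> If the login succeeds but the attach still fails, a quick one-off script
> (mine, not part of nova) lists which LUN link udev actually created under
> /dev/disk/by-path:
>
> import glob
> portal = "10.200.200.4:3260"                       # your target portal
> iqn = "iqn.2010-10.org.openstack:volume-00000003"  # your target IQN
> for path in sorted(glob.glob(
>         "/dev/disk/by-path/ip-%s-iscsi-%s-lun-*" % (portal, iqn))):
>     print(path)
>
> Compare the -lun-N suffix against the path nova complains about in the
> traceback.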
>
> Razique
>
> On 25 Oct 2011, at 03:33, DeadSun wrote:
>
> I have created a volume successfully. When I try to attach it to a server,
> the nova-compute log shows: "ERROR nova.compute.manager
> [df1f81f6-66b4-441e-8996-690e74265fef admin 1] instance 7: attach failed
> /dev/vdc, removing"
>
> How can I fix it? Thank you.
>
> $ nova --debug volume-attach 7 3 /dev/vdc
>
> nova-compute.log
> ***********************************************
> 2011-10-25 09:08:08,371 DEBUG nova.rpc [-] received {u'_context_roles':
> [u'Admin', u'Admin', u'KeystoneAdmin', u'KeystoneServiceAdmin'],
> u'_context_request_id': u'df1f81f6-66b4-441e-8996-690e74265fef',
> u'_context_read_deleted': False, u'args': {u'instance_id': u'7',
> u'mountpoint': u'/dev/vdc', u'volume_id': 3}, u'_context_auth_token':
> u'Mko09ijnbhu87ygv', u'_context_strategy': u'keystone',
> u'_context_is_admin': True, u'_context_project_id': u'1',
> u'_context_timestamp': u'2011-10-25T01:07:53.314598', u'_context_user_id':
> u'admin', u'method': u'attach_volume', u'_context_remote_address':
> u'10.200.200.4'} from (pid=13955) __call__
> /data/nova/nova/rpc/impl_kombu.py:600
> 2011-10-25 09:08:08,371 DEBUG nova.rpc [-] unpacked context: {'user_id':
> u'admin', 'roles': [u'Admin', u'Admin', u'KeystoneAdmin',
> u'KeystoneServiceAdmin'], 'timestamp': u'2011-10-25T01:07:53.314598',
> 'auth_token': u'Mko09ijnbhu87ygv', 'msg_id': None, 'remote_address':
> u'10.200.200.4', 'strategy': u'keystone', 'is_admin': True, 'request_id':
> u'df1f81f6-66b4-441e-8996-690e74265fef', 'project_id': u'1', 'read_deleted':
> False} from (pid=13955) _unpack_context
> /data/nova/nova/rpc/impl_kombu.py:646
> 2011-10-25 09:08:08,376 INFO nova.compute.manager
> [df1f81f6-66b4-441e-8996-690e74265fef admin 1] check_instance_lock:
> decorating: |<function attach_volume at 0x215a7d0>|
> 2011-10-25 09:08:08,377 INFO nova.compute.manager
> [df1f81f6-66b4-441e-8996-690e74265fef admin 1] check_instance_lock:
> arguments: |<nova.compute.manager.ComputeManager object at 0x1a38f90>|
> |<nova.rpc.impl_kombu.RpcContext object at 0x3dc6c50>| |7|
> 2011-10-25 09:08:08,377 DEBUG nova.compute.manager
> [df1f81f6-66b4-441e-8996-690e74265fef admin 1] instance 7: getting locked
> state from (pid=13955) get_lock /data/nova/nova/compute/manager.py:1276
> 2011-10-25 09:08:08,464 INFO nova.compute.manager
> [df1f81f6-66b4-441e-8996-690e74265fef admin 1] check_instance_lock: locked:
> |False|
> 2011-10-25 09:08:08,464 INFO nova.compute.manager
> [df1f81f6-66b4-441e-8996-690e74265fef admin 1] check_instance_lock: admin:
> |True|
> 2011-10-25 09:08:08,465 INFO nova.compute.manager
> [df1f81f6-66b4-441e-8996-690e74265fef admin 1] check_instance_lock:
> executing: |<function attach_volume at 0x215a7d0>|
> 2011-10-25 09:08:08,572 AUDIT nova.compute.manager
> [df1f81f6-66b4-441e-8996-690e74265fef admin 1] instance 7: attaching volume
> 3 to /dev/vdc
> 2011-10-25 09:08:08,631 DEBUG nova.rpc [-] Making asynchronous call on
> volume.node2 ... from (pid=13955) multicall
> /data/nova/nova/rpc/impl_kombu.py:721
> 2011-10-25 09:08:08,632 DEBUG nova.rpc [-] MSG_ID is
> 0380a1f3eeb049c182128c3b2ebb806f from (pid=13955) multicall
> /data/nova/nova/rpc/impl_kombu.py:724
> 2011-10-25 09:08:09,481 DEBUG nova.utils [-] Running cmd (subprocess): sudo
> iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000003 -p
> 10.200.200.4:3260 from (pid=13955) execute /data/nova/nova/utils.py:168
> 2011-10-25 09:08:09,499 DEBUG nova.virt.libvirt.volume [-] iscsiadm ():
> stdout=# BEGIN RECORD 2.0-871
> node.name = iqn.2010-10.org.openstack:volume-00000003
> node.tpgt = 1
> node.startup = manual
> iface.hwaddress = <empty>
> iface.ipaddress = <empty>
> iface.iscsi_ifacename = default
> iface.net_ifacename = <empty>
> iface.transport_name = tcp
> iface.initiatorname = <empty>
> node.discovery_address = node2
> node.discovery_port = 3260
> node.discovery_type = send_targets
> node.session.initial_cmdsn = 0
> node.session.initial_login_retry_max = 8
> node.session.xmit_thread_priority = -20
> node.session.cmds_max = 128
> node.session.queue_depth = 32
> node.session.auth.authmethod = None
> node.session.auth.username = <empty>
> node.session.auth.password = <empty>
> node.session.auth.username_in = <empty>
> node.session.auth.password_in = <empty>
> node.session.timeo.replacement_timeout = 120
> node.session.err_timeo.abort_timeout = 15
> node.session.err_timeo.lu_reset_timeout = 20
> node.session.err_timeo.host_reset_timeout = 60
> node.session.iscsi.FastAbort = Yes
> node.session.iscsi.InitialR2T = No
> node.session.iscsi.ImmediateData = Yes
> node.session.iscsi.FirstBurstLength = 262144
> node.session.iscsi.MaxBurstLength = 16776192
> node.session.iscsi.DefaultTime2Retain = 0
> node.session.iscsi.DefaultTime2Wait = 2
> node.session.iscsi.MaxConnections = 1
> node.session.iscsi.MaxOutstandingR2T = 1
> node.session.iscsi.ERL = 0
> node.conn[0].address = 10.200.200.4
> node.conn[0].port = 3260
> node.conn[0].startup = manual
> node.conn[0].tcp.window_size = 524288
> node.conn[0].tcp.type_of_service = 0
> node.conn[0].timeo.logout_timeout = 15
> node.conn[0].timeo.login_timeout = 15
> node.conn[0].timeo.auth_timeout = 45
> node.conn[0].timeo.noop_out_interval = 5
> node.conn[0].timeo.noop_out_timeout = 5
> node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
> node.conn[0].iscsi.HeaderDigest = None
> node.conn[0].iscsi.DataDigest = None
> node.conn[0].iscsi.IFMarker = No
> node.conn[0].iscsi.OFMarker = No
> # END RECORD
> stderr= from (pid=13955) _run_iscsiadm
> /data/nova/nova/virt/libvirt/volume.py:76
> 2011-10-25 09:08:09,499 DEBUG nova.utils [-] Running cmd (subprocess): sudo
> iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000003 -p
> 10.200.200.4:3260 --login from (pid=13955) execute
> /data/nova/nova/utils.py:168
> 2011-10-25 09:08:10,032 DEBUG nova.virt.libvirt.volume [-] iscsiadm
> ('--login',): stdout=Logging in to [iface: default, target:
> iqn.2010-10.org.openstack:volume-00000003, portal: 10.200.200.4,3260]
> Login to [iface: default, target:
> iqn.2010-10.org.openstack:volume-00000003, portal: 10.200.200.4,3260]:
> successful
> stderr= from (pid=13955) _run_iscsiadm
> /data/nova/nova/virt/libvirt/volume.py:76
> 2011-10-25 09:08:10,033 DEBUG nova.utils [-] Running cmd (subprocess): sudo
> iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000003 -p
> 10.200.200.4:3260 --op update -n node.startup -v automatic from
> (pid=13955) execute /data/nova/nova/utils.py:168
> 2011-10-25 09:08:10,050 DEBUG nova.virt.libvirt.volume [-] iscsiadm
> ('--op', 'update', '-n', 'node.startup', '-v', 'automatic'): stdout= stderr=
> from (pid=13955) _run_iscsiadm /data/nova/nova/virt/libvirt/volume.py:76
> 2011-10-25 09:08:10,051 WARNING nova.virt.libvirt.volume [-] ISCSI volume
> not yet found at: vdc. Will rescan & retry. Try number: 0
> 2011-10-25 09:08:10,051 DEBUG nova.utils [-] Running cmd (subprocess): sudo
> iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000003 -p
> 10.200.200.4:3260 --rescan from (pid=13955) execute
> /data/nova/nova/utils.py:168
> 2011-10-25 09:08:10,073 DEBUG nova.virt.libvirt.volume [-] iscsiadm
> ('--rescan',): stdout=Rescanning session [sid: 4, target:
> iqn.2010-10.org.openstack:volume-00000003, portal: 10.200.200.4,3260]
> stderr= from (pid=13955) _run_iscsiadm
> /data/nova/nova/virt/libvirt/volume.py:76
> 2011-10-25 09:08:11,074 WARNING nova.virt.libvirt.volume [-] ISCSI volume
> not yet found at: vdc. Will rescan & retry. Try number: 1
> 2011-10-25 09:08:11,074 DEBUG nova.utils [-] Running cmd (subprocess): sudo
> iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000003 -p
> 10.200.200.4:3260 --rescan from (pid=13955) execute
> /data/nova/nova/utils.py:168
> 2011-10-25 09:08:11,099 DEBUG nova.virt.libvirt.volume [-] iscsiadm
> ('--rescan',): stdout=Rescanning session [sid: 4, target:
> iqn.2010-10.org.openstack:volume-00000003, portal: 10.200.200.4,3260]
> stderr= from (pid=13955) _run_iscsiadm
> /data/nova/nova/virt/libvirt/volume.py:76
> 2011-10-25 09:08:15,103 WARNING nova.virt.libvirt.volume [-] ISCSI volume
> not yet found at: vdc. Will rescan & retry. Try number: 2
> 2011-10-25 09:08:15,103 DEBUG nova.utils [-] Running cmd (subprocess): sudo
> iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000003 -p
> 10.200.200.4:3260 --rescan from (pid=13955) execute
> /data/nova/nova/utils.py:168
> 2011-10-25 09:08:15,128 DEBUG nova.virt.libvirt.volume [-] iscsiadm
> ('--rescan',): stdout=Rescanning session [sid: 4, target:
> iqn.2010-10.org.openstack:volume-00000003, portal: 10.200.200.4,3260]
> stderr= from (pid=13955) _run_iscsiadm
> /data/nova/nova/virt/libvirt/volume.py:76
> 2011-10-25 09:08:24,130 ERROR nova.compute.manager
> [df1f81f6-66b4-441e-8996-690e74265fef admin 1] instance 7: attach failed
> /dev/vdc, removing
> (nova.compute.manager): TRACE: Traceback (most recent call last):
> (nova.compute.manager): TRACE: File "/data/nova/nova/compute/manager.py",
> line 1360, in attach_volume
> (nova.compute.manager): TRACE: mountpoint)
> (nova.compute.manager): TRACE: File "/data/nova/nova/exception.py", line
> 113, in wrapped
> (nova.compute.manager): TRACE: return f(*args, **kw)
> (nova.compute.manager): TRACE: File
> "/data/nova/nova/virt/libvirt/connection.py", line 379, in attach_volume
> (nova.compute.manager): TRACE: mount_device)
> (nova.compute.manager): TRACE: File
> "/data/nova/nova/virt/libvirt/connection.py", line 371, in
> volume_driver_method
> (nova.compute.manager): TRACE: return method(connection_info, *args,
> **kwargs)
> (nova.compute.manager): TRACE: File
> "/data/nova/nova/virt/libvirt/volume.py", line 120, in connect_volume
> (nova.compute.manager): TRACE: (host_device))
> (nova.compute.manager): TRACE: Error: iSCSI device not found at
> /dev/disk/by-path/ip-10.200.200.4:3260-iscsi-iqn.2010-10.org.openstack:volume-00000003-lun-0
> (nova.compute.manager): TRACE:
> 2011-10-25 09:08:24,204 DEBUG nova.rpc [-] Making asynchronous call on
> volume.node2 ... from (pid=13955) multicall
> /data/nova/nova/rpc/impl_kombu.py:721
> 2011-10-25 09:08:24,205 DEBUG nova.rpc [-] MSG_ID is
> 3224ff5ac0c74a8e907b32c10a7ea0e4 from (pid=13955) multicall
> /data/nova/nova/rpc/impl_kombu.py:724
> 2011-10-25 09:08:24,261 ERROR nova.rpc [-] Exception during message
> handling
> (nova.rpc): TRACE: Traceback (most recent call last):
> (nova.rpc): TRACE: File "/data/nova/nova/rpc/impl_kombu.py", line 620, in
> _process_data
> (nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
> (nova.rpc): TRACE: File "/data/nova/nova/compute/manager.py", line 119, in
> decorated_function
> (nova.rpc): TRACE: function(self, context, instance_id, *args, **kwargs)
> (nova.rpc): TRACE: File "/data/nova/nova/compute/manager.py", line 1369, in
> attach_volume
> (nova.rpc): TRACE: raise exc
> (nova.rpc): TRACE: Error: None
> (nova.rpc): TRACE:
>
> --
> Without indifference to fame, one cannot clarify one's aims; without tranquility, one cannot reach far.


-- 
Without indifference to fame, one cannot clarify one's aims; without tranquility, one cannot reach far.