[Openstack] Cinder issues...
Erich Weiler
weiler at soe.ucsc.edu
Tue Apr 8 23:41:35 UTC 2014
Whoops, I got too eager and tried to change the value to 'error'. Now I
can't seem to do anything with nova or cinder...
# nova list
ERROR: <attribute 'message' of 'exceptions.BaseException' objects> (HTTP
500)
# cinder list
ERROR: <attribute 'message' of 'exceptions.BaseException' objects> (HTTP
500)
I switched the value back to 'attaching', but now I'm stuck with no
command-line cinder or nova at all (every command returns the error
above). Horizon still half-works -- I can do some things there, but not
everything. Did I mess it up badly?
I rebooted my controller and cinder nodes (and restarted all the
associated APIs and services) and it still won't work... I have a
feeling I broke something by changing that value in MySQL... Any
pointers much appreciated... :(
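
("All associated APIs and services" being, on this RDO install, roughly
the following -- I may be forgetting one:

# service openstack-nova-api restart
# service openstack-cinder-api restart
# service openstack-cinder-scheduler restart
# service openstack-cinder-volume restart
)
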
On 04/08/14 16:28, Remo Mattei wrote:
> I just found it
>
> force-delete
>
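(Presumably that would be:

# cinder force-delete e34524ea-cd2f-41e8-b37e-ac15456275d7

-- an admin-only call that skips the normal status check. Worth a try
once the API stops returning 500s.)
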
>> Arg, I think I see the issue... I created my cinder endpoints to
>> point to the wrong internal hosts. I fixed the endpoints, but the
>> volume still sits in "Attaching" state, so I can't touch it.
>>
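>> (In case it helps anyone else: recreating a cinder endpoint with the
>> Icehouse-era keystone CLI looks roughly like this -- "controller" is
>> a placeholder for the real internal host:
>>
>> # keystone endpoint-list
>> # keystone endpoint-delete <old-endpoint-id>
>> # keystone endpoint-create --region=RegionOne \
>>     --service-id=$(keystone service-list | awk '/ volume / {print $2}') \
>>     --publicurl='http://controller:8776/v1/%(tenant_id)s' \
>>     --internalurl='http://controller:8776/v1/%(tenant_id)s' \
>>     --adminurl='http://controller:8776/v1/%(tenant_id)s'
>> )
>>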
>> Should I manually tweak mysql to fix this? I see:
>>
>> mysql> use cinder;
>>
>> mysql> select * from volumes\G
>> *************************** 1. row ***************************
>> created_at: 2014-04-08 21:36:17
>> updated_at: 2014-04-08 21:36:17
>> deleted_at: 2014-04-08 21:37:52
>> deleted: 1
>> id: 756ca5f0-bcdf-40e9-a62c-26a648be753f
>> ec2_id: NULL
>> user_id: f8fdf7f84ad34c439c4075b5e3720211
>> project_id: f7e61747885045d8b266a161310c0094
>> host: NULL
>> size: 10
>> availability_zone: nova
>> instance_uuid: NULL
>> mountpoint: NULL
>> attach_time: NULL
>> status: deleted
>> attach_status: detached
>> scheduled_at: NULL
>> launched_at: NULL
>> terminated_at: NULL
>> display_name: test-volume-1
>> display_description: Just testing yo.
>> provider_location: NULL
>> provider_auth: NULL
>> snapshot_id: NULL
>> volume_type_id: NULL
>> source_volid: NULL
>> bootable: 0
>> attached_host: NULL
>> provider_geometry: NULL
>> _name_id: NULL
>> encryption_key_id: NULL
>> migration_status: NULL
>> *************************** 2. row ***************************
>> created_at: 2014-04-08 21:38:52
>> updated_at: 2014-04-08 21:42:44
>> deleted_at: NULL
>> deleted: 0
>> id: e34524ea-cd2f-41e8-b37e-ac15456275d7
>> ec2_id: NULL
>> user_id: f8fdf7f84ad34c439c4075b5e3720211
>> project_id: f7e61747885045d8b266a161310c0094
>> host: genome-cloudstore
>> size: 10
>> availability_zone: nova
>> instance_uuid: NULL
>> mountpoint: NULL
>> attach_time: NULL
>> status: attaching
>> attach_status: detached
>> scheduled_at: 2014-04-08 21:38:52
>> launched_at: 2014-04-08 21:38:53
>> terminated_at: NULL
>> display_name: test-volume-1
>> display_description: Just testing.
>> provider_location: NULL
>> provider_auth: NULL
>> snapshot_id: NULL
>> volume_type_id: NULL
>> source_volid: NULL
>> bootable: 0
>> attached_host: NULL
>> provider_geometry: NULL
>> _name_id: NULL
>> encryption_key_id: NULL
>> migration_status: NULL
>> 2 rows in set (0.00 sec)
>>
>> I could change the value of 'status' from 'attaching' to 'error'; in
>> theory I could then delete it. Any reason why I shouldn't do that?
>>
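>> Concretely, I mean something like this (direct DB surgery, so
>> presumably only sane on a throwaway POC like this one):
>>
>> mysql> use cinder;
>> mysql> UPDATE volumes SET status='error'
>>     ->     WHERE id='e34524ea-cd2f-41e8-b37e-ac15456275d7';
>>
>> followed by a regular "cinder delete" of that ID.
>>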
>> Thanks,
>> -erich
>>
>>
>> On 04/08/14 15:27, Erich Weiler wrote:
>>> Hey Y'all,
>>>
>>> Thanks a bunch for all the help so far by the way - I'm nearly done with
>>> standing up my POC OpenStack system.
>>>
>>> This is Icehouse RDO on Red Hat, BTW.
>>>
>>> I'm having this odd thing happening with cinder. In Horizon, I can see
>>> the cinder volume storage and even create a volume, no problem. But when
>>> I try to attach the volume to a VM, it sits in "Attaching..." in Horizon
>>> and never actually attaches. Nor can I delete it, from Horizon or from
>>> the command line:
>>>
>>> # cinder list
>>> +--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
>>> |                  ID                  |   Status  |  Display Name | Size | Volume Type | Bootable | Attached to |
>>> +--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
>>> | e34524ea-cd2f-41e8-b37e-ac15456275d7 | attaching | test-volume-1 |  10  |     None    |   false  |             |
>>> +--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
>>>
>>>
>>> # cinder delete e34524ea-cd2f-41e8-b37e-ac15456275d7
>>> Delete for volume e34524ea-cd2f-41e8-b37e-ac15456275d7 failed: Invalid
>>> volume: Volume status must be available or error, but current status is:
>>> attaching (HTTP 400) (Request-ID:
>>> req-1d2351e0-48e8-4b23-8c9b-d6f2d0b21718)
>>> ERROR: Unable to delete any of the specified volumes.
>>>
>>> On the compute node that has the VM, I see this in the nova logs:
>>>
>>> 2014-04-08 14:42:12.440 14756 WARNING nova.network.neutronv2 [-] Using
>>> neutron_admin_tenant_name for authentication is deprecated and will be
>>> removed in the next release. Use neutron_admin_tenant_id instead.
>>> 2014-04-08 14:42:17.167 14756 WARNING nova.virt.disk.vfs.guestfs
>>> [req-8c668c6c-9184-4f31-891d-93b7ddb1c25b
>>> f8fdf7f84ad34c439c4075b5e3720211 f7e61747885045d8b266a161310c0094]
>>> Failed to close augeas aug_close: do_aug_close: you must call 'aug-init'
>>> first to initialize Augeas
>>> 2014-04-08 14:42:44.971 14756 ERROR nova.compute.manager
>>> [req-59b34801-a0b8-4113-987f-95b134f1e181
>>> f8fdf7f84ad34c439c4075b5e3720211 f7e61747885045d8b266a161310c0094]
>>> [instance: 3aef6998-0aa4-40f5-a5be-469edefe98fc] Failed to attach
>>> e34524ea-cd2f-41e8-b37e-ac15456275d7 at /dev/vdb
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] Traceback (most recent call last):
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3886,
>>> in _attach_volume
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] do_check_attach=False,
>>> do_driver_attach=True)
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] File
>>> "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 44,
>>> in wrapped
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] ret_val = method(obj, context,
>>> *args, **kwargs)
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] File
>>> "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 215,
>>> in attach
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] volume =
>>> volume_api.get(context, self.volume_id)
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] File
>>> "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 174, in
>>> wrapper
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] res = method(self, ctx,
>>> volume_id, *args, **kwargs)
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] File
>>> "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 207,
>>> in get
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] item =
>>> cinderclient(context).volumes.get(volume_id)
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] File
>>> "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 196,
>>> in get
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] return self._get("/volumes/%s"
>>> % volume_id, "volume")
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] File
>>> "/usr/lib/python2.6/site-packages/cinderclient/base.py", line 145, in
>>> _get
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] resp, body =
>>> self.api.client.get(url)
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] File
>>> "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 207,
>>> in get
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] return self._cs_request(url,
>>> 'GET', **kwargs)
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] File
>>> "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 199, in
>>> _cs_request
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] raise
>>> exceptions.ConnectionError(msg)
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc] ConnectionError: Unable to
>>> establish connection: [Errno 101] ENETUNREACH
>>> 2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:
>>> 3aef6998-0aa4-40f5-a5be-469edefe98fc]
>>> 2014-04-08 14:42:44.977 14756 ERROR root [-] Original exception being
>>> dropped: ['Traceback (most recent call last):\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3886,
>>> in _attach_volume\n do_check_attach=False, do_driver_attach=True)\n',
>>> ' File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py",
>>> line 44, in wrapped\n ret_val = method(obj, context, *args,
>>> **kwargs)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 215,
>>> in attach\n volume = volume_api.get(context, self.volume_id)\n', '
>>> File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 174,
>>> in wrapper\n res = method(self, ctx, volume_id, *args, **kwargs)\n',
>>> ' File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line
>>> 207, in get\n item = cinderclient(context).volumes.get(volume_id)\n',
>>> ' File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py",
>>> line 196, in get\n return self._get("/volumes/%s" % volume_id,
>>> "volume")\n', ' File
>>> "/usr/lib/python2.6/site-packages/cinderclient/base.py", line 145, in
>>> _get\n resp, body = self.api.client.get(url)\n', ' File
>>> "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 207, in
>>> get\n return self._cs_request(url, \'GET\', **kwargs)\n', ' File
>>> "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 199, in
>>> _cs_request\n raise exceptions.ConnectionError(msg)\n',
>>> 'ConnectionError: Unable to establish connection: [Errno 101]
>>> ENETUNREACH\n']
>>> 2014-04-08 14:42:45.130 14756 ERROR oslo.messaging.rpc.dispatcher [-]
>>> Exception during message handling: Unable to establish connection:
>>> [Errno 101] ENETUNREACH
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher
>>> Traceback (most recent call last):
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
>>> line 133, in _dispatch_and_reply
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher
>>> incoming.message))
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
>>> line 176, in _dispatch
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return
>>> self._do_dispatch(endpoint, method, ctxt, args)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
>>> line 122, in _do_dispatch
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher result
>>> = getattr(endpoint, method)(ctxt, **new_args)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 360, in
>>> decorated_function
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return
>>> function(self, context, *args, **kwargs)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher
>>> payload)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",
>>> line 68, in __exit__
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return
>>> f(self, context, *args, **kw)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 244, in
>>> decorated_function
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher
>>> pass
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",
>>> line 68, in __exit__
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 230, in
>>> decorated_function
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return
>>> function(self, context, *args, **kwargs)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 272, in
>>> decorated_function
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher e,
>>> sys.exc_info())
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",
>>> line 68, in __exit__
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 259, in
>>> decorated_function
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return
>>> function(self, context, *args, **kwargs)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3876,
>>> in attach_volume
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher
>>> bdm.destroy(context)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in
>>> post
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return
>>> self._cs_request(url, 'POST', **kwargs)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 199, in
>>> _cs_request
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher raise
>>> exceptions.ConnectionError(msg)
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher
>>> ConnectionError: Unable to establish connection: [Errno 101] ENETUNREACH
>>> 2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher
>>> 2014-04-08 14:42:45.133 14756 ERROR oslo.messaging._drivers.common [-]
>>> Returning exception Unable to establish connection: [Errno 101]
>>> ENETUNREACH to caller
>>> 2014-04-08 14:42:45.134 14756 ERROR oslo.messaging._drivers.common [-]
>>> ['Traceback (most recent call last):\n', ' File
>>> "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
>>> line 133, in _dispatch_and_reply\n incoming.message))\n', ' File
>>> "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
>>> line 176, in _dispatch\n return self._do_dispatch(endpoint, method,
>>> ctxt, args)\n', ' File
>>> "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
>>> line 122, in _do_dispatch\n result = getattr(endpoint, method)(ctxt,
>>> **new_args)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 360, in
>>> decorated_function\n return function(self, context, *args,
>>> **kwargs)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in
>>> wrapped\n payload)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",
>>> line 68, in __exit__\n six.reraise(self.type_, self.value,
>>> self.tb)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in
>>> wrapped\n return f(self, context, *args, **kw)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 244, in
>>> decorated_function\n pass\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",
>>> line 68, in __exit__\n six.reraise(self.type_, self.value,
>>> self.tb)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 230, in
>>> decorated_function\n return function(self, context, *args,
>>> **kwargs)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 272, in
>>> decorated_function\n e, sys.exc_info())\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",
>>> line 68, in __exit__\n six.reraise(self.type_, self.value,
>>> self.tb)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 259, in
>>> decorated_function\n return function(self, context, *args,
>>> **kwargs)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3876,
>>> in attach_volume\n bdm.destroy(context)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",
>>> line 68, in __exit__\n six.reraise(self.type_, self.value,
>>> self.tb)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3873,
>>> in attach_volume\n return self._attach_volume(context, instance,
>>> driver_bdm)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3894,
>>> in _attach_volume\n self.volume_api.unreserve_volume(context,
>>> bdm.volume_id)\n', ' File
>>> "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 174, in
>>> wrapper\n res = method(self, ctx, volume_id, *args, **kwargs)\n', '
>>> File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 250,
>>> in unreserve_volume\n
>>> cinderclient(context).volumes.unreserve(volume_id)\n', ' File
>>> "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 293,
>>> in unreserve\n return self._action(\'os-unreserve\', volume)\n', '
>>> File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line
>>> 250, in _action\n return self.api.client.post(url, body=body)\n', '
>>> File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line
>>> 210, in post\n return self._cs_request(url, \'POST\', **kwargs)\n', '
>>> File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line
>>> 199, in _cs_request\n raise exceptions.ConnectionError(msg)\n',
>>> 'ConnectionError: Unable to establish connection: [Errno 101]
>>> ENETUNREACH\n']
>>>
>>> I see several messages in there about failing to attach the volume at
>>> /dev/vdb, and also some "Unable to establish connection" errors. Any
>>> ideas where I went wrong?
>>>
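>>> (To narrow down the ENETUNREACH part, I figure a raw reachability
>>> test from the compute node can't hurt -- "controller" below stands in
>>> for whatever host the catalog advertises for cinder:
>>>
>>> # keystone endpoint-list | grep 8776
>>> # curl -s http://controller:8776/
>>>
>>> The unauthenticated GET on the root should return the API version
>>> list if the host is reachable at all.)
>>>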
>>> Also, is there a way I can delete the volume that is in "attaching"
>>> status? I terminated the VM successfully, but the volume still sits in
>>> "attaching" status...
>>>
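>>> (Is "cinder reset-state" the sanctioned way to do that? Something
>>> like the following, assuming my client is new enough to have it --
>>> it's an admin-only action as far as I can tell:
>>>
>>> # cinder reset-state --state error e34524ea-cd2f-41e8-b37e-ac15456275d7
>>> # cinder delete e34524ea-cd2f-41e8-b37e-ac15456275d7
>>> )
>>>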
>>> Hopefully I'm missing something basic... ;)
>>>
>>> Thanks for any ideas!!
>>>
>>> cheers,
>>> erich
>>>
>>