<html><head><meta http-equiv="Content-Type" content="text/html; charset=us-ascii"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;">I just found it:<div><br></div><div><pre class="screen" style="color: rgb(35, 48, 45); font-family: Monaco, 'Courier New', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; overflow-x: scroll; width: 938px; border-bottom-color: rgb(222, 222, 222) !important; border-bottom-style: solid !important; border-bottom-width: 1px !important; border-top-color: rgb(222, 222, 222) !important; border-top-style: solid !important; border-top-width: 1px !important; font-size: 12px !important; padding: 0.5em !important;">force-delete </pre><div><br></div><div><br></div><div><br class="Apple-interchange-newline"><blockquote type="cite"><div></div><div style="font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;">Argh, I think I see the issue... I created my cinder endpoints to point to the wrong internal hosts. I fixed the endpoints, but the volume still sits in "Attaching" state, so I can't touch it.<br><br>Should I manually tweak mysql to fix this? I see:<br><br>mysql> use cinder;<br><br>mysql> select * from volumes\G<br>*************************** 1. 
row ***************************<br> created_at: 2014-04-08 21:36:17<br> updated_at: 2014-04-08 21:36:17<br> deleted_at: 2014-04-08 21:37:52<br> deleted: 1<br> id: 756ca5f0-bcdf-40e9-a62c-26a648be753f<br> ec2_id: NULL<br> user_id: f8fdf7f84ad34c439c4075b5e3720211<br> project_id: f7e61747885045d8b266a161310c0094<br> host: NULL<br> size: 10<br> availability_zone: nova<br> instance_uuid: NULL<br> mountpoint: NULL<br> attach_time: NULL<br> status: deleted<br> attach_status: detached<br> scheduled_at: NULL<br> launched_at: NULL<br> terminated_at: NULL<br> display_name: test-volume-1<br>display_description: Just testing yo.<br> provider_location: NULL<br> provider_auth: NULL<br> snapshot_id: NULL<br> volume_type_id: NULL<br> source_volid: NULL<br> bootable: 0<br> attached_host: NULL<br> provider_geometry: NULL<br> _name_id: NULL<br> encryption_key_id: NULL<br> migration_status: NULL<br>*************************** 2. row ***************************<br> created_at: 2014-04-08 21:38:52<br> updated_at: 2014-04-08 21:42:44<br> deleted_at: NULL<br> deleted: 0<br> id: e34524ea-cd2f-41e8-b37e-ac15456275d7<br> ec2_id: NULL<br> user_id: f8fdf7f84ad34c439c4075b5e3720211<br> project_id: f7e61747885045d8b266a161310c0094<br> host: genome-cloudstore<br> size: 10<br> availability_zone: nova<br> instance_uuid: NULL<br> mountpoint: NULL<br> attach_time: NULL<br> status: attaching<br> attach_status: detached<br> scheduled_at: 2014-04-08 21:38:52<br> launched_at: 2014-04-08 21:38:53<br> terminated_at: NULL<br> display_name: test-volume-1<br>display_description: Just testing.<br> provider_location: NULL<br> provider_auth: NULL<br> snapshot_id: NULL<br> volume_type_id: NULL<br> source_volid: NULL<br> bootable: 0<br> attached_host: NULL<br> provider_geometry: NULL<br> _name_id: NULL<br> encryption_key_id: NULL<br> migration_status: NULL<br>2 rows in set (0.00 sec)<br><br>I can change the value of 'status' from 'attaching' to 'error'... In theory I can delete it then. 
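For what it's worth, a sketch of both routes (the admin CLI route avoids touching MySQL at all; the volume ID is the one from the dump above, and none of this is verified against your deployment):

```shell
# Route 1: admin CLI -- let the Cinder API do the state bookkeeping.
cinder reset-state --state error e34524ea-cd2f-41e8-b37e-ac15456275d7
cinder delete e34524ea-cd2f-41e8-b37e-ac15456275d7
# or skip the state reset entirely:
cinder force-delete e34524ea-cd2f-41e8-b37e-ac15456275d7

# Route 2: raw SQL, only if the API route fails. Guard the UPDATE with
# the current status so you can't clobber a volume in some other state.
mysql cinder -e "UPDATE volumes SET status='error' \
  WHERE id='e34524ea-cd2f-41e8-b37e-ac15456275d7' AND status='attaching';"
```

Note that reset-state only rewrites the database record; it doesn't detach anything on the storage backend, which should be fine here since attach_status is already 'detached'.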
Any reason why I shouldn't do that?<br><br>Thanks,<br>-erich<br><br><br>On 04/08/14 15:27, Erich Weiler wrote:<br><blockquote type="cite">Hey Y'all,<br><br>Thanks a bunch for all the help so far by the way - I'm nearly done with<br>standing up my POC OpenStack system.<br><br>This is Icehouse RDO on Red Hat BTW.<br><br>I'm having this odd thing happening with cinder. In Horizon, I can see<br>the cinder volume storage and even create a volume, no problem. I try<br>to attach the storage to a VM and it sits in "Attaching..." in Horizon,<br>and never seems to attach. Nor can I delete it using Horizon or<br>the command line:<br><br># cinder list<br>+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+<br>| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |<br>+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+<br>| e34524ea-cd2f-41e8-b37e-ac15456275d7 | attaching | test-volume-1 | 10 | None | false | |<br>+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+<br><br># cinder delete e34524ea-cd2f-41e8-b37e-ac15456275d7<br>Delete for volume e34524ea-cd2f-41e8-b37e-ac15456275d7 failed: Invalid<br>volume: Volume status must be available or error, but current status is:<br>attaching (HTTP 400) (Request-ID: req-1d2351e0-48e8-4b23-8c9b-d6f2d0b21718)<br>ERROR: Unable to delete any of the specified volumes.<br><br>On the compute node that has the VM, I see this in the nova logs:<br><br>2014-04-08 14:42:12.440 14756 WARNING nova.network.neutronv2 [-] Using<br>neutron_admin_tenant_name for authentication is deprecated and will be<br>removed in the next release. 
Use neutron_admin_tenant_id instead.<br>2014-04-08 14:42:17.167 14756 WARNING nova.virt.disk.vfs.guestfs<br>[req-8c668c6c-9184-4f31-891d-93b7ddb1c25b<br>f8fdf7f84ad34c439c4075b5e3720211 f7e61747885045d8b266a161310c0094]<br>Failed to close augeas aug_close: do_aug_close: you must call 'aug-init'<br>first to initialize Augeas<br>2014-04-08 14:42:44.971 14756 ERROR nova.compute.manager<br>[req-59b34801-a0b8-4113-987f-95b134f1e181<br>f8fdf7f84ad34c439c4075b5e3720211 f7e61747885045d8b266a161310c0094]<br>[instance: 3aef6998-0aa4-40f5-a5be-469edefe98fc] Failed to attach<br>e34524ea-cd2f-41e8-b37e-ac15456275d7 at /dev/vdb<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] Traceback (most recent call last):<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3886,<br>in _attach_volume<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] do_check_attach=False,<br>do_driver_attach=True)<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] File<br>"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 44,<br>in wrapped<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] ret_val = method(obj, context,<br>*args, **kwargs)<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] File<br>"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 215,<br>in attach<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] volume =<br>volume_api.get(context, self.volume_id)<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] 
File<br>"/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 174, in<br>wrapper<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] res = method(self, ctx,<br>volume_id, *args, **kwargs)<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] File<br>"/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 207, in get<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] item =<br>cinderclient(context).volumes.get(volume_id)<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] File<br>"/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 196,<br>in get<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] return self._get("/volumes/%s"<br>% volume_id, "volume")<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] File<br>"/usr/lib/python2.6/site-packages/cinderclient/base.py", line 145, in _get<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] resp, body =<br>self.api.client.get(url)<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] File<br>"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 207, in get<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] return self._cs_request(url,<br>'GET', **kwargs)<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] File<br>"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 199, in<br>_cs_request<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager 
[instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] raise<br>exceptions.ConnectionError(msg)<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc] ConnectionError: Unable to<br>establish connection: [Errno 101] ENETUNREACH<br>2014-04-08 14:42:44.971 14756 TRACE nova.compute.manager [instance:<br>3aef6998-0aa4-40f5-a5be-469edefe98fc]<br>2014-04-08 14:42:44.977 14756 ERROR root [-] Original exception being<br>dropped: ['Traceback (most recent call last):\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3886,<br>in _attach_volume\n do_check_attach=False, do_driver_attach=True)\n',<br>' File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py",<br>line 44, in wrapped\n ret_val = method(obj, context, *args,<br>**kwargs)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 215,<br>in attach\n volume = volume_api.get(context, self.volume_id)\n', '<br>File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 174,<br>in wrapper\n res = method(self, ctx, volume_id, *args, **kwargs)\n',<br>' File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line<br>207, in get\n item = cinderclient(context).volumes.get(volume_id)\n',<br>' File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py",<br>line 196, in get\n return self._get("/volumes/%s" % volume_id,<br>"volume")\n', ' File<br>"/usr/lib/python2.6/site-packages/cinderclient/base.py", line 145, in<br>_get\n resp, body = self.api.client.get(url)\n', ' File<br>"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 207, in<br>get\n return self._cs_request(url, \'GET\', **kwargs)\n', ' File<br>"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 199, in<br>_cs_request\n raise exceptions.ConnectionError(msg)\n',<br>'ConnectionError: Unable to establish connection: [Errno 101]<br>ENETUNREACH\n']<br>2014-04-08 14:42:45.130 14756 ERROR oslo.messaging.rpc.dispatcher 
[-]<br>Exception during message handling: Unable to establish connection:<br>[Errno 101] ENETUNREACH<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher<br>Traceback (most recent call last):<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",<br>line 133, in _dispatch_and_reply<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher<br>incoming.message))<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",<br>line 176, in _dispatch<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return<br>self._do_dispatch(endpoint, method, ctxt, args)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",<br>line 122, in _do_dispatch<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher result<br>= getattr(endpoint, method)(ctxt, **new_args)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 360, in<br>decorated_function<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return<br>function(self, context, *args, **kwargs)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher payload)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",<br>line 68, in __exit__<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher<br>six.reraise(self.type_, self.value, self.tb)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher 
File<br>"/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return<br>f(self, context, *args, **kw)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 244, in<br>decorated_function<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher pass<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",<br>line 68, in __exit__<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher<br>six.reraise(self.type_, self.value, self.tb)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 230, in<br>decorated_function<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return<br>function(self, context, *args, **kwargs)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 272, in<br>decorated_function<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher e,<br>sys.exc_info())<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",<br>line 68, in __exit__<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher<br>six.reraise(self.type_, self.value, self.tb)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 259, in<br>decorated_function<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return<br>function(self, context, *args, **kwargs)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", 
line 3876,<br>in attach_volume<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher<br>bdm.destroy(context)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in<br>post<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher return<br>self._cs_request(url, 'POST', **kwargs)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher File<br>"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 199, in<br>_cs_request<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher raise<br>exceptions.ConnectionError(msg)<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher<br>ConnectionError: Unable to establish connection: [Errno 101] ENETUNREACH<br>2014-04-08 14:42:45.130 14756 TRACE oslo.messaging.rpc.dispatcher<br>2014-04-08 14:42:45.133 14756 ERROR oslo.messaging._drivers.common [-]<br>Returning exception Unable to establish connection: [Errno 101]<br>ENETUNREACH to caller<br>2014-04-08 14:42:45.134 14756 ERROR oslo.messaging._drivers.common [-]<br>['Traceback (most recent call last):\n', ' File<br>"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",<br>line 133, in _dispatch_and_reply\n incoming.message))\n', ' File<br>"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",<br>line 176, in _dispatch\n return self._do_dispatch(endpoint, method,<br>ctxt, args)\n', ' File<br>"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",<br>line 122, in _do_dispatch\n result = getattr(endpoint, method)(ctxt,<br>**new_args)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 360, in<br>decorated_function\n return function(self, context, *args,<br>**kwargs)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in<br>wrapped\n payload)\n', ' 
File<br>"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",<br>line 68, in __exit__\n six.reraise(self.type_, self.value,<br>self.tb)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in<br>wrapped\n return f(self, context, *args, **kw)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 244, in<br>decorated_function\n pass\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",<br>line 68, in __exit__\n six.reraise(self.type_, self.value,<br>self.tb)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 230, in<br>decorated_function\n return function(self, context, *args,<br>**kwargs)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 272, in<br>decorated_function\n e, sys.exc_info())\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",<br>line 68, in __exit__\n six.reraise(self.type_, self.value,<br>self.tb)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 259, in<br>decorated_function\n return function(self, context, *args,<br>**kwargs)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3876,<br>in attach_volume\n bdm.destroy(context)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py",<br>line 68, in __exit__\n six.reraise(self.type_, self.value,<br>self.tb)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3873,<br>in attach_volume\n return self._attach_volume(context, instance,<br>driver_bdm)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3894,<br>in _attach_volume\n self.volume_api.unreserve_volume(context,<br>bdm.volume_id)\n', ' File<br>"/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 174, in<br>wrapper\n res = method(self, ctx, volume_id, *args, **kwargs)\n', '<br>File 
"/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 250,<br>in unreserve_volume\n<br>cinderclient(context).volumes.unreserve(volume_id)\n', ' File<br>"/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 293,<br>in unreserve\n return self._action(\'os-unreserve\', volume)\n', '<br>File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line<br>250, in _action\n return self.api.client.post(url, body=body)\n', '<br>File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line<br>210, in post\n return self._cs_request(url, \'POST\', **kwargs)\n', '<br> File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line<br>199, in _cs_request\n raise exceptions.ConnectionError(msg)\n',<br>'ConnectionError: Unable to establish connection: [Errno 101]<br>ENETUNREACH\n']<br><br>I see several messages in there about "Unable to attach /dev/vdb" and<br>also some "unable to establish connection". Any ideas where I went wrong?<br><br>Also, is there a way I can delete the volume that is in "attaching"<br>status? I terminated the VM successfully, but the volume still sits in<br>"attaching" status...<br><br>Hopefully I'm missing something basic... ;)<br><br>Thanks for any ideas!!<br><br>cheers,<br>erich<br><br></blockquote><br>_______________________________________________<br>Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>Post to : <a href="mailto:openstack@lists.openstack.org">openstack@lists.openstack.org</a><br>Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a></div></blockquote></div><br><div><br></div></div></body></html>