[Openstack-operators] Issue when trying to snapshot an instance
Grant Morley
grant at absolutedevops.io
Fri Jun 3 12:22:20 UTC 2016
Hi,
Thanks for your help, all. Glance is also backed by ceph rbd, and both the
compute node and the ceph cluster have ample space available:
compute node: /dev/sda1 5.5T 890G 4.3T 17% /
Ceph cluster: 16555 GB used, 23666 GB / 40221 GB avail
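(Those figures are just from the standard checks, e.g.:

    df -h /     # on the compute node
    ceph df     # or "ceph -s" against the cluster

so free space doesn't look like the problem.)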
It turns out to be an issue with the keystone tokens timing out while the
snapshot is taking place.
I will look into that now.
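The obvious knob to look at first is the token lifetime in keystone.conf. As a
rough sketch only (the exact value on our side is still to be decided):

    # /etc/keystone/keystone.conf
    [token]
    # default is 3600 seconds; a long snapshot upload to glance can outlive it
    expiration = 14400

i.e. make sure the token outlives the time it takes to upload the snapshot to
glance.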
Thanks again for the advice and help.
Grant
On 03/06/16 11:57, Saverio Proto wrote:
> Hello,
>
> what is the state of the instance before you request the snapshot? Is it
> running or paused?
>
> Check on the hypervisor when the snapshot starts if you see files in
> these folders:
>
> /var/lib/libvirt/qemu/save/
> /var/lib/nova/instances/snapshots/
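>
> (e.g. watch them while the snapshot runs with something like:
>   watch -n 1 'ls -lh /var/lib/nova/instances/snapshots/ /var/lib/libvirt/qemu/save/')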
>
> How is your glance implemented? Also with ceph rbd? Remember that a
> "nova snapshot" is a glance image.
>
> Saverio
>
>
>
> 2016-06-03 12:17 GMT+02:00 Grant <grant at absolutedevops.io>:
>
> Hi all,
>
>     I was wondering if someone could shed any light on an issue we are
>     seeing. We are running Kilo in our production environment, and when
>     we try to create a snapshot of one particular instance it gets stuck
>     in a "saving" state and never actually saves the image.
>
>     We are using a ceph back-end, and the user taking the snapshot is
>     able to snapshot all of their other instances; it is just this one
>     that is failing.
>
> Error log from the nova compute host below:
>
>     2016-06-02 17:13:48.594 52559 WARNING urllib3.connectionpool [req-8200a3b0-ad2a-406e-969e-c22762db3455 bb07f987fbae485c9e05f06fb0d422c2 a22e503869c34a92bceb66b0c1da7231 - - -] HttpConnectionPool is full, discarding connection: 10.5.0.205
>     2016-06-02 17:14:00.042 52559 ERROR nova.compute.manager [req-8200a3b0-ad2a-406e-969e-c22762db3455 bb07f987fbae485c9e05f06fb0d422c2 a22e503869c34a92bceb66b0c1da7231 - - -] [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] Error while trying to clean up image f9844dd5-5a92-4cd4-956d-8ad04cfc5e84
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] Traceback (most recent call last):
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 405, in decorated_function
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     self.image_api.delete(context, image_id)
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/nova/image/api.py", line 141, in delete
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     return session.delete(context, image_id)
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 410, in delete
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     self._client.call(context, 1, 'delete', image_id)
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 218, in call
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     return getattr(client.images, method)(*args, **kwargs)
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 255, in delete
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     resp, body = self.client.delete(url)
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 271, in delete
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     return self._request('DELETE', url, **kwargs)
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 227, in _request
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     raise exc.from_response(resp, resp.content)
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] HTTPUnauthorized: <html>
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]  <head>
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   <title>401 Unauthorized</title>
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]  </head>
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]  <body>
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   <h1>401 Unauthorized</h1>
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required.<br /><br />
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]  </body>
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] </html> (HTTP 401)
>     2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]
>     2016-06-02 17:14:00.173 52559 ERROR oslo_messaging.rpc.dispatcher [req-8200a3b0-ad2a-406e-969e-c22762db3455 bb07f987fbae485c9e05f06fb0d422c2 a22e503869c34a92bceb66b0c1da7231 - - -] Exception during message handling: Not authorized for image f9844dd5-5a92-4cd4-956d-8ad04cfc5e84.
>
> Any help will be appreciated.
>
> Regards,
>
> --
> Grant Morley
> Cloud Lead
> Absolute DevOps Ltd
> Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
>     www.absolutedevops.io
>     grant at absolutedevops.io  0845 874 0580
>
>
--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io
grant at absolutedevops.io  0845 874 0580