[Openstack-operators] [glance] Image enters "killed" state on upload

Kris G. Lindgren klindgren at godaddy.com
Tue Feb 2 23:34:07 UTC 2016


Not related to your issue, but something to keep an eye out for: you need to keep the uid for glance synced across your glance servers when using an NFSv3 store, since NFSv3 stores the uid and gid for file permissions. You can run into weird issues if glance is uid/gid 501 on one glance server and 502 on another. We had that problem crop up in production when packages were doing "useradd" without specifying a uid/gid, so you could end up with systems whose ids differ and whose permissions are all screwed up between multiple servers.
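A quick way to catch that is to compare each host's glance uid against one value pinned for the whole cluster. This is just a sketch; the uid 161 is an example value, not anything from this thread:

```shell
# Compare a host's actual glance uid against the cluster-wide pinned uid.
check_glance_uid() {
    # $1 = actual uid on this host, $2 = uid pinned for the cluster
    if [ "$1" = "$2" ]; then
        echo "OK: glance uid is $1"
    else
        echo "MISMATCH: glance uid is $1, expected $2"
    fi
}

# On each glance server you would run something like:
#   check_glance_uid "$(id -u glance)" 161
check_glance_uid 501 161
```

Creating the user with an explicit `useradd -u`/`groupadd -g` on every host avoids the problem in the first place.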

So, related to your question: if I remember correctly, under Linux you need read permission to list a directory's contents and execute permission to enter it.
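A minimal demonstration of that, safe to run anywhere since it uses a throwaway directory: `r` on a directory lets you list it and `x` lets you enter/traverse it, so a glance store directory generally wants at least `r-x` for the glance user.

```shell
# Mode 750: rwx for the owner, r-x for the group, nothing for others.
d=$(mktemp -d)
chmod 750 "$d"
stat -c '%a %A' "$d"    # prints: 750 drwxr-x---
rmdir "$d"
```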

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Liam Haworth <liam.haworth at bluereef.com.au>
Date: Tuesday, February 2, 2016 at 4:25 PM
To: Abel Lopez <alopgeek at gmail.com>
Cc: "openstack-operators at lists.openstack.org" <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] [glance] Image enters "killed" state on upload

Here is output from my system instead of my blabbering in a long-winded email:

root at ctrl1:~# uname -a
Linux ctrl1 3.19.0-43-generic #49~14.04.1-Ubuntu SMP Thu Dec 31 15:44:49 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

root at ctrl1:~# df -h
Filesystem                   Size  Used  Avail  Use%  Mounted on
udev                         7.9G  4.0K   7.9G    1%  /dev
tmpfs                        1.6G  724K   1.6G    1%  /run
/dev/mapper/ctrl1--vg-root   396G  6.4G   370G    2%  /
none                         4.0K     0   4.0K    0%  /sys/fs/cgroup
none                         5.0M     0   5.0M    0%  /run/lock
none                         7.9G     0   7.9G    0%  /run/shm
none                         100M     0   100M    0%  /run/user
/dev/sdc1                    236M   38M   186M   17%  /boot
10.16.16.30:/srv/glance      739G   97G   604G   14%  /var/lib/glance/images

And to save you from a mass of ls output: every file in /var/lib/glance/images is -rw-r----- 1 glance glance

No AppArmor installed or configured.

On Wed, 3 Feb 2016 at 10:17 Abel Lopez <alopgeek at gmail.com> wrote:
OK, with the file store, some of the silly things that crop up are directory permissions, disk space, and SELinux/AppArmor.

Make sure the glance user and group have ownership (recursively) of the /var/lib/glance directory, and make sure you're not low on space. If you have SELinux set to enforcing, test setting it to permissive (if that turns out to be the issue, resolve the contexts).
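A read-only sketch of those checks, using the store path from this thread (`getenforce` only exists where SELinux is installed; the fix commands are left as comments since they modify the system):

```shell
store=/var/lib/glance/images
# 1. Ownership: count files NOT owned by glance (want 0).
#    Fix, as root: chown -R glance:glance /var/lib/glance
find "$store" ! -user glance 2>/dev/null | wc -l
# 2. Free space on the store's filesystem.
df -h "$store" 2>/dev/null
# 3. SELinux mode, if present. To test permissive, as root: setenforce 0
if command -v getenforce >/dev/null 2>&1; then
    getenforce
else
    echo "no SELinux"
fi
```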

On Feb 2, 2016, at 3:13 PM, Liam Haworth <liam.haworth at bluereef.com.au> wrote:

Glance is configured to use file store to /var/lib/glance/images

On Wed, 3 Feb 2016 at 10:12 Abel Lopez <alopgeek at gmail.com> wrote:
I ran into a similar issue in Havana, but that was because we were doing some 'behind-the-scenes' modification of the image (format conversion). Once we stopped that, the issue went away.

What is your glance store configured as?

On Feb 2, 2016, at 3:05 PM, Liam Haworth <liam.haworth at bluereef.com.au> wrote:

Hey All,

This sounds like an old bug; I tried to google it, but nothing I found really seems to help. I'm trying to upload a 2.5GB QCOW2 image to glance for users to use. The upload goes fine, and in the glance registry logs I can see that it has successfully saved the image, but then it does this:

2016-02-03 09:51:49.607 2826 DEBUG glance.registry.api.v1.images [req-5ba18ea3-5777-4023-9f85-040aca48dfa7 --trunced-- - - -] Updating image 03a920ce-7979-4439-ab71-bc3dd34df3d3 with metadata: {u'status': u'killed'} update /usr/lib/python2.7/dist-packages/glance/registry/api/v1/images.py:470

What reasons are there for it to do this to an image that just uploaded successfully?

Thanks,

Liam Haworth.
--
Liam Haworth | Junior Software Engineer | www.bluereef.com.au
_________________________________________________________________
T: +61 3 9898 8000 | F: +61 3 9898 8055


_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





