[openstack-dev] [nova] [glance] How to deal with aborted image read?

Flavio Percoco flavio at redhat.com
Thu Jun 4 09:01:29 UTC 2015


On 03/06/15 16:46 -0600, Chris Friesen wrote:
>We recently ran into an issue where nova couldn't write an image file 
>due to lack of space and so just quit reading from glance.
>
>This caused glance to be stuck with an open file descriptor, which 
>meant that the image consumed space even after it was deleted.
>
>I have a crude fix for nova at 
>"https://review.openstack.org/#/c/188179/" which basically continues 
>to read the image even though it can't write it.  That seems less than 
>ideal for large images though.
>
>Is there a better way to do this?  Is there a way for nova to indicate 
>to glance that it's no longer interested in that image and glance can 
>close the file?
>
>If I've followed this correctly, on the glance side I think the code 
>in question is ultimately 
>glance_store._drivers.filesystem.ChunkedFile.__iter__().

Actually, to be honest, I was quite confused by the email :P

Correct me if I still didn't understand what you're asking.

You ran out of space on the Nova side while downloading the image and
there's a file descriptor leak somewhere either in that lovely (sarcasm)
glance wrapper or in glanceclient.

Just by reading your email and glancing at your patch, I believe the bug
might be in glanceclient, but I'd need to dive into this. The piece of
code you'll need to look into is [0].
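To make the idea concrete, here's a minimal sketch (names like
`download_image` and `write_chunk` are illustrative, not nova's actual
API) of the pattern I'd expect on the consumer side: instead of draining
the rest of a possibly huge image after a write failure, close the chunk
iterator so whatever is underneath (response, socket, file) gets
released:

```python
def download_image(body_iter, write_chunk):
    """Illustrative download loop: if writing fails (e.g. ENOSPC),
    close the chunk iterator so the producer side is told to release
    its resources, instead of reading the remaining chunks."""
    try:
        for chunk in body_iter:
            write_chunk(chunk)
    finally:
        close = getattr(body_iter, 'close', None)
        if close is not None:
            close()

# Simulated failure: the writer runs out of space after one chunk.
def chunks():
    try:
        for c in (b'a' * 8, b'b' * 8, b'c' * 8):
            yield c
    finally:
        chunks.closed = True  # records that the producer was told to stop

chunks.closed = False
written = []

def writer(chunk):
    if written:
        raise IOError("No space left on device")
    written.append(chunk)

it = chunks()
try:
    download_image(it, writer)
except IOError:
    pass

assert chunks.closed and written == [b'a' * 8]
```

Closing a generator raises GeneratorExit inside it, so its `finally`
block runs immediately - that's the hook the producer needs to free the
descriptor.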

glance_store is only used server side. If that's what you meant - that
glance itself is keeping the request and the ChunkedFile around - then
yes, glance_store is the place to look into.
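For reference, the server-side fix amounts to the same thing: the
chunked reader should close its descriptor in a `finally` block so that
abandoning the iterator mid-stream (which raises GeneratorExit inside
the generator) still releases the file. A minimal sketch, loosely
modeled on glance_store's filesystem ChunkedFile but not its actual
code:

```python
import os
import tempfile

class ChunkedFile(object):
    """Sketch of a chunked file reader. The ``finally`` block runs on
    normal exhaustion AND when the consumer abandons iteration early,
    so the descriptor never outlives the download."""
    CHUNKSIZE = 4  # tiny chunks for demonstration

    def __init__(self, path):
        self.fp = open(path, 'rb')

    def __iter__(self):
        try:
            while True:
                chunk = self.fp.read(self.CHUNKSIZE)
                if not chunk:
                    break
                yield chunk
        finally:
            # Reached via GeneratorExit on early abort as well.
            self.fp.close()

# Demonstrate that an early abort releases the descriptor.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'0123456789abcdef')

reader = ChunkedFile(tmp.name)
it = iter(reader)
next(it)      # consume one chunk, then stop reading
it.close()    # what an aborting consumer (or GC) triggers
assert reader.fp.closed
os.unlink(tmp.name)
```

If the descriptor only gets closed after the loop body, a consumer that
stops reading leaves it open - which matches the deleted-but-still-
consuming-space behaviour you're seeing.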

[0] https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/images.py#L152

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco
