[Openstack-operators] /var/lib/nova/instances fs filled up corrupting my Linux instances

Diego Parrilla Santamaría diego.parrilla.santamaria at gmail.com
Thu Mar 14 15:07:53 UTC 2013


On Thu, Mar 14, 2013 at 3:10 PM, Joe Topjian <joe.topjian at cybera.ca> wrote:

>
>
>
> On Thu, Mar 14, 2013 at 4:13 AM, Diego Parrilla Santamaría <
> diego.parrilla.santamaria at gmail.com> wrote:
>
>> Hi all,
>>
>> we use Razique's script on shared storage frequently. We recommend taking
>> a snapshot first, just in case.
>>
>> A few comments about what I have read in this thread:
>> - The script works for us and does not break anything on shared storage
>> (we modified it to perform a delete instead of writing to a file).
>> - I was really scared when I read about the migration problems from Essex
>> to Folsom and the new automatic clean-up code, so we set
>> remove_unused_base_images=false in nova.conf (see the sketch after this list).
>> - 650GB in the _base directory on shared storage is not so big... especially
>> if your users are taking snapshots.
>> - If you are using NetApp, Nexenta or other filers, this is the right
>> folder on which to enable deduplication. You will see massive savings.
>> - Snapshots can make your _base directory HUGE. We have sent a blueprint
>> about snapshotting and quotas to discuss at the next summit.
>> https://blueprints.launchpad.net/nova/+spec/snapshot-tenant-quotas
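>>
>> For reference, the image cache settings we mean look roughly like this in
>> nova.conf (a sketch only; double-check the option names and defaults
>> against your release's documentation):
>>
>>     # keep nova from deleting cached base images automatically
>>     remove_unused_base_images=false
>>     # if you do enable the cleanup, these control how old an unused
>>     # base image must be before it is eligible for removal
>>     # (values shown are the usual defaults)
>>     remove_unused_original_minimum_age_seconds=86400
>>     remove_unused_resized_minimum_age_seconds=3600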
>>
>
> +1 this. I have implemented this myself in Nova for now by utilizing Nova
> notifications and the existing quota & quota_usages table. It's very
> hackish and I'd definitely support an official solution.
>
>
We have a chargeback component that counts snapshots and charges them back to
the client, but Cloud Hosting solutions paying for reserved resources cannot
take advantage of the pay-per-use component we have.

Are you going to the Portland Summit? We would like to discuss this topic
there; at StackOps we can develop it with all your help and support, of
course.



>
>
>> - For one rather paranoid customer, we set use_cow_images to false (see the
>> snippet below).
>>
>> There is a great article by Pádraig Brady about the different options:
>> http://www.pixelbeat.org/docs/openstack_libvirt_images/
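>>
>> A minimal sketch of that setting in nova.conf (with use_cow_images=false
>> each instance gets a full copy of its image instead of a qcow2 overlay on
>> _base, trading disk space for fewer shared dependencies):
>>
>>     # disable qcow2 overlays; copy the full image for every instance
>>     use_cow_images=false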
>>
>> Enjoy!
>> Diego
>>
>>
>> --
>> Diego Parrilla
>> CEO, StackOps <http://www.stackops.com/>
>> www.stackops.com | diego.parrilla at stackops.com | +34 649 94 43 29 |
>> skype:diegoparrilla
>>
>>
>> On Thu, Mar 14, 2013 at 9:35 AM, Razique Mahroua <
>> razique.mahroua at gmail.com> wrote:
>>
>>> Hi,
>>> I wrote a script a while ago:
>>>
>>> https://github.com/Razique/BashStuff/blob/master/SYSTEMS/OpenStack/SCR_5008_V00_NUAC-OPENSTACK-Nova-compute-images-prunning.sh
>>> It looks for the images that have a backing file; the backing files that
>>> are not referenced by anything can be removed :)
>>> The script doesn't remove anything, it just tells you which base files can
>>> safely be removed. Given that the base files are not the ones the
>>> instances write to, you could also put the _base directory on shared
>>> storage - instances may spawn slower, but after that you shouldn't have
>>> any problem.
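>>>
>>> If you want to eyeball it first, something like this reproduces the check
>>> (a rough sketch, not the script itself; it assumes the default
>>> /var/lib/nova/instances layout and qcow2 disks):
>>>
>>>     #!/bin/bash
>>>     # collect the backing files the instance disks actually reference
>>>     INSTANCES=/var/lib/nova/instances
>>>     used=$(find "$INSTANCES" -maxdepth 2 -name 'disk*' \
>>>              -exec qemu-img info {} \; 2>/dev/null \
>>>            | sed -n 's/^backing file: \([^ ]*\).*/\1/p' | sort -u)
>>>     # anything in _base that no disk points at is a removal candidate
>>>     for f in "$INSTANCES"/_base/*; do
>>>         echo "$used" | grep -qxF "$f" || echo "candidate: $f"
>>>     done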
>>>
>>> regards,
>>>
>>>
>>> Razique Mahroua - Nuage & Co
>>> razique.mahroua at gmail.com
>>> Tel : +33 9 72 37 94 15
>>>
>>>
>>> On 13 March 2013, at 23:29, Michael Still <mikal at stillhq.com> wrote:
>>>
>>> On Wed, Mar 13, 2013 at 5:23 PM, Joe Topjian <joe.topjian at cybera.ca>
>>> wrote:
>>>
>>> On Wed, Mar 13, 2013 at 5:12 PM, Michael Still <mikal at stillhq.com>
>>> wrote:
>>>
>>> On Wed, Mar 13, 2013 at 4:42 PM, Joe Topjian <joe.topjian at cybera.ca>
>>> wrote:
>>>
>>> It would, yes, but I think your caveat trumps that idea. Having x nodes
>>> able to work with a shared _base directory is great for saving space and
>>> using images centrally. As an example, the _base directory of one of my
>>> OpenStack clouds is 650GB in size. It's currently shared via NFS. If it
>>> were not shared, or used a _base_$host scheme, that would be 650GB per
>>> compute node - 10 nodes and you're already at 6.5TB.
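>>>
>>> For context, the sharing is just the instances directory NFS-mounted on
>>> every compute node - a hypothetical fstab entry (server name and export
>>> path are placeholders) would look like:
>>>
>>>     filer:/export/nova/instances  /var/lib/nova/instances  nfs  defaults,noatime  0  0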
>>>
>>>
>>> Is that _base directory so large because it's never been cleaned up,
>>> though? What sort of maintenance are you performing on it?
>>>
>>>
>>> It's true that I haven't done any maintenance on _base. From my
>>> estimations, a cleanup wouldn't reclaim enough space to warrant actually
>>> doing one (basically, "benefit of disk space reclaimed" is not yet
>>> greater than "risk of accidentally corrupting x users' instances").
>>>
>>>
>>> What release of OpenStack are you running? I think you might get
>>> significant benefits from turning cleanup on, as long as you're using
>>> Grizzly [1]. I'd be very interested in the results of a lab test.
>>>
>>> Michael
>>>
>>> 1: Yes, I know it's not released yet, but if you found a bug now we
>>> could fix it before it hurts everyone else...
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Joe Topjian
> Systems Administrator
> Cybera Inc.
>
> www.cybera.ca
>
> Cybera is a not-for-profit organization that works to spur and support
> innovation, for the economic benefit of Alberta, through the use
> of cyberinfrastructure.
>
>
>

