[Openstack] Wiping of old cinder volumes

David Hill david.hill at ubisoft.com
Sat Nov 2 00:33:37 UTC 2013


Hello John,

	Well, if it has an impact on the other volumes that are still being used by
some other VMs, this is worse in my opinion, as it will degrade the service level
of the other VMs that need to get some work done.  If that space is not immediately
needed, we can take our time to delete it, or at least delay the deletion.  Or perhaps
the scheduler should try to delete the volumes when there's less activity on the storage
device (SAN, disks, etc.) and even throttle the rate at which the bytes are overwritten
with zeros.  The fact is that our internal cloud users can delete multiple volumes at
the same time and thus impact other users' VMs, which may or may not
be doing critical operations; sometimes Windows will even blue screen because
of the disk latency, and that is very bad.
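In the meantime, when we have to wipe something by hand, we can at least lower the
I/O priority of the zeroing process.  A minimal sketch (the device path is a
placeholder, and the idle I/O class only really helps with the CFQ scheduler):

    # run the zeroing at idle I/O priority and bypass the page cache
    # (/dev/cinder-volumes/volume-XXXX is a placeholder device path)
    ionice -c 3 dd if=/dev/zero of=/dev/cinder-volumes/volume-XXXX bs=1M oflag=direct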

Here are the answers to the alternatives:
1) I don't think we need secure delete, but I'm not the one who will make this call.
If I could, I would turn it off right away, as it would remove some stress from the
storage systems.

2) We're using Grizzly on CentOS 6.4, and OpenStack is handling the LVM side of things.
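For reference, if we do get the go-ahead to turn it off, my understanding is that the
LVM driver's wiping behaviour is controlled from cinder.conf.  A rough sketch, with
option names that should be double-checked against the Grizzly docs:

    # cinder.conf on the cinder-volume node (names to verify for Grizzly)
    [DEFAULT]
    # skip zeroing deleted volumes entirely
    volume_clear = none
    # ...or keep zeroing but only wipe the first N MiB of each volume
    # volume_clear_size = 100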

Thank you very much,

Dave



-----Original Message-----
From: John Griffith [mailto:john.griffith at solidfire.com] 
Sent: November-01-13 7:47 PM
To: David Hill
Cc: openstack at lists.openstack.org
Subject: Re: [Openstack] Wiping of old cinder volumes

On Fri, Nov 1, 2013 at 4:20 PM, David Hill <david.hill at ubisoft.com> wrote:
> Hi guys,
>
>
>
>                 I was wondering whether there was a better way of wiping the
> content of an old EBS volume before actually deleting the logical volume in
> cinder?  Or perhaps add the possibility to configure the
> number of parallel "dd" processes that will be spawned at the same time...
>
> Sometimes, users will simply try to get rid of their volumes ALL at the same
> time, and this puts a lot of pressure on the SAN servicing those
> volumes.  Since the hardware isn't replying fast enough, the processes then
> fall into D state and wait for I/Os to complete, which slows down
> everything.
>
> Since this process isn't (in my opinion) as critical as an EBS write or read,
> perhaps we should be able to throttle the speed of disk wiping or the number of
> parallel wipes to something that wouldn't affect the other reads/writes, which
> are most probably more critical.
>
>
>
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
Hi Dave,

To be honest, this has been an ongoing battle.  The idea of throttling
or spreading out the dd's is a good one, but the problem I ran into
there was that the delete operation can then take *even longer* than it
already does.  That makes some people rather unhappy, but I think we
need to take another look at the approach.  I ran into similar issues
today that drove me crazy while deleting a large number of volumes, so
I completely understand/agree with what you're saying.  It may actually
be best to go ahead and bite the bullet on the very long delete and
lower the priority as you suggest.

The other alternatives to consider are:
1. Do you actually need secure delete?  It can be disabled if the answer is no.
2. What platform are you on, and could you use thin-provisioned LVM?
        Thin LVM does some things internally that help us with the
security concerns around data leakage across tenants/volumes (see the
sketch below).
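
To expand on point 2, the idea with thin LVM is that deleted volumes just hand
their blocks back to the pool, and unprovisioned blocks in a new thin volume read
back as zeros, so there's no need to dd over the old data at all.  Roughly (sizes
and names are illustrative, and the exact Cinder driver/option to enable this
should be checked against your release):

    # create a thin pool inside the cinder volume group; thin volumes
    # carved from it return their blocks to the pool on delete, and new
    # allocations read back as zeros, so no explicit wipe is needed
    lvcreate --size 500G --thinpool cinder-thin-pool cinder-volumes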

Thanks,
John



