[Openstack] Wiping of old cinder volumes
Jeffrey Walton
noloader at gmail.com
Sat Nov 2 02:56:46 UTC 2013
On Fri, Nov 1, 2013 at 10:06 PM, David Hill <david.hill at ubisoft.com> wrote:
> Hello Jeff,
>
> I understand that, but does that mean it HAS to be done right away?
> I mean, performance for the rest of the VMs is sacrificed over a security concern
> (which is legitimate), and the wipe still has an impact on the remaining EBS
> volumes attached to other VMs. Is there no better way that could
> be implemented to deal with that? Or maybe some faster way? What
> if the LVM volume were kept a bit longer and deleted slowly but surely?
The folks on openstack-security are probably in a better position to
comment than me. Shooting from the hip, I think there are a couple of
ways to handle it.
The easiest would probably be to encrypt the cinder blocks, and then
securely wipe the key upon deletion of the VM. That would only take
one write of 32 bytes, which is less than a single disk sector.
The remaining [encrypted] data should be indistinguishable from random
because the ciphers satisfy the Pseudorandom Permutation (PRP) notion
of security [0]. Here, the suitable ciphers would be (1) AES/CBC (and
similar modes that are properly "chained") or (2) AES/CTR (and other
modes that generate a keystream and XOR it with the plaintext).
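Here's a rough Python sketch of the crypto-erase idea. It assumes the
pyca/cryptography package is available; the function names are mine and
purely illustrative, not anything cinder provides today:

    # Crypto-erase sketch: encrypt each block under a random per-volume
    # key, then "securely delete" the volume by destroying only the key.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def new_volume_key():
        # A random 256-bit AES key, generated when the volume is created.
        return bytearray(os.urandom(32))

    def encrypt_block(key, block_index, plaintext):
        # AES/CTR with a per-block counter: the high 8 bytes of the nonce
        # identify the block, the low 8 bytes count within the block, so
        # no two blocks ever reuse keystream.
        nonce = block_index.to_bytes(8, "big") + bytes(8)
        encryptor = Cipher(algorithms.AES(bytes(key)), modes.CTR(nonce)).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    def crypto_erase(key):
        # Deleting the volume is now one write of 32 bytes (the key),
        # not a pass over the whole LVM volume.
        for i in range(len(key)):
            key[i] = 0

In practice you would probably put a dm-crypt/LUKS layer under the LVM
volume rather than touch the I/O path from Python, but the effect is the
same: discard the key and the remaining ciphertext is unreadable.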
I think an auditor would accept a lazy eraser as long as the operation
is reasonably bounded, but I'm not certain. Sometimes the answer will
differ among auditors. So if you first get a NO, try a different
auditor :) In the case of the lazy eraser, model it like the zero page
writer for memory. On Windows, the thread runs at a low priority and
zeroizes pages in spare cycles. The thread is elevated to wipe dirty
pages whenever the memory manager cannot satisfy a request for a page
because the zero page list is empty [1]. I think something similar could
be done for storage blocks.
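A rough sketch of what that lazy eraser could look like (all names and
thresholds below are hypothetical, not existing cinder interfaces):

    # Lazy eraser modeled on the zero-page writer: deleted volumes are
    # queued and zeroed slowly in the background, and the wiper drops its
    # throttle when the pool of clean space runs low.
    import queue
    import threading
    import time

    class LazyEraser(threading.Thread):
        def __init__(self, chunk_size=4 * 1024 * 1024, idle_delay=0.5):
            super().__init__(daemon=True)
            self.pending = queue.Queue()       # deleted volumes awaiting a wipe
            self.pressure = threading.Event()  # set when clean space is needed now
            self.chunk_size = chunk_size
            self.idle_delay = idle_delay

        def submit(self, device_path, size_bytes):
            self.pending.put((device_path, size_bytes))

        def run(self):
            zeros = b"\x00" * self.chunk_size
            while True:
                device_path, size_bytes = self.pending.get()
                with open(device_path, "wb") as dev:
                    written = 0
                    while written < size_bytes:
                        chunk = zeros[:min(self.chunk_size, size_bytes - written)]
                        dev.write(chunk)
                        written += len(chunk)
                        # Throttle while idle; run flat out under pressure,
                        # like the elevated zero-page writer.
                        if not self.pressure.is_set():
                            time.sleep(self.idle_delay)

The volume driver would call submit() when a volume is deleted and set
the pressure event when it cannot allocate clean extents for a new
volume, just as the memory manager elevates the zeroing thread when the
zero page list is empty.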
If you are using SSDs for storage, then all bets are off due to wear
leveling. I know how iOS handles it on their iDevices (a keybag in
effaceable storage is wiped [2]), but I'm not sure how it's handled in
a general manner. And I know an alarming number of drive
manufacturers did not deliver on their promises regarding secure
deletion, according to a USENIX paper [3].
Jeff
[0] Boneh, https://crypto.stanford.edu/~dabo/cs255/lectures/PRP-PRF.pdf
[1] Russinovich,
http://blogs.msdn.com/b/tims/archive/2010/10/29/pdc10-mysteries-of-windows-memory-management-revealed-part-two.aspx
[2] Bédrune and Sigwald,
http://esec-lab.sogeti.com/dotclear/public/publications/11-hitbamsterdam-iphonedataprotection.pdf
[3] Wei, Grupp, Spada, Swanson,
https://www.usenix.org/legacy/event/fast11/tech/full_papers/Wei.pdf
> -----Original Message-----
> From: Jeffrey Walton [mailto:noloader at gmail.com]
> Sent: November-01-13 9:21 PM
> To: David Hill
> Cc: openstack at lists.openstack.org
> Subject: Re: [Openstack] Wiping of old cinder volumes
>
> On Fri, Nov 1, 2013 at 8:33 PM, David Hill <david.hill at ubisoft.com> wrote:
>> Hello John,
>>
>> Well, if it has an impact on the other volumes that are still being used by
>> some other VMs, this is worse in my opinion as it will degrade the service level
>> of the other VMs that need to get some work done. If that space is not immediately
>> needed we can take our time to delete it or at least delay the deletion. Or perhaps
>> the scheduler should try to delete the volumes when there's less activity on the storage
>> device (SAN, disks, etc.) and even throttle the rate at which the bytes are overwritten
>> with zeros. The fact is that our internal cloud users can delete multiple volumes at
>> the same time and thus have an impact on other users' VMs that may or may not
>> be doing critical operations, and sometimes Windows may even blue screen because
>> of the disk latency, which is very bad.
>>
>> Here are the answers to the alternatives:
>> 1) I don't think we need secure delete, but I'm not the one who will make this call.
>> If I could, I would turn it off right away as it would remove some stress from the storage
>> systems.
> For some folks, this can be a compliance problem. If an organization
> is using a cloud provider, then it could be a governance issue too.
> See, for example, NIST Special Publication 800-63-1 and the
> discussions surrounding zeroization.