[openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.
alan.kavanagh at ericsson.com
Fri Jan 17 04:28:03 UTC 2014
From: CARVER, PAUL [mailto:pc2929 at att.com]
Sent: January-16-14 8:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.
Alan Kavanagh wrote:
>I posted a query to Ironic which is related to this discussion. My thinking was that I want to ensure the case you note here in (1), "a tenant can not read another tenant's disk...". The next (2) was where in Ironic you provision a baremetal server that has an onboard disk as part of the blade provisioned to a given tenant-A. Then, when tenant-A finishes his baremetal blade lease and that blade comes back into the pool and tenant-B comes along, I was asking what open source tools guarantee data destruction so that no ghost images or file retrieval is possible?
That is an excellent point. I think the needs of Ironic may be different from Cinder's. As a volume manager, Cinder isn't actually putting the raw disk under the control of a tenant. If it can be assured (as is the case with NetApp and other storage vendor hardware) that a "fake" all-zeros result is returned on a read-before-first-write of a chunk of disk space, then that's sufficient to address the case of some curious ne'er-do-well allocating volumes purely for the purpose of reading them to see what's left on them.
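For context, Cinder's reference LVM driver already exposes a knob for this on the volume-manager side. If memory serves (please check the configuration reference for your release), the relevant cinder.conf options look roughly like:

```ini
# cinder.conf -- LVM driver volume-wipe behaviour (sketch; verify
# option names and defaults against your release's config reference)
[DEFAULT]
# What to do with a volume's blocks when it is deleted:
#   zero  - overwrite with zeros (e.g. via dd)
#   shred - multi-pass overwrite
#   none  - no wiping (the wipe=none case discussed below)
volume_clear = zero

# How many MiB to clear on delete; 0 means the entire volume.
volume_clear_size = 0
```

So the "how much wiping, if any" question already has a per-deployment answer for LVM-backed Cinder; the open question in this thread is the equivalent for whole physical disks in Ironic.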
>> exactly, that was my thinking too. My main concern is to ensure that no ghost file and no way for another tenant to retrieve any data stored from a previous tenant.
But with bare metal the whole physical disk is at the mercy of the tenant, so you're right that it must be ensured that none of the previous tenant's bits are left lying around to be snooped on.
>> Fully agree Paul. What I was thinking was that when the tenant's baremetal node lease has expired and the blade is to be brought back into the pool for scheduling to other tenants, we should run a "disk eraser" before making the blade available; Ironic would run the "disk eraser" and validate the result before taking the blade back into the pool. If folks think this is a good idea, I will write up a blueprint on this for Ironic.
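A minimal sketch of what such an "erase then validate" step might look like, assuming GNU coreutils on Linux. The device path, pass count, and validation approach are all illustrative; a regular file stands in for the node's block device so the sketch is safe to run:

```shell
#!/bin/sh
# Hypothetical single-pass "disk eraser" step for an Ironic teardown.
# DEV would normally be the node's disk (e.g. /dev/sda); a regular
# file stands in here so this sketch can be run without risk.
set -e
DEV=./fake-disk.img

# Simulate a disk holding leftover tenant data.
dd if=/dev/urandom of="$DEV" bs=1M count=4 2>/dev/null

# Erase: overwrite the whole device with zeros.
SIZE=$(wc -c < "$DEV")
dd if=/dev/zero of="$DEV" bs=1M count=$((SIZE / 1048576)) conv=notrunc 2>/dev/null

# Validate before returning the node to the pool: every byte must read
# back as zero (cmp -n is the GNU coreutils byte-limit option).
if cmp -s -n "$SIZE" "$DEV" /dev/zero; then
    echo "wipe verified"
else
    echo "wipe FAILED" >&2
    exit 1
fi
rm -f "$DEV"
```

In a real blueprint the validation step matters as much as the wipe itself: scheduling the blade back into the pool should be gated on the verification succeeding, not just on dd exiting cleanly.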
But I still think an *option* of wipe=none may be desirable because a cautious client might well take it into their own hands to wipe the disk before releasing it (and perhaps encrypt as well). In which case always doing an additional wipe is going to be more disk I/O for no real benefit.
>> I hear you on this one, and most clients who go for a baremetal service are not novices. However, it is also good not to take the chance, and to ensure all data is wiped before taking the blade back into service. At least that is what I would do, and we typically do that with our laptops and PCs that we move/lease around ;-)
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
More information about the OpenStack-dev mailing list