[openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.
Jay S Bryant
jsbryant at us.ibm.com
Thu Jan 16 00:30:05 UTC 2014
There is already an option that can be set in cinder.conf:
'volume_clear=none'
Is there a reason that option is not sufficient?
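For reference, the relevant cinder.conf stanza would look roughly like this
(a sketch; 'zero' and 'shred' were the documented alternatives around this
time, and volume_clear_size may vary by release):

    [DEFAULT]
    # Skip wiping deleted volumes entirely (fastest, least secure).
    volume_clear = none
    # Alternatives: zero (overwrite from /dev/zero), shred (multi-pass).
    # Optionally wipe only the first N MiB instead of the whole volume:
    # volume_clear_size = 0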
Jay S. Bryant
IBM Cinder Subject Matter Expert & Cinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail: jsbryant at us.ibm.com
--------------------------------------------------------------------
All the world's a stage and most of us are desperately unrehearsed.
-- Sean O'Casey
--------------------------------------------------------------------
From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>,
Date: 01/15/2014 06:06 PM
Subject: Re: [openstack-dev] Proposal for dd disk i/o performance
blueprint of cinder.
What about a configuration option on the volume for delete type? I can see
some possible options:
* None - Don't clear on delete. It's junk data for testing and I don't want
to wait.
* Zero - Return zeros from subsequent reads, either by zeroing on delete
or by faking zero reads initially.
* Random - Write random data to the disk.
* Multipass - Clear out the space in the most secure mode configured.
Multiple passes and such.
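A minimal sketch of how a driver might dispatch on such a per-volume
setting (the function and mode names are hypothetical, not Cinder's actual
code; dd, shred, and blockdev are standard Linux tools):

    import subprocess

    def _dev_size_mb(dev_path):
        # Device size in MiB via blockdev(8); assumes a Linux block device.
        out = subprocess.check_output(['blockdev', '--getsize64', dev_path])
        return int(out) // (1024 * 1024)

    def clear_volume(dev_path, mode):
        """Wipe the block device at dev_path per the requested mode."""
        if mode == 'none':
            return  # leave old bits in place; the tenant accepted the risk
        count = _dev_size_mb(dev_path)
        if mode == 'zero':
            # Single pass of zeros, bypassing the page cache.
            subprocess.check_call(['dd', 'if=/dev/zero', 'of=' + dev_path,
                                   'bs=1M', 'count=%d' % count,
                                   'oflag=direct'])
        elif mode == 'random':
            # Single pass of pseudo-random data.
            subprocess.check_call(['dd', 'if=/dev/urandom',
                                   'of=' + dev_path,
                                   'bs=1M', 'count=%d' % count])
        elif mode == 'multipass':
            # Multiple overwrite passes via shred(1).
            subprocess.check_call(['shred', '-n', '3', dev_path])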
Kevin
________________________________________
From: CARVER, PAUL [pc2929 at att.com]
Sent: Wednesday, January 15, 2014 2:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Proposal for dd disk i/o performance
blueprint of cinder.
Chris Friesen [mailto:chris.friesen at windriver.com] wrote:
>I read a proposal about using thinly-provisioned logical volumes as a
>way around the cost of wiping the disks, since they zero-fill on demand
>rather than incur the cost at deletion time.
I think it makes a difference where the requirement for deletion is coming
from.
If it's just to make sure that a tenant can't read another tenant's disk,
then what you're talking about should work. It sounds similar (or perhaps
identical) to how NetApp (and, I assume, others) works: tracking whether
the current client has written to the volume and returning zeros, rather
than the actual contents of the disk sector, on any read that precedes the
first write to that sector.
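The thin-provisioned LVM approach Chris mentioned gives the same
read-zeros-before-first-write behavior at the volume layer; roughly (the
volume group and LV names here are illustrative):

    # Create a thin pool, then a thin volume inside it.
    lvcreate -L 100G -T cinder-volumes/thinpool
    lvcreate -V 10G -T cinder-volumes/thinpool -n volume-1234
    # Reads of never-written extents in volume-1234 return zeros, so no
    # wipe pass is needed at delete time to protect the next tenant.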
However, in that case the previous client's bits are still on the disk. If
they were unencrypted, then they're still available to someone who somehow
got hold of the physical disk out of the storage array.
That may not be acceptable depending on the tenant's security
requirements.
Though one may reasonably ask why they were writing unencrypted bits to
a disk that they didn't have physical control over.
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev