[openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

Fox, Kevin M Kevin.Fox at pnnl.gov
Thu Jan 16 01:08:31 UTC 2014


That option is too coarse. For some VMs, for testing for example, I really don't need it clearing the data. For some of my tests I was creating 6 VMs, each with four 40 GB volumes, all on one test machine, and wiping was adding several minutes to each stack create/delete cycle while I was debugging the Heat templates. At the same time, there are other volumes that I really do want cleared, just not the test ones. I'm guessing it's something the Solum project might run into too: you may not need to wipe a test deployment but really want to wipe a production deployment, and both may be in the same tenant.
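
(Rough arithmetic, assuming dd sustains something like 150 MB/s of sequential writes on that box: 40 GB / 150 MB/s is roughly 4-5 minutes per volume, and with 24 such volumes on one machine the wipes are all competing for the same disks.)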

Thanks,
Kevin


________________________________
From: Jay S Bryant [jsbryant at us.ibm.com]
Sent: Wednesday, January 15, 2014 4:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

There is already an option that can be set in cinder.conf: 'volume_clear=none'.
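
For reference, the relevant knobs look roughly like this (option names as they exist in the LVM driver; check your release's documentation for the exact set and defaults):

    # cinder.conf
    [DEFAULT]
    volume_clear = none        # skip wiping deleted volumes entirely
    # volume_clear = zero      # overwrite with zeros (the default)
    # volume_clear = shred     # multi-pass overwrite
    # volume_clear_size = 0    # MiB to wipe; 0 means the whole volume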

Is there a reason that that option is not sufficient?


Jay S. Bryant
       IBM Cinder Subject Matter Expert  &  Cinder Core Member

Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbryant at us.ibm.com
--------------------------------------------------------------------
All the world's a stage and most of us are desperately unrehearsed.
                  -- Sean O'Casey
--------------------------------------------------------------------



From:        "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
To:        "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>,
Date:        01/15/2014 06:06 PM
Subject:        Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.
________________________________



What about a configuration option on the volume for delete type? I can see some possible options (rough usage sketch after the list):

* None - Don't clear on delete. It's junk data for testing and I don't want to wait.
* Zero - Return zeros on subsequent reads, either by zeroing on delete or by faking zero reads until a block is first written.
* Random - Overwrite the disk with random data.
* Multipass - Clear out the space in the most secure mode configured. Multiple passes and such.
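
Something like this is what I have in mind from the user side (purely hypothetical -- 'clear_on_delete' is not a real metadata key or extra spec today, it's only meant to illustrate a per-volume policy):

    # throwaway test volume, skip the wipe
    cinder create --metadata clear_on_delete=none --display-name scratch 40

    # production volume, force the most paranoid wipe the backend supports
    cinder create --metadata clear_on_delete=multipass --display-name prod-data 40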

Kevin
________________________________________
From: CARVER, PAUL [pc2929 at att.com]
Sent: Wednesday, January 15, 2014 2:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

Chris Friesen [mailto:chris.friesen at windriver.com] wrote:

>I read a proposal about using thinly-provisioned logical volumes as a
>way around the cost of wiping the disks, since they zero-fill on demand
>rather than incur the cost at deletion time.

I think it makes a difference where the requirement for deletion is coming from.

If it's just to make sure that a tenant can't read another tenant's disk, then what
you're talking about should work. It sounds similar (or perhaps identical) to how
NetApp (and I assume others) work: they track whether the current client has
written to the volume and return zeros, rather than the actual contents of the
disk sector, on any read that precedes the first write to that sector.
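
(For concreteness, the thinly-provisioned LVM setup Chris describes gives the same behavior; a rough sketch, with placeholder pool and volume names, is below. Unwritten blocks in a thin LV read back as zeros, so no wipe is needed to keep one tenant's old bits away from the next.)

    # create a 100G thin pool, then a 40G thin volume backed by it
    lvcreate -L 100G -T cinder-volumes/thinpool
    lvcreate -V 40G -T cinder-volumes/thinpool -n volume-test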

However, in that case the previous client's bits are still on the disk. If they were
unencrypted, then they're still available to anyone who somehow got hold of the
physical disk out of the storage array.

That may not be acceptable depending on the tenant's security requirements.

Though one may reasonably ask why they were writing unencrypted bits to
a disk that they didn't have physical control over.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



