[openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

Preston L. Bannister preston at bannister.us
Thu Oct 23 07:30:03 UTC 2014


John,

As a (new) OpenStack developer, I just discovered the
"CINDER_SECURE_DELETE" option.

As an *implicit* default, I entirely approve.  Production OpenStack
installations should *absolutely* ensure there is no information leakage
from one instance to the next.

As an *explicit* default, I am not so sure. Low-end storage requires that
you do this explicitly. High-end storage can ensure information never leaks.
Counting on the storage layer for this can make the upper levels more
efficient, which can be a good thing.

The debate about whether to wipe LVs depends heavily on the intelligence of
the underlying store. If the lower-level storage never returns accidental
information ... explicit zeroes are not needed.
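
As a rough illustration, a minimal sketch of that check: read the first
region of a freshly created, never-written volume and confirm it comes back
as zeros (the device path and sample size here are only examples):

    # Sketch: verify that an unwritten volume reads back as zeros.
    # DEV is a hypothetical device path for the freshly created volume.
    import sys

    DEV = "/dev/vdb"
    CHUNK = 1024 * 1024          # read 1 MiB at a time
    SAMPLE = 64 * CHUNK          # sample the first 64 MiB

    def reads_as_zeros(path, length):
        with open(path, "rb") as dev:
            remaining = length
            while remaining > 0:
                data = dev.read(min(CHUNK, remaining))
                if not data:
                    break                              # end of device
                if data.count(b"\x00") != len(data):
                    return False                       # non-zero data found
                remaining -= len(data)
        return True

    if __name__ == "__main__":
        ok = reads_as_zeros(DEV, SAMPLE)
        print("clean" if ok else "non-zero data found")
        sys.exit(0 if ok else 1)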



On Wed, Oct 22, 2014 at 11:15 PM, John Griffith <john.griffith8 at gmail.com>
wrote:

>
>
> On Tue, Oct 21, 2014 at 9:17 AM, Duncan Thomas <duncan.thomas at gmail.com>
> wrote:
>
>> For LVM-thin I believe it is already disabled? It is only really
>> needed on LVM-thick, which does not return zeros for unwritten blocks.
>>
>> On 21 October 2014 08:29, Avishay Traeger <avishay at stratoscale.com>
>> wrote:
>> > I would say that wipe-on-delete is not necessary in most deployments.
>> >
>> > Most storage backends exhibit the following behavior:
>> > 1. Delete volume A that has data on physical sectors 1-10
>> > 2. Create new volume B
>> > 3. Read from volume B before writing, which happens to map to physical
>> > sector 5 - backend should return zeroes here, and not data from volume A
>> >
>> > In case the backend doesn't provide this rather standard behavior, data
>> > must be wiped immediately.  Otherwise, the only risk is physical
>> > security, and if that's not adequate, customers shouldn't be storing
>> > all their data there regardless.  You could also run a periodic job to
>> > wipe deleted volumes to reduce the window of vulnerability, without
>> > making delete_volume take a ridiculously long time.
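
One way such a periodic job could look for an LVM backend - purely a sketch,
with the volume group and temporary LV names assumed - is to grab the free
extents into a temporary LV, zero them, and release them again:

    # Sketch of a periodic free-space wipe for an LVM-backed Cinder node.
    # VG and TMP_LV are assumptions for illustration only.
    import subprocess

    VG = "cinder-volumes"
    TMP_LV = "wipe-free-space"
    TMP_DEV = "/dev/%s/%s" % (VG, TMP_LV)

    def wipe_free_space():
        # Claim every currently free extent in a temporary LV.
        subprocess.check_call(
            ["lvcreate", "-l", "100%FREE", "-n", TMP_LV, VG])
        try:
            # Overwrite it with zeros; dd exits non-zero when the LV is
            # full, which is expected here, so the status is ignored.
            subprocess.call(["dd", "if=/dev/zero", "of=" + TMP_DEV,
                             "bs=1M", "oflag=direct"])
        finally:
            # Hand the (now zeroed) extents back to the free pool.
            subprocess.check_call(["lvremove", "-f", TMP_DEV])

    if __name__ == "__main__":
        wipe_free_space()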
>> >
>> > Encryption is a good option as well, and of course it protects the data
>> > before deletion as well (as long as your keys are protected...)
>> >
>> > Bottom line - I too think the default in devstack should be to disable
>> > this option, and think we should consider making the default False in
>> > Cinder itself.  This isn't the first time someone has asked why volume
>> > deletion takes 20 minutes...
>> >
>> > As for queuing backup operations and managing bandwidth for various
>> > operations, ideally this would be done with a holistic view, so that for
>> > example Cinder operations won't interfere with Nova, or different Nova
>> > operations won't interfere with each other, but that is probably far
>> > down the road.
>> >
>> > Thanks,
>> > Avishay
>> >
>> >
>> > On Tue, Oct 21, 2014 at 9:16 AM, Chris Friesen <
>> chris.friesen at windriver.com>
>> > wrote:
>> >>
>> >> On 10/19/2014 09:33 AM, Avishay Traeger wrote:
>> >>>
>> >>> Hi Preston,
>> >>> Replies to some of your cinder-related questions:
>> >>> 1. Creating a snapshot isn't usually an I/O-intensive operation.  Are
>> >>> you seeing an I/O spike or a CPU spike?  If it's CPU, I've seen the
>> >>> CPU usage of cinder-api spike sometimes - not sure why.
>> >>> 2. The 'dd' processes that you see are Cinder wiping the volumes
>> >>> during deletion.  You can either disable this in cinder.conf, or you
>> >>> can use a relatively new option to manage the bandwidth used for this.
>> >>>
>> >>> IMHO, deployments should be optimized to avoid very long/intensive
>> >>> management operations - for example, use backends with efficient
>> >>> snapshots, use CoW operations wherever possible rather than copying
>> >>> full volumes/images, disable wipe-on-delete, etc.
>> >>
>> >>
>> >> In a public-cloud environment I don't think it's reasonable to disable
>> >> wipe-on-delete.
>> >>
>> >> Arguably it would be better to use encryption instead of
>> >> wipe-on-delete.  When done with the backing store, just throw away the
>> >> key and it'll be secure enough for most purposes.
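
The idea here is essentially crypto-erase: if each volume is encrypted under
its own key, discarding the key is equivalent to wiping the data.  A toy
sketch of the principle (using the 'cryptography' package, not the actual
dm-crypt based volume encryption used by Nova/Cinder):

    # Toy illustration of crypto-erase; not how OpenStack implements it.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # per-volume key, held in a key manager
    on_disk = Fernet(key).encrypt(b"tenant data written to the volume")

    # "Deleting" the volume means destroying the key; the ciphertext left
    # on the backend can no longer be decrypted by anyone.
    key = None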
>> >>
>> >> Chris
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> >
>>
>>
>>
>> --
>> Duncan Thomas
>>
>>
>
> We disable this in the gates with "CINDER_SECURE_DELETE=False"
>
> ThinLVM (which hopefully will be the default upon release of Kilo) doesn't
> need it, because internally it returns zeros when reading unallocated
> blocks, so it's a non-issue.
>
> The debate over whether or not to wipe LVs is a long-running one.  The
> default behavior in Cinder is to leave it enabled, and IMHO that's how it
> should stay.  The fact is, anything that might be construed as "less
> secure" and has been defaulted to the "more secure" setting should be left
> as it is.  It's simple to turn this off.
>
> Also, nobody seems to have mentioned that for Cinder operations like
> copy-volume and delete, you also have the ability to set bandwidth limits,
> and in the case of delete you can even specify different schemes (not just
> enabled/disabled, but other options that may be more or less I/O
> intensive).
>
> For further reference, check out the config options [1].
>
> Thanks,
> John
>
> [1]:
> https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L69
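
For readers who want the concrete knobs, the options being pointed at look
roughly like this in cinder.conf (option names from around that era; check
the linked driver.py for the authoritative list, and treat the values below
as examples only, not recommendations):

    [DEFAULT]
    # Method used to wipe LVM volumes on delete: zero, shred, or none.
    volume_clear = zero
    # Wipe only the first N MiB of each volume (0 means wipe everything).
    volume_clear_size = 0
    # ionice priority for the wipe process, e.g. the idle class.
    volume_clear_ionice = -c3
    # Throttle for volume copy/clear bandwidth (0 means unlimited).
    volume_copy_bps_limit = 0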
>
>
>