[Openstack] Wiping of old cinder volumes
Robert Collins
robertc at robertcollins.net
Sat Nov 2 06:35:02 UTC 2013
On 2 November 2013 12:46, John Griffith <john.griffith at solidfire.com> wrote:
> To be honest, this has been an ongoing battle. The idea of throttling
> or spreading the dd's is a good one, but the problem I had here was
> that the delete operation can then take *even longer* than it does
> already. That makes some people rather unhappy, but I think we need
> to take another look at the approach; I ran into similar issues today
> that drove me crazy deleting a large number of volumes, so I
> completely understand/agree with what you're saying. It may actually
> be best to go ahead and bite the bullet on the very long delete and
> lower the priority as you suggest.
>
> The other alternatives to consider are:
> 1. Do you need secure delete? It can be disabled if the answer is no.
> 2. What platform are you on, and could you use thin-provisioned LVM?
>    LVM does some things internally here to help us with the
>    security concerns around data leakage across tenants/volumes.
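
For concreteness, those alternatives (and the lower-priority wipe) map
to roughly the following cinder.conf knobs for the LVM driver; the
option names below are from memory and may vary by release, so treat
this as a sketch rather than a reference:

    [DEFAULT]
    # 1. Skip or bound the dd-based wipe (only if the data-leakage
    #    risk across tenants is acceptable to you)
    volume_clear = none            # or 'zero' / 'shred'
    # volume_clear_size = 100      # if clearing, wipe only the first 100 MiB
    # volume_clear_ionice = -c3    # if supported, run the wipe at idle IO priority
    # 2. Thin-provisioned LVM: unallocated extents are never handed
    #    to another tenant, so a full wipe is not needed
    lvm_type = thin

Note that volume_clear = none trades the wipe cost for accepting the
leakage concern John mentions above.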
You've probably thought of this, but:
- There seems to be a trivial DoS for non-thin-provisioned setups
(unless a dirty blockmap is kept); roughly (see the sketch after this
list):
  - while True:
    - request the largest available volume
    - spin up a VM and write a few blocks (defeats trivial
      'ever-used' tests)
    - delete it
- It seems to me that IO should be subject to a quota in the same way
networking can be; otherwise, in any environment, nothing prevents
folk from pushing IO to the wall and causing massive latency for
applications with smaller access patterns but tighter latency
sensitivity [like DBs :)].
- If the IO involved in deleting volumes were accrued to the same
quota, you'd have protection against both the DoS and the case where
many volumes in a big Heat cluster are deleted all at once (and other
similar triggers for this situation).
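
For what it's worth, here is a rough, untested sketch of that loop
using python-cinderclient, just to show how cheap it is for the
attacker. The client version, credentials and the attach-and-write
step are placeholders, and error/status handling is omitted:

    import time
    from cinderclient.v2 import client as cinder_client  # client version assumed

    # Placeholder credentials, not a real deployment.
    USER, PASSWORD, TENANT = 'demo', 'secret', 'demo'
    AUTH_URL = 'http://keystone:5000/v2.0'
    c = cinder_client.Client(USER, PASSWORD, TENANT, AUTH_URL)

    while True:
        size = c.quotas.get(TENANT).gigabytes     # upper bound on what we may request
        vol = c.volumes.create(size=size)         # grab the largest volume the quota allows
        while c.volumes.get(vol.id).status != 'available':
            time.sleep(5)                         # wait for the volume to come online
        # ... attach to a VM here and write a few blocks, so a naive
        #     "was this volume ever written?" check cannot skip the wipe ...
        c.volumes.delete(vol)                     # backend now owes a full-size dd wipe

Each pass costs the requester almost nothing, while the backend is
left zeroing the whole allocation every time; charging that wipe IO
against the same quota would make the loop self-limiting.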
-Rob
--
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud