[openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

Clint Byrum clint at fewbar.com
Thu Jan 16 22:28:21 UTC 2014


Excerpts from CARVER, PAUL's message of 2014-01-16 05:21:24 -0800:
> 
> Alan Kavanagh wrote: 
> 
> > I posted a query to Ironic which is related to this discussion. My thinking
> > was that I want to ensure the case you note here: (1) "a tenant cannot read
> > another tenant's disk..."; the next (2) was where in Ironic you provision a
> > bare metal server that has an onboard disk as part of the blade provisioned
> > to a given tenant-A. When tenant-A finishes his bare metal blade lease and
> > that blade comes back into the pool and tenant-B comes along, I was asking
> > what open source tools guarantee data destruction so that no ghost images
> > or file retrieval is possible?
> 
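For reference, the usual open source building blocks for data destruction
are dd(1) and shred(1) for spinning disks, and blkdiscard(8) or ATA Secure
Erase for SSDs. Below is a minimal Python sketch of a dd-style zero pass,
along the lines of what Cinder's LVM driver does when wiping volumes; the
device path and size are hypothetical, and real code would need error
handling and verification:

    import subprocess

    def zero_device(dev, size_mb):
        # Overwrite the first size_mb MiB of a block device with zeros,
        # dd-style. oflag=direct bypasses the page cache so the writes
        # actually reach the disk.
        subprocess.check_call([
            "dd", "if=/dev/zero", "of=%s" % dev,
            "bs=1M", "count=%d" % size_mb, "oflag=direct",
        ])

    # e.g. zero_device("/dev/mapper/tenant--a--vol", 10240)  # 10 GiB
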
> That is an excellent point. I think the needs of Ironic may be different
> from Cinder's. As a volume manager, Cinder isn't actually putting the raw
> disk under the control of a tenant. If it can be assured (as is the case
> with NetApp and other storage vendor hardware) that "fake" all zeros are
> returned on a read-before-first-write of a chunk of disk space, then that's
> sufficient to address the case of some curious ne'er-do-well allocating
> volumes purely for the purpose of reading them to see what's left on them.
> 
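That read-before-first-write behaviour is easy to demonstrate with a sparse
file, which is the filesystem analogue of what a thin-provisioned array
does; a small Python illustration (file name arbitrary):

    import os

    # Create a sparse "volume": 1 MiB of logical size, no data blocks
    # allocated on disk yet.
    fd = os.open("volume.img", os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, 1024 * 1024)

    # Reading a region that has never been written returns zeros, not
    # whatever a previous owner of the underlying blocks left behind.
    assert os.read(fd, 4096) == b"\x00" * 4096
    os.close(fd)
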
> But with bare metal the whole physical disk is at the mercy of the tenant,
> so you're right that it must be ensured that none of the previous tenant's
> bits are left lying around to be snooped on.
> 
> But I still think an *option* of wipe=none may be desirable, because a
> cautious client might well take it into their own hands to wipe the disk
> before releasing it (and perhaps encrypt it as well), in which case always
> doing an additional wipe is just more disk I/O for no real benefit.
> 
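For what it's worth, Cinder's LVM driver already exposes a knob along these
lines in cinder.conf (option names as of this era; check the docs before
relying on them):

    [DEFAULT]
    # What to do with a deleted volume's data before its blocks are reused:
    #   zero  - overwrite with zeros via dd (the default)
    #   shred - multi-pass overwrite
    #   none  - skip wiping entirely
    volume_clear = none
    # How many MiB to clear; 0 means the whole volume.
    volume_clear_size = 0
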

I am of a mind that multi-tenant hardware is something that needs to be
designed at the hardware level. I am not aware of anyone doing that.

For hardware that exists today, we have virtualization to protect the
hardware from its users. A single VM that takes all of a host's resources
will not see most of the contention that leads to virtualization
performance problems, and has the benefit of not requiring any sort of
process to return the hardware to the pool.
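As a sketch of that model: define a flavor sized to an entire compute host,
so that scheduling one such instance effectively hands the whole box to a
single tenant. The sizes and credentials below are hypothetical, using the
python-novaclient API of this era:

    from novaclient.v1_1 import client

    # Placeholder credentials.
    nova = client.Client("admin", "secret", "admin",
                         "http://keystone.example.com:5000/v2.0")

    # A flavor that consumes a whole host: 128 GiB RAM, 32 vCPUs, 2 TB disk.
    nova.flavors.create("whole-host", ram=131072, vcpus=32, disk=2000)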

For the topic at hand, said VM will only have access to volumes in the
same way as any other VM, and thus the problem stays solved for
multi-tenant hardware if we keep that restriction.


