[openstack-dev] [Glance] Recall for previous iscsi backend BP
henry4hly at gmail.com
Wed Nov 19 08:40:16 UTC 2014
Thanks for the information about the Cinder store. Yet I have a small
concern about the Cinder backend: suppose Cinder and Glance both use
Ceph as their store. If Cinder can then do an instant copy to Glance
via Ceph clone (maybe not now, but some time later), what information
would be stored in Glance? Obviously the volume UUID is not a good
choice, because once the volume is deleted the image can no longer be
referenced. The better choice is for the cloned Ceph object URI to
also be stored in the Glance location, letting both Glance and Cinder
see the "backend store details".
However, although this really makes sense for an all-in-one store like
Ceph, I'm not sure the iscsi backend can be used the same way.
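To make the "store the cloned Ceph object URI, not the volume UUID" idea concrete: Glance's existing RBD store records locations of the form rbd://<fsid>/<pool>/<image>/<snapshot>. A minimal sketch of composing such a location URI (the helper name and all example values are illustrative, not taken from Glance or this thread):

```python
# Sketch: compose an rbd:// location URI of the kind Glance's RBD store
# records for an image backed by a Ceph snapshot/clone. Helper name and
# example values are made up for illustration.
from urllib.parse import quote

def rbd_location(fsid: str, pool: str, image: str, snapshot: str) -> str:
    """Build a Glance-style RBD location URI.

    Recording this full URI (rather than a Cinder volume UUID) keeps the
    image referable even after the source volume is deleted, because it
    points at the protected snapshot/clone itself, not at the volume.
    """
    parts = (fsid, pool, image, snapshot)
    # Percent-encode each component so '/' inside a name cannot
    # break the four-part path structure.
    return "rbd://" + "/".join(quote(p, safe="") for p in parts)

# Hypothetical values for a cloned image:
loc = rbd_location(
    "d1b167b0-0000-4c9f-9b5a-0000examp1e0",  # cluster fsid (made up)
    "images",                                 # Glance pool
    "e5b2f1c3-example-image-id",              # cloned RBD image name
    "snap",                                   # protected snapshot name
)
print(loc)
```

The point of the sketch is only the shape of the reference: the location names the clone directly in the backend, so both Glance and Cinder can resolve it independently of the volume's lifecycle.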
On Wed, Nov 19, 2014 at 4:00 PM, Flavio Percoco <flavio at redhat.com> wrote:
> On 19/11/14 15:21 +0800, henry hly wrote:
>> In the previous BP, support for an iscsi backend was introduced into
>> glance. However, it was abandoned in favor of the Cinder backend.
>> The reason is that all storage backend details should be hidden by
>> cinder, not exposed to other projects. However, with more and more
>> interest in "converged storage" like Ceph, it's necessary to expose
>> the storage backend to glance as well as cinder.
>> An example is that when transferring bits between a volume and an
>> image, we can use advanced storage offload capabilities like linked
>> clone to do a very fast instant copy. Maybe we need more general
>> glance backend location support, not only for iscsi.
>>  https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store
> Hey Henry,
> This blueprint has been superseded by one proposing a Cinder store
> for Glance. The Cinder store is, unfortunately, in a sorry state.
> Short story, it's not fully implemented.
> I truly think Glance is not the place where you'd have an iscsi store,
> that's Cinder's field and the best way to achieve what you want is by
> having a fully implemented Cinder store that doesn't rely on Cinder's
> API but has access to the volumes.
> Unfortunately, this is not possible now and I don't think it'll be
> possible until L (or even M?).
> FWIW, I think the use case you've mentioned is useful and it's
> something we have in our TODO list.
> Flavio Percoco
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org