[openstack-dev] [Glance] Recall for previous iscsi backend BP
duncan.thomas at gmail.com
Thu Nov 20 10:13:39 UTC 2014
It is quite possible that the requirement for glance to own images could be
met by having a glance tenant in cinder, and using the clone and
volume-transfer functionality in cinder to get copies to the right place.
I know there are some attempts to move away from the single glance tenant
model for swift usage, but doing anything else in cinder will require
significantly more thought.
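For concreteness, a rough sketch of that flow with the cinder CLI might look
like the following (the size, names and UUIDs below are placeholders, not a
tested recipe):

  # clone the source volume (a server-side copy where the backend supports it)
  cinder create --source-volid <source-volume-uuid> --display-name image-copy 10
  # offer the clone for transfer; this prints a transfer id and auth key
  cinder transfer-create <clone-volume-uuid>
  # then, authenticated as the glance tenant, take ownership of the clone
  cinder transfer-accept <transfer-id> <auth-key>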
On 19 November 2014 23:04, Alex Meade <mr.alex.meade at gmail.com> wrote:
> Hey Henry/Folks,
> I think it could make sense for Glance to store the volume UUID; the idea
> is that no matter where an image is stored, it should be *owned* by Glance
> and not deleted out from under it. But that is more of a single-tenant vs.
> multi-tenant cinder store question.
> It makes sense for Cinder to at least abstract all of the block storage
> needs. Glance and any other service should reuse Cinder's ability to talk to
> certain backends. It would be wasted effort to reimplement Cinder drivers
> as Glance stores. I do agree with Duncan that a great way to solve these
> issues is a third party transfer service, which others and I in the Glance
> community have discussed at numerous summits (since San Diego).
> On Wed, Nov 19, 2014 at 3:40 AM, henry hly <henry4hly at gmail.com> wrote:
>> Hi Flavio,
>> Thanks for the information about the Cinder store. I have a small
>> concern about the Cinder backend: suppose cinder and glance both use Ceph
>> as the store; if cinder can then do an instant copy to glance via Ceph
>> clone (maybe not now, but some time later), what information would be
>> stored in glance? The volume UUID is obviously not a good choice, because
>> once the volume is deleted the image can no longer be referenced. The best
>> choice is for the cloned Ceph object URI to also be stored in the glance
>> location, letting both glance and cinder see the "backend store details".
>> However, although this really makes sense for a Ceph-like all-in-one
>> store, I'm not sure whether an iscsi backend can be used the same way.
>> On Wed, Nov 19, 2014 at 4:00 PM, Flavio Percoco <flavio at redhat.com> wrote:
>> > On 19/11/14 15:21 +0800, henry hly wrote:
>> >> In the previous BP, support for an iscsi backend was introduced into
>> >> glance. However, it was abandoned in favour of the Cinder backend
>> >> replacement.
>> >> The reasoning was that all storage backend details should be hidden by
>> >> cinder, not exposed to other projects. However, with more and more
>> >> interest in "converged storage" like Ceph, it's necessary to expose the
>> >> storage backend to glance as well as cinder.
>> >> An example is that when transferring bits between a volume and an image,
>> >> we can utilize advanced storage offload capabilities like linked clone
>> >> to do a very fast instant copy. Maybe we need more general glance
>> >> backend location support, not only for iscsi.
>> >>  https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store
>> > Hey Henry,
>> > This blueprint has been superseded by one proposing a Cinder store
>> > for Glance. The Cinder store is, unfortunately, in a sorry state.
>> > Short story: it's not fully implemented.
>> > I truly think Glance is not the place to have an iscsi store;
>> > that's Cinder's field, and the best way to achieve what you want is by
>> > having a fully implemented Cinder store that doesn't rely on Cinder's
>> > API but has access to the volumes.
>> > Unfortunately, this is not possible now, and I don't think it'll be
>> > possible until L (or even M?).
>> > FWIW, I think the use case you've mentioned is useful and it's
>> > something we have in our TODO list.
>> > Cheers,
>> > Flavio
>> > --
>> > @flaper87
>> > Flavio Percoco
>> > _______________________________________________
>> > OpenStack-dev mailing list
>> > OpenStack-dev at lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev