[openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

John Griffith john.griffith at solidfire.com
Wed Oct 22 14:05:03 UTC 2014


On Wed, Oct 22, 2014 at 7:33 AM, Flavio Percoco <flavio at redhat.com> wrote:

> On 10/22/2014 02:30 PM, Zhi Yan Liu wrote:
> > Greetings,
> >
> > On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco <flavio at redhat.com>
> wrote:
> >> Greetings,
> >>
> >> Back in Havana, a partially implemented[0][1] Cinder driver was merged
> >> into Glance to provide an easier and hopefully more consistent
> >> interaction between Glance, Cinder and Nova when it comes to managing
> >> volume images and booting from volumes.
> >
> > The idea, as I see it, is not only about the VM provisioning and
> > consuming feature but also about implementing a consistent and unified
> > block storage backend for the image store.  For historical reasons, we
> > have implemented a lot of duplicated block storage drivers between
> > Glance and Cinder.  IMO, Cinder can be regarded as a full-featured
> > block storage backend from OpenStack's perspective (I mean it contains
> > both the data and control planes), so Glance could simply leverage
> > Cinder as a unified block storage backend.  Essentially, Glance has two
> > kinds of drivers, block storage drivers and object storage drivers
> > (e.g. the Swift and S3 drivers).  From that point of view, I consider
> > giving Glance a Cinder driver very sensible: it could provide a unified
> > and consistent way to access different kinds of block backends instead
> > of implementing duplicated drivers in both projects.
>
> Let me see if I got this right. You're suggesting having a Cinder
> driver in Glance so we can basically remove the
> 'create-volume-from-image' functionality from Cinder. Is this right?
>
> > I see that some people like to see similar drivers implemented in
> > different projects again and again, but at least I think this is a
> > harmless and beneficial feature/driver.
>
> It's not as harmless as it seems. There are many users confused as to
> what the use case of this driver is. For example, should users create
> volumes from images? Or should they create images that are then stored
> in a volume? What's the difference?
>
> Technically, the answer is probably that there is none, but from a
> deployment and usability perspective, there's a huge difference that
> needs to be considered.
>
> I'm not saying it's a bad idea, I'm just saying we need to get this
> story straight and probably just pick one (? /me *shrugs*)
>
> >> While I still don't fully understand the need for this driver, I think
> >> there's a bigger problem we need to solve now. We have a partially
> >> implemented driver that is almost useless and is creating lots of
> >> confusion among users who are willing to use it but keep hitting 500
> >> errors because there's nothing they can do with it except create
> >> an image that points to an existing volume.
> >>
> >> I'd like us to discuss the exact plan for this driver moving forward:
> >> what is missing and whether it'll actually be completed during Kilo.
> >
> > I'd like to enhance the Cinder driver, of course, but currently it's
> > blocked on one thing: it needs a way that people agree is correct [0]
> > to access volumes from Glance (for both the data and control planes,
> > e.g. creating an image and uploading the bits). During the H cycle I
> > was told Cinder would soon release a separate lib, called Brick [1],
> > which other projects could use to access volumes directly from Cinder,
> > but it still doesn't seem ready to use. Anyway, we can talk with the
> > Cinder team about getting Brick moving forward.
> >
> > [0] https://review.openstack.org/#/c/20593/
> > [1] https://wiki.openstack.org/wiki/CinderBrick
> >
> > I'd really appreciate it if somebody could show me a clear plan/status
> > for CinderBrick; I still think it's a good way to go for the Glance
> > Cinder driver.
>
> +1. Mike? John? Any extra info here?
>
> If the Brick lib is not going to be released before k-2, I think we
> should just remove this driver until we can actually complete the work.
>
> As it is right now, it doesn't add any benefit and there's nothing this
> driver adds that cannot be done already (creating volumes from images,
> that is).
>
> >> If there's a slight chance it won't be completed in Kilo, I'd like to
> >> propose getting rid of it - with a deprecation period, I guess - and
> >> giving it another chance in the future when it can be fully implemented.
> >>
> >> [0] https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver
> >> [1] https://review.openstack.org/#/c/32864/
>
> Fla
>
>
> --
> @flaper87
> Flavio Percoco
>

"Sorry State" is probably fair, the issue here is as you pointed out it's
something that's only partially done.  To be clear about the intended use
case: my intent was mostly to utilize Cinder block devices in a way similar
to the model Ceph has in place.  We can make instance creation and
migration quite a bit more efficient IMO, and there are also the points you
made around cloning and creating new volumes.

Ideas spread from there, from "using a read-only Cinder volume per image"
to "a Glance-owned Cinder volume" that would behave pretty much like the
current local disk/filesystem model (create a Cinder volume for Glance,
attach it to the Glance server, partition, format and mount it... use it as
the image store).
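
To make that, and the Brick dependency Zhi Yan mentioned, a little more
concrete, here's a rough and untested sketch of what a store add() path
could look like once a Brick-style connector is available.  The connector
calls (connect_volume/disconnect_volume), the connector properties and the
helper names are assumptions for illustration, not a released API:

    import os


    def add_image_to_new_volume(cinderc, connector, image_id, data,
                                size_bytes):
        """Copy image bits into a new Cinder volume (illustrative only).

        cinderc   -- an authenticated cinderclient Client
        connector -- a Brick-style InitiatorConnector (assumed interface)
        data      -- file-like object holding the image bits
        """
        # 1. Create a volume big enough for the image (Cinder sizes in GB).
        size_gb = max(1, (size_bytes + (1024 ** 3) - 1) // (1024 ** 3))
        volume = cinderc.volumes.create(size_gb,
                                        display_name='image-%s' % image_id)

        # 2. Ask Cinder for connection info and attach the volume locally.
        #    The properties describe this host (iSCSI initiator, IP, ...);
        #    real code would get them from the connector library.
        props = {'ip': '192.0.2.10',
                 'initiator': 'iqn.2014-10.org.example:glance'}
        conn_info = cinderc.volumes.initialize_connection(volume, props)
        device = connector.connect_volume(conn_info['data'])  # assumed API

        try:
            # 3. Write the image bits straight onto the attached device.
            with open(device['path'], 'wb') as dev:
                for chunk in iter(lambda: data.read(64 * 1024), b''):
                    dev.write(chunk)
                dev.flush()
                os.fsync(dev.fileno())
        finally:
            # 4. Detach and hand the location back to Glance.
            connector.disconnect_volume(conn_info['data'], device)  # assumed
            cinderc.volumes.terminate_connection(volume, props)

        return 'cinder://%s' % volume.id

A get() path would do roughly the reverse (attach read-only and stream the
bits out), which is the part people are hitting the wall on today.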

Anyway, I'd like to propose that we either move forward or remove the code
that's there now.  My opinion is that we resync on specs and blueprints by
K-1 and have an agreed-upon plan and way forward.  Otherwise remove it;
you're absolutely right that it's causing a good deal of confusion.  I've
received a fair number of inquiries from people trying to use it and I have
to say "ummmm..... yeah, about that", which is not fun.

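For reference, the only thing that does work today is registering an image
record whose location points at an existing volume, roughly like this (a
minimal sketch against the v1 glanceclient; the endpoint, token and volume
UUID are placeholders):

    from glanceclient import Client

    glance = Client('1', 'http://glance-api.example.com:9292',
                    token='<auth-token>')

    volume_id = '<existing-cinder-volume-uuid>'  # placeholder

    # Registers metadata only; no bits are uploaded, the location simply
    # points at the Cinder volume.
    image = glance.images.create(name='volume-backed-image',
                                 disk_format='raw',
                                 container_format='bare',
                                 location='cinder://%s' % volume_id)

    # Anything beyond this -- e.g. actually downloading the data with
    # glance.images.data(image.id) -- is where the 500s show up, because
    # the rest of the driver isn't there yet.
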
Thanks,
John