[openstack-dev] [cinder] about block device driver

John Griffith john.griffith8 at gmail.com
Wed Aug 1 14:25:30 UTC 2018


On Fri, Jul 27, 2018 at 8:44 AM Matt Riedemann <mriedemos at gmail.com> wrote:

> On 7/16/2018 4:20 AM, Gorka Eguileor wrote:
> > If I remember correctly the driver was deprecated because it had no
> > maintainer or CI.  In Cinder we require our drivers to have both,
> > otherwise we can't guarantee that they actually work or that anyone
> > will fix them if they break.
>
> Would this really require 3rd party CI if it's just local block storage
> on the compute node (in devstack)? We could do that with an upstream CI
> job, right? We already have upstream CI jobs for things like rbd and nfs.
> The 3rd party CI requirements generally are for proprietary storage
> backends.
>
> I'm only asking about the CI side of this, the other notes from Sean
> about tweaking the LVM volume backend and feature parity are good
> reasons for removal of the unmaintained driver.
>
> Another option is using the nova + libvirt + lvm image backend for local
> (to the VM) ephemeral disk:
>
>
> https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653
>
> --
>
> Thanks,
>
> Matt
>


We've had this conversation multiple times; here are the results from
those past discussions and the reasons we deprecated the driver:
1. The driver was not being tested at all (no CI, no upstream tests, etc.)
2. We sent out numerous requests trying to determine whether anybody was
using the driver, but received little feedback
3. The driver didn't work for an entire release, which suggested that
perhaps it wasn't that valuable
4. The driver is unable to implement a number of the features required of
a Cinder block device
5. Digging deeper into the performance tests, most comparisons were doing
things like the following (see the config sketch after this list):
    a. Using the single shared NIC that carries all of the cluster
communications (i.e. DB, APIs, Rabbit, etc.) for storage traffic as well
    b. Misconfigured deployments, e.g. using a 1 GbE NIC for iSCSI
connections (also see above)
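
For anyone benchmarking, the baseline should at least put iSCSI traffic
on its own network.  A minimal sketch of the relevant cinder.conf bits;
the backend section name, VG name, and IP are placeholders, and older
releases spelled these options iscsi_helper / iscsi_ip_address:

    [lvmdriver-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes      # placeholder VG name
    target_helper = lioadm
    # Put iSCSI on a dedicated storage network, not the shared
    # management NIC -- 10.1.0.5 here is just a placeholder.
    target_ip_address = 10.1.0.5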

The decision was that raw block was not, by definition, a "Cinder
device", and given that it wasn't really tested or maintained, it should
be removed.  LVM is actually quite good: we did some pretty extensive
testing and even presented a session in Barcelona showing LVM perf
within approximately 10% of raw block.  I'm skeptical any time I see
dramatic claims of a 2x performance difference, but I could be
completely wrong.
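
If you want to sanity-check numbers like that yourself, running the same
fio workload against both backends is usually enough to spot a
misconfiguration.  A rough sketch; the device path is a placeholder for
whatever volume you actually have attached:

    # Same random-write workload against an LVM-backed volume and a
    # raw local disk; /dev/sdX is a placeholder for the attached device.
    fio --name=randwrite --filename=/dev/sdX --rw=randwrite \
        --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
        --runtime=60 --time_based --group_reporting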

I would be much more interested in putting effort toward figuring out
why you have such a large perf delta, and seeing whether we can address
that, as opposed to trying to bring back and maintain a driver that only
half works.
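
A good first step is ruling out the network path, since that's where the
misconfigurations above usually show up.  A rough sketch with iperf3
(hostnames are placeholders):

    # On the cinder-volume (iSCSI target) node:
    iperf3 -s

    # From the compute (initiator) node, over the storage network:
    iperf3 -c storage-node.example.com -t 30
    # If this tops out around 1 Gbit/s, iSCSI is riding the wrong NIC
    # and a 2x perf delta versus local disk is no surprise.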

Or, as Jay Pipes mentioned, don't use Cinder in your case.
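
Along those lines, Matt's suggestion above (the nova libvirt LVM image
backend) gets you local block storage without Cinder at all.  A minimal
sketch of the nova.conf settings, with a hypothetical volume group name:

    [libvirt]
    # Back ephemeral disks with LVM logical volumes on the compute node;
    # "nova-local" is a placeholder for a VG you've created there.
    images_type = lvm
    images_volume_group = nova-local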

Thanks,
John