[openstack-tc] Block Storage abstractions in Cinder

Mark McLoughlin markmc at redhat.com
Tue Jul 15 12:51:19 UTC 2014


Hi John,

Top-posting because I don't really have specific answers to any of your
questions, but I thought I'd try to summarize how I'd go about thinking
it through ...

What Cinder provides from a user perspective (mapped onto the driver
interface in the sketch after this list):

  - volumes
  - attaching or booting from volumes via Nova
  - snapshots
  - backups
  - different storage tiers - i.e. volume types and QoS specs
  - quotas/limits
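
To make that list concrete, here's a rough sketch of how those features
map onto the volume driver interface. The base class and method names
below match the real cinder.volume.driver.VolumeDriver as of the
Icehouse/Juno era, but the bodies are illustrative, not a working
backend (backups are driven by the separate cinder-backup service, and
quotas are enforced in the API layer, so neither shows up as a driver
method):

    from cinder.volume import driver

    class ExampleDriver(driver.VolumeDriver):
        """Illustrative only - each method backs a user-facing feature."""

        def create_volume(self, volume):
            # "volumes": carve out volume['size'] GB on the backend
            pass

        def create_snapshot(self, snapshot):
            # "snapshots": point-in-time copy of the source volume
            pass

        def initialize_connection(self, volume, connector):
            # "attaching/booting via Nova": return connection info that
            # Nova's virt driver uses to reach the volume
            return {'driver_volume_type': 'iscsi', 'data': {}}

        def get_volume_stats(self, refresh=False):
            # capabilities reported here feed the scheduler; volume
            # types and QoS specs are matched against them
            return {'volume_backend_name': 'example',
                    'total_capacity_gb': 200,
                    'free_capacity_gb': 100}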

If the backing storage technology could sanely be used to provide those
features then, conceptually at least, a Cinder driver for that
technology makes some sense.

(An example of not sane, IMO, would be a storage driver that creates
volumes which can only be attached to VMs running on a single compute
node, leaving users of the Nova API to jump through hoops to deal with
that restriction.)

I don't have a problem with vendors wanting to add drivers for their
technologies. I don't like to see us assume that this is purely about
marketing. I'd rather assume good faith on the part of contributing
vendors - they see value for real-life users in the combination of
OpenStack and their technology, and they're trying to bring that value
to as many people as possible. As much of an open-source advocate as I
am, I also wouldn't differentiate between drivers for proprietary and
open-source technologies.

I also don't like to assume that contributions to drivers only come from
vendors. That may be mostly true, but if we can build mini, productive,
healthy communities around even a handful of drivers then it will be
worth our while to be careful with our language - calling them driver
contributors, developers, maintainers, etc.

Having drivers in-tree is not so much about "quality" as it is about
enabling and encouraging collaboration - not only among everyone
interested in working on the driver, but also between those working on
the driver and those working on the core of the project. For example, if
a driver were low quality but keeping it in-tree was going to enable
that collaboration, then I'd be more in favor of marking it as
"experimental" or "incomplete" or similar rather than kicking it out of
the tree.

Now, the specific situation you describe sounds pretty crappy - a
proposal to include, inside a single driver, a whole bunch of
infrastructure that could be useful to other drivers. Rather than figure
out how to evolve the core of Cinder to support the scheduling
requirements of that storage architecture, the driver developers chose
to build their own stuff. We absolutely do not need to encourage that,
but the flip-side is that we also need to be open to those driver
developers evolving the core to support that use case.
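
For what it's worth, the core already has the hook this should hang off:
drivers describe themselves to the scheduler via get_volume_stats(), and
the "pools" support being worked on in Juno lets one driver report
several backends behind it rather than re-implementing scheduling
internally. A minimal sketch, with illustrative values (the dict keys
match the documented stats/pools format):

    def get_volume_stats(self, refresh=False):
        # Reported periodically to the scheduler; one entry per pool
        # lets a driver fronting several devices expose each of them
        # for placement decisions.
        return {
            'volume_backend_name': 'example_backend',
            'vendor_name': 'Example',
            'driver_version': '1.0',
            'storage_protocol': 'iSCSI',
            'pools': [
                {'pool_name': 'array-1',
                 'total_capacity_gb': 2048,
                 'free_capacity_gb': 1500,
                 'reserved_percentage': 0,
                 'QoS_support': True},
                {'pool_name': 'array-2',
                 'total_capacity_gb': 1024,
                 'free_capacity_gb': 300,
                 'reserved_percentage': 0,
                 'QoS_support': False},
            ],
        }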

What I really see across the board is this - where once, when our
review queues were under control, we would have given a great big hug to
anyone wanting to contribute a driver, we are now massively
over-burdened with contributions of varying quality, and our ability to
nurture each contributor (and empathize with their goals) has almost
vanished.

That's totally understandable, but IMO we should be clear-headed about
what we want to achieve in discussions like this: is it getting these
pesky, clueless, leeching vendors to bugger off somewhere else, or
evolving our community and processes to cope with, and nurture, the
massive interest we're seeing?

HTH,
Mark.

On Mon, 2014-07-14 at 10:03 -0600, John Griffith wrote:
> Hello TC Members,
> 
> 
> Some of you have maybe heard my opinions and position on the so-called
> "Software Defined Storage" trend in Cinder.  For those that aren't
> familiar, here's the summary:
> 
> 
> Basically, the first proposal from EMC came several months back, when
> they submitted a "driver" that was an abstraction layer for multiple
> EMC and non-EMC storage devices [1].  The goal was to provide a
> Cinder volume driver that implements an entire new abstraction and
> scheduling layer for heterogeneous devices connected behind an
> EMC ViPR node.  The first problem I had with this was that patch-set 1
> was a 28,924-line code dump of a "driver".  The second problem was
> that it duplicates the scheduler, API abstraction, etc.
> 
> 
> My bigger concern is that every vendor, and now some other open
> source groups, are seeing this as a great marketing tool and a
> potential source of new sales for their company.  There have been
> additional proposals for similar abstractions in the Juno cycle (and
> notification from at least 2 other vendors that they'll be submitting
> something similar).  For those that provide "software-based storage on
> top of commodity servers or JBODs", it seems reasonable/good to have a
> Cinder driver.  The problem is you can't take just that piece of their
> product; you get all or nothing.
> 
> 
> The concept itself (and the implementation) in some of these is
> pretty cool, and I see real value in them.  What I don't see as
> beneficial, however, is having them in the Cinder source code.  I
> don't think the support and overhead of maintaining entire
> duplications of the Cinder services is a great idea.  To be honest,
> I've spent the last year trying to figure out a reasonable way to have
> ZERO third-party drivers in Cinder while still providing some
> assurance of quality.
> 
> 
> Anyway, I'd like to get some feedback from the TC on all of this,
> whether as an official agenda item for an upcoming meeting or, at the
> very least, as feedback to this email.  In my opinion this is very
> much a TC item and is exactly the sort of thing the TC should be
> interested in.  I can certainly make decisions on my own based on
> feedback from the rest of the Cinder community and my own personal
> view; however, I believe this has a broader impact.
> 
> 
> Most of these products also extend at some point to direct
> integrations as object stores, image repositories, Manila share
> plugins, and direct integrations into Nova.  I think this needs to be
> well thought out, and the long-term impacts on the OpenStack projects
> should be considered.
> 
> 
> Thanks,
> John
> 
> 
> 
> 
> [1]: https://review.openstack.org/#/c/74158/
> 
> 
> Additional/Similar proposals:
> https://review.openstack.org/#/c/106742/
> 
> https://review.openstack.org/#/c/101688/
> 




