[openstack-tc] Block Storage abstractions in Cinder

John Griffith john.griffith8 at gmail.com
Tue Jul 15 15:31:25 UTC 2014


On Tue, Jul 15, 2014 at 7:55 AM, Doug Hellmann <doug.hellmann at dreamhost.com>
wrote:

> On Tue, Jul 15, 2014 at 8:56 AM, Thierry Carrez <thierry at openstack.org>
> wrote:
> > John Griffith wrote:
> >> [...]
> >> Anyway, I'd like to get some feedback from the TC on all of this.
> >>  Whether it be an official agenda item for an upcoming meeting, or at
> >> the very least feedback to this email.  In my opinion this is very much
> >> a TC item and is exactly the sort of thing that the TC should be
> >> interested in.  I can certainly make decisions on my own based on
> >> feedback from the rest of the Cinder community and my own personal view,
> >> however I believe this has a broader impact.
> >
> > The TC is responsible for the "scope" of OpenStack and making sure we
> > spend our common resources where they are the most needed, so I think
> > it's relevant for us to at least give our opinion there.
> >
> > There are IMHO two different issues here. The first is a technical
> > issue: what type of functionality you want your drivers to cover. In
> > Cinder, drivers should be a relatively thin indirection layer where you
> > implement the glue between the Cinder driver API on one side and what
> > the pure storage backend expects on the other. They are not really
> > supposed to be big or reimplement advanced scheduling features.
> > Accepting such monster drivers in mainline code changes the relative
> > weight of code areas in cinder and therefore makes it a costly
> > maintenance proposition for the project, with little on the benefits
> > side compared to growing that driver out of tree.
> >
> > The second issue is more of a social one: we want as much of the
> > smarts in Cinder as possible implemented as open source in OpenStack. If
> > vendors decide to implement the smart parts in closed source software
> > shipped in storage hardware gateways, they may win but OpenStack surely
> > loses.
> >
> > I think it's a perfect example of where out-of-tree makes the most
> > sense: when the added value to the project is limited (or negative),
> > while the benefit for one vendor is enormous.
>
> We also need to keep in mind the DefCore discussions, and the TC
> position that the code the community delivers in the integrated
> release should be used as the "designated sections". Vendors want to
> use the trademark. We want them to contribute to core. Working
> together on their drivers in the tree is part of the bargain, and it
> would be a huge mistake to push them out.
>
> If this particular driver reproduces too much existing cinder
> functionality in a way that can't be reused, that's both a technical
> issue and a project management issue. Maybe the new initiative to
> bring product managers into the community will address it in the
> future by helping them to understand how to work on features with the
> community. In the meantime I don't have a problem with the Cinder
> team responding to this contribution that it duplicates functionality
> in their core, and is therefore not an appropriate implementation for
> a driver. As long as we suggest changes rather than rejecting it
> entirely, that sort of review is part of the process. Would a spec
> review have caught this earlier?
>
> We should also consider writing down guidelines for what sort of
> functionality should and should not be included in the various driver
> layers, to help vendors understand our expectations.
>
> Doug
>
> > The other classic example of this is the Nova VMware support... and it's
> > questionable as well on both counts. That said, the dynamics are slightly
> > different there: VMware being the legacy standard for old-style
> > datacenter virtualization, supporting it is a decent way to co-opt that
> > ecosystem and encourage migrating off it.
> >
> > --
> > Thierry Carrez (ttx)
> >
> > _______________________________________________
> > OpenStack-TC mailing list
> > OpenStack-TC at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc

> solutions. So VMware, for example, is built so we don't have a
> nova-compute per hypervisor node, but one per cluster, and then rely
> on vSphere to do all the cluster legwork. This isn't as bad as it
> sounds like it is in cinder, but is still a watering down of the value
> of nova.

Good analogy; however, also keep in mind that in this model you're not just
plugging in Vendor X's product, you're also saying you can plug in
Vendor X, Y, and Z's products (in the case of EMC that list includes at
least four drivers that already exist in Cinder, so this would be a way to
have support in tree/openstack without having to go through the OpenStack
processes).  I see this as both good and bad personally, until it comes to
bug reports and maintenance.  I also don't see how you could possibly test
these models in the general OpenStack framework.  The burden is on the
vendor here (and it's a significant burden).


> Accepting such monster drivers in mainline code changes the relative
> weight of code areas in cinder and therefore makes it a costly
> maintenance proposition for the project, with little on the benefits
> side compared to growing that driver out of tree.

This is one of my biggest concerns on the topic, particularly as these
vendors have historically provided nothing in terms of core Cinder
contribution, and in fact some of them are banned by their company from
working on anything other than their company's driver submissions.
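For illustration, the "thin indirection layer" model Thierry describes
might look something like the toy sketch below. All the class and method
names here are hypothetical stand-ins, not the real Cinder driver API;
the point is only that each Cinder-facing call is pure translation into
one backend call, with no placement or scheduling logic in the driver.

```python
# Toy sketch of a "thin glue" volume driver (all names hypothetical).


class FakeBackendClient:
    """Stands in for a vendor's storage backend API."""

    def __init__(self):
        self.luns = {}

    def create_lun(self, name, size_gb):
        self.luns[name] = size_gb
        return {"name": name, "size": size_gb}

    def delete_lun(self, name):
        self.luns.pop(name, None)


class ThinVolumeDriver:
    """Glue layer: one driver method translates to one backend call."""

    def __init__(self, client):
        self.client = client

    def create_volume(self, volume):
        # Pure translation: a Cinder-style volume dict -> a backend call.
        return self.client.create_lun(volume["name"], volume["size"])

    def delete_volume(self, volume):
        self.client.delete_lun(volume["name"])


backend = FakeBackendClient()
driver = ThinVolumeDriver(backend)
driver.create_volume({"name": "vol-1", "size": 10})
print(backend.luns)  # {'vol-1': 10}
```

The driver stays small because every decision beyond translation (where
to place a volume, how to weigh capacity) is left to Cinder itself.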


I like Mark's optimism; however, I'm a bit more cynical after working in
Cinder for a while.  Keep in mind, as I mentioned earlier, that some
companies like EMC still won't allow their employees to contribute
anywhere except their drivers (based on conversations I've had with EMC
management), although they are working on changing that.

> One other observation about your request, John. It reminds me of John
> Dickinson's original move of all swift-related code to outside of the
> code base. He may be able to comment better about whether it was about
> onboarding contributors or about maintenance burden. I think it was
> the maintenance that concerned him.

Yeah, I think there were a number of motivations there.  The idea of not
having drivers in tree at all is not something I really want, especially
not the way Swift does it.  What I've been thinking about over the years
is a way to test and contribute driver integration as its own separate
effort, but still under the umbrella and sanctioning of OpenStack; I
haven't figured out a good way to do that, though, so we should set that
piece aside.  It's not something I'm prepared to tackle, or that I think
is that big of a deal, right now.

My current take on this is:
If you have what I've dubbed an UBER driver that abstracts multiple
backend devices, it's a case-by-case decision.

1. Are you developing this to enable a Software Defined Storage solution
(i.e., turning a cluster of servers or JBODs into a full-featured storage
product)?  If yes, then "ok", even though it's a loophole when you also
support a multitude of other vendors.

2. Does your UBER driver provide support for a backend device that is NOT
already available and maintained in Cinder?  If no, then your offering
probably doesn't make much sense, and I'd very much welcome your
suggestions/improvements to the Cinder versions of those drivers that
already exist.

That's about the best, least subjective test I can really come up with at
this point.  It's either that or it's back to my original stance of "if
you abstract out multiple backends and implement a secondary scheduler,
you're out".  Those have been the two approaches I've gone back and forth
on.
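To make the "secondary scheduler" concern concrete, here is a toy sketch
(all names invented; this is not real Cinder code) of what an UBER
driver's hidden placement logic amounts to: the driver fronts several
backends and picks among them itself, duplicating the kind of
capacity-based decision Cinder's own scheduler exists to make.

```python
# Toy sketch of an "UBER driver" with a hidden mini-scheduler
# (all names hypothetical).


class SubBackend:
    """One of several storage devices hidden behind the UBER driver."""

    def __init__(self, name, free_gb):
        self.name = name
        self.free_gb = free_gb

    def create_lun(self, vol_name, size_gb):
        self.free_gb -= size_gb
        return {"backend": self.name, "name": vol_name}


class UberDriver:
    def __init__(self, backends):
        self.backends = backends

    def pick_backend(self, size_gb):
        # A secondary capacity scheduler buried inside the driver:
        # Cinder's scheduler can no longer see or influence placement.
        candidates = [b for b in self.backends if b.free_gb >= size_gb]
        return max(candidates, key=lambda b: b.free_gb)

    def create_volume(self, volume):
        backend = self.pick_backend(volume["size"])
        return backend.create_lun(volume["name"], volume["size"])


driver = UberDriver([SubBackend("x", 100), SubBackend("y", 500)])
print(driver.create_volume({"name": "vol-1", "size": 50}))
# Placed on "y" (most free space), invisibly to Cinder's scheduler.
```

Everything `pick_backend` does here is placement policy that, in the
thin-driver model, would live in (and be reusable from) Cinder's core.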

One of the arguments that gets thrown out a lot is "if it doesn't work and
it doesn't get support, just remove it", but the reality is that's a very
difficult thing to do.  Even if there are just a handful of customers that
go down this path, removing support for their configuration is not
something that I would like to do, no matter how painful supporting them
might be.