<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Jul 15, 2014 at 7:51 AM, Mark McLoughlin <span dir="ltr"><<a href="mailto:markmc@redhat.com" target="_blank">markmc@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hi John,<br>
<br>
Top-posting because I don't really have specific answers to any of your<br>
questions, but I thought I'd try and summarize how I'd go about thinking<br>
it through ...<br>
<br>
What Cinder provides from a user perspective:<br>
<br>
- volumes<br>
- attaching or booting from volumes via Nova<br>
- snapshots<br>
- backups<br>
- different storage tiers - i.e. volume types, qos specs<br>
- quotas/limits<br>
<br>
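For anyone less familiar with Cinder, that user-facing surface maps to a
small client API. A rough sketch of those features via python-cinderclient
(v2 API; the credentials, endpoint, and names below are placeholders, not
working values):

    from cinderclient import client

    # Placeholder credentials and endpoint, for illustration only.
    cinder = client.Client('2', 'demo', 'secret', 'demo_project',
                           'http://keystone.example.com:5000/v2.0')

    # Create a volume, selecting a storage tier via a volume type.
    vol = cinder.volumes.create(size=10, name='demo-vol',
                                volume_type='gold')

    # Point-in-time snapshot and an off-box backup of that volume.
    snap = cinder.volume_snapshots.create(vol.id)
    backup = cinder.backups.create(vol.id)

Attaching and booting go through Nova, and quotas/limits are enforced
server-side per project.
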
> If the backing storage technology could sanely be used to provide those
> features then, conceptually at least, a Cinder driver for that
> technology makes some sense.
>
> (An example of not sane, IMO, would be a storage driver that creates
> volumes that can only be attached to VMs running on a single compute
> node, so that users of the Nova API would need to jump through hoops to
> deal with this restriction.)
>
> I don't have a problem with vendors wanting to add drivers for their
> technologies, and I don't like to see us assume that this is purely
> about marketing. I'd rather assume good faith on the part of
> contributing vendors - they see value for real-life users in the
> combination of OpenStack and their technology, and they're trying to
> bring that value to as many people as possible. As much of an
> open-source advocate as I am, I also wouldn't differentiate between
> drivers for proprietary and open-source technologies.
>
> I also don't like to assume that contributions to drivers only come
> from vendors. That may be mostly true, but if we can build mini,
> productive, healthy communities around even a handful of drivers, then
> it will be worth our while to be careful with our language - calling
> them driver contributors, developers, maintainers, etc.
>
> Having drivers in-tree is not so much about "quality" as it is about
> enabling and encouraging collaboration - between everyone interested
> in working on the driver, and between those working on the driver and
> those working on the core of the project. For example, if a driver was
> low quality but keeping it in-tree was going to enable that
> collaboration, then I'd be more in favor of marking it as
> "experimental" or "incomplete" or similar rather than kicking it out
> of the tree.
>
> Now, the specific situation you describe sounds pretty crappy - a
> proposal to include in a driver a whole bunch of infrastructure that
> could be useful to other drivers. Rather than figure out how to evolve
> the core of Cinder to support the scheduling requirements of that
> storage architecture, the driver developers chose to build their own
> stuff. We absolutely do not need to encourage that, but the flip-side
> is that we also need to be open to those driver developers evolving
> the core to support that use case.
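Worth noting for context: the existing driver contract already feeds
Cinder's shared filter scheduler, so a driver normally reports its
capabilities rather than scheduling on its own. A rough sketch of the
conventional get_volume_stats() report (the values are illustrative):

    def get_volume_stats(self, refresh=False):
        """Report backend capabilities to Cinder's filter scheduler.

        The scheduler places volumes using reports like this, which is
        why a driver shipping its own scheduling layer stands out.
        """
        return {
            'volume_backend_name': 'example_backend',  # illustrative
            'vendor_name': 'Example Vendor',
            'driver_version': '1.0.0',
            'storage_protocol': 'iSCSI',
            'total_capacity_gb': 1000,
            'free_capacity_gb': 800,
            'reserved_percentage': 0,
            'QoS_support': False,
        }
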
>
> What I really see (and dig) across the board is this - where once,
> when our review queues were under control, we would have given a giant
> big hug to anyone wanting to contribute to a driver, we are now
> massively over-burdened with contributions of varying quality, and our
> ability to nurture each contributor (and empathize with their goals)
> has almost vanished.

I think this is what's going on across the board (not to be too sweeping
with that observation) - the "What projects need help?" thread [1] is
another example. Our ability to nurture is relegated to the intern
programs, and the varying quality has come up in other conversations too.
One other observation about your request, John: it reminds me of John
Dickinson's original move of all Swift-related code out of the code base.
He may be able to comment better on whether it was about onboarding
contributors or about maintenance burden; I think it was the maintenance
that concerned him.
> That's totally understandable, but IMO we should be clear-headed about
> what we want to achieve in discussions like this: getting these pesky,
> clueless, leeching vendors to bugger off somewhere else, or evolving
> our community and processes to cope with, and nurture, the massive
> interest we're seeing.

Yes, it's a balancing act. John has outlined the technical concerns,
though, and to me it sounds a bit like the ML2 driver in Neutron. Which
team came up with the ML2 driver, core or a vendor? Is the maintenance
expected from the core Cinder team?

Thanks,
Anne
> HTH,
> Mark.
>
> On Mon, 2014-07-14 at 10:03 -0600, John Griffith wrote:
> > Hello TC Members,
> >
> > Some of you have maybe heard my opinions and position on the
> > so-called "Software Defined Storage" trend in Cinder. For those that
> > aren't familiar, here's the summary:
> >
> > Basically, the first proposal came from EMC several months back,
> > when they submitted a "driver" that was an abstraction layer for
> > multiple EMC and non-EMC storage devices [1]. The goal was to
> > provide a Cinder volume driver that implements an entirely new
> > abstraction and scheduling layer for heterogeneous devices connected
> > behind an EMC ViPR node. The first problem I had with this was that
> > patch-set 1 was a 28,924-line code dump of a "driver". The second
> > was that it duplicates the scheduler, API abstraction, etc.
> >
> > My bigger concern is that every vendor, and now some other open
> > source groups, see this as a great marketing tool and a potential
> > driver of new sales for their company. There have been additional
> > proposals for similar abstractions in the Juno cycle (and
> > notification from at least two other vendors that they'll be
> > submitting something similar). For those that offer "software-based
> > storage on top of commodity servers or JBODs", it seems
> > reasonable/good to have a Cinder driver. The problem is that you
> > can't take just that piece of their product; you get all or nothing.
> >
> > The concept itself (and the implementation) in some of these is
> > pretty cool, and I see real value in them. What I don't see as
> > beneficial, however, is them being in the Cinder source code. I
> > don't think the support and overhead of maintaining entire
> > duplications of the Cinder services is a great idea. To be honest,
> > I've spent the last year trying to figure out a reasonable way to
> > have ZERO third-party drivers in Cinder while still providing some
> > way of ensuring quality.
> >
> > Anyway, I'd like to get some feedback from the TC on all of this,
> > whether as an official agenda item for an upcoming meeting or, at
> > the very least, as feedback to this email. In my opinion this is
> > very much a TC item and is exactly the sort of thing the TC should
> > be interested in. I can certainly make decisions on my own based on
> > feedback from the rest of the Cinder community and my own personal
> > view, but I believe this has a broader impact.
> >
> > Most of these products also extend at some point to direct
> > integrations as object stores, image repositories, Manila share
> > plugins, and direct integrations into Nova. I think this needs to
> > be well thought out, and the long-term impacts on the OpenStack
> > projects should be considered.
> >
> > Thanks,
> > John
> >
> > [1]: https://review.openstack.org/#/c/74158/
> >
> > Additional/Similar proposals:
> > https://review.openstack.org/#/c/106742/
> > https://review.openstack.org/#/c/101688/
</div></div><div class=""><div class="h5">> _______________________________________________<br>
> OpenStack-TC mailing list<br>
> <a href="mailto:OpenStack-TC@lists.openstack.org">OpenStack-TC@lists.openstack.org</a><br>
> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc</a><br>
<br>
<br>
<br>
_______________________________________________<br>
OpenStack-TC mailing list<br>
<a href="mailto:OpenStack-TC@lists.openstack.org">OpenStack-TC@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc</a><br>
</div></div></blockquote></div><br></div></div>