<div dir="ltr"><div class="gmail_default" style="font-family:monospace,monospace"><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jun 2, 2017 at 3:51 PM, Jay Bryant <span dir="ltr"><<a href="mailto:jsbryant@electronicjungle.net" target="_blank">jsbryant@electronicjungle.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I had forgotten that we added this and am guessing that other cores did as well. As a result, it likely was not enforced in driver reviews.<br><br>I need to better understand the benefit. I don't think there is a hurry to remove this right now. Can we put it on the agenda for Denver?</blockquote><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">Yeah, I think it's an out-of-sight, out-of-mind thing... and maybe just having the volume/targets module alone</div></div><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">is good enough regardless of whether drivers want to inherit from it or hold it as a</div></div><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">member.</div></div><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline"><br></div></div><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">Meh... ok, never mind.</div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="HOEnZb"><font color="#888888"><br><br>Jay</font></span><div class="HOEnZb"><div class="h5"><br><div class="gmail_quote"><div dir="ltr">On Fri, Jun 2, 2017 at 4:14 PM Eric Harney <<a href="mailto:eharney@redhat.com" target="_blank">eharney@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 06/02/2017 03:47 PM, John Griffith wrote:<br>
> Hey Everyone,<br>
><br>
> So quite a while back we introduced a new model for dealing with target<br>
> management in the drivers (ie initialize_connection, ensure_export etc).<br>
><br>
> Just to summarize a bit: The original model was that all of the target<br>
> related stuff lived in a base class of the base drivers. Folks would<br>
> inherit from said base class and off they'd go. This wasn't very flexible,<br>
> and it's why we ended up with things like two drivers per backend in the<br>
> case of Fibre Channel support. So instead of just having "driver-foo",<br>
> we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with its<br>
> own CI, configs, etc. Kind of annoying.<br>
<br>
We'd need separate CI jobs for the different target classes too.<br>
<br>
<br>
> So we introduced this new model for targets, independent connectors or<br>
> fabrics so to speak that live in `cinder/volume/targets`. The idea being<br>
> that drivers were no longer locked into inheriting from a base class to<br>
> get the transport layer they wanted; instead, the targets class was<br>
> decoupled, and a driver could just instantiate whichever type it needed<br>
> and use it. This was great in theory for folks like me: if I ever did<br>
> FC, rather than creating a second driver (the pattern of three classes:<br>
> common, iSCSI, and FC), it would just be a config option for my driver,<br>
> and I'd use whichever one was selected in the config (or both).<br>
><br>
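[Editorial sketch: the decoupled target model described above might look roughly like this. Every class, name, and option below is made up for illustration and is not Cinder's actual code.]<br>

```python
# Hedged sketch of "composition over inheritance" for targets.
# ISCSITarget/FCTarget/FooDriver/TARGETS are all hypothetical names.

class ISCSITarget:
    """Stand-in for an iSCSI target helper class."""
    def create_export(self, volume):
        return "iscsi export for %s" % volume

class FCTarget:
    """Stand-in for a Fibre Channel target helper class."""
    def create_export(self, volume):
        return "fc export for %s" % volume

# Hypothetical registry standing in for the classes in cinder/volume/targets.
TARGETS = {"iscsi": ISCSITarget, "fc": FCTarget}

class FooDriver:
    """One driver for one backend; the transport is a config choice."""
    def __init__(self, transport="iscsi"):
        # Instantiate the target helper instead of inheriting from it.
        self.target = TARGETS[transport]()

    def ensure_export(self, volume):
        # Delegate all transport-layer work to the composed target object.
        return self.target.create_export(volume)
```

With this shape, "driver-foo-iscsi" and "driver-foo-fc" collapse into one driver class whose transport is picked at configuration time.<br>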
> Anyway, I won't go too far into the details around the concept (unless<br>
> somebody wants to hear more), but the reality is it's been a couple years<br>
> now, and currently it looks like there are a total of 4 out of the 80+<br>
> drivers in Cinder using this design: blockdevice, solidfire, lvm, and<br>
> drbd (and I implemented 3 of them, I think... so that's not good).<br>
><br>
> What I'm wondering is: even though I certainly think this is a FAR SUPERIOR<br>
> design to what we had, I don't like having both code paths and designs in<br>
> the code base. Should we consider reverting the drivers that use the new<br>
> model and removing cinder/volume/targets? Or should we start flagging new<br>
> drivers that don't use the new model during review?<br>
> Also, what about the legacy/burden of all the other drivers that are<br>
> already in place?<br>
><br>
> Like I said, I'm biased and I think the new approach is much better in a<br>
> number of ways, but that's a different debate. I'd be curious to see what<br>
> others think and what might be the best way to move forward.<br>
><br>
> Thanks,<br>
> John<br>
><br>
<br>
Some perspective from my side here: before reading this mail, I had a<br>
somewhat different idea of what the target_drivers were actually for.<br>
<br>
The LVM, block_device, and DRBD drivers use this target_driver system<br>
because they manage "local" storage and then layer an iSCSI target<br>
(scsi-target-utils, LIO, etc.) on top of it. This makes sense from the<br>
original POV of the LVM driver, which was doing this to work on multiple<br>
different distributions that had to pick scsi-target-utils or LIO to<br>
function at all. The important detail here is that the<br>
scsi-target-utils/LIO code could also then be applied to different<br>
volume drivers.<br>
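[Editorial sketch of the layering described above: a "local storage" driver composed with a pluggable target helper. The names are illustrative only, not the real cinder.volume.targets classes.]<br>

```python
# Two interchangeable target helpers, standing in for the
# scsi-target-utils and LIO backends a distro might provide.

class TgtAdmHelper:
    name = "tgtadm"
    def export(self, device):
        return "%s target over %s" % (self.name, device)

class LioHelper:
    name = "lio"
    def export(self, device):
        return "%s target over %s" % (self.name, device)

class LVMLikeDriver:
    """Manages local volumes, then layers whichever target the distro has."""
    def __init__(self, helper):
        self.helper = helper

    def create_volume(self, name):
        device = "/dev/vg/%s" % name   # pretend LVM carved this out
        return self.helper.export(device)
```

The key property is that the same helper classes can be reused by any driver that manages local block devices.<br>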
<br>
The SolidFire driver is doing something different here: it uses the<br>
target_driver classes as an interface upon which it defines its own<br>
target driver. In this case, this splits up the code within the driver<br>
itself, but doesn't enable plugging other target drivers into the<br>
SolidFire driver. So the fact that it's tied to this defined<br>
target_driver class interface doesn't change much.<br>
<br>
The question, I think, mostly comes down to whether you get better code,<br>
or better deployment configurability, by a) defining a few target<br>
classes for your driver or b) defining a few volume driver classes for<br>
your driver. (See coprhd or Pure for some examples.)<br>
<br>
I'm not convinced there is any difference in the outcome, so I can't see<br>
why we would enforce any policy around this. The main difference is in<br>
which cinder.conf fields you set during deployment; the rest pretty much<br>
ends up the same in either scheme.<br>
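[Editorial sketch of the two deployment shapes being compared; the driver class paths and the FC section name are hypothetical, though `volume_driver` and `iscsi_helper` were real cinder.conf options at the time.]<br>

```ini
# (a) One driver class; the target is chosen by a target option.
[foo-backend]
volume_driver = cinder.volume.drivers.foo.FooDriver   # hypothetical driver
iscsi_helper = lioadm                                 # picks the target class

# (b) Separate volume driver classes per transport.
[foo-backend-fc]
volume_driver = cinder.volume.drivers.foo.FooFCDriver # hypothetical FC variant
```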
<br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</blockquote></div>
</div></div><br>
<br></blockquote></div><br></div></div>