[openstack-dev] [cinder] Target classes in Cinder

John Griffith john.griffith8 at gmail.com
Fri Jun 2 22:41:57 UTC 2017


On Fri, Jun 2, 2017 at 3:51 PM, Jay Bryant <jsbryant at electronicjungle.net>
wrote:

> I had forgotten that we added this and am guessing that other cores did as
> well.  As a result, it likely was not enforced in driver reviews.
>
> I need to better understand the benefit.  I don't think there is a hurry
> to remove this right now.  Can we put it on the agenda for Denver?

Yeah, I think it's an out of sight, out of mind thing... and maybe just
having the volume/targets module on its own is good enough, regardless of
whether drivers want to do child inheritance or member inheritance against
it.

Meh... ok, never mind.


>
>
> Jay
>
> On Fri, Jun 2, 2017 at 4:14 PM Eric Harney <eharney at redhat.com> wrote:
>
>> On 06/02/2017 03:47 PM, John Griffith wrote:
>> > Hey Everyone,
>> >
>> > So quite a while back we introduced a new model for dealing with target
>> > management in the drivers (i.e. initialize_connection, ensure_export,
>> > etc.).
>> >
>> > Just to summarize a bit: the original model was that all of the
>> > target-related stuff lived in a base class of the base drivers.  Folks
>> > would inherit from said base class and off they'd go.  This wasn't very
>> > flexible, and it's why we ended up with things like two drivers per
>> > backend in the case of Fibre Channel support.  So instead of just
>> > having "driver-foo", we ended up with "driver-foo-iscsi" and
>> > "driver-foo-fc", each with their own CI, configs, etc.  Kind of
>> > annoying.
>>
>> We'd need separate CI jobs for the different target classes too.
>>
>>
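(For anyone that hasn't been around that long, a rough sketch of the old
pattern is below; "Foo" is a made-up vendor name, but driver.ISCSIDriver
and driver.FibreChannelDriver are the actual base classes involved:)

    # Old inheritance-based pattern: one shared "common" class plus a
    # driver class per transport, each shipped and CI'd separately.
    from cinder.volume import driver


    class FooCommon(object):
        """Shared backend logic (API client, provisioning, etc.)."""


    class FooISCSIDriver(FooCommon, driver.ISCSIDriver):
        """The 'driver-foo-iscsi' driver; iSCSI target handling is
        inherited from the base class."""


    class FooFCDriver(FooCommon, driver.FibreChannelDriver):
        """The 'driver-foo-fc' driver; same backend, different base."""
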
>> > So we introduced this new model for targets, independent connectors or
>> > fabrics, so to speak, that live in `cinder/volume/targets`.  The idea
>> > being that drivers were no longer locked into inheriting from a base
>> > class to get the transport layer they wanted; instead, the targets
>> > class was decoupled, and your driver could just instantiate whichever
>> > type it needed and use it.  This was great in theory for folks like me:
>> > if I ever did FC, rather than creating a second driver (the pattern of
>> > three classes: common, iSCSI, and FC), it would just be a config option
>> > for my driver, and I'd use whichever one was selected in config (or
>> > both).
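For anyone that hasn't dug into the code, here's roughly what the
decoupled version looks like.  MyDriver and _get_volume_path are made up
for illustration, but the target class paths and the importutils pattern
mirror what the LVM driver does today:

    # Sketch of a driver composing a target driver chosen via config
    # instead of inheriting one.  MyDriver and _get_volume_path are
    # hypothetical; the classes under cinder/volume/targets are real.
    from oslo_utils import importutils

    from cinder.volume import driver


    class MyDriver(driver.VolumeDriver):

        target_mapping = {
            'tgtadm': 'cinder.volume.targets.tgt.TgtAdm',
            'lioadm': 'cinder.volume.targets.lio.LioAdm',
        }

        def __init__(self, *args, **kwargs):
            super(MyDriver, self).__init__(*args, **kwargs)
            # Pick the transport at runtime from cinder.conf rather
            # than baking it into the class hierarchy.
            target_class = self.target_mapping[
                self.configuration.safe_get('iscsi_helper')]
            self.target_driver = importutils.import_object(
                target_class,
                configuration=self.configuration,
                executor=self._execute)

        def ensure_export(self, context, volume):
            # All target/export work is delegated to the composed
            # target object.
            volume_path = self._get_volume_path(volume)
            return self.target_driver.ensure_export(
                context, volume, volume_path)
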
>> >
>> > Anyway, I won't go too far into the details around the concept (unless
>> > somebody wants to hear more), but the reality is it's been a couple of
>> > years now, and currently it looks like there are a total of 4 out of
>> > the 80+ drivers in Cinder using this design: blockdevice, solidfire,
>> > lvm, and drbd (and I implemented 3 of them, I think... so that's not
>> > good).
>> >
>> > What I'm wondering is, even though I certainly think this is a FAR
>> > SUPERIOR design to what we had, I don't like having both code paths and
>> > designs in the code base.  Should we consider reverting the drivers
>> > that are using the new model and remove cinder/volume/targets?  Or
>> > should we start flagging new drivers that don't use the new model
>> > during review?  Also, what about the legacy/burden of all the other
>> > drivers that are already in place?
>> >
>> > Like I said, I'm biased and I think the new approach is much better in
>> > a number of ways, but that's a different debate.  I'd be curious to see
>> > what others think and what might be the best way to move forward.
>> >
>> > Thanks,
>> > John
>> >
>>
>> Some perspective from my side here: before reading this mail, I had a
>> somewhat different idea of what the target_drivers were actually for.
>>
>> The LVM, block_device, and DRBD drivers use this target_driver system
>> because they manage "local" storage and then layer an iSCSI target on
>> top of it (scsi-target-utils, LIO, etc.).  This makes sense from the
>> original POV of the LVM driver, which did this to work across multiple
>> distributions that had to pick scsi-target-utils or LIO to function at
>> all.  The important detail here is that the scsi-target-utils/LIO code
>> could then also be applied to different volume drivers.
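(Right, and for reference that choice is just a knob on the backend
section in cinder.conf; the section name below is illustrative:)

    [lvm-backend]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    # lioadm for LIO, tgtadm for scsi-target-utils
    iscsi_helper = lioadm
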
>>
>> The Solidfire driver is doing something different here: it uses the
>> target_driver classes as an interface on which it defines its own
>> target driver.  This splits up the code within the driver itself, but
>> doesn't enable plugging other target drivers into the Solidfire driver.
>> So the fact that it's tied to this defined target_driver class
>> interface doesn't change much.
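Yeah, that's fair.  The SolidFire pattern is basically subclassing the
target interface inside the driver's own module; a simplified sketch
(_do_backend_export is hypothetical, the rest follows the targets
interface):

    # The driver ships its own target class built on the generic
    # interface, rather than letting deployers plug an arbitrary one in.
    from cinder.volume.targets import iscsi


    class SolidFireISCSI(iscsi.SanISCSITarget):
        """Vendor-specific target driver, private to the volume driver."""

        def create_export(self, context, volume, volume_path):
            # Talks to the SolidFire API rather than a local tgt/LIO
            # daemon; _do_backend_export is a stand-in for that call.
            return self._do_backend_export(volume)
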
>>
>> The question, I think, mostly comes down to whether you get better code,
>> or better deployment configurability, by a) defining a few target
>> classes for your driver, or b) defining a few volume driver classes for
>> your driver.  (See coprhd or Pure for some examples.)
>>
>> I'm not convinced there is any difference in the outcome, so I can't see
>> why we would enforce any policy around this.  The main difference is in
>> which cinder.conf fields you set during deployment; the rest pretty much
>> ends up the same in either scheme.
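For what it's worth, the deployer-visible difference between a) and b)
really is just which fields you set, e.g. (backend section names are
illustrative; the Pure classes are the real ones Eric mentions):

    # a) one volume driver class, transport chosen by a helper option:
    [backend-a]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    iscsi_helper = lioadm

    # b) transport baked into which volume driver class you pick:
    [backend-b]
    volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
    # (or cinder.volume.drivers.pure.PureFCDriver for FC)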