[openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

Clay Gerrard clay.gerrard at gmail.com
Fri Aug 12 17:59:32 UTC 2016


The use_untested_probably_broken_deprecated_manager_so_maybe_i_can_migrate_cross_fingers
option sounds good!  The experiment would then be whether it's still enough of a
stick to keep 3rd party drivers ponied up on their commitment to the Cinder
team to consistently ship quality releases.

What about the operator just not upgrading until after the migration?  It's
the migration that sucks, right?  You either punt a release and hope the driver
gets "back in good faith," or do it now and that 3rd party driver has
lost your business/trust.

-Clay

On Friday, August 12, 2016, Walter A. Boring IV <walter.boring at hpe.com> wrote:

>
> I was leaning towards a separate repo until I started thinking about all
> the overhead and complications this would cause. It's another repo for
> cores to watch. It would cause everyone extra complication in setting up
> their CI, which is already one of the biggest roadblocks. It would make
> it a little harder to do things like https://review.openstack.org/297140
> and https://review.openstack.org/346470 to be able to generate this:
> http://docs.openstack.org/developer/cinder/drivers.html.  Plus more infra
> setup, more moving parts to break, and just generally more
> complications.
>
> All things that can be solved for sure. I just question whether it would
> be worth having that overhead. Frankly, there are better things I'd like
> to spend my time on.
>
> I think at this point my first preference would actually be to define a
> new tag. This addresses both the driver removal issue and the
> backporting of driver bug fixes. I would like to see third party drivers
> recognized and treated as being different, because in reality they are
> very different from the rest of the code. Having something like
> follows_deprecation_but_has_third_party_drivers_that_dont would make a
> clear statement that there is a vendor component to this project that
> really has to be treated differently and has different concerns
> deployers need to be aware of.
>
> Barring that, I think my next choice would be to remove the tag. That
> would really be unfortunate as we do want to make it clear to users that
> Cinder will not arbitrarily break APIs or do anything between releases
> without warning when it comes to non-third party drivers. But if that is
> what we need to do to effectively communicate what to expect from
> Cinder, then I'm OK with that.
>
> My last choice (of the ones I'm favorable towards) would be marking a
> driver as untested/unstable/abandoned/etc rather than removing it. We
> could flag these a certain way and have them spam the logs like crazy
> after upgrade to make it very, painfully clear that they are not
> being maintained. But as Duncan pointed out, this doesn't have as much
> impact for getting vendor attention. It's amazing the level of executive
> involvement that can happen after a patch is put up for driver removal
> due to non-compliance.
>
> Sean
>
> __________________________________________________________________________
>
> I believe there is a compromise we could implement in Cinder that gives us
> a deprecation path for unsupported drivers that aren't meeting the Cinder
> driver requirements, and allows upgrades to keep working without outright
> removing a driver immediately.
>
>
>    1. Add a 'supported = True' attribute to every driver.
>    2. When a driver no longer meets Cinder community requirements, put a
>    patch up against the driver setting that flag to False.
>    3. When c-vol service starts, check the supported flag.  If the flag
>    is False, then log an exception, and disable the driver.
>    4. Allow the admin to put an entry in cinder.conf for the driver in
>    question "enable_unsupported_driver = True".  This will allow the c-vol
>    service to start the driver and allow it to work.  Log a warning on every
>    driver call (rough sketch after this list).
>    5. This is a positive acknowledgement by the operator that they are
>    enabling a potentially broken driver. Use at your own risk.
>    6. If the vendor doesn't get the CI working in the next release, then
>    remove the driver.
>    7. If the vendor gets the CI working again, then set the supported
>    flag back to True and all is good.
>
>
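> Roughly, steps 3 through 5 might look something like the sketch below.
> None of this is actual Cinder code -- the driver class, helper name, and
> the way the config value gets passed in are just made up to illustrate the
> idea (the real service would go through oslo.config/oslo.log rather than
> the stdlib):
>
>     import logging
>
>     LOG = logging.getLogger(__name__)
>
>
>     class FooVolumeDriver(object):
>         # Step 1: every driver carries a 'supported' attribute.  A patch
>         # against a non-compliant driver simply flips this to False.
>         supported = True
>
>
>     def check_driver_supported(driver, enable_unsupported_driver=False):
>         """Run when the c-vol service initializes a backend (steps 3-5).
>
>         Returns True if the driver may be loaded, False if the backend
>         should be disabled.  'enable_unsupported_driver' stands in for
>         the proposed cinder.conf option of the same name.
>         """
>         if driver.supported:
>             return True
>         if enable_unsupported_driver:
>             # Step 5: the operator has positively acknowledged the risk,
>             # so load the driver anyway but warn loudly (and again on
>             # every driver call) so nobody forgets it is unsupported.
>             LOG.warning("Driver %s no longer meets Cinder community "
>                         "requirements and may be removed in a future "
>                         "release.  Use at your own risk.",
>                         driver.__class__.__name__)
>             return True
>         # Step 3: unsupported and not acknowledged -- log and disable.
>         LOG.error("Driver %s is unsupported; this backend will not be "
>                   "started.", driver.__class__.__name__)
>         return False
>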
> This allows a deprecation period for a driver, and keeps operators who
> upgrade their deployment from losing access to the volumes they have on
> those back-ends.  It will give them time to contact the community and/or do
> some research, and find out what happened to the driver.   This also
> potentially gives the operator time to find a new supported backend and
> start migrating volumes.  I say potentially, because the driver may be
> broken, or it may work enough to migrate volumes off of it to a new backend.
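>
> For the operator, the acknowledgement is just one extra line in the backend
> section of cinder.conf -- something like this (the backend name and driver
> path are only an example, and enable_unsupported_driver is the proposed
> option from the list above, not an existing one):
>
>     [foo_backend]
>     volume_driver = cinder.volume.drivers.foo.FooISCSIDriver
>     volume_backend_name = foo_backend
>     # positive acknowledgement that this driver is flagged unsupported
>     enable_unsupported_driver = True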
>
> Having unsupported drivers in tree is terrible for the Cinder community,
> and in the long run terrible for operators.
> Instantly removing drivers because CI is unstable is terrible for
> operators in the short term, because as soon as they upgrade OpenStack,
> they lose all access to managing their existing volumes.   Just because we
> leave a driver in tree in this state doesn't mean that the operator will
> be able to migrate if the driver is broken, but they'll have a chance
> depending on the state of the driver in question.  It could be horribly
> broken, but the breakage might be something fixable by someone who just
> knows Python.   If the driver is gone from tree entirely, then that's a lot
> more to overcome.
>
> I don't think there is a way to make everyone happy all the time, but I
> think this buys operators a small window of opportunity to still manage
> their existing volumes before the driver is removed.  It also still allows
> the Cinder community to deal with unsupported drivers in a way that will
> motivate vendors to keep their stuff working.
>
> My $0.02
> Walt
>