[openstack-dev] [OpenStack][Cinder] Driver qualification
John Griffith
john.griffith at solidfire.com
Tue Jul 30 19:04:31 UTC 2013
On Tue, Jul 30, 2013 at 11:47 AM, Walter A. Boring IV
<walter.boring at hp.com> wrote:
> So how frequently would this be required and when do the results need to
> be provided?
>
To start, I'd propose this for any new driver, and then for each existing
driver upon milestone release, or at the very least once each cycle.
> I generally think this is a good idea, but as we all know, the
> project seems to be in flux at times around the milestones. Especially
> around G3, when we changed the way the CONF object was accessed, almost
> all drivers failed to work when cinder.conf was configured with a
> multi-backend setup. It took a week or so after G3 before everyone
>
IMO this is exactly why something like this would be so beneficial.
> was able to go through the drivers, clean them up, and get them working
> in that scenario. I'm not trying to be a wet blanket on this idea, but I
> think we just have to be careful. I have no problem running my drivers
> through this and providing the results. But what happens when something
> changes in cinder that causes drivers, and therefore the test runs, to
> fail? How long
>
We should be extra careful to keep this sort of thing from happening, but
again, without any sort of public testing/results we run the risk of nobody
knowing a driver is broken until a customer encounters it. I'm also not
quite sure I see the issue here: if something in the Cinder base code
breaks a driver, it's still going to be broken; the only difference is
that we'll actually know it's broken.
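
(For reference, the multi-backend setup Walt mentions is driven by
per-backend sections in cinder.conf along these lines; the backend names
and values here are just examples:)

    [DEFAULT]
    enabled_backends = lvm-1, lvm-2

    [lvm-1]
    volume_group = stack-volumes-1
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI

    [lvm-2]
    volume_group = stack-volumes-2
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM_iSCSI_2

Each enabled backend gets its own driver instance configured from its own
section, and it's presumably that per-backend CONF handling that the G3
change tripped over.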
> does a maintainer get to fix the issue before X happens as a result?
> Does this happen at every milestone (I-1, I-2, I-3)? What happens if a
> maintainer can't do it for a given milestone for whatever reason?
>
I don't think we've arrived at a point where we say X happens yet. To
start, I'd view this as logging a bug if a driver doesn't pass. Ultimately
it would probably be interesting to consider things like removing a driver,
but there would have to be some process set up for how we try to address
the issue before doing something that drastic.
>
> Just playing a bit of devil's advocate here. I do like the idea though,
> just depends on the "rules" setup and how it all applies when things don't
> go well for a particular driver.
>
Sure, that's fine, and I think you bring up a good point. The idea here is
NOT to make things difficult or to try and keep drivers out etc; the idea
is to release a better product. There are a number of folks who have
drivers that are "believed" to work, but since there's no formal
testing/integration, that's just an assumption in the community. This
proposal would at least make it public information whether a driver
actually works or not. I mean really, I don't think this is asking too
much, considering we require any patch to the projects in OpenStack to run
these tests against the LVM driver to make sure things work. This really
isn't any different, except that we don't *require* it for every check-in.
We would just do checks to make sure drivers are actually doing what we
expect, so end-users aren't surprised to find their driver doesn't
actually work.
>
> Cheers,
> Walt
>
>
> Hey Everyone,
>
> Something I've been kicking around for quite a while now, but never
> really been able to get around to, is the idea of requiring that Cinder
> drivers run a qualification test and submit the results prior to their
> introduction into Cinder.
>
> To elaborate a bit, the idea could start as something really simple like
> the following (a rough sketch of steps 3 and 4 follows the list):
> 1. We'd add a functional_qual option/script to devstack
>
> 2. Driver maintainer runs this script to set up devstack and configure it
> to use their backend device on their own system.
>
> 3. Script does the usual devstack install/configure and runs the volume
> pieces of the Tempest gate tests.
>
> 4. Grabs the output and checksums of the devstack and /opt/stack
> directories, and bundles up the results for submission
>
> 5. Maintainer submits results
>
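> A very rough sketch of what steps 3 and 4 might look like (the script
> name, the trees it fingerprints, and the exact Tempest invocation are
> all made up here, just to give the flavor):
>
>     #!/usr/bin/env python
>     # functional_qual.py (hypothetical sketch): run the Tempest volume
>     # tests, checksum the /opt/stack tree, and bundle the results for
>     # submission.
>     import hashlib
>     import os
>     import subprocess
>     import tarfile
>
>     # Trees to fingerprint; add your devstack checkout here too if it
>     # lives outside /opt/stack.
>     TREES = ['/opt/stack']
>
>     def checksum_tree(root):
>         """Map each readable file under root to its sha256 digest."""
>         sums = {}
>         for dirpath, _dirs, files in os.walk(root):
>             for name in files:
>                 path = os.path.join(dirpath, name)
>                 digest = hashlib.sha256()
>                 try:
>                     with open(path, 'rb') as f:
>                         for chunk in iter(lambda: f.read(65536), b''):
>                             digest.update(chunk)
>                 except (IOError, OSError):
>                     continue  # skip broken symlinks and the like
>                 sums[path] = digest.hexdigest()
>         return sums
>
>     # Step 3: run just the volume pieces of the Tempest gate tests,
>     # capturing the output.
>     with open('tempest_volume.log', 'w') as log:
>         subprocess.call(['nosetests', '-v', 'tempest.api.volume'],
>                         stdout=log, stderr=subprocess.STDOUT)
>
>     # Step 4: record the checksums and bundle everything up.
>     with open('checksums.txt', 'w') as out:
>         for root in TREES:
>             for path, digest in sorted(checksum_tree(root).items()):
>                 out.write('%s  %s\n' % (digest, path))
>
>     tar = tarfile.open('qual_results.tar.gz', 'w:gz')
>     tar.add('tempest_volume.log')
>     tar.add('checksums.txt')
>     tar.close()
>
> The maintainer would then submit qual_results.tar.gz to wherever we
> decide results should go (step 5).
>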
> So why would we do this, you ask? Cinder is pretty heavy on the third
> party driver plugin model, which is fantastic. On the other hand, while
> there are a lot of folks who do great reviews that catch things like
> syntax or logic errors in the code, and unit tests do a reasonable job of
> exercising the code, it's difficult for reviewers to truly verify that
> these devices all work.
>
> I think it would be a very useful tool for the initial introduction of a
> new driver, and perhaps even as some sort of check that's run and
> submitted again prior to milestone releases.
>
> This would also drive more activity and contribution in Tempest, by
> getting folks like myself motivated to contribute more tests
> (particularly for new functionality) to Tempest.
>
> I'd be interested to hear whether folks have any interest or strong
> opinions on this (positive or otherwise). I know that some vendors like
> Red Hat have this sort of thing in place for certifications, and to be
> honest that observation is what caused me to start thinking about this
> again.
>
> There are a lot of gaps here regarding how the submission process would
> look, but we could start relatively simple and grow from there if it
> proves valuable, or just abandon the idea if it turns out to be unpopular
> and a waste of time.
>
> Anyway, I'd love to get feedback from folks and see what they think.
>
> Thanks,
> John
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>