[openstack-dev] [OpenStack-Dev] Third party testing

John Griffith john.griffith at solidfire.com
Thu Jan 16 01:51:38 UTC 2014


On Wed, Jan 15, 2014 at 6:41 PM, Michael Still <mikal at stillhq.com> wrote:
> John -- I agree with you entirely here. My concern is more that I
> think the CI tests need to run more frequently than weekly.

Completely agree, but to start these aren't really CI tests in the
usual sense.  Instead it's more of a public health report for the
various drivers that vendors provide.  I'd love to see a higher
frequency, but some of us don't have the infrastructure to run a test
against every commit.  Anyway, I think there's HUGE potential for
growth and adjustment as we go along.  I'd like to get something in
place to solve the immediate problem first though.

To be honest I'd even be thrilled just to see every vendor publish a
passing run against each milestone cut.  That in and of itself would
be a huge step in the right direction in my opinion.
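
For what it's worth, here is a minimal sketch of what publishing a run
against a milestone cut could look like on the vendor side.  The tag,
the paths and the verification script name are just placeholders, not
an agreed convention:

    # Sketch only: pin the Cinder checkout to a milestone tag and run
    # whatever driver verification job the vendor normally runs (the
    # script name is hypothetical); a non-zero exit means the milestone
    # run failed.
    import subprocess

    MILESTONE_TAG = "2014.1.b2"  # example icehouse milestone tag

    subprocess.run(["git", "-C", "/opt/stack/cinder", "checkout",
                    MILESTONE_TAG], check=True)
    subprocess.run(["python", "run_driver_verification.py"], check=True)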

>
> Michael
>
> On Thu, Jan 16, 2014 at 9:30 AM, John Griffith
> <john.griffith at solidfire.com> wrote:
>> On Wed, Jan 15, 2014 at 6:03 PM, Michael Still <mikal at stillhq.com> wrote:
>>> On Thu, Jan 16, 2014 at 6:28 AM, John Griffith
>>> <john.griffith at solidfire.com> wrote:
>>>> Hey Everyone,
>>>>
>>>> A while back I started talking about this idea of requiring Cinder
>>>> driver contributors to run a super simple cert script (some info here:
>>>> [1]).  Since then I've been playing with introducing a third party
>>>> gate check in my own lab.  My proposal was to have a non-voting
>>>> check that basically duplicates the base devstack gate test in my lab,
>>>> but runs periodic tests against the different back-end devices I have
>>>> available and configured in Cinder.  Long term I'd like to be able to
>>>> repurpose this gear to do something "more useful" for the overall
>>>> OpenStack gating effort, but to start it's strictly an automated
>>>> verification of my Cinder driver/backend.
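
For concreteness, a rough sketch of the kind of periodic check meant
here.  Everything in it is illustrative: the backend name, the tempest
invocation and the result file are assumptions, not an existing tool:

    #!/usr/bin/env python
    # Illustrative only: run the tempest volume tests against whatever
    # backend cinder.conf currently points at and record a pass/fail
    # entry that a later reporting job can publish.
    import datetime
    import json
    import subprocess

    def run_volume_tests():
        """Run the tempest volume API tests; True if they all pass."""
        proc = subprocess.run(
            ["tox", "-c", "/opt/stack/tempest/tox.ini", "-e", "all",
             "--", "tempest.api.volume"],
            capture_output=True, text=True)
        return proc.returncode == 0

    if __name__ == "__main__":
        backend = "example-iscsi"   # made-up volume_backend_name
        record = {"date": datetime.datetime.utcnow().isoformat(),
                  "backend": backend,
                  "result": "pass" if run_volume_tests() else "fail"}
        with open("driver-health.jsonl", "a") as fh:
            fh.write(json.dumps(record) + "\n")

Run from cron nightly or weekly, something that small is already enough
to produce the kind of public health report mentioned above.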
>>>>
>>>> What I'm questioning is how to report this information and the
>>>> results.  Currently patches and reviews are our mechanism for
>>>> triggering tests and providing feedback.  I, and many other vendors
>>>> that might like to participate in something like this, obviously
>>>> don't have the infrastructure to run it against every single commit.
>>>> Also, since it would be non-voting, it's difficult to capture and
>>>> track the results.
>>>>
>>>> One idea that I had was to set up something like what I've described
>>>> above to run locally on a periodic basis (weekly, nightly, etc.) and
>>>> publish the results to something like a "third party verification
>>>> dashboard".  The idea would be that results from the various third
>>>> party tests would all adhere to a certain set of criteria WRT what
>>>> they do and what they report, and those results would be logged and
>>>> tracked publicly for anybody in the OpenStack community to access
>>>> and view.
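
Purely as illustration of the kind of structured record such a
dashboard could collect; the endpoint URL and every field name below
are invented for the example, not an existing API:

    # Invented example payload and endpoint; nothing here exists today.
    import json
    import urllib.request

    report = {
        "project": "cinder",
        "vendor": "ExampleVendor",
        "driver": "ExampleISCSIDriver",
        "branch": "master",
        "suite": "tempest.api.volume",
        "result": "pass",
        "tests_run": 214,
        "failures": 0,
        "log_url": "http://example.com/logs/2014-01-16/",
    }

    req = urllib.request.Request(
        "http://example.com/third-party-dashboard/api/results",
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

The exact fields would obviously need agreeing on; the point is only
that every third party run reports the same minimal set.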
>>>
>>> My concern here is how to identify what patch broke the third party
>>> thing. If you run this once a week, then there are possibly hundreds
>>> of patches which might be responsible. How do you identify which one
>>> is the winner?
>>
>> To be honest I'd like to see more than once a week; however, the main
>> point of this is to have public testing of third party drivers.
>> Currently we say "it's in trunk and passed review and unit tests, so
>> you're good to go."  Frankly that's not sufficient; there needs to be
>> some sort of public testing that shows that a product/config actually
>> works, at least in a minimal sense.  This won't address things like a
>> bad patch breaking things, but again Cinder's case is a bit different:
>> this is designed more to show compatibility and integration
>> completeness.  If a patch goes in and breaks a vendor's driver but not
>> the reference implementation, that means the vendor has work to do to
>> bring their driver up to date.
>>
>> Cinder is not a dumping ground; the drivers in the code base should
>> not be static, but require continued maintenance and development as
>> the project grows.
>>
>> Non-voting tests on every patch seem unrealistic; however, there's no
>> reason vendors couldn't do that if they have the resources and so
>> choose.
>>
>>>
>>>> Does this seem like something that others would be interested in
>>>> participating in?  I think it's extremely valuable for projects like
>>>> Cinder that have dozens of backend devices, and regardless of other
>>>> interest or participation in the community I intend to implement
>>>> something like this on my own.  It would just be interesting to see
>>>> if we could have an organized and official effort to gather this
>>>> sort of information and run these types of tests.
>>>>
>>>> I'm open to suggestions and thoughts, as well as hearing from any of
>>>> you who may already be doing this sort of thing.  By the way, I've
>>>> been looking at things like SmokeStack and other third party gating
>>>> checks to get some ideas as well.
>>>
>>> Michael
>>>
>>> --
>>> Rackspace Australia
>>>
>>
>
>
>
> --
> Rackspace Australia
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


