[openstack-dev] Third party testing

Robert Collins robertc at robertcollins.net
Sat Jan 18 01:24:14 UTC 2014


On 18 January 2014 06:42, John Griffith <john.griffith at solidfire.com> wrote:
> On Fri, Jan 17, 2014 at 1:15 AM, Robert Collins
> <robertc at robertcollins.net> wrote:

> Maybe this is going a bit sideways, but my point was that getting
> periodic runs on vendor gear and publicly submitting those results
> would be a good first step and a SIGNIFICANT improvement over what we
> have today.
>
> It seems to me that "requiring" every vendor to have a dedicated
> in-house deployment reserved 24/7 might be a tough order right out of
> the gate.  That being said, of course I'm willing and able to do that
> for my employer, but feedback from others hasn't been quite so
> amenable.
>
> The feedback here seems significant enough that maybe gating every
> change is the way to go though.  I'm certainly willing to opt in to
> that model and get things off the ground.  I do have a few concerns
> (number 3 being the most significant):
>
> 1. I don't want ANY commit/patch waiting for a vendor's
> infrastructure to run a test.  We would definitely need a timeout
> mechanism or something along those lines to ensure none of this
> disrupts the gate.
>
> 2. Isolating this to changes in Cinder seems fine; the intent was
> mostly a compatibility / features check.  This takes it up a notch
> and allows us to detect right away when something breaks, which is
> certainly a good thing.
>
> 3. Support and maintenance is a concern here.  We have a first-rate
> community that ALL pulls together to make our gating and
> infrastructure work in OpenStack.  Even with that it's still hard for
> everybody to keep up, due to the number of projects and simply the
> volume of patches that go in on a daily basis.  There's no way I
> could do the regular job I'm already doing AND maintain my own
> fork/install of the OpenStack gating infrastructure.
>
> 4. Despite all of the heavyweight corporations throwing resource
> after resource at OpenStack, keep in mind that it is still an Open
> Source community.  I don't want to do ANYTHING that would make it
> unfriendly to folks who would like to commit.  Keep in mind that the
> vendors here aren't necessarily all large corporations, or even all
> paid-for proprietary products.  There are open source storage drivers
> in Cinder, for example, whose maintainers may or may not have the
> resources to make this happen, but that doesn't mean they shouldn't
> be allowed to have code in OpenStack.
>
> The problem I see is that there are drivers/devices that flat out
> don't work, and end users (heck, even some vendors that choose not to
> test) don't discover this until they've purchased a bunch of gear and
> tried to deploy their cloud.  What I was initially proposing here was
> just a more formal, public and community-backed representation of
> whether a device works as advertised or not.
>
> Please keep in mind that my proposal here was a first-step sort of
> test case.  Rather than start with something HUGE like deploying the
> OpenStack CI in every vendor's lab to test every commit (and I'm
> sorry for those that don't agree, but that does seem like a
> SIGNIFICANT undertaking), why not take incremental steps to make
> things better and learn as we go along?

Certainly - I totally agree that anything >> nothing. I was asking
about your statement of not having enough infra, to get a handle on
what would block things. As you know, tripleo is standing up a
production-quality test cloud to test tripleo, Ironic and, once we get
everything in place, multinode gating jobs. We're *super* interested
in making the bar to increased validation as low as possible.

I broadly agree with your points 1 through 4, of course!
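
On your point 1: the timeout really can live entirely on the vendor
side, with the result reported back as a non-voting comment, so a hung
vendor run can never hold a patch. A rough, untested sketch of what I
mean (the script name, change ref and reporting step are all invented):

    #!/usr/bin/env python
    # Untested sketch of a per-change wrapper a vendor CI could run.
    import subprocess

    TIMEOUT = 90 * 60  # hard cap in seconds; the gate never waits on us


    def run_vendor_check(change_ref):
        """Run tempest against our backend for one proposed Cinder change."""
        try:
            subprocess.run(['./run_tempest_against_backend.sh', change_ref],
                           check=True, timeout=TIMEOUT)
            result = 'SUCCESS'
        except subprocess.TimeoutExpired:
            result = 'TIMED_OUT'  # reported, but non-voting, so blocks nothing
        except subprocess.CalledProcessError:
            result = 'FAILURE'
        # Post the result back as a non-voting Gerrit comment (elided here).
        print('%s: %s' % (change_ref, result))


    if __name__ == '__main__':
        run_vendor_check('refs/changes/56/3456/7')

The key property is that both the timeout and the report are advisory:
nothing in the gate ever waits on the vendor job, it just surfaces the
result.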

-Rob


-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


