[all] stable/ocata gate failure

Ghanshyam Mann gmann at ghanshyammann.com
Tue Sep 17 13:55:28 UTC 2019


 ---- On Mon, 16 Sep 2019 02:57:33 -0700 Ghanshyam Mann <gmann at ghanshyammann.com> wrote ----
 >  ---- On Sun, 15 Sep 2019 02:01:56 +0900 Matt Riedemann <mriedemos at gmail.com> wrote ----
 >  > On 9/14/2019 12:19 AM, Ghanshyam Mann wrote:
 >  > > You may have noticed that the stable/ocata gate is blocked: the 'legacy-tempest-dsvm-neutron-full/-*' jobs
 >  > > are failing due to the latest Tempest changes.
 >  > > 
 >  > > Tempest started strict JSON schema validation for the Volume APIs, which caught this failure; in other
 >  > > words, Tempest master can no longer be used for Ocata testing. More details: https://bugs.launchpad.net/tempest/+bug/1843762
 >  > > 
 >  > > As per the Tempest stable branch testing policy[1], Tempest does not support stable/ocata (which is in
 >  > > Extended Maintenance) in the current development cycle. The stable branches currently supported by Tempest
 >  > > are Queens, Rocky, Stein, and the in-progress Train. We can keep using Tempest master on EM stable branches
 >  > > as long as it runs successfully; once it starts failing, which is now the case for stable/ocata, we switch
 >  > > to a Tempest tag to test that EM stable branch.
 >  > > 
 >  > > To unblock the stable/ocata gate, I am trying to install Tempest 20.0.0 (the compatible version for Ocata)
 >  > > in the ocata gate: https://review.opendev.org/#/c/681950/
 >  > > The fix is not working as of now (it still installs Tempest master). I will debug it later (my current
 >  > > priority is the Train feature freeze).
 >  > > 
 >  > > [1]https://docs.openstack.org/tempest/latest/stable_branch_support_policy.html
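
To make the "use a Tempest tag" option above concrete: at the git level it just means checking out
the compatible release tag in the tempest repo the job uses; a minimal sketch (the path is
illustrative, not the job's exact layout):

    cd /opt/stack/tempest   # illustrative path for the tempest checkout
    git checkout 20.0.0     # plain git resolves tag names directly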
 >  > 
 >  > Thanks for the heads up. I agree that being able to continue to run 
 >  > tempest integration jobs on stable/ocata changes, even with a frozen 
 >  > tempest version, is better than not running integration testing on 
 >  > stable/ocata at all. When I was at IBM and we supported branches 
 >  > downstream that were end of life upstream, what I'd do was create an 
 >  > internal branch for tempest (stable/ocata in this case) so we'd run 
 >  > against that rather than master tempest, just in case we needed to make 
 >  > changes and couldn't use a tag (back then tags for tempest were also 
 >  > pretty new as I recall). I'm not advocating creating a stable/ocata 
 >  > branch for tempest upstream, I'm just giving an example of one 
 >  > downstream process for this sort of thing.
 > 
 > Thanks for that information. I think creating a stable/ocata branch in Tempest would run into maintenance issues.
 > Let's try the tag first and see if that works fine.

I fixed it by using a git ref instead of the tag. Tag support is not there in the git_clone() function for the
RECLONE=False case; we could add it, but that would have to land on master first and then be backported.
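
For context, here is a paraphrased sketch of git_clone()'s control flow (not the verbatim devstack
function; see functions-common for the real code), showing why a 'refs/...' ref works on an
already-cloned repo while a bare tag is only handled on the RECLONE=True path:

    # Paraphrased sketch of devstack's git_clone(); not the verbatim function.
    function git_clone_sketch {
        local git_remote=$1   # e.g. https://opendev.org/openstack/tempest
        local git_dest=$2     # e.g. /opt/stack/tempest
        local git_ref=$3      # branch name, tag, or refs/... path

        if echo $git_ref | egrep -q "^refs"; then
            # A 'refs/...' ref (e.g. refs/tags/20.0.0) is always fetched
            # and checked out, even on an existing clone.
            cd $git_dest
            git fetch $git_remote $git_ref && git checkout FETCH_HEAD
        elif [[ ! -d $git_dest ]]; then
            # Fresh clone: a plain checkout handles branches and tags alike.
            git clone $git_remote $git_dest
            cd $git_dest
            git checkout $git_ref
        elif [[ "$RECLONE" = "True" ]]; then
            # Only this path updates an existing clone and handles tags;
            # with RECLONE=False the gate's pre-cloned repo never reaches
            # it, so a bare tag like '20.0.0' is not checked out.
            cd $git_dest
            git fetch origin && git checkout $git_ref
        fi
    }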

The fix is working and has merged now: https://review.opendev.org/#/c/681950/
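
For anyone who needs the same pin on another EM branch, the override has roughly this shape (a
sketch; the exact change is in the review above). TEMPEST_BRANCH is the existing devstack variable,
and refs/tags/20.0.0 points at the same commit as the 20.0.0 tag but goes through the refs code
path that git_clone() does handle:

    # devstack setting (sketch; see the review above for the exact change)
    TEMPEST_BRANCH=refs/tags/20.0.0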

-gmann

 > 
 > 
 >  > 
 >  > Alternatively Cinder could fix the API regression, but that would likely 
 >  > be a regression of its own at this point, right? Meaning if they added 
 >  > something to an API response without a microversion and then removed it 
 >  > without a microversion, that's not really helping the situation. As it 
 >  > stands, clients (in this case tempest) have to deal with the API change.
 > 
 > I am on the same page with you on this, but there are differing opinions on how to change an API
 > with a microversion. I have started a separate thread to discuss the correct way to change an API:
 > - http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009365.html
 > 
 > -gmann
 > 
 >  > 
 >  > Another alternative would be putting some kind of compat code in tempest 
 >  > for this particular API breakage, but if Tempest isn't going to gate on 
 >  > stable/ocata then it's not really the QA team's responsibility to carry 
 >  > that compat code.
 > 
 > Yeah, as per the Extended Maintenance stable branch testing policy, Tempest would not be able
 > to maintain that compat code. It becomes difficult on both the maintenance side and the
 > strict-verification side.
 > 
 > -gmann
 > 
 >  > 
 >  > -- 
 >  > 
 >  > Thanks,
 >  > 
 >  > Matt
 >  > 
 >  > 
 > 



