Speaking as just one member of the group, and not as one of the co-chairs, I'm OK with strict checking going in, for a few reasons:

* We're doing it for Nova, so a precedent has been set.
* Vendors aren't supposed to change the API code, as all endpoints are designated sections in the interoperability guidelines.
* This adds a layer of robustness to the API by forcing correct responses.

As for our normal policy: if vendors object to the change, we have release valves in place for them to flag tests and possibly have temporary policies put into place, giving them an opportunity to migrate modified API code to upstream versions.

The only thing I might advise is a temporary flag to disable the checks, so that we can give vendors an appropriate period of time to adjust to the new test requirements. I recall that the sudden switch of test behaviors caught vendors off guard and was the primary source of problems surrounding the change.

-Chris
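A temporary flag like the one suggested above might take the shape of a config toggle in tempest.conf. This is only a sketch; the section and option names below are hypothetical, not existing Tempest options:

```
[volume-feature-enabled]
# Hypothetical option: allow deployers to temporarily disable strict
# response checking while vendors migrate to upstream API behavior.
strict_response_checking = False
```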
On Jan 21, 2019, at 2:53 AM, Ghanshyam Mann <gmann@ghanshyammann.com> wrote:
---- On Mon, 03 Dec 2018 10:58:51 +0900 Ghanshyam Mann <gmann@ghanshyammann.com> wrote ----
---- On Sat, 01 Dec 2018 02:58:45 +0900 Mark Voelker <mvoelker@vmware.com> wrote ----
On Nov 29, 2018, at 9:28 PM, Matt Riedemann <mriedemos@gmail.com> wrote:
On 11/29/2018 10:17 AM, Ghanshyam Mann wrote:
- To improve volume API testing and avoid backward-incompatible changes. Sometimes we accidentally change the API in a backward-incompatible way, and strict validation with JSON schema helps to block those changes.
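To illustrate the idea being discussed, here is a minimal pure-Python sketch of strict response checking. Tempest's real implementation uses JSON schema with `additionalProperties: false`; this sketch just mimics that effect with a hypothetical, greatly simplified field set, so that an undeclared vendor field fails instead of passing silently:

```python
# Hypothetical, simplified set of fields a volume response must contain.
# The real Tempest schemas are far more detailed.
EXPECTED_VOLUME_FIELDS = {"id", "size", "status"}

def check_strict(response_body):
    """Return a list of problems: missing fields and undeclared extras.

    An empty list means the response matches the declared fields exactly,
    which is what strict validation (additionalProperties: false) enforces.
    """
    got = set(response_body)
    problems = []
    for field in sorted(EXPECTED_VOLUME_FIELDS - got):
        problems.append("missing field: %s" % field)
    for field in sorted(got - EXPECTED_VOLUME_FIELDS):
        problems.append("unexpected field: %s" % field)
    return problems

# A conforming response produces no problems...
print(check_strict({"id": "vol-1", "size": 10, "status": "available"}))
# ...but a vendor-added field is now flagged instead of slipping through.
print(check_strict({"id": "vol-1", "size": 10, "status": "available",
                    "vendor_ext": "x"}))
```

This is why the change can break clouds that were previously passing: extra, nonstandard response fields that lenient checking ignored now cause test failures.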
+1 this is very useful to avoid unintentionally breaking the API.
We want to hear from cinder and interop team about any impact of this change to them.
I'm mostly interested in what the interop WG would do about this given it's a potentially breaking change for interop without changes to the guidelines. Would there be some sort of grace period for clouds to conform to the changes in tempest?
That’s more or less what eventually happened when we began enforcing strict validation on Nova a few years ago after considerable debate. Clouds that were compliant with the interop guidelines before the strict validation patch landed and started failing once it went in could apply for a waiver while they worked on removing or upstreaming the nonstandard stuff. For those not familiar, here’s the patch that created a waiver program:
https://review.openstack.org/#/c/333067/
Note that this expired with the 2017.01 Guideline:
https://review.openstack.org/#/c/512447/
While not everyone was totally happy with the solution, it seemed to work out as a middle ground that helped get clouds on a better path in the end. I think we'll discuss here whether we need to do something like this again. I'd love to hear:
1.) If anyone knows of clouds/products that would fail interop testing because of this. Not looking to name and shame, just to get an idea of whether or not we have a concrete problem and how big it is.
2.) Opinions on how the waiver program went last time and whether the rest of the community feels like it’s something we should consider again.
Personally I’m supportive of the general notion of improving API interoperability here…as usual it’s figuring out the mechanics of the transition that take a little figuring. =)
Thanks Mark for the response. I think point 1 is important; it would be good to get the list of clouds or failures due to this strict validation change. Accordingly, we can wait on the Tempest side to merge those changes for this cycle (though personally I do not want to delay that if everything is fine), so that we can avoid immediate failures in the interop program.
Any update/feedback from the interop/cloud provider side on strict API validation? We are holding off on merging the Tempest patches and waiting to hear from the interop group.
-gmann
-gmann
At Your Service,
Mark T. Voelker
--
Thanks,
Matt
_______________________________________________
Interop-wg mailing list
Interop-wg@lists.openstack.org