[Interop-wg] Next steps for microversion testing

Mark Voelker mvoelker at vmware.com
Fri Sep 29 21:27:57 UTC 2017


We talked about this a bit this past week…basically it boils down to:

1.)  hodgepodge is looking into a schema update to accommodate microversion info, if one is even necessary.  If nothing else it may be informative…e.g. we can already add a new capability and tests that happen to depend on a particular microversion to the Guidelines without doing anything special, but it might be useful for folks to be able to easily see “oh hey, this requires version X.Y of the API” (a rough sketch of what that might look like follows after this list).

2.)  There were four APIs mentioned in [1] that we wanted to consider.  As Rocky noted, we’ll be working on scoring for those shortly (off the top of my head I think zhipeng is playing point on Nova scoring for this cycle).  Basically that means taking a first pass at adding new capabilities to [2] and posting a patch we can debate/iterate on.  If any of those capabilities are found to meet the necessary criteria [3] but don’t have tests yet, we should definitely write some tests (the sooner the better).  If they don’t score high enough…well, far be it from me to tell anyone not to write tests.  =)  But it’s probably less urgent from an interop point of view, since that would signal they aren’t good candidates right now anyway.  If someone’s chomping at the bit to write those tests, awesome—they can do that regardless without waiting on us.  There’s some information in [4] about what constitutes a useful test for interop purposes that test writers may want to look over, and a hedged Tempest sketch follows the reference links below.
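
To make item 1 a bit more concrete, here’s a purely hypothetical sketch (written as a Python dict, since the schema discussion is still open) of how a guideline capability entry might carry microversion info.  The capability name, test path, idempotent_id, and the "min-api-microversion" field are all invented for illustration; this is not the actual interop schema.

# Hypothetical illustration only -- this is NOT the real interop guideline
# schema.  The capability name, test path, idempotent_id, and the
# "min-api-microversion" field are invented for the example.
example_capability_entry = {
    "compute-servers-tags": {
        "status": "advisory",
        "min-api-microversion": "2.26",   # hypothetical new field
        "tests": {
            "tempest.api.compute.servers.test_example.ExampleTest"
            ".test_server_shows_tags_field": {
                "idempotent_id": "id-11111111-2222-3333-4444-555555555555",
            },
        },
    },
}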

[1] https://etherpad.openstack.org/p/InteropDenver2017PTG_Microversions
[2] http://git.openstack.org/cgit/openstack/interop/tree/working_materials/scoring.txt
[3] https://github.com/openstack/interop/blob/master/doc/source/process/CoreCriteria.rst < Note that a capability doesn’t need to meet *all* of these…just enough to score 74 points.  See [2] for some further color.
[4] https://github.com/openstack/interop/blob/master/working_materials/interop_test_spec.rst
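
For anyone picking up test writing, here’s a minimal sketch of how a microversion-gated Tempest test is typically structured.  Again, this is illustrative only: the class name, the test body, the placeholder idempotent_id UUID, and the choice of microversion 2.26 are assumptions for the example, not anything the group has settled on.

# Illustrative sketch only: names, the placeholder UUID, and the 2.26
# microversion are assumptions for this example, not agreed-upon choices.
from tempest.api.compute import base
from tempest.lib import decorators


class ServerMicroversionExampleTest(base.BaseV2ComputeTest):
    # Tempest skips this class entirely if the cloud under test (per
    # tempest.conf) can't satisfy this microversion range.
    min_microversion = '2.26'
    max_microversion = 'latest'

    @decorators.idempotent_id('11111111-2222-3333-4444-555555555555')  # placeholder UUID
    def test_server_shows_tags_field(self):
        # Non-admin, end-to-end style check: boot a server and verify the
        # 2.26+ representation includes the 'tags' field.
        server = self.create_test_server(wait_until='ACTIVE')
        body = self.servers_client.show_server(server['id'])['server']
        self.assertIn('tags', body)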

At Your Service,

Mark T. Voelker


> On Sep 29, 2017, at 4:51 PM, Rochelle Grober <rochelle.grober at huawei.com> wrote:
> 
> Scoring for 2018.01 is starting now.  Scoring is separate from testing.  The scoring says that the capability should be tested by the tests included in the guidelines.  But even if a capability is scored as required, if there are no tests available to demonstrate the capability, it can't be included as required because of the lack of tests.  If non-admin test(s) exist for a capability, they will be added to the list of guideline tests during the review period.
> 
> So, scoring can be done in parallel with writing the tests.  If the tests exist by the time scoring is finished, then life is easier.  Per http://eavesdrop.openstack.org/meetings/interopwg/2017/interopwg.2017-09-20-16.01.log.html (16:48:24 on), the first draft of the scores should be ready by 10/6.  That's when PTLs (who haven't already) and others should get involved and review/comment.  Final scores will be approved by the Summit.
> Once scoring is complete, the capabilities with passing scores and their tests are reviewed and updated.
> 
> Operators and vendors start getting involved when the scores are official and the first pass of the list of tests is out.  The downstream folks start running the tests and commenting on issues, config requirements, etc.
> 
> In this particular instance, RefStack also needs to review its schema and design to ensure that it can handle microversion tests and reporting.  But since Chris Hoge is the PTL, it's a pretty well-known and visible issue.
> 
> So, does this help clarify anything?  Hope it helps.
> 
> --Rocky
> 
>> -----Original Message-----
>> From: Matt Riedemann [mailto:mriedemos at gmail.com]
>> Sent: Friday, September 29, 2017 1:08 PM
>> To: interop-wg at lists.openstack.org
>> Subject: [Interop-wg] Next steps for microversion testing
>> 
>> At the Queens PTG in Denver there was general agreement on starting to
>> require some compute API microversions in the interop guidelines:
>> 
>> https://etherpad.openstack.org/p/InteropDenver2017PTG_Microversions
>> 
>> I hadn't seen anything on the ML yet, so I wanted to ask about status, as I
>> don't attend the meetings.
>> 
>> There are four proposed microversions in that etherpad, but only one has an
>> existing Tempest test.
>> 
>> Before working on writing Tempest tests for the others, what needs to
>> happen?  (I don't know the process details.)
>> 
>> Sounds like they need to be "scored", but who does that and/or when does
>> it happen? Does a change need to be proposed somewhere and if so, should
>> I do that?
>> 
>> I'd like to keep momentum on this given it was pretty positive at the PTG, so
>> please let me know where I can help.
>> 
>> --
>> 
>> Thanks,
>> 
>> Matt
>> 