[Fits] FW: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

Rochelle.RochelleGrober rochelle.grober at huawei.com
Fri May 9 19:06:04 UTC 2014


I think this discussion about tempest config directly relates to the refstack effort.  I may be mistaken, but I think we need to follow up on this topic.

--Rocky

> -----Original Message-----
> From: Matthew Treinish [mailto:mtreinish at kortar.org]
> Sent: Friday, May 09, 2014 9:34 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft
> 
> On Sat, May 03, 2014 at 07:41:54AM +0000, Kenichi Oomichi wrote:
> >
> > Hi Matthew,
> >
> > > -----Original Message-----
> > > From: Matthew Treinish [mailto:mtreinish at kortar.org]
> > > Sent: Friday, May 02, 2014 12:36 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft
> > >
> > > > > When adding new API parameters to existing APIs, these
> > > > > parameters should be API extensions according to the above
> > > > > guidelines. So we have three options for handling API
> > > > > extensions in Tempest:
> > > > >
> > > > > 1. Consider them optional, and do not block incompatible
> > > > >    changes to them. (Current)
> > > > > 2. Consider them required based on tempest.conf, and block
> > > > >    incompatible changes.
> > > > > 3. Consider them required automatically with microversioning,
> > > > >    and block incompatible changes.
> > > >
> > > > I investigated option 3 above, and I have one question about the
> > > > current Tempest implementation.
> > > >
> > > > Right now the verify_tempest_config tool gets the API extension
> > > > list from each service, including Nova, and verifies the API
> > > > extension config in tempest.conf against that list.
> > > > Can we use the list for selecting which extension tests run,
> > > > instead of just for verification?
> > > > As you said in the previous IRC meeting, an API test is currently
> > > > skipped if it is decorated with requires_ext() and the extension
> > > > is not specified in tempest.conf. I feel it would be nice if
> > > > Tempest got the API extension list and selected API tests
> > > > automatically based on the list.
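> > > >
> > > > For illustration, that discovery step is roughly the following
> > > > (a minimal sketch; the client name and response shape here are
> > > > assumptions, not the actual Tempest code):
> > > >
> > > >     # Sketch: ask a service (e.g. Nova) which extensions it has
> > > >     # loaded, and collect their aliases ("os-hypervisors", ...).
> > > >     def discover_extensions(extensions_client):
> > > >         _, body = extensions_client.list_extensions()
> > > >         return [ext['alias'] for ext in body['extensions']]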
> > >
> > > So we used to do this type of autodiscovery in tempest, but we
> > > stopped because it let bugs slip through the gate. This topic has
> > > come up several times in the past, most recently in discussing
> > > reorganizing the config file. [1] This is why we put [2] in the
> > > tempest README. I agree autodiscovery would be simpler, but the
> > > problem is that because we use tempest as the gate, if a bug caused
> > > autodiscovery to differ from what was expected, the tests would
> > > just silently skip. This would often go unnoticed because of the
> > > sheer volume of tempest tests. (I think we're currently at ~2300.)
> > > I also feel that explicitly defining what is expected to be enabled
> > > is a key requirement for branchless tempest, for the same reason.
> >
> > Thanks for the explanation, I understand the purpose of a static
> > config for the gate. As you said, we could not notice unexpected
> > skips due to the test volume. But autodiscovery still seems
> > attractive to me, because it would make it easy to run Tempest on
> > production environments. So how about implementing autodiscovery as
> > an option which is disabled by default in tempest.conf?
> > For example, the current config for nova v3 API extensions is
> >
> >  api_v3_extensions=all
> >
> > and we would be able to specify "auto" instead of "all" when
> > autodiscovery is wanted:
> >
> >  api_v3_extensions=auto
> >
> > It would be nice to mark this as experimental on the gate and check
> > the number of test skips from time to time by comparing against the
> > regular gate runs.
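> >
> > A minimal sketch of how such an "auto" value might be handled
> > (option handling and names here are illustrative only, not a real
> > Tempest patch):
> >
> >     # Sketch: use the static list from tempest.conf unless "auto"
> >     # was requested, in which case ask the running service.
> >     def extensions_to_test(configured, extensions_client):
> >         if configured == ['auto']:
> >             _, body = extensions_client.list_extensions()
> >             return [ext['alias'] for ext in body['extensions']]
> >         # "all" or an explicit list stays as configured.
> >         return configured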
> 
> So the problem with this is actually the same one we have in the gate.
> Even though it's not part of the automated testing system, the issue
> with using discovery as part of the testing is that, as an end user,
> you'll never know what's expected to be running. How can you know if
> the results of your test run are ever valid if the set of things
> you're trying to verify isn't fixed? If there was a configuration
> error and some things were disabled by accident, wouldn't you want to
> catch that when you're using tempest to verify your deployment?
> For this reason feature discovery should not really ever be a run-time
> decision. This is really why we currently have discovery decoupled
> into outside tooling. That is only verify_tempest_config right now,
> but it will probably grow into other things as well.
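>
> Conceptually, that outside tooling compares what the service reports
> against what tempest.conf says and reports the difference, instead of
> letting tests silently skip. A rough sketch (illustrative only, not
> the actual verify_tempest_config code):
>
>     # Sketch: flag mismatches between the configured extension list
>     # and what the live service actually reports.
>     def compare_extensions(configured, extensions_client):
>         _, body = extensions_client.list_extensions()
>         discovered = {ext['alias'] for ext in body['extensions']}
>         for alias in sorted(set(configured) - discovered):
>             print("In tempest.conf but not reported: %s" % alias)
>         for alias in sorted(discovered - set(configured)):
>             print("Reported but not in tempest.conf: %s" % alias)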
> 
> I understand that this is a current pain point with using tempest.
> Heck, I even tried to put together an example of configuring tempest
> manually as part of my summit talk, and realized it was far too large
> for the time window I would have during the presentation. We really
> need to come up with a good solution for handling tempest
> configuration outside of devstack. But, I don't want to rush into the
> wrong solution just because we are having issues with it right now.
> 
> As it stands now, within the last few weeks I've seen 2 proposals for
> very different configuration tools, in addition to one we apparently
> have in tree, tempest_auto_config (which I'm probably going to rip out
> because I don't see any value in it). One of the results I want to get
> out of summit next week is to have a plan for doing this and to get a
> spec drafted from that discussion. I feel that this probably does
> warrant its own session, but the schedule is locked down now.
> Depending on how the discussions go during the week, I may re-purpose
> a good chunk of my last session to discuss this.
> 
> >
> > > The verify_tempest_config tool was an attempt at a compromise
> > > between being explicit and also using auto discovery, by using the
> > > APIs to help create a config file that reflects the current
> > > configuration state of the services. It's still a WIP though, and
> > > it's really just meant to be a user tool. I don't ever see it being
> > > included in our gate workflow.
> >
> > I see, thanks.
> >
> > > > In addition, the methods which are decorated with requires_ext()
> > > > are currently test methods, but I think it would be better to
> > > > decorate client methods (get_hypervisor_list, etc.), because each
> > > > extension's loading condition affects which APIs are available.
> > >
> > > So my concern with decorating the client methods directly is that
> > > it might raise the skip too late and we'll end up leaking
> > > resources. But, I haven't tried it, so it might work fine without
> > > leaking anything. I agree that it would make skipping based on
> > > extensions easier, because it's really the client methods that
> > > depend on the extensions. So give it a shot and let's see if it
> > > works. The only other complication is the scenario and cli tests,
> > > because they don't use the tempest clients. But, we can just handle
> > > that by decorating the test methods like we do now.
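> > >
> > > For reference, the current pattern decorates the test method, e.g.
> > > (the extension alias here is just an example):
> > >
> > >     from tempest import test
> > >
> > >     @test.requires_ext(extension='os-hypervisors', service='compute')
> > >     def test_get_hypervisor_list(self):
> > >         # Skipped up front if the extension is not enabled in
> > >         # tempest.conf, before any resources get created.
> > >         ...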
> >
> > Thanks again, I see now that the current implementation is good for
> > avoiding unnecessary operations against environments.
> >
> >
> > Thanks
> > Ken'ichi Ohmichi
> >
> >
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


