[openstack-dev] [qa] Selecting Compute tests by driver/hypervisor
Matthew Treinish
mtreinish at kortar.org
Tue Apr 22 01:25:50 UTC 2014
On Mon, Apr 21, 2014 at 10:52:39PM +0000, Daryl Walleck wrote:
> I nearly opened a spec for this, but I'd really like to get some feedback first. One of the challenges I've seen lately for Nova teams not using KVM or Xen (Ironic and LXC are just a few) is how to properly run the subset of Compute tests that will run for their hypervisor or driver. Regexes are what Ironic went with, but I'm not sure how well that will work long term since it's very much dependent on naming conventions. The good thing is that the capabilities for each hypervisor/driver are well defined (https://wiki.openstack.org/wiki/HypervisorSupportMatrix), so it's just a matter of how to convey that information. I see a few ways forward from here:
If you're willing to drive this effort then please submit a spec review for it.
These kinds of discussions are perfect for a spec review.
>
>
> 1. Expand the compute_features_group config section to include all Compute actions and make sure all tests that require specific capabilities have skipIfs or raise a skipException. This options seems it would require the least work within Tempest, but the size of the config will continue to grow as more Nova actions are added.
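Concretely, option 1 might look like this minimal sketch (the config object and flag names here are illustrative stand-ins, not Tempest's actual config):

```python
import unittest


class FeatureFlags(object):
    """Stand-in for a compute feature-flag config section (illustrative)."""
    resize = False   # e.g. a deployment whose driver cannot resize
    suspend = True


CONF = FeatureFlags()


class ServerActionsTest(unittest.TestCase):

    # The skipIf style: decide at collection time from the config flag.
    @unittest.skipIf(not CONF.resize, "resize is not available")
    def test_resize_server(self):
        pass  # would exercise the resize API here

    def test_suspend_server(self):
        # The skipException style: raise explicitly inside the test.
        if not CONF.suspend:
            raise unittest.SkipTest("suspend is not available")
        # would exercise the suspend API here


# Run the tests programmatically so the skip behaviour is visible.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ServerActionsTest)
result = unittest.TestResult()
suite.run(result)
print(result.testsRun, len(result.skipped), len(result.failures))
```

With `resize` disabled, one of the two tests is skipped and the other runs normally; the deployment's config alone decides which tests apply.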
This is the only path forward I can see for this issue. We're going to have to
get better about consistently skipping based on config feature flags. This is
also a big part of what is required moving forward with the branchless tempest
work. [1] The config growth is really unavoidable given the myriad of
configuration possibilities and new features we get in OpenStack. We will just
have to come up with new ways and tooling to deal with the rapid growth in
config file size.
I honestly think the best approach here is to abstract away the individual
conditional skip calls into a unified feature-skip decorator, similar to what
we already do with the requires_ext() decorator. That way we can have
consistent logic and style for annotating what a test requires.
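As a rough illustration of that idea (requires_feature and the flag names are hypothetical; Tempest's real requires_ext() checks API extension lists rather than this toy config):

```python
import functools
import unittest


class FeatureFlags(object):
    """Stand-in for config feature flags (names are made up)."""
    live_migration = False
    snapshot = True


CONF = FeatureFlags()


def requires_feature(flag):
    """Hypothetical unified skip decorator: skip the test unless the
    named feature flag is enabled in the configuration."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if not getattr(CONF, flag, False):
                raise unittest.SkipTest("%s is not enabled" % flag)
            return func(self, *args, **kwargs)
        return wrapper
    return decorator


class MigrationTest(unittest.TestCase):

    @requires_feature('live_migration')
    def test_live_migration(self):
        pass  # would exercise live migration here

    @requires_feature('snapshot')
    def test_snapshot(self):
        pass  # would exercise snapshots here


suite = unittest.defaultTestLoader.loadTestsFromTestCase(MigrationTest)
result = unittest.TestResult()
suite.run(result)
print(result.testsRun, len(result.skipped))
```

The skip logic lives in one place, and every test annotates only the feature name it needs, which keeps the style consistent as the config grows.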
>
> 2. Create a new decorator class like was done with service tags that defines what drivers the test does or does not work for, and have the definitions of the different driver capabilities be referenced by the decorator. This is nice because it gets rid of the config creep, but it's also yet another decorator, which may not be desirable.
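For concreteness, option 2 might look something like the following sketch (the capability map and all names here are hypothetical, not taken from the support matrix):

```python
import functools
import unittest

# Hypothetical capability map in the spirit of the hypervisor support
# matrix; the entries here are illustrative only.
DRIVER_CAPABILITIES = {
    'ironic': {'rebuild'},
    'libvirt': {'rebuild', 'resize', 'pause'},
}

CURRENT_DRIVER = 'ironic'  # in practice this would come from config


def needs_capability(cap):
    """Hypothetical decorator: skip unless the configured driver
    advertises the named capability."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if cap not in DRIVER_CAPABILITIES.get(CURRENT_DRIVER, set()):
                raise unittest.SkipTest(
                    "%s does not support %s" % (CURRENT_DRIVER, cap))
            return func(self, *args, **kwargs)
        return wrapper
    return decorator


class ComputeActionsTest(unittest.TestCase):

    @needs_capability('resize')
    def test_resize(self):
        pass  # skipped for the 'ironic' entry above

    @needs_capability('rebuild')
    def test_rebuild(self):
        pass  # runs for the 'ironic' entry above


suite = unittest.defaultTestLoader.loadTestsFromTestCase(ComputeActionsTest)
result = unittest.TestResult()
suite.run(result)
print(result.testsRun, len(result.skipped))
```

Note that this bakes a per-driver feature table into the test suite itself, which is exactly the codification concern raised below.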
The problem with this approach is that tempest isn't supposed to care about
what is underneath the API layer. You should just tell it what the OpenStack
deployment is capable of and let it go. If a driver or any other backend
configuration doesn't support certain functionality, then that should be an
explicit knob in the config file. The other harm with this approach is that
we would, in effect, be codifying in tempest, through decorators (which seems
really messy), the set of features that should work for a particular
configuration/driver, and that feels way outside the scope of tempest.
>
> I'm going to continue working through both of these possibilities, but any feedback on either solution would be appreciated.
>
Thanks,
Matt Treinish
[1] https://github.com/openstack/qa-specs/blob/master/specs/branchless-tempest.rst