[openstack-dev] [PTG][QA] Queens PTG - QA Summary
andrea.frittoli at gmail.com
Tue Sep 19 19:37:21 UTC 2017
On Tue, Sep 19, 2017 at 6:51 PM Andrea Frittoli <andrea.frittoli at gmail.com> wrote:
> Dear Bats,
> thanks everyone for participating in the Queens PTG - in person and
> remotely - and for making it a successful and enjoyable event.
> Here's a summary of what we discussed and achieved during the week.
> This report is far from complete - please complement it with any QA-related
> stories I may have missed.
> Unfortunately I ran out of bat stickers, I hope to have some more for the
> next time we meet :)
> Pike Retrospective
> We experimented this time with running a retrospective.
> The input for the retrospective came almost exclusively from members of the
> QA team, which means that either the retrospective was poorly advertised by
> me, or we did a nice job for the community during the cycle. I hope
> the latter :)
> The outcome of the retrospective was mostly 'yay' for tasks completed, and
> 'next cycle' for things we did not have time to work on.
> A few extra things that came up were:
> - bug triage and fix: we may need a bug czar and more automation to stay
> on top of the bug queue
> - elastic recheck categorisation: we may need an e-r czar to ensure we
> don't leave gate failures and races uncategorised and accumulating
> - meetings are mostly attended in the APAC time zone and very seldom
> attended by non-QA folks. We will consider shortening them or reducing their
> frequency, and complementing them with QA office hours
> Monitoring the Gate
> At the beginning of the Pike cycle we had a number of stability issues in
> the gate.
> To prevent issues from accumulating over time, we discussed a few ideas
> for monitoring the status of the gate and providing more feedback to
> reviewers, to help catch patches that may introduce issues.
> There are a few things that can be done relatively easily (or that we
> already have) in terms of data collection: job duration and failure rates,
> aggregated dstat data, resources created by tests.
> We lack OpenStack Health contributors to create new views for this data.
> Links from gerrit and the zuul dashboard into OpenStack Health would help
> make the data discoverable.
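To illustrate the kind of aggregation such views could expose, here is a minimal sketch; the record format below is made up for illustration and is not the real OpenStack Health schema:

```python
from collections import defaultdict


def aggregate_job_stats(runs):
    """Aggregate per-job failure rate and mean duration from raw run records.

    Each record is a dict like
    {"job": "tempest-full", "status": "SUCCESS", "duration": 3600}
    -- a hypothetical shape chosen for this example.
    """
    stats = defaultdict(lambda: {"runs": 0, "failures": 0, "total_time": 0.0})
    for run in runs:
        s = stats[run["job"]]
        s["runs"] += 1
        s["total_time"] += run["duration"]
        if run["status"] != "SUCCESS":
            s["failures"] += 1
    # Derive the two headline numbers a dashboard view would plot.
    return {
        job: {
            "failure_rate": s["failures"] / s["runs"],
            "avg_duration": s["total_time"] / s["runs"],
        }
        for job, s in stats.items()
    }
```

Failure rates and average durations per job are exactly the sort of signal that, surfaced next to a patch in gerrit, would help reviewers spot regressions early.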
> A large chunk of the patches required to mark test.py as a stable
> interface for plugins was merged during the week.
> It was good to have them merged during the PTG, since that made it easier
> to fix resulting issues in Tempest plugins - at least Manila and Sahara
> needed a small patch.
> There are two patch series left, which should have very little
> impact on plugins - we'll try to merge them as soon as possible to avoid
> disruptions later in the Queens cycle.
> We worked with a few project teams on the goal of splitting Tempest plugins
> into dedicated repos.
> The step-by-step process linked to the goal includes an example
> section - we hope to get more and more examples in there from teams who
> have already gone through the process.
> We discussed devstack runtime. The time to set up an OpenStack cloud
> using devstack seems to have increased over time - it would be good
> to investigate why and see if there is time we can save in the gate and on
> each developer laptop :) Between August and September the average runtime
> in the gate increased by about 200s.
> Upgrade Testing
> Rolling upgrade testing via grenade is important for projects that seek
> the rolling upgrade support tag.
> While the scope of Grenade is rather fixed, it should be possible to
> support ordering (or relative ordering) of project updates.
> Policy Testing
> We discussed what's next for the Queens cycle - support for multi-policy
> testing is the largest chunk of work planned for now.
> The migration of ostestr to use stestr internally happened shortly before
> the PTG, and we worked through the week to fix any deviations in
> behaviour that this may have caused.
> Next on the to-do list is to run stestr natively in Tempest, bypassing
> ostestr completely.
> The plan is for this to lead the way for projects to gradually drop the
> dependency on ostestr.
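For projects making the same move, dropping ostestr typically just means adding a minimal `.stestr.conf` at the repo root; the `test_path` below is an example value, not any particular project's setting:

```
[DEFAULT]
test_path=./myproject/tests
```

With that in place, `stestr run` discovers and runs the tests directly, with no ostestr wrapper in between.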
> HA Testing
> We talked quite a bit about HA and non-functional testing in general.
> Non-functional testing is not a good fit for gate testing, since it's not
> as reliable as functional / integration testing and it often produces
> results which need to be interpreted by a human being.
> It also has strong dependencies on the deployment tooling and
> architecture. Until now most of OpenStack non-functional testing has been
> done by vendors and operators using downstream tools.
> SamP and guatamdivgi presented to the QA team a proposal for an
> Ansible-based framework for HA testing. Plugins will allow seamlessly
> porting different test scenarios across different cloud architectures, thus
> making the framework interesting as a general testing tool for
> OpenStack. The same concept can be extended to non-functional testing in general.
> It's not clear yet if any of this could run as part of OpenStack CI. We
> hope to see a PoC after a couple of months in the Queens cycle.
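To give a flavour of the idea (this is a hypothetical scenario sketched for illustration, not the proposed framework's actual format), an Ansible-based HA fault-injection test might look like:

```yaml
# Hypothetical HA scenario: stop one API service on a controller,
# then verify the API still answers through the load balancer.
- hosts: controller-0            # assumed inventory host name
  become: true
  tasks:
    - name: Stop one nova-api instance
      service:
        name: nova-api
        state: stopped

- hosts: localhost
  tasks:
    - name: API should still respond via the remaining controllers
      uri:
        url: "{{ nova_endpoint }}"   # assumed variable supplied by inventory
        status_code: 200
```

The plugin layer would be what lets the same scenario map onto different deployment architectures.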
> The NFV ecosystem would be happy to see publicly available non-functional
> tests for OpenStack as well - if we had a common framework for such tests
> we'd have an opportunity to attract test contributions from them as well.
> All things Zuul V3
> Zuul v3 will provide a DB backend for test runs data, which we want to
> integrate as a new data source for OpenStack Health, so that we can then
> provide a full picture of all job runs in O-H.
> Job names will change with Zuul v3. While we found a simple solution to
> keep OpenStack Health data functional with Zuul v3 provided metadata, we
> will still lose 6 months of historical data in the process of migration,
> for which we don't have a solution yet.
> In terms of Zuul v3 jobs, we started working on Zuul v3 native Tempest
> jobs. These will be the basis for most jobs that run Tempest in the gate.
> The patch is still WIP, but it already runs the full Tempest suite, with
> only one test failure which I need to investigate.
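For reference, Zuul v3 native jobs are defined in YAML; a sketch along these lines (the names and values below are illustrative, not the final job definitions):

```yaml
# Illustrative Zuul v3 job definition for a Tempest job
- job:
    name: tempest-full
    parent: devstack          # assumed parent job providing the cloud
    description: Run the full Tempest suite against a devstack cloud.
    run: playbooks/tempest/run.yaml
    timeout: 7200
```

Because jobs are plain Ansible playbooks plus YAML metadata, plugins and downstream projects can inherit from a base Tempest job instead of copying job definitions.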
> QA Help Room
> The help room was helpful (heh, I couldn't resist) mostly on day 1.
> It was good to have dedicated time allocated to answer questions, since it
> gave us space to open the laptop and work on code directly to answer
> specific questions.
> There were no questions asked during day 2. We used the time to join other
> sessions and work on QA topics, but perhaps in Dublin we could consider
> having a single help room day.
> It was very interesting to discuss CI and test processes and architecture
> with the OPNFV community.
> The aim on both sides is to re-use and share tools and best practices, share
> test results, and avoid duplicated effort where relevant.
> We discussed with dmellado the possibility of having a basic k8s client
> hosted in a Tempest plugin, which could be used by k8s-related
> OpenStack projects to write OpenStack / k8s scenario tests. The main
> advantages of a k8s client written in Tempest format and hosted by
> OpenStack would be a uniform interface with the OpenStack Tempest clients
> and a stable testing API.
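A minimal sketch of what such a client could look like; the class and method names here are hypothetical, and Tempest's real rest_client plumbing is omitted for brevity:

```python
import json
import urllib.request


class KubernetesClient:
    """Hypothetical minimal k8s client in the style of Tempest service clients."""

    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip('/')
        self.token = token

    def _url(self, path):
        # Build a core-API URL, e.g. /api/v1/namespaces/default/pods
        return '%s/api/v1/%s' % (self.base_url, path.lstrip('/'))

    def list_pods(self, namespace='default'):
        # GET the pod list for a namespace; returns the decoded JSON body.
        req = urllib.request.Request(
            self._url('namespaces/%s/pods' % namespace),
            headers={'Authorization': 'Bearer %s' % self.token})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
```

Wrapping the k8s REST calls behind a small stable class like this is what would give scenario tests a uniform interface alongside the existing Tempest service clients.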
> QA Themes for Queens
> We summarised the topics the QA team will focus on through the Queens
> cycle in our Queens priorities etherpad.
> The main etherpad for QA at the PTG in Denver is available as well.
> QA Team Pictures
> I'll forward the pictures as soon as I have a high-res copy available :)
> Thanks for reading!
> Andrea Frittoli (andreaf)
> https://etherpad.openstack.org/p/qa-pike-retrospective
> https://etherpad.openstack.org/p/qa-pike-gate-performance
> https://etherpad.openstack.org/p/tempest-separate-plugin
> https://etherpad.openstack.org/p/qa-queens-ptg-destructive-testing
> https://review.openstack.org/#/c/504246/
> https://etherpad.openstack.org/p/qa-queens-priorities
> https://etherpad.openstack.org/p/qa-queens-ptg