[openstack-dev] [neutron] Call for help with in-tree tempest scenario test failures

Sławek Kapłoński slawek at kaplonski.pl
Thu Aug 10 21:10:53 UTC 2017


Hello,

I’m still looking into this QoS scenario test and I found something strange, IMHO.
For example, almost all failed tests from the last 2 days were executed on nodes with names like:
* ubuntu-xenial-2-node-citycloud-YYY-XXXX - on those nodes almost all (or even all) scenario tests failed due to a failed SSH connection to the instance,
* ubuntu-xenial-2-node-rax-iad-XXXX - on those nodes the QoS test failed because of a timeout while reading data.

I’m a noob when it comes to the gate tests and how exactly they work, so my conclusions can be completely wrong, but maybe those issues are somehow related to the particular cloud providers that supply the infrastructure for the tests?
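One way to check this theory would be a logstash query that filters the job's failures by provider. A sketch only (the node_provider and build_status field names are my assumption based on the usual OpenStack logstash schema, so please verify them in Kibana first):

    build_name:"gate-tempest-dsvm-neutron-dvr-multinode-scenario"
    AND build_status:"FAILURE" AND node_provider:citycloud*

If the failure rate on citycloud (and rax-iad) nodes turns out clearly higher than on other providers, that would point at provider-specific infrastructure issues rather than at the tests themselves.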
Maybe someone more experienced could take a look at that and help me? Thanks in advance.

—
Best regards
Slawek Kaplonski
slawek at kaplonski.pl




> On 03.08.2017, at 23:40, Ihar Hrachyshka <ihrachys at redhat.com> wrote:
> 
> Thanks for those who stepped in (Armando and Slawek).
> 
> We still have quite a few failures that would benefit from initial log
> triage and fixes. If you feel you have fewer things to do during this
> feature freeze period, helping with those scenario failures would be a
> good way to contribute to the project.
> 
> Thanks,
> Ihar
> 
> On Fri, Jul 28, 2017 at 6:02 AM, Sławek Kapłoński <slawek at kaplonski.pl> wrote:
>> Hello,
>> 
>> I will try to check QoS tests in this job.
>> 
>> Best regards
>> Slawek Kaplonski
>> slawek at kaplonski.pl
>> 
>> 
>> 
>> 
>>> On 28.07.2017, at 14:49, Jakub Libosvar <jlibosva at redhat.com> wrote:
>>> 
>>> Hi all,
>>> 
>>> as sending out a call for help with our precious jobs was very
>>> successful last time and we swept all the Python 3 functional failures
>>> from Neutron pretty fast (kudos to the team!), here comes a new round
>>> of failures.
>>> 
>>> This time I'm asking for your help <imagine Uncle Sam "We want you"
>>> poster here> with the gate-tempest-dsvm-neutron-dvr-multinode-scenario
>>> non-voting job. This job has been part of the check queue for a while
>>> and is very, very unstable. The job covers scenarios like router
>>> dvr/ha/legacy migrations, qos, trunk and dvr. I went through the
>>> current failures and created an etherpad [1] with categorized failures
>>> and logstash queries that give you the latest failures for each
>>> particular test.
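>>> 
>>> To give you an idea, the queries there look roughly like this (a
>>> sketch only; the message filter below is illustrative, the real
>>> failure signatures are in the etherpad):
>>> 
>>>   build_name:"gate-tempest-dsvm-neutron-dvr-multinode-scenario"
>>>   AND build_status:"FAILURE" AND message:"Timed out waiting for"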
>>> 
>>> If you feel like doing some troubleshooting and sending fixes for the
>>> gates, please pick one test and put your name next to it.
>>> 
>>> Thanks to all who are willing to participate.
>>> 
>>> Have a great weekend.
>>> Jakub
>>> 
>>> 
>>> [1]
>>> https://etherpad.openstack.org/p/neutron-dvr-multinode-scenario-gate-failures
