[openstack-dev] [gate][neutron][infra] tempest jobs timing out due to general sluggishness of the node?
ihrachys at redhat.com
Fri Feb 10 04:59:09 UTC 2017
I have lately noticed a number of job failures in the neutron gate that
all result in job timeouts. I describe the
gate-tempest-dsvm-neutron-dvr-ubuntu-xenial job below, though I see
timeouts happening in other jobs too.
The failure mode is that all operations (./stack.sh as well as each
tempest test) take significantly more time (roughly 50% to 150% more,
which triggers the job timeout). An example of what I mean can be found in .
A good run usually takes ~20 minutes to stack up devstack and then ~40
minutes to pass the full suite; a bad run usually takes ~30 minutes for
./stack.sh and then 1:20h+ until it is killed due to the timeout.
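For reference, the quick arithmetic behind the "50% to 150% more" estimate
above, using the rough minute figures from the good and bad runs (a small
illustrative sketch, not part of any gate tooling):

```shell
#!/bin/sh
# Percentage increase of a bad run over a good run, in whole minutes.
slowdown_pct() {
    good=$1
    bad=$2
    # awk handles the floating-point division
    awk -v g="$good" -v b="$bad" 'BEGIN { printf "%.0f\n", (b - g) / g * 100 }'
}

slowdown_pct 20 30    # ./stack.sh: prints 50  (50% slower)
slowdown_pct 40 80    # tempest suite: prints 100 (100% slower)
```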
It affects different clouds (we see rax, internap, infracloud-vanilla,
and ovh jobs affected; we haven't seen osic though). It can't be e.g.
slow pypi or apt mirrors, because then we would see the slowdown only in
./stack.sh, not in the tests themselves.
We can't be sure that the CPUs are the same, and devstack does not seem
to dump /proc/cpuinfo anywhere (in the end, it's all virtual, so I'm not
sure it would help anyway). Nor do we have a way to learn whether the
slowness could be a result of adherence to RFC 1149. ;)
We discussed the matter in the neutron channel, though we couldn't
figure out the culprit or where to go next. At this point we assume it's
not neutron's fault, and we hope others (infra?) may have suggestions on
where to look.