<div dir="ltr">I believe those are traces left by the reference implementation of cinder setting very high debug level on tgtd. I'm not sure if that's related or the culprit at all (probably the culprit is a mix of things).<div><br></div><div>I wonder if we could disable such verbosity on tgtd, which certainly is going to slow down things.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Feb 10, 2017 at 9:07 AM, Antonio Ojea <span dir="ltr"><<a href="mailto:aojea@midokura.com" target="_blank">aojea@midokura.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I guess it's an infra issue, specifically related to the storage, or the network that provide the storage.<br><br>If you look at the syslog file [1] , there are a lot of this entries:<br><br><pre><span class="m_7057898010568257006gmail-INFO m_7057898010568257006gmail-_Feb_09_04_20_42"><a name="m_7057898010568257006__Feb_09_04_20_42" class="m_7057898010568257006gmail-date" href="http://logs.openstack.org/95/429095/2/check/gate-tempest-dsvm-neutron-dvr-ubuntu-xenial/35aa22f/logs/syslog.txt.gz#_Feb_09_04_20_42" target="_blank">Feb 09 04:20:42</a> ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(2024) no more data
On Fri, Feb 10, 2017 at 9:07 AM, Antonio Ojea <aojea@midokura.com> wrote:

I guess it's an infra issue, specifically related to the storage, or to the network that provides the storage.

If you look at the syslog file [1], there are a lot of entries like these:

Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(2024) no more data
Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(1996) found a task 71 131072 0 0
Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_data_rsp_build(1136) 131072 131072 0 26214471
Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: __cmd_done(1281) (nil) 0x2563000 0 131072

grep tgtd syslog.txt.gz | wc
  139602 1710808 15699432
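For a rough breakdown of which tgtd call sites produce most of that noise, something like this should work on a locally downloaded copy of the file:

  # count tgtd log lines per originating function
  zcat syslog.txt.gz | grep -o 'tgtd: [a-z_]*(' | sort | uniq -c | sort -rn | head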
[1] http://logs.openstack.org/95/429095/2/check/gate-tempest-dsvm-neutron-dvr-ubuntu-xenial/35aa22f/logs/syslog.txt.gz

On Fri, Feb 10, 2017 at 5:59 AM, Ihar Hrachyshka <ihrachys@redhat.com> wrote:

Hi all,

Lately I have noticed a number of job failures in the neutron gate that all result in job timeouts. I describe the gate-tempest-dsvm-neutron-dvr-ubuntu-xenial job below, though I see timeouts happening in other jobs too.

The failure mode is that all operations (./stack.sh and each tempest test) take significantly more time, roughly 50% to 150% more, which results in the job timeout being triggered. An example of what I mean can be found in [1].

A good run usually takes ~20 minutes to stack up devstack and then ~40 minutes to pass the full suite; a bad run takes ~30 minutes for ./stack.sh and then 1:20h+ until it is killed due to the timeout.
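If anyone wants to double check those numbers, they can be pulled from the console logs. A rough sketch, assuming local copies of a good and a bad console log (the file names here are made up) and that devstack's usual "stack.sh completed in N seconds" summary line made it into the output:

  # compare the stack.sh phase of a good and a bad run
  for f in console-good.html console-bad.html; do
      grep -o 'stack.sh completed in [0-9]* seconds' "$f"
  done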

It affects different clouds (we see rax, internap, infracloud-vanilla, and ovh jobs affected; we haven't seen osic though). It can't be e.g. slow pypi or apt mirrors, because then we would see the slowdown in the ./stack.sh phase only.

We can't be sure that the CPUs are the same, and devstack does not seem to dump /proc/cpuinfo anywhere (in the end, it's all virtual, so not sure whether it would help anyway). Nor do we have a way to learn whether the slowness could be a result of adherence to RFC1149. ;)
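If it would help, a small post-run snippet along these lines could capture enough host information to compare good and bad runs later; the log paths here are just illustrative:

  # CPU model and flags, to compare hosts between runs
  cat /proc/cpuinfo > /opt/stack/logs/cpuinfo.txt
  # the "st" column shows time stolen by the hypervisor
  vmstat 1 5 > /opt/stack/logs/vmstat.txt
  # crude disk write throughput check
  dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync 2> /opt/stack/logs/dd.txt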

We discussed the matter in the neutron channel [2], though we couldn't figure out the culprit or where to go next. At this point we assume it's not neutron's fault, and we hope that others (infra?) may have suggestions on where to look.

[1] http://logs.openstack.org/95/429095/2/check/gate-tempest-dsvm-neutron-dvr-ubuntu-xenial/35aa22f/console.html#_2017-02-09_04_47_12_874550
[2] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2017-02-10.log.html#t2017-02-10T04:06:01

Thanks,
Ihar

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev