<div dir="ltr"><div><div><div><div><div><div><div><div>FWIW, we tried to run our job in a rax provider VM (provided by ianw from his personal account)<br></div>and we ran the tempest tests twice, but the OOM did not re-create. Of the 2 runs, one of the run<br></div>used the same PYTHONHASHSEED as we had in one of the failed runs, still no oom.<br><br></div>Jeremy graciously agreed to provide us 2 VMs , one each from rax and hpcloud provider<br></div>to see if provider platform has anything to do with it.<br><br></div>So we plan to run again wtih the VMs given from Jeremy , post which i will send<br></div>next update here.<br><br></div>thanx,<br></div>deepak<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Feb 24, 2015 at 4:50 AM, Jeremy Stanley <span dir="ltr"><<a href="mailto:fungi@yuggoth.org" target="_blank">fungi@yuggoth.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Due to an image setup bug (I have a fix proposed currently), I was<br>
> able to rerun this on a VM in HPCloud with 30GB memory and it
> completed in about an hour with a couple of tempest tests failing.
> Logs at: http://fungi.yuggoth.org/tmp/logs3.tar
>
> Rerunning again on another 8GB Rackspace VM with the job timeout
> increased to 5 hours, I was able to recreate the network
> connectivity issues exhibited previously. The job itself seems to
> have run for roughly 3 hours while failing 15 tests, and the worker
> was mostly unreachable for a while at the end (I don't know exactly
> how long) until around the time it completed. The OOM condition is
> present this time too according to the logs, occurring right near
> the end of the job. Collected logs are available at:
> http://fungi.yuggoth.org/tmp/logs4.tar
>
> Given the comparison between these two runs, I suspect this is
> either caused by memory constraints or block device I/O performance
> differences (or perhaps an unhappy combination of the two).
> Hopefully a close review of the logs will indicate which.
<div class="HOEnZb"><div class="h5">--<br>
Jeremy Stanley<br>
<br>
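
A quick aside on the PYTHONHASHSEED point above, since it is easy to miss why reusing
the seed from a failed run matters: with hash randomization enabled, str hashes (and
therefore set/dict iteration order, which test ordering and some fixtures can depend
on) differ between interpreter processes unless PYTHONHASHSEED pins the seed. A minimal
Python 3 sketch of that mechanism; the 'tempest' string and the 1234 seed are only
illustrative values, not anything taken from the job:

    import os
    import subprocess
    import sys

    CHILD = "print(hash('tempest'))"

    def child_hash(seed):
        """Run a fresh interpreter and report hash('tempest') from it."""
        env = dict(os.environ)
        # 'random' enables per-process randomization; an integer pins the seed.
        env['PYTHONHASHSEED'] = seed
        out = subprocess.check_output([sys.executable, '-c', CHILD], env=env)
        return out.strip().decode()

    # Randomized: two separate processes will almost always disagree.
    print('random seed:', child_hash('random'), child_hash('random'))

    # Pinned to the same seed (e.g. the one recorded in a failed run):
    # both processes agree, so hash-order-dependent behaviour is repeatable.
    print('seed=1234  :', child_hash('1234'), child_hash('1234'))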
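
And on the OOM condition Jeremy mentions showing up near the end of the job: one
low-tech way to review the collected logs for it is to scan the syslog in the tarballs
for kernel OOM-killer messages. A rough Python 3 sketch is below; the log file name in
the usage comment is only an example, adjust it to whatever the tarballs actually
contain:

    import re
    import sys

    # Strings the kernel logs when the OOM killer fires.
    OOM_PATTERN = re.compile(r'oom-killer|Out of memory|Killed process')

    def scan(path):
        """Print any lines in the given log file that look like OOM-killer activity."""
        with open(path, errors='replace') as handle:
            for lineno, line in enumerate(handle, 1):
                if OOM_PATTERN.search(line):
                    print('%s:%d: %s' % (path, lineno, line.rstrip()))

    if __name__ == '__main__':
        # e.g. python3 oom_scan.py syslog.txt
        for log_path in sys.argv[1:]:
            scan(log_path)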