<div dir="auto">And who said openstack wasn't growing? ;) <div dir="auto"><br></div><div dir="auto">I think reducing API workers is a nice quick way to bring back some stability. </div><div dir="auto"><br></div><div dir="auto">I have spent a bunch of time digging into the OOM killer events and haven't yet figured out why they are being triggered. There is significant swap space remaining in all of the cases I have seen so it's likely some memory locking issue or kernel allocations blocking swap. Until we can figure out the cause, we effectively have no usable swap space on the test instances so we are limited to 8GB.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Feb 1, 2017 17:27, "Armando M." <<a href="mailto:armamig@gmail.com">armamig@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi,<div><br></div><div>[TL;DR]: OpenStack services have steadily increased their memory footprints. We need a concerted way to address the oom-kills experienced in the openstack gate, as we may have reached a ceiling.</div><div><br></div><div>Now the longer version:</div><div>------------------------------<wbr>--<br></div><div><br></div><div><div>We have been experiencing some instability in the gate lately due to a number of reasons. When everything adds up, this means it's rather difficult to merge anything and knowing we're in feature freeze, that adds to stress. One culprit was identified to be [1].</div><div><br></div><div>We initially tried to increase the swappiness, but that didn't seem to help. Then we have looked at the resident memory in use. When going back over the past three releases we have noticed that the aggregated memory footprint of some openstack projects has grown steadily. We have the following:</div><div><ul><li>Mitaka</li><ul><li>neutron: 1.40GB</li><li>nova: 1.70GB</li><li>swift: 640MB</li><li>cinder: 730MB</li><li>keystone: 760MB</li><li>horizon: 17MB</li><li>glance: 538MB</li></ul><li>Newton<br></li><ul><li>neutron: 1.59GB (+13%)</li><li>nova: 1.67GB (-1%)</li><li>swift: 779MB (+21%)<br></li><li>cinder: 878MB (+20%)</li><li>keystone: 919MB (+20%)<br></li><li>horizon: 21MB (+23%)</li><li>glance: 721MB (+34%)</li></ul><li>Ocata</li><ul><li>neutron: 1.75GB (+10%)</li><li>nova: 1.95GB (%16%)</li><li>swift: 703MB (-9%)</li><li>cinder: 920MB (4%)</li><li>keystone: 903MB (-1%)</li><li>horizon: 25MB (+20%)</li><li>glance: 740MB (+2%)</li></ul></ul></div><div>Numbers are approximated and I only took a couple of samples, but in a nutshell, the majority of the services have seen double digit growth over the past two cycles in terms of the amount or RSS memory they use.</div><div><br></div><div>Since [1] is observed only since ocata [2], I imagine that's pretty reasonable to assume that memory increase might as well be a determining factor to the oom-kills we see in the gate.</div><div><br></div><div>Profiling and surgically reducing the memory used by each component in each service is a lengthy process, but I'd rather see some gate relief right away. 
Since [1] has been observed only since Ocata [2], it seems reasonable to assume that the memory increase may well be a determining factor in the oom-kills we see in the gate.

Profiling and surgically reducing the memory used by each component of each service is a lengthy process, but I would rather see some gate relief right away. Reducing the number of API workers helps bring the RSS memory back down to Mitaka levels:

  * neutron: 1.54GB
  * nova: 1.24GB
  * swift: 694MB
  * cinder: 778MB
  * keystone: 891MB
  * horizon: 24MB
  * glance: 490MB

However, it may have other side effects, such as longer execution times or an increase in timeouts.
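To put that trade-off in perspective, here is some purely back-of-the-envelope arithmetic; the per-worker figures are assumptions made up for illustration, not gate measurements. Each API worker is a separate forked process, so its private RSS is paid once per worker, which is why dropping from a CPU-count-based default to a small fixed number shrinks the footprint:

    #!/usr/bin/env python
    # Back-of-the-envelope sketch of how the API worker count affects one
    # service's total RSS. The figures below are made up for illustration;
    # they are not gate measurements.

    import multiprocessing


    def estimated_rss_mb(shared_mb, private_per_worker_mb, workers):
        # Crude model: one copy of shared pages plus one private heap per
        # forked worker process.
        return shared_mb + private_per_worker_mb * workers


    if __name__ == '__main__':
        cpus = multiprocessing.cpu_count()
        shared_mb = 150    # assumed shared/base footprint of the service
        private_mb = 120   # assumed private RSS per API worker

        # Many services default their worker count to the number of CPUs;
        # the stop-gap is to pin it to a small fixed number instead.
        for workers in (cpus, 2):
            print('%2d workers -> ~%d MB' %
                  (workers, estimated_rss_mb(shared_mb, private_mb, workers)))

The knobs involved are the per-service worker options (e.g. neutron's api_workers, nova's osapi_compute_workers) and, for the gate jobs, devstack's API_WORKERS setting.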
Where do we go from here? I am not particularly fond of stop-gap [4], but it is the one fix that most broadly addresses the memory increase we have experienced across the board.

Thanks,
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1656386
[2] http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22oom-killer%5C%22%20AND%20tags:syslog
[3] http://logs.openstack.org/21/427921/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/82084c2/
[4] https://review.openstack.org/#/c/427921