[openstack-dev] memory usage in devstack-gate (the oom-killer strikes again)
Joe Gordon
joe.gordon0 at gmail.com
Mon Sep 8 22:24:29 UTC 2014
Hi All,
We have recently started seeing assorted memory issues in the gate,
including the oom-killer [0] and libvirt throwing memory errors [1].
Luckily we run ps and dstat on every devstack run, so we have some insight
into why we are running out of memory. Based on the output from a job taken
at random [2][3], a typical run consists of:
* 68 OpenStack API processes alone
* the following services are each running 8 worker processes (one per CPU
on the test nodes):
* nova-api (we actually run 24 of these: 8 compute, 8 EC2, and 8 metadata)
* nova-conductor
* cinder-api
* glance-api
* trove-api
* glance-registry
* trove-conductor
* together, nova-api, nova-conductor, and cinder-api alone account for over
45 %MEM (note: some of that memory is counted multiple times, since RSS
includes shared libraries; see the sketch right after this list)
* based on the dstat numbers, it looks like we don't use much memory before
tempest runs, but once tempest is running memory usage climbs sharply.
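For reference, here is a rough sketch of how the per-service numbers above
can be pulled out of the ps dump that devstack-gate archives. It is an
illustration only, not the exact tooling we use: the field positions assume
the standard "ps aux" column order, and the pss_kb() helper just shows one
way to avoid the shared-library double counting mentioned above (Pss in
/proc/<pid>/smaps splits shared pages between the processes mapping them,
unlike RSS).

    # Hedged sketch: aggregate %MEM and RSS per service from a "ps aux"-style
    # dump. Assumes the usual column order:
    # USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
    from collections import defaultdict

    def mem_by_service(ps_text):
        """Return {service name: [total %MEM, total RSS in kB]}."""
        totals = defaultdict(lambda: [0.0, 0])
        for line in ps_text.splitlines()[1:]:        # skip the header row
            fields = line.split(None, 10)
            if len(fields) < 11:
                continue
            pct_mem = float(fields[3])
            rss_kb = int(fields[5])
            # Basename of the first COMMAND token, e.g. "nova-api";
            # real command lines may need smarter matching than this.
            name = fields[10].split()[0].rsplit('/', 1)[-1]
            totals[name][0] += pct_mem
            totals[name][1] += rss_kb
        return dict(totals)

    def pss_kb(pid):
        """Sum Pss (kB) from /proc/<pid>/smaps; shared pages are divided
        across the processes that map them, so per-service sums of Pss
        do not double count shared libraries the way RSS sums do."""
        with open('/proc/%d/smaps' % pid) as f:
            return sum(int(line.split()[1])
                       for line in f if line.startswith('Pss:'))

Summing Pss instead of RSS per service would tell us how much of that 45
%MEM is real, unshared memory.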
Based on this information, I have two categories of questions:
1) Should we explicitly set the number of workers that services use in
devstack? Why have so many workers in a small all-in-one environment? What
is the right balance here? (See the worker-count sketch after these
questions.)
2) Should we be worried that some OpenStack services such as nova-api,
nova-conductor and cinder-api take up so much memory? Does their memory
usage keep growing over time, and does anyone have numbers to answer this?
Why do these processes take up so much memory? (See the sampling sketch
after these questions.)
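On question 1, for anyone who wants to experiment: the services appear to
default their worker count to the number of CPUs, which is where the
8-per-service figure on the 8-CPU test nodes comes from. The sketch below
only illustrates that behaviour and what capping it in devstack would do;
it is not the actual OpenStack code, and the cap parameter is hypothetical.

    # Illustration only (not the real nova/cinder/glance code): with no
    # explicit worker count configured, services fall back to the CPU
    # count, so an 8-CPU gate node gets 8 workers per API service.
    import multiprocessing

    def effective_workers(configured=None, cap=None):
        """Worker count a service would spawn.

        configured -- value set in the service's config file, if any
        cap        -- hypothetical ceiling devstack could enforce
        """
        workers = configured or multiprocessing.cpu_count()
        if cap is not None:
            workers = min(workers, cap)
        return workers

    print(effective_workers())       # 8 on the current gate nodes
    print(effective_workers(cap=2))  # capping at 2 cuts the count 4x

On question 2, one way to get numbers would be to sample each service's
resident set across a tempest run, either live or by replaying the archived
dstat/ps logs. A minimal live-sampling sketch (pid, sample count and
interval are illustrative):

    import time

    def rss_kb(pid):
        """Read VmRSS (kB) for a process from /proc/<pid>/status."""
        with open('/proc/%d/status' % pid) as f:
            for line in f:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1])
        return 0

    def sample_rss(pid, samples=30, interval=60):
        """Sample RSS once a minute; steady growth across a run would
        point at a leak rather than a fixed working set."""
        series = []
        for _ in range(samples):
            series.append(rss_kb(pid))
            time.sleep(interval)
        return series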
best,
Joe
[0]
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwib29tLWtpbGxlclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDEwMjExMjA5NzY3fQ==
[1] https://bugs.launchpad.net/nova/+bug/1366931
[2] http://paste.openstack.org/show/108458/
[3]
http://logs.openstack.org/83/119183/4/check/check-tempest-dsvm-full/ea576e7/logs/screen-dstat.txt.gz