[Openstack] OpenStack API workers count

Kevin Benton blak111 at gmail.com
Wed Feb 3 11:18:31 UTC 2016


For neutron you can safely set api_workers and rpc_workers to 1.
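For the other services, the analogous worker options can also be set to 1. As a sketch (option names as of the Liberty-era config reference; verify them against your installed release before applying):

```ini
# /etc/neutron/neutron.conf
[DEFAULT]
api_workers = 1
rpc_workers = 1

# /etc/nova/nova.conf
[DEFAULT]
osapi_compute_workers = 1
metadata_workers = 1
[conductor]
workers = 1

# /etc/glance/glance-api.conf
[DEFAULT]
workers = 1

# /etc/cinder/cinder.conf
[DEFAULT]
osapi_volume_workers = 1

# /etc/heat/heat.conf
[DEFAULT]
num_engine_workers = 1
```

Restart each service after changing its file. On a single-node, low-traffic deployment the cost is only reduced API concurrency, not functionality.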

On Wed, Feb 3, 2016 at 4:06 AM, Alexander Simeonov <simeonov at gmail.com>
wrote:

> Hello,
>
> I am running OpenStack on a single physical node, installed with packstack
> --allinone. The host has 32 GB of memory and 32 logical cores, and currently
> runs a single VM with 16 GB of RAM allocated. However, after just a few hours
> of usage, the host starts swapping. Inspecting the current memory usage, I
> see that most of the OpenStack services have spawned multiple processes
> each, which in total consume about 12 GB of memory (shared + resident per
> process):
>
> MiB     Process          # processes
> 3686.4  nova-api         97
> 2355.2  neutron-server   65
> 2048    nova-conductor   33
> 1536    heat-engine      33
> 819.2   glance-registry  33
> 657     glance-api       33
> 504.6   swift-proxy-ser  33
> 183.9   neutron-metadat  33
> 168.8   cinder-api       33
>
> According to the docs at
> http://docs.openstack.org/liberty/config-reference/content/list-of-compute-config-options.html,
> OpenStack defaults the number of workers for each of these services to the
> number of logical cores on the server, but nova-api and neutron-server
> seem to use multipliers of 3 and 2 respectively.
>
> Since this machine will only ever run a few (3-4) VMs, and guests will
> rarely be created or modified once it has been fully set up, what are the
> minimum values I could use in the relevant config files to decrease the
> memory usage of these services? Can I safely run a single worker for each
> of the internal services?
>
> Thanks.
>
> Regards,
> Alexander
>
> _______________________________________________
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>


-- 
Kevin Benton