[Openstack-operators] nova-placement-api tuning

Alex Schultz aschultz at redhat.com
Tue Apr 3 16:30:08 UTC 2018

On Tue, Apr 3, 2018 at 4:48 AM, Chris Dent <cdent+os at anticdent.org> wrote:
> On Mon, 2 Apr 2018, Alex Schultz wrote:
>> So this is/was valid. A few years back some perf tests were done
>> with various combinations of processes/threads, and for Keystone it
>> was determined that threads should be 1 while you adjust the
>> process count (hence the bug). Now I guess the question is, for
>> every service, what the optimal configuration is, but I'm not sure
>> anyone upstream is looking at this for all the services.  In the
>> puppet modules, for consistency, we applied a similar concept to
>> all the services when they are deployed under apache.  It can be
>> tuned as needed for each service, but I don't think we have any
>> great examples of perf numbers. It's really a YMMV thing. We ship a
>> basic default that isn't crazy, but it's probably not optimal either.
> Do you happen to recall if the trouble with keystone and threaded
> web servers had anything to do with eventlet? Support for the
> eventlet-based server was removed from keystone in Newton.

It was running under httpd I believe.
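For context, the knob being discussed when a service runs under httpd is the mod_wsgi WSGIDaemonProcess directive. A minimal sketch of what the conservative default looks like; the port, paths, and user/group here are examples, not recommendations:

```apache
# Illustrative placement vhost under httpd/mod_wsgi.
# Port, script path, and user/group are example values.
Listen 8778
<VirtualHost *:8778>
  # The tuning in question: few processes, a single thread each.
  WSGIDaemonProcess placement-api processes=3 threads=1 user=placement group=placement
  WSGIProcessGroup placement-api
  WSGIScriptAlias / /usr/bin/nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
</VirtualHost>
```

Raising `processes=` (and possibly `threads=`) in that directive is the adjustment being debated below.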

> I've been doing some experiments with placement using multiple uwsgi
> processes, each with multiple threads and it appears to be working
> very well. Ideally all the OpenStack HTTP-based services would be
> able to run effectively in that kind of setup. If they can't I'd
> like to help make it possible.
> In any case: processes 3, threads 1 for WSGIDaemonProcess for the
> placement service for a deployment of any real size errs on the
> side of too conservative and I hope we can make some adjustments
> there.
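A multi-process, multi-thread uwsgi setup of the kind Chris describes might be sketched like this; the file path and the worker/thread counts are purely illustrative assumptions:

```ini
; Illustrative uwsgi config for placement; path and counts are examples.
[uwsgi]
wsgi-file = /usr/bin/nova-placement-api  ; example path, adjust per install
processes = 4                            ; multiple worker processes
threads = 10                             ; multiple threads per process
thunder-lock = true                      ; serialize accept() across workers
```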

You'd say that until you realize that the deployment may also be
sharing the box with every other service API.  Imagine keystone,
glance, nova, cinder, gnocchi, etc. all running on the same machine.
Then 3 isn't so conservative: the processes start adding up and
exhausting resources (CPU cores/memory) really quickly.  In a perfect
world, yes, each API service would get its own system with processes
== processor count, but in most cases the cores end up getting split
between the number of services running on the box.  In puppet we did
a sliding scale and have several facts[0] that can be used if a
person doesn't want to switch to $::processorcount.  If you're
rolling your own you can tune it more easily, but when you have to
come up with something that might be colocated with a bunch of other
services, you have to hedge your bets to make sure it works most of
the time.


[0] http://git.openstack.org/cgit/openstack/puppet-openstacklib/tree/lib/facter/os_workers.rb
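To make the sliding-scale idea concrete, here is a minimal Python sketch; the divisor and the floor/ceiling clamps are illustrative assumptions, not the exact values used by the os_workers fact linked above:

```python
import multiprocessing


def sliding_scale_workers(cpu_count=None, floor=2, ceiling=8):
    """Pick a worker count that scales with cores but is clamped,
    so a box hosting many colocated API services isn't exhausted.

    Illustrative only: the divisor and clamp values here are
    assumptions, not the exact logic of the os_workers fact.
    """
    if cpu_count is None:
        cpu_count = multiprocessing.cpu_count()
    # Use a fraction of the cores, but never fewer than `floor`
    # workers and never more than `ceiling`.
    return min(max(cpu_count // 4, floor), ceiling)
```

On a 4-core box this yields 2 workers per service; on a 64-core box it stays capped at 8, leaving headroom for the other services sharing the machine.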

> --
> Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
> freenode: cdent                                         tw: @anticdent
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
