[Openstack-operators] [Scale][Performance] <nodes_with_something> / compute_nodes ratio experience

Belmiro Moreira moreira.belmiro.email.lists at gmail.com
Wed Nov 18 13:17:39 UTC 2015

We are still running Nova Juno and don't see this performance issue.
(I can comment on Kilo next week.)

Per cell, we have one node that runs the conductor plus other control plane services.
The number of conductor workers varies between 16 and 48.
We try not to have more than 200 compute nodes per cell.
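For reference, that worker count is controlled by the `workers` option in the `[conductor]` section of nova.conf; a minimal sketch (the value of 32 here is illustrative, not a recommendation from this thread):

```ini
[conductor]
# Number of nova-conductor worker processes for this cell's
# control plane node; sized somewhere in the 16-48 range
# depending on the cell's load.
workers = 32
```

The right value depends on the number of compute nodes in the cell and the RPC load they generate, which is exactly the ratio question being discussed below.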


On Wed, Nov 18, 2015 at 10:56 AM, Dina Belova <dbelova at mirantis.com> wrote:

> Dear operators,
> yesterday we (the Performance Team) had our weekly IRC meeting
> <http://eavesdrop.openstack.org/meetings/performance_team/2015/performance_team.2015-11-17-15.00.html>,
> and one of the items on the agenda was the nova-conductor performance issue
> <https://etherpad.openstack.org/p/remote-conductor-performance> raised by
> Kris Lindgren (GoDaddy).
> There are still things to investigate and several theories about the nature
> of this behaviour, but one of the questions raised was whether the ratio of
> conductor nodes to compute nodes was OK.
> In fact, I have never seen any recommendations on a safe ratio of conductor
> nodes, networking nodes, or any other type of node to compute nodes.
> Since providing the OpenStack community with clear recommendations and
> instructions for performant OpenStack cloud deployments is part of the
> Performance Team's mission, I'm kindly asking you to share the node ratios
> you consider safe in your current deployments and any issues you have
> observed (as an example: the GoDaddy cloud under discussion has 3 conductor
> boxes for 250 computes in a cell, and one opinion was simply that this is
> not enough).
> Thanks in advance!
> Thanks in advance!
> Cheers,
> Dina
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
