[Openstack-operators] nova-conductor scale out
Gustavo Randich
gustavo.randich at gmail.com
Tue Mar 15 15:38:05 UTC 2016
P.S.: the controller has 32 cores
On Tue, Mar 15, 2016 at 12:37 PM, Gustavo Randich <gustavo.randich at gmail.com> wrote:
> We are melting right now (RPC timeouts, RabbitMQ connection timeouts, high
> load on the controller, etc.): we are running 375 compute nodes and only
> one controller (on VMware), which runs RabbitMQ plus nova-conductor with
> 28 workers
>
> So I can seamlessly add more controller nodes with more nova-conductor
> workers?
>
>
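A minimal sketch of what an extra controller's config might look like, assuming Icehouse-era option names (rabbit_host in [DEFAULT], [database] connection, [conductor] workers); the hostnames and values below are invented for illustration. Since nova-conductor only talks to RabbitMQ and the database, each additional node just runs the same service pointed at the shared broker and DB:

    # /etc/nova/nova.conf on each extra controller (hostnames/values invented)
    [DEFAULT]
    # point at the same RabbitMQ broker the existing controller uses
    rabbit_host = controller1
    [database]
    # point at the same Nova database
    connection = mysql://nova:SECRET@controller1/nova
    [conductor]
    # number of conductor worker processes on this node
    workers = 28
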
> On Tue, Mar 15, 2016 at 11:59 AM, Kris G. Lindgren <klindgren at godaddy.com>
> wrote:
>
>> We run cells, but when we reached about 250 hypervisors in a cell we needed
>> to add another cell API node (went from 2 to 3) to help with the CPU load
>> caused by nova-conductor. Nova-conductor was/is constantly crushing the CPU
>> on those servers.
>>
>> ___________________________________________________________________
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>> From: David Medberry <openstack at medberry.net>
>> Date: Tuesday, March 15, 2016 at 8:54 AM
>> To: Gustavo Randich <gustavo.randich at gmail.com>
>> Cc: "openstack-operators at lists.openstack.org" <
>> openstack-operators at lists.openstack.org>
>> Subject: Re: [Openstack-operators] nova-conductor scale out
>>
>> How many compute nodes do you have (that are triggering your controller
>> node limitations)?
>>
>> We run nova-conductor on multiple control nodes. Each control node runs
>> "N" conductor workers, where N is basically the hyperthreaded CPU count.
>>
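For reference, a minimal sketch of that sizing rule, assuming the [conductor] workers option and using nproc to read the hyperthreaded CPU count (the value 32 below is only an example):

    # on the control node: use the hyperthreaded CPU count as the worker count
    nproc          # e.g. prints 32

    # then in /etc/nova/nova.conf on that node:
    [conductor]
    workers = 32
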
>> On Tue, Mar 15, 2016 at 8:44 AM, Gustavo Randich <gustavo.randich at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Simple question: can I deploy nova-conductor across several servers?
>>> (Icehouse)
>>>
>>> Because we are reaching a limit in our controller node....
>>>
>>>
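As the replies above confirm, nova-conductor can run on several servers at once. A rough sketch of adding one more conductor host, assuming Ubuntu-style package and service names (illustrative; adjust for your distro):

    # on each additional server: install and start the conductor service
    apt-get install nova-conductor
    service nova-conductor start

    # verify the new conductor processes checked in
    nova-manage service list | grep conductor
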
>>>
>>>
>>
>